https://en.wikipedia.org/wiki/Adapter%20%28computing%29
Adapter (computing)
An adapter in computing can be either a hardware component (device) or software that allows two or more incompatible devices to be linked together for the purpose of transmitting and receiving data. Given an input, an adapter alters it in order to provide a compatible connection between the components of a system. Both software and hardware adapters are used in many different devices such as mobile phones, personal computers, servers and telecommunications networks for a wide range of purposes. Some adapters are built into devices, while others can be installed on a computer's motherboard or connected as external devices. A software component adapter is a type of software that is logically located between two software components and reconciles the differences between them. Function Telecommunication Like many industries, the telecommunication industry needs electrical devices such as adapters to transfer data across long distances. For example, analog telephone adapters (ATA) are used by telephone and cable companies. Such a device connects an analog telephone to a computer or network over digital communication lines, which enables users to make calls via the Internet. Personal computers In modern personal computers, almost every peripheral device uses an adapter to communicate with a system bus, for example: Display adapters used to transmit signals to a display device Universal Serial Bus (USB) adapters for printers, keyboards and mice, among others Network adapters used to connect a computer to a network Host bus adapters used to connect hard disks or other storage Analog and digital signals Some hardware adapters convert between analog and digital signals with A/D or D/A converters. This allows adapters to interface with a broader range of devices. One common example of signal conversion is the sound card, which converts digital audio signals from a computer to analog signals for input to an amplifier. Types Host adapter A host adapter, host controller or host bus adapter (HBA) is a circuit board or device which allows peripheral devices (usually internal) to interface with a computer. Host bus adapters are used to connect hard drives, networks, and USB peripherals. They are commonly integrated into motherboards but can also take the form of an expansion card. Adapter card An adapter card or expansion card is a circuit board which is plugged into the expansion bus in a computer to add functions or resources, in much the same way as a host bus adapter. Common adapter cards include video cards, network cards, sound cards, and other I/O cards. Video adapter A video adapter (also known as graphics adapter, display adapter, graphics card, or video card) is a type of expansion card for computers which converts data and generates the electrical signals to display text and graphics on a display device. Bus master adapter Bus master adapters fit in EISA or MCA expansion slots in computers, and use bus mastering to transfer data quickly by bypassing the CPU and interfacing directly with other devices. General purpose interface adapter A general purpose interface adapter or GPIA is usually used as an interface between a processing unit and a GPIB (IEEE 488) bus. Fax adapter A fax adapter, also called a fax card or fax board, is an internal fax modem which allows a computer to transmit and receive fax data. Network adapter Network adapters connect a device to a network and enable it to exchange data with other devices on the network.
These devices may be computers, servers, or any other networking device. Network adapter usually refers to a piece of computer hardware, typically in the form of an Ethernet card, wireless network card, USB network adapter, or wireless game adapter. Hardware network adapters, which may be either wired or wireless, can be installed on a motherboard, connecting the computer to a network. The term can also refer to a virtual network adapter which exists only in software, either for the purposes of virtualization or to interface with some other physical adapter. Terminal adapter In telecommunications, a terminal adapter or TA acts as an interface between a terminal device, such as a computer or telephone, and a communications network (typically an integrated services digital network). Channel-to-channel adapter A channel-to-channel adapter (CTCA) connects two input/output channels in IBM mainframes. Resource adapters Resource adapters are used to retrieve and route data. They provide access to databases, files, messaging systems, enterprise applications and other data sources and targets. Each adapter includes a set of adapter commands that can be used to customize its operation. Adapter commands specify different queues and queue managers, specific messages by message ID, specific sets of messages with the same message ID, message descriptors in the data, and more. The resource adapters provided with many integration products enable data transformation and adapter-specific behavior recognition on different systems and data structures. See also Computer port (hardware) Controller (computing) Electrical connector References Computer peripherals
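The software component adapter described above is commonly illustrated by the adapter design pattern. The following minimal Python sketch is an illustration only; the class and method names are hypothetical and not taken from the article or any particular library. It shows an adapter sitting between two components, exposing the interface one side expects while delegating to the incompatible component it wraps:

```python
class EuropeanSocket:
    """Existing component: supplies 230 V through a voltage() method."""
    def voltage(self) -> int:
        return 230


class USDevice:
    """Client component: expects an outlet exposing get_voltage() at <= 120 V."""
    def plug_in(self, outlet) -> str:
        return "powered" if outlet.get_voltage() <= 120 else "burned out"


class SocketAdapter:
    """Adapter: logically located between the two components, it presents the
    interface the client expects and reconciles the difference by converting
    the wrapped component's output."""
    def __init__(self, socket: EuropeanSocket):
        self._socket = socket

    def get_voltage(self) -> int:
        # Convert the incompatible output into a compatible input.
        return self._socket.voltage() // 2


if __name__ == "__main__":
    device = USDevice()
    print(device.plug_in(SocketAdapter(EuropeanSocket())))  # -> powered
```

The same idea underlies the hardware adapters discussed above: given an input from one side, the adapter alters it so that the other side receives a connection it can use.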
https://en.wikipedia.org/wiki/Argos%2C%20Peloponnese
Argos, Peloponnese
Argos (Greek: Άργος; Ancient Greek: Ἄργος) is a city in Argolis, in the Peloponnese, Greece, and is one of the oldest continuously inhabited cities in the world, and the oldest in Europe. It is the largest city in Argolis and a major center for the area. Since the 2011 local government reform it has been part of the municipality of Argos-Mykines, of which it is a municipal unit. The municipal unit has an area of 138.138 km2. It is from Nafplion, which was its historic harbour. A settlement of great antiquity, Argos has been continuously inhabited as at least a substantial village for the past 7,000 years. A resident of the city of Argos is known as an Argive. However, the term is also used to refer more generally to the ancient Greeks who assaulted the city of Troy during the Trojan War; it is applied in this wider sense by the Homeric bards. Numerous ancient monuments can be found in the city today. Agriculture is the mainstay of the local economy. Geography Climate Argos has a hot Mediterranean climate. It is one of the hottest places in Greece during summer. Etymology There are several proposed etyma. The name is associated with the legendary Argus, the third king of the city in ancient times, who renamed it after himself, thus replacing its older name Phoronikon Asty (Φορωνικόν Άστυ, "Citadel of Phoroneus"). Both the personal name and the placename are linked to the word αργός (argós), which meant "white" or "shining"; possibly this had to do with the visual impression given by the Argolic plain during harvest time. According to Strabo, the name could even have originated from the word αγρός ("field") by antimetathesis of the consonants. History Antiquity Herodotus first recorded the traditional story that Argos was the origin of the ancient Macedonian royal house, the Argead dynasty (Greek: Ἀργεάδαι, Argeádai) of Philip II and Alexander the Great. As a strategic location on the fertile plain of Argolis, Argos was a major stronghold during the Mycenaean era. In classical times, Argos was a powerful rival of Sparta for dominance over the Peloponnese, but was eventually shunned by other Greek city-states after remaining neutral during the Greco-Persian Wars. There is evidence of continuous settlement in the area starting with a village about 7,000 years ago in the late Neolithic, located at the foot of Aspida hill. Since that time, Argos has been continually inhabited at the same geographical location. While the name Argos is generally accepted to have a Hellenic Indo-European etymology, Larissa is generally held to derive from a Pre-Greek substrate. The city is located in a rather propitious area, between Nemea, Corinth and Arcadia. It also benefitted from its proximity to lake Lerna, which, at the time, was at a distance of one kilometre from the south end of Argos. Argos was a major stronghold of Mycenaean times, and along with the neighbouring acropoleis of Mycenae and Tiryns became a very early settlement because of its commanding position in the midst of the fertile plain of Argolis. Archaic Argos Argos experienced its greatest period of expansion and power under the energetic 7th century BC ruler King Pheidon. Under Pheidon, Argos regained sway over the cities of the Argolid and challenged Sparta's dominance of the Peloponnese. Spartan dominance is thought to have been interrupted following the Battle of Hyssiae in 669–668 BC, in which Argive troops defeated the Spartans in a hoplite battle.
During the time of its greatest power, the city boasted a pottery and bronze sculpturing school, pottery workshops, tanneries and clothes producers. Moreover, at least 25 celebrations took place in the city, in addition to a regular local products exhibition. A sanctuary dedicated to Hera was also found at the same spot where the monastery of Panagia Katekrymeni is located today. Pheidon also extended Argive influence throughout Greece, taking control of the Olympic Games away from the citizens of Elis and appointing himself organizer during his reign. Pheidon is also thought to have introduced reforms for standard weight and measures in Argos, a theory further reinforced with the unearthing of six "spits" of iron in an Argive Heraion, possibly remainders of a dedication from Pheidon. Classical Argos In 494 BC, Argos suffered a crushing defeat at the hands of its regional rival, Sparta, at the Battle of Sepeia. Following this defeat, Herodotus tells us the city suffered a form of stasis. The political chaos is thought to have resulted in a democratic transition in the city. Argos did not participate in the Hellenic Alliance against the Persian Invasion of 480 BC. This resulted in a period of diplomatic isolation, although there is evidence of an Argive alliance with Tegea prior to 462 BC. In 462 BC, Argos joined a tripartite alliance with Athens and Thessaly. This alliance was somewhat dysfunctional, however, and the Argives are only thought to have provided marginal contributions to the alliance at the Battle of Oenoe and Tanagra. For example, only 1,000 Argive hoplites are thought to have fought alongside the Athenians at the Battle of Tanagra. Following the allies' defeat at Tanagra in 457 BC, the alliance began to fall apart, resulting in its dissolution in 451 BC. Argos remained neutral or the ineffective ally of Athens during the Archidamian War between Sparta and Athens. Argos' neutrality resulted in a rise of its prestige among other Greek cities, and Argos used this political capital to organize and lead an alliance against Sparta and Athens in 421 BC. This alliance included Mantinea, Corinth, Elis, Thebes, Argos, and eventually Athens. This alliance fell apart, however, after the allied loss at the Battle of Mantinea in 418 BC. This defeat, combined with the raiding of the Argolid by the Epidaurians, resulted in political instability and an eventual oligarchic coup in 417 BC. Although democracy was restored within a year, Argos was left permanently weakened by this coup. This weakening led to a loss of power, which in turn led to the shift of commercial focus from the Ancient Agora to the eastern side of the city, delimited by Danaou and Agiou Konstadinou streets. Argos played a minor role in the Corinthian Wars against Sparta, and for a short period of time considered uniting with Corinth to form an expanded Argolid state. For a brief period of time, the two poleis combined, but Corinth quickly rebelled against Argive domination, and Argos returned to its traditional boundaries. After this, Argos continued to remain a minor power in Greek affairs. Argos escaped occupation by Macedon during the reigns of Philip II and Alexander the Great and remained unscathed during the Wars of the Diadochi, however in 272 it was attacked by Pyrrhus of Epirus at the Battle of Argos, in which Pyrrhus was killed. Democracy in Classical Argos Argos was a democracy for most of the classical period, with only a brief hiatus between 418 and 416. 
Democracy was first established after a disastrous defeat by the Spartans at the Battle of Sepeia in 494. So many Argives were killed in the battle that a revolution ensued, in which previously disenfranchised outsiders were included in the state for the first time. Argive democracy included an Assembly (called the aliaia), a Council (the bola), and another body called 'The Eighty,' whose precise responsibilities are obscure. Magistrates served six-month terms of office, with few exceptions, and were audited at the end of their terms. There is some evidence that ostracism was practiced. Roman period Under Roman rule, Argos was part of the province of Achaea. While prosperous during the early principate, Argos along with much of Greece and the Balkans experienced disasters during the so-called "Crisis of the 3rd Century" when external threats and internal revolts left the Empire in turmoil. During Gallienus' reign, marauding bands of Goths and Heruli sailed down from the Black Sea in 267 A.D. and devastated the Greek coastline and interior. Athens, Sparta, Corinth, Thebes and Argos were all sacked. Gallienus finally cut off their retreat north and destroyed them with great slaughter at Naissus in Moesia. With the death of the last emperor over a unified Empire, Theodosius I, the Visigoths under their leader Alaric I descended into Greece in 396-397 A.D., sacking and pillaging as they went. Neither the eastern or western Roman warlords, Rufinus (consul) or Stilicho, made an effective stand against them due to the political situation between them. Athens and Corinth were both sacked. While the exact level of destruction for Argos is disputed due to the conflicting nature of the ancient sources, the level of damage to the city and people was considerable. Stilicho finally landed in western Greece and forced the Visigoths north of Epirus. Sites said to have been destroyed in Argos include the Hypostyle hall, parts of the agora, the odeion, and the Aphrodision. Byzantine, Crusader and Ottoman rule Under Byzantine rule it was part of the theme of Hellas, and later of the theme of the Peloponnese. In the aftermath of the Fourth Crusade, the Crusaders captured the castle built on Larisa Hill, the site of the ancient acropolis, and the area became part of the lordship of Argos and Nauplia. In 1388 it was sold to the Republic of Venice, but was taken by the Despot of the Morea Theodore I Palaiologos before the Venetians could take control of the city; he sold it anyway to them in 1394. The Crusaders established a Latin bishopric. Venetian rule lasted until 1463, when the Ottomans captured the city. In 1397, the Ottomans plundered Argos, carrying off much of the population, to sell as slaves. The Venetians repopulated the town and region with Albanian settlers, granting them long-term agrarian tax exemptions. Together with the Greeks of Argos, they supplied stratioti troops to the armies of Venice. Some historians consider the French military term "argoulet" to derive from the Greek "argetes", or inhabitant of Argos, as a large number of French stratioti came from the plain of Argos. During Ottoman rule, Argos was divided in four mahalas, or quarters; the Greek (Rûm) mahala, Liepur mahala, Bekir Efenti mahala and Karamoutza or Besikler mahala, respectively corresponding to what is now the northeastern, the northwestern, the southwestern and southeastern parts of the city. 
The Greek mahala was also called the "quarter of the unfaithful of Archos town" in Turkish documents, whereas Liepur mahala (the quarter of the rabbits) was composed mostly of Albanian emigrants and well-reputed families. Karamoutza mahala was home to the most prominent Turks and boasted a mosque (modern-day church of Agios Konstadinos), a Turkish cemetery, Ali Nakin Bei's serail, Turkish baths and a Turkish school. It is also at this period when the open market of the city is first organised on the site north to Kapodistrias' barracks, at the same spot where it is held in modern times. A mosque would have existed there, too, according to the city planning most Ottoman cities followed. Argos grew exponentially during this time, with its sprawl being unregulated and without planning. As French explorer Pouqueville noted, "its houses are not aligned, without order, scattered all over the place, divided by home gardens and uncultivated areas". Liepur mahala appears to have been the most organised, having the best layout, while Bekir mahala and Karamoutza mahala were the most labyrinthine. However, all quarters shared the same type of streets; firstly, they all had main streets which were wide, busy and public roads meant to allow for communication between neighbourhoods (typical examples are, to a great extent, modern-day Korinthou, Nafpliou and Tripoleos streets). Secondary streets were also common in all four quarters since they lead to the interior of each mahala, having a semi-public character, whereas the third type of streets referred to dead-end private alleys used specifically by families to access their homes. Remnants of this city layout can be witnessed even today, as Argos still preserves several elements of this Ottoman type style, particularly with its long and complicated streets, its narrow alleys and its densely constructed houses. Independence and modern history With the exception of a period of Venetian domination in 1687–1715, Argos remained in Ottoman hands until the beginning of the Greek War of Independence in 1821, when wealthy Ottoman families moved to nearby Nafplio due to its stronger walling. At that time, as part of the general uprising, many local governing bodies were formed in different parts of the country, and the "Consulate of Argos" was proclaimed on 28 March 1821, under the Peloponnesian Senate. It had a single head of state, Stamatellos Antonopoulos, styled "Consul", between 28 March and 26 May 1821. Later, Argos accepted the authority of the unified Provisional Government of the First National Assembly at Epidaurus, and eventually became part of the Kingdom of Greece. With the coming of governor Ioannis Kapodistrias, the city underwent efforts of modernisation. Being an agricultural village, the need for urban planning was vital. For this reason, in 1828, Kapodistrias himself appointed mechanic Stamatis Voulgaris as the creator of a city plan which would offer Argos big streets, squares and public spaces. However, both Voulgaris and, later, French architect de Borroczun's plans were not well received by the locals, with the result that the former had to be revised by Zavos. Ultimately, none of the plans were fully implemented. Still, the structural characteristics of de Borroczun's plan can be found in the city today, despite obvious proof of pre-revolutionary layout, such as the unorganised urban sprawl testified in the area from Inachou street to the point where the railway tracks can be found today. 
During the talks concerning the Greek government's intention to move the capital from Nafplio to Athens, discussion of Argos as a candidate for the new capital became more frequent, with supporters of the idea claiming that, unlike Athens, Argos was naturally protected by its position and benefited from a nearby port (Nafplio). Moreover, it was maintained that construction of public buildings would be difficult in Athens, given that most of the land was owned by the Greek church, meaning that a great deal of expropriation would have to take place. Argos, by contrast, did not face a similar problem, as it had large areas available for this purpose. In the end, the proposition of moving the Greek capital to Argos was rejected by the father of king Otto, Ludwig, who insisted on making Athens the capital, something which eventually happened in 1834. During the German occupation, Argos airport was frequently attacked by Allied forces. One of the raids was so large that it resulted in the bombing of the city on October 14, 1943, leaving about 100 Argives dead, several more wounded, and some 75 casualties among the Germans. The bombing started from the airport heading southeast, hitting the monastery of Katakekrymeni and several areas of the city, up to the railway station. Mythology The mythological kings of Argos are (in order): Inachus, Phoroneus, Apis, Argus, Criasus, Phorbas, Triopas, Iasus, Agenor, Crotopus, Sthenelus, Gelanor AKA Pelasgus, Danaus, Lynceus, Abas, Proetus, Acrisius, Perseus, Megapenthes, Argeus and Anaxagoras. An alternative version supplied by Tatian of the original 17 consecutive kings of Argos includes Apis, Argios, Kriasos and Phorbas between Argus and Triopas, explaining why Triopas otherwise appears unrelated to Argus. The city of Argos was believed to be the birthplace of the mythological character Perseus, the son of the god Zeus and Danaë, who was the daughter of the king of Argos, Acrisius. After the original 17 kings of Argos, there were three kings ruling Argos at the same time (see Anaxagoras), one descended from Bias, one from Melampus, and one from Anaxagoras. Melampus was succeeded by his son Mantius, then Oicles, and Amphiaraus, and the house of Melampus lasted down to the brothers Alcmaeon and Amphilochus. Anaxagoras was succeeded by his son Alector, and then Iphis. Iphis left his kingdom to his nephew Sthenelus, the son of his brother Capaneus. Bias was succeeded by his son Talaus, and then by his son Adrastus who, with Amphiaraus, commanded the disastrous war of the Seven against Thebes. Adrastus bequeathed the kingdom to his son, Aegialeus, who was subsequently killed in the war of the Epigoni. Diomedes, grandson of Adrastus through his son-in-law Tydeus and daughter Deipyle, replaced Aegialeus and was King of Argos during the Trojan war. This house lasted longer than those of Anaxagoras and Melampus, and eventually the kingdom was reunited under its last member, Cyanippus, son of Aegialeus, soon after the exile of Diomedes. Ecclesiastical history After Christianity became established in Argos, the first bishop documented in extant written records is Genethlius, who in 448 AD took part in the synod called by Archbishop Flavian of Constantinople that deposed Eutyches from his priestly office and excommunicated him. The next bishop of Argos, Onesimus, was at the 451 Council of Chalcedon.
His successor, Thales, was a signatory of the letter that the bishops of the Roman province of Hellas sent in 458 to Byzantine Emperor Leo I the Thracian to protest the killing of Proterius of Alexandria. Bishop Ioannes was at the Third Council of Constantinople in 680, and Theotimus at the Photian Council of Constantinople (879). The local see is today the Greek Orthodox Metropolis of Argolis. Under 'Frankish' Crusader rule, Argos became a Latin Church bishopric in 1212, which lasted as a residential see until Argos was taken by the Ottoman Empire in 1463 but would be revived under the second Venetian rule in 1686. Today the diocese is a Catholic titular see. Characteristics Orientation The city of Argos is delimited to the north by dry river Xerias, to the east by Inachos river and Panitsa stream (which emanates from the latter), to the west by the Larissa hill (site of homonymous castle and of a monastery called Panagia Katakekrymeni-Portokalousa) and the Aspida Hill (unofficially Prophetes Elias hill), and to the south by the Notios Periferiakos road. The Agios Petros (Saint Peter) square, along with the eponymous cathedral (dedicated to saint Peter the Wonderworker), make up the town centre, whereas some other characteristic town squares are the Laiki Agora (Open Market) square, officially Dimokratias (Republic) square, where, as implied by its name, an open market takes place twice a week, Staragora (Wheat Market), officially Dervenakia square, and Dikastirion (Court) square. Bonis Park is an essential green space of the city. Currently, the most commercially active streets of the city are those surrounding the Agios Petros square (Kapodistriou, Danaou, Vassileos Konstantinou streets) as well as Korinthou street. The Pezodromi (Pedestrian Streets), i.e. the paved Michael Stamou, Tsaldari and Venizelou streets, are the most popular meeting point, encompassing a wide variety of shops and cafeterias. The neighborhood of Gouva, which extends around the intersection of Vassileos Konstantinou and Tsokri streets, is also considered a commercial point. Population In 700 BC there were at least 5,000 people living in the city. In the fourth century BC, the city was home to as many as 30,000 people. Today, according to the 2011 Greek census, the city has a population of 22,085. It is the largest city in Argolis, larger than the capital Nafplio. Economy The primary economic activity in the area is agriculture. Citrus fruits are the predominant crop, followed by olives and apricots. The area is also famous for its local melon variety, Argos melons (or Argitiko). There is also important local production of dairy products, factories for fruits processing. Considerable remains of the ancient and medieval city survive and are a popular tourist attraction. Monuments Most of Argos' historical and archaeological monuments are currently unused, abandoned, or only partially renovated: The Larissa castle, built during prehistoric time, which has undergone several repairs and expansions since antiquity and played a significant historical role during the Venetian domination of Greece and the Greek War of Independence. It is located on top of the homonymous Larissa Hill, which also constitutes the highest spot of the city (289 m.). In ancient times, a castle was also found in neighbouring Aspida Hill. When connected with walls, these two castles fortified the city from enemy invasions. 
The Ancient theatre, built in the 3rd century B.C with a capacity of 20,000 spectators, replaced an older neighbouring theatre of the 5th century BC and communicated with the Ancient Agora. It was visible from any part of the ancient city and the Argolic gulf. In 1829, it was used by Ioannis Kapodistrias for the Fourth National Assembly of the new Hellenic State. Today, cultural events are held at its premises during the summer months. The Ancient Agora, adjacent to the Ancient theatre, which developed in the 6th century B.C., was located at the junction of the ancient roads coming from Corinth, Heraion and Tegea. Excavations in the area have uncovered a bouleuterion, built in 460 B.C. when Argos adopted the democratic regime, a Sanctuary of Apollo Lyceus and a palaestra. The "Criterion" of Argos, an ancient monument located on the southwest side of the town, on the foot of Larissa hill, which came to have its current structure during the 6th-3rd century BC period. Initially, it served as a court of ancient Argos, similar to Areopagus of Athens. According to mythology, it was at this area where Hypermnestra, one of the 50 daughters of Danaus, the first king of Argos, was tried. Later, under the reigns of Hadrian, a fountain was created to collect and circulate water coming from the Hadrianean aqueduct located in northern Argos. The site is connected via a paved path with the ancient theatre. The Barracks of Kapodistrias, a preservable building with a long history. Built in the 1690s during the Venetian domination of Greece, they initially served as a hospital run by the Sisters of Mercy. During the Tourkokratia, they served as a market and a post office. Later, in 1829, significant damage caused during the Greek revolution was repaired by Kapodistrias who turned the building into a cavalry barrack, a school (1893-1894), an exhibition space (1899), a shelter for Greek refugees displaced during the population exchange between Greece and Turkey (since 1920) and an interrogation and torture space (during the German occupation of Greece). In 1955–68, it was used by the army for the last time; it now accommodates the Byzantine Museum of Argos, local corporations and also serves as an exhibition space. The Municipal Neoclassical Market building (unofficially the "Kamares", i.e. arches, from the arches that it boasts), built in 1889, which is located next to Dimokratias square, is one of the finest samples of modern Argos' masterly architecture, in Ernst Ziller style. The elongated, two corridor, preservable building accommodates small shops. The Kapodistrian school, in central Argos. Built by architect Labros Zavos in 1830, as part of Kapodistrias' efforts to provide places of education to the Greek people, it could accommodate up to 300 students. However, technical difficulties led to its decay, until it was restored several times, the last of which being in 1932. Today, its neoclassical character is evident, with the building housing the 1st elementary school of the town. The old Town Hall, built during the time of Kapodistrias in 1830, which originally served as a Justice of the peace, the Dimogerontia of Argos, an Arm of Carabineers and a prison. From 1987 to 2012, it housed the Town Hall which is now located in Kapodistriou street. The House of philhellene Thomas Gordon, built in 1829 that served as an all-girls school, a dance school and was home to the 4th Greek artillery regiment. Today it accommodates the French Institute of Athens (Institut Français d' Athènes). 
The House of Spyridon Trikoupis (built in 1900), where the politician was born and spent his childhood. Also located in the estate, which is not open to public, is the Saint Charalambos chapel where Trikoupis was baptized. The House of general Tsokris, important military fighter in the Greek revolution of 1821 and later assemblyman of Argos. The temple of Agios Konstadinos, one of the very few remaining buildings in Argos dating from the Ottoman Greece era. It is estimated to have been built in the 1570-1600 period, with a minaret also having existed in its premises. It served as a mosque and an Ottoman cemetery up to 1871, when it was declared a Christian temple. The chambered tombs of the Aspida hill. The Hellinikon Pyramid. Dating back to late 4th B.C., there exist many theories as to the purpose it served (tumulus, fortress). Together with the widely accepted scientific chronology, there are some people who claim it was built shortly after the Pharaoh tomb, i.e. the Great Pyramid of Giza, thus a symbol of the excellent relationship the citizens of Argos had with Egypt. A great number of archaeological findings, dating from the prehistoric ages, can be found at the Argos museum, housed at the old building of Dimitrios Kallergis at Saint Peter's square. The Argos airport, located in an homonymous area (Aerodromio) in the outskirts of the city is also worth mentioning. The area it covers was created in 1916-1917 and was greatly used during the Greco-Italian War and for the training of new Kaberos school aviators for the Hellenic Air Force Academy. It also constituted an important benchmark in the organization of the Greek air forces in southern Greece. Furthermore, the airport was used by the Germans for the release of their aerial troops during the Battle of Crete. It was last used as a landing/take off point for spray planes (for agricultural purposes in the olive tree cultivations) up until 1985. Transportation Argos is connected via regular bus services with neighbouring areas as well as Athens. In addition, taxi stands can be found at the Agios Petros as well as the Laiki Agora square. The city also has a railway station which, at the moment, remains closed due to an indefinite halt to all railway services in the Peloponnese area by the Hellenic Railways Organisation. However, in late 2014, it was announced that the station would open up again, as part of an expansion of the Athens suburban railway in Argos, Nafplio and Korinthos. Finally in mid 2020 it was announced by the administration of Peloponnese Region their cooperation with the Hellenic Railways Organisation for the metric line and stations maintenance for the purpose of the line's reoperation in the middle of 2021. Education Argos has a wide range of educational institutes that also serve neighbouring sparsely populated areas and villages. In particular, the city has seven dimotika (primary schools), four gymnasia (junior high), three lyceums (senior high), one vocational school, one music school as well as a Touristical Business and Cooking department and a post-graduate ASPETE department. The city also has two public libraries. Sports Argos hosts two major sport clubs with presence in higher national divisions and several achievements, Panargiakos F.C. football club, founded in 1926 and AC Diomidis Argous handball club founded in 1976. Other sport clubs that are based in Argos: A.E.K. Argous, Apollon Argous, Aristeas Argous, Olympiakos Argous, Danaoi and Panionios Dalamanaras. 
Notable people Acrisius, mythological king Theoclymenus, mythological prophet Agamemnon, legendary leader of the Achaeans in the Trojan War Acusilaus (6th century BC), logographer and mythographer Ageladas (6th–5th century BC), sculptor Calchas (8th century BC), Homeric mythological seer Karanos (8th century BC), founder of the Macedonian Argead Dynasty Leo Sgouros (13th century), Byzantine despot Nikon the Metanoeite (10th century), Christian saint of Armenian origin, according to some sources born in Argos Pheidon (7th century BC), king of Argos Argus (7th century BC), king of Argos Polykleitos (5th–4th century BC), sculptor Polykleitos the Younger (4th century BC), sculptor Telesilla (6th century BC), Greek poet Bilistiche, hetaira and lover of pharaoh Ptolemy II Philadelphus Eleni Bakopanos (born 1954), Canadian politician Samuel Greene Wheeler Benjamin (1837-1914), American statesman International relations Twin towns and sister cities Argos is twinned with: Veria, Greece Abbeville, France Episkopi, Cyprus Mtskheta, Georgia (1991) See also Argos (dog) Communities of Argos (municipal unit) Kings of Argos List of ancient Greek cities List of settlements in Argolis Notes Sources and external links Website of abolished Municipality of Argos (web archive) GCatholic with incumbent bio links The Theatre at Argos, The Ancient Theatre Archive, Theatre specifications and virtual reality tour of theatre Populated places in ancient Argolis Ancient Greek sanctuaries in Greece Aegean palaces of the Bronze Age Ancient Greek archaeological sites in Peloponnese (region) Mycenaean sites in Argolis Byzantine sites in Greece Stato da Màr Greek city-states Populated places in Argolis
https://en.wikipedia.org/wiki/Iphidamas
Iphidamas
In Greek mythology, the name Iphidamas (Ancient Greek: Ἰφιδάμας, gen. Ἰφιδάμαντος) may refer to: Iphidamas, also known as Amphidamas, son of Aleus and counted as one of the Argonauts. Iphidamas (or Amphidamas), a son of Busiris killed by Heracles. Iphidamas, a son of Antenor and Theano, and the brother of Crino, Acamas, Agenor, Antheus, Archelochus, Coön, Demoleon, Eurymachus, Glaucus, Helicaon, Laodamas, Laodocus, Medon, Polybus, and Thersilochus. He was raised in Thrace by his maternal grandfather Cisseus, who sought to make him stay at home when the Trojan War broke out, by giving him his daughter in marriage for a bride price of a hundred cows and a thousand goats and sheep. Nevertheless, Iphidamas did leave for Troy the next day after the wedding. He led twelve ships, but left them at Percote and came to Troy by land. He confronted Agamemnon in battle, but his spear bent against the opponent's silver belt, whereupon Agamemnon killed Iphidamas with a sword and stripped him of his armor. He then also fought and killed his brother Coön, who attempted to avenge the death of Iphidamas. Iphidamas, one of the Suitors of Penelope who came from Dulichium along with other 56 wooers. He, with the other suitors, was slain by Odysseus with the aid of Eumaeus, Philoetius, and Telemachus. See also 4791 Iphidamas, Jovian asteroid Notes References Apollodorus, The Library with an English Translation by Sir James George Frazer, F.B.A., F.R.S. in 2 Volumes, Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1921. ISBN 0-674-99135-4. Online version at the Perseus Digital Library. Greek text available from the same website. Dictys Cretensis, from The Trojan War. The Chronicles of Dictys of Crete and Dares the Phrygian translated by Richard McIlwaine Frazer, Jr. (1931-). Indiana University Press. 1966. Online version at the Topos Text Project. Gaius Julius Hyginus, Fabulae from The Myths of Hyginus translated and edited by Mary Grant. University of Kansas Publications in Humanistic Studies. Online version at the Topos Text Project. Homer, The Iliad with an English Translation by A.T. Murray, Ph.D. in two volumes. Cambridge, MA., Harvard University Press; London, William Heinemann, Ltd. 1924. . Online version at the Perseus Digital Library. Homer, Homeri Opera in five volumes. Oxford, Oxford University Press. 1920. . Greek text available at the Perseus Digital Library. The Orphic Argonautica, translated by Jason Colavito. Online version at the Topos Text Project. 2011. Pausanias, Description of Greece with an English Translation by W.H.S. Jones, Litt.D., and H.A. Ormerod, M.A., in 4 Volumes. Cambridge, MA, Harvard University Press; London, William Heinemann Ltd. 1918. . Online version at the Perseus Digital Library Pausanias, Graeciae Descriptio. 3 vols. Leipzig, Teubner. 1903. Greek text available at the Perseus Digital Library. Publius Vergilius Maro, Aeneid. Theodore C. Williams. trans. Boston. Houghton Mifflin Co. 1910. Online version at the Perseus Digital Library. Publius Vergilius Maro, Bucolics, Aeneid, and Georgics. J. B. Greenough. Boston. Ginn & Co. 1900. Latin text available at the Perseus Digital Library. Tzetzes, John, Allegories of the Iliad translated by Goldwyn, Adam J. and Kokkini, Dimitra. Dumbarton Oaks Medieval Library, Harvard University Press, 2015. Argonauts Trojans Achaeans (Homer) People of the Trojan War Characters in the Iliad Suitors of Penelope Characters in Greek mythology Characters in the Argonautica
https://en.wikipedia.org/wiki/261st%20Medical%20Battalion
261st Medical Battalion
The 261st Medical Battalion is a Multifunctional Medical Battalion of the US Army located at Fort Bragg, North Carolina, under the command and control of the 44th Medical Brigade. It provides a flexible and modular medical battle command, administrative assistance, logistical support, and technical supervision capability for assigned and attached medical organizations (companies and detachments), which can be task-organized to support deployed forces. Command Group Commander: LTC Benjamin P. Donham Command Sergeant Major: CSM James E. Brown Executive Officer: MAJ Kelly Walker Commanders Command Sergeants Major Lineage and Honors Constituted 12 June 1942 in the Regular Army as the 261st Medical Battalion Activated 15 June 1942 at Camp Edwards, Massachusetts Disbanded on 28 January 1945, in France. Reconstituted 1 October 1991 in the Regular Army Activated 16 September 1992 at Fort Bragg, North Carolina Reorganized 16 August 2002 to consist of Headquarters and Headquarters Detachment, 261st Medical Battalion (organic elements concurrently inactivated) Campaign participation credit World War II: Sicily (with Arrowhead) Naples-Foggia Rome-Arno Normandy (with Arrowhead) Northern France The War on Terrorism in Iraq: Iraqi Governance National Resolution Iraqi Surge Additional Campaigns to be determined Decorations Presidential Unit Citation (Army), streamer embroidered NORMANDY Meritorious Unit Commendation (Army) for: Iraq 2005 Iraq 2007-2008 Iraq 2010-2011 French Croix de Guerre With Palm, World War II, streamer embroidered NORMANDY BEACHES Distinctive Unit Insignia Description/Blazon: A Silver color metal and enamel device 1 1/8 inches (2.86 cm) in width overall consisting of a shield blazoned: Per pale and per chevron Argent and Sanguine, a fleur-de-lis counterchanged. Attached below the shield, a tripartite Silver scroll inscribed "PROUD TO TRAIN AND SAVE" in Maroon letters. Symbolism: Maroon and white (Argent) are colors traditionally associated with the Medical Corps. World War II service in France and Italy is represented by the fleur-de-lis and chevron respectively, the last also referring to the assaults in Normandy and suggesting the terrain of Naples and Rome's locales where the unit saw service. The significance of participation in two major landings is denoted by the shield's palewise division and counterchange. The fleur-de-lis further symbolizes the French Croix de Guerre's special honor and the Presidential Unit Citation for action in Normandy. Background: The distinctive unit insignia was approved on 8 July 1992. Coat of Arms Description/Blazon of Shield: Per pale and per chevron Argent and Sanguine, a fleur-de-lis counterchanged. Crest: From a wreath of the colors, Argent and Sanguine, a wreath of laurel Proper surmounted by an arm in armor, gauntleted; grasping a serpent entwined Gules. Motto: PROUD TO TRAIN AND SAVE. Symbolism of Shield: Maroon and white are colors traditionally associated with the Medical Corps. World War II service in France and Italy is represented by the fleur-de-lis and chevron respectively, the last also referring to the assaults in Normandy and suggesting the terrain of Naples and Rome's locales where the unit saw service. The significance of the participation in two major landings is denoted by the palewise division and counterchange of the shield. The fleur-de-lis further symbolizes the French Croix de Guerre's special honor and the Presidential Unit Citation for action in Normandy. Crest: The colors red, white and green represent Italy. 
The laurel wreath symbolizes the honors earned by the battalion during its World War II service in Europe, while the upraised fist signifies victory and the service credits won while fulfilling the mission of providing medical care to United States personnel. The serpent, an ancient symbol of medicine, refers to the mission. The armored fist denotes strength and the resolve to protect military personnel in wartime. Red stands for courage and sacrifice, white for integrity, and green for health. The coat of arms was approved on 8 July 1992. History World War II Activation and Early Operations The 261st Medical Battalion was activated on 15 June 1942 at Camp Edwards, Massachusetts, under the Engineer Amphibian Command. 1LT Howard F. Conn assumed command the same day. On 30 June 1942, Captain Edward L. Tucker was transferred from the 54th Medical Battalion and assumed command per General Order 1, 261st Medical Battalion, replacing 1LT Conn. On 4 July, a cadre of 219 enlisted men was transferred from the 54th Medical Battalion, with Company A of the 54th Medical Battalion forming Company A of the 261st, Company C of the 54th forming Company B of the 261st, and Company D of the 54th forming Company C of the 261st. Staff for the battalion Headquarters Detachment were drawn from the newly formed companies. This was a transfer of personnel and equipment to cadre the new battalion, and there is no linkage between the lineages of the two battalions. Major Earle E. Smith was transferred from Headquarters, Engineer Amphibian Command, and assumed command of the battalion on 17 July 1942. He would remain in command until November 1944 when, as a Lieutenant Colonel, he was transferred to the Headquarters of the Utah District, Normandy Base Section, and Major Daniel I. Dann assumed command, remaining in command until the battalion was disbanded on 28 January 1945. A month after the original cadre arrived, the battalion left for the port of embarkation, New York. On 6 August they sailed for the British Isles. The majority of their soldiers had been in the Army less than six months, and the majority of their officers had been on active duty less than one month. There were many changes during the early days at Carrickfergus, Northern Ireland. Because the battalion's equipment did not reach them in time, they were unable to make the North African landing with the 1st Engineer Amphibian Brigade. In early December the battalion moved to a staging area at Birkenhead, England. On 8 January 1943, they sailed for North Africa and were stationed at Arzew, Algeria. Upon arrival in Algeria, the three medical companies rotated among three principal activities. 1. The battalion had assigned and borrowed a total of 72 ambulances. It ran an ambulance service to the various US Army hospitals for patients arriving at La Senia Airport, Oran, from the Tunisian Front, and also carried patients from hospitals to the Port of Oran for embarkation. 2. The battalion operated a Clearing Station Hospital in Arzew with a capacity of 100 patients. They rendered dispensary and dental care to units in the environs on an area support basis. 3. The battalion trained at the Fifth Army Training Center at Port Aux Poules. Invasion exercises and tactics were stressed. Sicily and Italy On 11 July 1943, the majority of the battalion's men and equipment landed at Gela, Sicily, to handle the casualties and evacuation of the 1st Engineer Special Brigade Beachhead, over which the 1st Infantry Division had made the assault landing.
The casualties were relatively heavy in this particular sector. However, the presence of the 1st Medical Battalion's Clearing Company, the 51st Provisional Collecting-Clearing Company (composed of a clearing platoon from a clearing company and a collecting platoon from a collecting company), and the 2nd Armored Division Medical units in addition to the 261st provided adequate care. Following the beachhead phase the three companies ran small hospitals, each holding up to 150 patients. They were located at Agrigento, Gela, and Licata. On 19 August the battalion was alerted for an immediate move to Bizerte for assignment to Fifth Army for what was later discovered to be the landing at Salerno, Italy, on 9 September. The order was cancelled two days later and the battalion remained in Sicily in bivouac. Up till the time of the alert the three companies admitted 3961 patients constituting a total of 9201 patient-days of treatment. During this period the Headquarters Detachment maintained a continual trucking service, first carrying supplies to the front line companies, and later transporting medical supplies from Licata to Palermo, Sicily. On 19 October 1943, the battalion sailed for Italy and on 24 October set up bivouac at Caserta. Except for some dispensary and evacuation work the battalion saw minimal activity. After turning in most of the battalion's equipment they sailed for Naples on 18 November and proceeded to the British Isles. There, the battalion was garrisoned at Truro, Cornwall, England, on 12 December 1943. From this time until the Normandy Invasion on 6 June 1944, the battalion devoted their efforts to organizing, planning and training for the coming invasion of France. Normandy and Beyond All the training and experience acquired by the 261st Medical Battalion in its operations prepared it for its most important mission, the handling of casualties and evacuation on Utah Beach, Normandy, France. On 12 April 1944, Companies A and B took part in a special problem called "Splint". It was run off in conjunction with the medical group of the 2nd Naval Beach Battalion and the 531st Engineer Shore Regiment, 1st Engineer Special Brigade. The exercise was supervised by Colonel James L. Snyder, MC, Executive Office to the Surgeon, First United States Army. This maneuver was considered very important by the Allied High Command with regard to the evacuation of casualties on the coming invasion. 200 troops were used as simulated casualties. All phases of casualty handling from the time of admission to the reception of patients aboard ships were demonstrated. The problem was run before a large delegation of ranking officers of both the United States and British Armies and Navies. The results clearly demonstrated that large numbers of casualties could be evacuated off a beach without disturbance to the other functions of the beach. The 261st's companies gained more practical value from this exercise than any other because they had available a large number of men to use as simulated casualties. Most previous exercises had only given them experience in movement of men and equipment and the setting up of a bivouac and collecting station. The 1st Engineer Special Brigade established the beach in conjunction with the 4th Infantry Division which made the initial assault landings on Utah Beach, and the 82nd and 101st Airborne Divisions whose parachute and glider-borne troops would land beyond the inundated area guarding the beach on the night before D-day. 
The 1st Engineer Special Brigade and their attached Naval Shore groups developed this beach for the reception of troops, equipment and vehicles and for the evacuation of casualties. Companies A and C, 261st Medical Battalion, landed at H plus 6 hours on D-day, and Company B landed on D+1 and set up next to Company C, which it relieved for 24 hours. By 1800 hours on D-day, major surgery was being done by the 261st's medical officers and attached surgical teams. Working with the 4th Division medical units and those of the Engineer and Naval Shore groups, the 261st handled all the casualties on Utah Beach. All patients were cleared through the 261st Medical Battalion. Working with the 2nd Naval Beach Battalion Medical Section, the battalion handled all evacuation to the United Kingdom. As there was no air evacuation on this beach in the early phase, and very little even much later, the battalion evacuated almost all the patients in the chain of evacuation. All definitive surgery was performed at the 261st's companies until D+5. After that the Field and Evacuation Hospitals gradually relieved the battalion of that task. After the first two weeks the battalion was performing only the overflow surgery: cases that occurred in the immediate vicinity of the battalion's companies, or cases that became unstable during evacuation. At this stage the battalion's primary responsibilities were holding and evacuation. Evacuation was accomplished chiefly by jeep ambulances augmented by DUKWs and trucks. Patients were sent primarily to LSTs and also to Hospital Ships via LCTs and DUKWs. The former method was the fastest and also the easiest on the patient, as the journey to the Hospital Ship involved moving the patient twice: first they were placed on the vehicles, then on the LCTs, and then, after a rough ride out to the carrier, they were hauled aboard; alternatively, they were loaded on DUKWs at the Clearing Station and proceeded directly to the Hospital Carrier. Where a long journey by boat was contemplated, the Hospital Ship assured a more comfortable ride. The battalion was able to schedule evacuation as required except for a two-day period, 20–22 June, when a storm at sea prevented the movement of casualties to ships. On 21 June the battalion had 691 patients remaining at its three companies. However, the following day the storm subsided, the battalion was able to evacuate 515 patients, and the emergency was over. Though carrying on the important function of evacuation until the beach closed, the 261st was most valuable in the first four weeks of the operation, and especially during the first week. An analysis of the number of patients admitted and evacuated illustrates this point: for the first week the battalion averaged 1,035 admissions each day. Then the rate fell, and the average daily admission rate for the first two months was 571.6 patients. After that the rate became much lower; the breakthrough at St. Lo occurred on 25 July 1944, and the front moved in all directions, requiring new evacuation points and accounting for the drop in patient load in the 261st's treatment facilities. The battalion headquarters detachment landed on D+1 and immediately started to run the medical supply dump for the beach. It also accomplished the important job of handling the medical records and reports for the battalion, as well as consolidating records for the 1st Engineer Special Brigade, which numbered over 17,000 troops.
In August the 261st Medical Battalion was assigned by the Surgeon, Normandy Base Section, COMMZ, to stage all hospital units and all female personnel arriving over Utah and Omaha Beaches from the United States and United Kingdom. The following were staged: 32 General Hospitals 5 Station Hospitals 1 Evacuation Hospital 11 Field Hospitals 2 Auxiliary Surgical Groups 2 Ambulance Companies Miscellaneous groups from female organizations such as the Auxiliary Territorial Service, Women's Army Corps, American Red Cross, etc. Throughout the battalion's stay on the beach, its trucks handled supplies for the 1st Engineer Special Brigade. Later on they hauled supplies from the depot at Omaha Beach to Paris for the Surgeon, Normandy Base Section. Starting in August, some of the battalion's jeeps were commandeered by the Surgeon, Communications Zone, and were stationed in Cherbourg, Paris and Rennes. As activity on the beach slowed down, Company A moved a short distance inland, near Ste Marie du Mont and the battalion headquarters. They set up a Clearing Station Hospital and handled principally troops of the 1st Engineer Special Brigade. During the last week of August they were joined by Company C. At this time Company B started to handle only the POW patients from the nearby stockades, and this policy continued throughout the remainder of 1944. On 11 September 1944, Company B joined the remainder of the battalion. This arrangement lasted through November. On 11 December 1944, Headquarters and Company B moved to Ste Mere Eglise, and were followed shortly by Company A. The latter ran a dispensary at Ste Mere Eglise and dispatched personnel to its 1st Platoon at Valognes to augment the 61st Medical Battalion Clearing Station, which was running a 200-bed hospital. Company C moved to Granville during the last week of December. They took over dispensary care for the surrounding area and also handled quarters cases. This was the battalion's status at the end of 1944. Inactivation In January 1945, the 261st was transferred out of the Normandy Base Section to the Channel Base Section, France. On 18 January 1945, the move took place, and the battalion was arrayed as follows: Headquarters and Headquarters Detachment was located at Fécamp, where it performed its usual command and control and administrative duties, as well as battalion motor and supply functions. Company A was located at Le Havre, where it was running a 200-bed hospital in buildings. Company B was located at Camp Lucky Strike, near Cany-Barville, where it was running a 150-bed tent hospital primarily handling transient troops whose hospitalization period would not exceed seven days, as well as running a large dispensary service and "clearing" all patients from the camp. Company C was located at Camp Twenty Grand near Duclair, where it was running a 120-bed tent hospital primarily handling transient troops whose hospitalization period would not exceed seven days, as well as running a large dispensary service and "clearing" all patients from the camp. On 28 January 1945, the 261st Medical Battalion was disbanded and its personnel and equipment were used to form the 98th Medical Battalion (Separate), which consisted of: Headquarters & Headquarters Detachment, 98th Medical Battalion.
761st Medical Company (Collecting) 762nd Medical Company (Collecting) 763rd Medical Company (Clearing) 764th Medical Company (Ambulance) Because the 261st Medical Battalion was disbanded, there is no connection between the lineage and honors of the 261st Medical Battalion and the organizations that received personnel and equipment from the 261st. Some relevant facts from the 261st Medical Battalion's one historical report from World War II: The battalion existed for 31 months. There was about a 25% change in personnel from the time it was organized until it disbanded. 75% of its members served 30 months overseas and participated in four campaigns. Number of deaths in action: 5. Number of missing in action: 0. Number of wounded in action: 17. Number of decorations: Legion of Merit: 2. Bronze Star Medal: 21. Purple Heart: 17. Citation to accompany the Presidential Unit Citation, streamer embroidered NORMANDY: GENERAL ORDERS NO. 57, WAR DEPARTMENT, Washington 25, D.C., 16 July 1945. "The 261st Medical Battalion is cited for courageous performance of duty under exceptionally difficult and hazardous conditions during the period from 6 June 1944 to 18 July 1944. Landing on the coast of Normandy, France, in close support of assault troops on D-day, in the face of intense artillery fire, this unit, within sight of enemy forces, set up its tentage and commenced to collect and evacuate the wounded. By H plus 8 hours, clearing stations were established and major surgery was being performed. With unwavering determination, this unit handled over 75% of all casualties sustained on First Army beaches during the first 10 days of the Normandy invasion. To shoulder this tremendous burden, the officers and men of the 261st Medical Battalion worked day and night with no sleep whatever under enemy artillery fire and air raids. Undaunted by flak which constantly pierced the operating tents, its personnel continued working in utter disregard for their personal safety in order more speedily to render medical aid to the wounded. From the first critical and uncertain hours on 6 June through 18 July 1944, this unit cared for thousands of casualties, including every single patient evacuated to the United Kingdom from the Cherbourg sector. The valorous and unfaltering devotion to duty and individual gallantry of the members of the 261st Medical Battalion contributed immeasurably to the successful liberation of Europe and are in keeping with the highest traditions of the armed forces of the United States. General Orders 94, Headquarters, European Theater of Operations, 15 May 1945, as approved by the Commanding General, European Theater of Operations." Current The battalion was reconstituted in the Regular Army on 1 October 1991 as the 261st Medical Battalion (Area Support). It was activated on 16 September 1992 at Fort Bragg, North Carolina, and assigned to the 44th Medical Brigade. In its original configuration, the battalion consisted of a Headquarters and Support Company and three lettered Area Support Medical Companies. The battalion drew upon the 36th Medical Company (Clearing) and the 429th Medical Company (Ambulance), which were concurrently inactivated, for personnel and equipment to resource the battalion, which included paid parachutist positions to allow the battalion to support the XVIII Airborne Corps' forced-entry mission. Immediately upon activation, the battalion deployed two companies in support of disaster relief operations in Florida from September through November 1992 in the wake of Hurricane Andrew.
During the period 1 October 1992 through 21 April 2000, the 261st Area Support Medical Battalion was a subordinate unit of the 55th Medical Group, which had been activated as an intermediate-level headquarters under the 44th Medical Brigade when the 44th converted to a general officer command. When the 55th Medical Group inactivated, the 261st Medical Battalion again became a direct reporting unit to the brigade headquarters. Over time, the Area Support Medical Battalion concept demonstrated that a fixed battalion structure, with maintenance and other assets centralized in the Headquarters and Support Company, limited the Army's ability to deploy individual Area Support Medical Companies effectively. The Area Support Medical Battalions were therefore restructured into a separate battalion Headquarters and Headquarters Detachment and individual, numbered Area Support Medical Companies. The 261st Area Support Medical Battalion converted to this new structure on 16 August 2002, when the Headquarters and Support Company was redesignated as the Headquarters and Headquarters Detachment, 261st Medical Battalion; the other organic elements of the battalion (the lettered companies) were inactivated, and their personnel and equipment were used to resource the 36th, 550th, 601st and 602nd Area Support Medical Companies, which were concurrently activated. Following the mid-2000s reorganization of Army Aviation, which placed Army air ambulance companies within General Support Aviation Battalions instead of under Medical Evacuation Battalions controlled by the Army Medical Department, the need for separate Evacuation Battalions, Area Support Medical Battalions, Logistics Battalions, and other specialized medical command and control headquarters was reassessed. The resultant command and control structure at the battalion level was the Medical Battalion (Multifunctional), with a more robust clinical operations cell and a standard battalion staff. As part of the overall force structure conversion, the 261st Medical Battalion converted from an Area Support Medical Battalion to a Multifunctional Medical Battalion in May 2005. Units of the 261st Multifunctional Medical Battalion: Headquarters and Headquarters Detachment, 261st Medical Battalion, Fort Bragg 36th Medical Company (Area Support), Fort Bragg 51st Medical Company (Logistics), Fort Bragg 550th Medical Company (Area Support), Fort Bragg 601st Medical Company (Area Support), Fort Bragg 602nd Medical Company (Area Support), Fort Bragg 690th Medical Company (Ambulance), Fort Bragg 24th Medical Detachment (Optical Fabrication), Fort Bragg 155th Medical Detachment (Preventive Medicine), Fort Bragg 172nd Medical Detachment (Preventive Medicine), Fort Bragg References External links Military units and formations of the United States Army in World War II Military units and formations in North Carolina
46924613
https://en.wikipedia.org/wiki/OS%20X%20El%20Capitan
OS X El Capitan
OS X El Capitan is the twelfth major release of macOS (named OS X at the time of El Capitan's release), Apple Inc.'s desktop and server operating system for Macintosh. It focuses mainly on performance, stability, and security. Following the Northern California landmark-based naming scheme introduced with OS X Mavericks, El Capitan was named after a rock formation in Yosemite National Park. El Capitan is the final version to be released under the name OS X. OS X El Capitan received far better reviews than did Yosemite. The first beta of OS X El Capitan was released to developers shortly following the WWDC keynote on June 8, 2015. The first public beta was made available on July 9, 2015. There were multiple betas released after the keynote. OS X El Capitan was released to end users on September 30, 2015, as a free upgrade through the Mac App Store. System requirements All Macintosh computers that can run Mountain Lion, Mavericks, or Yosemite can run El Capitan, although not all of its features will work on older computers. For example, Apple notes that the newly available Metal API is available on "all Macs since 2012". These computers can run El Capitan, provided they have at least 2GB of RAM: MacBook: Late 2008 or newer MacBook Air: Late 2008 or newer MacBook Pro: Mid 2007 or newer Mac Mini: Early 2009 or newer iMac: Mid 2007 or newer Mac Pro: Early 2008 or newer Xserve: Early 2009 Of these computers, the following models were equipped with 1GB RAM as the standard option on the base model when they were originally shipped. They can only run OS X El Capitan if they have at least 2GB of RAM. iMac: Mid 2007 - Early 2008 Mac Mini: Early 2009 The following computers support features such as Handoff, Instant Hotspot, AirDrop between Mac computers and iOS devices, as well as the new Metal API: iMac: Late 2012 or newer MacBook: Early 2015 or newer MacBook Air: Mid 2012 or newer MacBook Pro: Mid 2012 or newer Mac Mini: Late 2012 or newer Mac Pro: Late 2013 The upgrade varies in size depending upon which Apple Mac computer it is being installed on; in most scenarios, it will require about 6 GB of disk space. Features OS X El Capitan includes features to improve the security, performance, design and usability of OS X. Compared to OS X Yosemite, Apple says that opening PDFs is four times faster, app switching and viewing messages in Mail are twice as fast, and launching apps is 40% faster. The maximum amount of memory that can be allocated to the graphics processor has been increased from 1024 MB to 1536 MB on Macs with an Intel HD 4000 GPU. OS X El Capitan supports Metal, Apple's graphics API introduced in iOS 8 to speed up performance in games and professional applications. Apple's typeface San Francisco replaces Helvetica Neue as the system typeface. OS X El Capitan also adopts LibreSSL, replacing the OpenSSL used in previous versions. Window management OS X El Capitan introduces new window management features such as a split-screen mode limited to two app windows side by side in full screen, entered by pressing the green button in the upper-left corner of a window or the Control+Cmd+F keyboard shortcut and then snapping any other supported window to that full-screen application. This feature is slightly similar to, although less extensive than, the snap-assist feature in Windows 7 (and later) and several Linux desktop environments, such as GNOME. OS X El Capitan improves Mission Control to incorporate this feature across multiple spaces. 
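The version cut-off and the 2 GB memory floor described above lend themselves to a quick scripted check. The following is a minimal, illustrative Python sketch (not an Apple-provided tool) that reads the running OS version and the installed RAM on a Mac and compares them against El Capitan's stated minimums; the 10.11 version string and the 2 GB threshold are taken from the requirements above.

```python
import platform
import subprocess

MIN_RAM_BYTES = 2 * 1024 ** 3  # El Capitan's stated minimum: 2 GB


def installed_ram_bytes() -> int:
    # hw.memsize is the sysctl key for physical memory on macOS.
    out = subprocess.check_output(["sysctl", "-n", "hw.memsize"])
    return int(out.strip())


def running_10_11_or_later() -> bool:
    version = platform.mac_ver()[0]  # e.g. "10.11.6"; empty string on non-Macs
    if not version:
        return False
    major, minor, *_ = (int(part) for part in version.split("."))
    return (major, minor) >= (10, 11)


if __name__ == "__main__":
    print("RAM meets 2 GB minimum:", installed_ram_bytes() >= MIN_RAM_BYTES)
    print("Running OS X 10.11 or later:", running_10_11_or_later())
```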
It also enables users to spot the pointer more easily by enlarging it by shaking the mouse or swiping a finger back and forth on the trackpad. Applications Messages and Mail OS X El Capitan adds multi-touch gestures to applications like Mail and Messages that allow a user to delete or mark emails or conversations by swiping a finger on a multi-touch device, such as a trackpad. OS X also analyzes the contents of individual emails in Mail and uses the gathered information in other applications, such as Calendar. For example, an invitation in Mail can automatically be added as a Calendar event. Maps Apple Maps in El Capitan shows public transit information similar to Maps in iOS 9. This feature was limited to a handful of cities upon launch: Baltimore, Berlin, Chicago, London, Los Angeles, Mexico City, New York City, Paris, Philadelphia, San Francisco, Shanghai, Toronto and Washington D.C. Notes The Notes application receives an overhaul, similar to Notes in iOS 9. Both applications have more powerful text-processing capabilities, such as to-do lists (like in the Reminders application), inline webpage previews, photos and videos, digital sketches, map locations and other documents and media types. Notes replaces traditional IMAP-based syncing with iCloud, which offers better end-to-end encryption and faster syncing. Safari Safari in El Capitan lets users pin tabs for frequently accessed websites to the tab bar, similar to Firefox and Google Chrome. Users are able to quickly identify and mute tabs that play audio without having to search for individual tabs. Safari supports AirPlay video streaming to an Apple TV without the need to broadcast the entire webpage. Safari extensions are now hosted and signed by Apple as part of the updated Apple Developer program and they received native support for content blocking, allowing developers to block website components (such as advertisements) without JavaScript injection. The app also allows the user to customize the font and background of the Reader mode. Spotlight Spotlight is improved with more contextual information such as the weather, stocks, news and sports scores. It is also able to process queries in natural language. For example, users can type "Show me pictures that I took in Yosemite National Park in July 2014" and Spotlight will use that request to bring up the corresponding info. The app can now be resized and moved across the screen. Photos Photos introduced editing extensions, which allow Photos to use editing tools from other apps. 
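Safari's content-blocking extension support described above is driven by declarative rules rather than injected JavaScript: an extension hands Safari a JSON rule list, and the browser applies it natively. As a rough illustration, the Python snippet below writes a minimal rule list in that "trigger"/"action" format; the host pattern and output file name are hypothetical examples, not part of any shipped extension.

```python
import json

# A minimal, illustrative Safari content-blocker rule list.
# "url-filter" takes a regular expression; "block" drops matching requests.
rules = [
    {
        "trigger": {"url-filter": "ads\\.example\\.com"},
        "action": {"type": "block"},
    }
]

with open("blockerList.json", "w") as fh:
    json.dump(rules, fh, indent=2)

print("Wrote", len(rules), "content-blocking rule(s)")
```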
Other applications found in OS X 10.11 El Capitan AirPort Utility App Store Archive Utility Audio MIDI Setup Automator Bluetooth File Exchange Boot Camp Assistant Calculator Calendar Chess ColorSync Utility) Console Contacts Dictionary Digital Color Meter Disk Utility DVD Player FaceTime Font Book Game Center GarageBand (may not be pre-installed) Grab Grapher iBooks (now Apple Books) iMovie (may not be pre-installed) iTunes Image Capture Ink (can only be accessed by connecting a graphics tablet to your Mac) Keychain Access Keynote (may not be pre-installed) Migration Assistant Numbers (may not be pre-installed) Pages (may not be pre-installed) Photo Booth Preview QuickTime Player Reminders Script Editor Stickies System Information Terminal TextEdit Time Machine VoiceOver Utility X11/XQuartz (may not be pre-installed) System Integrity Protection OS X El Capitan has a new security feature called System Integrity Protection (SIP, sometimes referred to as "rootless") that protects certain system processes, files and folders from being modified or tampered with by other processes even when executed by the root user or by a user with root privileges (sudo). Apple says that the root user can be a significant risk factor to the system's security, especially on systems with a single user account on which that user is also the administrator. System Integrity Protection is enabled by default, but can be disabled. Reception Upon release, OS X El Capitan was met with positive reception from both users and critics, with praise mostly going towards the overall functionality of the new features and improved stability. Dieter Bohn of The Verge awarded the operating system a score of 8.5 out of 10; while Jason Snell of Macworld was also positive, rating it 4.5 out of 5. Many people criticized Apple's native apps for not having improved beyond third-party applications. Issues After the 10.11.4 update, many users started reporting that their MacBooks were freezing, requiring a hard reboot. This issue mostly affects Early 2015 MacBook Pro computers, although many others have reported freezes in other models. Several users created videos on YouTube which showed the freezes. Soon after this, Apple released the 10.11.5 update, which contained stability improvements. Apple later acknowledged these problems, recommending their users to update to the last point release. After the December 13, 2016, release of Security Update 2016–003, users reported problems with the WindowServer process becoming unresponsive, causing the GUI to freeze and sometimes necessitating a hard reboot to fix. In response, on January 17, 2017, Apple released Security Update 2016-003 Supplemental (10.11.6) to fix "a kernel issue that may cause your Mac to occasionally become unresponsive" and at the same time released an updated version of Security Update 2016-003 which includes the fix released in the supplemental. Users who have not previously installed Security Update 2016-003 are advised to install the updated version to reach build 15G1217, while users who have already installed the December 13, 2016 Security Update 2016-003 only need to install the supplemental update. Release history References External links – official site OS X El Capitan download page at Apple 11 X86-64 operating systems 2015 software Computer-related introductions in 2015
50418026
https://en.wikipedia.org/wiki/Microsoft%20Power%20BI
Microsoft Power BI
Power BI is an interactive data visualization software product developed by Microsoft with a primary focus on business intelligence. It is part of the Microsoft Power Platform. Power BI is a collection of software services, apps, and connectors that work together to turn unrelated sources of data into coherent, visually immersive, and interactive insights. Data may be input by reading directly from a database, webpage, or structured files such as spreadsheets, CSV, XML, and JSON. General Power BI provides cloud-based BI (business intelligence) services, known as "Power BI Services", along with a desktop-based interface, called "Power BI Desktop". It offers data warehouse capabilities including data preparation, data discovery and interactive dashboards. In March 2016, Microsoft released an additional service called Power BI Embedded on its Azure cloud platform. One main differentiator of the product is the ability to load custom visualizations. History This application was originally conceived by Thierry D'Hers and Amir Netz of the SQL Server Reporting Services Team at Microsoft. It was originally designed by Ron George in the summer of 2010 and named Project Crescent. Project Crescent was initially available for public download on July 11, 2011, bundled with SQL Server Codename Denali. Later renamed Power BI, it was unveiled by Microsoft in September 2013 as Power BI for Office 365. The first release of Power BI was based on the Microsoft Excel-based add-ins Power Query, Power Pivot and Power View. Over time, Microsoft added many additional features such as Q&A (questions and answers), enterprise-level data connectivity, and security options via Power BI Gateways. Power BI was first released to the general public on July 24, 2015. On 14 April 2015, Microsoft announced that it had acquired the Canadian company Datazen, to "complement Power BI, our cloud-based business analytics service, rounding out our mobile capabilities for customers who need a mobile BI solution implemented on-premises and optimized for SQL Server." Most of the 'visuals' in Power BI started life as Datazen visuals. In February 2019, the research and advisory firm Gartner named Microsoft a Leader in the "2019 Gartner Magic Quadrant for Analytics and Business Intelligence Platform", citing the capabilities of the Power BI platform. This was the 12th consecutive year Microsoft had been recognized as a leading vendor in this Magic Quadrant category, a streak that began before Power BI itself existed. Key components The key components of the Power BI ecosystem comprise: Power BI Desktop The Windows desktop application for PCs, primarily for designing and publishing reports to the Service. Power BI Service The SaaS-based (software as a service) online service. This was formerly known as Power BI for Office 365, now referred to as PowerBI.com, or simply Power BI. Power BI Mobile Apps The Power BI mobile apps for Android and iOS devices, as well as for Windows phones and tablets. Power BI Gateway Gateways used to sync external data in and out of Power BI; they are required for automated refreshes. In enterprise mode, they can also be used by Power Automate (previously called Microsoft Flow) and Power Apps in Office 365. Power BI Embedded The Power BI REST API can be used to build dashboards and reports into custom applications that serve Power BI users, as well as non-Power BI users. 
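The Power BI REST API mentioned under Power BI Embedded is an ordinary HTTPS/JSON interface, so any language can call it once an Azure Active Directory access token has been obtained. The Python sketch below lists the reports in the signed-in user's workspace; the token value is a placeholder (in practice it comes from an Azure AD authentication flow), and the example assumes the `requests` package is installed.

```python
import requests

# Placeholder: obtain a real bearer token via an Azure AD flow (e.g. MSAL).
ACCESS_TOKEN = "<azure-ad-access-token>"


def list_my_reports() -> None:
    """Print the id and name of each report in the caller's Power BI workspace."""
    resp = requests.get(
        "https://api.powerbi.com/v1.0/myorg/reports",
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    for report in resp.json().get("value", []):
        print(report["id"], report["name"])


if __name__ == "__main__":
    list_my_reports()
```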
Power BI Report Server An on-premises Power BI reporting solution for companies that will not or cannot store data in the cloud-based Power BI Service. Power BI Premium A capacity-based offering that includes the flexibility to publish reports broadly across an enterprise, without requiring recipients to be licensed individually per user, and that provides greater scale and performance than shared capacity in the Power BI Service. Power BI Visuals Marketplace A marketplace of custom visuals and R-powered visuals. Power BI Dataflow A Power Query implementation in the cloud that can be used for data transformations, producing a common dataset that can be made available to several report developers through the Common Data Service. It can be used as an alternative to, for example, performing transformations in SQL Server Analysis Services (SSAS), and helps ensure that several report developers work from data that has been transformed in the same way. See also Power Pivot Microsoft Excel SQL Server Reporting Services References Further reading External links Microsoft software Business software Business intelligence Data visualization software
867230
https://en.wikipedia.org/wiki/ArtRage
ArtRage
ArtRage is a bitmap graphics editor for digital painting created by Ambient Design Ltd. It is currently in version 6, supports Windows, macOS and mobile Apple and Android devices, and is available in multiple languages. It caters to all ages and skill levels, from children to professional artists. ArtRage 5 was announced for January 2017 and released in February 2017. It is designed to be used with a tablet PC or graphics tablet, but it can be used with a regular mouse as well. Its simulated media include tools such as oil paint, spray paint, pencil, acrylic, and others, using relatively realistic physics to simulate actual painting. Other tools include tracing, smearing, blurring, mixing, symmetry, different types of paper for the "canvas" (e.g. crumpled paper, smooth paper, wrinkled tin foil), as well as special effects, custom brushes and basic digital editing tools. Traditional media simulation and tools ArtRage is designed to be as realistic as possible. This includes varying thickness and textures of media and canvas, the ability to mix media, and a realistic colour blending option, as well as the standard digital RGB blending. It includes a wide array of real-life tools, as well as stencils, scrap layers to use as scrap paper or mixing palettes, and the option to integrate reference or tracing images. The later versions (Studio, Studio Pro, and ArtRage 4) include more standard digital tools, such as Select, Transform, Cloner, Symmetry, Fill, and custom brushes ("Sticker"). Each tool is highly customisable, and comes with several presets. It is possible to share custom resources between users, and there is a reasonably active ArtRage community that creates and shares presets, canvases, custom brushes, stencils, colour palettes, and other resources. Tools and features Real colour blending ArtRage offers a realistic colour blending option as well as standard digital RGB-based blending. It is turned off by default, as it is memory intensive, but can be turned on from the Tools menu. The most noticeable effect is that green is produced when yellow and blue are mixed. The colour picker supports HSL and RGB colours. Custom resources One of the less well known features of ArtRage is its custom resource options. Users can create their own versions of various resources and tools, or record scripts, and share them with other users. Users can save their resource collections as a Package File (.arpack), which acts similarly to a ZIP file. It allows folders of resources to be shared and automatically installed. ArtRage can import some Photoshop filters, but not all. It supports only .ttf (TrueType) fonts, which it reads from the computer's fonts folder. Package files do not work with versions earlier than 3.5. ArtRage Studio does not support Photoshop filters or allow sticker creation, and has fewer options overall. Alternatively, individual resources can be shared directly. Most of the resources have specific file types. Versions ArtRage comes in seven current editions. The mobile apps are ArtRage for Android, ArtRage Oil Painter Free for Android, ArtRage for iPhone, and ArtRage for iPad. A version called ArtRage Touch is also available in the Windows App Store for Metro devices. The desktop versions are ArtRage Lite and ArtRage 4. A free demo version of ArtRage 4 is also available. ArtRage 1, ArtRage 2, ArtRage Studio and ArtRage Studio Pro have been discontinued. Ambient Design releases a new edition of ArtRage on a three-to-four-year upgrade cycle. 
There is a major free update halfway through this cycle (the X.5 edition) and ongoing free patches and minor updates. Some updates continue for the previous version, although support is slowly phased out over time. For example, backdated upgrades to Studio Pro included bug fixes, DRM removal for Steam users, and a fix to allow it to work properly on OS X Mavericks, and the iPad version was updated to include Retina support. ArtRage is also available through Steam. It was part of the very first non-gaming software launch on Steam, on October 10, 2012. The Steam version does not include DRM and can be used without Steam running. Languages ArtRage 4 is available in several languages, but the manual is only available in English. The other versions have manuals available in assorted languages. Language is chosen when installing the program (except for the ArtRage 2 alternative editions). Japanese/Chinese are only available as Alternative Editions in ArtRage 2 ArtRage Studio & Studio Pro Wacom China Editions support English, Traditional Chinese, and Simplified Chinese. ArtRage 2 Japanese Edition supports: Japanese and English interfaces and manuals. ArtRage 2 Wacom China Edition supports Simplified Chinese interface. Release history New desktop editions of ArtRage are released approximately every three years, with a major .5 update being released halfway through. There is no official release schedule, and new updates are announced as they are ready. The latest edition is ArtRage 4.5, which was released 11 August 2014. Free demo versions These were released alongside their respective editions, as trial software. The earlier editions have been discontinued, and only the current ArtRage 4 demo is now available. Work can only be exported to JPEG format and contains an ArtRage watermark. Images larger than 1280x1024 cannot be saved or exported. ArtRage 2.2 Free Edition: limited trial version Studio Pro Demo Edition: full trial version with some restrictions, expires after 30 days ArtRage 4 Demo edition: full trial version with some restrictions ArtRage Touch: one week trial through the Windows App Store ArtRage Oil Painter Free : limited free app on the Play Store and Amazon App Store ArtRage for Android: free download on some Samsung devices via the Galaxy Gifts program Software downloads ArtRage is only sold individually as an online download. It can be bought directly from the company website, or through Steam, but is not sold on any other websites. The free demo is only available from the main site. The mobile (iPad and iPhone) versions can only be bought via the Apple app store and cannot be registered on the ArtRage site. The Android version is available from the GALAXY Gift store for specific Samsung devices, and will be made available on the Samsung Apps and Google Play stores. Upgrades Upgrading from ArtRage Lite or any pre-existing desktop editions gives a permanent 50% discount on ArtRage 4 (the most recent full desktop edition). This upgrade discount is handled separately by the Steam and the ArtRage stores, so users cannot currently switch between stores. To upgrade, owners must register their serial number in the ArtRage members area (unless it is a Steam or mobile version). This also allows users to download both the macOS and the Windows versions of their software at any time, an unlimited number of times. Steam and mobile versions are updated through Steam and the App Store. Hardware bundles The ArtRage for Android app was released as part of a bundle deal with Samsung. 
It is free for the new Samsung Galaxy Note 4 and Samsung Galaxy Note Edge smartphones, distributed through the GALAXY Gifts store, and is available for purchase from the Galaxy App store for other Samsung devices. It was released for sale on the Google Play Store in February 2015. ArtRage Lite comes free with the Wacom Intuos Draw tablet. The older editions of ArtRage also come as bundled software with various devices. ArtRage 2 and ArtRage Studio Pro are still available bundled with several WACOM graphics tablets, as well as various other devices, such as ASUS EP121 tablets, Sony VAIO Laptops, and Adesso Cybertablets. The serial numbers in these cases are handled by the companies distributing the hardware. ArtRage is usually provided as a software download, although it can come pre-installed or on an accompanying CD. A version of ArtRage called "Ink Art" was included in Microsoft's Experience Pack for the Tablet PC in 2005 and on some older Wacom tablets. Ink Art contained a subset of features offered in the full ArtRage program. Promethean Planet, an educator community, distributes a free version of ArtRage for classroom use on Promethean's range of interactive whiteboards. Sony Duo PCs ArtRage is included on the following models from the Sony Duo touchscreen range. Sony VAIO Tap 20 Sony VAIO Duo 11 Sony VAIO L24 Sony VAIO E14P Sony VAIO T13 Sony VAIO Duo 13 Wacom Tablet models (see full Tablet details here) System support ArtRage 4.5 has full 64-bit support on Windows and Macs. There are also iOS supported versions for iPhone and iPad. ArtRage for Android supports Ice Cream Sandwich 4.0 and later. ∞Mavericks support added in ArtRage Studio/ Studio Pro versions 3.5.10 and 3.5.11 ∞∞ Windows XP support was dropped in the 4.5 update. The XP compatible version of ArtRage 4 is still available for existing and future owners of the program through the member area. ∞∞∞ ArtRage 3 Multithreading is not compatible with Windows 10. Disable this in ArtRage Preferences. ∞∞∞∞Linux support is unofficial. ∞∞∞∞∞ The Wine installer for ArtRage 4 does not currently work, but it can be worked around by copying program files from a Windows installation Stylus support ArtRage 4 supports various Wacom stylus features, although they may vary depending on the tool being used. Pressure, Tilt, Airbrush Wheel, and Barrel Rotation Wacom Stylus Recognition Live Tilt (4.5 only) ArtRage for iPad supports four Bluetooth stylus brands, as well as the Apple Pencil, which has full pressure and tilt support. Wacom TenOne Pogo Connect Adonit Jot Touch Pro Adobe Creative ArtRage for Android supports the native Android system for pressure sensitivity, including the Samsung S Pen. The desktop editions of ArtRage fully support the following Windows Graphics tablet and Tablet computer drivers: AES (ArtRage 4.5.10 onwards) WinTab RealTime Stylus Ink Services (ArtRage 2 only) Supported file types ArtRage uses a proprietary file type, ".ptg", which stands for "painting". It can only save as PTG and can only open PSD and PTG files using the File|Open command. However, images can be exported to the following formats: PNG, JPEG, GIF, BMP, TIFF and Adobe Photoshop's .psd format. ArtRage can import all of these file types using the File > Import Image or Import Image as Layer command. Importing PTG files will open the PNG used for thumbnail images instead (this can be used to rescue images from corrupted PTG files). 
*Only when exported as an individual layer Transparency ArtRage supports transparency on imported files, but not on all exported files (for example, GIF and TIFF). It is often easier to export a transparent image as an individual layer, as the Canvas settings can save as opaque on full saves for some file types and in older editions. Ambient Design Ambient Design Ltd. is a New Zealand-based software development and publishing firm, specializing in creative applications and user interfaces for artists. It was founded in 2000 by Andy Bearsley and Matt Fox-Wilson. The founders formerly worked for MetaCreations, the developer of Painter, Bryce and Kai's Power Tools, and have worked for Corel, Adobe, Digital Anarchy and Jasc Software. Before that, they developed Deep Paint 3D for Right Hemisphere Ltd, and hid various Easter eggs in the code. Awards December 2004 Microsoft Tablet PC "Does Your Application Think in Ink?" grand prize winner for ArtRage 1 March 2012 "Hot One" Award for Best New Gear from Professional Photographer for ArtRage Studio Pro (edition 3.5) March 2014 Parents' Choice Gold Award for ArtRage for iPad See also Digital art Digital painting Art software Computer painting Tradigital art Graphic art software Raster graphics List of raster graphics editors Comparison of raster graphics editors References External links Raster graphics editors Macintosh graphics software MacOS graphics software Windows graphics-related software IOS software Digital art
186501
https://en.wikipedia.org/wiki/Revolution%20OS
Revolution OS
Revolution OS is a 2001 documentary film that traces the twenty-year history of GNU, Linux, open source, and the free software movement. Directed by J. T. S. Moore, the film features interviews with prominent hackers and entrepreneurs including Richard Stallman, Michael Tiemann, Linus Torvalds, Larry Augustin, Eric S. Raymond, Bruce Perens, Frank Hecker and Brian Behlendorf. Synopsis The film begins with glimpses of Raymond, a Linux IPO, Torvalds, the idea of Open Source, Perens, Stallman, then sets the historical stage in the early days of hackers and computer hobbyists when code was shared freely. It discusses how change came in 1978 as Microsoft co-founder Bill Gates, in his Open Letter to Hobbyists, pointedly prodded hobbyists to pay up. Stallman relates his struggles with proprietary software vendors at the MIT Artificial Intelligence Lab, leading to his departure to focus on the development of free software, and the GNU Project. Torvalds describes the development of the Linux kernel, the GNU/Linux naming controversy, Linux's further evolution, and its commercialization. Raymond and Stallman clarify the philosophy of free software versus communism and capitalism, as well as the development stages of Linux. Michael Tiemann discusses meeting Stallman in 1987, getting an early version of Stallman's GCC, and founding Cygnus Solutions. Larry Augustin describes combining GNU software with a normal PC to create a Unix-like workstation at one third the price and twice the power of a Sun workstation. He relates his early dealings with venture capitalists, the eventual capitalization and commodification of Linux for his own company, VA Linux, and its IPO. Brian Behlendorf, one of the original developers of the Apache HTTP Server, explains that he started to exchange patches for the NCSA web server daemon with other developers, which led to the release of "a patchy" webserver, dubbed Apache. Frank Hecker of Netscape discusses the events leading up to Netscape's executives releasing the source code for Netscape's browser, one of the signal events which made open source a force to be reckoned with by business executives, the mainstream media, and the public at large. This point was validated further after the film's release as the Netscape source code eventually became the Firefox web browser, reclaiming a large percentage of market share from Microsoft's Internet Explorer. The film also documents the scope of the first full-scale LinuxWorld Summit conference, with appearances by Linus Torvalds and Larry Augustin on the keynote stage. Much of the footage for the film was shot in Silicon Valley. Screenings The film appeared in several film festivals including South by Southwest, the Atlanta Film and Video Festival, Boston Film Festival, and Denver International Film Festival; it won Best Documentary at both the Savannah Film and Video Festival and the Kudzu Film Festival. Quotes Reception Every review noted the historical significance of the information, and those that noticed found the production values high, but the presentation of history mainly too dry, even resembling a lecture. Ron Wells of Film Threat found the film important, worthwhile, and well thought out for explaining the principles of the free software and open source concepts. Noting its failure to represent on camera any debate with representatives of the proprietary software camp, Wells gave the film 4 of 5 stars. 
TV Guide rated the film 3 of 4 stars: "surprisingly exciting", "fascinating" and "sharp looking" with a good soundtrack. Daily Variety saw the film as "targeted equally at the techno-illiterate and the savvy-hacker crowd;" educating and patting one group on the head, and canonizing the other, but strong enough for an "enjoyable" recommendation. On the negative side, The New York Times faulted the film's one-sidedness, found its reliance on jargon "fairly dense going", and gave no recommendation. Internet Reviews found it "a didactic and dull documentary glorifying software anarchy. Raging against Microsoft and Sun. . .", lacking follow-through on Red Hat and VALinux stock (in 2007, at 2% of peak value), with "lots of talking heads". Toxicuniverse.com noted "Revolution OS blatantly serves as infomercial and propaganda. Bearded throwback to the sixties, hacker Richard Stallman serves as the movement's spiritual leader while Scandinavian Linus Torvalds acts as its mild mannered chief engineer (as developer of the Linux kernel)." To Tim Lord, reviewing for Slashdot, the film is interesting and worthy of viewing, with some misgivings: it is "about the growth of the free software movement, and its eventual co-option by the open source movement. . . it was supposed to be about Linux and its battle about Microsoft, but the movie is quickly hijacked by its participants." The film "lacks the staple of documentaries: scenes with multiple people that are later analyzed individually by each of the participants" (or indeed, much back-and-forth at all). Linux itself and its benefits are notably missing, and, "[w]e are never shown anyone using Linux, except for unhappy users at an Installfest." The debate over Linux VS Windows is missing, showing the origin of the OS only as a response to proprietary and expensive Sun and DEC software and hardware, and its growth solely due to the Apache web server. And Lord notes that the film shows, but does not challenge Torvalds or Stallman about their equally disingenuous remarks about the "Linux" vs "GNU/Linux" naming issue. See also The Code – another documentary film about Linux Pirates of Silicon Valley Open source Linux Free software movement Copyleft The Cathedral and the Bazaar References External links Revolution OS Slashdot (20 April 2002) 2001 films 2001 documentary films Documentary films about free software Works about the information economy English-language films Linux Works about computer hacking
51089566
https://en.wikipedia.org/wiki/Unlimited%20Cities
Unlimited Cities
Unlimited Cities (in French Villes sans limites) is a set of methods and apps that facilitate civil society involvement in urban transformations. Unlimited Cities DIY is an open-source upgrade of the application, linked with the New Urban Agenda of the United Nations "Habitat III" Conference. Use The apps can be used on mobile devices (tablets and smartphones) by people to express their views on the evolution of a neighbourhood before future developments are outlined by professionals. "Through a simple interface, they make up a realistic representation of their expectations for a given site. Six cursors can be played with: urban density, nature, mobility, neighbourhood life, digital, creativity/art in the city. Designed by the UFO urban planning agency in partnership with the HOST architectural and urban planning firm, the apps provide upstream information to urban project developers, as well as to people to query their design wishes and thus to appropriate the future project." Thus the Unlimited Cities method gives civil society the opportunity to act and co-construct with professional urban developers without being subject to solutions predetermined by experts and public authorities. According to one of its creators, the urban architect Alain Renk: "Today the future of cities and metropolises lies less in the poetic, imaginary and solitary techniques found in Jules Verne's novels than in capacities offered by digital mediations to imagine, represent and openly share knowledge, through the collective intelligence, offering opportunities to consider less standardized and prioritized lifestyles, freer creativity, shorter design and manufacturing circuits of circular economies and, ultimately, preservation of common goods." Background The project originated in 2002 at the Orléans ArchiLab international meetings, with the publication of the book Construire la Ville Complexe? (Building the Complex City?), published by Jean Michel Place, a well-known publisher in the world of architecture. It continued in 2007 with research using digital urban ecosystem simulators under the Plan Construction Architecture du Ministère du développement durable (Construction Architecture Plan of the Ministry of Sustainable Development). A joint interview with Alain Renk and the sociologist Marc Augé discussed the potential of simulators to harness collective intelligence. In 2009, the HOST Agency, responsible for the creation of the civic-tech company UFO, obtained certification from the Cap Digital and Advancity competitiveness clusters to set up the UrbanD collaborative research program, intended to lay the theoretical and technical basis for collaborative software for evaluating and representing the quality of urban life in order to inform decisions. This three-year program (from early 2010 to late 2012) was the basis for the creation of the "Unlimited Cities" apps and required a budget of €800,000, half of which was funded by European Regional Development Fund (ERDF) subsidies. In June 2011 a beta version of Unlimited Cities PRO was presented in Paris at the Futur en Seine Festival, with real-world tests with visitors; it was then shown in Tokyo in November and in Rio de Janeiro in December of the same year. 
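The six cursors described in the Use section above (urban density, nature, mobility, neighbourhood life, digital, creativity/art) amount to a small per-participant preference record. Purely as an illustration of how such contributions might be stored and aggregated, and not as code from the actual application, the following Python sketch averages the cursor settings collected from a few hypothetical street interviews.

```python
from dataclasses import dataclass, asdict
from statistics import mean

CURSORS = ("density", "nature", "mobility", "neighbourhood_life", "digital", "creativity")


@dataclass
class Contribution:
    """One participant's cursor settings, each on an illustrative 0-100 scale."""
    density: int
    nature: int
    mobility: int
    neighbourhood_life: int
    digital: int
    creativity: int


def aggregate(contributions):
    """Average each cursor across all participants."""
    return {c: mean(asdict(x)[c] for x in contributions) for c in CURSORS}


# Hypothetical inputs from three street interviews.
sample = [
    Contribution(70, 60, 50, 80, 40, 65),
    Contribution(30, 90, 70, 60, 20, 55),
    Contribution(55, 75, 65, 70, 35, 80),
]
print(aggregate(sample))
```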
On October 2, 2012 the first operational deployment is implemented by the town hall of Rennes,: " the first tests were carried out in the area around the TGV1 train station and the prison demolition site in Rennes, and we discovered that when being able to build what they wanted, users quickly forgot reluctance such as for urban density, and they conceived urban projects that often went against conventional wisdom. The idea of urban density and tall buildings is often rejected, but it is accepted as soon as people can adapt it to their own logic.”. Then the tool is implemented in Montpellier in June 2013. Then in June, July and August in Evreux, where UFO worked on the conversion of the downtown former Saint-Louis Hospital., In June 2015 in Grenoble, "the application is used to imagine, jointly with the population, solutions to give more visibility to the transport offer. It is a different way of working. We do not turn anymore only to the planners but we directly go to the locals and ask for their opinion, their vision. The purpose is obviously to increase buses utilization, but it is also to have people satisfied with the arrangements put in place. ". The first cities to use Unlimited Cities PRO call the attention due to the mediators’ ability to query people in the street, often off guard, with an appealing playful tablet. Their presence in the neighbourhoods for several weeks, right where people live and work, prompts the number of participants to be much higher (over 1 600 people in Evreux) than with conventional methods of consultation that struggle to get people go to places allocated for this. These achievements arouse the interest of some researchers who will analyse some attitude changes in urban professionals and citizens. Can we talk about a rebirth of participatory democracy? Are those images, which belong to hyperrealism, misleading or conversely are they accessible to all kinds of people? Are the Open-Source dimension of the collected data and its accessibility in real time involved in building trust between experts and non-experts? The method is the topic of several scientific articles, and has been honoured with several awards in France, as well as with the Open Cities Award from the European Commission. UN-Habitat and Unlimited Cities DIY The first requests were initially applied for in Rio, in 2011, for associations to use the software in the favelas, and then recurrently in Africa, South America and India. In parallel associations and groups in Europe also wanted to be free to implement, in their territories and independently, the collaborative urban planning device without needing further financing than users’ support. In June 2013 the Civic-Tech UFO presented at the festival Futur en Seine the Unlimited Cities DIY prototype: an open source, free and easy to implement upgrade. 
Presentations of the beta version are then non-stop: September 2013 Nantes for Ecocity symposium; November 2013, Barcelona in the Open Cities Award Ceremony; January 2014 Rennes for a meeting at the Institute of Urbanism, March 2014 Le Havre at a conference of Urbanism collaboration; May 2014 London in the Franco-British symposium on Smart-Cities; July 2014 Berlin for the Open Knowledge Festival; early October 2014 Hyderabad, India for Congress Metropolis; late October 2014 Wuhan China for the conference of the Sino-French ecocity Caiden; 2015 Wroclaw for the Hacking of the Social Operating System; September 2015, Lyon at the annual conference of the national Federation Planning Agencies and many other workshops that confirmed recurring applications for an Open Source version easy to implement. 2016 has been showing a strong acceleration of the Open Source version expansion because several workshops were organized with the University of Lyon in April to redesign the campus of the Central School (Wikibuilding-campus project), and then again in China several conferences and use of the software with farmers (Wikibuilding-Village project), with children (project Wikibuilding Natur), and with students and faculty of the University of Wuhan HUST. The first contacts between the agency UN-Habitat and Unlimited Cities DIY software are held in October 2014 in Hyderabad with the City Resilience Profiling Program and then in Barcelona in 2015. The connection is concretized in the following Habitat III Conference. Held every twenty years, Habitat conferences organized by the UN form a sounding board that accelerates the consideration of major urban issues in public policy. This year 2016, the preparatory document for the Habitat III Conference in Quito highlights the need to evolve towards urban planning construction carried together with civil society. The non-profit organisation "7 Milliards d'urbanistes (7 billion urban planners) " will be present in Quito to introduce the open source Unlimited Cities DIY software to delegates of the 197 countries member for collaborative urban planning to become available to the greatest possible number of people. Honours and awards In 2015, the Wikibuilding project designed for the future Paris Rive gauche is preselected under the contest "Réinventer Paris." 2013 Winner of Printemps du numérique (Rural TIC) 2013 Winner of Territoires innovants (Interconnected) 2013 Winner of Open Cities Awards (European Commission) 2011 Winner of the call for projects for Futur en Seine (Cap digital) 2011 Nominee of the 2011 Prix de la croissance verte numérique Award (Acidd) 2010 Selection of the Carrefours Innovations&Territoires (CDC) Publications (fr) Créer virtuellement un urbanisme collectif by Julie Nicolas and Xavier Crépin, Le Moniteur - N°5813, April 2015. (fr) L’urbanisme collaboratif, expérience et contexte by Nancy Ottaviano, GIS Symposium Participation. (fr) Clément Marquet, Nancy Ottaviano and Alain Renk, « Pour une ville contributive », Urbanisme dossier "Villes numériques, villes intelligentes?", Autumn 2014, p. 53-55. (fr) L’appropriation de la ville par le numérique by Clément Marquet : Undergoing Thesis, Institut Mines Telecom. (fr) Et si on inventait l’enquête d’imagination publique? by Sylvain Rolland, La Tribune hors-série Grand-Paris. (fr) Villes sans limite, un outil pour stimuler l’imagination publique by Karim Ben Merien and Xavier Opige, Les Cahiers de l’IAU idf (fr) Wikibuilding : l’urbanisme participatif de demain ? 
by Ludovic Clerima, Explorimmo, 2015 Alain Renk, Urban Diversity: Cities Of Differences Create Different Cities, in WorldCrunch.com, November 12, 2013 (visited on May 28, 2016) (fr) Philippe Gargov, Samsung et son safari imaginaire : l’urbanisme collaboratif is now mainstream, on pop-up-urbain.com, December 2012 (visited on June 13, 2016) July 8, 2011 radio broadcast: Qu’est-ce que la ville numérique? : The field of the possible, France Culture, 2011 References Urban planning Free software
28542189
https://en.wikipedia.org/wiki/DDC-I
DDC-I
DDC-I, Inc. is a privately held company providing software development of real-time operating systems, software development tools, and software services for safety-critical embedded applications, headquartered in Phoenix, Arizona. It was first created in 1985 as the Danish firm DDC International A/S (also known as DDC-I A/S), a commercial outgrowth of Dansk Datamatik Center, a Danish software research and development organization of the 1980s. The American subsidiary was created in 1986. For many years, the firm specialized in language compilers for the programming language Ada. In 2003, the Danish office was closed and all operations moved to the Phoenix location. Origins The origins of DDC International A/S lay in Dansk Datamatik Center, a Danish software research and development organization that was formed in 1979 to demonstrate the value of using modern techniques, especially those involving formal methods, in software design and development. Among its several projects was the creation of a compiler system for the programming language Ada. Ada was a difficult language to implement and early compiler projects for it often proved disappointments. But the DDC compiler design was sound and it first passed the United States Department of Defense-sponsored Ada Compiler Validation Capability (ACVC) tests on a VAX/VMS system in September 1984. As such, it was the first European Ada compiler to meet this standard. Success of the Ada project led to a separate company being formed in 1985, called DDC International A/S, with the purpose of commercializing the Ada compiler system product. Like its originator, it was based in Lyngby, Denmark. Ole N. Oest was named the managing director of DDC International. In 1986, DDC-I, Inc. was founded as the American subsidiary company. Located in Phoenix, Arizona, it focused on sales, customer support, and engineering consulting activities in the United States. Ada compiler DDC-I established a business in selling the Ada compiler system product, named DACS, directly to firms, both as software to develop projects in Ada with, and as source code to computer makers and others, who would rehost or retarget it to other processors and operating systems. The first business sold both native compilers and cross compilers, with the latter more common since Ada was primarily used in the embedded systems realm. One of the first cross compilers that DDC-I developed was from VAX/VMS to the Intel 8086 and Intel 80286; the effort was already underway by early 1985. It began as a joint venture with the Italian defense electronics company Selenia that would target both their MARA-860 and MARA-286 multi-microprocessor computers, based on the 8086 and 80286 architectures, and generic embedded and OS-hosting 8086 and 80286 systems. This work was the start of what would become the largest-selling product line for the firm. DDC-I developed a reputation for quality Ada cross compilers and runtime systems for Intel 80x86 processors. The second business made use of what became termed the DDC OEM Compiler Kit, who could be using the Ada front end for compilers to other hosts or targets or for other tools such as VLSI. In a September 1985 meeting in Lund, Sweden, several of the OEM Kit customers formed the DDC Ada Compiler Retargeter's Group. It held at least three meetings over the course of 1985 and 1986. 
The early OEM customers included the University of Lund, Defence Materiel Administration, and Ericsson Radio Systems in Sweden; Softplan and Nokia Information Systems in Finland; Selenia and Olivetti in Italy; ICL Defence Systems and STL Ltd in the United Kingdom; Aitech Software Engineering in Israel; and Advanced Computer Techniques, Rockwell Collins, Control Data Corporation, and General Systems Group in the United States. Later developers were often less well versed in formal methods and did not use them in their work on the compiler. This was even more so in the case of companies retargeting the compiler, many of which were unfamiliar with the Ada language. DDC-I was in the same market as several other Ada compiler firms, including Alsys, TeleSoft, Verdix, Tartan Laboratories, and TLD Systems. (DDC-I would go on to stay in business longer than any of these others.) As with other Ada compiler vendors, much of the time of DDC-I engineers was spent in conforming to the large, difficult Ada Compiler Validation Capability (ACVC) standardized language and runtime test suite. Starting in 1988 and continuing for several years, DDC-I consultants collaborated with Honeywell Air Transport Systems to retarget and optimize the DDC-I Ada compiler to the AMD 29050 processor. This DDC-I-based cross compiler system was used to develop the primary flight software for the Boeing 777 airliner. This software, named the Airplane Information Management System, would become arguably the best-known of any Ada-in-use project, civilian or military. Some 550 developers at Honeywell worked on the flight system and it was publicized as a major Ada success story. In October 1991, it was announced that DDC-I had acquired the Ada and JOVIAL language embedded systems business of InterACT, which had become a venture of Advanced Computer Techniques. This wholly owned New York-based entity was briefly named DDC-Inter before being subsumed into DDC-I proper. This brought Ada cross compilers for the MIL-STD-1750A and MIPS R3000 processors, and JOVIAL language cross compilers for the MIL-STD-1750A and Zilog Z8002 into the product line. The MIPS product was one which DDC-I emphasised, with engineering efforts that included automatic recognition of certain tasking optimizations, and work in the U.S. Air Force-sponsored Common Ada Runtime System (CARTS) project towards providing standard interfaces into Ada runtime environments. At the end of 1993, the New York office was closed, and its work transferred to the Phoenix office. By the early 1990s, DDC-I offered Ada native compilers for VAX/VMS, Sun-3 and SPARC under SunOS, and Intel 80386 under UNIX System V and OS/2, and offered cross compilers for the Motorola 680x0 and Intel i860 in addition to the abovementioned targets. Ada 95 and explorations of other product lines In the early 1990s, DDC-I worked on redesigning the compiler system for the wide-ranging Ada 95 revision of the language standard. They used a new object-based programming design and still adhered to a formal methods approach as well, using VDM-SL. The work was done under sponsorship of the European Community-based Open Microprocessor Initiative's Global Language and Uniform Environment -project (OMI/GLUE), where DDC-I's role was to create a compiler targeting the Architecture Neutral Distribution Format (ANDF) intermediate form, with the intention of bringing Ada 95 to more platforms quickly. 
As part of this work, DDC-I collaborated with the Defence Evaluation and Research Agency in expanding some of ANDF's abilities to express semantics of Ada and the fast-growing programming language C++. Work in Ada-specific areas, such as bounds-checking elimination, was done to get optimal run-time performance. The Ada software environment was originally thought to be a promising market. But the Ada compiler business proved to be a difficult one to be in. During this time, 1987–97, a U.S. government mandate for Ada use was in effect, albeit with some waivers granted. Many of the advantages of the language for general-purpose programming were not seen as such by the general software engineering community or by educators. The sales situation was challenging, with periodic small layoffs. Despite consolidation among other Ada tool providers, DDC-I remained an independent company. In any case, DDC-I was an enthusiastic advocate of the Ada language, for use in the company and externally. A paper one of its engineers published in 1993 assessed Ada 95's object-oriented features favorably to those of C++ and attracted some attention. At the same time, the firm attempted to expand and augment its product line. The RAISE toolset was available, as was Cedar, a design tool for real-time systems. Also offered was Beologic, a tool to develop and run state/event parts of applications, that had been licensed from Bang & Olufsen and integrated with the Ada compiler system. The biggest effort was in the direction of C++. DDC-I began offering 1st Object Exec, a C++-based real-time operating system intended for direct, object-level support of embedded applications. Despite considerable efforts during 1993–94, 1st Object Exec failed to gain traction in the marketplace. The one area where Ada did gain a solid foothold was in real-time, high-reliability, high-integrity, safety-critical applications such as aerospace. Based on its experience with Honeywell and other customers, DDC-I acquired expertise in the mapping of Ada language and runtime features to the requirements of safety-critical certifications, in particular those for the DO-178B (Software Considerations in Airborne Systems and Equipment Certification) standard, and provided tools for that process. Such applications continued even after the Ada mandate was dropped in 1997. For instance, in 1997 the firm was awarded a joint contract with Sikorsky Aircraft and Boeing Defense & Space Group's Helicopters Division to develop software to be used in the Boeing/Sikorsky RAH-66 Comanche. In March 1998, DDC-I acquired from Texas Instruments the development and sales and marketing rights to the Tartan Ada compilers for the Intel i960, Motorola 680x0, and MIL-STD-1750A targets. Support for mixed language development was added in 2000 with the addition of the programming language C as part of DDC-I's mixed-language integrated development environment for SCORE (for Safety-Critical, Object-oriented, Real-time Embedded). Leveraging the ANDF format, the DWARF standardized debugging format, and the OMI protocol for communicating with target board debug monitors, SCORE was able to provide a common building and debugging environment for real-time application developers. Support for Embedded C++ was added to SCORE in 2003, by which time it could integrate with a variety of target board scenarios on Intel x86 and Power PC processors. 
The C and Embedded C++ compilers for ANDF came from a licensing arrangement for the TenDRA Compiler (later DDC-I became the maintainer of those compilers). Subsequently, Ada 95 support for the older 1750A and TMS320C4x processors was added to SCORE. U.S. headquarters and real-time operating systems By April 2003 the industry move away from Ada and the declining position of the aircraft industry had taken its toll and DDC-I suffered significant financial losses. DDC-I decided to close its Denmark office in Lyngby and move all operations to Phoenix. In September 2005, the company named Bob Morris, formerly of LynuxWorks, as its president and chief executive officer. Oest became Chief Technology Officer. In April 2006, DDC-I moved to new offices in northern Phoenix, stating that it was expanding and that it expected revenue to grow 40–50 percent over the previous year. Since 2006, the company has been contributing to the Java Expert Group for Safety Critical Java. This work, which uses the Real-time specification for Java as a base and then specifies language and library subsets and coding rules for use to provide sufficient determinism, is seen by the firm's representatives as making Java possibly equal or superior to either Ada or C++ as a language for safety-critical applications. The company has viewed the safety-critical Java profile as one that can help the defense industry deal with the issue of aging software and hardware applications. By 2008, DDC-I was referring to Ada as a legacy language and offering semi-automated tools and professional services to help customers migrate to newer solutions. In November 2008, the company entered the embedded real-time operating system (RTOS) market with two products, Deos and HeartOS. Both were based on underlying software technology originated at Honeywell International and already deployed on many commercial and military aircraft. As part of the action, DDC-I hired some of the key Honeywell engineering staff who had designed Deos. Other firms in the same RTOS market segment as DDC-I include LynuxWorks, Wind River Systems, SYSGO, and Express Logic. Products Deos is a time and space partitioned real-time operating system (RTOS) that was first certified to DO-178B level A in 1998. Deos contains several patented architectural features including enhancements for processor utilization, binary software reuse and safe scheduling for multi-core processors. Deos users have the ability to add on optional ARINC 653 personality modules designed to fit different application needs. Deos supports the processors ARM, MIPS, PowerPC, and x86, and is supported by popular SSL/TLS libraries such as wolfSSL. It was listed as one of the Hot 100 Electronic Products of 2009 by EDN magazine. HeartOS is a POSIX-based hard real-time operating system, designed for small to medium embedded applications including safety-critical types. It supports ARM, PowerPC, x86 and other 16-bit and 32-bit processors. It is configurable without the POSIX interface layer for memory-constrained systems. OpenArbor is an Eclipse-based integrated development environment for C, Embedded C++, and Ada application development. It was announced in 2007. SCORE is a mixed-language set of integrated tools for safety-critical, object-oriented, real-time embedded software applications, supporting Ada, C, and Embedded C++ applications for a variety of embedded architectures. Legacy Ada 83 and JOVIAL compiler system products also continue to be supported. 
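Deos's time partitioning follows the general ARINC 653 idea of a fixed "major frame" divided into windows, each reserved for one application partition regardless of what the others do. The sketch below is a deliberately simplified Python simulation of that scheduling concept; it is not the Deos API, and the partition names and window lengths are made up, but it shows how a fixed cyclic schedule isolates partitions in time.

```python
# Hypothetical major frame: (partition name, window length in milliseconds).
# In an ARINC 653-style system this table is fixed at configuration time.
MAJOR_FRAME = [
    ("flight_display", 20),
    ("maintenance",    10),
    ("io_manager",     20),
]


def simulate(frames: int = 2) -> None:
    """Walk the cyclic schedule; each window runs exactly one partition."""
    t = 0
    for _ in range(frames):
        for partition, length in MAJOR_FRAME:
            print(f"t={t:3d} ms  run {partition:15s} for {length} ms")
            t += length  # no partition can overrun its window


if __name__ == "__main__":
    simulate()
```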
Bibliography A slightly expanded version of this chapter is available online at https://www.researchgate.net/publication/221271386_Dansk_Datamatik_Center. A further expanded version is part of Bjørner's online memoir at http://www.imm.dtu.dk/~dibj/trivia/node5.html. A slides presentation by Gram based on the paper is available online as Why Dansk Datamatik Center? WorldCat entry References External links Freescale Semiconductor – Alliance Network entry ARM – Connected Community entry Software companies based in Arizona Software companies of Denmark Companies based in Lyngby-Taarbæk Municipality Companies based in Phoenix, Arizona Technology companies established in 1985 1985 establishments in Denmark 2003 disestablishments in Denmark 1986 establishments in Arizona Development software companies Ada (programming language) Software companies of the United States
67892185
https://en.wikipedia.org/wiki/ANOM
ANOM
The ANOM (also stylized as AN0M or ΛNØM) sting operation (known as Operation Trojan Shield or Operation Ironside) is a collaboration by law enforcement agencies from several countries, running between 2018 and 2021, that intercepted millions of messages sent through the supposedly secure smartphone-based messaging app ANOM. The ANOM service was widely used by criminals, but instead of providing secure communication, it was actually a trojan horse covertly distributed by the United States Federal Bureau of Investigation (FBI) and the Australian Federal Police (AFP), enabling them to monitor all communications. Through collaboration with other law enforcement agencies worldwide, the operation resulted in the arrest of over 800 suspects allegedly involved in criminal activity, in 16 countries. Among the arrested people were alleged members of Australian-based Italian mafia, Albanian organised crime, outlaw motorcycle clubs, drug syndicates and other organised crime groups. Background The shutdown of the Canadian secure messaging company Phantom Secure in March 2018 left international criminals in need of an alternative system for secure communication. Around the same time, the San Diego FBI branch had been working with a person who had been developing a "next-generation" encrypted device for use by criminal networks. The person was facing charges and cooperated with the FBI in exchange for a reduced sentence. The person offered to develop ANOM and then distribute it to criminals through their existing networks. The first communication devices with ANOM were offered by this informant to three former distributors of Phantom Secure in October 2018. The FBI also negotiated with an unnamed third country to set up a communication interception, but based on a court order that allowed passing the information back to the FBI. Since October 2019, ANOM communications have been passed on to the FBI from this third country. The FBI named the operation "Trojan Shield", and the AFP named it "Ironside". Distribution and usage The ANOM devices consisted of a messaging app running on Android smartphones that had been specially modified to disable normal functions such as voice telephony, email, or location services, and with the addition of PIN entry screen scrambling to randomise the layout of the numbers, the deletion of all information on the phone if a specific PIN is entered, and the option for the automatic deletion of all information if unused for a specific period of time. The app was opened by entering a specific calculation within the calculator app, described by the developer of GrapheneOS as "quite amusing security theater", where the messaging app then communicated with other devices via supposedly secure proxy servers, which also — unknown to the app's users — copied all sent messages to servers controlled by the FBI. The FBI could then decrypt the messages with a private key associated with the message, without ever needing remote access to the devices. The devices also had a fixed identification number assigned to each user, allowing messages from the same user to be connected to each other. About 50 devices were distributed in Australia for beta testing from October 2018. The intercepted communications showed that every device was used for criminal activities, primarily being used by organised criminal gangs. About 125 devices were shipped to different drop-off points to the United States in 2020. 
Use of the app spread through word of mouth, and was also encouraged by undercover agents; drug trafficker Hakan Ayik was identified "as someone who was trusted and was going to be able to successfully distribute this platform", and without his knowledge was encouraged by undercover agents to use and sell the devices on the black market, further expanding its use. After users of the devices requested smaller and newer phones, new devices were designed and sold; customer service and technical assistance was also provided by the company. The most commonly used languages on the app were Dutch, German and Swedish. After a slow start, the rate of distribution of ANOM increased from mid-2019. By October 2019, there were several hundred users. By May 2021, there had been 11,800 devices with ANOM installed, of which about 9,000 were in use. New Zealand had 57 users of the ANOM communication system. The Swedish Police had access to conversations from 1,600 users, of which they focused their surveillance on 600 users. Europol stated 27 million messages were collected from ANOM devices across over 100 countries. Some skepticism of the app did exist; one March 2021 WordPress blog post called the app a scam. Arrests and reactions The sting operation culminated in search warrants that were executed simultaneously around the globe on 8 June 2021. It is not entirely clear why this date was chosen, but news organisations have speculated it might be related to a warrant for server access expiring on 7 June. The background to the sting operation and its transnational nature was revealed following the execution of the search warrants. Over 800 people were arrested in 16 countries. Among the arrested people were alleged members of Australian-based Italian mafia, Albanian organised crime, outlaw motorcycle gangs, drug syndicates and other crime groups. In the European Union, arrests were coordinated through Europol. Arrests were also made in the United Kingdom, although the National Crime Agency was unwilling to provide details about the number arrested. The seized evidence included almost 40 tons of drugs (over eight tons of cocaine, 22 tons of cannabis and cannabis resin, six tons of synthetic drug precursors, two tons of synthetic drugs), 250 guns, 55 luxury cars and more than $48 million in various currencies and cryptocurrencies. In Australia, 224 people were arrested on 526 total charges. In New Zealand, 35 people were arrested and faced a total of 900 charges. Police seized $3.7 million in assets, including 14 vehicles, drugs, firearms and more than $1 million in cash. Over the course of the three years, more than 9,000 police officers across 18 countries were involved in the sting operation. Australian Prime Minister Scott Morrison said that the sting operation had "struck a heavy blow against organised crime". Europol described it as the "biggest ever law enforcement operation against encrypted communication". Australia About 50 of the devices had been sold in Australia. Police arrested 224 suspects and seized 104 firearms and confiscated cash and possessions valued at more than 45 million AUD. Germany In Germany, the majority of the police activity was in the state of Hesse where 60 of the 70 nationwide suspects were arrested. Police searched 150 locations and in many cases under suspicion of drug trafficking. Netherlands In the Netherlands, 49 people were arrested by Dutch police while they investigated 25 drug production facilities and narcotics caches. 
Police also seized eight firearms, large supplies of narcotics and more than 2.3 million euros. Sweden In Sweden, 155 people were arrested as part of the operation. According to police in Sweden which received intelligence from the FBI, during an early phase of the operation it was discovered that many of the suspects were in Sweden. Linda Staaf, head of the Swedish police's intelligence activities, said that the suspects in Sweden had a higher rate of violent crime than the other countries. United States No arrests were made in the United States because of privacy laws that prevented law enforcement from collecting messages from domestic subjects. See also EncroChat – a network infiltrated by law enforcement to investigate organized crime in Europe Ennetcom – a network seized by Dutch authorities, who used it to make arrests Phantom Secure Sky Global References External links ANOM.io - Domain Seized - as of 8 June 2021, this displays FBI and AFP graphics, a "Trojan Shield" graphic and a "This domain has been seized" notice, with a form inviting visitors "To determine if your account is associated with an ongoing investigation, please enter any device details below" 2021 in international relations 2021 in law Anonymity networks Deception operations Encryption debate Secure communication Operations against organized crime Law enforcement operations Trojan horses June 2021 events History of cryptography Federal Bureau of Investigation operations
44130571
https://en.wikipedia.org/wiki/ESAN%20University
ESAN University
ESAN University, or Universidad ESAN in Spanish (acronym: ESAN), is a non-profit private university located in the Santiago de Surco district of Lima, Peru. ESAN University is a leading academic institution in business education that was founded in 1963 as ESAN - Escuela de Administración de Negocios para Graduados, the first graduate business school in Latin America. Over the years ESAN has achieved a prominent role in Peru, based on the quality of its MBA program, specialized master's degrees, executive education programs and other offerings. ESAN University currently offers undergraduate programs divided into three schools: School of Economics and Management: Administration and Marketing, Administration and Finance, Economy and International Business; School of Engineering: Information Technology and Systems Engineering, Industrial Engineering, Environmental Management and Engineering; School of Law and Social Sciences: Organizational Psychology, Consumer Psychology, Corporate Law. History In 1962, USAID - the U.S. Agency for International Development, established by US president John F. Kennedy - called on the main business schools to study the possibility of developing management and business programs in Latin America. This project was entrusted to the Stanford Graduate School of Business. Its dean, Ernest Arbuckle, took on the challenge and assembled a team of professors led by Gail M. Oxley and Alan B. Coleman to evaluate on site the feasibility of undertaking this ambitious project in Peru. Thus, on July 25, 1963, the Peruvian and American governments founded the "Escuela de Administración de Negocios para Graduados, ESAN", known in English as the "ESAN Graduate School of Business". Its organization was entrusted to the Stanford Graduate School of Business and professor Alan B. Coleman. Shortly afterwards, a seminar on international senior management was taught. A few months later, ESAN opened its doors and professionals from all over Latin America applied to study in the first full-time Master in Business Administration (MBA) program in the Spanish-speaking world. The following year, on April 1, 1964, the first cohort of the MBA program, or Programa Magister, started classes. More than 55 years have gone by since then, but the tone, style and spirit of the American teachers who forged ESAN remain present in ESAN University and the ESAN Graduate School of Business. International accreditation ESAN received AMBA (Association of MBAs) accreditation in 2002, becoming the first Peruvian MBA program and institution of higher education to be internationally accredited by this association. In 2013 ESAN University received international accreditation from AACSB (Association to Advance Collegiate Schools of Business) for ten of its programs: 7 master's degree programs: Master in Business Administration - MBA, Master in Finance, Master in Marketing, Master in Human Resources Management & Organization, Master in Information Technology, Master in Supply Chain Management; 3 bachelor's degree programs: Administration & Marketing, Administration & Finance, Economy & International Business. 
In 2017, ICACIT - Instituto de Calidad y Acreditación de Programas de Computación, Ingeniería y Tecnología, a member of the Washington Accord and recognized by SINEACE - Sistema Nacional de Evaluación, Acreditación y Certificación de la Calidad Educativa of Peru, awarded accreditation to two of ESAN's engineering programs, using ABET criteria for accrediting engineering programs: Industrial and Commercial Engineering, Information Technology and Systems Engineering. In 2020, CONAED - Consejo para la Acreditación en la Enseñanza en Derecho, a Mexican organization that recognizes and supports academic excellence in higher education, recognized by SINEACE - Sistema Nacional de Evaluación, Acreditación y Certificación de la Calidad Educativa of Peru, awarded accreditation to ESAN's program in Corporate Law. Dual Masters Degrees ESAN's International MBA program offers dual master's degrees with the following partner universities: ICHEC Brussels Management School, Brussels, Belgium; Schulich School of Business, York University, Toronto, Canada; EDHEC Business School, Lille, France; ESC Clermont Business School, Clermont-Ferrand, France; IÉSEG School of Management, Lille, France; Montpellier Business School, Montpellier, France; HHL Leipzig Graduate School of Management, Leipzig, Germany; NUCB, Nagoya University of Commerce and Business, Nagoya, Japan; The University of Texas at Austin, Austin, Texas, USA; University of Dallas, Dallas, Texas, United States; Florida International University, Florida, USA. Technological Infrastructure: ESAN Data and ESAN FabLab ESAN Data, the university's information technology center, was founded in 1981 and inaugurated by the former president of Peru, architect Fernando Belaunde Terry; it is in charge of creating technological services and tools for education and the business sector. In 1991, ESAN Data became a pioneer of the Internet in Peru, installing the country's first Internet connection point on the university's Monterrico campus. In 2013, ESAN created the digital manufacturing laboratory ESAN FabLab, which is part of the global FAB LAB innovation network of the Massachusetts Institute of Technology. This laboratory allows students and teachers from various degree programs to produce 3D prints using its machines, equipment and printers. ESAN FabLab has been recognized as a Center for Scientific Research, Technological Development and Technological Innovation (R + D + i) by the National Council of Science, Technology and Technological Innovation (CONCYTEC), becoming the first digital manufacturing laboratory in Peru with this certification. References External links Official website in English Verified Twitter account of ESAN Verified Twitter account of ESAN University Official Facebook account of ESAN University Official Facebook account of ESAN Universities in Lima Universities in Peru
30671765
https://en.wikipedia.org/wiki/Sound%20and%20music%20computing
Sound and music computing
Sound and music computing (SMC) is a research field that studies the whole sound and music communication chain from a multidisciplinary point of view. By combining scientific, technological and artistic methodologies it aims at understanding, modeling and generating sound and music through computational approaches. History The Sound and Music Computing research field can be traced back to the 1950s, when a few experimental composers, together with some engineers and scientists, independently and in different parts of the world, began exploring the use of the new digital technologies for music applications. Since then the SMC research field has had a fruitful history and different terms have been used to identify it. Computer Music and Music Technology might be the terms that have been used the most, "Sound and Music Computing" being a more recent term. In 1974, the research community established the International Computer Music Association and the International Computer Music Conference. In 1977 the Computer Music Journal was founded. The Center for Computer Research in Music and Acoustics (CCRMA) at Stanford University was created in the early 1970s and the Institute for Research and Coordination Acoustic/Music (IRCAM) in Paris in the late 1970s. The Sound and Music Computing term was first proposed in the mid 1990s and it was included in the ACM Computing Classification System. Using this name, in 2004 the Sound and Music Computing Conference was started and also in 2004 a roadmapping initiative was funded by the European Commission that resulted in the SMC Roadmap and in the Sound and Music Computing Summer School. With increasing research specialization within the SMC field, a number of focused conferences have been created. Particularly relevant are the International Conference on Digital Audio Effects, established in 1998, the International Conference on Music Information Retrieval (ISMIR), established in 2000, and the International Conference on New Interfaces for Musical Expression (NIME), established in 2001. Subfields The current SMC research field can be grouped into a number of subfields that focus on specific aspects of the sound and music communication chain. Processing of sound and music signals: This subfield focuses on audio signal processing techniques for the analysis, transformation and resynthesis of sound and music signals. Understanding and modeling sound and music: This subfield focuses on understanding and modeling sound and music using computational approaches. Here we can include Computational musicology, Music information retrieval, and the more computational approaches of Music cognition. Interfaces for sound and music: This subfield focuses on the design and implementation of computer interfaces for sound and music. This is basically related to Human Computer Interaction. Assisted sound and music creation: This subfield focuses on the development of computer tools for assisting Sound design and Music composition. Here we can include traditional fields like Algorithmic composition. Areas of application SMC research is a field driven by applications. Examples of applications are: Digital music instruments: This application focuses on musical sound generation and processing devices. It encompasses simulation of traditional instruments, transformation of sound in recording studios or at live performances and musical interfaces for augmented or collaborative instruments. 
Music production: This application domain focuses on technologies and tools for music composition. Applications range from music modeling and generation to tools for music post–production and audio editing. Music information retrieval: This application domain focuses on retrieval technologies for music (both audio and symbolic data). Applications range from music audio–identification and broadcast monitoring to higher–level semantic descriptions and all associated tools for search and retrieval. Digital music libraries: This application places particular emphasis on preservation, conservation and archiving and the integration of musical audio content and meta–data descriptions, with a focus on flexible access. Applications range from large distributed libraries to mobile access platforms. Interactive multimedia systems: These are for use in everyday appliances and in artistic and entertainment applications. They aim to facilitate music–related human–machine interaction involving various modalities of action and perception (e.g. auditory, visual, olfactory, tactile, haptic, and all kinds of body movements) which can be captured through the use of audio/visual, kinematic and bioparametric (skin conduction, temperature) devices. Auditory interfaces: These include all applications where non–verbal sound is employed in the communication channel between the user and the computing device. Auditory displays are used in applications and objects that require monitoring of some type of information. Sonification is used as a method for data display in a wide range of application domains where auditory inspection, analysis and summarisation can be more efficient than traditional visual display. Sonic interaction design emphasizes the role of sound in interactive contexts. Augmented action and perception: This refers to tools that increase the normal action and perception capabilities of humans. The system adds virtual information to a user's sensory perception by merging real images, sounds, and haptic sensation with virtual ones. This has the effect of augmenting the user's sense of presence, and of making possible a symbiosis between her view of the world and the computer interface. Possible applications are in the medical domain, manufacturing and repair, entertainment, annotation and visualization, and robot tele-operation. 
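As a concrete illustration of the parameter-mapping approach behind sonification, the following C sketch turns a small data series into a sequence of sine tones and writes the result as an uncompressed WAV file. It is a minimal example only; the data values and the value-to-pitch mapping are invented for the illustration and do not come from any particular SMC system.

```c
#include <math.h>
#include <stdint.h>
#include <stdio.h>

/* Parameter-mapping sonification: each data point becomes a short sine
 * tone whose frequency rises with the data value. Output is a mono
 * 16-bit PCM WAV file ("sonification.wav"). */
#define RATE 44100
#define TONE_SECONDS 0.25

static void write_le(FILE *f, uint32_t v, int bytes)
{
    for (int i = 0; i < bytes; i++)
        fputc((v >> (8 * i)) & 0xFF, f);
}

int main(void)
{
    const double data[] = { 3.0, 5.5, 2.0, 8.0, 6.5, 9.0, 4.0 }; /* example data */
    const int n = (int)(sizeof data / sizeof data[0]);
    const uint32_t samples = (uint32_t)(n * TONE_SECONDS * RATE);
    const double tau = 6.283185307179586;

    FILE *f = fopen("sonification.wav", "wb");
    if (!f) return 1;

    /* RIFF/WAVE header for mono 16-bit PCM. */
    fwrite("RIFF", 1, 4, f); write_le(f, 36 + samples * 2, 4);
    fwrite("WAVEfmt ", 1, 8, f); write_le(f, 16, 4);
    write_le(f, 1, 2);   write_le(f, 1, 2);           /* PCM, 1 channel    */
    write_le(f, RATE, 4); write_le(f, RATE * 2, 4);   /* sample, byte rate */
    write_le(f, 2, 2);   write_le(f, 16, 2);          /* block align, bits */
    fwrite("data", 1, 4, f); write_le(f, samples * 2, 4);

    for (uint32_t s = 0; s < samples; s++) {
        int idx = (int)(s / (TONE_SECONDS * RATE));
        double freq = 220.0 + 80.0 * data[idx];       /* value -> pitch */
        double x = 0.3 * sin(tau * freq * s / RATE);
        write_le(f, (uint16_t)(int16_t)(x * 32767), 2);
    }
    fclose(f);
    return 0;
}
```

Auditory-display research explores far richer mappings (timbre, spatial position, rhythm), but even this simple value-to-pitch scheme lets a listener hear trends and outliers in the data.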
See also List of music software External links Research centers Centre for Interdisciplinary Research in Music Media and Technology (CIRMMT) Montreal, Canada Institute de Recherche et Coordination Acoustique/Musique (IRCAM) Paris, France GRAME - National Center for Music Creation, Lyon, France Sound & Music Computing, Aalborg University Copenhagen, Denmark Audio Analysis Lab, Aalborg University, Denmark Music Technology Group, Universitat Pompeu Fabra, Barcelona, Spain Centre for Digital Music (C4DM), Queen Mary, University of London, London, UK Center for Computer Research in Music and Acoustics (CCRMA) Stanford University, USA The Music Computing Lab, The Open University, Milton Keynes, UK Centro di Sonologia Computazionale (CSC), University of Padova, Padova, IT Laboratorio di Informatica Musicale (LIM), Università degli Studi di Milano, Milano, IT Institute for Electronic Music and Acoustics (IEM), University for Music and Dramatic Arts Graz, Austria Center For New Music and Audio Technologies (CNMAT), UC Berkeley, USA Sound and Music Computing, CSC School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden Music Informatics Research Group, School of Informatics, City University London, London, UK Interdisciplinary Centre for Computer Music Research, Faculty of Arts, University of Plymouth, Plymouth, UK Sound & Music Computing Lab, School of Computing, National University of Singapore, Singapore Mexican Center for Music and Sonic Arts, Morelia, Mexico Associations International Society for Music Information Retrieval (ISMIR) International Computer Music Association (ICMA) Journals Computer Music Journal Journal of New Music Research Organized Sound Conferences Sound and Music Computing Conference (SMC) International Conference on Music Information Retrieval (ISMIR) International Conference on New Interfaces for Musical Expression (NIME) International Conference on Digital Audio Effects (DAFX) International Computer Music Conference (ICMC) Open software tools List of software tools related to SMC Undergraduate Programmes Computing, Audio and Music Technology BSc (Hons), University of Plymouth, UK MSc Programmes MSc in Sound & Music Computing, Queen Mary, University of London, UK MSc in Sound & Music computing, Universitat Pompeu Fabra, Barcelona, Spain MSc in Sound & Music Computing, Aalborg University, Denmark References Information science Music technology
418775
https://en.wikipedia.org/wiki/Serial%20digital%20interface
Serial digital interface
Serial digital interface (SDI) is a family of digital video interfaces first standardized by SMPTE (The Society of Motion Picture and Television Engineers) in 1989. For example, ITU-R BT.656 and SMPTE 259M define digital video interfaces used for broadcast-grade video. A related standard, known as high-definition serial digital interface (HD-SDI), is standardized in SMPTE 292M; this provides a nominal data rate of 1.485 Gbit/s. Additional SDI standards have been introduced to support increasing video resolutions (HD, UHD and beyond), frame rates, stereoscopic (3D) video, and color depth. Dual link HD-SDI consists of a pair of SMPTE 292M links, standardized by SMPTE 372M in 1998; this provides a nominal 2.970 Gbit/s interface used in applications (such as digital cinema or HDTV 1080P) that require greater fidelity and resolution than standard HDTV can provide. 3G-SDI (standardized in SMPTE 424M) consists of a single 2.970 Gbit/s serial link that can replace dual link HD-SDI. 6G-SDI and 12G-SDI standards were published on March 19, 2015. These standards are used for transmission of uncompressed, unencrypted digital video signals (optionally including embedded audio and time code) within television facilities; they can also be used for packetized data. SDI is used to connect different pieces of equipment such as recorders, monitors, PCs and vision mixers. Coaxial variants of the specification are limited in usable cable length, while fiber optic variants such as SMPTE 297M allow for long-distance transmission limited only by maximum fiber length or repeaters. SDI and HD-SDI are usually available only in professional video equipment because various licensing agreements restrict the use of unencrypted digital interfaces, such as SDI, prohibiting their use in consumer equipment. Several professional video and HD-video capable DSLR cameras and all uncompressed video capable consumer cameras use the HDMI interface, often called clean HDMI. There are various mod kits for existing DVD players and other devices, which allow a user to add a serial digital interface to these devices. Electrical interface The various serial digital interface standards all use (one or more) coaxial cables with BNC connectors, with a nominal impedance of 75 ohms. This is the same type of cable used in analog video setups, which potentially makes for easier upgrades (though higher quality cables may be necessary for long runs at the higher bit rates). The specified signal amplitude at the source is 800 mV (±10%) peak-to-peak; far lower voltages may be measured at the receiver owing to attenuation. Using equalization at the receiver, it is possible to send 270 Mbit/s SDI over long cable runs without the use of repeaters, but shorter lengths are preferred. The HD bit rates have a shorter maximum run length. Uncompressed digital component signals are transmitted. Data is encoded in NRZI format, and a linear feedback shift register is used to scramble the data to reduce the likelihood that long strings of zeroes or ones will be present on the interface. The interface is self-synchronizing and self-clocking. Framing is done by detection of a special synchronization pattern, which appears on the (unscrambled) serial digital signal to be a sequence of ten ones followed by twenty zeroes (twenty ones followed by forty zeroes in HD); this bit pattern is not legal anywhere else within the data payload. 
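A minimal sketch of the scrambling stage is shown below. The generator polynomials commonly cited for the SDI scrambler are x^9 + x^4 + 1, followed by x + 1 for the NRZI conversion; the sketch processes one bit at a time and is for illustration only, so consult SMPTE 259M/292M for the normative definition.

```c
#include <stdint.h>
#include <stdio.h>

/* Self-synchronizing scrambler as commonly described for SDI:
 * stage 1 divides the bit stream by x^9 + x^4 + 1, stage 2 converts
 * the result to NRZI by dividing by x + 1. Bits are processed LSB
 * first within each 10-bit word. Illustrative sketch only. */
static uint16_t shift9;  /* state of the x^9 + x^4 + 1 stage  */
static uint8_t  nrzi;    /* previous output of the x + 1 stage */

static int scramble_bit(int in)
{
    int s = in ^ ((shift9 >> 8) & 1) ^ ((shift9 >> 3) & 1);
    shift9 = (uint16_t)((shift9 << 1) | s) & 0x1FF;
    nrzi ^= (uint8_t)s;     /* NRZI: toggle output on a scrambled 1 */
    return nrzi;
}

int main(void)
{
    /* Scramble one 10-bit word (e.g. the blanking-level value 0x200). */
    uint16_t word = 0x200, out = 0;
    for (int bit = 0; bit < 10; bit++)
        out |= (uint16_t)(scramble_bit((word >> bit) & 1) << bit);
    printf("scrambled 0x%03X -> 0x%03X\n", (unsigned)word, (unsigned)out);
    return 0;
}
```

Because the scrambler is self-synchronizing, the receiver can run the inverse operations without any shared state, and a single bit error on the cable corrupts only a bounded number of descrambled bits.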
Standards Bit rates Several bit rates are used in serial digital video signals: For standard definition applications, as defined by SMPTE 259M, the possible bit rates are 270 Mbit/s, 360 Mbit/s, 143 Mbit/s, and 177 Mbit/s. 270 Mbit/s is by far the most commonly used; though the 360 Mbit/s interface (used for widescreen standard definition) is sometimes encountered. The 143 and 177 Mbit/s interfaces were intended for transmission of composite-encoded (NTSC or PAL) video digitally, and are now considered obsolete. For enhanced definition applications (mainly 525P), there are several 540 Mbit/s interfaces defined, as well as an interface standard for a dual-link 270 Mbit/s interface. These are rarely encountered. For HDTV applications, the serial digital interface is defined by SMPTE 292M. Two bit rates are defined, 1.485 Gbit/s, and 1.485/1.001 Gbit/s. The factor of 1/1.001 is provided to allow SMPTE 292M to support video formats with frame rates of 59.94 Hz, 29.97 Hz, and 23.98 Hz, in order to be compatible with existing NTSC systems. The 1.485 Gbit/s version of the standard supports other frame rates in widespread use, including 60 Hz, 50 Hz, 30 Hz, 25 Hz, and 24 Hz. It is common to collectively refer to both standards as using a nominal bit rate of 1.5 Gbit/s. For very high-definition applications, requiring greater resolution, frame rate, or color fidelity than the HD-SDI interface can provide, the SMPTE 372M standard defines the dual link interface. As the name suggests, this interface consists of two SMPTE 292M interconnects operating in parallel. In particular, the dual link interface supports 10-bit, 4:2:2, 1080P formats at frame rates of 60 Hz, 59.94 Hz, and 50 Hz, as well as 12-bit color depth, RGB encoding, and 4:4:4 colour sampling. A nominal 3 Gbit/s interface (more accurately, 2.97 Gbit/s, but commonly referred to as "3 gig") was standardized by SMPTE as 424M in 2006. Revised in 2012 as SMPTE ST 424:2012, it supports all of the features supported by the dual 1.485 Gbit/s interface, but requires only one cable rather than two. Other interfaces SMPTE 297-2006 defines an optical fiber system for transmitting bit-serial digital signals. It is intended for transmitting SMPTE ST 259 signals (143 through 360 Mbit/s), SMPTE ST 344 signals (540 Mbit/s), SMPTE ST 292-1/-2 signals (1.485 Gbit/s and 1.485/1.001 Gbit/s) and SMPTE ST 424 signals (2.970 Gbit/s and 2.970/1.001 Gbit/s). In addition to the optical specification, ST 297 also mandates laser safety testing and requires that all optical interfaces be labelled to indicate safety compliance, application and interoperability. An 8-bit parallel digital interface is defined by ITU-R Rec. 601; this is obsolete (however, many clauses in the various standards accommodate the possibility of an 8-bit interface). Data format In SD and ED applications, the serial data format is defined to be 10 bits wide, whereas in HD applications, it is 20 bits wide, divided into two parallel 10-bit datastreams (known as Y and C). In SD, the single datastream carries the samples multiplexed in the repeating order Cb, Y, Cr, Y′; in HD, one datastream (Y) carries the luma samples and the other (C) carries the interleaved Cb and Cr chroma samples. For all serial digital interfaces (excluding the obsolete composite encodings), the native color encoding is 4:2:2 YCbCr format. The luminance channel (Y) is encoded at full bandwidth (13.5 MHz in 270 Mbit/s SD, ~75 MHz in HD), and the two chrominance channels (Cb and Cr) are subsampled horizontally, and encoded at half bandwidth (6.75 MHz or 37.5 MHz). 
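These sampling rates account directly for the bit rates listed above: every sample is carried as a 10-bit word, so for standard definition

```latex
(13.5 + 6.75 + 6.75)\ \text{Mwords/s} \times 10\ \text{bits/word} = 270\ \text{Mbit/s},
```

and the same arithmetic with the HD rates (74.25 MHz luma plus two 37.125 MHz chroma channels) gives the nominal 1.485 Gbit/s of SMPTE 292M.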
The Y, Cr, and Cb samples are co-sited (acquired at the same instance in time), and the Y' sample is acquired at the time halfway between two adjacent Y samples. In the above, Y refers to luminance samples, and C to chrominance samples. Cr and Cb further refer to the red and blue "color difference" channels; see Component Video for more information. This section only discusses the native color encoding of SDI; other color encodings are possible by treating the interface as a generic 10-bit data channel. The use of other colorimetry encodings, and the conversion to and from RGB colorspace, is discussed below. Video payload (as well as ancillary data payload) may use any 10-bit word in the range 4 to 1,019 (004 to 3FB) inclusive; the values 0–3 and 1,020–1,023 (3FC–3FF) are reserved and may not appear anywhere in the payload. These reserved words have two purposes; they are used both for Synchronization packets and for Ancillary data headers. Synchronization packets A synchronization packet (commonly known as the timing reference signal or TRS) occurs immediately before the first active sample on every line, and immediately after the last active sample (and before the start of the horizontal blanking region). The synchronization packet consists of four 10-bit words, the first three words are always the same—0x3FF, 0, 0; the fourth consists of 3 flag bits, along with an error correcting code. As a result, there are 8 different synchronization packets possible. In the HD-SDI and dual link interfaces, synchronization packets must occur simultaneously in both the Y and C datastreams. (Some delay between the two cables in a dual link interface is permissible; equipment which supports dual link is expected to buffer the leading link in order to allow the other link to catch up). In SD-SDI and enhanced definition interfaces, there is only one datastream, and thus only one synchronization packet at a time. Other than the issue of how many packets appear, their format is the same in all versions of the serial-digital interface. The flags bits found in the fourth word (commonly known as the XYZ word) are known as H, F, and V. The H bit indicates the start of horizontal blank; and synchronization bits immediately preceding the horizontal blanking region must have H set to one. Such packets are commonly referred to as End of Active Video, or EAV packets. Likewise, the packet appearing immediately before the start of the active video has H set to 0; this is the Start of Active Video or SAV packet. Likewise, the V bit is used to indicate the start of the vertical blanking region; an EAV packet with V=1 indicates the following line (lines are deemed to start at EAV) is part of the vertical interval, an EAV packet with V=0 indicates the following line is part of the active picture. The F bit is used in interlaced and segmented-frame formats to indicate whether the line comes from the first or second field (or segment). In progressive scan formats, the F bit is always set to zero. Line counter & CRC In the high definition serial digital interface (and in dual-link HD), additional check words are provided to increase the robustness of the interface. In these formats, the four samples immediately following the EAV packets (but not the SAV packets) contain a cyclic redundancy check field, and a line count indicator. The CRC field provides a CRC of the preceding line (CRCs are computed independently for the Y and C streams), and can be used to detect bit errors in the interface. 
The line count field indicates the line number of the current line. The CRC and line counts are not provided in the SD and ED interfaces. Instead, a special ancillary data packet known as an EDH packet may be optionally used to provide a CRC check on the data. Line and sample numbering Each sample within a given datastream is assigned a unique line and sample number. In all formats, the first sample immediately following the SAV packet is assigned sample number 0; the next sample is sample 1; all the way up to the XYZ word in the following SAV packet. In SD interfaces, where there is only one datastream, the 0th sample is a Cb sample; the 1st sample a Y sample, the 2nd sample a Cr sample, and the third sample is the Y' sample; the pattern repeats from there. In HD interfaces, each datastream has its own sample numbering—so the 0th sample of the Y datastream is the Y sample, the next sample the Y' sample, etc. Likewise, the first sample in the C datastream is Cb, followed by Cr, followed by Cb again. Lines are numbered sequentially, starting from 1, up to the number of lines per frame of the indicated format (typically 525, 625, 750, or 1125 (Sony HDVS)). Determination of line 1 is somewhat arbitrary; however it is unambiguously specified by the relevant standards. In 525-line systems, the first line of vertical blank is line 1, whereas in other interlaced systems (625 and 1125-line), the first line after the F bit transitions to zero is line 1. Note that lines are deemed to start at EAV, whereas sample zero is the sample following SAV. This produces the somewhat confusing result that the first sample in a given line of 1080i video is sample number 1920 (the first EAV sample in that format), and the line ends at the following sample 1919 (the last active sample in that format). Note that this behavior differs somewhat from analog video interfaces, where the line transition is deemed to occur at the sync pulse, which occurs roughly halfway through the horizontal blanking region. Link numbering Link numbering is only an issue in multi-link interfaces. The first link (the primary link), is assigned a link number of 1, subsequent links are assigned increasing link numbers; so the second (secondary) link in a dual-link system is link 2. The link number of a given interface is indicated by a VPID packet located in the vertical ancillary data space. Note that the data layout in dual link is designed so that the primary link can be fed into a single-link interface, and still produce usable (though somewhat degraded) video. The secondary link generally contains things like additional LSBs (in 12-bit formats), non-cosited samples in 4:4:4 sampled video (so that the primary link is still valid 4:2:2), and alpha or data channels. If the second link of a 1080P dual link configuration is absent, the first link still contains a valid 1080i signal. In the case of 1080p60, 59.94, or 50 Hz video over a dual link; each link contains a valid 1080i signal at the same field rate. The first link contains the 1st, 3rd, and 5th lines of odd fields and the 2nd, 4th, 6th, etc. lines of even fields, and the second link contains the even lines on the odd fields, and the odd lines on the even fields. When the two links are combined, the result is a progressive-scan picture at the higher frame rate. Ancillary data Like SMPTE 259M, SMPTE 292M supports the SMPTE 291M standard for ancillary data. 
Ancillary data is provided as a standardized transport for non-video payload within a serial digital signal; it is used for things such as embedded audio, closed captions, timecode, and other sorts of metadata. Ancillary data is indicated by a 3-word packet consisting of 0, 3FF, 3FF (the opposite of the synchronization packet header), followed by a two-word identification code, a data count word (indicating 0–255 words of payload), the actual payload, and a one-word checksum. Other than in their use in the header, the codes prohibited to video payload are also prohibited to ancillary data payload. Specific applications of ancillary data include embedded audio, EDH, VPID and SDTI. In dual link applications; ancillary data is mostly found on the primary link; the secondary link is to be used for ancillary data only if there is no room on the primary link. One exception to this rule is the VPID packet; both links must have a valid VPID packet present. Embedded audio Both the HD and SD serial interfaces provide for 16 channels of embedded audio. The two interfaces use different audio encapsulation methods — SD uses the SMPTE 272M standard, whereas HD uses the SMPTE 299M standard. In either case, an SDI signal may contain up to sixteen audio channels (8 pairs) embedded 48 kHz, 24-bit audio channels along with the video. Typically, 48 kHz, 24-bit (20-bit in SD, but extendable to 24 bit) PCM audio is stored, in a manner directly compatible with the AES3 digital audio interface. These are placed in the (horizontal) blanking periods, when the SDI signal carries nothing useful, since the receiver generates its own blanking signals from the TRS. In dual-link applications, 32 channels of audio are available, as each link may carry 16 channels. SMPTE ST 299-2:2010 extends the 3G SDI interface to be able to transmit 32 audio channels (16 pairs) on a single link. EDH As the standard definition interface carries no checksum, CRC, or other data integrity check, an EDH (Error Detection and Handling) packet may be optionally placed in the vertical interval of the video signal. This packet includes CRC values for both the active picture, and the entire field (excluding those lines at which switching may occur, and which should contain no useful data); equipment can compute their own CRC and compare it with the received CRC in order to detect errors. EDH is typically only used with the standard definition interface; the presence of CRC words in the HD interface make EDH packets unnecessary. VPID VPID (or video payload identifier) packets are increasingly used to describe the video format. In early versions of the serial digital interface, it was always possible to uniquely determine the video format by counting the number of lines and samples between H and V transitions in the TRS. With the introduction of dual link interfaces, and segmented-frame standards, this is no longer possible; thus the VPID standard (defined by SMPTE 352M) provides a way to uniquely and unambiguously identify the format of the video payload. Video payload and blanking The active portion of the video signal is defined to be those samples which follow an SAV packet, and precede the next EAV packet; where the corresponding EAV and SAV packets have the V bit set to zero. It is in the active portion that the actual image information is stored. Color encoding Several color encodings are possible in the serial digital interface. The default (and most common case) is 10-bit linearly sampled video data encoded as 4:2:2 YCbCr. 
(YCbCr is a digital representation of the YPbPr colorspace). Samples of video are stored as described above. Data words correspond to signal levels of the respective video components, as follows: The luma (Y) channel is defined such that a signal level of 0 mV is assigned the codeword 64 (40 hex), and 700 millivolts (full scale) is assigned the codeword 940 (3AC hex) . For the chroma channels, 0 mV is assigned the code word 512 (200 hex), −350 mV is assigned a code word of 64 (40 hex), and +350 mV is assigned a code word of 960 (3C0 hex). Note that the scaling of the luma and chroma channels is not identical. The minimum and maximum of these ranges represent the preferred signal limits, though the video payload may venture outside these ranges (providing that the reserved code words of 0–3 and 1020–1023 are never used for video payload). In addition, the corresponding analog signal may have excursions further outside of this range. Colorimetry As YPbPr (and YCbCr) are both derived from the RGB colorspace, a means of converting is required. There are three colorimetries typically used with digital video: SD and ED applications typically use a colorimetry matrix specified in ITU-R Rec. 601. Most HD, dual link, and 3 Gbit/s applications use a different matrix, specified in ITU-R Rec. 709. The 1035-line MUSE HD standards specified by SMPTE 260M (primarily used in Japan and now largely considered obsolete), used a colorimetry matrix specified by SMPTE 240M. This colorimetry is nowadays rarely used, as the 1035-line formats have been superseded by 1080-line formats. Other color encodings The dual-link and 3 Gbit/s interfaces additionally support other color encodings besides 4:2:2 YCbCr, namely: 4:2:2 and 4:4:4 YCbCr, with an optional alpha (used for linear keying, a.k.a. alpha compositing) or data (used for non-video payload) channel 4:4:4 RGB, also with an optional alpha or data channel 4:2:2 YCbCr, 4:4:4 YCbCr, and 4:4:4 RGB, with 12 bits of color information per sample, rather than 10. Note that the interface itself is still 10 bit; the additional 2 bits per channel are multiplexed into an additional 10-bit channel on the second link. If an RGB encoding is used, the three primaries are all encoded in the same fashion as the Y channel; a value of 64 (40 hex) corresponds to 0 mV, and 940 (3AC hex) corresponds to 700 mV. 12-bit applications are scaled in a similar fashion to their 10-bit counterparts; the additional two bits are considered to be LSBs. Vertical and horizontal blanking regions For portions of the vertical and horizontal blanking regions which are not used for ancillary data, it is recommended that the luma samples be assigned the code word 64 (40 hex), and the chroma samples be assigned 512 (200 hex); both of which correspond to 0 mV. It is permissible to encode analog vertical interval information (such as vertical interval timecode or vertical interval test signals) without breaking the interface, but such usage is nonstandard (and ancillary data is the preferred means for transmitting metadata). Conversion of analog sync and burst signals into digital, however, is not recommended—and neither is necessary in the digital interface. Different picture formats have different requirements for digital blanking, for example all so called 1080 line HD formats have 1080 active lines, but 1125 total lines, with the remainder being vertical blanking. Supported video formats The various versions of the serial digital interface support numerous video formats. 
The 270 Mbit/s interface supports 525-line, interlaced video at a 59.94 Hz field rate (29.97 Hz frame rate), and 625-line, 50 Hz interlaced video. These formats are highly compatible with NTSC and PAL-B/G/D/K/I systems respectively, and the terms NTSC and PAL are often (incorrectly) used to refer to these formats. PAL is a composite color encoding scheme, and the term does not define the line standard, though it is most usually encountered with 625i; the serial digital interface, other than the obsolete 143 Mbit/s and 177 Mbit/s forms, is a component standard. The 360 Mbit/s interface supports 525i and 625i widescreen. It can also be used to support 525p, if 4:2:0 sampling is used. The various 540 Mbit/s interfaces support 525p and 625p formats. The nominal 1.49 Gbit/s interfaces support most high-definition video formats. Supported formats include 1080/60i, 1080/59.94i, 1080/50i, 1080/30p, 1080/29.97p, 1080/25p, 1080/24p, 1080/23.98p, 720/60p, 720/59.94p, and 720/50p. In addition, there are several 1035i formats (an obsolete Japanese television standard), half-bandwidth 720p standards such as 720/24p (used in some film conversion applications, and unusual because it has an odd number of samples per line), and various 1080psf (progressive, segmented frame) formats. Progressive segmented-frame formats appear as interlaced video but contain progressively scanned content. This is done to support analog monitors and televisions, many of which are incapable of locking to low field rates such as 30 Hz and 24 Hz. The 2.97 Gbit/s dual link HD interface supports 1080/60p, 1080/59.94p, and 1080/50p, as well as 4:4:4 encoding, greater color depth, RGB encoding, alpha channels, and nonstandard resolutions (often encountered in computer graphics or digital cinema). A quad link interface of HD-SDI supports UHDTV-1 resolution 2160/60p. Related interfaces In addition to the regular serial digital interface described here, there are several other interfaces which are similar to, or are contained within, a serial digital interface. SDTI There is an expanded specification called SDTI (Serial Data Transport Interface), which allows compressed (i.e. DV, MPEG and others) video streams to be transported over an SDI line. This allows for multiple video streams in one cable or faster-than-realtime (2x, 4x,...) video transmission. A related standard, known as HD-SDTI, provides similar capability over an SMPTE 292M interface. The SDTI interface is specified by SMPTE 305M. The HD-SDTI interface is specified by SMPTE 348M. ASI The asynchronous serial interface (ASI) specification describes how to transport an MPEG transport stream (MPEG-TS), containing multiple MPEG video streams, over 75-ohm copper coaxial cable or multi-mode optical fiber. ASI is a popular way to transport broadcast programs from the studio to the final transmission equipment before they reach viewers at home. The ASI standard is part of the Digital Video Broadcasting (DVB) standard. SMPTE 349M The standard SMPTE 349M: Transport of Alternate Source Image Formats through SMPTE 292M, specifies a means to encapsulate non-standard and lower-bitrate video formats within an HD-SDI interface. This standard allows, for example, several independent standard definition video signals to be multiplexed onto an HD-SDI interface, and transmitted down one wire. 
This standard doesn't merely adjust EAV and SAV timing to meet the requirements of the lower-bitrate formats; instead, it provides a means by which an entire SDI format (including synchronization words, ancillary data, and video payload) can be encapsulated and transmitted as ordinary data payload within a 292M stream. High-Definition Multimedia Interface (HDMI) The HDMI interface is a compact audio/video interface for transferring uncompressed video data and compressed/uncompressed digital audio data from an HDMI-compliant device to a compatible computer monitor, video projector, digital television, or digital audio device. It is mainly used in the consumer area, but increasingly used in professional devices including uncompressed video, often called clean HDMI. G.703 The G.703 standard is another high-speed digital interface, originally designed for telephony. HDcctv The HDcctv standard embodies the adaptation of SDI for video surveillance applications, not to be confused with TDI, a similar but different format for video surveillance cameras. CoaXPress The CoaXPress standard is another high-speed digital interface, originally design for industrial camera interfaces. The data rates for CoaXPress go up to 12.5 Gbit/s over a single coaxial cable. A 41 Mbit/s uplink channel and power over coax are also included in the standard. References Sources Standards Society of Motion Picture and Television Engineers: SMPTE 274M-2005: Image Sample Structure, Digital Representation and Digital Timing Reference Sequences for Multiple Picture Rates Society of Motion Picture and Television Engineers: SMPTE 292M-1998: Bit-Serial Digital Interface for High Definition Television Society of Motion Picture and Television Engineers: SMPTE 291M-1998: Ancillary Data Packet and Space Formatting Society of Motion Picture and Television Engineers: SMPTE 372M-2002: Dual Link 292M Interface for 1920 x 1080 Picture Raster External links Standards of SMPTE HDcctv Alliance (Security organization supporting SDI for security surveillance) Film and video technology ITU-R recommendations Serial buses Digital display connectors Television technology Audiovisual connectors High-definition television Broadcast engineering Video signal Television terminology
16454187
https://en.wikipedia.org/wiki/5285%20Krethon
5285 Krethon
5285 Krethon is a Jupiter trojan from the Greek camp, approximately 53 kilometers in diameter. It was discovered on 9 March 1989, by the American astronomer couple Carolyn and Eugene Shoemaker at the Palomar Observatory in California. The dark Jovian asteroid belongs to the 100 largest Jupiter trojans and has a rotation period of 12.0 hours. It was named from Greek mythology, after the warrior Crethon (Krethon), twin brother of Orsilochus. Orbit and classification Krethon is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the leading Greek camp at the Gas Giant's L4 Lagrangian point, 60° ahead of its orbit. It is also a non-family asteroid in the Jovian background population. It orbits the Sun at a distance of 4.9–5.4 AU once every 11 years and 9 months (4,299 days; semi-major axis of 5.17 AU). Its orbit has an eccentricity of 0.05 and an inclination of 25° with respect to the ecliptic. The body's observation arc begins with a precovery taken at Palomar in March 1956, or 33 years prior to its official discovery observation. Physical characteristics Krethon is an assumed carbonaceous C-type asteroid, while most larger Jupiter trojans are D-types. It has a high V–I color index of 1.09. Rotation period In March 2013 and June 2015, two rotational lightcurves of Krethon were obtained from photometric observations by Robert Stephens at the Center for Solar System Studies in Landers, California. Lightcurve analysis gave an identical rotation period of 12.04 hours with a brightness amplitude between 0.33 and 0.46 magnitude. This supersedes a previous, poorly rated period determination of 20.88 hours. Diameter and albedo According to the surveys carried out by the Infrared Astronomical Satellite IRAS, the Japanese Akari satellite, and the NEOWISE mission of NASA's Wide-field Infrared Survey Explorer, Krethon measures between 49.606 and 58.53 kilometers in diameter and its surface has an albedo between 0.062 and 0.079. The Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 53.16 kilometers based on an absolute magnitude of 10.1. Naming This minor planet was named after the Greek warrior Crethon (Krethon), son of Diocles and twin brother of Orsilochus (also see 5284 Orsilocus), who were fighting under Agamemnon and Menelaus in the Trojan War. Both were slain by Aeneas. The official naming citation was published by the Minor Planet Center on 12 July 1995. Notes References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (5001)-(10000) – Minor Planet Center 005285 Discoveries by Carolyn S. Shoemaker Discoveries by Eugene Merle Shoemaker Minor planets named from Greek mythology Named minor planets 19890309
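The Collaborative Asteroid Lightcurve Link figure quoted above follows from the standard relation between an asteroid's diameter D (in kilometers), geometric albedo p_V and absolute magnitude H:

```latex
D = \frac{1329}{\sqrt{p_V}} \times 10^{-H/5}
```

With the assumed albedo p_V = 0.057 and H = 10.1, this gives D ≈ (1329 / 0.239) × 10^(−2.02) ≈ 53 km, in agreement with the 53.16-kilometer value stated above.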
1079134
https://en.wikipedia.org/wiki/Carnivore%20%28software%29
Carnivore (software)
Carnivore, later renamed DCS1000, was a system implemented by the Federal Bureau of Investigation (FBI) that was designed to monitor email and electronic communications. It used a customizable packet sniffer that could monitor all of a target user's Internet traffic. Carnivore was implemented in October 1997. By 2005 it had been replaced with improved commercial software. Development Carnivore grew out of an earlier FBI project called "Omnivore", which itself replaced an older undisclosed (at the time) surveillance tool migrated from the US Navy by FBI Director of Integrity and Compliance, Patrick W. Kelley. In September 1998, the FBI's Data Intercept Technology Unit (DITU) in Quantico, Virginia, launched a project to migrate Omnivore from Sun's Solaris operating system to a Windows NT platform. This was done to facilitate the miniaturization of the system and support a wider range of personal computer (PC) equipment. The migration project was called "Phiple Troenix" and the resulting system was named "Carnivore." Configuration The Carnivore system was a Microsoft Windows-based workstation with packet-sniffing software and a removable Jaz disk drive. This computer must be physically installed at an Internet service provider (ISP) or other location where it can "sniff" traffic on a LAN segment to look for email messages in transit. The technology itself was not highly advanced—it used a standard packet sniffer and straightforward filtering. No monitor or keyboard was present at the ISP. The critical components of the operation were the filtering criteria. Copies of every packet were made, and required filtering at a later time. To accurately match the appropriate subject, an elaborate content model was developed. An independent technical review of Carnivore for the Justice Department was prepared in 2000. Controversy Several groups expressed concern regarding the implementation, usage, and possible abuses of Carnivore. In July 2000, the Electronic Frontier Foundation submitted a statement to the Subcommittee on the Constitution of the Committee on the Judiciary in the United States House of Representatives detailing the dangers of such a system. The Electronic Privacy Information Center also made several releases dealing with it. The FBI countered these concerns with statements highlighting the target-able nature of Carnivore. Assistant FBI Director Donald Kerr was quoted as saying: The Carnivore device works much like commercial "sniffers" and other network diagnostic tools used by ISPs every day, except that it provides the FBI with a unique ability to distinguish between communications which may be lawfully intercepted and those which may not. For example, if a court order provides for the lawful interception of one type of communication (e.g., e-mail), but excludes all other communications (e.g., online shopping) the Carnivore tool can be configured to intercept only those e-mails being transmitted either to or from the named subject. ... [it] is a very specialized network analyzer or "sniffer" which runs as an application program on a normal personal computer under the Microsoft Windows operating system. It works by "sniffing" the proper portions of network packets and copying and storing only those packets which match a finely defined filter set programmed in conformity with the court order. 
This filter set can be extremely complex, and this provides the FBI with an ability to collect transmissions which comply with pen register court orders, trap & trace court orders, Title III interception orders, etc.... ...It is important to distinguish now what is meant by "sniffing." The problem of discriminating between users' messages on the Internet is a complex one. However, this is exactly what Carnivore does. It does NOT search through the contents of every message and collect those that contain certain key words like "bomb" or "drugs." It selects messages based on criteria expressly set out in the court order, for example, messages transmitted to or from a particular account or to or from a particular user. After prolonged negative coverage in the press, the FBI changed the name of its system from "Carnivore" to the more benign-sounding "DCS1000." DCS is reported to stand for "Digital Collection System"; the system has the same functions as before. Successor The Associated Press reported in mid-January 2005 that the FBI essentially abandoned the use of Carnivore in 2001, in favor of commercially available software, such as NarusInsight, a mass surveillance system. A report in 2007 described the successor system as being located "inside an Internet provider's network at the junction point of a router or network switch" and capable of indiscriminately storing data flowing through the provider's network. See also Other FBI cyber-assets: COINTELPRO: a series of covert and illegal FBI projects aimed at surveilling, infiltrating, discrediting, and disrupting American political organizations DWS-EDMS: an electronic FBI database; its full capabilities are classified but at a minimum, provides a searchable archive of intercepted electronic communications, including email sent over the Internet DITU: an FBI unit responsible for intercepting telephone calls and e-mail messages DCSNet: FBI's point-and-click surveillance system Magic Lantern: FBI's keylogger Similar projects: ECHELON: NSA's worldwide digital interception program Room 641A: NSA's interception program, started circa 2003, but first reported in 2006 Total Information Awareness: a mass detection program by the United States Defense Advanced Research Projects Agency (DARPA) Related: Communications Assistance For Law Enforcement Act: wiretapping laws passed in 1994 Harris Corporation: an American technology company, defense contractor, and information technology services provider Policeware: software designed to police citizens by monitoring discussion and interaction of its citizens References External links EPIC collection of documents on Carnivore Carnivore Software Official Website Law enforcement in the United States Mass surveillance Network analyzers Federal Bureau of Investigation
21523956
https://en.wikipedia.org/wiki/Readahead
Readahead
Readahead is a system call of the Linux kernel that loads a file's contents into the page cache. This prefetches the file so that when it is subsequently accessed, its contents are read from main memory (RAM) rather than from a hard disk drive (HDD), resulting in much lower file access latencies. Many Linux distributions use readahead on a list of commonly used files to speed up booting. In such a setup, if the kernel is booted with the appropriate boot parameter, it will record all file accesses during bootup and write a new list of files to be read during later boot sequences. This makes additionally installed services start faster, because they would otherwise not be included in the default readahead list. In Linux distributions that use systemd, the readahead binary (as part of the boot sequence) was replaced by systemd-readahead. However, support for readahead was removed from systemd in its version 217, being described as unmaintained and unable to provide the expected performance benefits. Certain experimental page-level prefetching systems have been developed to further improve performance. Among filesystems, Bcache supports readahead of files and metadata. ZFS supports readahead of files and metadata when using ARC. References Preloading and prebinding Linux file system-related software
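The readahead system call is Linux-specific and is not wrapped directly by Python's standard library, but a similar effect can be requested through posix_fadvise with the POSIX_FADV_WILLNEED hint, which asks the kernel to begin asynchronous readahead of a file range. The sketch below is illustrative only; the file names stand in for a hypothetical boot-time prefetch list.

```python
import os

def prefetch(path: str) -> None:
    """Ask the kernel to pull a file's contents into the page cache ahead of use."""
    try:
        fd = os.open(path, os.O_RDONLY)
    except FileNotFoundError:
        return  # entries from an old prefetch list may no longer exist
    try:
        size = os.fstat(fd).st_size
        # POSIX_FADV_WILLNEED initiates readahead of the given range (Linux).
        os.posix_fadvise(fd, 0, size, os.POSIX_FADV_WILLNEED)
    finally:
        os.close(fd)

if __name__ == "__main__":
    # Hypothetical file list; real distributions generate such lists during boot profiling.
    for name in ["/etc/ld.so.cache", "/usr/lib/locale/locale-archive"]:
        prefetch(name)
```

On Linux, an equivalent C program could call readahead(2) or posix_fadvise(2) directly.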
23804650
https://en.wikipedia.org/wiki/FireDaemon
FireDaemon
FireDaemon Pro is an operating system service management application. FireDaemon Pro allows you to install and run most standard Windows applications as a service. These include regular standard Windows executables as well as applications written in scripting or pcode languages such as Perl, Java, Python and Ruby. FireDaemon is popular amongst the online gaming community for running dedicated servers such as Minecraft, Rust, and America's Army. It is possible to add services to Windows without FireDaemon Pro or use other free tools found in the Windows Resource Kits. However, setting up services manually can be complicated and error-prone as the Windows registry needs to be edited directly. Windows services by default will generally be restarted after a minimum of 1 minute has passed. However, FireDaemon Pro proactively monitors the application and ensures an immediate restart. This can be critical when using server-based applications such as web servers, FTP servers, etc. Installation Procedure FireDaemon Pro is available as a .exe package. The installation package doesn't contain any malware or spyware and deploys not only the product but also the Microsoft Visual C++ Runtime. Licensing FireDaemon Pro is currently released as licensed software. In the past, there was a free "Lite" version that was limited to a single service, but it has since been officially discontinued. It can be sourced by searching for "FireDaemon Lite". Limitations FireDaemon Pro is sometimes unable to close all error popup windows for applications such as Source Dedicated Server. This is because FireDaemon only intercepts popups of type WS_POPUP and the window class is #32770. Workarounds include leaving the computer logged in rather than logged out or writing custom GUI automation scripts with tools such as AutoIT to automatically close popups. Windows Error Reporting can also interfere with the correct function FireDaemon Pro and should generally be disabled. Security Issues FireDaemon Lite was used in a variety of trojans and worms from around 1999 to 2004 that exploited various security holes in Windows (e.g. Symantec: W32. Tkbot. Worm, Backdoor. Vmz, NAI: Generic Dropper.h, BAT/Mumu.worm.c). Typically, very old or cracked versions of FireDaemon would have been included in the trojan/worm payload and was used to run tools that facilitated the establishment of botnets (e.g. IRC, FTP servers). The use of FireDaemon in botnets is well documented across several security forums as well as written about in several books on botnets and internet security (e.g. Hacking Exposed). Probably the best known botnet that included FireDaemon is XDCC. On rare occasions, anti-malware products may misidentify recent legitimate versions of FireDaemon Pro as being a potential threat by performing "trojan like behaviour". Advances in anti-malware software since 2005 has resulted in malware authors ceasing to use FireDaemon Pro. As such, there have been no reports of FireDaemon Pro being used in malware since 2005. See also Operating system service management References External links Trojan/Worm Cleanup Notes Windows-only software
22905049
https://en.wikipedia.org/wiki/Biopac%20student%20lab
Biopac student lab
The Biopac Student Lab is a proprietary teaching device and method introduced in 1995 as a digital replacement for aging chart recorders and oscilloscopes that were widely used in undergraduate teaching laboratories prior to that time. It is manufactured by BIOPAC Systems, Inc., of Goleta, California. The advent of low cost personal computers meant that older analog technologies could be replaced with powerful and less expensive computerized alternatives. Students in undergraduate teaching labs use the BSL system to record data from their own bodies, animals or tissue preparations. The BSL system integrates hardware, software and curriculum materials including over sixty experiments that students use to study the cardiovascular system, muscles, pulmonary function, autonomic nervous system, and the brain. History of physiology and electricity One of the more complicated concepts for students to grasp is the fact that electricity is flowing throughout a living body at all times and that it is possible to use the signals to measure the performance and health of individual parts of the body. The Biopac Student Lab System helps to explain the concept and allows students to understand physiology. Physiology and electricity share a common history, with some of the pioneering work in each field being done in the late 18th century by Count Alessandro Giuseppe Antonio Anastasio Volta and Luigi Galvani. Count Volta invented the battery and had a unit of electrical measurement named in his honor (the Volt). These early researchers studied "animal electricity" and were among the first to realize that applying an electrical signal to an isolated animal muscle caused it to twitch. The Biopac Student Lab uses procedures similar to Count Volta’s to demonstrate how muscles can be electrically stimulated. Concept The BSL system includes data acquisition hardware with built-in universal amplifiers to record and condition electrical signals from the heart, muscle, nerve, brain, eye, respiratory system, and tissue preparations. The data acquisition system receives the signals from electrodes and transducers. The electrical signals are extremely small—with amplitudes sometimes in the microVolt (1/1,000,000 of a volt) range—so the hardware amplifies these signals, filters out unwanted electrical noise or interfering signals, and converts them to a set of numbers that the computer can read. Biopac Student Lab software then displays the numbers as waveforms on the monitor. The data acquisition system connects to a PC running Windows or Macintosh operating systems, via USB. The electrodes and transducers employ sensors that allow the software to communicate with the students to ensure that they are using the correct devices and collecting good data. Software guides students by using onscreen instructions and a detailed lab manual follows the scientific method. Once students have collected data, they use analysis tools to measure the amplitude and frequency, plus a wide range of other values from the electrical signals. The analysis process allows students to make general comparisons with the data. They can compare their results to published normal values, or the values before and after a subject performed a specified task. They can also compare results with other students in the lab. The software is available in English, French, Spanish, Italian, Japanese and Chinese. 
The Biopac Student Lab System is widely used by undergraduate labs to teach physiology, pharmacology, biology, neuroscience, psychology, psychophysiology, exercise physiology, and biomedical engineering. Publishers have adopted the curriculum materials and included them in commercially available lab manuals. Lab manuals that include the Biopac Student Lab Human Anatomy & Physiology Laboratory Manual, Main Version, Update, 8/E Elaine N. Marieb, Holyoke Community College Susan J. Mitchell, Onondaga Community College Publisher: Pearson Benjamin Cummings Laboratory Manual for Anatomy & Physiology, Third Edition Author: Michael G. Wood, M.S., Del Mar College, Corpus Christi, Texas Publisher: PEARSON Benjamin Cummings Laboratory Guide to Human Physiology Version: 12 Author: Stuart I. Fox, Los Angeles Pierce College Publisher: WCB/McGraw-Hill Laboratory Investigations in Anatomy and Physiology Version: Main Stephen N. Sarikas, Lasell College (Newton, MA) Publisher: PEARSON Higher Education Manuel de travaux pratiques physiologie humaine Version: 6 Michel Dauzat, Collectif Broché Publisher: SAURAMPS Psychophysiology/Cognitive Neuroscience Editors: Chad Stephens, Ben Allen, Naeem Thompson Publisher: Kendall Hunt Publishing References Laboratory equipment Data analysis software Data collection Cross-platform software Physiological instruments
254812
https://en.wikipedia.org/wiki/Rudy%20Rucker
Rudy Rucker
Rudolf von Bitter Rucker (; born March 22, 1946) is an American mathematician, computer scientist, science fiction author, and one of the founders of the cyberpunk literary movement. The author of both fiction and non-fiction, he is best known for the novels in the Ware Tetralogy, the first two of which (Software and Wetware) both won Philip K. Dick Awards. Until its closure in 2014 he edited the science fiction webzine Flurb. Early life Rucker was born and raised in Louisville, Kentucky, son of Embry Cobb Rucker Sr (October 1, 1914 - August 1, 1994), who ran a small furniture-manufacture company and later became an Episcopal priest and community activist, and Marianne (née von Bitter). The Rucker family were of Huguenot descent. Through his mother, he is a great-great-great-grandson of Georg Wilhelm Friedrich Hegel. Rucker attended St. Xavier High School before earning a B.A. in mathematics from Swarthmore College (1967) and M.S. (1969) and Ph.D. (1973) degrees in mathematics from Rutgers University. Career Rucker taught mathematics at the State University of New York at Geneseo from 1972 to 1978. Although he was liked by his students and "published a book [Geometry, Relativity and the Fourth Dimension] and several papers," several colleagues took umbrage at his long hair and convivial relationships with English and philosophy professors amid looming budget shortfalls; as a result, he failed to attain tenure in the "dysfunctional" department. Thanks to a grant from the Alexander von Humboldt Foundation, Rucker taught at the Ruprecht Karl University of Heidelberg from 1978 to 1980. He then taught at Randolph-Macon Women's College in Lynchburg, Virginia from 1980 to 1982, before trying his hand as a full-time author for four years. Inspired by an interview with Stephen Wolfram, Rucker became a computer science professor at San José State University in 1986, from which he retired as professor emeritus in 2004. From 1988 to 1992 he was hired as a programmer of cellular automata by John Walker of Autodesk which inspired his book The Hacker and the Ants. A mathematician with philosophical interests, he has written The Fourth Dimension and Infinity and the Mind. Princeton University Press published new editions of Infinity and the Mind in 1995 and in 2005, both with new prefaces; the first edition is cited with fair frequency in academic literature. As his "own alternative to cyberpunk," Rucker developed a writing style he terms transrealism. Transrealism, as outlined in his 1983 essay "The Transrealist Manifesto", is science fiction based on the author's own life and immediate perceptions, mixed with fantastic elements that symbolize psychological change. Many of Rucker's novels and short stories apply these ideas. One example of Rucker's transreal works is Saucer Wisdom, a novel in which the main character is abducted by aliens. Rucker and his publisher marketed the book, tongue in cheek, as non-fiction. His earliest transreal novel, White Light, was written during his time at Heidelberg. This transreal novel is based on his experiences at SUNY Geneseo. Rucker often uses his novels to explore scientific or mathematical ideas; White Light examines the concept of infinity, while the Ware Tetralogy (written from 1982 through 2000) is in part an explanation of the use of natural selection to develop software (a subject also developed in his The Hacker and the Ants, written in 1994). 
His novels also put forward a mystical philosophy that Rucker has summarized in an essay titled, with only a bit of irony, "The Central Teachings of Mysticism" (included in Seek!, 1999). His non-fiction book, The Lifebox, the Seashell, and the Soul: What Gnarly Computation Taught Me About Ultimate Reality, the Meaning Of Life, and How To Be Happy summarizes the various philosophies he's believed over the years and ends with the tentative conclusion that we might profitably view the world as made of computations, with the final remark, "perhaps this universe is perfect." Personal life Rucker was the roommate of Kenneth Turan during his freshman year at Swarthmore College. In 1967, Rucker married Sylvia Rucker. Together they have three children. On July 1, 2008, Rucker suffered a cerebral hemorrhage. Thinking he may not be around much longer, this prompted him to write Nested Scrolls, his autobiography. Rucker resided in Highland Park, New Jersey during his graduate studies at Rutgers University. Bibliography Novels The Ware Tetralogy Software (1982) Wetware (1988) Freeware (1997) Realware (2000) Transreal Trilogy The Secret of Life (1985) White Light (1980) Saucer Wisdom (1999) novel marketed as non-fiction Transreal novels Spacetime Donuts (1981) The Sex Sphere (1983) Master of Space and Time (1984) The Hollow Earth (1990) The Hacker and the Ants (1994) (Revised 'Version 2.0' 2003) Spaceland (2002) Frek and the Elixir (2004) Mathematicians in Love (2006) Jim and the Flims (2011) The Big Aha (2013) All the Visions (1991), memoir/novel Other novels As Above, So Below: A Novel of Peter Bruegel (2002) Postsingular (2007) Hylozoic (sequel to Postsingular, May 2009) Turing and Burroughs (2012) Return to the Hollow Earth (2018) Million Mile Road Trip (2019) Juicy Ghosts (2021) Short fiction Collections The Fifty-Seventh Franz Kafka (1983) Transreal!, includes poetry and non-fiction essays (1991) Gnarl! (2000), complete short stories Mad Professor (2006) Surfing the Gnarl (2012), includes an essay and interview with the author Complete Stories (2012) Transreal Cyberpunk, with Bruce Sterling (2016) Stories (by date of composition) Non-fiction Geometry, Relativity and the Fourth Dimension (1977) Infinity and the Mind (1982) The Fourth Dimension: Toward a Geometry of Higher Reality (1984) Mind Tools (1987) Seek! (1999), collected essays Software Engineering and Computer Games (2002), textbook The Lifebox, the Seashell, and the Soul: What Gnarly Computation Taught Me about Ultimate Reality, the Meaning of Life, and how to be Happy (Thunder's Mouth Press, 2005) Nested Scrolls - autobiography (2011) Collected Essays (2012) How To Make An Ebook (2012) Better Worlds (2013), art book of Rucker’s paintings Journals 1990-2014 (2015) As editor Speculations on the Fourth Dimension: Selected Writings of Charles H. Hinton, Dover (1980), Mathenauts: Tales of Mathematical Wonder, Arbor House (1987) Semiotext(e) SF, Autonomedia (1989) Critical studies and reviews of Rucker's work Review of Turing & Burroughs. 
Filmography As actor-speaker in Manual of Evasion LX94, a 1994 film by Edgar Pêra Notes References External links Rudy Rucker Portal Rudy Rucker Books 1946 births Living people 20th-century American novelists 21st-century American novelists American male novelists American science fiction writers American technology writers Writers from California Cyberpunk writers Writers from Louisville, Kentucky San Jose State University faculty Swarthmore College alumni Cellular automatists Wired (magazine) people Rutgers University alumni State University of New York at Geneseo faculty American people of German descent 20th-century American male writers 21st-century American male writers Novelists from New York (state) Novelists from Kentucky People from Highland Park, New Jersey People from Los Gatos, California 20th-century American non-fiction writers 21st-century American non-fiction writers American male non-fiction writers
11443527
https://en.wikipedia.org/wiki/N-version%20programming
N-version programming
N-version programming (NVP), also known as multiversion programming or multiple-version dissimilar software, is a method or process in software engineering where multiple functionally equivalent programs are independently generated from the same initial specifications. The concept of N-version programming was introduced in 1977 by Liming Chen and Algirdas Avizienis with the central conjecture that the "independence of programming efforts will greatly reduce the probability of identical software faults occurring in two or more versions of the program". The aim of NVP is to improve the reliability of software operation by building in fault tolerance or redundancy. NVP approach The general steps of N-version programming are: An initial specification of the intended functionality of the software is developed. The specification should unambiguously define: functions, data formats (which include comparison vectors, c-vectors, and comparison status indicators, cs-indicators), cross-check points (cc-points), comparison algorithm, and responses to the comparison algorithm. From the specifications, two or more versions of the program are independently developed, each by a group that does not interact with the others. The implementations of these functionally equivalent programs use different algorithms and programming languages. At various points of the program, special mechanisms are built into the software which allow the program to be governed by the N-version execution environment (NVX). These special mechanisms include: comparison vectors (c-vectors, a data structure representing the program's state), comparison status indicators (cs-indicators), and synchronization mechanisms. The resulting programs are called N-version software (NVS). Some N-version execution environment (NVX) is developed which runs the N-version software and makes final decisions of the N-version programs as a whole given the output of each individual N-version program. The implementation of the decision algorithms can vary ranging from simple as accepting the most frequently occurring output (for instance, if a majority of versions agree on some output, then it is likely to be correct) to some more complex algorithm. Criticisms Researchers have argued that different programming teams can make similar mistakes. In 1986, Knight & Leveson conducted an experiment to evaluate the assumption of independence in NVP, they found that the assumption of independence of failures in N-version programs failed statistically. The weakness of an NVP program lies in the decision algorithm. The question of correctness of an NVP program depends partially on the algorithm the NVX uses to determine what output is "correct" given the multitude of outputs by each individual N-version program. In theory, output from multiple independent versions is more likely to be correct than output from a single version. However, there is debate whether or not the improvements of N-version development is enough to warrant the time, additional requirements, and costs of using the NVP method. “There has been considerable debate as to realizing the full potential from n-version programming as it makes the assumption that the independence will lead to statistically independent mistakes. 
Evidence has shown that this premise may be faulty [12].” Applications N-version programming has been applied to software in switching trains, performing flight control computations on modern airliners, electronic voting (the SAVE System), and the detection of zero-day exploits, among other uses. See also Redundancy (engineering) Triple modular redundancy Data redundancy Fault tolerant design Reliability engineering Safety engineering References External links N-version programming in the RKBExplorer Software quality Fault-tolerant computer systems
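As an illustration of the decision step only, and not of any particular deployed system, the sketch below runs three hypothetical "independently written" versions of the same trivial specification and accepts the majority output, which is the simplest of the decision algorithms mentioned above.

```python
from collections import Counter

def nvx_decision(outputs):
    """Simple N-version executive: accept the most frequent output if it has a majority."""
    tally = Counter(outputs)
    value, votes = tally.most_common(1)[0]
    if votes > len(outputs) / 2:
        return value
    raise RuntimeError("no majority agreement among versions")

# Three hypothetical, independently written versions of the same specification.
def version_a(x): return x * x
def version_b(x): return x ** 2
def version_c(x): return sum(x for _ in range(x))  # deliberately naive; wrong for x < 0

print(nvx_decision([v(7) for v in (version_a, version_b, version_c)]))  # prints 49
```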
33120
https://en.wikipedia.org/wiki/Web%20crawler
Web crawler
A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing (web spidering). Web search engines and some other websites use Web crawling or spidering software to update their web content or indices of other sites' web content. Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so that users can search more efficiently. Crawlers consume resources on visited systems and often visit sites unprompted. Issues of schedule, load, and "politeness" come into play when large collections of pages are accessed. Mechanisms exist for public sites not wishing to be crawled to make this known to the crawling agent. For example, including a robots.txt file can request bots to index only parts of a website, or nothing at all. The number of Internet pages is extremely large; even the largest crawlers fall short of making a complete index. For this reason, search engines struggled to give relevant search results in the early years of the World Wide Web, before 2000. Today, relevant results are given almost instantly. Crawlers can validate hyperlinks and HTML code. They can also be used for web scraping and data-driven programming. Nomenclature A web crawler is also known as a spider, an ant, an automatic indexer, or (in the FOAF software context) a Web scutter. Overview A Web crawler starts with a list of URLs to visit. Those first URLs are called the seeds. As the crawler visits these URLs, by communicating with web servers that respond to those URLs, it identifies all the hyperlinks in the retrieved web pages and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies. If the crawler is performing archiving of websites (or web archiving), it copies and saves the information as it goes. The archives are usually stored in such a way they can be viewed, read and navigated as if they were on the live web, but are preserved as 'snapshots'. The archive is known as the repository and is designed to store and manage the collection of web pages. The repository only stores HTML pages and these pages are stored as distinct files. A repository is similar to any other system that stores data, like a modern-day database. The only difference is that a repository does not need all the functionality offered by a database system. The repository stores the most recent version of the web page retrieved by the crawler. The large volume implies the crawler can only download a limited number of the Web pages within a given time, so it needs to prioritize its downloads. The high rate of change can imply the pages might have already been updated or even deleted. The number of possible URLs crawled being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. 
This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content. As Edwards et al. noted, "Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained." A crawler must carefully choose at each step which pages to visit next. Crawling policy The behavior of a Web crawler is the outcome of a combination of policies: a selection policy which states the pages to download, a re-visit policy which states when to check for changes to the pages, a politeness policy that states how to avoid overloading Web sites. a parallelization policy that states how to coordinate distributed web crawlers. Selection policy Given the current size of the Web, even large search engines cover only a portion of the publicly available part. A 2009 study showed even large-scale search engines index no more than 40-70% of the indexable Web; a previous study by Steve Lawrence and Lee Giles showed that no search engine indexed more than 16% of the Web in 1999. As a crawler always downloads just a fraction of the Web pages, it is highly desirable for the downloaded fraction to contain the most relevant pages and not just a random sample of the Web. This requires a metric of importance for prioritizing Web pages. The importance of a page is a function of its intrinsic quality, its popularity in terms of links or visits, and even of its URL (the latter is the case of vertical search engines restricted to a single top-level domain, or search engines restricted to a fixed Web site). Designing a good selection policy has an added difficulty: it must work with partial information, as the complete set of Web pages is not known during crawling. Junghoo Cho et al. made the first study on policies for crawling scheduling. Their data set was a 180,000-pages crawl from the stanford.edu domain, in which a crawling simulation was done with different strategies. The ordering metrics tested were breadth-first, backlink count and partial PageRank calculations. One of the conclusions was that if the crawler wants to download pages with high Pagerank early during the crawling process, then the partial Pagerank strategy is the better, followed by breadth-first and backlink-count. However, these results are for just a single domain. Cho also wrote his PhD dissertation at Stanford on web crawling. Najork and Wiener performed an actual crawl on 328 million pages, using breadth-first ordering. They found that a breadth-first crawl captures pages with high Pagerank early in the crawl (but they did not compare this strategy against other strategies). The explanation given by the authors for this result is that "the most important pages have many links to them from numerous hosts, and those links will be found early, regardless of on which host or page the crawl originates." Abiteboul designed a crawling strategy based on an algorithm called OPIC (On-line Page Importance Computation). In OPIC, each page is given an initial sum of "cash" that is distributed equally among the pages it points to. It is similar to a PageRank computation, but it is faster and is only done in one step. An OPIC-driven crawler downloads first the pages in the crawling frontier with higher amounts of "cash". 
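As a rough sketch of the "cash" idea (not the published algorithm's exact bookkeeping), the code below distributes each page's cash equally over its out-links and uses the cash accumulated so far as a download priority; the graph, seed names, and iteration count are made up for illustration.

```python
from collections import defaultdict

def opic_priorities(out_links, seeds, iterations=3):
    """Toy OPIC-style cash distribution.

    out_links: dict mapping a URL to the list of URLs it points to.
    Returns accumulated cash per page, usable as a download priority.
    """
    cash = {u: 1.0 / len(seeds) for u in seeds}   # initial cash spread over the seeds
    history = defaultdict(float)
    for _ in range(iterations):
        next_cash = defaultdict(float)
        for page, amount in cash.items():
            history[page] += amount
            links = out_links.get(page, [])
            if not links:                          # dead end: return the cash to the seeds
                for s in seeds:
                    next_cash[s] += amount / len(seeds)
            else:
                for target in links:               # distribute the page's cash equally
                    next_cash[target] += amount / len(links)
        cash = next_cash
    return history

graph = {"A": ["B", "C"], "B": ["C"], "C": ["A"]}
print(sorted(opic_priorities(graph, ["A"]).items(), key=lambda kv: -kv[1]))
```

The published OPIC evaluation, described next, used a much larger synthetic graph.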
Experiments were carried in a 100,000-pages synthetic graph with a power-law distribution of in-links. However, there was no comparison with other strategies nor experiments in the real Web. Boldi et al. used simulation on subsets of the Web of 40 million pages from the .it domain and 100 million pages from the WebBase crawl, testing breadth-first against depth-first, random ordering and an omniscient strategy. The comparison was based on how well PageRank computed on a partial crawl approximates the true PageRank value. Surprisingly, some visits that accumulate PageRank very quickly (most notably, breadth-first and the omniscient visit) provide very poor progressive approximations. Baeza-Yates et al. used simulation on two subsets of the Web of 3 million pages from the .gr and .cl domain, testing several crawling strategies. They showed that both the OPIC strategy and a strategy that uses the length of the per-site queues are better than breadth-first crawling, and that it is also very effective to use a previous crawl, when it is available, to guide the current one. Daneshpajouh et al. designed a community based algorithm for discovering good seeds. Their method crawls web pages with high PageRank from different communities in less iteration in comparison with crawl starting from random seeds. One can extract good seed from a previously-crawled-Web graph using this new method. Using these seeds, a new crawl can be very effective. Restricting followed links A crawler may only want to seek out HTML pages and avoid all other MIME types. In order to request only HTML resources, a crawler may make an HTTP HEAD request to determine a Web resource's MIME type before requesting the entire resource with a GET request. To avoid making numerous HEAD requests, a crawler may examine the URL and only request a resource if the URL ends with certain characters such as .html, .htm, .asp, .aspx, .php, .jsp, .jspx or a slash. This strategy may cause numerous HTML Web resources to be unintentionally skipped. Some crawlers may also avoid requesting any resources that have a "?" in them (are dynamically produced) in order to avoid spider traps that may cause the crawler to download an infinite number of URLs from a Web site. This strategy is unreliable if the site uses URL rewriting to simplify its URLs. URL normalization Crawlers usually perform some type of URL normalization in order to avoid crawling the same resource more than once. The term URL normalization, also called URL canonicalization, refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed including conversion of URLs to lowercase, removal of "." and ".." segments, and adding trailing slashes to the non-empty path component. Path-ascending crawling Some crawlers intend to download/upload as many resources as possible from a particular web site. So path-ascending crawler was introduced that would ascend to every path in each URL that it intends to crawl. For example, when given a seed URL of http://llama.org/hamster/monkey/page.html, it will attempt to crawl /hamster/monkey/, /hamster/, and /. Cothey found that a path-ascending crawler was very effective in finding isolated resources, or resources for which no inbound link would have been found in regular crawling. Focused crawling The importance of a page for a crawler can also be expressed as a function of the similarity of a page to a given query. 
Web crawlers that attempt to download pages that are similar to each other are called focused crawlers or topical crawlers. The concepts of topical and focused crawling were first introduced by Filippo Menczer and by Soumen Chakrabarti et al. The main problem in focused crawling is that, in the context of a Web crawler, we would like to be able to predict the similarity of the text of a given page to the query before actually downloading the page. A possible predictor is the anchor text of links; this was the approach taken by Pinkerton in the first web crawler of the early days of the Web. Diligenti et al. propose using the complete content of the pages already visited to infer the similarity between the driving query and the pages that have not been visited yet. The performance of a focused crawl depends mostly on the richness of links in the specific topic being searched, and focused crawling usually relies on a general Web search engine for providing starting points. Academic-focused crawler An example of focused crawlers are academic crawlers, which crawl free-access academic-related documents, such as citeseerxbot, the crawler of the CiteSeerX search engine. Other academic search engines include Google Scholar and Microsoft Academic Search. Because most academic papers are published in PDF format, such crawlers are particularly interested in crawling PDF and PostScript files, and Microsoft Word documents, including their zipped formats. Because of this, general open source crawlers, such as Heritrix, must be customized to filter out other MIME types, or a middleware is used to extract these documents and import them into the focused crawl database and repository. Identifying whether these documents are academic or not is challenging and can add a significant overhead to the crawling process, so this is performed as a post-crawling process using machine learning or regular-expression algorithms. These academic documents are usually obtained from the home pages of faculties and students or from the publication pages of research institutes. Because academic documents make up only a small fraction of all web pages, good seed selection is important in boosting the efficiency of these web crawlers. Other academic crawlers may download plain text and HTML files that contain metadata of academic papers, such as titles, papers, and abstracts. This increases the overall number of papers, but a significant fraction may not provide free PDF downloads. Semantic focused crawler Another type of focused crawler is the semantic focused crawler, which makes use of domain ontologies to represent topical maps and link Web pages with relevant ontological concepts for selection and categorization purposes. In addition, ontologies can be automatically updated in the crawling process. Dong et al. introduced such an ontology-learning-based crawler using a support vector machine to update the content of ontological concepts when crawling Web pages. Re-visit policy The Web has a very dynamic nature, and crawling a fraction of the Web can take weeks or months. By the time a Web crawler has finished its crawl, many events could have happened, including creations, updates, and deletions. From the search engine's point of view, there is a cost associated with not detecting an event, and thus having an outdated copy of a resource. The most-used cost functions are freshness and age. Freshness: This is a binary measure that indicates whether the local copy is accurate or not.
The freshness of a page p in the repository at time t is defined as F_p(t) = 1 if p is equal to the local copy at time t, and F_p(t) = 0 otherwise. Age: This is a measure that indicates how outdated the local copy is. The age of a page p in the repository, at time t, is defined as A_p(t) = 0 if p has not been modified since it was downloaded, and A_p(t) = t − (modification time of p) otherwise. Coffman et al. worked with a definition of the objective of a Web crawler that is equivalent to freshness, but use a different wording: they propose that a crawler must minimize the fraction of time pages remain outdated. They also noted that the problem of Web crawling can be modeled as a multiple-queue, single-server polling system, on which the Web crawler is the server and the Web sites are the queues. Page modifications are the arrival of the customers, and switch-over times are the interval between page accesses to a single Web site. Under this model, mean waiting time for a customer in the polling system is equivalent to the average age for the Web crawler. The objective of the crawler is to keep the average freshness of pages in its collection as high as possible, or to keep the average age of pages as low as possible. These objectives are not equivalent: in the first case, the crawler is just concerned with how many pages are outdated, while in the second case, the crawler is concerned with how old the local copies of pages are. Two simple re-visiting policies were studied by Cho and Garcia-Molina: Uniform policy: This involves re-visiting all pages in the collection with the same frequency, regardless of their rates of change. Proportional policy: This involves re-visiting more often the pages that change more frequently. The visiting frequency is directly proportional to the (estimated) change frequency. In both cases, the repeated crawling order of pages can be done either in a random or a fixed order. Cho and Garcia-Molina proved the surprising result that, in terms of average freshness, the uniform policy outperforms the proportional policy in both a simulated Web and a real Web crawl. Intuitively, the reasoning is that, as web crawlers have a limit to how many pages they can crawl in a given time frame, (1) they will allocate too many new crawls to rapidly changing pages at the expense of less frequently updating pages, and (2) the freshness of rapidly changing pages lasts for a shorter period than that of less frequently changing pages. In other words, a proportional policy allocates more resources to crawling frequently updating pages, but experiences less overall freshness time from them. To improve freshness, the crawler should penalize the elements that change too often. The optimal re-visiting policy is neither the uniform policy nor the proportional policy. The optimal method for keeping average freshness high includes ignoring the pages that change too often, and the optimal method for keeping average age low is to use access frequencies that monotonically (and sub-linearly) increase with the rate of change of each page. In both cases, the optimal is closer to the uniform policy than to the proportional policy: as Coffman et al. note, "in order to minimize the expected obsolescence time, the accesses to any particular page should be kept as evenly spaced as possible". Explicit formulas for the re-visit policy are not attainable in general, but they are obtained numerically, as they depend on the distribution of page changes. Cho and Garcia-Molina show that the exponential distribution is a good fit for describing page changes, while Ipeirotis et al. show how to use statistical tools to discover parameters that affect this distribution.
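A minimal sketch of these two cost measures, assuming each record carries hypothetical last_download and last_modified timestamps, is shown below; it also illustrates a uniform-style scheduler that simply refreshes the least recently downloaded pages first.

```python
import time

def freshness(last_download: float, last_modified: float) -> int:
    """1 if the local copy still matches the live page, 0 otherwise (binary measure)."""
    return 1 if last_modified <= last_download else 0

def age(last_download: float, last_modified: float, now: float) -> float:
    """0 if the page has not changed since the last download, else time since the change."""
    return 0.0 if last_modified <= last_download else now - last_modified

def pick_uniform(pages, budget):
    """Uniform-style policy: take the least recently downloaded pages first,
    regardless of how frequently each page is believed to change."""
    return sorted(pages, key=lambda p: p["last_download"])[:budget]

now = time.time()
pages = [
    {"url": "http://example.com/a", "last_download": now - 3600, "last_modified": now - 100},
    {"url": "http://example.com/b", "last_download": now - 600, "last_modified": now - 7200},
]
print([p["url"] for p in pick_uniform(pages, 1)])  # the stalest download: .../a
```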
Note that the re-visiting policies considered here regard all pages as homogeneous in terms of quality ("all pages on the Web are worth the same"), something that is not a realistic scenario, so further information about the Web page quality should be included to achieve a better crawling policy. Politeness policy Crawlers can retrieve data much quicker and in greater depth than human searchers, so they can have a crippling impact on the performance of a site. If a single crawler is performing multiple requests per second and/or downloading large files, a server can have a hard time keeping up with requests from multiple crawlers. As noted by Koster, the use of Web crawlers is useful for a number of tasks, but comes with a price for the general community. The costs of using Web crawlers include: network resources, as crawlers require considerable bandwidth and operate with a high degree of parallelism during a long period of time; server overload, especially if the frequency of accesses to a given server is too high; poorly written crawlers, which can crash servers or routers, or which download pages they cannot handle; and personal crawlers that, if deployed by too many users, can disrupt networks and Web servers. A partial solution to these problems is the robots exclusion protocol, also known as the robots.txt protocol that is a standard for administrators to indicate which parts of their Web servers should not be accessed by crawlers. This standard does not include a suggestion for the interval of visits to the same server, even though this interval is the most effective way of avoiding server overload. Recently commercial search engines like Google, Ask Jeeves, MSN and Yahoo! Search are able to use an extra "Crawl-delay:" parameter in the robots.txt file to indicate the number of seconds to delay between requests. The first proposed interval between successive pageloads was 60 seconds. However, if pages were downloaded at this rate from a website with more than 100,000 pages over a perfect connection with zero latency and infinite bandwidth, it would take more than 2 months to download only that entire Web site; also, only a fraction of the resources from that Web server would be used. This does not seem acceptable. Cho uses 10 seconds as an interval for accesses, and the WIRE crawler uses 15 seconds as the default. The MercatorWeb crawler follows an adaptive politeness policy: if it took t seconds to download a document from a given server, the crawler waits for 10t seconds before downloading the next page. Dill et al. use 1 second. For those using Web crawlers for research purposes, a more detailed cost-benefit analysis is needed and ethical considerations should be taken into account when deciding where to crawl and how fast to crawl. Anecdotal evidence from access logs shows that access intervals from known crawlers vary between 20 seconds and 3–4 minutes. It is worth noticing that even when being very polite, and taking all the safeguards to avoid overloading Web servers, some complaints from Web server administrators are received. Brin and Page note that: "... running a crawler which connects to more than half a million servers (...) generates a fair amount of e-mail and phone calls. Because of the vast number of people coming on line, there are always those who do not know what a crawler is, because this is the first one they have seen." Parallelization policy A parallel crawler is a crawler that runs multiple processes in parallel. 
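A sketch of such a politeness layer, using Python's standard robots.txt parser and a per-host delay, is shown below; the user-agent string and the fallback delay are arbitrary illustrative choices, not values mandated by any standard.

```python
import time
import urllib.robotparser
from urllib.parse import urlparse

AGENT = "ExampleCrawler"   # hypothetical user-agent string
last_access = {}           # host -> time of the last request to that host

def allowed_and_delay(url: str):
    """Check robots.txt for a URL and return (allowed, crawl_delay_in_seconds)."""
    parts = urlparse(url)
    rp = urllib.robotparser.RobotFileParser()
    rp.set_url(f"{parts.scheme}://{parts.netloc}/robots.txt")
    rp.read()                                   # fetches and parses robots.txt
    delay = rp.crawl_delay(AGENT) or 10         # fall back to a conservative 10 s
    return rp.can_fetch(AGENT, url), delay

def polite_wait(url: str, delay: float) -> None:
    """Sleep until at least `delay` seconds have passed since the host was last hit."""
    host = urlparse(url).netloc
    elapsed = time.time() - last_access.get(host, 0.0)
    if elapsed < delay:
        time.sleep(delay - elapsed)
    last_access[host] = time.time()

# Usage (network access required):
#   ok, delay = allowed_and_delay("https://example.com/page.html")
#   if ok:
#       polite_wait("https://example.com/page.html", delay)
#       ... fetch the page ...
```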
The goal of parallel crawling is to maximize the download rate while minimizing the overhead from parallelization and avoiding repeated downloads of the same page. To avoid downloading the same page more than once, the crawling system requires a policy for assigning the new URLs discovered during the crawling process, as the same URL can be found by two different crawling processes. Architectures A crawler must not only have a good crawling strategy, as noted in the previous sections, but it should also have a highly optimized architecture. Shkapenyuk and Suel noted that: Web crawlers are a central part of search engines, and details on their algorithms and architecture are kept as business secrets. When crawler designs are published, there is often an important lack of detail that prevents others from reproducing the work. There are also emerging concerns about "search engine spamming", which prevent major search engines from publishing their ranking algorithms. Security While most website owners are keen to have their pages indexed as broadly as possible to have a strong presence in search engines, web crawling can also have unintended consequences and lead to a compromise or data breach if a search engine indexes resources that shouldn't be publicly available, or pages revealing potentially vulnerable versions of software. Apart from standard web application security recommendations, website owners can reduce their exposure to opportunistic hacking by only allowing search engines to index the public parts of their websites (with robots.txt) and explicitly blocking them from indexing transactional parts (login pages, private pages, etc.). Crawler identification Web crawlers typically identify themselves to a Web server by using the User-agent field of an HTTP request. Web site administrators typically examine their Web servers' logs and use the user agent field to determine which crawlers have visited the web server and how often. The user agent field may include a URL where the Web site administrator may find out more information about the crawler. Examining Web server logs is a tedious task, and therefore some administrators use tools to identify, track and verify Web crawlers. Spambots and other malicious Web crawlers are unlikely to place identifying information in the user agent field, or they may mask their identity as a browser or other well-known crawler. Web site administrators prefer Web crawlers to identify themselves so that they can contact the owner if needed. In some cases, crawlers may be accidentally trapped in a crawler trap or they may be overloading a Web server with requests, and the owner needs to stop the crawler. Identification is also useful for administrators who are interested in knowing when they may expect their Web pages to be indexed by a particular search engine. Crawling the deep web A vast number of web pages lie in the deep or invisible web (Shestakov, Denis (2008). Search Interfaces on the Web: Querying and Characterizing. TUCS Doctoral Dissertations 104, University of Turku). These pages are typically only accessible by submitting queries to a database, and regular crawlers are unable to find these pages if there are no links that point to them. Google's Sitemaps protocol and mod oai are intended to allow discovery of these deep-Web resources. Deep web crawling also multiplies the number of web links to be crawled. Some crawlers only take some of the URLs in <a href="URL"> form.
In some cases, such as the Googlebot, Web crawling is done on all text contained inside the hypertext content, tags, or text. Strategic approaches may be taken to target deep Web content. With a technique called screen scraping, specialized software may be customized to automatically and repeatedly query a given Web form with the intention of aggregating the resulting data. Such software can be used to span multiple Web forms across multiple Websites. Data extracted from the results of one Web form submission can be taken and applied as input to another Web form thus establishing continuity across the Deep Web in a way not possible with traditional web crawlers. Pages built on AJAX are among those causing problems to web crawlers. Google has proposed a format of AJAX calls that their bot can recognize and index. Web crawler bias A recent study based on a large scale analysis of robots.txt files showed that certain web crawlers were preferred over others, with Googlebot being the most preferred web crawler. Visual vs programmatic crawlers There are a number of "visual web scraper/crawler" products available on the web which will crawl pages and structure data into columns and rows based on the users requirements. One of the main difference between a classic and a visual crawler is the level of programming ability required to set up a crawler. The latest generation of "visual scrapers" remove the majority of the programming skill needed to be able to program and start a crawl to scrape web data. The visual scraping/crawling method relies on the user "teaching" a piece of crawler technology, which then follows patterns in semi-structured data sources. The dominant method for teaching a visual crawler is by highlighting data in a browser and training columns and rows. While the technology is not new, for example it was the basis of Needlebase which has been bought by Google (as part of a larger acquisition of ITA Labs), there is continued growth and investment in this area by investors and end-users. List of web crawlers The following is a list of published crawler architectures for general-purpose crawlers (excluding focused web crawlers), with a brief description that includes the names given to the different components and outstanding features: Historical web crawlers World Wide Web Worm was a crawler used to build a simple index of document titles and URLs. The index could be searched by using the grep Unix command. Yahoo! Slurp was the name of the Yahoo! Search crawler until Yahoo! contracted with Microsoft to use Bingbot instead. In-house web crawlers Applebot is Apple's web crawler. It supports Siri and other products. Bingbot is the name of Microsoft's Bing webcrawler. It replaced Msnbot''. Baiduspider is Baidu's web crawler. Googlebot is described in some detail, but the reference is only about an early version of its architecture, which was written in C++ and Python. The crawler was integrated with the indexing process, because text parsing was done for full-text indexing and also for URL extraction. There is a URL server that sends lists of URLs to be fetched by several crawling processes. During parsing, the URLs found were passed to a URL server that checked if the URL have been previously seen. If not, the URL was added to the queue of the URL server. WebCrawler was used to build the first publicly available full-text index of a subset of the Web. It was based on lib-WWW to download pages, and another program to parse and order URLs for breadth-first exploration of the Web graph. 
It also included a real-time crawler that followed links based on the similarity of the anchor text with the provided query. WebFountain is a distributed, modular crawler similar to Mercator but written in C++. Xenon is a web crawler used by government tax authorities to detect fraud. Commercial web crawlers The following web crawlers are available, for a price:: SortSite - crawler for analyzing websites, available for Windows and Mac OS Swiftbot - Swiftype's web crawler, available as software as a service Open-source crawlers GNU Wget is a command-line-operated crawler written in C and released under the GPL. It is typically used to mirror Web and FTP sites. GRUB was an open source distributed search crawler that Wikia Search used to crawl the web. Heritrix is the Internet Archive's archival-quality crawler, designed for archiving periodic snapshots of a large portion of the Web. It was written in Java. ht://Dig includes a Web crawler in its indexing engine. HTTrack uses a Web crawler to create a mirror of a web site for off-line viewing. It is written in C and released under the GPL. mnoGoSearch is a crawler, indexer and a search engine written in C and licensed under the GPL (*NIX machines only) Apache Nutch is a highly extensible and scalable web crawler written in Java and released under an Apache License. It is based on Apache Hadoop and can be used with Apache Solr or Elasticsearch. Open Search Server is a search engine and web crawler software release under the GPL. PHP-Crawler is a simple PHP and MySQL based crawler released under the BSD License. Scrapy, an open source webcrawler framework, written in python (licensed under BSD). Seeks, a free distributed search engine (licensed under AGPL). StormCrawler, a collection of resources for building low-latency, scalable web crawlers on Apache Storm (Apache License). tkWWW Robot, a crawler based on the tkWWW web browser (licensed under GPL). Xapian, a search crawler engine, written in c++. YaCy, a free distributed search engine, built on principles of peer-to-peer networks (licensed under GPL). See also Automatic indexing Gnutella crawler Web archiving Webgraph Website mirroring software Search Engine Scraping Web scraping References Further reading Cho, Junghoo, "Web Crawling Project", UCLA Computer Science Department. A History of Search Engines, from Wiley WIVET is a benchmarking project by OWASP, which aims to measure if a web crawler can identify all the hyperlinks in a target website. Shestakov, Denis, "Current Challenges in Web Crawling" and "Intelligent Web Crawling", slides for tutorials given at ICWE'13 and WI-IAT'13. A History of Search Engines, from Blogingguru Search engine software Internet search algorithms
2698660
https://en.wikipedia.org/wiki/Methods%20of%20computing%20square%20roots
Methods of computing square roots
Methods of computing square roots are numerical analysis algorithms for approximating the principal, or non-negative, square root (usually denoted √S or S^(1/2)) of a real number S. Arithmetically, it means given S, a procedure for finding a number which, when multiplied by itself, yields S; algebraically, it means a procedure for finding the non-negative root of the equation x² = S; geometrically, it means given the area of a square, a procedure for constructing a side of the square. Every positive real number has two square roots. The principal square root of most numbers is an irrational number with an infinite decimal expansion. As a result, the decimal expansion of any such square root can only be computed to some finite-precision approximation. However, even if we are taking the square root of a perfect square integer, so that the result does have an exact finite representation, the procedure used to compute it may only return a series of increasingly accurate approximations. The continued fraction representation of a real number can be used instead of its decimal or binary expansion and this representation has the property that the square root of any rational number (which is not already a perfect square) has a periodic, repeating expansion, similar to how rational numbers have repeating expansions in the decimal notation system. The most common analytical methods are iterative and consist of two steps: finding a suitable starting value, followed by iterative refinement until some termination criterion is met. The starting value can be any number, but fewer iterations will be required the closer it is to the final result. The most familiar such method, most suited for programmatic calculation, is Newton's method, which is based on a property of the derivative in the calculus. A few methods, like paper-and-pencil synthetic division and series expansion, do not require a starting value. In some applications, an integer square root is required, which is the square root rounded or truncated to the nearest integer (a modified procedure may be employed in this case). The method employed depends on what the result is to be used for (i.e. how accurate it has to be), how much effort one is willing to put into the procedure, and what tools are at hand. The methods may be roughly classified as those suitable for mental calculation, those usually requiring at least paper and pencil, and those which are implemented as programs to be executed on a digital electronic computer or other computing device. Algorithms may take into account convergence (how many iterations are required to achieve a specified precision), computational complexity of individual operations (e.g. division) or iterations, and error propagation (the accuracy of the final result). Procedures for finding square roots (particularly the square root of 2) have been known since at least the period of ancient Babylon in the 17th century BCE. Heron's method from first-century Egypt was the first ascertainable algorithm for computing square roots. Modern analytic methods began to be developed after the introduction of the Arabic numeral system to western Europe in the early Renaissance. Today, nearly all computing devices have a fast and accurate square root function, either as a programming language construct, a compiler intrinsic or library function, or as a hardware operator, based on one of the described procedures. Initial estimate Many iterative square root algorithms require an initial seed value.
The seed must be a non-zero positive number; it should be between 1 and S, the number whose square root is desired, because the square root must be in that range. If the seed is far away from the root, the algorithm will require more iterations. If one initializes with x₀ = 1 (or x₀ = S), then approximately ½|log₂ S| iterations will be wasted just getting the order of magnitude of the root. It is therefore useful to have a rough estimate, which may have limited accuracy but is easy to calculate. In general, the better the initial estimate, the faster the convergence. For Newton's method (also called the Babylonian or Heron's method), a seed somewhat larger than the root will converge slightly faster than a seed somewhat smaller than the root. In general, an estimate is made relative to an arbitrary interval known to contain the root (such as [1, S]). The estimate is a specific value of a functional approximation to f(x) = √x over the interval. Obtaining a better estimate involves either obtaining tighter bounds on the interval, or finding a better functional approximation to f(x). The latter usually means using a higher order polynomial in the approximation, though not all approximations are polynomial. Common methods of estimating include scalar, linear, hyperbolic and logarithmic. A decimal base is usually used for mental or paper-and-pencil estimating. A binary base is more suitable for computer estimates. In estimating, the exponent and mantissa are usually treated separately, as the number would be expressed in scientific notation. Decimal estimates Typically the number is expressed in scientific notation as S = a · 10^(2n), where 1 ≤ a < 100 and n is an integer, and the range of possible square roots is √S = √a · 10^n, where 1 ≤ √a < 10. Scalar estimates Scalar methods divide the range into intervals, and the estimate in each interval is represented by a single scalar number. If the range is considered as a single interval, the arithmetic mean (5.5) or geometric mean (√10 ≈ 3.16) times 10^n are plausible estimates. The absolute and relative error for these will differ. In general, a single scalar will be very inaccurate. Better estimates divide the range into two or more intervals, but scalar estimates have inherently low accuracy. For two intervals, divided geometrically, the square root can be estimated as √S ≈ 2 · 10^n if a < 10, and √S ≈ 6 · 10^n if a ≥ 10. This estimate has maximum absolute error of 4 · 10^n at a = 100, and maximum relative error of 100% at a = 1. For example, for 125348 factored as 12.5348 × 10^4, the estimate is √S ≈ 6 · 10^2 = 600. √125348 ≈ 354.0, an absolute error of 246 and relative error of almost 70%. Linear estimates A better estimate, and the standard method used, is a linear approximation to the function y = x² over a small arc. If, as above, powers of the base are factored out of the number and the interval reduced to [1, 100], a secant line spanning the arc, or a tangent line somewhere along the arc, may be used as the approximation, but a least-squares regression line intersecting the arc will be more accurate. A least-squares regression line minimizes the average difference between the estimate and the value of the function. Its equation is y = 8.7x − 10. Reordering, x = 0.115y + 1.15. Rounding the coefficients for ease of computation, √a ≈ a/10 + 1.2. That is the best estimate on average that can be achieved with a single-piece linear approximation of the function y = x² in the interval [1, 100]. It has a maximum absolute error of 1.2 at a = 100, and maximum relative error of 30% at a = 1 and a = 10. To divide by 10, subtract one from the exponent of a, or figuratively move the decimal point one digit to the left. For this formulation, any additive constant equal to 1 plus a small increment will make a satisfactory estimate, so remembering the exact number isn't a burden.
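A sketch of this decimal estimate, assuming the rounded coefficients above (the helper name is arbitrary):

```python
import math

def rough_sqrt(S: float) -> float:
    """Rough decimal seed: factor out an even power of ten so the mantissa a lies in
    [1, 100), then apply the rounded regression line sqrt(a) ~= a/10 + 1.2."""
    if S <= 0:
        raise ValueError("S must be positive")
    a, n = S, 0
    while a >= 100:       # each factor of 100 removed contributes one factor of 10 to the root
        a /= 100
        n += 1
    while a < 1:
        a *= 100
        n -= 1
    return (a / 10 + 1.2) * 10 ** n

print(rough_sqrt(75), math.sqrt(75))   # 8.7 versus the true value 8.660...
```

Such an estimate is only meant to seed an iterative method such as Newton's.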
The approximation (rounded or not) using a single line spanning the range is less than one significant digit of precision; the relative error is greater than 1/22, so less than 2 bits of information are provided. The accuracy is severely limited because the range is two orders of magnitude, quite large for this kind of estimation. A much better estimate can be obtained by a piece-wise linear approximation: multiple line segments, each approximating some subarc of the original. The more line segments used, the better the approximation. The most common way is to use tangent lines; the critical choices are how to divide the arc and where to place the tangent points. An efficacious way to divide the arc from y=1 to y=100 is geometrically: for two intervals, the bounds of the intervals are the square root of the bounds of the original interval, 1*100, i.e. [1,] and [,100]. For three intervals, the bounds are the cube roots of 100: [1,], [,()2], and [()2,100], etc. For two intervals, = 10, a very convenient number. Tangent lines are easy to derive, and are located at x = and x = . Their equations are: and . Inverting, the square roots are: and . Thus for : The maximum absolute errors occur at the high points of the intervals, at a=10 and 100, and are 0.54 and 1.7 respectively. The maximum relative errors are at the endpoints of the intervals, at a=1, 10 and 100, and are 17% in both cases. 17% or 0.17 is larger than 1/10, so the method yields less than a decimal digit of accuracy. Hyperbolic estimates In some cases, hyperbolic estimates may be efficacious, because a hyperbola is also a convex curve and may lie along an arc of Y = x2 better than a line. Hyperbolic estimates are more computationally complex, because they necessarily require a floating division. A near-optimal hyperbolic approximation to x2 on the interval is y=190/(10-x)-20. Transposing, the square root is x = -190/(y+20)+10. Thus for : The floating division need be accurate to only one decimal digit, because the estimate overall is only that accurate, and can be done mentally. A hyperbolic estimate is better on average than scalar or linear estimates. It has maximum absolute error of 1.58 at 100 and maximum relative error of 16.0% at 10. For the worst case at a=10, the estimate is 3.67. If one starts with 10 and applies Newton-Raphson iterations straight away, two iterations will be required, yielding 3.66, before the accuracy of the hyperbolic estimate is exceeded. For a more typical case like 75, the hyperbolic estimate is 8.00, and 5 Newton-Raphson iterations starting at 75 would be required to obtain a more accurate result. Arithmetic estimates A method analogous to piece-wise linear approximation but using only arithmetic instead of algebraic equations, uses the multiplication tables in reverse: the square root of a number between 1 and 100 is between 1 and 10, so if we know 25 is a perfect square (5 × 5), and 36 is a perfect square (6 × 6), then the square root of a number greater than or equal to 25 but less than 36, begins with a 5. Similarly for numbers between other squares. This method will yield a correct first digit, but it is not accurate to one digit: the first digit of the square root of 35 for example, is 5, but the square root of 35 is almost 6. A better way is to the divide the range into intervals half way between the squares. So any number between 25 and half way to 36, which is 30.5, estimate 5; any number greater than 30.5 up to 36, estimate 6. 
The procedure only requires a little arithmetic to find a boundary number in the middle of two products from the multiplication table. Here is a reference table of those boundaries: The final operation is to multiply the estimate by the power of ten divided by 2, so for , The method implicitly yields one significant digit of accuracy, since it rounds to the best first digit. The method can be extended 3 significant digits in most cases, by interpolating between the nearest squares bounding the operand. If , then is approximately k plus a fraction, the difference between and k2 divided by the difference between the two squares: where The final operation, as above, is to multiply the result by the power of ten divided by 2; is a decimal digit and is a fraction that must be converted to decimal. It usually has only a single digit in the numerator, and one or two digits in the denominator, so the conversion to decimal can be done mentally. Example: find the square root of 75. , so is 75 and is 0. From the multiplication tables, the square root of the mantissa must be 8 point something because 8 × 8 is 64, but 9 × 9 is 81, too big, so is 8; something is the decimal representation of . The fraction is 75 - k2 = 11, the numerator, and 81 - k2 = 17, the denominator. 11/17 is a little less than 12/18, which is 2/3s or .67, so guess .66 (it's ok to guess here, the error is very small). So the estimate is . to three significant digits is 8.66, so the estimate is good to 3 significant digits. Not all such estimates using this method will be so accurate, but they will be close. Binary estimates When working in the binary numeral system (as computers do internally), by expressing as where , the square root can be estimated as which is the least-squares regression line to 3 significant digit coefficients. has maximum absolute error of 0.0408 at =2, and maximum relative error of 3.0% at =1. A computationally convenient rounded estimate (because the coefficients are powers of 2) is: which has maximum absolute error of 0.086 at 2 and maximum relative error of 6.1% at =0.5 and =2.0. For , the binary approximation gives . , so the estimate has an absolute error of 19 and relative error of 5.3%. The relative error is a little less than 1/24, so the estimate is good to 4+ bits. An estimate for good to 8 bits can be obtained by table lookup on the high 8 bits of , remembering that the high bit is implicit in most floating point representations, and the bottom bit of the 8 should be rounded. The table is 256 bytes of precomputed 8-bit square root values. For example, for the index 111011012 representing 1.851562510, the entry is 101011102 representing 1.35937510, the square root of 1.851562510 to 8 bit precision (2+ decimal digits). Babylonian method Perhaps the first algorithm used for approximating is known as the Babylonian method, despite there being no direct evidence beyond informed conjecture that the eponymous Babylonian mathematicians employed this method. The method is also known as Heron's method, after the first-century Greek mathematician Hero of Alexandria who gave the first explicit description of the method in his AD 60 work Metrica. 
The basic idea is that if x is an overestimate of the square root of a non-negative real number S then S/x will be an underestimate, or vice versa, and so the average of these two numbers may reasonably be expected to provide a better approximation (though the formal proof of that assertion depends on the inequality of arithmetic and geometric means that shows this average is always an overestimate of the square root, as noted in the article on square roots, thus assuring convergence). This is equivalent to using Newton's method to solve x² − S = 0. More precisely, if x is our initial guess of √S and e is the error in our estimate such that S = (x + e)², then we can expand the binomial and solve for e ≈ (S − x²)/(2x), since e ≪ x. Therefore, we can compensate for the error and update our old estimate as x + e ≈ (x + S/x)/2. Since the computed error was not exact, this becomes our next best guess. The process of updating is iterated until desired accuracy is obtained. This is a quadratically convergent algorithm, which means that the number of correct digits of the approximation roughly doubles with each iteration. It proceeds as follows:

1. Begin with an arbitrary positive starting value x0 (the closer to the actual square root of S, the better).
2. Let xn+1 be the average of xn and S/xn (using the arithmetic mean to approximate the geometric mean).
3. Repeat step 2 until the desired accuracy is achieved.

It can also be represented as xn+1 = (xn + S/xn)/2.

This algorithm works equally well in the p-adic numbers, but cannot be used to identify real square roots with p-adic square roots; one can, for example, construct a sequence of rational numbers by this method that converges to +3 in the reals, but to −3 in the 2-adics.

Example

To calculate √S, where S = 125348, to six significant figures, use the rough estimation method above to get a starting value. Therefore, .

Convergence

Suppose that x0 > 0 and S > 0. Then for any natural number n, xn > 0. Let the relative error in xn be defined by and thus Then it can be shown that And thus that and consequently that convergence is assured, and quadratic.

Worst case for convergence

If using the rough estimate above with the Babylonian method, then the least accurate cases in ascending order are as follows: Thus in any case, Rounding errors will slow the convergence. It is recommended to keep at least one extra digit beyond the desired accuracy of the xn being calculated to minimize round off error.

Bakhshali method

This method for finding an approximation to a square root was described in an ancient Indian mathematical manuscript called the Bakhshali manuscript. It is equivalent to two iterations of the Babylonian method beginning with x0. Thus, the algorithm is quartically convergent, which means that the number of correct digits of the approximation roughly quadruples with each iteration. The original presentation, using modern notation, is as follows: To calculate √S, let x0² be the initial approximation to S. Then, successively iterate as: This can be used to construct a rational approximation to the square root by beginning with an integer. If x0 = N is an integer chosen so N² is close to S, and d = S − N² is the difference whose absolute value is minimized, then the first iteration can be written as: The Bakhshali method can be generalized to the computation of an arbitrary root, including fractional roots.

Example

Using the same example as given with the Babylonian method, let Then, the first iteration gives Likewise the second iteration gives
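As a concrete illustration of the Babylonian iteration described above, here is a minimal C sketch. The stopping rule (relative change below a tolerance) and the choice of seed are arbitrary; any positive starting value works and only affects the number of iterations.

#include <math.h>

/* Babylonian (Heron's) method: repeatedly average x and S/x.
   Quadratically convergent from any positive starting value x0. */
double babylonian_sqrt(double S, double x0, double tol)
{
    double x = x0, prev;
    do {
        prev = x;
        x = 0.5 * (x + S / x);   /* x_{n+1} = (x_n + S/x_n)/2 */
    } while (fabs(x - prev) > tol * x);
    return x;
}

Starting from x0 = 600 for S = 125348, four iterations already agree with √125348 ≈ 354.045 to six significant figures, consistent with the digit-doubling behaviour noted above.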
Digit-by-digit calculation

This is a method to find each digit of the square root in a sequence. It is slower than the Babylonian method, but it has several advantages:
It can be easier for manual calculations.
Every digit of the root found is known to be correct, i.e., it does not have to be changed later.
If the square root has an expansion that terminates, the algorithm terminates after the last digit is found. Thus, it can be used to check whether a given integer is a square number.
The algorithm works for any base, and naturally, the way it proceeds depends on the base chosen.

Napier's bones include an aid for the execution of this algorithm. The shifting nth root algorithm is a generalization of this method.

Basic principle

First, consider the case of finding the square root of a number Z, that is the square of a two-digit number XY, where X is the tens digit and Y is the units digit. Specifically: Z = (10X + Y)² = 100X² + 20XY + Y². Now using the digit-by-digit algorithm, we first determine the value of X. X is the largest digit such that X² is less than or equal to Z with its two rightmost digits removed. In the next iteration, we pair the digits, multiply X by 2, and place the result in the tens place while we try to figure out what the value of Y is. Since this is a simple case where the answer is a perfect square root XY, the algorithm stops here. The same idea can be extended to arbitrary square root computations, as follows. Suppose we are able to find the square root of N by expressing it as a sum of n positive numbers such that By repeatedly applying the basic identity (a + b)² = a² + 2ab + b², the right-hand-side term can be expanded as This expression allows us to find the square root by sequentially guessing the values of s. Suppose that the numbers have already been guessed, then the m-th term of the right-hand side of the above summation is given by where is the approximate square root found so far. Now each new guess should satisfy the recursion such that for all with initialization When the exact square root has been found; if not, then the sum of s gives a suitable approximation of the square root, with being the approximation error. For example, in the decimal number system we have where are place holders and the coefficients . At any m-th stage of the square root calculation, the approximate root found so far, and the summation term are given by Here since the place value of is an even power of 10, we only need to work with the pair of most significant digits of the remaining term at any m-th stage. The section below codifies this procedure. It is obvious that a similar method can be used to compute the square root in number systems other than the decimal number system. For instance, finding the digit-by-digit square root in the binary number system is quite efficient since the value of is searched from a smaller set of binary digits {0,1}. This makes the computation faster since at each stage the value of is either for or for . The fact that we have only two possible options for also makes the process of deciding the value of at the m-th stage of calculation easier. This is because we only need to check if for If this condition is satisfied, then we take ; if not then Also, the fact that multiplication by 2 is done by left bit-shifts helps in the computation.

Decimal (base 10)

Write the original number in decimal form. The numbers are written similar to the long division algorithm, and, as in long division, the root will be written on the line above. Now separate the digits into pairs, starting from the decimal point and going both left and right. The decimal point of the root will be above the decimal point of the square.
One digit of the root will appear above each pair of digits of the square. Beginning with the left-most pair of digits, do the following procedure for each pair:

1. Starting on the left, bring down the most significant (leftmost) pair of digits not yet used (if all the digits have been used, write "00") and write them to the right of the remainder from the previous step (on the first step, there will be no remainder). In other words, multiply the remainder by 100 and add the two digits. This will be the current value c.
2. Find p, y and x, as follows:
Let p be the part of the root found so far, ignoring any decimal point. (For the first step, p = 0.)
Determine the greatest digit x such that x(20p + x) ≤ c. We will use a new variable y = x(20p + x).
Note: 20p + x is simply twice p, with the digit x appended to the right.
Note: x can be found by guessing what c/(20·p) is and doing a trial calculation of y, then adjusting x upward or downward as necessary.
3. Place the digit x as the next digit of the root, i.e., above the two digits of the square you just brought down. Thus the next p will be the old p times 10 plus x.
4. Subtract y from c to form a new remainder.
5. If the remainder is zero and there are no more digits to bring down, then the algorithm has terminated. Otherwise go back to step 1 for another iteration.

Examples

Find the square root of 152.2756.

          1  2. 3  4
        /
      \/  01 52.27 56

          01                1*1 <= 1 < 2*2                  x = 1
          01                y = x*x = 1*1 = 1
          00 52             22*2 <= 52 < 23*3               x = 2
          00 44             y = (20+x)*x = 22*2 = 44
             08 27          243*3 <= 827 < 244*4            x = 3
             07 29          y = (240+x)*x = 243*3 = 729
                98 56       2464*4 <= 9856 < 2465*5         x = 4
                98 56       y = (2460+x)*x = 2464*4 = 9856
                00 00       Algorithm terminates: Answer is 12.34

Binary numeral system (base 2)

This section uses the formalism from the digit-by-digit calculation above, with the slight variation that we let , with each or . We iterate all , from down to , and build up an approximate solution , the sum of all for which we have determined the value. To determine if equals or , we let . If (i.e. the square of our approximate solution including does not exceed the target square) then , otherwise and . To avoid squaring in each step, we store the difference and incrementally update it by setting with . Initially, we set for the largest with . As an extra optimization, we store and , the two terms of in case that is nonzero, in separate variables , : and can be efficiently updated in each step: Note that: , which is the final result returned in the function below.

An implementation of this algorithm in C:

#include <assert.h>
#include <stdint.h>

int32_t isqrt(int32_t n)
{
    assert(("sqrt input should be non-negative", n >= 0));

    // Xₙ₊₁
    int32_t x = n;

    // cₙ
    int32_t c = 0;

    // dₙ which starts at the highest power of four <= n
    int32_t d = 1 << 30;  // The second-to-top bit is set.
                          // Same as ((unsigned) INT32_MAX + 1) / 2.
    while (d > n)
        d >>= 2;

    // for dₙ … d₀
    while (d != 0) {
        if (x >= c + d) {      // if Xₘ₊₁ ≥ Yₘ then aₘ = 2ᵐ
            x -= c + d;        // Xₘ = Xₘ₊₁ - Yₘ
            c = (c >> 1) + d;  // cₘ₋₁ = cₘ/2 + dₘ (aₘ is 2ᵐ)
        } else {
            c >>= 1;           // cₘ₋₁ = cₘ/2 (aₘ is 0)
        }
        d >>= 2;               // dₘ₋₁ = dₘ/4
    }
    return c;                  // c₋₁
}

Faster algorithms, in binary and decimal or any other base, can be realized by using lookup tables—in effect trading more storage space for reduced run time.

Exponential identity

Pocket calculators typically implement good routines to compute the exponential function and the natural logarithm, and then compute the square root of S using the identity √S = e^((ln S)/2), found using the properties of logarithms (ln(xⁿ) = n ln x) and exponentials (e^(ln x) = x). The denominator in the fraction corresponds to the nth root.
In the case above the denominator is 2, hence the equation specifies that the square root is to be found. The same identity is used when computing square roots with logarithm tables or slide rules.

A two-variable iterative method

This method is applicable for finding the square root of and converges best for . This, however, is no real limitation for a computer-based calculation, as in base 2 floating point and fixed point representations, it is trivial to multiply by an integer power of 4, and therefore by the corresponding power of 2, by changing the exponent or by shifting, respectively. Therefore, can be moved to the range . Moreover, the following method does not employ general divisions, but only additions, subtractions, multiplications, and divisions by powers of two, which are again trivial to implement. A disadvantage of the method is that numerical errors accumulate, in contrast to single-variable iterative methods such as the Babylonian one. The initialization step of this method is while the iterative steps read Then, (while ). Note that the convergence of , and therefore also of , is quadratic. The proof of the method is rather easy. First, rewrite the iterative definition of as . Then it is straightforward to prove by induction that and therefore the convergence of to the desired result is ensured by the convergence of to 0, which in turn follows from . This method was developed around 1950 by M. V. Wilkes, D. J. Wheeler and S. Gill for use on EDSAC, one of the first electronic computers. The method was later generalized, allowing the computation of non-square roots.

Iterative methods for reciprocal square roots

The following are iterative methods for finding the reciprocal square root of S, which is 1/√S. Once it has been found, √S is obtained by simple multiplication: √S = S · (1/√S). These iterations involve only multiplication, and not division. They are therefore faster than the Babylonian method. However, they are not stable. If the initial value is not close to the reciprocal square root, the iterations will diverge away from it rather than converge to it. It can therefore be advantageous to perform an iteration of the Babylonian method on a rough estimate before starting to apply these methods. Applying Newton's method to the equation 1/x² − S = 0 produces a method that converges quadratically using three multiplications per step: xn+1 = xn(3 − S·xn²)/2. Another iteration is obtained by Halley's method, which is Householder's method of order two. This converges cubically, but involves five multiplications per iteration: , and . If doing fixed-point arithmetic, the multiplication by 3 and division by 8 can be implemented using shifts and adds. If using floating-point, Halley's method can be reduced to four multiplications per iteration by precomputing and adjusting all the other constants to compensate: , and .

Goldschmidt’s algorithm

Some computers use Goldschmidt's algorithm to simultaneously calculate √S and 1/√S. Goldschmidt's algorithm finds √S faster than Newton-Raphson iteration on a computer with a fused multiply–add instruction and either a pipelined floating point unit or two independent floating-point units. The first way of writing Goldschmidt's algorithm begins (typically using a table lookup) and iterates until is sufficiently close to 1, or a fixed number of iterations. The iterations converge to , and . Note that it is possible to omit either and from the computation, and if both are desired then may be used at the end rather than computing it through in each iteration.
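To make the flow of this first, multiply-only form concrete, here is one possible C arrangement consistent with the behaviour described above (an auxiliary quantity is driven toward 1 while running products converge to √S and 1/√S). The variable names, the placeholder seed, and the fixed iteration count are my own choices, not details taken from the text.

/* One possible arrangement of the first (multiply-only) form of
   Goldschmidt's algorithm. The seed y0 ~ 1/sqrt(S) would normally come
   from a table lookup; it and the fixed iteration count are illustrative. */
void goldschmidt_sqrt(double S, double y0, int iters,
                      double *sqrt_S, double *rsqrt_S)
{
    double b = S;        /* auxiliary value, driven toward 1        */
    double Y = y0;       /* per-step correction factor              */
    double x = S * y0;   /* running product, converges to sqrt(S)   */
    double y = y0;       /* running product, converges to 1/sqrt(S) */

    for (int i = 0; i < iters; i++) {
        b = b * Y * Y;        /* b := b * Y^2     */
        Y = 0.5 * (3.0 - b);  /* Y := (3 - b) / 2 */
        x = x * Y;            /* x := x * Y       */
        y = y * Y;            /* y := y * Y       */
    }
    *sqrt_S = x;
    *rsqrt_S = y;
}

For S = 9 with a seed of y0 = 0.3, three iterations already give x ≈ 3.000 and y ≈ 0.333, and x = S·y holds throughout, matching the remark above that either output can be omitted and recovered at the end by a single multiplication.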
A second form, using fused multiply-add operations, begins (typically using a table lookup) and iterates until is sufficiently close to 0, or a fixed number of iterations. This converges to , and . Taylor series If N is an approximation to , a better approximation can be found by using the Taylor series of the square root function: As an iterative method, the order of convergence is equal to the number of terms used. With two terms, it is identical to the Babylonian method. With three terms, each iteration takes almost as many operations as the Bakhshali approximation, but converges more slowly. Therefore, this is not a particularly efficient way of calculation. To maximize the rate of convergence, choose N so that is as small as possible. Continued fraction expansion Quadratic irrationals (numbers of the form , where a, b and c are integers), and in particular, square roots of integers, have periodic continued fractions. Sometimes what is desired is finding not the numerical value of a square root, but rather its continued fraction expansion, and hence its rational approximation. Let S be the positive number for which we are required to find the square root. Then assuming a to be a number that serves as an initial guess and r to be the remainder term, we can write Since we have , we can express the square root of S as By applying this expression for to the denominator term of the fraction, we have Compact notation The numerator/denominator expansion for continued fractions (see left) is cumbersome to write as well as to embed in text formatting systems. Therefore, special notation has been developed to compactly represent the integer and repeating parts of continued fractions. One such convention is use of a lexical "dog leg" to represent the vinculum between numerator and denominator, which allows the fraction to be expanded horizontally instead of vertically: Here, each vinculum is represented by three line segments, two vertical and one horizontal, separating from . An even more compact notation which omits lexical devices takes a special form: For repeating continued fractions (which all square roots do), the repetend is represented only once, with an overline to signify a non-terminating repetition of the overlined part: For , the value of is 1, so its representation is: Proceeding this way, we get a generalized continued fraction for the square root as The first step to evaluating such a fraction to obtain a root is to do numerical substitutions for the root of the number desired, and number of denominators selected. For example, in canonical form, is 1 and for , is 1, so the numerical continued fraction for 3 denominators is: Step 2 is to reduce the continued fraction from the bottom up, one denominator at a time, to yield a rational fraction whose numerator and denominator are integers. The reduction proceeds thus (taking the first three denominators): Finally (step 3), divide the numerator by the denominator of the rational fraction to obtain the approximate value of the root: rounded to three digits of precision. The actual value of is 1.41 to three significant digits. The relative error is 0.17%, so the rational fraction is good to almost three digits of precision. Taking more denominators gives successively better approximations: four denominators yields the fraction , good to almost 4 digits of precision, etc. Usually, the continued fraction for a given square root is looked up rather than expanded in place because it's tedious to expand it. 
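Evaluating a given expansion is mechanical, though. The short C sketch below carries out the bottom-up reduction described in step 2 above, using the generalized form √S ≈ a + r/(2a + r/(2a + ...)) with r = S − a²; the function name and the use of floating point instead of exact integer fractions are my own simplifications.

/* Approximate sqrt(S) by bottom-up reduction of its generalized
   continued fraction a + r/(2a + r/(2a + ...)), where a is a whole-number
   guess for the root and r = S - a*a. */
double cf_sqrt(double S, double a, int denominators)
{
    double r = S - a * a;   /* remainder term */
    double term = 2.0 * a;  /* innermost denominator */

    /* Reduce from the bottom up, one denominator at a time. */
    for (int i = 1; i < denominators; i++)
        term = 2.0 * a + r / term;

    return a + r / term;
}

With a = 1 and three denominators this returns 17/12 ≈ 1.417 for S = 2, matching the worked reduction above; taking more denominators produces the successive convergents 41/29, 99/70, and so on.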
Continued fractions are available for at least square roots of small integers and common constants. For an arbitrary decimal number, precomputed sources are likely to be useless. The following is a table of small rational fractions called convergents reduced from canonical continued fractions for the square roots of a few common constants: In general, the larger the denominator of a rational fraction, the better the approximation. It can also be shown that truncating a continued fraction yields a rational fraction that is the best approximation to the root of any fraction with denominator less than or equal to the denominator of that fraction - e.g., no fraction with a denominator less than or equal to 70 is as good an approximation to as 99/70. Lucas sequence method the Lucas sequence of the first kind Un(P,Q) is defined by the recurrence relations:and the characteristic equation of it is:it has the discriminant and the roots:all that yield the following positive value:so when we want , we can choose and , and then calculate using and for large value of . The most effective way to calculate and is:Summary:then when : Approximations that depend on the floating point representation A number is represented in a floating point format as which is also called scientific notation. Its square root is and similar formulae would apply for cube roots and logarithms. On the face of it, this is no improvement in simplicity, but suppose that only an approximation is required: then just is good to an order of magnitude. Next, recognise that some powers, , will be odd, thus for 3141.59 = 3.14159 rather than deal with fractional powers of the base, multiply the mantissa by the base and subtract one from the power to make it even. The adjusted representation will become the equivalent of 31.4159 so that the square root will be . If the integer part of the adjusted mantissa is taken, there can only be the values 1 to 99, and that could be used as an index into a table of 99 pre-computed square roots to complete the estimate. A computer using base sixteen would require a larger table, but one using base two would require only three entries: the possible bits of the integer part of the adjusted mantissa are 01 (the power being even so there was no shift, remembering that a normalised floating point number always has a non-zero high-order digit) or if the power was odd, 10 or 11, these being the first two bits of the original mantissa. Thus, 6.25 = 110.01 in binary, normalised to 1.1001 × 22 an even power so the paired bits of the mantissa are 01, while .625 = 0.101 in binary normalises to 1.01 × 2−1 an odd power so the adjustment is to 10.1 × 2−2 and the paired bits are 10. Notice that the low order bit of the power is echoed in the high order bit of the pairwise mantissa. An even power has its low-order bit zero and the adjusted mantissa will start with 0, whereas for an odd power that bit is one and the adjusted mantissa will start with 1. Thus, when the power is halved, it is as if its low order bit is shifted out to become the first bit of the pairwise mantissa. A table with only three entries could be enlarged by incorporating additional bits of the mantissa. However, with computers, rather than calculate an interpolation into a table, it is often better to find some simpler calculation giving equivalent results. Everything now depends on the exact details of the format of the representation, plus what operations are available to access and manipulate the parts of the number. 
For example, Fortran offers an EXPONENT(x) function to obtain the power. Effort expended in devising a good initial approximation is recouped by avoiding the additional iterations of the refinement process that would have been needed for a poor approximation. Since these are few (one iteration requires a divide, an add, and a halving) the constraint is severe.

Many computers follow the IEEE (or sufficiently similar) representation, and a very rapid approximation to the square root can be obtained for starting Newton's method. The technique that follows is based on the fact that the floating point format (in base two) approximates the base-2 logarithm. That is, . So for a 32-bit single precision floating point number in IEEE format (where notably, the power has a bias of 127 added for the represented form) you can get the approximate logarithm by interpreting its binary representation as a 32-bit integer, scaling it by , and removing a bias of 127, i.e. For example, 1.0 is represented by a hexadecimal number 0x3F800000, which would represent if taken as an integer. Using the formula above you get , as expected from . In a similar fashion you get 0.5 from 1.5 (0x3FC00000).

To get the square root, divide the logarithm by 2 and convert the value back. The following program demonstrates the idea. Note that the exponent's lowest bit is intentionally allowed to propagate into the mantissa. One way to justify the steps in this program is to assume b is the exponent bias and m is the number of explicitly stored bits in the mantissa and then show that

/* Assumes that float is in the IEEE 754 single precision floating point format */
#include <stdint.h>

float sqrt_approx(float z)
{
    union { float f; uint32_t i; } val = {z};  /* Convert type, preserving bit pattern */

    /*
     * To justify the following code, prove that
     *
     *   ((((val.i / 2^m) - b) / 2) + b) * 2^m = ((val.i - 2^m) / 2) + ((b + 1) / 2) * 2^m
     *
     * where
     *
     *   b = exponent bias
     *   m = number of mantissa bits
     */
    val.i -= 1 << 23;  /* Subtract 2^m. */
    val.i >>= 1;       /* Divide by 2. */
    val.i += 1 << 29;  /* Add ((b + 1) / 2) * 2^m. */

    return val.f;      /* Interpret again as float */
}

The three mathematical operations forming the core of the above function can be expressed in a single line. An additional adjustment can be added to reduce the maximum relative error. So, the three operations, not including the cast, can be rewritten as

val.i = (1 << 29) + (val.i >> 1) - (1 << 22) + a;

where a is a bias for adjusting the approximation errors. For example, with a = 0 the results are accurate for even powers of 2 (e.g. 1.0), but for other numbers the results will be slightly too big (e.g. 1.5 for 2.0 instead of 1.414... with 6% error). With a = −0x4B0D2, the maximum relative error is minimized to ±3.5%. If the approximation is to be used for an initial guess for Newton's method to the equation , then the reciprocal form shown in the following section is preferred.

Reciprocal of the square root

A variant of the above routine, which computes the reciprocal of the square root (that is, 1/√x) instead, was written by Greg Walsh and is included below. The integer-shift approximation produced a relative error of less than 4%, and the error dropped further to 0.15% with one iteration of Newton's method on the following line. In computer graphics it is a very efficient way to normalize a vector.
float invSqrt(float x)
{
    float xhalf = 0.5f * x;
    union { float x; int i; } u;

    u.x = x;
    u.i = 0x5f375a86 - (u.i >> 1);
    /* The next line can be repeated any number of times to increase accuracy */
    u.x = u.x * (1.5f - xhalf * u.x * u.x);
    return u.x;
}

Some VLSI hardware implements inverse square root using a second degree polynomial estimation followed by a Goldschmidt iteration.

Negative or complex square

If S < 0, then its principal square root is √S = √|S| · i. If S = a + bi where a and b are real and b ≠ 0, then its principal square root is √S = √((|S| + a)/2) + sgn(b) · √((|S| − a)/2) · i. This can be verified by squaring the root. Here |S| = √(a² + b²) is the modulus of S. The principal square root of a complex number is defined to be the root with the non-negative real part.

See also

Alpha max plus beta min algorithm
nth root algorithm
Square root of 2
Integer square root

Notes

References

External links

Square roots by subtraction
Integer Square Root Algorithm by Andrija Radović
Personal Calculator Algorithms I: Square Roots (William E. Egbert), Hewlett-Packard Journal (May 1977): page 22
Calculator to learn the square root

Root-finding algorithms
Computer arithmetic algorithms
19066038
https://en.wikipedia.org/wiki/Cybernetic%20Serendipity
Cybernetic Serendipity
Cybernetic Serendipity was an exhibition of cybernetic art curated by Jasia Reichardt, shown at the Institute of Contemporary Arts, London, England, from 2 August to 20 October 1968, and then toured across the United States. Two stops in the United States were the Corcoran Annex (Corcoran Gallery of Art), Washington, D.C., from 16 July to 31 August 1969, and the newly opened Exploratorium in San Francisco, from 1 November to 18 December 1969. Content One part of the exhibition was concerned with algorithms and devices for generating music. Some exhibits were pamphlets describing the algorithms, whilst others showed musical notation produced by computers. Devices made musical effects and played tapes of sounds made by computers. Peter Zinovieff lent part of his studio equipment - visitors could sing or whistle a tune into a microphone and his equipment would improvise a piece of music based on the tune. Another part described computer projects such as Gustav Metzger's self-destructive Five Screens With Computer, a design for a new hospital, a computer programmed structure, and dance choreography. The machines and installations were a very noticeable part of the exhibition. Gordon Pask produced a collection of large mobiles (Colloquy of Mobiles (1968)) with interacting parts that let the viewers join in the conversation. Many machines formed kinetic environments or displayed moving images. Bruce Lacey contributed his radio-controlled robots and a light-sensitive owl. Nam June Paik was represented by Robot K-456 and televisions with distorted images. Jean Tinguely provided two of his painting machines. Edward Ihnatowicz's biomorphic hydraulic ear (Sound Activated Mobile (SAM, 1968)) turned toward sounds and John Billingsley's Albert 1967 turned to face light. Wen-Ying Tsai presented his interactive cybernetic sculptures of vibrating stainless-steel rods, stroboscopic light, and audio feedback control. Several artists exhibited machines that drew patterns that the visitor could take away, or involved visitors in games. Cartoonist Rowland Emett designed the mechanical computer Forget-me-not, which was commissioned by Honeywell. Another section explored the computer's ability to produce text - both essays and poetry. Different programs produced Haiku, children's stories, and essays. One of the first computer-generated poems, by Alison Knowles and James Tenney, was included in the exhibition and catalogue. Computer-generated movies were represented by John Whitney's permutations and a Bell Labs movie on their technology for producing movies. Some samples included images of tesseracts rotating in four dimensions, a satellite orbiting the earth, and an animated data structure. Computer graphics were also represented, including pictures produced on cathode ray oscilloscopes and digital plotters. There was a variety of posters and graphics demonstrating the power of computers to do complex (and apparently random) calculations. Other graphics showed a simulated Mondrian and the iconic decreasing squares spiral that appeared on the exhibition's poster and book. The Boeing Company exhibited their use of wireframe graphics. Keith Albarn & Partners contributed to the design of the exhibition. Reflecting the prominence of music in the show, a ten-track album Cybernetic Serendipity Music was released by the ICA to accompany the show. Artists featured included Iannis Xenakis, John Cage, and Peter Zinovieff, a detail of whose graphic score for 'Four Sacred April Rounds’ (1968) was used as the cover artwork. 
Attendance Time magazine noted that there had been 40,000 visitors to the London exhibition. Other reports suggested visitor numbers were as high as 44,000 to 60,000. However, the ICA did not accurately count visitors. After-effects The exhibition provided the energy for the formation of British Computer Arts Society which continued to explore the interaction between science, technology and art, and put on exhibitions (for example Event One at the Royal College of Art ). Several pieces were purchased by the Exploratorium in 1971, some of which are on display to this day. In 2020, The Centre Pompidou exhibited the replica of Gordon Pask’s 1968 Colloquy of Mobiles, reproduced by Paul Pangaro and TJ McLeish in 2018. In 2014 the ICA held a retrospective exhibition Cybernetic Serendipity: A Documentation which included documents, installation photographs, press reviews and publications and a series of discussions in one of which Peter Zinovieff took part. To coincide with the exhibition, Cybernetic Serendipity Music was re-released as a limited-edition vinyl LP by The Vinyl Factory. See also Algorithmic art Computer art Post-conceptual Electronic Art Generative art New Media Art Virtual art Cybernetics References External links (requires membership) (contemporary TV report, presented by Jasia Reichardt) 1968 in England 1968 in art Art exhibitions in London Computer graphics Computer art New media art Cybernetics
36121017
https://en.wikipedia.org/wiki/LibSBML
LibSBML
LibSBML is an open-source software library that provides an application programming interface (API) for the SBML (Systems Biology Markup Language ) format. The libSBML library can be embedded in a software application or used in a web servlet (such as one that might be served by Apache Tomcat) as part of the application or servlet's implementation of support for reading, writing, and manipulating SBML documents and data streams. The core of libSBML is written in ISO standard C++; the library provides API for many programming languages via interfaces generated with the help of SWIG. The libSBML library is free software released under the terms of the GNU Lesser General Public License as published by the Free Software Foundation; either version 2.1 of the License, or any later version. LibSBML was developed thanks to funding from many agencies, particularly the National Institute of General Medical Sciences (NIGMS, USA) as well as the Defense Advanced Research Projects Agency (DARPA, USA) under the Bio-SPICE program. Description The Systems Biology Markup Language (SBML) is an XML-based format for encoding computational models of a sort common in systems biology. Although SBML is based upon XML, and thus software developers could support SBML using off-the-shelf XML parser libraries, libSBML offers numerous advantages that make it easier for developers to implement support for SBML in their software. The premise behind the development of libSBML is that it is more convenient and efficient for developers to start with a higher-level API tailored specifically to SBML and its distinctive features than it is to start with a plain XML parser library. Significant features of libSBML The following is a partial list of libSBML's features: Supports all Levels and Versions of SBML with common API classes and methods, thus smoothing the differences between different flavors of SBML from the perspective of the application software. Provides facilities for manipulating mathematical formulas in both text-string format and MathML 2.0 format, as well as the ability to interconvert mathematical expressions between these forms. Internally, libSBML uses familiar Abstract Syntax Trees (ASTs) to represent formulas, and provides AST-oriented methods for calling applications. Performs validation of XML and SBML at the time of parsing files and data streams. This helps verify the correctness of models in a way that goes beyond simple syntactic validation. Offers support for dimensional analysis and unit checking. LibSBML implements a thorough system for dimensional analysis and checking units of quantities in a model. Provides facilities for the creation and manipulation of SBML annotations and notes. These have a specific format dictated by the SBML specifications. The formats and standards supported by libSBML include MIRIAM (Minimal Information Requested in the Annotation of a Model) and SBO (the Systems Biology Ontology). Supports transparently reading and writing compressed files in the ZIP, GZIP and BZIP formats. Provides interfaces for the C, C++, C#, Java, Python, Perl, MATLAB, Octave, and Ruby programming languages. The C and C++ interfaces are implemented natively; the C#, Java, Perl, Python, and Ruby interfaces are implemented using SWIG, the Simplified Wrapper Interface Generator; and the MATLAB and Octave interfaces are implemented through custom hand-written code. 
Provides many convenience methods, such as for obtaining a count of the number of boundary condition species, determining the modifier species of a reaction (assuming the reaction provides kinetics), constructing the stoichiometric matrix for all reactions in a model, and more. Manipulation of mathematical formulas Some further explanations may be warranted concerning libSBML's support for working with mathematical formulas. In SBML Level 1, mathematical formulas are represented as text strings using a C-like syntax. This representation was chosen because of its simplicity, widespread familiarity and use in applications such as GEPASI and Jarnac, whose authors contributed to the initial design of SBML. In SBML Levels 2 and 3, there was a need to expand the mathematical vocabulary of Level 1 to include additional functions (both built-in and user-defined), mathematical constants, logical operators, relational operators and a special symbol to represent time. Rather than growing the simple C-like syntax into something more complicated and esoteric in order to support these features, and consequently having to manage two standards in two different formats (XML and text string formulas), SBML Levels 2 and 3 leverage an existing standard for expressing mathematical formulas, namely the content portion of MathML. As mentioned above, LibSBML provides an abstraction for working with mathematical expressions in both text-string and MathML form: Abstract Syntax Trees (ASTs). Abstract Syntax Trees are well known in the computer science community; they are simple recursive data structures useful for representing the syntactic structure of sentences in certain kinds of languages (mathematical or otherwise). Much as libSBML allows programmers to manipulate SBML at the level of domain-specific objects, regardless of SBML Level or version, it also allows programmers to work with mathematical formula at the level of ASTs regardless of whether the original format was C-like infix or MathML. LibSBML goes one step further by allowing programmers to work exclusively with infix formula strings and instantly convert them to the appropriate MathML whenever needed. Dependencies LibSBML requires a separate library to do low-level read/write operations on XML. It can use any one of three XML parser libraries: Xerces, expat or libxml2. Users can specify which library they wish to use at libSBML compilation time. LibSBML hides the differences between these parser libraries behind an abstraction layer; it seamlessly uses whichever library against which a given instance of libSBML has been compiled. (However, released binary distributions of libSBML all make use of the libxml2 library.) Usage LibSBML uses software objects (i.e., instances of classes) that correspond to SBML components, with member variables representing the attributes of the corresponding SBML objects. The libSBML API is constructed to provide an intuitive way of relating SBML and the code needed to create or manipulate it with a class hierarchy that mimics the SBML structure. More information about the libSBML objects is available in the libSBML API documentation. Reading and writing SBML LibSBML enables reading from and writing to either files or strings. Once an SBML document is read, libSBML stores the SBML content in an SBMLDocument object. This object can be written out again later. 
The following is an example written in Python:

>>> import libsbml

# read a document
>>> doc = libsbml.readSBMLFromFile(filename)
>>> doc = libsbml.readSBMLFromString(string)

# helper function that takes either a string
# or filename as argument
>>> doc = libsbml.readSBML(filename)
>>> doc = libsbml.readSBML(string)

# write a document
>>> libsbml.writeSBMLToFile(doc, filename)
True
>>> libsbml.writeSBMLToString(doc)
'<?xml version="1.0" encoding="UTF-8"?>\n <sbml xmlns="http://www.sbml.org/sbml/level3/version1/core" level="3" version="1">\n <model/>\n </sbml>\n'

Creating and manipulating SBML

The libSBML API allows easy creation of objects and subobjects representing SBML elements and the subelements contained within them. The following is an example written in C++:

void createSBML()
{
    // create an SBML Level 3 Version 1 document
    SBMLDocument* doc = new SBMLDocument(3, 1);

    // create the model as a sub element of the document
    Model* model = doc->createModel();

    // create a compartment as a sub element of the model
    Compartment* compartment1 = model->createCompartment();

    // create an independent compartment and then add it to the model
    Compartment* compartment2 = new Compartment(3, 1);
    model->addCompartment(compartment2);
}

Accessing attributes

Each component in SBML has a number of attributes associated with it. These are stored as member variables of a given class, and libSBML provides functions to retrieve and query these values. The syntax of these functions is consistent throughout libSBML. The following is an example written in Python:

>>> import libsbml

# create an SBML Level 3 Version 1 document
>>> sbmlns = libsbml.SBMLNamespaces(3, 1)
>>> doc = libsbml.SBMLDocument(sbmlns)

# create the model as a sub element of the document
>>> model = doc.createModel()

# create a compartment as a sub element of the model
>>> compartment = model.createCompartment()

# set the attributes on the compartment
# note a return value of 0 indicates success
>>> compartment.setId("cell")
0
>>> compartment.setSize(2.3)
0
>>> compartment.setSpatialDimensions(3)
0
>>> compartment.setUnits("litre")
0
>>> compartment.setConstant(True)
0

# get the attribute values
>>> compartment.getId()
'cell'
>>> compartment.getSpatialDimensions()
3

# examine the status of the attribute
>>> compartment.isSetSize()
True
>>> compartment.getSize()
2.3

# unset an attribute value
>>> compartment.unsetSize()
0
>>> compartment.isSetSize()
False
>>> compartment.getSize()
nan

See also

JSBML
libxml2
Xerces
Expat
XML validation
XML
BioModels Database
BioPAX
CellML
MIASE
MIRIAM
Systems Biology Ontology (SBO)
MathML

References

External links

libSBML Home Page

C++ libraries
Free software programmed in C++
Free software
Cross-platform free software
Free computer libraries
Free science software
Articles with example Python (programming language) code
198123
https://en.wikipedia.org/wiki/CinePaint
CinePaint
CinePaint is a free and open source computer program for painting and retouching bitmap frames of films. It is a fork of version 1.0.4 of the GNU Image Manipulation Program (GIMP). It enjoyed some success as one of the earliest open source tools developed for feature motion picture visual effects and animation work. The main reason for this adoption over mainline GIMP was its support for high bit depths (greater than 8 bits per channel), which can be required for film work. The mainline GIMP project later added high bit depths in GIMP 2.9.2, released in November 2015. It is free software under the GPL-2.0-or-later. In 2018, a post titled "CinePaint 2.0 Making Progress" announced continued development, but that version has still not been released.

Main features

Features that set CinePaint apart from its photo-editing predecessor include the frame manager, onion skinning, and the ability to work with 16-bit and floating point pixels for high-dynamic-range imaging (HDR). CinePaint supports a 16-bit color managed workflow for photographers and printers, including CIE*Lab and CMYK editing. It supports the Cineon, DPX, and OpenEXR image file formats. HDR creation from bracketed exposures is easy. CinePaint is a professional open-source raster graphics editor, not a video editor. Its per-channel color engine core supports 8-bit, 16-bit, and 32-bit channels. The image formats it supports include BMP, CIN, DPX, EXR, GIF, JPEG, OpenEXR, PNG, TIFF, and XCF. CinePaint is currently available for UNIX and Unix-like OSes including Mac OS X and IRIX. The program is available on Linux, Mac OS X, FreeBSD and NetBSD. Its main competitors are the mainline GIMP and Adobe Photoshop, although the latter is only available for Mac OS X and Microsoft Windows. Glasgow, a completely new code architecture for CinePaint, was expected to make a new Windows version possible. The Glasgow effort is FLTK-based, but it appears to have stalled. CinePaint version 1.4.4 appeared on SourceForge on 6 May 2021, followed by CinePaint 1.4.5 on 30 May 2021.

Movies

Examples of the software's application in the movie industry include:
Elf (2003)
Looney Tunes: Back in Action (2003)
The League of Extraordinary Gentlemen (2003)
Duplex (2003)
The Last Samurai (2003)
Showtime (2002)
Blue Crush (2002)
2 Fast 2 Furious (2003)
The Harry Potter series
Cats & Dogs (2001)
Dr. Dolittle 2 (2001)
Little Nicky (2000)
The Grinch (2000)
The 6th Day (2000)
Stuart Little (1999)
Planet of the Apes (2001)
Stuart Little 2 (2002)

Under its former name Film Gimp, CinePaint was used for films such as Scooby-Doo (2002), Harry Potter and the Philosopher's Stone (2001), The Last Samurai (2003) and Stuart Little (1999).

See also

Comparison of raster graphics editors

References

External links

Sourceforge project site
CinePaint Wiki and downloads
16-bit imaging. From digital camera to print a colour management tutorial
Basic color management for X (linux.com)
High Dynamic Range images under Linux (linux.com)
GIMP and Film Production

Cross-platform software
Free raster graphics editors
Free video software
Raster graphics editors for Linux
Software forks
Graphics software that uses GTK
Video software that uses GTK
52385983
https://en.wikipedia.org/wiki/Associativity-based%20routing
Associativity-based routing
Associativity-based routing (commonly known as ABR) is a mobile routing protocol invented for wireless ad hoc networks, also known as mobile ad hoc networks (MANETs) and wireless mesh networks. ABR was invented in 1993; a U.S. patent application was filed in 1996 and the patent was granted in 1999. ABR was invented by Chai Keong Toh while doing his Ph.D. at Cambridge University.

Route discovery phase

ABR has three phases. The first phase is the route discovery phase. When a source node wishes to transmit data, the protocol intercepts the request and broadcasts a search packet over the wireless interfaces. As the search packet propagates node to node, node identity and stability information are appended to the packet. When the packet eventually reaches the destination node, it will have collected all the information describing the path from source to destination. When that happens, the destination chooses the best route (because there may be more than one path from the source to the destination) and sends a REPLY back to the source node over the chosen path. Note that as the REPLY transits backwards from the destination to the source, each intermediate node updates its routing table, so that it knows how to route data arriving from the upstream node. When the source node receives the REPLY, the route is successfully discovered and established. This process is done in real time and only takes a few milliseconds.

Route reconstruction phase

ABR establishes routes that are long-lived or associativity-stable, so most established routes will seldom experience link breaks; however, if one or more links are broken, ABR will immediately invoke the RRC (route reconstruction) phase. The RRC repairs the broken link by having the upstream node (which senses the link break) perform a localized route repair. The localized route repair is performed by carrying out a localized broadcast query that searches for an alternative long-lived partial route to the destination. ABR route maintenance consists of: (a) partial route discovery, (b) invalid route erasure, (c) valid route update, and (d) new route discovery (worst case).

Route deletion phase

When a discovered route is no longer needed, an RD (Route Delete) packet is initiated by the source node so that all intermediate nodes in the route update their routing table entries and stop relaying data packets associated with the deleted route. In addition to using RD to delete a route, ABR can also implement a soft-state approach where route entries expire or are invalidated after a timeout when there is no traffic activity related to the route over a period of time.

Practicality

In 1998, ABR was successfully implemented in the Linux kernel on laptops of various brands (IBM ThinkPad, Compaq, Toshiba, etc.) equipped with WaveLAN 802.11a PCMCIA wireless adapters. A working 6-node wireless ad hoc network spanning a distance of over 600 meters was achieved, and the successful event was published in Mobile Computing Magazine in 1999. Various tests were performed with the network:
Transmission of up to 500 MBytes of data from source to destination over a 3-hop route
Link breaks and automatic link repairs, proven to be working
Automatic Route Discovery
Route Delete
Web Server in Ad Hoc mode – with the source being the client and the destination being the web server
Transmission of multimedia information (audio and video)
TELNET over Ad Hoc
FTP over Ad Hoc
HTTP over Ad Hoc

Also, network performance measurements on the following were made:
End-to-end delay
TCP throughput
Packet loss ratio
Route discovery delay
Route repair delay
Impact of packet size on throughput
Impact of beaconing interval on throughput and remaining battery life

An enhanced version of the protocol was implemented in the field by defense contractor TRW Inc. in 2002. The enhancements made to the protocol include: (a) network-layer QoS additions and (b) route precedence capabilities.

Patent and work extensions

ABR was granted US patent 5987011, with the assignee being King's College, Cambridge, UK. A few other mobile ad hoc routing protocols have incorporated ABR's stability concept or have extended the ABR protocol, including:
Signal Stability-based Adaptive Routing Protocol (SSA)
Enhanced Associativity Based Routing Protocol (EABR)
Alternative Enhancement of Associativity-Based Routing (AEABR)
Optimized Associativity Threshold Routing (OABTR)
Associativity-Based Clustering Protocol (ABCP)
Fuzzy Based Trust Associativity-Based Routing (Fuzzy-ABR)
Associativity Tick Averaged Associativity-Based Routing (ATA-AR)
Self-adaptive Q-learning based trust ABR (QTABR)
Quality of Service Extensions to ABR (QoSE-ABR)
Associativity-based Multicast Routing (ABAM)
Multipath Associativity Based Routing (MABR)
Associativity routing for Wireless Sensor Networks
Associative Vehicular Ad Hoc Networks (VANETs)

References

Mobile computers
Wireless sensor network
Ad hoc routing protocols
Routing protocols
981218
https://en.wikipedia.org/wiki/Galaksija%20%28computer%29
Galaksija (computer)
The Galaksija (; , meaning "Galaxy") was a build-it-yourself computer designed by Voja Antonić. It was featured in the special edition Računari u vašoj kući (Computers in your home, written by Dejan Ristanović) of a popular eponymous science magazine, published late December 1983 in Belgrade, Yugoslavia. Kits were available but not required as it could be built entirely out of standard off-the-shelf parts. It was later also available in complete form. History In the early eighties, restrictions in SFR Yugoslavia prevented importing computers into the country. At the same time, even the cheapest computers available in the West were nearing average monthly salaries. This meant that only a relative minority of people owned one – mostly a ZX Spectrum or a Commodore 64, though most Yugoslavs were only familiar with a programmable calculator. According to his own words, some time in 1983, Voja Antonić, while vacationing in Hotel Teuta in Risan, was reading the application handbook for the RCA CDP1802 CPU and stumbled upon CPU-assisted video generation. Since the CDP1802 was very primitive, he decided that a Zilog Z80 processor could perform the task as well. Before he returned home to Belgrade, he already had the conceptual diagrams of a computer that used software to generate a video picture. Although using software as opposed to hardware would significantly reduce his design's performance, it also simplified the hardware and reduced its cost. His next step was to find a magazine to publish the diagrams in. The obvious choice was SAM Magazine published in Zagreb, but due to prior bad experiences he decided to publish elsewhere. Near the same time that Antonić made his discovery, Dejan Ristanović, a computer programmer and journalist was entrusted with preparing a special edition of the Galaksija magazine that would be focused on home computers. After Ristanović and Antonić met, they decided to collaborate and publish the computer's diagram in a special issue of the magazine entitled Računari u vašoj kući (Computers in your home). It was released late December 1983. The name of the magazine (Galaksija) would become twinned with the name of the computer. Antonić and Ristanović guesstimated that around a thousand people would try to build the computer by themselves, given that the magazine's circulation was 30,000. Some 8,000 people wound up ordering the build-it-yourself kits from Antonić. This number may in reality be greater if people who did not purchase any kits (including PCB and ROMs) were accounted for. Components were provided by various manufacturers and suppliers: MIPRO and Elektronika from Buje, together with Institut za elektroniku i vakuumsku tehniku (en. Institute for electronics and vacuum technology) delivered PCBs, keyboards and masks Mikrotehnika from Graz sent integrated circuits Voja Antonić personally programmed all EPROMs Galaksija collected requisition forms and organized deliveries Later, Institute for school books and teaching aids together with Elektronika Inženjering started mass commercial production of Galaksija computers, mainly to be delivered to schools. Technical specifications CPU: Zilog Z80A 3.072 MHz ROM "A" or "1" – 4 KB (2732 EPROM) contains bootstrap, core control and Galaksija BASIC interpreter code ROM "B" or "2" – 4 KB (optional, also 2732 EPROM) – additional Galaksija BASIC commands, assembler, machine code monitor, etc. 
Character ROM – 2 KB (2716 EPROM) contains character definitions; characters are 8 x 13 pixels, the block graphics were vertically divided in a 4:5:4 scheme, and horizontally in a 4:4 scheme.
RAM: 2 to 6 KB of 6116 static RAM in the base model, expandable to 54 KB
Text mode: 32 x 16 characters, monochrome
Pseudographics: 2x3 dot matrix combinations in the graphic character subset – 64x48 dots total.
Sound: none according to specifications, but the tape interface was occasionally used as an audio output port – much like the "EAR" port on the ZX Spectrum, which can be used both as an audio and a cassette tape interface. See cassette port for details.
Storage media: cassette tape, recording at a rate of 280 bit/s
I/O ports: 44-pin edge connector with the Z80 bus, tape (DIN connector), monochrome video out (PAL timings, DIN connector), and UHF TV out (RCA connector)
BASIC ROMs
Galaksija BASIC is a BASIC interpreter originally partly based on code taken from TRS-80 Level 1 BASIC, which the creator believed to have been a Microsoft BASIC. However, after extensive modifications to include video generation code (as the CPU was a major participant in video generation, to reduce the cost of hardware) and to improve the programming language, what remained from the original is said to be mainly flow-control and floating point code. It was fully contained in the 4 KB ROM "A" or "1". The additional ROM "B" or "2" provided more Galaksija BASIC commands, an assembler, a monitor, etc.
ROM "A"
The chip labeled as "A" by the creator of Galaksija, Voja Antonić, was commonly referred to as "ROM 1" or just "ROM". ROM "A" contained the bootstrap code of the Galaksija, its control code (a rudimentary operating system), the video generation code (as the Galaksija did not have an advanced video subsystem, its Z80 CPU was responsible even for generating the video signal) and Galaksija BASIC. Fitting all this functionality in 4 KB of 2732 EPROM required a lot of effort and some sacrifices. For example, some message text areas were also used as actual code (e.g. the "READY" message) and the number of error messages was reduced to only three ("WHAT?", "HOW?" and "SORRY").
ROM "B"
ROM "B" of the Galaksija is a 2732 EPROM chip that contains extensions to the original Galaksija BASIC available in the base ROM ("A"). It was labeled as "B" by the creator of the Galaksija, Voja Antonić, but was commonly referred to as "ROM 2". ROM "B" contained added Galaksija BASIC commands and functions (mostly trigonometric) as well as a Z80 assembler and a machine code monitor. This ROM was not required and was an optional upgrade. Although planned for on the mainboard, the content of ROM "B" was not automatically initialized during booting. Instead, users had to execute a Galaksija BASIC command to run a machine code program from ROM "B" before they could gain the additional features. This also meant that even Galaksijas with ROM "B" plugged in could behave entirely as base models.
Character ROM
The character ROM of the Galaksija home computer is a 2716 EPROM chip that contains the graphical definitions of the Galaksija's character set. It had no special name and was labeled "2716" after the type of 2 KB EPROM needed. The Galaksija had a slightly modified (localized) ASCII character set:
There were no lowercase characters
Codes 91 to 94 represented the Serbian characters Č, Ć, Ž and Š, respectively. The letter "Đ" was not present in the original version and was commonly replaced with "DJ".
It contained 64 pseudo-graphics characters, having different combinations of dots in a 2x3 matrix.
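The count of 64 pseudo-graphics characters follows from the 2x3 block itself: six dots, each on or off, give 2^6 = 64 combinations. The sketch below only illustrates that correspondence; the bit ordering and the offset of the graphics subset within the character set are assumptions made for the example, not the Galaksija's actual encoding.

```python
# Illustrative sketch: why a 2x3 block of dots yields exactly 64 pseudo-graphics
# characters (2**6 = 64). The bit ordering and the code offset used here are
# assumptions for illustration, not the Galaksija's actual encoding.

GRAPHICS_BASE = 128  # assumed offset of the graphics subset within the character set


def block_to_code(block):
    """Pack a 2x3 block of dots (rows of booleans) into one character code."""
    index = 0
    for bit, dot in enumerate(dot for row in block for dot in row):
        if dot:
            index |= 1 << bit
    return GRAPHICS_BASE + index  # one of 64 possible codes


# A block with the top-left and bottom-right dots set:
example = [
    [True, False],
    [False, False],
    [False, True],
]
print(block_to_code(example))  # 128 + 0b100001 = 161 under these assumptions
```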
Character codes 64 and 39 are used for the two halves of the logo of the Elektronika Inženjering company (they can be seen in the "READY" prompt)
Each character was represented as an 8x13 matrix of pixels. In this ROM, the 8-pixel rows of each character are represented as the 8 bits of one byte.
"Cassette" port
The Galaksija used cassette tape as secondary storage. It featured a 5-pin DIN connector used to connect the computer to a cassette tape recorder. The tape interface circuitry was rudimentary – other than a few elements controlling the levels, it was essentially a one-bit digital equivalent of the one in the ZX Spectrum. The input signal was routed to the integrated circuit otherwise responsible for the keyboard, so the CPU would "see" the input signal as a series of very fast key presses of varying lengths and gaps between them. It is normally stated that the original Galaksija does not have any dedicated (separate) audio ports, and most programs were written to be silent. It was, however, possible to utilize the cassette tape port as an audio output as well, as is done on the ZX Spectrum (its "EAR" connector). The only technical difference between the ZX Spectrum and the Galaksija with regard to audio is that the ZX Spectrum has a built-in beeper, while the Galaksija's plans do not include any kind of speaker.
Design
To simplify "do-it-yourself" building and reduce cost, the printed circuit board was designed as a single-layer (single-sided) board. This resulted in a relatively complicated design requiring many component-side connections to be made using wires. The Galaksija's case was not pre-built. Instead, the guide suggested it be built out of the printed circuit board material (such as Pertinax) also used for the mainboard. Thus, the top, sides and reinforcements were soldered together to form the "lid". Acrylic glass was recommended for the underside. The guide included instructions on cleaning, painting and even decorating the assembled case. The name "GALAKSIJA" and a decorative border were to be added using Letraset transfer letter sheets after the first (white) coat of paint but before the second coat of the final colour. After the paint dried, the transferred decorations were supposed to be scratched off, exposing the underlying white paint. The keyboard is laid out such that keys have their own memory-mapped addresses that, in most cases, follow the same order as the ASCII codes of the letters on the keys. This saved ROM space by reducing lookup tables, but significantly increased the complexity of the single-layer keyboard PCB, such that it alone required 35 jumpers.
Gallery
See also
History of computer hardware in Yugoslavia
Galaksija BASIC – details about Galaksija's BASIC programming language
Galaksija Plus – improved version of Galaksija, announced in the June/July 1984 (6th) issue of "Računari" magazine (in English: Computers, renamed from "Računari u vašoj kući")
Voja Antonić – the creator of Galaksija
Dejan Ristanović – well-known Serbian writer and computer publicist who authored much of the special issue magazine featuring Galaksija
Z80 – Galaksija's CPU
ZX80 – Sinclair ZX80, which predates the Galaksija by four years and has a remarkably similar system design, including using the Z80A to drive the video output.
References External links Articles Computers in your home – short overview by Dejan Ristanović, the author of Računari u vašoj kući magazine issue, in English 1983: Galaksija – how it all started, by Galaksija's creator Voja Antonić himself (in Serbian) Computer Galaksija – detailed description of computer operation for those planning to build it, as published in the Računari u vašoj kući magazine issue. Written by creator Voja Antonić, in Serbian. Uputstvo za upotrebu – complete, original, user manual on-line, in Serbian. Magazine Scans – scans of original magazine pages containing schematic diagrams, building and other instructions and programs for Galaksija (text in Serbian) Računar Galaksija by Dejan Ristanović, the author of Računari u vašoj kući magazine issue, in Serbian Crowd Supply Project - Crowd Supply Project may offer another Galaksija Presentations The Ultimate Galaksija Talk - in-depth presentation by Tomaž Šolc given at the 29C3 conference Remakes μGalaksija – FPGA Galaksija CMOS – CMOS Galaksija Emulators Galaksija Emulator – original DOS-based emulator by Miodrag Jevremović (in Serbian) Galaksija Emulator pages – Microsoft Windows port of original DOS emulator (in Serbian) MESS – The open source multi-platform multi-system emulator MESS supports Galaksija Sam Coupé — A Galaksija emulator running under Sam Coupé GALe - Galaksija Emulator - Emulates Galaksija in web browser. Online museums Old-Computers.com Museum page on Galaksija Zgodovina – an article in Slovene Other Zoran Modli Home page home page of Ventilator 202 radio show host (in Serbian). Same site contains a story of Ventilator 202 show, (also in Serbian). #247 – An Interview with Voja Antonic – Gerontogenous Galaksija Genesis An audio podcast interview with Voja Antonic about the creation of the Galaksija, in English. galaksija info in English with reproduced schematics, and an english translation of the thesis document "about the CMOS implementation of the GALAKSIJA retro home build computer" Home computers Serbian inventions Z80-based home computers
56958010
https://en.wikipedia.org/wiki/Area%201%20Security
Area 1 Security
Area 1 Security, Inc. is an American cybersecurity company headquartered in San Mateo, California.
History
The company was incorporated in 2013 by founders Oren Falkowitz, Blake Darché, and Phil Syme, all of whom were formerly employed by the U.S. National Security Agency. The company announced venture capital financing led by Kleiner Perkins and including Icon Ventures, Allegis Capital, Cowboy Ventures, Data Collective, First Round Capital, RedSeal Networks CEO Ray Rothrock, and Shape Security CEO Derek Smith. Area 1 was named a Cool Vendor by Gartner in 2016 and listed among the "20 Rising Stars" of the Cloud 100 by Forbes magazine in 2017. Area 1 was recognized by the San Francisco Business Times as a "Best Place to Work" in 2017. In 2018, Area 1 was a Cybersecurity Excellence Awards finalist. In 2019, Area 1 received the Google Cloud Global Technology Partner of the Year Award for Security. In December 2018, Area 1 revealed a Chinese government cyber campaign targeting intergovernmental organizations, ministries of foreign affairs, ministries of finance, trade unions, and think tanks. Over 100 organizations were identified in this campaign by Area 1 Security as targets of the Chinese government’s Strategic Support Force (SSF), which ultimately led to the breach of a diplomatic communications network of the European Union. In 2019, to deter foreign interference in the 2020 United States elections, the Federal Election Commission ruled in AO 2019-12 that Area 1 "may offer its services to federal candidates and political committees at the same “low or no cost” tier that it offers to all qualified customers without making an impermissible in-kind contribution". In January 2020, Area 1 revealed a Russian government phishing campaign targeting Burisma Holdings and its subsidiaries.
Products
Area 1 Horizon provides a cloud-based service to prevent phishing campaigns, whether by nation-state or commercial actors, and to stop cyberattacks, including BEC scams, ransomware, malware, watering holes, malvertising, and other socially engineered threats. Area 1 Horizon seeks out and detects phishing attacks missed by spam filters and legacy devices. The company "maintains a network of sensors on web servers around the globe — many known to be used by state-sponsored hackers — which gives the firm a front-row seat to phishing attacks," the New York Times wrote in 2020. This approach contrasts with the strategy of training, educating, and relying on end users to recognize and report all such attacks.
Technology
Area 1 Security Horizon identifies attack indicators by specific elements of the attackers' infrastructure. Flexible enforcement platforms enable preemptive detection and disablement of targeted phishing attacks across email, web, and network, whether at the edge or in the cloud. Area 1 Security also extends protection to customers’ partners and digital ecosystem stakeholders.
References
External links
Software companies established in 2013
Companies based in California
37710931
https://en.wikipedia.org/wiki/Radeon%20Rx%20200%20series
Radeon Rx 200 series
The AMD Radeon R5/R7/R9 200 series is a family of GPUs developed by AMD. These GPUs are manufactured on a 28 nm gate-last process by TSMC or the Common Platform Alliance.
Release
The Rx 200 series was announced on September 25, 2013, at the AMD GPU14 Tech Day event. Pre-orders opened on October 3, and non-disclosure agreements were lifted on October 15, except for the R9 290X.
Architecture
Graphics Core Next 3 (Volcanic Islands) is found on the R9 285 (Tonga Pro) branded products. Graphics Core Next 2 (Sea Islands) is found on R7 260 (Bonaire), R7 260X (Bonaire XTX), R9 290 (Hawaii Pro), R9 290X (Hawaii XT), and R9 295X2 (Vesuvius) branded products. Graphics Core Next 1 (Southern Islands) is found on R9 270, 270X, 280, 280X, R7 240, 250, 250X, 265, and R5 240 branded products. TeraScale 2 (VLIW5) (Northern Islands or Evergreen) is found on R5 235X and lower branded products. OpenGL 4.x compliance requires support for FP64 shaders; these are implemented by emulation on some TeraScale (microarchitecture) GPUs. Vulkan 1.0 requires the GCN architecture. Vulkan 1.1 requires GCN 2nd generation or higher.
Multi-monitor support
The AMD Eyefinity-branded on-die display controllers were introduced in September 2009 in the Radeon HD 5000 series and have been present in all products since.
AMD TrueAudio
AMD TrueAudio was introduced with the AMD Radeon Rx 200 series, but can only be found on the dies of GCN 2/3 products.
Video acceleration
AMD's SIP cores for video acceleration, Unified Video Decoder and Video Coding Engine, are found on all GPUs and are supported by AMD Catalyst and by the free and open-source graphics device driver.
Use in cryptocurrency mining
During 2014 the Radeon R9 200 series GPUs offered a very competitive price for use in cryptocurrency mining. This led to limited supply and huge price increases of up to 164% over the MSRP in Q4 of 2013 and Q1 of 2014. Since Q2 of 2018, availability of AMD GPUs, as well as pricing, has in most cases returned to normal.
CrossFire Compatibility
Because many of the products in the range are rebadged versions of Radeon HD products, they remain compatible with the original versions when used in CrossFire mode. For example, the Radeon HD 7770 and Radeon R7 250X both use the 'Cape Verde XT' chip, so they have identical specifications and will work in CrossFire mode. This provides a useful upgrade option for anyone who owns an existing Radeon HD card and has a CrossFire-compatible motherboard.
Virtual super resolution support
Starting with the driver release candidate version v14.501-141112a-177751E, officially named Catalyst Omega, AMD's driver release introduced VSR on the R9 285 and R9 290 series graphics cards. This feature allows users to run games with higher image quality by rendering frames at above native resolution; each frame is then downsampled to native resolution. This process is an alternative to supersampling, which is not supported by all games. Virtual super resolution is similar to Dynamic Super Resolution, a feature available on competing Nvidia graphics cards, but trades flexibility for increased performance. VSR can run at resolutions of up to 2048 x 1536 at a 120 Hz refresh rate or 3840 x 2400 at 60 Hz.
OpenCL (API)
OpenCL accelerates many scientific software packages, in some cases by a factor of 10 to 100 or more compared with running on the CPU alone. OpenCL 1.0 to 1.2 are supported on all chips with the TeraScale and GCN architectures. OpenCL 2.0 is supported with GCN 2nd generation or higher. For OpenCL 2.1 and 2.2, only driver updates are necessary for OpenCL 2.0 conformant cards.
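The OpenCL level actually exposed by a given card depends on the installed driver and can be checked at runtime. The sketch below uses the third-party pyopencl package to list platforms and devices and print the OpenCL version string each one reports; it assumes a working OpenCL driver (such as AMD's) is installed, and the exact output naturally varies with the hardware.

```python
# Minimal sketch: query the OpenCL version reported by each installed
# platform/device. Requires the third-party "pyopencl" package and a
# working OpenCL driver; output varies with the installed hardware.
import pyopencl as cl

for platform in cl.get_platforms():
    print(f"Platform: {platform.name} ({platform.version})")
    for device in platform.get_devices():
        # The version string indicates which OpenCL level the driver exposes,
        # e.g. "OpenCL 2.0 ..." on a GCN 2nd generation card with a suitable driver.
        print(f"  Device: {device.name} -> {device.version}")
```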
Vulkan (API) API Vulkan 1.0 is supported for all GCN architecture cards. Vulkan 1.2 requires GCN 2nd gen or higher with the Adrenalin 20.1 and Linux Mesa 20.0 drivers and newer. Desktop models Radeon R9 295X2 The Radeon R9 295X2 was released on April 21, 2014. It is a dual GPU card. Press samples were shipped in a metal case. It is the first reference card to utilize a closed looped liquid cooler. At 11.5 teraflops of computing power, the R9 295X2 was the most powerful dual-gpu consumer-oriented card in the world, until it was succeeded by the Radeon Pro Duo on April 26, 2016, which is essentially a combination of two R9 Fury X (Fiji XT) GPUs on a single card. The R9 295x2 has essentially two R9 290x (Hawaii XT) GPUs each with 4GB GDDR5 VRAM. Radeon R9 290X The Radeon R9 290X, codename "Hawaii XT", was released on October 24, 2013 and features 2816 Stream Processors, 176 TMUs, 64 ROPs, 512-bit wide buses, 44 CUs (compute units) and 8 ACE units. The R9 290X had a launch price of $549. Radeon R9 290 The Radeon R9 290 and R9 290X were announced on September 25, 2013. The R9 290 is based on AMD's Hawaii Pro chip and R9 290X on Hawaii XT. R9 290 and R9 290X will support AMD TrueAudio, Mantle, Direct3D 11.2, and bridge-free Crossfire technology using XDMA. A limited "Battlefield 4 Edition" pre-order bundle of R9 290X that includes Battlefield 4 was available on October 3, 2013, with reported quantity being 8,000. The R9 290 had a launch price of $399. Radeon R9 285 The Radeon R9 285 was announced on August 23, 2014 at AMD's 30 years of graphics celebration and released September 2, 2014. It was the first card to feature AMD's GCN 3 microarchitecture, in the form of a Tonga-series GPU. Radeon R9 280X Radeon R9 280X was announced on September 25, 2013. With a launch price of $299, it is based on the Tahiti XTL chip, being a slightly upgraded, rebranded Radeon HD 7970 GHz Edition. Radeon R9 280 Radeon R9 280 was announced on March 4, 2014. With a launch MSRP set at $279, it is based on a rebranded Radeon HD 7950 with a slightly increased boost clock speed, from 925 MHz to 933 MHz. Radeon R9 270X Radeon R9 270X was announced on September 25, 2013. With a launch price of $199, it is based on the Curaçao XT chip, which was formerly called Pitcairn. It is speculated to be faster than a Radeon HD 7870 GHz edition. Radeon R9 270 has a launch price of $179. Radeon R7 260X Radeon R7 260X was announced on September 25, 2013. With a launch price of $139, it is based on the Bonaire XTX chip, a faster iteration of Bonaire XT that the Radeon HD 7790 is based on. It will have 2 GB of GDDR5 memory as standard and will also feature TrueAudio, on-chip audio DSP based on Tensilica HiFi EP architecture. The stock card features a boost clock of 1100 MHz. It has 2 GBs of GDDR5 memory with a 6.5 GHz memory clock over a 128-bit Interface. The 260X will draw around 115 W in typical use. Radeon R7 250 Radeon R7 250 was announced on September 25, 2013. It has a launch price of $89. The card is based on the Oland core with 384 GCN cores. On February 10, 2014, AMD announced the R7 250X which is based on the Cape Verde GPU with 640 GCN cores and an MSRP of $99. Chipset table Desktop models Mobile models Radeon Feature Matrix Graphics device drivers AMD's proprietary graphics device driver "Catalyst" AMD Catalyst is being developed for Microsoft Windows and Linux. As of July 2014, other operating system are not officially supported. 
This may be different for the AMD FirePro brand, which is based on identical hardware but features OpenGL-certified graphics device drivers. AMD Catalyst, of course, supports all features advertised for the Radeon brand.
Free and open-source graphics device driver "Radeon"
The free and open-source drivers are primarily developed on Linux and for Linux, but have been ported to other operating systems as well. Each driver is composed of five parts:
the Linux kernel component DRM
the Linux kernel component KMS driver: basically the device driver for the display controller
the user-space component libDRM
the user-space component in Mesa 3D
a special and distinct 2D graphics device driver for X.Org Server, which is finally about to be replaced by Glamor
The free and open-source "Radeon" graphics driver supports most of the features implemented in the Radeon line of GPUs. Unlike the nouveau project for Nvidia graphics cards, the open-source "Radeon" drivers are not reverse engineered, but are based on documentation released by AMD.
See also
AMD FirePro
AMD FireMV
AMD FireStream
List of AMD graphics processing units
References
External links
TechPowerUp! GPU Database
AMD Radeon R9 Series Graphics
AMD Radeon R7 Series Graphics
GPU14 Tech Day Public Presentation.pdf
AMD Announces FirePro W9100
Advanced Micro Devices graphics cards
Computer-related introductions in 2013
Graphics processing units
Graphics cards
37194181
https://en.wikipedia.org/wiki/Science%20DMZ%20Network%20Architecture
Science DMZ Network Architecture
The term Science DMZ refers to a computer subnetwork that is structured to be secure, but without the performance limits that would otherwise result from passing data through a stateful firewall. The Science DMZ is designed to handle high-volume data transfers, typical of scientific and high-performance computing, by creating a special DMZ to accommodate those transfers. It is typically deployed at or near the local network perimeter, and is optimized for a moderate number of high-speed flows, rather than for general-purpose business systems or enterprise computing. The term Science DMZ was coined by collaborators at the US Department of Energy's ESnet in 2010. A number of universities and laboratories have deployed or are deploying a Science DMZ. In 2012 the National Science Foundation funded the creation or improvement of Science DMZs on several university campuses in the United States. The Science DMZ is a network architecture to support Big Data. The so-called information explosion has been discussed since the mid-1960s, and more recently the term data deluge has been used to describe the exponential growth in many types of data sets. These huge data sets often need to be copied from one location to another using the Internet. The movement of data sets of this magnitude in a reasonable amount of time should be possible on modern networks. For example, it should take less than 4 hours to transfer 10 terabytes of data over a 10 Gigabit Ethernet network path, assuming disk performance is adequate. The problem is that this requires networks that are free from packet loss and from middleboxes such as traffic shapers or firewalls that slow network performance.
Stateful firewalls
Most businesses and other institutions use a firewall to protect their internal network from malicious attacks originating from outside. All traffic between the internal network and the external Internet must pass through the firewall, which discards traffic likely to be harmful. A stateful firewall tracks the state of each logical connection passing through it, and rejects data packets inappropriate for the state of the connection. For example, a website would not be allowed to send a page to a computer on the internal network unless the computer had requested it. This requires the firewall to keep track of the pages recently requested, and to match requests with responses. A firewall must also analyze network traffic in much more detail than other networking components, such as routers and switches. Routers only have to deal with the network layer, but firewalls must also process the transport and application layers. All this additional processing takes time and limits network throughput. While routers and most other networking components can handle speeds of 100 billion bits per second (100 Gbit/s), firewalls limit traffic to about 1 Gbit/s, which is unacceptable for passing large amounts of scientific data. Modern firewalls can leverage custom hardware (ASICs) to accelerate traffic and inspection, in order to achieve higher throughput. This can present an alternative to Science DMZs and allows in-place inspection through existing firewalls, as long as unified threat management (UTM) inspection is disabled. While a stateful firewall may be necessary for critical business data, such as financial records, credit cards, employment data, student grades, and trade secrets, science data requires less protection, because copies usually exist in multiple locations and there is less economic incentive to tamper with them.
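The effect of firewall throughput on large transfers can be made concrete with simple arithmetic. The sketch below compares the time needed to move the 10-terabyte data set mentioned above at the full 10 Gigabit Ethernet line rate and at the roughly 1 Gbit/s that a stateful firewall may sustain; the figures ignore protocol overhead, packet loss, and disk limits, so they are best-case estimates.

```python
# Back-of-the-envelope estimate: time to move a large data set at a given
# effective rate. Ignores protocol overhead, packet loss and disk speed.

def transfer_hours(size_terabytes, rate_gbit_per_s):
    bits = size_terabytes * 1e12 * 8          # terabytes -> bits
    seconds = bits / (rate_gbit_per_s * 1e9)  # bits / (bits per second)
    return seconds / 3600

print(transfer_hours(10, 10))  # ~2.2 hours on an uncongested 10 GbE path
print(transfer_hours(10, 1))   # ~22 hours if a firewall caps the flow near 1 Gbit/s
```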
DMZ A firewall must restrict access to the internal network but allow external access to services offered to the public, such as web servers on the internal network. This is usually accomplished by creating a separate internal network called a DMZ, a play on the term “demilitarized zone." External devices are allowed to access devices in the DMZ. Devices in the DMZ are usually maintained more carefully to reduce their vulnerability to malware. Hardened devices are sometimes called bastion hosts. The Science DMZ takes the DMZ idea one step farther, by moving high performance computing into its own DMZ. Specially configured routers pass science data directly to or from designated devices on an internal network, thereby creating a virtual DMZ. Security is maintained by setting access control lists (ACLs) in the routers to only allow traffic to/from particular sources and destinations. Security is further enhanced by using an intrusion detection system (IDS) to monitor traffic, and look for indications of attack. When an attack is detected, the IDS can automatically update router tables, resulting in what some call a Remotely Triggered BlackHole (RTBH). Justification The Science DMZ provides a well-configured location for the networking, systems, and security infrastructure that supports high-performance data movement. In data-intensive science environments, data sets have outgrown portable media, and the default configurations used by many equipment and software vendors are inadequate for high performance applications. The components of the Science DMZ are specifically configured to support high performance applications, and to facilitate the rapid diagnosis of performance problems. Without the deployment of dedicated infrastructure, it is often impossible to achieve acceptable performance. Simply increasing network bandwidth is usually not good enough, as performance problems are caused by many factors, ranging from underpowered firewalls to dirty fiber optics to untuned operating systems. The Science DMZ is the codification of a set of shared best practices—concepts that have been developed over the years—from the scientific networking and systems community. The Science DMZ model describes the essential components of high-performance data transfer infrastructure in a way that is accessible to non-experts and scalable across any size of institution or experiment. Components The primary components of a Science DMZ are: A high performance Data Transfer Node (DTN) running parallel data transfer tools such as GridFTP A network performance monitoring host, such as perfSONAR A high performance router/switch Optional Science DMZ components include: Support for layer-2 Multiprotocol Label Switching (MPLS) Virtual Private Networks (VPN) Support for Software Defined Networking See also Big Data perfSONAR References External links ESnet web pages describing the Science DMZ NSF Program funding Science DMZs Announcement on Ohio State University Science DMZ NSF Solicitation on funding to build Science DMZs University of Utah's Science DMZ Computer network security Network architecture Network performance
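The access control described in the Components section, where routers only pass traffic between designated data transfer nodes and approved remote endpoints, can be illustrated conceptually. The addresses, port numbers, and rule structure below are purely illustrative; real deployments express this as ACLs configured on the border router, not as application code.

```python
# Conceptual illustration of a Science DMZ allow-list: only flows between the
# designated Data Transfer Node and approved collaborator sites are admitted.
# Addresses and prefixes are made up for illustration.

ALLOWED_FLOWS = {
    # (local DTN address, remote collaborator prefix, destination port)
    ("192.0.2.10", "198.51.100.", 2811),  # e.g. a GridFTP control channel
    ("192.0.2.10", "203.0.113.", 443),
}

def permit(local_addr, remote_addr, port):
    """Return True if a flow matches an allow-list entry; otherwise it is dropped."""
    return any(
        local_addr == dtn and remote_addr.startswith(prefix) and port == allowed_port
        for dtn, prefix, allowed_port in ALLOWED_FLOWS
    )

print(permit("192.0.2.10", "198.51.100.7", 2811))  # True: approved transfer
print(permit("192.0.2.10", "192.0.2.99", 22))      # False: not on the allow list
```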
41513227
https://en.wikipedia.org/wiki/Universal%20Windows%20Platform%20apps
Universal Windows Platform apps
Universal Windows Platform (UWP) apps (formerly Windows Store apps and Metro-style apps) are applications that can be used across all compatible Microsoft Windows devices, including personal computers (PCs), tablets, smartphones, Xbox One, Microsoft HoloLens, and Internet of Things. UWP software is primarily purchased and downloaded via the Microsoft Store. Nomenclature Starting with Windows 10, Windows initially used "Windows app" to refer to a UWP app. Any app installed from Microsoft Store (formerly Windows Store) was initially "Trusted Windows Store app" and later "Trusted Microsoft Store apps." Other computer programs running on a desktop computer are "desktop apps." Starting with Windows 10 1903, Windows indiscriminately refers to all of them as "Apps." The terms "Universal Windows Platform" (or "UWP") and "UWP app" only appear on Microsoft documentation for its developers. Microsoft started to retrospectively use "Windows Runtime app" to refer to the precursors of UWP app, for which there was no unambiguous name before. In Windows 8.x Windows software first became available under the name "Metro-style apps" when the Windows Store opened in 2012 and were marketed with Windows 8. Look and feel In Windows 8.x, Metro-style apps do not run in a window. Instead, they either occupy the entire screen or are snapped to one side, in which case they occupy the entire height of the screen but only part of its width. They have no title bar, system menu, window borders or control buttons. Command interfaces like scroll bars are usually hidden at first. Menus are located in the "settings charm." Metro-style apps use the UI controls of Windows 8.x and typically follow Windows 8.x UI guidelines, such as horizontal scrolling and the inclusion of edge-UIs, like the app bar. In response to criticism from customers, in Windows 8.1, a title bar is present but hidden unless users move the mouse cursor to the top of the screen. The "hamburger" menu button on their title bar gives access to the charms. Distribution and licensing For most users, the only point of entry of Metro-style apps is Windows Store. Enterprises operating a Windows domain infrastructure may enter into a contract with Microsoft that allows them to sideload their line-of-business Metro-style apps, circumventing Windows Store. Also, major web browser vendors such as Google and Mozilla Foundation are selectively exempted from this rule; they are allowed to circumvent Microsoft guidelines and Windows Store and run a Metro-style version of themselves if the user chooses to make their product the default web browser. Metro-style apps are the only third-party apps that run on Windows RT. Traditional third-party apps do not run on this operating system. Multiple copies Before Windows 8, computer programs were identified by their static computer icons. Windows taskbar was responsible for representing every app that had a window when they run. Metro-style apps, however, are identified by their "tiles" that can show their icon and also other dynamic contents. In addition, in Windows 8 and Windows 8.1 RTM, they are not shown on the Windows taskbar when they run, but on a dedicated app switcher on the left side of the screen. Windows 8.1 Update added taskbar icons for Metro-style apps. There is no set limit on how many copies of desktop apps can run simultaneously. For example, one user may run as many copies of programs such as Notepad, Paint or Firefox as the system resources support. 
(Some desktop apps, such as Windows Media Player, are designed to allow only a single instance, but this is not enforced by the operating system.) However, in Windows 8, only one copy of a Metro-style app may run at any given time; invoking the app brings the running instance to the front. True multi-instancing of these apps was not available until Windows 10 version 1803 (released in May 2018).
In Windows 10
Windows 10 brings significant changes to how UWP apps look and work.
Look and feel
How UWP apps look depends on the app itself. UWP apps built specifically for Windows 10 typically have a distinct look and feel, as they use new UI controls that look different from those of previous versions of Windows. The exception to this is apps that use custom UI, which is especially the case with video games. Apps designed for Windows 8.x look significantly different from those designed for Windows 10. UWP apps can also look almost identical to traditional desktop apps, using the same legacy UI controls from Windows versions dating back to Windows 95. These are legacy desktop apps that have been converted to UWP apps and are distributed using the APPX file format.
Multitasking
In Windows 10, most UWP apps, even those designed for Windows 8.x, run in floating windows, and users use the Windows taskbar and Task View to switch between both UWP apps and desktop apps. Windows 10 also introduced "Continuum" or "Tablet Mode". This mode is by default disabled on desktop computers and enabled on tablet computers, but desktop users can switch it on or off manually. When Tablet Mode is off, apps may have resizable windows and visible title bars. When Tablet Mode is enabled, resizable apps use a windowing system similar to that of Metro-style apps on Windows 8.x, in that they are forced to either occupy the whole screen or be snapped to one side. UWP apps in Windows 10 can open in multiple windows. Microsoft Edge, Calculator, and Photos are examples of apps that allow this. Windows 10 v1803 (released in May 2018) added true multi-instancing capabilities, so that multiple independent copies of a UWP app can run.
Licensing and distribution
UWP apps can be downloaded from Windows Store or sideloaded from another device. The sideloading requirements were reduced significantly from Windows 8.x to 10, but the app must still be signed by a trusted digital certificate that chains to a root certificate.
Lifecycle
Metro-style apps are suspended when they are closed; suspended apps are terminated automatically as needed by a Windows app manager. Dynamic tiles, background components and contracts (interfaces for interacting with other apps) may require an app to be activated before a user starts it. For six years, invoking an arbitrary Metro-style app or UWP app from the command line was not supported; this feature was first introduced in Insider build 16226 of Windows 10, which was released on 21 June 2017.
Development
Windows Runtime
Traditionally, Windows software is developed using the Windows API. Such software has access to the Windows API with no arbitrary restrictions. Developers are free to choose their own programming language and development tools. Metro-style apps can only be developed using Windows Runtime (WinRT). (Note that not every app using WinRT is a Metro-style app.) A limited subset of WinRT is also available to conventional desktop apps. Calling a forbidden API disqualifies the app from appearing on Windows Store.
Metro-style apps can only be developed using Microsoft's own development tools. According to Allen Bauer, Chief Scientist of Embarcadero Technologies, there are APIs that every computer program must call, but Microsoft has forbidden them, except when the call comes from Microsoft's own Visual C++ runtime.
Universal apps
Apps developed to work intrinsically on smartphones, personal computers, video game consoles and HoloLens are called universal apps. This is accomplished by using the universal app API, first introduced in Windows 8.1 and Windows Phone 8.1. Visual Studio 2013 with Update 2 could be used to develop these apps. Windows 10 introduced Universal Windows Platform (UWP) 10 for developing universal apps. Apps that take advantage of this platform are developed with Visual Studio 2015 or later. Older Metro-style apps for Windows 8.1, Windows Phone 8.1 or both (universal 8.1) need modifications to migrate to this platform. UWP is not distinct from Windows Runtime; rather, it is an extension of it. Universal apps no longer indicate having been written for a specific OS in their manifests; instead, they target one or more device families, e.g. desktop, mobile, Xbox or Internet of Things (IoT). They react to the capabilities that become available to the device. A universal app may run on both a small mobile phone and a tablet and provide a suitable experience. A universal app running on a mobile phone may start behaving the way it would on a tablet when the phone is connected to a monitor or a suitable docking station.
APPX
APPX is the file format used to distribute and install apps on Windows 8.x and 10, Windows Phone 8.1, Windows 10 Mobile, Xbox One, HoloLens, and Windows 10 IoT Core. Unlike legacy desktop apps, APPX is the only installation system allowed for UWP apps. It replaces the XAP file format on Windows Phone 8.1, in an attempt to unify the distribution of apps for Windows Phone and Windows 8. APPX files are only compatible with Windows Phone 8.1 and later versions, and with Windows 8 and later versions. The Windows Phone 8.x Marketplace allows users to download APPX files to an SD card and install them manually. In contrast, sideloading is prohibited on Windows 8.x unless the user has a developer license or the device is in a business domain.
Security
Traditional Windows software has the power to use and change its ecosystem however it wants. Windows user account rights, User Account Control and antivirus software attempt to keep this ability in check and notify the user when an app tries to use it, possibly for malicious purposes. Metro-style apps, however, are sandboxed and cannot permanently change the Windows ecosystem. They need permission to access hardware devices such as the webcam and microphone, and their file system access is restricted to user folders, such as My Documents. Microsoft further moderates these programs and may remove them from the Windows Store if they are discovered to have security or privacy issues.
See also
Windows App Studio
WinJS
References
External links
Index of Windows 10 apps
.NET
Computer-related introductions in 2012
Executable file formats
Windows APIs
Windows architecture
Windows technology
38501610
https://en.wikipedia.org/wiki/List%20of%20University%20of%20California%2C%20Berkeley%20alumni%20in%20science%20and%20technology
List of University of California, Berkeley alumni in science and technology
This page lists notable alumni and students of the University of California, Berkeley. Alumni who also served as faculty are listed in bold font, with degree and year. Notable faculty members are in the article List of UC Berkeley faculty. Astronauts Leroy Chiao, B.S. 1983 – astronaut, first Asian-American and ethnic Chinese person to perform a spacewalk F. Drew Gaffney, B.A. 1968 – astronaut Tamara E. Jernigan, M.S. 1985 – astronaut Don L. Lind, Ph.D. 1964 – astronaut Brian T. O'Leary, Ph.D. 1967 – astronaut Margaret Rhea Seddon, B.A. 1970 – astronaut Charles Simonyi, B.S. 1972 – fifth space tourist (also listed in section Business founders and co-founders) James van Hoften, B.S. 1966 – astronaut Rex Walheim, B.S. 1984 – astronaut, member of the "Final Four" astronauts who flew on the very last Space Shuttle flight of STS-135 Mary Weber, Ph.D. 1988 – astronaut Astronomers and space explorers William F. Ballhaus, Jr., B.S. 1967, M.S. 1968, Ph.D. 1971 – former director of NASA's Ames Research Center, president and CEO of Aerospace Corporation (also listed in "Business and entrepreneurship" section) Michael C. Malin, B.A. (physics) 1967 – astronomer, principal investigator for the camera on Mars Global Surveyor, MacArthur Fellow, founder and CEO of Malin Space Science Systems, recipient of a NASA Exceptional Scientific Achievement Medal in 2002, recipient of the 2005 Carl Sagan Memorial Award Gerry Nelson, Ph.D 1972 – inventor of the segmented mirror telescope, for which he was awarded the Kavli Prize, leading to the building of the Keck telescopes Roger J. Phillips, Ph.D. 1968 – team leader of Apollo 17 Lunar Sounder Experiment, former director of Lunar and Planetary Institute and recipient of the G K Gilbert Award and the Whipple Award. H. Paul Shuch, Ph.D. 1990 – SETI scientist Peter Smith, B.S. 1969 – principal investigator and project leader for the $420 million NASA robotic explorer Phoenix, which physically confirmed the presence of water on the planet Mars for the first time David J. Schlegel, Ph.D. 1995 – pioneered the largest dust maps of the Universe, used to map the expansion rate of the Universe to more than 10 billion light years, recipient of the Lawrence Award Joel Stebbins, Ph.D. Physics 1903 – pioneered photoelectric photometry in astronomy, Royal Astronomical Society Gold Medal (1950), Henry Draper Medal (1915), Rumford Prize (1913), namesake of asteroid 2300 Stebbins and the moon crater Stebbins Charles Bruce Stephenson, Ph.D. 1958 – astronomer Theodore Van Zelst, B.S. 1944 – co-founder of Soiltest (testing company for soil, rock, concrete, and asphalt), recipient of the 1988 ASCE's "Chicago Engineer of the Year" award, developed the swing-wing design that allows supersonic aircraft to exceed the sound barrier, developed the first mobile baggage inspection unit, and developed lunar construction and soil testing for humankind's first steps on the moon Biologists David E. Garfin, Ph.D. – biophysicist who made significant contributions to electrophoresis in both the engineering and biology communities Edmund C. Jaeger – graduate student in 1918, became a renowned naturalist Lidia Mannuzzu, Ph.D. 1990 – biologist and physiologist, inventor of the biomolecular optical sensors with Ehud Y. Isacoff and Mario Moronne Donald W. Roberts, Ph.D. 
1964 Research Professor Emeritus at Utah State University, early contributor to the idea of biological pest control Howard Schachman, professor of biochem and molecular biology Gopalan Shyamala, conducted cancer and zoology research; mother of California U.S. Senator and American Vice President-elect, Kamala Harris Computer scientists and engineers Allan Alcorn, 1971 – employee #3 at video game company Atari, electronics designer behind Atari's seminal Pong video arcade unit, and erstwhile boss of Steve Jobs at Atari Eric Allman, B.S. EECS 1977, M.S. C.S. 1980 – creator of Sendmail (mail transfer agent which delivers 70% of the email in the world); inducted into the Internet Hall of Fame Ken Arnold, B.A. CS 1985 – creator of the Curses software library, co-creator of Rogue Richard O. Buckius, Bachelor's '72 in Mechanical Engineering, Masters '73, Ph.D. '75 – Chief Operating Officer of the National Science Foundation David Chaum, Ph.D. CS 1982 – creator of the company DigiCash and the first digital currency, eCash Wen-Tsuen Chen, Ph.D. 1976 – helped establish the Taiwan Academic Network (TANet), the first Internet in Taiwan; winner of the 2011 Taylor L. Booth Education Award (also listed in Chancellors and Presidents) Wesley A. Clark, B.S. Physics 1947 – designed the first modern personal computer (LINC) George Crow, B.S. EE 1966 – one of the original computer hardware designers of the Apple Macintosh computer Alyosha Efros, Ph.D. 2003 – computer vision researcher and winner of the 2017 ACM Prize in Computing Sally Floyd. B.S. 1971, Ph.D. 1989 – invented Random Early Detection, or RED, an algorithm widely used in the internet. Andrea Frome, Ph.D. 2007 – known in the fields of computer vision, deep learning, and machine learning. John Gage, B.S. 1975 – fifth employee of Sun Microsystems, former chief researcher and vice-president of the Science Office for Sun Microsystems, current partner at venture capital firm Kleiner Perkins with Al Gore; credited with creating the phrase "the network is the computer" Gary Grossman, B.A. CS – software engineer, the "inventor of ActionScript" (the programming language utilized by Web content authors using the Adobe Flash Player platform) Jean Paul Jacob, M.S. and Ph.D. in Mathematics and Engineering (1966) – long research manager at the Almaden IBM Research Center, California; recipient of the University of California Research Leadership Award in 2003 for his 40 years of work and research development in its departments; electronic engineering degree (1960) from the Brazilian ITA Eugene Jarvis, B.S. EECS 1976 – creator of the classic Defender video arcade game; recipient of the Academy of Interactive Arts and Sciences Pioneer Award Lynne Greer Jolitz, B.A. 1989 – co-author, with husband William Jolitz, of 386BSD, which is the ancestor of FreeBSD, which in turn is an ancestor of Apple's Darwin operating system William Jolitz, B.A. 1997 – co-author, with wife Lynne Greer Jolitz, of 386BSD Spencer Kimball, B.A. CS 1996 – creator of the GIMP software Phil Lapsley, B.S. EECS 1988, M.S. EECS 1991 – co-creator of the NNTP (Network News Transfer Protocol used by Usenet newsgroups) Anthony Levandowski, B.S. Industrial Engineering 2002, M.S. IEOR 2003 – product manager of the Google driverless car; inventor of robotic motorcycle "Ghostrider" featured at the Smithsonian Institution, software developer at Google serving on the inaugural StreetView team Ed Logg, B.A. C.S. 
– engineering creator of the classic video games Asteroids, Centipede, and Gauntlet at Atari; recipient of the Academy of Interactive Arts and Sciences Pioneer Award Gordon Eugene Martin, B.S. EE 1947 – pioneering piezoelectric materials researcher for underwater sound transducers John M. Martinis, B.S., Ph.D. – first to achieve quantum supremacy Peter Mattis, B.S. CS 1997 – creator of GTK software Jack McCauley, B.S. EE and C.S. 1986 – engineer, inventor and video game developer Peter Merholz, B.A. 1993 – coined the term "blog" Ralph Merkle, B.A. 1974, M.S. 1977 – pioneer in public-key cryptography computer algorithms Jay Miner, 1959 – inventor of the Amiga personal computer Larry Nagel, BS 1969, MS 1970, PhD 1975 – IEEE Donald O. Pederson Award in Solid-State Circuits for "the development and demonstration of SPICE as a tool to design and optimize electronic circuits." Hans Reiser, B.A. 1992 – creator of the ReiserFS and Reiser4 computer filesystems Lucy Suchman, B.A. 1972, M.A. 1977, Ph.D. 1984 – Professor of Sociology, Lancaster University (UK); former research anthropologist at Xerox PARC and pioneer of human-computer interaction studies; author of Plans and Situated Actions (1987); awarded 2002 Benjamin Franklin Medal in Computer and Cognitive Science Andrew Tanenbaum, Ph.D. 1971 – computer scientist and creator of Minix, the precursor to Linux Ken Thompson, B.S., 1965; M.S., 1966 – Turing Award winner who designed and implemented the original Unix operating system Murray Turoff, B.A. Math and Physics 1958 – recipient of the Electronic Frontier Foundation's EFF Pioneer Award in 1994 for "significant and influential contributions to computer-based communications and to the empowerment of individuals in using computers"; distinguished professor emeritus at the New Jersey Institute of Technology David Wagner, M.S. 1999, Ph.D. 2000 – Professor of Computer Science; known for research in cryptography and security generally, including electronic voting Laurel van der Wal, B.S. 1949 – bioastronautics researcher William Yeager, B.A. 1964 – software developer who created the first multiple-protocol router software, which comprised the core of the first Cisco Systems IOS Ian A. Young, Ph.D. 1978 – senior fellow of Intel; co-inventor of Intel BiCMOS logic circuit family and clock design of Pentium series microprocessors from 50 MHz to 3 GHz Enrico Fermi Award John N. Bahcall, B.A. 1956 – 2003 Enrico Fermi Award for "innovative research in astrophysics leading to a revolution in understanding the properties of the elusive neutrino, the lightest known particle with mass." John S. Foster, Jr., Ph.D. 1952 – 1992 Enrico Fermi Award for "his outstanding contributions to national security, in technical leadership in the development of nuclear weapons, in leadership of Lawrence Livermore National Laboratory in its formative years, in technical leadership in the defense industry; and for excellent service and continued counsel to the government." M. Stanley Livingston, Ph.D. 1931 – 1986 Enrico Fermi Award for "his leadership contributions to the development of nuclear accelerators over a half century, from his involvement in the designing of the first cyclotrons to his role in the discovery of strong (alternating gradient) focusing, now used throughout the world for the design of nuclear accelerators and particle beams of the highest energies." Glenn T. Seaborg, Ph.D. 
1937 – 1959 Enrico Fermi Award for "discoveries of plutonium and several additional elements and for leadership in the development of nuclear chemistry and atomic energy." Charles Shank, B.S. 1965, M.S. 1966, Ph.D. 1969– director (1989-2004) of the Lawrence Berkeley National Laboratory and professor (1989-2004) of chemistry, physics, and EE CS; 2015 Fermi Award for “the seminal development of ultrafast lasers and their application in many areas of scientific research, for visionary leadership of national scientific and engineering research communities, and for exemplary service supporting the National Laboratory complex.”, Stafford L. Warren, B.A. 1918 – pioneer in nuclear medicine; first dean of the School of Medicine at UCLA; 1971 Enrico Fermi Award for "the imaginative, prescient, and vigorous efforts which made possible the early development of atomic energy so as to assure the protection of man and the environment, and for the establishment of a biomedical research program which has resulted in many substantial applications of ionizing radiation to diagnosis and treatment of disease and to the general welfare." Robert R. Wilson, B.A. 1936, Ph.D. 1940 – 1973 National Medal of Science, 1984 Enrico Fermi Award for "his outstanding contributions to physics and particle accelerator designs and construction. He was the creator and principal designer of the Fermi National Laboratory and what is, at present, the highest energy accelerator in the world. His contributions have always been characterized by the greatest ingenuity and innovation and accomplished with grace and style." Herbert York, Ph.D. 1949 – 2000 Enrico Fermi Award for "his contributions to formulating and implementing arms control policy under four Presidents; for his founding direction of the Lawrence Livermore National Laboratory and his leadership in Research and Engineering at the Department of Defense; and for his publications analyzing and explaining these complex issues with clarity and simplicity." Feynman Prize The Feynman Prize in Nanotechnology is awarded by the Foresight Institute for significant advancements in nanotechnology. The prize is named in honor of Nobel physicist Richard Feynman, whose 1959 talk '"There's Plenty of Room at the Bottom" is considered to have inspired the beginning of the field of nanotechnology. David Baker, Ph.D. 1989 – biochemist and computation biologist, professor at the University of Washington, known for protein structure prediction distributed computing project Rosetta@home and the video game Foldit, recipient of the 2004 Feynman Prize Marvin L. Cohen, B.A. Physics 1957 – professor of Physics at UC Berkeley, 2003 Feynman Prize Steven Gwon Sheng Louie, Ph.D. 1976 – computational condensed-matter physicist, professor of Physics at UC Berkeley, 2003 Feynman Prize Alex Zettl, B.A. 1978 – Professor of Condensed Matter Physics and Materials Science at UC Berkeley, Senior Scientist, Material Sciences Division, Lawrence Berkeley National Laboratory, recipient of the 2013 Feynman Prize Mathematicians and physicists David Bohm, Ph.D. 1943 – founded pilot-wave theory of quantum mechanics, also known as Bohmian mechanics Edward Condon, Ph.D. 1926 – pioneer in quantum physics, director of the National Bureau of Standards, president of the American Physical Society Marc Culler Ph.D. 1978 – mathematician working in geometric group theory and low-dimensional topology George Dantzig, Ph.D. 
1946 – father of linear programming, created the simplex algorithm Andreas Floer, mathematician, inventor of Floer homology Albert Ghiorso, B.S. EE 1937 – co-discoverer of twelve chemical elements such as americium, berkelium, and californium Edward Ginzton, B.S. 1936, M.S. 1937 – recipient of the 1969 IEEE Medal of Honor, namesake of the Ginzton Laboratory at Stanford University Michio Kaku, Ph.D. 1972 – theoretical physicist, co-creator of string field theory, author of the New York Times bestsellers Hyperspace and Physics of the Impossible, radio host of Science Fantastic Joseph W. Kennedy, Ph.D. 1939 – codiscoverer of the element plutonium; later, professor and head of the department of chemistry at Washington University in St. Louis Arthur Scott King, Ph.D. 1903 – first ever Ph.D. in physics from this university Robyn Millan, B.A. 1995, M.A. 1999, Ph.D. 2002 – experimental physicist known for work on Earth's radiation belts John H. Schwarz, Ph.D. 1966 – theoretical physicist, one of the founders of superstring theory Physicians and Allied Medical Specialists Zubin Damania (born 1973), physician, comedian, internet personality, musician, and founder of Turntable Health Madhu Pai, PhD in epidemiology at University of California, Berkeley, is the Canada Research Chair of Epidemiology and Global Health at McGill University. Helen B. Taussig, B.A. 1921 – cardiologist, namesake of Blalock–Taussig shunt for blue baby syndrome; recipient of 1964 Medal of Freedom from President Lyndon Johnson; first female president of the American Heart Association; namesake of the "Helen B. Taussig Children's Pediatric Cardiac Center" at Johns Hopkins University; namesake of the Helen B. Taussig College at the Johns Hopkins University School of Medicine Other Hal Anger, B.S. 1943 – inventor of the scintillation camera (known as the Anger camera), pioneer in nuclear medicine Mary Kalin Arroyo, Ph.D. 1971 in Botany – professor, University of Chile Arlene Blum, Ph.D. 1971 in Chemistry – Executive Director of the Green Science Policy Institute, author, mountaineer who lead an all-woman ascent of Annapurna Michael J. Carey, B.S. 1983 – technical director at BEA Systems, member of the National Academy of Engineering Alan H. Coogan, M.A. 1957 – geologist specializing in applied sedimentary geology Richard M. Eakin, B.A. 1931, Ph.D. 1935 – Professor of Zoology, known for lecturing dressed as famous scientists Glen Edwards, B.S. 1941 – U.S. Air Force test pilot, namesake of Edwards Air Force Base Lillian Moller Gilbreth, B.A. 1900, M.A. 1902 – industrial/organizational psychologist along with her husband Frank Bunker Gilbreth who researched industrial worker efficiency; first woman member of the American Society of Mechanical Engineers; she and her husband were the basis of the books Cheaper by the Dozen and Belles on Their Toes, which were written by their children; commemorated on a United States Postal Service stamp in 1984; portrayed by Myrna Loy in the 1950 film Cheaper by the Dozen Maurice K. Goddard, M.S. 1938 – former secretary of the Pennsylvania Department of Conservation and Natural Resources, a driving force in the creation of 45 Pennsylvania state parks during his 24 years in office Charles Scott Haley, B.S. 1907 – was an expert in the field of placer gold deposits. Charles F. Harbison, B.A. 1933 – entomologist and the curator of entomology at the San Diego Natural History Museum Denham Harman, B.S. Chemistry, M.S. Chemistry, Ph.D. Chemistry 1943 – father of the free-radical theory of aging Dorothy M. 
Horstmann B.S. 1936 – virologist who made important discoveries about polio Susan Hough, B.A. 1982 – seismologist and author Harvey Itano, B.S. 1942 – professor of pathology at the University of California, San Diego, first Japanese American elected to the National Academy of Sciences, and pioneering researcher in sickle cell anemia Hope Jahren, Ph.D. 1996 in soil science – geobotanist and geochemist Richard F. Johnston – ornithologist, academic and author David Julius, Ph.D. 1984 – awarded Breakthrough Prize for discovering molecules, cells, and mechanisms underlying pain sensation Greg Kasavin – video game developer and former editor of GameSpot Ancel Keys, B.A. 1925, M.S. 1928, Ph.D. 1930 – originator of the low fat diet for cardiovascular disease, and the Keys Equation John Augustus Larson, Ph.D. 1920 – inventor of the modern lie detector Jane McGonigal, M.A., 2003, Ph.D. 2006 in performance studies – game designer and games researcher; named one of the world's top innovators under the age of 35 by MIT's Technology Review in 2006 Margaret Melhase, B.S. – co-discoverer of Caesium-137 Anna María Nápoles, M.P.H., Ph.D. – behavioral epidemiologist and science administrator Roger Revelle, Ph.D. 1936 – one of the "grandfathers" (with Hans Suess) of the global warming hypothesis (but later wrote "the scientific base for a greenhouse warming is too uncertain to justify drastic action at this time"), "father" of the University of California, San Diego; founder of the Center for Population Studies at Harvard University where he mentored undergraduate Al Gore Loren L. Ryder, B.S. Physics 1924 – invented the use of magnetic tape in the sound of films, recipient of five Academy Awards for his technical expertise Zdenka Samish, M.A. 1933 – Czech-Israeli food technology researcher; one of first agricultural researchers in pre-state Israel Carol Shaw, B.S. Engineering, M.A. C.S. – first woman video game designer Milicent Shinn, Ph.D. 1898 – child psychologist and author, first woman to earn a doctorate at Berkeley Tiffany Shlain, B.A. 1992 – founder of Webby Awards, filmmaker Simon Schwartzman, Ph.D. 1973 – recipient of the Brazilian Order of Scientific Merit Oktay Sinanoğlu, B.S. 1956 – the "Turkish Einstein"; professor of chemistry and molecular biophysics and biochemistry at Yale University Tracy I. Storer, B.S. 1912, M.S. 1913, Ph.D. 1921 – zoologist specializing in California wildlife, founded Department of Zoology at U.C. Davis Keith Tantlinger, B.S. 1941 – mechanical engineer, developer of the modern intermodal container (including the twistlock) Jenny Y. Yang, B.S. 2001 – chemist Cher Wang, co-founder of VIA Technologies and HTC, and a pioneer of the smartphone. Gardner F. Williams, B.A. 1865, M.A. 1869 (first master's degree conferred by "College of California", aka UC/Berkeley) – first general manager of De Beers Consolidated Mines; mining engineer; wrote The Diamond Mines of South Africa; some account of their rise and development; awarded silver medal by the Royal Academy of Science in Sweden in 1905; awarded honorary doctorate of laws by the University of California in 1910 See also List of UC Berkeley faculty University of California, Berkeley School of Law References Berkeley alumni in science and technology Alumni Science
574775
https://en.wikipedia.org/wiki/Abstraction%20layer
Abstraction layer
In computing, an abstraction layer or abstraction level is a way of hiding the working details of a subsystem, allowing the separation of concerns to facilitate interoperability and platform independence. Examples of software models that use layers of abstraction include the OSI model for network protocols, and OpenGL and other graphics libraries. In computer science, an abstraction layer is a generalization of a conceptual model or algorithm, away from any specific implementation. These generalizations arise from broad similarities among various specific implementations, which are best encapsulated by models that express their shared features. The simplification provided by a good abstraction layer allows for easy reuse by distilling a useful concept or design pattern so that situations where it may be accurately applied can be quickly recognized. A layer is considered to be on top of another if it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Frequently, abstraction layers can be composed into a hierarchy of abstraction levels. The OSI model comprises seven abstraction layers. Each layer of the model encapsulates and addresses a different part of the needs of digital communications, thereby reducing the complexity of the associated engineering solutions. A famous aphorism of David Wheeler is "All problems in computer science can be solved by another level of indirection". This is often deliberately misquoted with "abstraction" substituted for "indirection". It is also sometimes misattributed to Butler Lampson. Kevlin Henney's corollary to this is, "...except for the problem of too many layers of indirection."

Computer architecture
In computer architecture, a computer system is usually represented as consisting of several abstraction levels such as:
software
programmable logic
hardware
Programmable logic is often considered part of the hardware, while the logical definitions are also sometimes seen as part of a device's software or firmware. Firmware may include only low-level software, but can also include all software, including an operating system and applications. The software layers can be further divided into hardware abstraction layers, physical and logical device drivers, repositories such as filesystems, operating system kernels, middleware, applications, and others. A distinction can also be made between low-level programming languages such as VHDL, machine language, and assembly language, and higher-level compiled, interpreted, and scripting languages.

Input and output
In the Unix operating system, most types of input and output operations are considered to be streams of bytes read from a device or written to a device. This stream-of-bytes model is used for file I/O, socket I/O, and terminal I/O in order to provide device independence. To read from or write to a device at the application level, the program calls a function to open the device, which may be a real device such as a terminal or a virtual device such as a network port or a file in a file system. The device's physical characteristics are mediated by the operating system, which in turn presents an abstract interface that allows the programmer to read and write bytes from and to the device. The operating system then performs the actual transformation needed to read and write the stream of bytes to the device.

Graphics
Most graphics libraries such as OpenGL provide an abstract graphical device model as an interface.
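As an illustration, the following minimal Python sketch shows the idea of such an abstract device model; the names (GraphicsDevice, PlotterDevice, RasterDevice, draw_line) are hypothetical and do not belong to OpenGL or any real library. Application code draws through a single device-independent interface, and each backend supplies the device-specific translation.

from abc import ABC, abstractmethod

class GraphicsDevice(ABC):
    """Abstract device model: applications draw against this interface only."""

    @abstractmethod
    def draw_line(self, x1, y1, x2, y2):
        """Draw a line in device-independent coordinates."""

class PlotterDevice(GraphicsDevice):
    """Backend that translates drawing calls into pen-plotter style commands."""
    def draw_line(self, x1, y1, x2, y2):
        print(f"PEN UP; MOVE {x1},{y1}; PEN DOWN; MOVE {x2},{y2}")

class RasterDevice(GraphicsDevice):
    """Backend that translates the same call into pixel operations."""
    def __init__(self, width, height):
        self.pixels = [[0] * width for _ in range(height)]
    def draw_line(self, x1, y1, x2, y2):
        # Naive rasterization: mark pixels along the line's longer axis.
        steps = max(abs(x2 - x1), abs(y2 - y1), 1)
        for i in range(steps + 1):
            x = round(x1 + (x2 - x1) * i / steps)
            y = round(y1 + (y2 - y1) * i / steps)
            self.pixels[y][x] = 1

def draw_box(device: GraphicsDevice):
    """Application code: device-independent, works with any backend."""
    device.draw_line(0, 0, 9, 0)
    device.draw_line(9, 0, 9, 9)
    device.draw_line(9, 9, 0, 9)
    device.draw_line(0, 9, 0, 0)

draw_box(PlotterDevice())        # emits plotter-style commands
draw_box(RasterDevice(10, 10))   # sets pixels in an in-memory framebuffer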
The library is responsible for translating the commands provided by the programmer into the specific device commands needed to draw the graphical elements and objects. The specific device commands for a plotter are different from those for a CRT monitor, but the graphics library hides the implementation and device-dependent details behind an abstract interface that offers a set of primitives generally useful for drawing graphical objects.

See also
Application programming interface (API)
Application binary interface (ABI)
Compiler, a tool for abstraction between source code and machine code
Hardware abstraction
Information hiding
Layer (object-oriented design)
Protection ring
Operating system, an abstraction layer between a program and computer hardware
Software engineering

References

Computer architecture
Abstraction
33825965
https://en.wikipedia.org/wiki/University%20of%20Scranton%20buildings%20and%20landmarks
University of Scranton buildings and landmarks
The University of Scranton's 58-acre hillside campus is located in the heart of Scranton, a community of 75,000 within a greater metropolitan area of 750,000 people in northeast Pennsylvania. Founded in 1888 as St. Thomas College and elevated to university status in 1938, the university has grown and changed over time. Since 1984, the university has completed over 50 renovation projects. Over the past decade alone, the university's campus has undergone a dramatic transformation, as the school has invested more than $240 million towards campus improvements and construction, including the Loyola Science Center, the DeNaples Center, Pilarz and Montrone Halls, Condron Hall, Edward R. Leahy, Jr. Hall, and the Dionne Green. The Harry and Jeanette Weinberg Memorial Library Completed in 1992, the Harry and Jeanette Weinberg Memorial Library was designed to replace the Alumni Memorial Library, which could no longer adequately serve the growing student population or house the library's vast collections, and which lacked the wiring necessary for modernizing the library with new technological advances. More than double the size of the Alumni Memorial Library, the Weinberg Memorial Library has five floors which can seat anywhere from 700 to 1000 users at cubicles, tables, group study rooms, and lounges. It currently houses 473,830 volumes, over 15,500 electronic journals, 562,368 microform pieces and 1,709 periodical subscriptions, both current and archived. It is also home to the University Archives and Special Collections, which features many rare books, as well as university records. On the third floor, there are a number of administrative offices as well as two large classrooms which are used for classes based on learning about the library and the services it can provide. The fourth floor has a large reading room featuring a stained glass window. The fifth floor is the Scranton Heritage Room, a large open hall featuring thirty-nine panel paintings by Trevor Southey that depict art, religion, and science in the Lackawanna Valley and in the world. Renovations at the Library include the opening of multiple 24-hour study rooms, including the Pro Deo Room, the Reilly Learning Commons, and, most recently, the entire second floor. The Pro Deo Room contains a computer lab with networked PCs, two laser printers, a vending machine area, and a Java City Café. The Pro Deo Room also features a 46-inch touchscreen table PC. In order to raise the $13.3 million needed to build the Library, the University of Scranton launched the "Gateway to the Future" Fundraising Campaign. In late 1989, Harry Weinberg, a former Scranton businessman and long-time benefactor of the University of Scranton, made significant headway toward the fundraising goal by announcing a six million dollar donation to the university from the Harry and Jeanette Weinberg Foundation, with five million dollars going to the library and the other one million going to the school's Judaic Studies Institute. In order to honor the significant contribution of Mr. Weinberg, the new library was named for him and his wife. Before becoming home to the Weinberg Memorial Library, the site had once belonged to Worthington Scranton, who lived there until moving to the estate in 1899, at which point the house was converted into the Hahnemann Hospital, which occupied the site until it relocated in 1906 to the current Community Medical Center site. In 1941, Scranton donated the land to the university. 
In the 1950s, the site held the A Building barracks, which the university purchased to accommodate the increased enrollment brought by the GI Bill and used as classrooms and offices until they were demolished in 1962. Until the construction of the Weinberg Memorial Library in the 1990s, the site housed asphalt playing courts. The Patrick and Margaret DeNaples Center On January 31, 2006, the university announced plans for the DeNaples Center, a new $30,000,000 campus center that would replace Gunster Memorial Student Center and mark the university's most ambitious project in its 118-year history. In the four decades since Gunster had been constructed in 1960, the University of Scranton, as University President Father Pilarz said, "has evolved into a broadly regional, comprehensive institution with students coming from more than 30 states and more than 35 countries," and thus "has simply outgrown the 77,000 square foot Gunster Center, which was built for a time when only 228 of our total student enrollment of 2,300 lived on campus." The first floor of the building includes a grand lobby, the campus bookstore, the student mail center, commuter lockers, a Provisions on Demand (P.O.D.) convenience store, and the DeNaples Food Court, a retail dining option with seating for 250, which includes Starbucks Coffee, Chick-fil-A, and Quiznos among other options. The second floor offers a fireplace lounge, offices for Student Affairs, University Ministries and the Student Forum. The Student Forum contains a computer lab for students to use, as well as student space with couches and tables. The Student Forum is home to the Center for Student Engagement, including the offices for the University of Scranton Programming Board (USPB), the Aquinas newspaper, the Windhover yearbook, the Jane Kopas Women's Center, the Multicultural Center, Student Government, and Community Outreach. The third floor serves as the primary dining space in the building and has seating for 800. The fourth floor includes a subdividable 7,000 sq. ft. ballroom with dinner seating for 425 and lecture seating for more than 700, three multipurpose meeting rooms, and the Ann and Leo Moscovitz Theater. On September 13, 2009, the fourth floor ballroom was dedicated in honor of Rev. Bernard R. McIllhenny, S.J., who served as headmaster at Scranton Prep from 1958 to 1966 and dean of admissions at the university from 1966 to 1997. The DeNaples Center was the first building on campus designed and constructed to achieve LEED certification, part of the university's sustainability initiatives; it received the certification in February 2009. LEED stands for Leadership in Energy and Environmental Design, a cutting-edge system for certifying design, construction and operations of "green" buildings, coordinated by the U.S. Green Building Council. The DeNaples Center is named in honor of the late Patrick and Margaret DeNaples, the parents of Louis DeNaples Sr., a local business owner, active community volunteer and philanthropist, former university board member, and reputed organized crime associate. The DeNaples Center was dedicated in February 2008. Upon the completion and opening of the DeNaples Center, the Gunster Memorial Student Center was demolished and was replaced by the Dionne Green, a large green space located directly in front of the DeNaples Center. 
The University Commons For twenty-five years, there had been an effort by the University of Scranton to close the 900 and 1000 blocks of Linden Street which ran through the school's campus. In 1980, the improvement project was actualized. The Commons project was intended to create a more attractive, park-like atmosphere on the campus and to eliminate the safety hazards associated with pedestrian and vehicle traffic. With that new space, the university hoped to create a 20-foot-wide brick walkway, trees, benches, a water fountain, and patio area in addition to developing the area with landscaping. The University Commons proposal was approved by the Scranton City Council on December 20, 1978. Construction on the project was begun on June 2, 1980, as parts of Linden Street were removed. The project was completed around November 1980 and dedication ceremonies were held in December 1980. Currently, it serves as the main walkway through the university's campus. Royal Way In 1991, the University Commons was extended on the 300 block of Quincy Avenue between Linden and Mulberry Streets, which had been closed to vehicular traffic and owned by the university since 1987. This pedestrian pathway, named Royal Way, serves as an official entrance to the university and the GLM (Gannon-Lavis-McCormick) student residences. At the time of its construction, the 24-foot-wide Royal Way was paved in z-brick and featured landscaping with trees and shrubs. The Mulberry Street entrance to the Royal Way featured a campus gate, a gift from the University of Scranton Classes of 1985, 1990 and 1991, and the opposing terminus was Metanoia, the bronze sculpture of St. Ignatius by Gerard Baut. The sculpture has since been moved to the opposite side of the University Commons, in front of the Long Center. Current academic buildings Alumni Memorial Hall Completed in 1960, the two-story building, formerly called Alumni Memorial Library, was designed to hold 150,000 volumes; the collection at the time numbered approximately 62,000 volumes. It also had study space for approximately 500 students. The split-level design also included conference rooms, a music room, a visual aid room, microfilm facilities, and a smoking lounge. The buff iron-spot building was considered cutting edge at the time, with glare-reducing thermo-pane glass, noise-reducing solid brick walls, radiant heating and cooling, and humidity control. Although originally estimated at $750,000, overall construction costs were approximately $806,000 after complications occurred when a massive mining cavity, complete with a network of surrounding tunnels, was discovered to lie only forty feet below the surface of the building site. Using a digging rig brought in from Texas, contractors sunk 33 steel casings into the ground, each more than 40 feet long, and then poured concrete through them to form pillars in order to support the structure. To raise money for the construction, a fundraising campaign led by Judge James F. Brady sought individual contributions from each of the university's alumni. The building was extensively renovated in 1993 after the completion of the new Weinberg Memorial Library. 
No longer needed to house the university's book collection or to serve as a study space for students, Alumni Memorial Hall was converted to house, on the second floor, the Psychology Department, which had formerly been located in O’Hara Hall, as well as the Division of Planning and Information Resources, formerly known as the University Computing and Data Services Center. The new location in Alumni Memorial Hall "significantly enhance[d] educational and research facilities" for the Psychology Department, as John Norcross, chairman of the Psychology Department, remarked. Brennan Hall Completed in 2000, Brennan Hall houses the departments of the Arthur J. Kania School of Management, or KSOM. The five-story, 71,000-square-foot building, located on the east side of Madison Avenue, features nine classrooms, seminar rooms, offices, a 140-seat auditorium, a quiet study area, an advising center, board rooms, and an Executive Education Center. The classrooms are located on the first two floors of Brennan Hall. Two of the nine classrooms are two-tiered case-study rooms equipped for video teleconferencing. Two other classrooms are computer rooms, while the rest are traditional classrooms. In 2008, the university dedicated one of Brennan Hall's classrooms. The Jack and Jean Blackledge Sweeney Classroom on the first floor honors Jack Sweeney '61, the retired president and co-founder of Special Defense Systems in Dunmore, a member of the Pride, Passion, Promise Campaign Executive Committee, and an active University of Scranton alumnus. The first floor also contains the Irwin E. Alperin Financial Center, which was opened in 2007. The Alperin Center was designed to simulate a stock market trading floor, complete with an electronic ticker and data displays, 40 computers, a surround sound system, conference facilities, and a network of specialized software designed to support the Kania School business curriculum with simulation capabilities and faculty-student research on financial and commodity markets. The third and fourth floors house faculty offices, departmental offices, the dean's office, and conference rooms. There is a behavioral lab for teaching and research purposes, meeting and storage places for clubs, and an MBA lounge that will include locker space for master's degree students. The fifth floor houses the Executive Education Center. The Executive Center includes five main areas: a dining room, a board room, a meeting room, a large reception area, and an auditorium on the second floor. The Executive Education Center provides technologically advanced conference space for the university, and businesses and organizations throughout northeastern Pennsylvania. In 2005, it was named the Joseph M. McShane, S.J., Executive Center. The Pearn Auditorium, which seats 140, serves as a gathering space for various lectures, presentations and community events. Dedicated in 2008, the James F. Pearn Auditorium on the second floor of Brennan Hall is named for the late father of Frank Pearn '83, the chief administrative officer of the Mergers and Acquisitions Division of Lehman Brothers, the chair of the university's Economic Strength Committee of the board of trustees, and a member of the Campaign Executive Committee. Dedicated in 2008, the Rose Room, located on the fifth floor of Brennan Hall, is used for lectures, dinners, luncheons, seminars, and other campus events. It can accommodate more than 200 people. 
It honors Harry Rose '65, the president and chief executive officer of The Rose Group, a restaurant management company, a member of the university's board of trustees, and a member of the Campaign Executive Committee. The Executive Center also contains a 50-seat board room which is used by various governing boards of the university, including the board of trustees, University Council and University Senate. In 2003, the University of Scranton named the board room in honor of PNC Bank to recognize a significant grant from the PNC Foundation for the construction of Brennan Hall and to acknowledge the support PNC has consistently provided to the university. Additional facilities of the Executive Center, which is available to organizations outside the university, include a lobby and reception area, and a meeting room accommodating 20 people. Financed by the Campaign of Scranton, a $35 million capital fundraising effort, Brennan Hall cost $11.5 million to construct. The funds raised to build Brennan Hall included a $3.5 million gift from alumnus John E. Brennan ‘68 and $1 million of a $4 million gift from alumnus Arthur J. Kania ‘53, for whom the School of Management is named, with additional Campaign funds coming from alumni, friends of the university, corporations, and foundations. In order to recognize Brennan's generous contribution to the university, the new building was named in his honor. John E. Brennan is the president of Activated Communications, New York City; a director and vice-chairman of the Board of Southern Union Company; a member of the board of directors for Spectrum Signal Processing; and a founder of Metro Mobile CTS, Inc., and served as its president and chief operating officer until its sale to Bell Atlantic Corp. Ciszek Hall Ciszek Hall, formerly known as the Center for Eastern Christian Studies, was built as an ecumenical and academic institute designed to promote knowledge about and understanding of the religious and cultural traditions of Eastern Christianity. In addition to the Byzantine Rite chapel in the building, the center was designed to house a 15,000-volume library, office, social area, and a cloister garden. Construction was begun in 1987 and completed later that year. The Center for Eastern Christian Studies was renamed Ciszek in 2005 in the memory of Fr. Walter Ciszek, S.J., a native of northeastern Pennsylvania and a candidate for sainthood who spent twenty-three years ministering in Soviet prisons and the labor camps of Siberia. Currently, Cisek Hall houses the university's Office of Career Services, a chapel which celebrates service in the Byzantine Rite, and a library containing 15,000 books. Edward R. Leahy, Jr. Hall In November 2013, the university broke ground on its newest building, the 111,500-square-foot, eight-story rehabilitation center designed to house the departments of Exercise Science, Occupational Therapy, and Physical Therapy. Leahy Hall contains 25 interactive rehabilitation laboratories, 9 traditional and active-learning classrooms, research facilities, multiple simulation environments, more than 50 faculty offices, and 9 group study rooms. A unified entrance for Leahy Hall and McGurrin Hall was also created, in order to promote and allow interaction between the various departments in the Panuska College of Professional Studies, the rest of which, including Nursing, Education, Counseling & Human Services, Health Administration and Human Resources, are housed in McGurrin Hall. 
At 140 feet, it is now the tallest building on the university campus. The building was designed for and was constructed in accordance with Leadership in Energy and Environmental Design (LEED) standards for certification. Leahy Hall contains 25 different laboratories, including three pediatric laboratories, focused on the physical, mental and emotional development of children. Some of its laboratories are the Human Motion Laboratory, the Strength Laboratory, the Physiology Laboratory, the Human Anatomy Laboratory, the Active Learning Laboratory, the Body Composition Laboratory, the Therapeutic Modalities and Orthopedic Physical Therapy Laboratory, the Rehabilitation and Neurological Physical Therapy Laboratory, the Pediatrics Gross Motor Laboratory, the Kinesiology and Physical Rehabilitation Occupational Therapy Laboratory, the Occupational Performance Laboratory, the Hand and Rehabilitation Laboratory, and the Pediatric and Rehabilitation Suite containing the Gross Motor Rehabilitation Lab, Fine Motor Rehabilitation Lab, and the Sensory/Snoezelen Room. Leahy Hall is located on the former site of the Scranton chapter of the Young Women's Christian Association, on the southwest corner of Jefferson Avenue and Linden Street. Originally constructed in 1907 and purchased by the University of Scranton in 1976, the YWCA was transformed into Jefferson Hall, serving as an off-campus residence for university students until the building was converted into old Leahy Hall in 1984, used to house facilities for the university's Physical Therapy and Occupational Therapy departments. Leahy Hall opened for the Fall 2015 semester and was dedicated on September 18, 2015, as Edward R. Leahy, Jr. Hall, bearing the same name as the hall it replaced to recognize and honor the Leahy family for their service to the university, particularly in their endowment of health care education, dating back to the early 1990s. The son of Edward and Patricia Leahy, Edward R. Leahy, Jr., was born in 1984 with cerebral palsy and several related disabilities. He died shortly before his ninth birthday in 1993. The Houlihan-McLean Center In 1986, the University of Scranton acquired the former Immanuel Baptist Church at the corner of Jefferson Avenue and Mulberry Street in order to house the school's Performance Music Program, which includes the university's orchestra, bands, and singers, as well as to serve as a site for musical and other arts performances, lectures, and special liturgies. The church was built in 1909 in the Victorian Gothic style. In 1984, the church was vacated when the congregation merged with the Bethany and Green Ridge Baptist churches before being acquired by the University of Scranton. After its purchase by the university, the building underwent extensive renovations and restoration, including plaster repair and floor refinishing, painting and carpeting, extension of the stage, electrical re-wiring, new lighting, a new sound system, refurbishing the organ, pressure cleaning and restoration of the building's masonry, and the installation of a new roof. The main floor of the building houses the Aula, a concert hall which can seat approximately 650 people; the Atrium, a large space which can be used as a recital, reception, or lecture hall that can seat 400 people and formerly served as the church's Sunday School; the Wycliffe A. Gordon Guest Artist Hospitality Suite, and the sound control room. 
The ground floor of the building includes a large rehearsal hall, small ensembles areas, a musicians' lounge, practice rooms, offices, music library, and secure instrument storage and repair areas. The Nelhybel Collection Research Room is on the top floor, along with the organ loft and organ chamber. Houlihan-McLean features an historic 1910 Austin Opus 301 symphonic pipe organ, one of only a few surviving examples of early 20th-century organ building. The 3,157 pipes, which include some as large as 17 feet long which weigh 200 pounds and others which are smaller than a pencil, were transported to Stowe, Pennsylvania to be cleaned and repaired by specialists at Patrick J. Murphy & Associates, Inc. The Houlihan-McLean Center also has a bell tower which holds a large bell, forged in 1883 by the Buckeye Bell Foundry and Van Duzen and Tift, Cincinnati, Ohio, and installed by the Immanuel Baptist congregation in the church when the Church moved into the current Houlihan-McLean Center from its former location. The bell's inscription reads, "Presented by the Choir in Memory of Mrs. C. F. Whittemore, Who Died July 7, 1883." In 1991, the university installed an electronic bell ringer, programmed to ring the bell every hour using a motor and hammer manufactured in England. The building is named for Atty. Daniel J. Houlihan and Prof. John P. McLean, two dedicated, longtime faculty members at the university. A former student of theirs was the benefactor whose contribution, made in their honor, enabled the university to acquire the structure in 1986. The Houlihan McLean Center is one of three churches the university acquired and preserved during the 1980s once their congregations were no longer able to maintain the buildings. In 1985, the university converted the former Assembly of God Church at 419 Monroe Avenue into Rev. Joseph A. Rock, S.J., Hall. It currently houses Madonna Della Strada Chapel, the principal campus setting for university liturgies, as well as the university's Military Science department and ROTC program. In 1986, the university acquired the Immanuel Baptist Church and converted it into the Houlihan McLean center. Currently, it houses the university's Performance Music Programs. The university acquired the former John Raymond Memorial Church, Madison Avenue and Vine Street, in 1987. It now serves as the Smurfit Arts Center, which houses studio space for the university's Fine Arts department. The university's efforts were cited in a 1988 edition of Inspired, a bi-monthly publication devoted to the preservation of historic religious buildings. Hyland Hall Completed in 1987, Kathryn and Bernard Hyland Hall is a four-story facility which contains sixteen classrooms and a 180-seat tiered lecture hall, in addition to a cafe and lounge. Hyland Hall also housed the university's bookstore until it was moved to the DeNaples Center in 2008. The site of Hyland Hall was previously occupied by Lackawanna College, prior to its move to 901 Prospect Avenue. Since 2001, Hyland has also been home to the university's Hope Horn Art Gallery. Before moving to Hyland, the university's art gallery had been located in The Gallery, which was demolished in 2001. Institute of Molecular Biology and Medicine Completed in August 1996, the Institute of Molecular Biology and Medicine was funded by a $7.5 million grant from the U.S. Air Force and the Department of Defense. The 1,500 square-foot facility houses research laboratories, offices, and the Northeast Regional Cancer Institute. 
The IMBM is dedicated to molecular biological research, chiefly in the field of proteomics, or the study of the full set of proteins encoded by a genome. Loyola Science Center Completed in 2011, the Loyola Science Center, also known as the Unified Science Center, houses the university's Biology, Chemistry, Computing Sciences, Mathematics, and Physics/Electrical Engineering departments as well as any programs currently associated with these departments. In addition, it is designed to serve as a center for collaborative learning for all members of the campus and the community and to create a physical space that would deepen the university's culture of engagement. The center includes a new, nearly 150,000-square-foot, four-story structure on what was previously a parking lot along Monroe Avenue and Ridge Row, which has been seamlessly integrated into nearly 50,000 square feet of renovated space in the Harper-McGinnis Wing of St. Thomas Hall, which was built in 1987 to house the physics and electrical engineering departments. The Harper-McGinnis Wing of St. Thomas Hall was extensively renovated in 2012 while the Science Center was being built. It now houses the departments of Theology and Religious Studies, Communication, Philosophy, and History, as well as the office of LA/WS, or Latin American and Women's Studies, and the university's radio station, 99.5 WUSR. Finally, the design includes a new entrance into St. Thomas Hall and the science center from the Commons. The Loyola Science Center contains 34 teaching and research laboratories, a rooftop greenhouse for teaching and research, a 180-seat lecture hall for symposia and seminars, numerous group study and research areas, 22 classrooms, 80 offices, a multi-story atrium, and a vivarium. Additionally, the second floor of the Harper-McGinnis wing contains an area which highlights student, faculty, and community work and engages visitors. It contains a large television which displays the university's Twitter feeds, the science center's energy usage, and videos featuring student and faculty research; glass exhibits which feature research projects and science displays; and aquariums which house fish from a variety of different ecosystems for student study. The Loyola Science Center also contains Bleeker Street, a coffee shop and cafe. The center was designed to meet the Silver standard for Leadership in Energy and Environmental Design (LEED) certification, though it has not gone through the certification process. The $85 million, nearly 200,000-square-foot building is the largest capital project in the history of the Jesuit university and the culmination of more than 15 years of planning and preparation. After the Science Education Committee created the vision that would eventually become the Loyola Science Center in the fall of 1998, it took two years to complete a paper about the vision. After seven years of programming meetings, the university broke ground May 14, 2009, for the facility's construction. The Loyola Science Center was dedicated on September 28, 2012. The center was named in honor of St. Ignatius of Loyola, the founder of the Society of Jesus. Additionally, three wings inside the building have been named to honor the contributions and service of members of the University of Scranton community. On November 11, 2011, the first wing was dedicated as McDonald Hall. 
Herbert McDonald served as president of the staff and chairman of the department of surgery at Hahnemann Hospital, now known as the Geisinger Community Medical Center of Scranton, and his wife Mary McDonald served on the university's board of trustees and vice chair from 1989 to 1992. Milani Hall was dedicated on March 24, 2012, in honor of Dr. Frank Milani '55, as a recognition of his family's continued support of the university after he received his Bachelor of Science in biology from the university in 1955. In recognition of Carl J. Keuhner and JoAnne M. Keuhner, Keuhner Hall was dedicated on August 5, 2012. Carl Kuehner served on the board of trustees from 2003 to 2009 as well as chairman of the board from 2007 to 2009. The fourth wing, Harper-McGinnis Hall, located in St. Thomas Hall, was built and dedicated in 1987 in recognition of physics professors Joseph P. Harper, Ph.D., the chairman of the physics department, and Eugene A. McGinnis, PhD, a long-time physics professor at the university. Together, these men contributed more than 70 years of teaching service to the university. In 1968, the University of Scranton purchased the land where Loyola Science Center stands from the Scranton Redevelopment Authority for $25,221.60 as part of the city's urban renewal project. The 42,007 square foot lot, located at the eastern corner of Monroe Avenue and Ridge Row, had previously been occupied by Auto Express Company. From the time of its purchase until construction began on the Loyola Science Center, the site served as a parking lot with sidewalks, landscaping, and lighting. The McDade Center for Literary and Performing Arts The McDade Center for Literary and Performing Arts was constructed in 1992 on the former site of the Lackawanna County Juvenile Center. Home to the university's English & Theatre department's classrooms, offices, labs, meeting spaces, and a black box studio theatre, the McDade Center also houses the 300-seat Royal Theater where the University Players stage their productions. The building's other features include a computer writing and instructions lab, a seminar room, a small screening room for film classes and an office for Esprit, the university's Review of Arts and Letters. Additionally, the building contains stained glass in the lobby and an engraved quotation above the main entrance. The building's exterior features "The Doorway to the Soul," a steel and wire sculpture by Pennsylvania artist Lisa Fedon. "The Doorway" consists of 18 framed images fabricated variously of steel plate, perforated steel, round steel bars and wire cloth which each represent experiences in the human journey towards truth while the grid itself represented a matrix of inner-connectedness. The individual panels within the grid are titled: The Thinker; Reaching Out To My Self; Natural and Curious Yearning of a Child; Eternal Bridge; Acceptance; A State of Calm, Peace, Knowing; Trials and Tribulation/The Ascent; The Void/God; The Writer; Father, Son, and Holy Spirit; Hope/Prayer; Christ; The Climb/The Worn Steps/The Invitation to Enter; The Written Word; Unconditional Love and Caring/Innocence of Children; The Self Exposed. The two external panels are: The Self Observing and The Only Begotten Son. At the dedication ceremony in 1993, the building was named in honor of the Hon. Joseph M. McDade because of "his continuous support of this area and of the university and its academic mission," Rev. Panuska noted. 
The McDade Center location was once the site of Crawford House, the 1898 Tudor Revival home of coal operator, baron, and Peoples Coal Company owner James L. Crawford. In 1992, several years after Crawford's wife died, Lackawanna County purchased the estate to serve as the Juvenile Detention Center. In 1989, after four years of negotiations, the University of Scranton acquired Crawford House. Originally, the university planned to renovate and restore the property, where it would relocate the Admissions and Financial Aid offices as well as a combinations switchboard and a visitors area. However, the university discovered that the interior damage was too severe and that it would not be economically feasible to renovate it. The university's decision to demolish the Crawford House ignited fierce controversy because of strong opposition from local historical organizations, such as the Lackawanna Historical Society, the State Historic Preservation Office, and the Architectural Heritage Association who believed the house "represent[ed] the lifestyle of a coal baron of the late nineteenth century," and was therefore significant for Scranton, a city founded on coal. In an attempt to compromise with those upset by the potential demolition of Crawford House, the university proposed that the building be relocated in order to preserve its historical aspects but this too proved too costly so Crawford House was demolished in 1991. Rather than using the site for administrative offices as originally planned, the university decided to build the Instructional Arts Facility which would be home to the English and Theater departments, as the need for performing arts space was identified back in 1983. The Crawford House was subsequently delisted from the National Register in 1992. McGurrin Hall Completed in 1998, McGurrin Hall houses many of the departments in the J.A. Panuska College of Professional Studies, including Education, Nursing, Counseling and Human Services, and Health Administration and Human Resources. The departments of Exercise Science, Occupational Therapy, and Physical Therapy, also part of the Panuska College, are housed in the adjacent Center for Rehabilitation Education, also known as Edward R. Leahy Jr. Hall. McGurrin's four stories include classrooms, laboratories, teaching instruction labs, and counseling suites as well as the Panuska College of Professional Studies’ advising center and administration offices. When it was built, McGurrin was outfitted with the latest, most advanced technology in its labs and media-based equipment to deal with instruction in electronic media. McGurrin Hall is named in honor of Mary Eileen Patricia McGurrin, R.N., M.S.N., a former student at the University of Scranton and the daughter of Kathleen Hyland McGurrin and the late John F. McGurrin Sr. Ms. McGurrin was an honors student at Abington Heights High School, earned her bachelor's and master's degrees in nursing from Thomas Jefferson College of Allied Health Services in Philadelphia. A member of the American Nurses Association, she was a registered nurse who served on the staff of Wills Eye Hospital in Philadelphia following completion of her training. She died of cancer in 1995 at the age of thirty-nine. In loving memory of his niece, McGurrin's uncle, Bernard V. Hyland, M.D., made a significant contribution to the Campaign for Scranton, which helped finance the building named in her memory. Dr. 
Hyland hoped that all of the students who pass through the doors of McGurrin Hall will be filled with the same spirit of selfless service animated by Mary Eileen. University President Rev. McShane noted that "it’s really appropriate and magnificent that the home of a professional studies is named for a nurse." Leahy Community Health & Family Center In 2003, the University of Scranton opened the Leahy Community Health & Family Center, which is located on the bottom floor of McGurrin Hall. The center is named for Edward J. Leahy, the late son of benefactors Patricia and Edward R. Leahy who died at the age of eight due to his significant disabilities. O'Hara Hall Built in 1922, O’Hara Hall was originally called the Glen Alden building and served as the Scranton administrative headquarters for the Glen Alden Coal Company, which at one time had extensive anthracite operations in the Scranton area. Located at the corner of Jefferson and Linden Avenues, the building was sold to the GA Building Corp in 1955 before being acquired by Alden Associates in 1958 before its title was transferred to the Prudential Savings Bank of Brooklyn. During this time, it served as office space for a variety of local Scranton businesses and professional offices. The Neoclassical, six-story building was sold to the University of Scranton in 1968 for $157,000. After renovations and improvements, the University of Scranton used the building to provide the school with more room for its facilities, particularly additional classrooms, faculty offices, supporting administrative services, and conference rooms. O’Hara Hall also became the home of the Business Administration and Economics departments, including their accompanying statistics and accounting laboratories. From 1978 until 2001, O’Hara Hall served as the headquarters for the university's School of Management. In 2001, after the Kania School of Management moved to the newly constructed Brennan Hall, O'Hara Hall was renovated and occupied by 11 other university departments, including both administrative offices and some programs for the College of Arts and Sciences such as the Dexter Hanley College (now the College of Graduate and Continuing Education), Alumni Relations, the Annual Fund, Continuing Education, Development, the World Languages and Cultures department, Instructional Development, the Learning Resource Center, the Political Science department, Public Relations, and the Sociology and Criminal Justice department. The renovations include the construction of a foreign language laboratory with 25 computers for students taking courses in the World Languages and Cultures department. In 2016, the office of the Registrar and Academic Services moved from St. Thomas Hall into O’Hara Hall. After polling the university community for suggestions, the university decided to rename the Glen Alden Building as O’Hara Hall, in honor of Frank J. O'Hara to recognize his tireless service and incredible contributions to the university. Known as "Mr University," Dr. O’Hara graduated from the university in 1925, served as the school's registrar for 32 years, worked as the university's director of alumni relations from 1957 until 1970, received an honorary doctor of laws degree from the university, served as moderator of the University of Scranton Alumni Society. St. 
Thomas Hall In 1960, the University of Scranton announced plans for a new classroom building, intended to replace the unsafe and overcrowded Barrack buildings, which had been purchased from the Navy in order to quickly accommodate the growing student body, which increased in the 1940s due to the G.I. Bill, a law which provided a range of benefits for returning World War II veterans, including paying for tuition and living expenses to attend college. After holding a major fundraising campaign to raise $1,836,000 for the new building as well as to finish other expansion projects at the library and the student center, the university was ready to begin construction on the new building. Before building could commence, however, mine tunnels under the site needed to be backfilled. Excavations underground showed that the proposed building site was directly above the oldest mine in Scranton, whose origins date back to the Civil War. Supported only by wooden beams and decaying tree trunks, these huge mine chambers showed signs of extensive mining and "local" caving more than fifty feet below the surface. In order to ensure that the new building would be constructed on a firm and strong foundation, mining experts created columns of debris more than 13 feet in circumference and flushed the open cavern with more than 18,000 cubic feet of concrete. The total cost of construction was approximately $1,400,000. Constructed at the corner of Linden and Monroe Streets, St. Thomas Hall was completed in 1962. Five stories tall, the modern L-shaped building contained 50 classrooms, 15 utility rooms, 11 equipment rooms, 10 corridors, and 128 offices, for both faculty and administrators. In addition, the building housed ROTC offices, student lounges, the St. Ignatius Loyola Chapel with room for over two hundred participants, and four laboratories. During the dedication of St. Thomas, the original cornerstone from the university's first building, Old Main, was built into the front corner of St. Thomas Hall. Seventy five years after Old Main's blessing in 1888, the University of Scranton transferred its cornerstone to the new campus, linking the university with its past and providing continuity from both the university's former name, St. Thomas College, and its old campus. When the cornerstone was removed from its place in Old Main, it was discovered that it held a copper box, containing six newspapers published on the day of Old Main's dedication and seven silver coins. During the dedication of St. Thomas Hall, the 1888 newspapers were placed back into the cornerstone, along with letters from student body president Jack Kueny, Alumni Society president Atty. James A. Kelly, and alumnus and longtime administrator Frank O'Hara. Also included was a letter from architect Robert P. Moran '25, addressed to the architect of a building that replaced St. Thomas in the future. Over the years, there have been numerous renovations and improvements of Saint Thomas Hall. In 1965, the gas station at Linden Street and Monroe Avenue on the western end of the University of Scranton complex in front of St. Thomas Hall was razed in order to eliminate the cumbersome and dangerous curve at that intersection. In its place, the island was built, allowing traffic onto campus to be routed around it. In 1987, the Harper-McGinnis Wing was added to St. Thomas Hall to house the Physics and Electronics Engineering department. 
Funded by the university's Second Cornerstone campaign, the Harper-McGinnis Wing is a two-floor addition that contained offices and laboratories for physics, electrical engineering, and computing sciences. At the time of its opening, it contained several cutting-edge research laboratories, including a modern and atomic physics lab, an optics and electronics lab, a microprocessor lab, an electricity and magnetism lab, a very-large-scale integration (VLSI) lab, a microcomputer lab, and a computer-assisted design lab. The Harper-McGinnis Wing was dedicated in recognition of physics professors Joseph P. Harper, Ph.D., the chairman of the physics department, and Eugene A. McGinnis, Ph.D., a long-time physics professor at the university. Together, these men contributed more than 70 years of teaching service to the university. Then, during the summer of 2009, renovations targeted the first and fourth floors of St. Thomas Hall, converting the former St. Ignatius of Loyola Chapel space into offices for Human Resources and Financial Aid. St. Thomas Hall was significantly renovated in 2011-2012 as part of the construction of the Loyola Science Center. It now houses the departments of Theology and Religious Studies, Communications, Philosophy, and History, as well as the office of LA/WS, or Latin American and Women's Studies, and the university's radio station, 99.5 WUSR. St. Thomas Hall was named in honor of the namesake of St. Thomas College, now the University of Scranton. The Smurfit Arts Center In January 1987, the University of Scranton under Rev. Panuska purchased the former John Raymond Memorial Church, Universalist, at Madison Avenue and Vine Street for $125,000. Built in 1906, the Romanesque building contains one of the tallest bell towers in Scranton. The main floor of the small but remarkably designed structure, which contains 7,200 square feet of floor space, is used as a studio-art facility for the Fine Arts program. The basement is used for the department's offices and classrooms. During the renovations of the building, the university had to remove the stained glass windows and replace them with clear glass to provide the area with natural lighting. The two stained glass windows from the Smurfit Arts Center, which were crafted by the Tiffany Glass Company, were moved to be displayed in Hyland Hall. The Smurfit Arts Center was named for Michael W. J. Smurfit H'85, a generous Irish benefactor whose two sons, Anthony and Michael, attended the University of Scranton. Smurfit was the chairman and chief executive officer of Jefferson Smurfit Group, Ltd., a multinational corporation with headquarters in Dublin, Ireland; Alton, Illinois; and New York City. The Smurfit Arts Center is one of three churches the university acquired and preserved during the 1980s once their congregations were no longer able to maintain the buildings. In 1985, the university converted the former Assembly of God Church at 419 Monroe Avenue into Rev. Joseph A. Rock, S.J., Hall. It currently houses Madonna Della Strada Chapel, the principal campus setting for university liturgies, as well as the university's Military Science department and ROTC program. In 1986, the university acquired the Immanuel Baptist Church at the corner of Jefferson Avenue and Mulberry Street. Currently, the Houlihan-McLean Center houses the university's Performance Music Programs. The university acquired the former John Raymond Memorial Church, Madison Avenue and Vine Street, in 1987. 
It now serves as the Smurfit Arts Center, which houses studio space for the university's Fine Arts department. The university's efforts were cited in a 1988 edition of Inspired, a bi-monthly publication devoted to the preservation of historic religious buildings. Athletic facilities Fitzpatrick Field In 1984, the university completed construction on the first athletic field in the school's 96-year history; work had begun in 1982 after the university acquired the land from the Scranton Redevelopment Authority. The land had previously been used as a rail yard for the Lackawanna and Wyoming Valley Railroad. The facility was designed as a multi-sports complex, complete with a regulation-size field for men's and women's soccer, which can also be used for other sports such as softball, lacrosse, field hockey, and intramural athletics. It also has bleachers which can seat 350 people, an electronic scoreboard, and a maintenance building containing restrooms, a storage area, and a parking lot. Father Panuska noted that the building of the field was important because it fosters "the development of a total learning environment, an environment which supports a balanced life." The university's board of trustees named the field in honor of Rev. John J. Fitzpatrick, S.J., a long-time booster of the university's athletic programs and dedicated member of the university community for twenty-two years. In 1997, a re-dedication ceremony celebrated the installation of new artificial turf and improved lighting for the field. Currently, Fitzpatrick Field remains the university's primary outdoor athletic facility and is used for the Royals' varsity soccer, field hockey, and lacrosse teams. The field is also used for intramural flag football, ultimate frisbee, soccer, and field hockey. Long Center Completed in 1967, the John J. Long Center contained the university's first indoor athletic facilities, as well as instructional areas for physical education. The Long Center is built into the slope of Linden Street, providing a single level on Linden Street and a three-story end overlooking Ridge Row. The Long Center was built to enable the university to institute an academic program in physical education and provide a space for student assemblies, convocations, group meetings, and other large gatherings. It was also created to give greater emphasis to intramural athletics and improve the school's intercollegiate athletics. At the time of its construction, the top floor featured a large entrance foyer and a gymnasium, complete with movable bleacher seats that could accommodate up to 4,500 people. The gymnasium contained three basketball courts, complete with a folding curtain to separate the gym, allowing multiple games or gym classes to occur at the same time. It also contained two ticket rooms, court rooms and rest rooms, a sound control room, offices for the director and assistants of the physical education program, an equipment room, and storage rooms. The second floor housed locker room facilities, rest rooms, and showers, in addition to saunas, whirlpool baths, and a sun room. It also had a training room, small offices for athletic coaches, a weight room, and an all-purpose room. The bottom floor contained a wrestling room, a mechanical room, and laundry facilities. The Long Center was built on 4.93 acres of land that the university purchased from the Scranton Redevelopment Authority for $96,843, as part of the city's urban renewal project. 
Before handing over the title to the university, the Scranton Redevelopment Authority cleared the lot, located at the eastern corner of Linden Street and Catlin Court, by demolishing several existing structures. In order to pay for the $1.8 million facility, the University of Scranton acquired a $592,110 grant through the Higher Education Facilities Act and took out a $815,000 federal loan, made possible by the support of Congressman Joseph M. McDade and U.S. Senator Joseph Clark. The university shouldered the remaining costs. In 2001, excavation under the Long Center provided a new home for the Department of Exercise Science and Sport. The additional 10,000 square feet of space accommodated offices, classrooms, a fitness assessment center, and laboratories for sport biomechanics, body composition, cardio-metabolic analysis, biochemistry, and muscular skeletal fitness. However, with the completion of the Center of Rehabilitation Education (also known as Edward R. Leahy, Jr. Hall) in 2015, the Exercise Science Department relocated from the Long Center into the new building. After its completion in 1967, the university dedicated the athletic facility in honor of its former president, John J. Long, S.J., who served the university in that position from 1953 until 1963, to commemorate his dedication and tremendous contributions to the university. After he stepped down from the presidency, Fr. Long continued to serve the university in other positions, including assistant to the president, founder and moderator of the Alumni Society, and vice president for administrative affairs. During his tenure as president, he led the university in its first major building campaign. Starting in 1956, the campus was greatly expanded and modernized through the construction of fifteen new buildings, which included the Loyola Hall of Science, 10 student residence halls, St. Thomas Hall, Alumni Memorial Hall (formerly known as the Alumni Memorial Library) and Gunster Memorial Student Center (formerly known as the Student Union Building, and was demolished in 2008) as well as the Long Center. He successfully led the university through two fundraising drives in order to finance these building projects, which also had the effect of incorporating the university into the Scranton community. Byron Recreation Complex In 1985, the university began construction on a physical education and recreation complex. Completed in 1986, the William J. Byron, S.J. Recreation Complex is a three-level structure which connects to the Long Center, the facility for intercollegiate athletics. The facility contains three multi-use courts for basketball, volleyball, tennis, and one-wall handball as well as a one-tenth mile indoor running track, a six-lane Olympic-sized swimming pool complete with diving boards and an electronic scoreboard, four 4-wall racquetball courts, a gallery which overlooks the swimming pool and the racquetball courts, two different aerobics/dance rooms, men's and women's locker rooms, saunas, and steam rooms. Panuska spoke about the importance of the new recreation complex, stating that it would help the university offer more "health-related activities" and to serve the recreational needs of the student body, including the intramural program. Panuska also noted that naming this facility for Fr. Byron, the president of the University of Scranton from 1975 until 1982, "provides us with a marvelous opportunity to thank him for his leadership at the university and in the region." 
Additional buildings and spaces Brown Hall Located at 600 Linden Street and Adams Avenue, Brown Hall, formerly named the Adlin Building, was acquired by the university in 2012 from Adlin Building Partnership. On February 18, 2016, the university renamed Adlin Building as Louis Stanley Brown Hall, in memory of Louis Stanley Brown '19, the first black graduate of St. Thomas College. Campion Hall Campion Hall, opened in 1987, is the university's residence building for the Jesuit community. The facility, named in honor of Saint Edmund Campion, S.J., a 16th-century Jesuit pastor and scholar who was martyred in England during the persecutions of Roman Catholics for defending his faith, provides living and working accommodations for thirty Jesuits. The two-story building features thirty-one bedrooms, an interior garden, an office, kitchen and dining facilities, and a chapel, in addition to a flexible design with four discrete sections, such that the building could adapt to the changing needs of the Jesuit community at the university. Before the construction of Campion Hall, the primary residence for the Jesuits at Scranton was the estate, the former Scranton family residence given to the university by the family in 1941, which proved unable to meet their needs, as it only provided living accommodations for seventeen of the university's thirty-six Jesuits in the 1980s. The building of Campion Hall, estimated at $1.7 million, was financed entirely by the university's Jesuit community. Currently, Campion Hall provides housing for Jesuits who teach or hold administrative positions at the University of Scranton or at Scranton Preparatory School, a local Jesuit high school. Chapel of the Sacred Heart Completed in 1928, the Chapel of the Sacred Heart, formerly the Alumni House and the Rupert Mayer House, was originally part of the Scranton Estate. It was designed as a small athletic facility, containing a gym and a squash court. While Worthington Scranton donated the estate to the university in 1941, he reserved this building, the Quain Conservatory greenhouse, and Scranton Hall (the carriage house) for his own personal use. In 1958, the remainder of the late Worthington Scranton's Estate was acquired by the university, including the chapel. The university reportedly paid $48,000 for the title to the land. Over the years, the building has served the university in a variety of ways. First, the facility was used as the center of athletics, complete with a weight facility and the Athletic Director's office. In 1968, after the construction of the Long Center was completed, the athletic facilities were moved from the chapel to the new building. The building was then used as a print shop, which was moved to O’Hara Hall. Then, the chapel was used as the headquarters for the university's Alumni Association, from the 1970s until 2009. In 2009, following improvements and changes in St. Thomas Hall, the Chapel moved from its location on the first floor of St. Thomas Hall to the newly renovated Rupert Mayer House. Currently, the chapel is used for daily masses, Eucharistic Adoration, and prayer by students, faculty, and staff of the University of Scranton. Dionne Green In 2008, with the completion of the DeNaples Center, the Gunster Memorial Student Center was functionally superseded. As a result, it was demolished. 
In its place, the university created the Dionne Green, a 25,000-square-foot green space roughly the size of a football field featuring a 3,600 sq ft outdoor amphitheater, a popular spot for classes during pleasant weather. Located directly in front of the DeNaples Center, it serves as the gateway to the campus. Dionne Green, along with the DeNaples Center and Condron Hall, was part of the university's Pride, Passion, Promise campaign, a $100 million effort to improve and update the campus. The Dionne Green was named for John Dionne '86 and Jacquelyn Rasieleski Dionne '89, University of Scranton alumni and benefactors. The Estate In 1867, Joseph H. Scranton, one of the founders of the city of Scranton, commissioned the building of his family home. Designed by New York architect Russell Sturgis, one of America's most outstanding architects in the post Civil War era, the home was created in the French Second Empire style. The house features stone masonry by William Sykes and detailed woodwork carvings designed by William F. Paris. The twenty-five room, three story residence contained a billiards room, a ballroom, a library, a Tiffany glass skylight, and a solid mahogany staircase. It is estimated that the cost of construction totaled $150,000. Throughout the years, a number of renovations and improvements were made on the estate. The house initially featured a tower located on top of the front left corner of the estate's roof, which was later removed. Additionally, while occupying the residence, William W. Scranton built the large granite wall surrounding the property in order to protect the estate and keep out rioting townspeople, upset about the tuberculosis epidemic that they felt had been spread through the city's water supply, the rights of which were owned by the Scranton family. Parts of the wall were later removed during the construction of Loyola Hall in 1956. There used to be two open porches in the back of the estate. In the early 1970s, both porches were enclosed and converted into a sitting room and a dining room. Construction commenced in 1867 and continued for four years, finishing in time for the Scranton family's Thanksgiving celebrations in November 1871. Less than a year later, Joseph H. Scranton died. His son, William W. Scranton, then inherited the property. After William W. Scranton's death, his wife, Katherine M. Scranton, used the home until her death in 1935, at which point it passed into the hands of their son, Worthington Scranton. Because Worthington's wife Marjorie was confined to a wheelchair, she had difficulty navigating around the estate. They built a new home outside of Scranton in Abington called "Marworth." Once construction on Marworth was completed in 1941, the Scrantons moved out of the estate entirely, although that house had never been their primary residence. In 1941, Worthington Scranton donated his home and adjoining estate to Bishop Hafey, the bishop of the Diocese of Scranton and the University of Scranton's board of trustees president, for use by the university, because he felt that this land could be "most advantageously used for the development of an institution of higher learning so that the youth of this vicinity can get an education at a reasonable cost." However, he reserved the former carriage house, which he had converted into an office, the greenhouse, and the squash court for his own personal use. Following Worthington's death in 1958, his son, William W. Scranton, gave the remainder of the estate to the University of Scranton. 
In 1942, when the Christian Brothers transferred the title of the university to the Society of Jesus, the Jesuits decided to use the estate as the Jesuit community residence. After some renovations by the Jesuits, the first floor of the residence held a chapel, reception parlor, a 5,000-volume library, and recreation room, while the second and third floors served as private rooms for the Jesuits. In 1957, a small fire broke out in the house's main parlor. During the 1960s, the Jesuit community restored the estate. Most of the interior woodwork was refinished for preservation purposes, and the ceiling frescoes were repainted and gold leafing was added to them. In 1987, the Scranton Jesuit community moved from the estate into the newly completed Campion Hall, as the estate proved to be insufficient for the community's needs, as it could only accommodate 17 priests in the then-36 member community. In 2009, the Admissions Office moved its operations into the estate. Founder's Green In 2001, after a period of significant campus expansion at the university, the Gallery Building was functionally superseded, as the departments it housed were moved to the newly remodeled O’Hara Hall and to Hyland Hall, including the university's art gallery, the Counseling Center, and the Department of Career Services as well as classrooms and lecture halls. As a result, it was demolished. In its place, the university created Founders Green, a large, open green space, which is located directly in front of Brennan Hall. Galvin Terrace Upon the completion of St. Thomas Hall in 1962, the Barracks buildings no longer needed to be used by the university for classrooms and lecture halls. As a result, the buildings were demolished. In place of the barrack called the Arts Building, the university created an outdoor recreation facility on the block bounded by Linden Street, Monroe Avenue, Mulberry Street, and Hitchcock Court. The $86,000 project created four volleyball courts, three basketball courts, a grass practice field for football and soccer, and a faculty parking lot. However, the fields were not lighted, so all activities had to be scheduled during the day. In the late 1970s, the university decided to renovate and improve these recreational facilities. The school built the Galvin Terrace Sport and Recreation Complex, which contained six tennis courts, two combination basketball/volleyball courts capable of also accommodating street hockey, four handball/racquetball courts, and recreational and lounging space. Completed in October 1978, the project was funded by the university's Annual Fund Drive and its national capital campaign "Commitments to Excellence." It was used for intramural sports but also served as the home of the university's tennis team. In the early 1990s, the recreational complex was demolished to make room for the Weinberg Memorial Library. A small garden outside the Library is now known as Galvin Terrace. The Galvin Terrace was named for former university president Rev. Aloysius Galvin, S.J., who served as Scranton's president from 1965 until 1970. Born in Baltimore, Rev. Galvin served in the U.S. Navy from 1943 until 1946, and graduated from Loyola College, Baltimore in 1948. He joined the Society of Jesus upon his college graduation in 1948, pursued philosophical and theological studies at Woodstock College, Maryland, and was ordained to the priesthood in 1957. Rev. Galvin served as the Academic Vice President and Dean of Loyola College, Baltimore from 1959 to 1965. 
After resigning from the university presidency, Rev. Galvin worked at Georgetown Preparatory School as a math teacher and student counselor for thirty-five years before his death in 2007. Mosque In 1996, the university community renovated a university-owned house at 317 North Webster Avenue into the Campus Mosque as a gift to the Muslim community of Scranton. The university established the campus mosque in response to the growing need for a local mosque for the growing number of Muslim students, as there had not previously been any mosques in the city of Scranton. The Mosque contained two large, spacious rooms as the women's and men's prayer rooms as well as a library housing countless reference books on the history of Islam and the Muslim religion, including translations of the interpretations of the Koran. The Mosque was also equipped with an upstairs apartment where two members of the Muslim Student Association lived and served as caretakers of the facility. In 2007, the Mosque, along with several other properties, was razed in order to establish a site for the sophomore residence, Condron Hall. The university then purchased and renovated a house at 306 Taylor Avenue for use as the new mosque, which is open to the public for prayer and reflection. Pantle Rose Garden When the University of Scranton acquired the Scranton family estate in the mid-1950s, the school received a garden, located next to the Chapel of the Sacred Heart at the corner of Linden Street and Monroe Avenue on the former grounds of the estate. Throughout the years, it was known by a couple of different names, including the Rose Garden and Alumni Garden. In 2010, the university dedicated the Rose Garden to Rev. G. Donald Pantle, S.J. during the celebration of the 50th anniversary of his ordination, as a gift from James J. Knipper '81 and Teresa Poloney Knipper '82, in honor of their longtime friendship with Rev. Pantle. Parking and Public Safety Pavilion Completed in 1995, the Parking and Public Safety Pavilion accommodates 510 cars in its five stories, with one floor below ground, one floor at ground level, and three above ground. It was constructed to expand the university's on-campus parking capacity in order to meet the community's need for additional places to park, with designated areas for students, faculty, staff, and guests. Additionally, the parking garage contains the offices of the university's police and the offices of parking services. The structure, which occupies 163,000 square feet, is located on the corner of Mulberry Street and Monroe Avenue. The exterior complements the adjacent McDade Center for Literary and Performing Arts by mirroring its design. The Monroe Avenue facade is also covered by a series of topiary planting screens on which climbing vines have grown. Quain Memorial Conservatory Located between the Chapel of the Sacred Heart and Scranton Hall on the grounds of the original Scranton family Estate, the Quain Conservatory was built in 1872. The Scranton family used the greenhouse to grow and prepare cut flowers. The glass building has a central square (20 ft by 20 ft) flanked by two 40 ft by 15 ft wings on either side. At the time of its construction, each section had its own pool. The Conservatory is one of few Victorian-style conservatories that remains essentially unaltered from its original design. In 1941, Worthington Scranton donated the estate and its grounds to the university, but reserved a portion of the estate for his own personal use, including the greenhouse. 
In 1958, after the death of Worthington Scranton, the Scranton family donated the remainder of the estate to the university, leading to the acquisition of the greenhouse. In the early 1970s, the student-led University Horticultural Society coordinated and organized an effort to renovate and restore the greenhouse. In order to raise funds for their planned improvements, the Society organized field trips, plant sales, and a lecture series in addition to enrolling paid members in their group. Additionally, Father Quain, the acting president of the university, learned of the project and contributed additional funds. While restoring the greenhouse, the Society cleaned and painted the structures, refurbished the main pond and installed a new fountain pump, created a mushroom cellar in the greenhouse's basement, and redesigned the plant beds. The group installed a number of rare and exotic plants in the conservatory, including orchids, banana trees, mango trees, bougainvillea, tri-colored dracaena, star-shaped trees, bromeliads, Hawaiian wax flowers, night-blooming cereus, bi-colored water lilies, the Rose of China (hibiscus), fig trees, pomegranates, pineapples, and grapefruit trees. In September 1975, the university reopened and dedicated the greenhouse as the Edwin A. Quain Conservatory in recognition of his "kindness, interest, and generosity to the Society." Currently, the greenhouse is used for classes as well as faculty and personal research projects. Roche Wellness Center The Roche Wellness Center, located at the corner of Mulberry Street and North Webster Avenue, was acquired by the university in 1992 and opened for student use in 1996. Originally built in 1986 by pharmacist Alex Hazzouri, the Wellness Center previously housed Hazzouri's pharmacy and drugstore as well as a restaurant named Babe's Place. In 1989, Alex Hazzouri was arrested and arraigned on drug-trafficking charges, and his pharmacy was closed indefinitely, as the government seized the building. After the investigation was closed, the government auctioned off the building in 1992. It was purchased by the university for $500,000. Beginning on August 2, 1993, the building served as home to the Scranton Police Department's Hill Section precinct station. A new Student Health and Wellness Center soon moved in, along with the university's Drug and Alcohol Information Center and Educators (DICE) Office. In 1996, the Roche Wellness Center opened, housing the Student Health Services department. The building holds a reception area, four exam rooms, a laboratory, an assessment room, an observation room, and storage space. Rock Hall On December 15, 1983, the University of Scranton purchased the Assembly of God Church from the Reformed Episcopalian congregation, which could no longer properly maintain the facility because of high upkeep and utility costs. Once it was acquired by the university, the Assembly of God Church was renamed Rock Hall to honor the late Rev. Joseph A. Rock, S.J., a well-known and respected educator at the University of Scranton. Originally, the university intended to use the first floor of the facility for administrative offices which had previously occupied space in St.
Thomas and Jefferson Halls, including the Department of Central Services, the Maintenance Department, and the Security Department while the assembly area of the new hall was supposed to provide a needed alternative for smaller social and cultural affairs, including lectures, dinners, and dances, now held in the over-scheduled Jefferson and Eagen Auditoriums. During the renovations of Rock Hall, however, the need for a new chapel was identified, as the St. Ignatius chapel in St. Thomas Hall did not provide adequate seating and contained structural limitations which were not conducive to acoustics or the aesthetics of the liturgies. Named Madonna della Strada, or "Our Lady of the Way", in reference to an image of the Virgin Mary enshrined in the Church of the Gesu in Rome, the Chapel serves as the primary site for the university's major liturgical services, including the regular Sunday masses. Rev. Panuska commented that the building and chapel are important additions to the school, particularly because the chapel "provides the university and the surrounding community with a beautiful setting for liturgical celebrations." The chapel was consecrated on February 15, 1985, by Bishop James C. Timlin, D.D. Currently, the first floor of Rock Hall is the home of the university's Military Science department and ROTC program. Rock Hall is one of three churches the university acquired and preserved during the 1980s once their congregations were no longer able to maintain the buildings. In 1985, the university converted the former Assembly of God Church at 419 Monroe Avenue into Rev. Joseph A. Rock, S.J., Hall. It currently houses Madonna Della Strada Chapel, the principal campus setting for university liturgies, as well as the university's Military Science department and ROTC program. In 1986, the university acquired the Immanuel Baptist Church at the corner of Jefferson Avenue and Mulberry Street. Currently, the Houlihan-McLean Center houses the university's Performance Music Programs. The university acquired the former John Raymond Memorial Church, Madison Avenue and Vine Street, in 1987. It now serves as the Smurfit Arts Center, which houses studio space for the university's Fine Arts department. The university's efforts were cited in a 1988 edition of Inspired, a bi-monthly publication devoted to the preservation of historic religious buildings. Scranton Hall Constructed in 1871, Scranton Hall was built as a one-story carriage house and stable on the Scranton family Estate by Joseph H. Scranton. In 1928 and continuing into 1929, Worthington Scranton and his wife added an additional story, renovating the building and converting it into an office space. In 1941, Worthington Scranton donated his home and adjoining estate to Bishop Hafey, the bishop of the Diocese of Scranton and the University of Scranton's Board of Trustees President, for use by the university, because he felt that this land could be "most advantageously used for the development of an institution of higher learning so that the youth of this vicinity can get an education at a reasonable cost." However, he reserved the former carriage house, the greenhouse, and the squash court for his own personal use. Following Worthington's death in 1958, the university acquired the rest of the estate from his son, William W. Scranton, for $48,000. The former carriage house was of particular interest to the university because it would allow them to centralize the scattered administrative offices on campus. 
Since it was acquired in 1958, the building has been used to house the President's Office and other administrative offices. From 1958 until 1984, the building was known simply as the President's Office Building. In 1984, the university's president, Rev. J.A. Panuska, S.J., renamed the building Scranton Hall to honor the contributions of the Scranton family. He stated: "Ever since the Scrantons began migrating to Northeastern Pennsylvania in the late 1830s, their vision has touched the life of this region to such an extent that this city and this University bear their name." Retreat Center at Chapman Lake In 1961, the University of Scranton purchased a nine-acre tract of lakefront property containing three buildings on Chapman Lake, about 30 minutes away from the university. Originally known as the Bosak Summer Estate, the land was owned by Walter Bloes, a tax collector, and was briefly converted into a restaurant-tavern before being sold to the university. The university bought the property for $65,000. When the university acquired the estate, it named the property Lakeside Pines. For several years, it was chiefly used as a place for relaxation by the Jesuits and for conferences with faculty members and student leaders. As time progressed, the university's Office of Campus Ministries began using the Chapman Lake property as a Retreat Center. The site originally had one old retreat house, featuring several bedrooms equipped with bunkbeds, a small chapel, a main room with a fireplace, a kitchen, and a dining area. In 1998, the university expanded the lakeside Conference and Retreat Center. Doubling the size of the center, the new 16,000-square-foot facility contained a dining room, a kitchen, a large meeting room nicknamed the Lake Room, five small meeting rooms, and a residential wing with 11 bedrooms. In 2005, in order to meet the growing demand for retreats, the university expanded the Retreat Center again. The university built an 11,584-square-foot facility adjacent to the building constructed in 1998. The old retreat center was demolished over safety concerns in 2004, making room for the expansion. The new addition contained a lounge, 21 more bedrooms, and a 65-seat modern chapel with large window views of the lake. On November 7, 2006, the university dedicated the Retreat Center chapel, naming it in honor of Blessed Peter Faber, an early Jesuit who, together with St. Francis Xavier and St. Ignatius Loyola, served as the nucleus of the Society of Jesus. Born April 7, 1506, Peter Faber was the first of the companions to be ordained a priest. Residence halls Freshman dorms First-year students are offered traditional double rooms that share a community restroom. All freshman dorms are located near the center of the university's campus. Freshman housing does not have air-conditioned or carpeted rooms. Each building has washers and dryers on the first floor for student use as well as light housekeeping services provided to all rooms and bathrooms. Casey Hall: houses 59 students, is co-ed by floor Casey Hall was built in 1958, as part of the "Lower Quad," which also includes Fitch Hall, Martin Hall, and McCourt Hall. These buildings were the first four student residences on campus and were constructed at a cost of $757,000, financed by a loan from the College Housing Program of the Federal Home and Housing Finance Agency.
Portions of the Lower Quad location were formerly the sites of the Moffat residence (306 Quincy Avenue), donated to the university by the Epsteins in April 1955, and the Leonard/Shean family residence (312 Quincy Avenue), donated to the university by the Scranton Lodge of Elks. Casey Hall is named in honor of Joseph G. Casey, president of the Hotel Casey, director of Scranton's Chamber of Commerce, and a graduate of St. Thomas High School, who donated land to the university along with his interest, time, and encouragement. Denis Edward Hall: houses 75 students, is co-ed by floor Denis Edward Hall was built in 1962, as part of the "Upper Quad," which also includes Hafey Hall, Hannan Hall, and Lynett Hall. Denis Edward Hall is named in honor of Brother Denis Edward, who was responsible for changing the name of the university from St. Thomas College to the University of Scranton and served as the school's third president. During his tenure, from 1931 to 1940, enrollment doubled. Driscoll Hall: houses 139 students, is co-ed by floor, contains a kitchen, game room, and study space In 1964, the University of Scranton acquired title to the future site of Driscoll and Nevils Halls at Mulberry Street and Clay Avenue from the Scranton Redevelopment Authority as part of the University Urban Renewal Project. Driscoll and Nevils Halls were built at a cost of $721,175 and were originally built to house 120 students in each building, 240 in total. Both Driscoll and Nevils Halls are four-story buildings constructed with reinforced concrete members, which are exposed in the exterior brick walls. Rooms are 12 by 16 feet and were designed to accommodate two students along with desks, shelves, and closets. A garden mall divides the two buildings. Driscoll Hall is named in honor of James A. Driscoll, who was appointed to the university's teaching staff in 1925 and remained there for nearly forty years. Fitch Hall: houses 59 male students, contains a kitchen Fitch Hall was built in 1958, as part of the "Lower Quad," which also includes Casey Hall, Martin Hall, and McCourt Hall. These buildings were the first four student residences on campus and were constructed at a cost of $757,000, financed by a loan from the College Housing Program of the Federal Home and Housing Finance Agency. Portions of the Lower Quad location were formerly the sites of the Moffat residence (306 Quincy Avenue), donated to the university by the Epsteins in April 1955, and the Leonard/Shean family residence (312 Quincy Avenue), donated to the university by the Scranton Lodge of Elks. Prior to the construction of the Lower Quad, three homes on the 300 block of Quincy Avenue were used as student residences and were named Fitch, Martin, and McCourt Halls. In 1958, these names were transferred to the newly constructed residence halls, and the old houses were razed in order to make room for Gunster Memorial Student Center. Fitch Hall is named in honor of Martha Fitch, a registered nurse and superintendent of the old St. Thomas Hospital, who opened her home to university boarders while the college was still developing and donated her estate to the university upon her death. Fitch Hall was the home of the Jane Kopas Women's Center from its founding in 1994 until its move to the DeNaples Center in 2008. Gannon, Lavis, & McCormick Halls: Gannon houses 72 female students, Lavis houses 82 female students, McCormick houses 65 female students.
The complex contains three kitchens, large lounges on each building's first floor, and smaller lounges on each of the other three floors. The three buildings are connected on each floor by an enclosed walkway. The complex was constructed in 1990–91 at a cost of approximately $3.7 million in response to the shrinking pool of students from the Scranton area and an increased number of students coming from outside the region and needing on-campus housing. The university developed Nevils Beach, an open space often used for recreational activities, into the new dorm complex. Adjacent to the GLM residential complex is the GLM Patio, also known as the Freshman Patio, which frequently hosts musical and comedic performances as well as outdoor movies and serves as a popular spot for tanning, sledding, and barbecuing. Gannon Hall was named for Rev. Edward J. Gannon, S.J., who was a member of the university's philosophy department for 22 years before his death in 1986. Lavis Hall was named for the late Robert G. Lavis, a lifelong resident of Scranton who established two scholarship funds at the university. McCormick Hall was named for Rev. James Carroll McCormick, the Bishop of Scranton from 1966 to 1983. Bishop McCormick was present and gave a blessing at the dedication of the Gannon-Lavis-McCormick complex. Hafey Hall: houses 56 students, contains a kitchen Hafey Hall was built in 1962, as part of the "Upper Quad," which also includes Denis Edward Hall, Hannan Hall, and Lynett Hall. Hafey Hall is named in honor of Bishop William Hafey, the fourth Bishop of Scranton, who played an instrumental role in bringing the Jesuits to Lackawanna County, was dedicated to serving the poor, and was named an assistant to the pontifical throne in 1945. Hannan Hall: houses 77 students, is co-ed by floor, is home to the Wellness Living-Learning Community in which students commit to a lifestyle focused on different aspects of wellness Hannan Hall was built in 1961, as part of the "Upper Quad," which also includes Hafey Hall, Denis Edward Hall, and Lynett Hall. Hannan Hall, along with Lynett Hall, was constructed on the site of the former Joseph Casey and Donald Fulton residences on Clay Avenue and Linden Street. The project cost was estimated at $400,000 and was supported by a $375,000 loan from the Federal Housing and Finance Agency's Community Facilities Administration. Hannan Hall is named in honor of Most Reverend Jerome D. Hannan, who succeeded Hafey and served as Bishop of Scranton for eleven years, helped establish the St. Pius X Seminary, donated $100,000 to the expansion of the university, and was appointed as a consultor to the Pontifical Commission of Bishops by Pope John XXIII. Lynett Hall: houses 47 students Lynett Hall was built in 1961, as part of the "Upper Quad," which also includes Hafey Hall, Hannan Hall, and Denis Edward Hall. Lynett Hall, along with Hannan Hall, was constructed on the site of the former Joseph Casey and Donald Fulton residences on Clay Avenue and Linden Street. The project cost was estimated at $400,000 and was supported by a $375,000 loan from the Federal Housing and Finance Agency's Community Facilities Administration. Lynett Hall is named in honor of Edward J. Lynett, who was the editor and publisher of The Scranton Times, played an important role in raising $1.5 million for the expansion of the university, and was a generous, long-time benefactor of the University of Scranton.
Martin Hall: houses 51 students, is co-ed by floor, contains a kitchen, is home to the Cura Personalis Living-Learning Community in which students make a commitment to providing service to others Martin Hall was built in 1958, as part of the "Lower Quad," which also includes Casey Hall, Fitch Hall, and McCourt Hall. These buildings were the first four student residences on campus and were constructed at a cost of $757,000, financed by a loan from the College Housing Program of the Federal Home and Housing Finance Agency. Portions of the Lower Quad location were formerly the sites of the Moffat residence (306 Quincy Avenue), donated to the university by the Epsteins in April 1955, and the Leonard/Shean family residence (312 Quincy Avenue), donated to the university by the Scranton Lodge of Elks. Prior to the construction of the Lower Quad, three homes on the 300 block of Quincy Avenue were used as student residences and were named Fitch, Martin, and McCourt Halls. In 1958, these names were transferred to the newly constructed residence halls, and the old houses were razed in order to make room for Gunster Memorial Student Center. Martin Hall is named in honor of Attorney M. J. Martin, who donated six lots on Linden Street to the university; after his death, his wife, Inez, donated over half a million dollars to the college. McCourt Hall: houses 51 students, is co-ed by floor, contains a kitchen and lounge area McCourt Hall was built in 1958, as part of the "Lower Quad," which also includes Casey Hall, Martin Hall, and Fitch Hall. These buildings were the first four student residences on campus and were constructed at a cost of $757,000, which was financed by a loan from the College Housing Program of the Federal Home and Housing Finance Agency. Portions of the Lower Quad location were formerly the sites of the Moffat residence (306 Quincy Avenue), donated to the university by the Epsteins in April 1955, and the Leonard/Shean family residence (312 Quincy Avenue), donated to the university by the Scranton Lodge of Elks. Prior to the construction of the Lower Quad, three homes on the 300 block of Quincy Avenue were used as student residences and were named Fitch, Martin, and McCourt Halls. In 1958, these names were transferred to the newly constructed residence halls, and the old houses were razed in order to make room for Gunster Memorial Student Center. McCourt Hall is named in honor of Attorney John M. McCourt, an outstanding Pennsylvania lawyer who was appointed United States Attorney in Scranton. He was a trustee of the original St. Thomas College, and his gifts of time, expertise, and funds were important to the university's early growth. Nevils Hall: houses 143 students, is co-ed by floor, contains a community lounge In 1964, the University of Scranton acquired title to the future site of Driscoll and Nevils Halls at Mulberry Street and Clay Avenue from the Scranton Redevelopment Authority as part of the University Urban Renewal Project. Driscoll and Nevils Halls were built at a cost of $721,175 and were originally built to house 120 students in each building, 240 in total. Both Driscoll and Nevils Halls are four-story buildings constructed with reinforced concrete members, which are exposed in the exterior brick walls. Rooms are 12 by 16 feet and were designed to accommodate two students along with desks, shelves, and closets. A garden mall divides the two buildings.
Nevils Hall is named in honor of Father William Coleman Nevils, who was the first member of the Jesuit community to serve as president of the university, serving from 1942 until 1947. Fr. Nevils was instrumental in transitioning the administration of the university from the Christian Brothers to the Jesuits. Sophomore dorms Sophomore students are offered suite-style housing, in which two double rooms share a shower and toilet, with each room having its own sink. Sophomore housing is air-conditioned. All of the buildings have kitchens. Each building has washers and dryers on the first floor for student use as well as light housekeeping services provided to all rooms and bathrooms. The three buildings are located together in a cluster on the university's campus to replicate the close housing arrangement experienced by first-year residential students. Condron Hall: houses 392 students Condron Hall was completed in 2008. In addition to its dorm rooms, the building's seven floors contain a multipurpose meeting room, shared kitchen spaces, multimedia lounges, and study areas. Condron Hall incorporates many environmentally friendly techniques, such as water- and energy-saving fixtures, the use of products produced within a 500-mile radius of the campus, and green floor coverings. The building is named in honor of alumnus and long-time benefactor of the University of Scranton Christopher "Kip" Condron and his wife Margaret Condron, Ph.D., who served as the national co-chairs of the largest capital campaign in the 120-year history of the university, the $100 million Pride, Passion, Promise Campaign to transform the campus and secure the future. Gavigan Hall: houses 235 students In 1988, the university began construction on Gavigan Hall. The facility features lounges on each floor, study rooms, and a kitchen, as well as a top-floor study area for its residents featuring two-story-high glass windows with views of the campus and the city. The building is dedicated in memory of John R. Gavigan to honor his thirty-eight years of service to the university and his devotion to the institution's students. Redington Hall: houses 242 students Finished in 1985, Redington Hall was designed as a residence complex to house 244 students, with accommodations for Jesuit faculty counselors. In addition to dorm rooms, the building also contains numerous study and lounge areas as well as Collegiate Hall, a large conference room for study, assembly, and ceremonial functions, which was modeled after an early Christian basilica, with a clerestory and side aisles, culminating in a four-hundred-square-foot window. The clerestory walls are inscribed in both Latin and English with the founding date of the university and lyrics from its alma mater. The buildings of Redington Hall form a "U" that is open to the south to take advantage of the year-round sunshine and to highlight an excellent view of south Scranton. The west wing contains Collegiate Hall, angled to face the Commons. At the northwest corner of the residence hall, there is a three-story entry rotunda containing stairs, lounges, circulation space, and a clock tower with a carillon, a glass-pyramid roof, and a crucifix designed by Rev. Panuska. The carillon system was produced by the Maas-Rowe Co. of Escondido, California. The five largest bells in the bell tower were cast in Loughborough, England, by John Taylor and Company and range in diameter from 18 to 30 inches and in weight from 147 to 560 pounds.
Each is inscribed: one features a quotation from the Ignatian Spiritual Exercises, another marks the 1888 establishment of the university and cites the university motto (Religio - Mores - Cultura), and three others display the text of the second verse of the university's alma mater. The crucifix features a geometric corpus with head bowed, symbolizing the moment of death and illustrating the expansive love manifested by the freely chosen death of Christ. The facility is named for Francis E. Redington and his wife, Elizabeth Brennan Redington. Upperclassmen and graduate housing Upperclassmen and graduate students are offered apartments and houses. Linden St. Apartments: In 1999, the university purchased the Linden Street apartments from a private business owner to replace the residence houses demolished to make room for Brennan Hall. Located on the 1300 block of Linden Street, the Linden Plaza apartments are arranged as three-bedroom units, each complete with a kitchen, a living room, and a full bathroom. Katharine Drexel has an occupancy of 27, Dorothy Day has an occupancy of 27, and Elizabeth Ann Seton has an occupancy of 29. The apartments are divided into three named residences, each named in honor of a woman of faith and commitment to the service of others: Dorothy Day, a prominent Catholic social activist and journalist who founded the Catholic Worker movement; Saint Katharine Drexel, the founder of the Sisters of the Blessed Sacrament, who served as a missionary to Native Americans and African Americans; and Elizabeth Ann Seton, founder of the Sisters of Charity, a religious order that helped to establish the parochial school system in the United States. Madison Square: has 3 different apartment buildings with a capacity of 114. All apartments offer a semi-private building entrance and a private apartment entrance. Madison Square rooms are mainly single occupancy, with limited double occupancy rooms. All of the apartments have full kitchens and living rooms. Completed in 2003, the Madison Square Apartments is a complex of three townhouses, each three stories. In total, the complex accommodates 114 students in 25 different apartments. Each building has three-, four-, five-, and six-bedroom apartment suites. Each apartment-style suite includes a kitchen and sitting room, and one bathroom for every two to three bedrooms. The three townhouses surround an outdoor garden area and courtyard. In one townhouse, the basement contains a lounge, conference area, and laundry area. The design of the Madison Square apartments closely resembles the layout of the award-winning Mulberry Plaza Apartments, which was recognized by the Boston Society of Architects, the largest branch of the American Institute of Architects, in its 2002 Housing Design Awards Program for design excellence. However, the architect made several changes to improve the complex: all apartments and bedrooms are uniform in size, unlike the varying sizes of the bedrooms and apartments in the Mulberry complex; more central storage space and thicker wall insulation were added; and more common room space outside of individual apartments was allotted. Construction involved the demolition of the Carter Apartments, a multistory residential complex that previously occupied the site. To date, only one of the three units has been named. In 2006, one unit was dedicated as Dexter Hanley House, in memory of Rev. Dexter L. Hanley, S.J., university president from 1970 to 1975.
During his tenure as president, Father Hanley oversaw the university's move to coeducation, increased enrollment, and approved a major revision of the undergraduate curriculum that offered students much more flexibility while maintaining a common core rooted in the values of Jesuit education. To honor Father Hanley's service and dedication to the university after his death in 1977, the university renamed the Evening College the Dexter Hanley College. When the Hanley College was later merged with the Graduate School, Fr. Hanley's memory was preserved on campus when one of the student townhouses in the Madison Square complex was named the Dexter L. Hanley House. Pilarz and Montrone Halls: house 396 students in two different buildings. All of the apartments in the new buildings are either two- or four-person occupancy, and all of the rooms are single occupancy. While Montrone holds 40 four-bedroom suites, accommodating up to 160 students, Pilarz holds 54 four-bedroom suites and 10 two-bedroom suites, accommodating 236 students. In 2010, ground was broken on the Mulberry Street apartment complex. Funded by the $125 million "Pride, Passion and Promise" fundraising campaign launched in 2008, the $33 million complex was built because of the growing demand for more on-campus housing options, particularly apartment-style units, as "more students every year want to live in [the University’s] residence halls and campus apartments." Before the construction of Montrone and Pilarz Halls, the site of the 1000 block of Mulberry Street was occupied by the Storier Apartments and Aroma Cafe, a popular student hangout. Additionally, 406 Monroe Avenue, a portion of the Pilarz Hall site, was once the residence of local real estate developer William L. Hackett. Montrone Hall is named in honor of Sandra Montrone H'03 and Paul Montrone '62, H'86. A magna cum laude graduate of The University of Scranton and a native of the city, Mr. Montrone distinguished himself as a student in academics and as a leader on campus. After college, Mr. Montrone went on to earn a doctorate from Columbia University, and served as chairman, president & CEO of Fisher Scientific International Inc. until its merger with Thermo Electron Corporation to form Thermo Fisher Scientific Inc. Mr. Montrone has directed the development of other public and private companies, including the Signal Companies, Inc., and its successor Allied Signal, Inc.; the Henley Group; Wheelabrator Technologies Inc.; Latona Associates; the Metropolitan Opera; Liberty Lane Partners Inc.; and Perspecta Trust LLC. During the Clinton administration, Mr. Montrone served as a member of the President's Advisory Commission on Consumer Protection and Quality in the Healthcare Industry, as well as a founder of the National Forum for Healthcare Quality Measurement and Reporting. In addition to awarding him an honorary degree, the university recognized Mr. Montrone with the President's Medal in 2003. Mrs. Montrone, a graduate of Marywood College and also a native of Scranton, serves as president of The Penates Foundation and was a founding director and later president of the board of directors of Seacoast Hospice, which earned national recognition when it was selected as a distinguished service organization by the United Nations and as a Point of Light by the first President Bush. Under President Clinton, she served on the President's Advisory Committee on the Arts.
In recognition of the Montrones' great accomplishments and to commemorate their generosity towards the University of Scranton, one of the Mulberry Street apartment complexes was named after them. Pilarz Hall was named in honor of the university's 24th president, Rev. Scott R. Pilarz, S.J., who served from 2003 until 2011. The university's dedication of one of the Mulberry Street apartment buildings commemorates Rev. Pilarz's great service to the university. As University of Scranton president Kevin Quinn, S.J., remarked, "During his tenure as president, Father Pilarz led unprecedented growth at Scranton that goes beyond bricks and mortar, skillfully nurturing our genuine care for students and the unique attributes that each brings to our community. His contribution has – and continues to – transform lives." He was beloved by students and known for accomplishing transformational projects on campus. Rev. Pilarz's list of achievements at Scranton is extensive, ranging from the unprecedented fundraising success of the Pride, Passion, Promise Campaign to enhancing the university's reputation on a national stage to the campus's capital projects. Under Rev. Pilarz, the University of Scranton expanded its international mission and service opportunities, as well as its support for programs to enhance its Catholic and Jesuit identity. More than 100 new faculty members were hired, and five endowed chairs were established. The university saw undergraduate applications grow to record levels and its graduate programs expand dramatically through online degree programs and a renewed focus on campus-based programs. The university also earned the highly selective Community Engagement Classification designated by the Carnegie Foundation for the Advancement of Teaching. Father Pilarz's impact can also be seen in transformational campus improvements. These include the Patrick and Margaret DeNaples Center; the Christopher and Margaret Condron Hall; the John and Jacquelyn Dionne Campus Green; the expansion of the Retreat Center at Chapman Lake; the renovation of the Estate as a new home for Admissions; the renovation of the former Visitors’ Center into the Chapel of the Sacred Heart; the Loyola Science Center; and the Mulberry Street Apartment Complex. In addition to his work at the University of Scranton, he served as president of Marquette University from 2011 until 2013 and has served as president of Georgetown Preparatory School since 2014. He has received numerous awards for teaching, service, and scholarship, including the John Carroll Award from Georgetown University, a life achievement award that is the highest honor bestowed by the Georgetown University Alumni Association. He was awarded honorary degrees from King's College, Wilkes-Barre, and Marywood University, Scranton. Mulberry Plaza: has 4 different apartment buildings with a capacity of 141. Completed in 2000, the Mulberry Plaza Apartments is a complex of four townhouses, each three stories. In total, the complex accommodates 140 students. Each townhouse provides a mix of duplex and flat-style apartments with anywhere from one to six bedrooms. Each apartment-style suite includes a kitchen and sitting room, and one bathroom for every three to four bedrooms. The townhouses surround an outdoor garden area and courtyard. Mulberry Plaza was recognized by the Boston Society of Architects, the largest branch of the American Institute of Architects, in its 2002 Housing Design Awards Program for design excellence.
The design of the Madison Square apartments, which were built in 2002, closely resembles the layout of the award-winning Mulberry Plaza Apartments. Construction involved the demolition of Wyoming House, formerly known as Jefferson Towne House, which had been acquired by the university in March 1982 from Itzkowitz Catering and had been used by the university as a student residence. The three-floor, Colonial Revival-style Towne House had originally been constructed in 1901 as the L.A. Gates residence at 800 Mulberry Street. Before being used as one of the university's student residences and home to Itzkowitz Catering, it had also served as Snowdon's Funeral Home, a medical office, and a music conservatory. Additionally, part of the Mulberry Plaza site was once the residence of T. J. Foster, president of the International Correspondence Schools of Scranton, at 338 Madison Avenue, which was also razed to build Mulberry Plaza. To date, only two of the townhouses have been named. Keating House is named in memory of Robert J. and Flora S. Keating, the parents of Flora K. Karam, who is the wife of university trustee and alumnus Thomas F. Karam. The dedication of the townhouse in their name honors their continued commitment to the community throughout the years. Timlin House honors the Most Rev. James C. Timlin, D.D., who served as the eighth Bishop of Scranton from 1984 to 2003. Before becoming the Bishop of Scranton, Rev. Timlin served as assistant pastor at St. John the Evangelist Church and St. Peter's Cathedral, as Assistant Chancellor of the Diocese and Secretary to the Most Rev. J. Carroll McCormick, D.D., Sixth Bishop of Scranton, and in a number of other prestigious positions within the Diocese until Pope John Paul II appointed him Eighth Bishop of Scranton. Quincy Apartments: in 2015, the university opened a new housing facility, located in the historic Hill Section on the 500 block of Quincy Avenue. Originally built in the early 1900s, the former Madison Junior High School was vacant for a number of years before being renovated and converted into an early childhood learning center and graduate student housing complex. The 43,000-square-foot building is listed on the National Park Service's National Register of Historic Places. The early learning center occupies the first floor of the three-story building. The school can accommodate approximately 120 children and is run by the Hildebrandt Learning Centers, a Dallas-based organization. The second and third floors have been converted into 24 one-, two-, and three-bedroom apartments with a configuration comparable to the university's other apartment-style offerings. These apartments are reserved exclusively for graduate students and can be rented for $900 per month per student, including utilities and amenities. Resident Houses: in addition to the apartment-style buildings on campus, the university also owns a number of residential houses scattered throughout the campus and the historic Hill Section of the city, most of which were originally acquired during the 1970s and 1980s, which it uses to house students depending on the need for additional housing. These include: Blair House, Fayette House, Gonzaga House, Herold House, Liva House, McGowan House, Cambria House, Monroe House, Tioga House, and Wayne House. Landmarks and campus art The Christ The Teacher Sculpture Description: the sculpture depicts Jesus, with Mary sitting down in front of him.
Location: the sculpture stands at the foot of the Commons, near the corner of Linden Street and Monroe Avenue. Artist: Trevor Southey, 1998 Inscription: "For Christ plays in ten thousand places/ Lovely in limbs, and lovely in eyes not his/ To the Father through the features of men’s faces" - Gerard Manley Hopkins, S.J. Martyrs' Grove Description: a stone memorial dedicated to the victims of the massacre at the University of Central America in San Salvador, El Salvador, on November 16, 1989. The memorial remembers the murder of six Jesuit priests, their housekeeper, and her daughter. Location: the memorial stands near the entrance to Campion Hall, the Jesuit residence on campus. Inscription: "What does it mean to be a Jesuit today? To be a Jesuit means to commit yourself under the standard of the Cross to the crucial struggle of our time, the struggle for faith and the struggle for justice which that same faith demands" (Decree #2, "Jesuits Today," G.C. 32, 1975). The names of those killed by Salvadoran soldiers are listed on the monument: Juan Ramon Moreno Pardo, S.J.; Ignacio Ellacuria, S.J.; Joaquin Lopez y Lopez, S.J.; Amando Lopez Quintana, S.J.; Ignacio Martin-Baro, S.J.; Segundo Montes Mozo, S.J.; Elba Julia Ramos; and Celina Maricet Ramos. St. Ignatius (Metanoia) Description: the twenty-foot-high bronze statue of St. Ignatius of Loyola depicts the conversion of its subject from Inigo, the soldier, to St. Ignatius, who founded the Jesuits in 1540, representing the transformative power of Jesuit education. Location: the intersection of the Royal Way and the Commons. Artist: Gerhard Baut, 1988 Jacob and the Angel Description: the ten-foot-high bronze sculpture depicts the story of Jacob’s struggle with the Angel of the Lord, which symbolizes the active confrontation between man and his moral dilemmas. Location: the sculpture stands at the top of the Commons. Artist: Arlene Love, 1982 Inscription: Genesis 32:26-29 - "And he said, Let me go for the day breaketh. And he said, I will not let thee go, except thou bless me. And he said unto him, What is thy name? And he said, Jacob. And he said, Thy name shall be called no more Jacob, but Israel: for as a prince hast thou power with God and with me, and hast prevailed. And Jacob asked him, and said, Tell me, I pray thee, thy name. And he said, Wherefore is it that thou dost ask after my name? And he blessed him there." Woman in Repose Description: a modern, metal statue of a seated woman representing tranquility. Location: the statue is located in the Galvin Terrace outside the Weinberg Memorial Library; it was previously displayed in front of the Gunster Memorial Student Center before that building was demolished. Hope Horn Gallery Description: an art gallery with a wall of windows, a cathedral ceiling, and movable walls to enhance the ambiance of the environment, as well as an adjoining workshop and classroom space for lectures and workshops. Location: the fourth floor of Hyland Hall. Honoring: In 2004, the Art Gallery was named in honor of Hope Horn, a prolific painter and sculptor and a vibrant force in the arts community of Scranton, who bequeathed her estate to the University of Scranton at her death to support art and music education.
Doorway to the Soul Description: a steel and wire sculpture which consists of 18 framed images fabricated variously of steel plate, perforated steel, round steel bars, and wire cloth, each representing an experience in the human journey towards truth, while the grid itself represents a matrix of interconnectedness. The individual panels within the grid are titled: The Thinker; Reaching Out To My Self; Natural and Curious Yearning of a Child; Eternal Bridge; Acceptance; A State of Calm, Peace, Knowing; Trials and Tribulation/The Ascent; The Void/God; The Writer; Father, Son, and Holy Spirit; Hope/Prayer; Christ; The Climb/The Worn Steps/The Invitation to Enter; The Written Word; Unconditional Love and Caring/Innocence of Children; The Self Exposed. The two external panels are: The Self Observing and The Only Begotten Son. Location: the exterior of the McDade Center for Literary and Performing Arts. Artist: Lisa Fedon, 1995 Heritage Room Description: thirty-nine panels which depict art, religion, and science in the Lackawanna Valley and in the world. Location: the fifth floor of the Weinberg Memorial Library. Artist: Trevor Southey, 1992 Dante Description: the thirteen-foot Carrara marble statue was originally built in 1922 and displayed in downtown Scranton before it was acquired by the university in the mid-1960s. It depicts Dante Alighieri, a fourteenth-century Italian poet, holding his most famous work, the Divine Comedy. Location: the statue is located in front of Alumni Memorial Hall, on the university's Estate grounds. Artist: Augustino N. Russo, 1922 Former university buildings and spaces Arts Building In 1945, with the end of World War II and the creation of the G.I. Bill, enrollment exploded at the University of Scranton. In order to accommodate this dramatic increase in enrollment, the university acquired three "barracks" buildings from the government in 1947, which it placed on the 900 block of Linden Street, part of the former Scranton Estate. They were named A or Arts Building, B or Business Building, and E or Engineering Building, and each housed classrooms and offices pertaining to those specific subjects. Purchased from the Navy for one dollar plus transportation and remodeling costs, the Barracks were naval buildings that had been used in Portsmouth, Virginia, and were dismantled and then reassembled in Scranton. Originally intended as temporary measures to accommodate the larger student body, the Barracks buildings were used for nearly fifteen years before being replaced by permanent structures. The Arts Building featured the "Pennant Room," a lounge that was decorated with pennants from other Jesuit colleges and universities. It was demolished in 1962 after the completion of St. Thomas Hall. Bradford House In 1973, the University of Scranton acquired the Bradford House, formerly the Rose Apartment Building. Bradford House was named for Bradford County in Pennsylvania. Located in the Hill Section, the Bradford House was a three-story apartment building which was converted into student apartments as part of an effort to accommodate the growing number of boarding students. Equipped with a kitchen and bathroom, each apartment housed about six students. In 1978, as part of a major accessibility initiative to comply with new federal regulations, the first floor of the Bradford House was converted into a wheelchair-accessible apartment for women students. In 1998, Bradford House was razed in order to make room for the Kania School of Management's Brennan Hall.
Business Building In 1945, with the end of World War II and the creation of the G.I. Bill enrollment exploded at the University of Scranton. In order to accommodate this dramatic increase in enrollment, the university acquired three "barracks" buildings from the government in 1947, which they placed on the 900 block of Linden Street, part of the former Scranton Estate. They were named A or Arts Building, B or Business Building, and E or Engineering Building, and each housed classrooms and offices pertaining to those specific subjects. Purchased from the Navy for one dollar plus transportation and remodeling costs, the barracks were naval buildings being used in Portsmouth, Virginia that were dismantled and then reassembled in Scranton. Originally intended as temporary measures to accommodate the larger student body, the Barracks buildings were used for nearly fifteen years before being replaced by permanent structures. The university's first cafeteria opened in the Business Building in February 1948, with accommodations for 250 students. The cafeteria later moved to the Engineering Building. The Business Building was demolished in 1961 to make room for the construction of St. Thomas Hall. Claver Hall Claver Hall served as an administrative building, housing the Physical Plant and Purchasing departments. It was demolished in 2010 to make room for Pilarz Hall and Montrone Hall. It was named for St. Peter Claver, S.J., a Spanish Jesuit priest and missionary. Engineering Building In 1945, with the end of World War II and the creation of the G.I. Bill enrollment exploded at the University of Scranton. In order to accommodate this dramatic increase in enrollment, the university acquired three "barracks" buildings from the government in 1947 which they placed on the 900 block of Linden Street, part of the former Scranton Estate. They were named A or Arts Building, B or Business Building, and E or Engineering Building, and each housed classrooms and offices pertaining to those specific subjects. Purchased from the Navy for one dollar plus transportation and remodeling costs, the Barracks were naval buildings being used in Portsmouth, Virginia that were dismantled and then reassembled in Scranton. Originally intended as temporary measures to accommodate the larger student body, the Barracks buildings were used for nearly fifteen years before being replaced by permanent structures. E Building held the university's physics laboratories, with special equipment for experimentation in optics, electricity and magnetism. The building also housed a lecture hall, physics department offices, and a photography dark room. After Loyola Hall of Science was completed in 1956, most of the science classrooms and laboratories were moved there and the Engineering Building underwent renovations. The school constructed the St. Ignatius Chapel, a cafeteria, and lounges. In 1960, E Building was dismantled in order to make room for St. Thomas Hall. It was then shipped to St. Jude's Parish in Mountaintop, Pennsylvania to be used as a grade school annex. It was subsequently demolished in the 1980s. Gallery Building In 1979, the University of Scranton purchased the Pennsylvania Drug Warehouse for $150,000 from Kay Wholesale Drugs of Wilkes-Barre. After significant renovations, the three-story building was dedicated as the Gallery Building on April 28, 1982, in honor of former University President J. Eugene Gallery, S.J. who served as the second Jesuit president from 1947 until 1953. 
When the building was opened, it housed a Media Resource Center, two large multipurpose lecture rooms, Career Services, a Counseling Center, the Audio Visual department, the Computer Science department, computer laboratories, an Art Gallery, and a study area. A ten-foot-wide wire mesh satellite dish on the roof served as a receiver for educational programs, part of the university's participation in the National University Teleconference Network. In the 1990s, the Gallery also housed the Dexter Hanley College and the Office of Annual Giving. The Gallery was demolished in October 2001 to make room for Founders' Green, outside of Brennan Hall. Many of the departments housed in the Gallery were moved into the remodeled O’Hara Hall, while the Art Gallery was moved into the fourth floor of Hyland Hall. Gunster Memorial Student Center In 1955, the University of Scranton announced an ambitious $5,000,000 campus expansion plan, which proposed constructing ten new buildings over the course of the next ten years so that the school's physical plant would be concentrated at the former Scranton family Estate, the temporary barrack structures would be replaced with safer and more permanent buildings, and its facilities would be expanded to better serve its growing student body. One of these proposed buildings was a Student Center. Construction began in 1959 on the $1,030,000 three-story brick, steel, and concrete building. When it was completed in 1960, the Student Center housed a cafeteria, bookstore, student activities offices, staff and student lounges, a snack bar, game room, a rifle room, and a large ballroom/auditorium. The cafeteria was designed to seat 600 and to provide between 1200 and 1500 lunches each day (for a full-time student body of 1,358), along with 400 to 600 breakfasts and dinners for resident students (then numbering less than 250). Over the years, the Student Center was renovated and expanded to fit the needs of the growing student body. In 1974, a $228,000 renovation converted the third floor patio into a grill room, providing another dining area for students with a seating capacity of 300. In December 1980, during the dedication of the University Commons, the Student Center was also rededicated and renamed the Joseph F. Gunster Memorial Student Center, in memory of Joseph F. Gunster, a St. Thomas College alumnus, generous university benefactor, and Florida attorney who had previously practiced law in Lackawanna County for over twenty years, whose $1,150,000 estate gift had been the largest in the university's history. The completion of the University Commons closed the 900 and 1000 blocks of Linden Street to vehicular traffic, creating a campus and making it easier and safer for students to walk between Gunster, Galvin Terrace, the Long Center, St. Thomas Hall, and their residence halls. During the construction, the entrance to Gunster from the Commons was updated to include a gathering area and speaker's forum. In 1989, Gunster was renovated and one of its rooms, known as the Archives, was modernized. In the Archives, seating was added for an additional 75 to 125 people, an 800-square-foot dance floor with a disc jockey booth was created, and a retractable video screen was installed. The snack bar was also expanded, offering a wider variety of food choices. Because the bookstore was moved from the Gunster Memorial Student Center to Hyland Hall, the university also expanded the third floor cafeteria. A summer 1993 expansion of Gunster by architects Leung, Hemmler, and Camayd created a 19,000 sq. ft. 
addition for dining services, increasing the seating capacity from 630 to 1,000. The addition included a new food court located on the third floor and created more space for food preparation as well as student activities and organizations. In summer 2004, the staircase and brick patio outside of Gunster Student Center were replaced due to safety concerns, and the second floor dining room was renovated. By 2001, with the realization that Gunster could not be effectively renovated any further and was unable to adequately meet the needs of the university student body, which had grown dramatically since its completion in 1960, the university initiated plans for a new student center, which culminated in the construction of the DeNaples Center in 2008. Upon the completion and opening of the DeNaples Center, the Gunster Memorial Student Center was demolished. In its place, the university created the Dionne Green, a 25,000-square-foot green space roughly the size of a football field, featuring a 3,600 sq ft outdoor amphitheater, located directly in front of the DeNaples Center. Hill House Donated in 1984 to the university by an anonymous faculty member, the Hill House was located on the corner of North Webster and Linden Street. It was used as a faculty residence, a guest house, and a facility for meetings and social gatherings. Hill House was named for Rev. William B. Hill, S.J., who in 1984 was marking his 15th year of service to the University of Scranton, having served as an English professor, the academic vice president from 1975 until 1978, the chair of the English department from 1973 until 1975, special assistant to the president from 1987 until 2002, the chaplain of the board of trustees, and the chaplain of the Pro Deo et Universitate Society. Hill House was razed in the summer of 2007 to make way for Condron Hall. Hopkins House In 1985, the University of Scranton acquired the Hopkins House, located at 1119 Linden Street. It originally served as the home for the university's student publication offices, which included the Aquinas student newspaper, the Windhover yearbook, and the literary magazine Esprit. The House was named in honor of Gerard Manley Hopkins, a leading English poet, convert to Catholicism, and Jesuit priest. In 1988, because of a shortage of available on-campus beds, the university converted Hopkins House into a student residence. The housing crunch resulted from the city's crackdown on illegal rooming houses, as well as concerns about security and the conditions of off-campus houses, which all led to an increasing demand for on-campus housing. In 1990, the university converted the Hopkins House into the Service House, a themed house meant to bring together students, faculty, and staff with an interest in community service to act as a catalyst to expand the university's involvement in volunteer work by getting as many people involved as possible and coordinating the volunteer activities of the other student residences. Before it was acquired by the university, Hopkins House was the home of Terry Connors, the university photographer for over four decades. In 2007, Hopkins House was demolished in order to make room for the construction of Condron Hall, a sophomore residence hall. Jerrett House Acquired by the university in 1977, the Jerrett House, a converted apartment building, was the first campus house dedicated to study and reserved for women students. Opened in 1978, it housed approximately 20 students. 
In 1980, Jerrett House made history as the first co-ed student residence on campus, when male students moved into the first floor apartments in order to increase protection around the female residences on Madison Avenue. Jerrett was retired as a student residence in Fall 2008 due to the construction of Condron Hall, as it had become dilapidated over the years. Lackawanna House In 1973, the university acquired Lackawanna House, named after Lackawanna County, Pennsylvania, as part of an effort to accommodate the growing number of residential students. The building cost $32,500 and required several thousand dollars of renovation to be converted into a residence for 25 students. It also housed offices for the Aquinas and the Hill Neighborhood Association. After the completion in 1985 of Redington Hall, a large dormitory on campus with a capacity of approximately 240 students, the university closed Lackawanna House because of its dilapidated condition and sold it. Lancaster House Acquired in 1973 by the university, Lancaster House was a house located on Clay Avenue and converted to a student residence as part of an effort to accommodate the growing number of residential students. Named for Lancaster County, Pennsylvania, it housed upperclassmen. In the 1980s, Lancaster House was demolished to make room for Gavigan Hall, a four-story residence hall that houses approximately 240 students in four-person suites. LaSalle Hall In 1908, construction was completed on the three-story residence for the Christian Brothers, which would later be named La Salle Hall, adjacent to Old Main, St. Thomas College's main academic building. It was constructed on a site that had been purchased by the Diocese of Scranton in 1888. In 1942, the incoming Jesuits dedicated the building as La Salle Hall as a tribute to the departing Christian Brothers. Because the building was too small to house the large Jesuit community, who chose instead to live in the Scranton Estate, which had been donated by Worthington Scranton in 1941, the Jesuits renovated the building. The first floor housed the office of the University President, the second floor contained a small chapel for daily mass and devotions, and the third floor was converted into offices for the Jesuit faculty. The chapel was dedicated to the Sacred Heart of Jesus and featured a large painting of the apparition of the Lord to St. Margaret Mary from the cloister of the Georgetown Visitation Convent. After the end of World War II, enrollment at the university exploded as veterans went back to college. In order to accommodate these larger numbers, the university acquired three former Navy barracks in 1947, which they constructed on the 900 block of Linden Street, part of the former Scranton Estate in the lower Hill section, as the university was unable to expand any further on Wyoming Avenue. Over the next fifteen years, the university embarked on an ambitious building project to move its entire campus to the Scranton Estate. Thus, in 1962, after the completion of St. Thomas Hall, the university no longer needed to use Old Main or La Salle Hall and they were vacated. In 1964, the University of Scranton donated La Salle Hall to the city of Scranton as a contribution to the Central City Redevelopment Project. The property was worth approximately $360,000 at the time. In 1970, the building was converted into Cathedral Convent for the Sisters of the Immaculate Heart of Mary, who staffed Bishop Hannan High School across the street. 
Leahy Hall - The YWCA Leahy Hall was originally the home of the Scranton Chapter of the Young Women's Christian Association (YWCA). The building was constructed in 1907 in the Colonial Revival style. The three-story, red brick building, designed by Scranton architect Edward Langley, featured offices, lounges, meeting rooms, a gymnasium, and a cafeteria. The YWCA held classes in physical culture, sewing, music and singing, arithmetic, grammar, Bible study, cooking, and English. It also created a debate club, served lunch to the public in its cafeteria, and became a meeting spot for local women's civic organizations. In 1927, the Platt-Woolworth building was constructed as an addition to meet the growing needs of the YWCA. Funded by Frederick J. Platt and C. S. Woolworth, the new wing to the YWCA headquarters provided housing for 100 women, as well as kitchens, laundry facilities, an auditorium, and a basement swimming pool. As the University of Scranton expanded and began to accept women students, they also benefited from the YWCA's services and programs. Several women resided at the YWCA while studying at the university, including international students. Over time, however, the YWCA found it increasingly difficult to maintain the property. In 1976, the University of Scranton purchased the YWCA building for $500,000. The University of Scranton initially named the building Jefferson Hall. After the YWCA vacated the building in June 1978, moving to a new location on Stafford Avenue, significant renovations converted the structure into an off-campus residence for 91 students. The building also contained a gym for recreational athletic activities, conference rooms, a dark room, and offices for student organizations, including the Aquinas, Windhover, Hanley College Council, Debate Club, Esprit, T.V. and radio stations. A student lounge and snack bar were added in February 1979, although they were removed during later renovations in September 1984. In November 1983, the gym was transformed into a facility for the Physical Therapy department, housing three laboratories, a dark room, classrooms, and faculty offices. In 1995, the university renamed Jefferson Hall as Edward R. Leahy, Jr., Hall in gratitude to the Leahy family for their endowment of health care education at Scranton. The son of Edward and Patricia Leahy, Edward R. Leahy, Jr., was born in 1984 with cerebral palsy and several related disabilities. He died shortly before his ninth birthday in 1993. Until the fall of 2013, Leahy Hall housed facilities, offices, classrooms, and laboratories for the departments of Physical Therapy and Occupational Therapy. Demolition of Leahy Hall began September 16, 2013, and revealed a time capsule, which held a 1907 almanac and a wealth of YWCA papers, pamphlets, and clippings dating back to the 1890s. Construction was completed on the new Edward R. Leahy, Jr. Hall in 2015 and it opened for use for the Fall 2015 semester. Loyola Hall of Science Loyola Hall was constructed in 1956, as part of a major campus expansion. Built at a cost of $1,205,000, the reinforced concrete structure featured a porcelain-enameled steel "skin" and brickwork, as well as aluminum mullions along its exterior. At the time of its opening, the ground floor was dedicated to engineering, the first floor to physics, the second floor to biology, and the third floor to chemistry. 
The penthouse housed the university's radio station (WUSV) and its equipment, including a steel radio tower, which was subsequently dismantled in 1974. When the building was first constructed, its ultra-modern design, technologically advanced features, and ability to house all of the science departments in one building made it a vital part of the University of Scranton's campus. Before the construction of Loyola Hall, engineering students had been forced to go elsewhere for the final two years of their education because the university lacked the proper equipment to teach them. As part of the "Second Cornerstone" campaign, a fifteen million dollar expansion and improvement project, the university extensively renovated Loyola Hall in 1987. In the $2,750,000 expansion of Loyola Hall, the existing building was remodeled and an expansion towards Monroe Avenue was added, in order to accommodate the growing student body and the expanding science programs. An additional floor and a twenty-foot extension of Loyola's east wall expanded the floor space of the facility by more than 14,000 square feet. The new space provided room for additional chemistry laboratories, classrooms, research areas, and computer facilities for faculty and students. With the construction of the Loyola Science Center in 2011, Loyola Hall was functionally superseded. The science departments, classrooms, and laboratories formerly housed in Loyola Hall were moved to the more modern, more technologically advanced, more energy-efficient, and safer Loyola Science Center. Before being demolished, it served as "swing space," or a housing site for classes or offices whose buildings are undergoing renovations. The building provided housing for the Panuska College of Professional Studies Academic Advising Center and the departments of Physical Therapy and Occupational Therapy, all displaced by the demolition of Leahy Hall and the construction of the new Center for Rehabilitation Education. In the summer of 2016, Loyola Hall was demolished. Luzerne House In 1978, the university acquired Luzerne House, located at 308 Clay Avenue, for $60,000. It was then converted into a residence for women students with a capacity of 32 occupants. Luzerne House, named for Luzerne County in Pennsylvania, was demolished in 2010 as part of the University of Scranton's restoration project on Clay Avenue. The site now features green space and a graded sidewalk. Mercer House Acquired in 1974 by the university, Mercer House was converted to a student residence as part of an effort to accommodate the growing number of residential students. Named for Mercer County, Pennsylvania, it housed upperclassmen. In the 1990s, the university stopped using Mercer House as a residence for students after the completion of several larger on-campus residence halls. Montgomery House Acquired in 1974 by the university, Montgomery House, named for Montgomery County in Pennsylvania, was converted to a student residence as part of an effort to accommodate the growing number of residential students. Montgomery House was retired as a student residence in Fall 2008 following the construction of Condron Hall. Old Main Old Main, also known as College Hall, was the first building constructed for St. Thomas College and served as the center of the school's campus for many years. In 1883, Bishop O'Hara purchased the Wyoming Avenue property near St. Peter's Cathedral from William H. Pier. On August 12, 1888, he blessed and placed a cornerstone as the foundation for St. Thomas College. 
The laying of the cornerstone was a city-wide celebration, featuring a parade, musical performances by the Cathedral choir and a local orchestra, and a sermon by Bishop O’Hara, attracting residents from the city of Scranton as well as the surrounding area, as far as Wilkes-Barre and Carbondale. The cornerstone held a copper box, in which were placed six newspapers from the day of the dedication and seven silver coins. After four years of intense fundraising, the construction of Old Main was completed. The red brick building, located on Wyoming Avenue next to St. Peter's Cathedral and the Bishop's residence, had three floors and a basement. Originally, there were eight classrooms on the first and second floors, the third floor was an auditorium/gymnasium, and the basement held a chapel dedicated to St. Aloysius. In September 1892, the college opened for classes. By the 1920s, Old Main could no longer fully accommodate the growing institution and underwent a number of renovations. In 1926, the college's first library was established after part of the gymnasium on the third floor of Old Main was converted. Originally, the library's collection consisted of 300 books that Bishop Hoban had donated from his personal collection. Over the next ten years, the college continued to make changes to Old Main to meet the needs of the expanding student population. The library's collections continued to grow, its existing laboratories were modernized, and the building was repainted and repaired. The gymnasium on the third floor was converted into three laboratories, a lecture hall, and faculty offices. Without a gym on campus, physical education classes moved to the Knights of Columbus gym on North Washington & Olive Streets, and basketball practice moved to Watres Armory. Additional renovations in 1949 transformed the basement of Old Main into a student lounge. Known as Anthracite Hall, it contained a snack bar and could be converted into an auditorium/ballroom with a seating capacity of 600. The library was also expanded to occupy the entire third floor of Old Main, as classrooms and laboratories moved into the Navy barracks, which had been acquired by the university in 1947. In 1941, Worthington Scranton donated his home and adjoining estate, located on Linden Street in the lower Hill section about seven blocks away from Old Main on Wyoming Avenue, to the University of Scranton. As the school continued to grow, additional buildings were acquired and constructed on the former Scranton Estate, as there was no room for expansion on Wyoming Avenue. Gradually, all operations moved from Old Main to the new campus. The arts and sciences, business, and engineering divisions were moved from Old Main to the naval barracks when they were purchased in 1947. During the 1950s, the university embarked on an ambitious $5,000,000 campus expansion plan, building Loyola Science Hall, Alumni Memorial Library, Gunster Memorial Student Center, and St. Thomas Hall, which allowed the school to vacate its Wyoming Avenue properties, including Old Main in 1962. During the dedication ceremony for the new classroom building, the original cornerstone from Old Main was transferred to the front corner of St. Thomas Hall. Seventy-four years after Old Main's blessing in 1888, the University of Scranton transferred its cornerstone to the new campus, linking the university with its past and providing continuity from both the university's former name, St. Thomas College, and its old campus. 
After the university vacated Old Main, it was used by Scranton Preparatory School for two years after its previous home, the former Thomson Hospital, was purchased and demolished by the Scranton Redevelopment Authority as part of an effort to widen Mulberry Street. In 1964, Scranton Prep moved to its permanent location, the former Women's Institute Building of the International Correspondence Schools, at 1000 Wyoming Avenue. After Scranton Prep moved locations, the University of Scranton transferred the title of the building back to St. Peter's Cathedral parish. In 1968, Old Main was demolished. Currently, the land serves as the Cathedral Prayer Garden. Somerset House Acquired in 1974 by the university, Somerset House was converted to a student residence as part of an effort to accommodate the growing number of residential students. Originally housing male students, it was converted to a female residence in 1980 and later became coed. Named for Somerset County in Pennsylvania, Somerset House was razed in the 1990s to make way for Brennan Hall. Thomson Hospital Constructed in 1895, Thomson Hall was originally the private hospital of Dr. Charles E. Thomson. When it was first built, the structure was four stories tall, but a later expansion added two additional stories for a total of 24,000 square feet. In 1941, Bishop Hafey purchased the hospital, which had by then ceased operation, for $60,000 for use by the University of Scranton. Called the Annex, the building was not used by the Christian Brothers before they relinquished control of the university to the Society of Jesus. When the Jesuits arrived at Scranton, renovations were made to the building, including the creation of additional classrooms, faculty offices, and living quarters for out-of-town students and aviation cadets training at the university. It opened in 1942, after the Jesuits took ownership of the University of Scranton. However, two days before Christmas in December 1943, the Annex was severely damaged by a fire. Since the university was in recess for the Christmas holiday, no one was in the building to be injured, although a firefighter died later that night of a heart attack, believed to have been brought on by exhaustion and smoke inhalation. The upper two floors of the Annex were gutted and subsequently eliminated, before the rest of the building was repaired. The sharp decline in enrollment caused by World War II had reduced the university's space requirements, so when the Annex was repaired, it was decided that the University of Scranton did not need the building. Instead, the Jesuits decided to open a high school housed in the Annex. Since the Jesuits had arrived in Scranton, the Scranton diocese and the Catholic community had requested they establish a college preparatory school. The availability of a reconstructed Annex made such a step possible. Thus, the Scranton Preparatory School was born in 1944. In 1961, Scranton Prep moved from the Annex to the recently vacated Old Main because the Annex had been purchased by the Scranton Redevelopment Authority. It was demolished later that year in order to widen Mulberry Street. Throop House Throop House was originally the private home of Dr. Benjamin H. Throop, a pioneer Scranton physician. It was constructed in 1880, and owned by Dr. Throop until his death in 1897. 
The structure, located between the Thomson Hospital and LaSalle Hall, was owned by the Throop Estate until 1922, when it was purchased by the Diocese of Scranton in order to accommodate the growing student body of St. Thomas College and provide additional classroom space. The two-story barn behind the Throop House, later called C Building, was converted into a chemistry laboratory. During the 1920s and 1930s, the Throop House mainly held freshman classes but also served as a meeting space for the local Scranton Catholic Club and classrooms for the high school division of St. Thomas. Though its former barn was used by the university until 1956, Throop House was demolished in January 1943, because it was considered a fire hazard. Wyoming House Acquired by the university in 1982 for $115,000 to be used as a student residence, Wyoming House was constructed in 1901 and originally known as Jefferson Towne House. The three-floor, Colonial Revival-style Towne House had been used over the years as a medical office, a music conservatory, a funeral parlor, and a catering business. In 2000, Wyoming House was demolished to make room for the construction of Mulberry Plaza. References Buildings
378970
https://en.wikipedia.org/wiki/Grosch%27s%20law
Grosch's law
Grosch's law is the following observation of computer performance, made by Herb Grosch in 1953: I believe that there is a fundamental rule, which I modestly call Grosch's law, giving added economy only as the square root of the increase in speed — that is, to do a calculation ten times as cheaply you must do it one hundred times as fast. This adage is more commonly stated as Computer performance increases as the square of the cost. If computer A costs twice as much as computer B, you should expect computer A to be four times as fast as computer B. Ten years after Grosch's statement, Seymour Cray was quoted in Business Week (August 1963) expressing this very same thought: Computers should obey a square law — when the price doubles, you should get at least four times as much speed. The law can also be interpreted as meaning that computers present economies of scale: the more costly the computer, the better its price–performance ratio. This implies that low-cost computers cannot compete in the market. An analysis of rental cost/performance data for computers between 1951 and 1963 by Knight found that Grosch's law held for commercial and scientific operations (a modern analysis of the same data found that Grosch's law only applied to commercial operations). In a separate study, Knight found that Grosch's law did not apply to computers between 1963 and 1967 (also confirmed by a modern analysis). Debates Paul Strassmann asserted in 1997 that "it was never clear whether Grosch's Law was a reflection of how IBM priced its computers or whether it related to actual costs. It provided the rationale that a bigger computer is always better. The IBM sales force used Grosch's rationale to persuade organizations to acquire more computing capacity than they needed. Grosch's Law also became the justification for offering time-sharing services from big data centers as a substitute for distributed computing." Grosch himself has stated that the law was more useful in the 1960s and 1970s than it is today. He originally intended the law to be a "means for pricing computing services". See also Metcalfe's law Moore's law References Adages Computer architecture statements
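As an illustrative aside not found in the original article, the square relationship can be restated as a small worked example; the function name below is a hypothetical helper, and the figures simply repeat the ratios quoted above.

```python
def grosch_performance_multiple(cost_multiple: float) -> float:
    """Under Grosch's law, performance grows as the square of cost,
    so economy (cost per unit of work) improves only as the square
    root of the increase in speed."""
    return cost_multiple ** 2

# A machine costing twice as much should be roughly four times as fast;
# to do a calculation ten times as cheaply, it must run about
# one hundred times as fast.
print(grosch_performance_multiple(2.0))   # 4.0
print(grosch_performance_multiple(10.0))  # 100.0
```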
48113
https://en.wikipedia.org/wiki/SWIFT
SWIFT
The Society for Worldwide Interbank Financial Telecommunication (SWIFT), legally S.W.I.F.T. SC, is a Belgian cooperative society providing services related to the execution of financial transactions and payments between banks worldwide. Its principal function is to serve as the main messaging network through which international payments are initiated. It also sells software and services to financial institutions, mostly for use on its proprietary "SWIFTNet", and ISO 9362 Business Identifier Codes (BICs), popularly known as "SWIFT codes". The SWIFT messaging network is a component of the global payments system. SWIFT acts as a carrier of the "messages containing the payment instructions between financial institutions involved in a transaction." However, the organization does not manage accounts on behalf of individuals or financial institutions, and it does not hold funds from third parties. It also does not perform clearing or settlement functions. After a payment has been initiated, it must be settled through a payment system, such as TARGET2 in Europe. In the context of cross-border transactions, this step often takes place through correspondent banking accounts that financial institutions have with each other. As of 2018, around half of all high-value cross-border payments worldwide used the SWIFT network, and in 2015, SWIFT linked more than 11,000 financial institutions in over 200 countries and territories, which were exchanging an average of over 32 million messages per day (compared to an average of 2.4 million daily messages in 1995). Though widely utilized, SWIFT has been criticized for its inefficiency. In 2018, the London-based Financial Times noted that transfers frequently "pass through multiple banks before reaching their final destination, making them time-consuming, costly and lacking transparency on how much money will arrive at the other end". SWIFT has since introduced an improved service called "Global Payments Innovation" (GPI), claiming it was adopted by 165 banks and was completing half its payments within 30 minutes. As a cooperative society under Belgian law, SWIFT is owned by its member financial institutions. It is headquartered in La Hulpe, Belgium, near Brussels; its main building was designed by Ricardo Bofill Taller de Arquitectura and completed in 1989. The chairman of SWIFT is Yawar Shah of Pakistan, and its CEO is Javier Pérez-Tasso of Spain. SWIFT hosts an annual conference, called Sibos, specifically aimed at the financial services industry. History SWIFT was founded in Brussels on 3 May 1973 under the leadership of its inaugural CEO, Carl Reuterskiöld (1973–1989), and was supported by 239 banks in 15 countries. Before its establishment, international financial transactions were communicated over Telex, a public system involving manual writing and reading of messages. SWIFT was set up out of fear of what might happen if a single private and fully American entity controlled global financial flows – a role held until then by First National City Bank (FNCB) of New York, later Citibank. In response to FNCB's protocol, the bank's competitors in the US and Europe pushed for an alternative "messaging system that could replace the public providers and speed up the payment process". SWIFT started to establish common standards for financial transactions and a shared data processing system and worldwide communications network designed by Logica and developed by the Burroughs Corporation. 
Fundamental operating procedures and rules for liability were established in 1975, and the first message was sent in 1977. SWIFT's first international (non-European) operations center was inaugurated by Governor John N. Dalton of Virginia in 1979. Standards SWIFT has become the industry standard for syntax in financial messages. Messages formatted to SWIFT standards can be read and processed by many well-known financial processing systems, whether or not the message traveled over the SWIFT network. SWIFT cooperates with international organizations in defining standards for message format and content. SWIFT is also the registration authority (RA) for the following ISO standards: ISO 9362:1994 Banking – Banking telecommunication messages – Bank identifier codes ISO 10383:2003 Securities and related financial instruments – Codes for exchanges and market identification (MIC) ISO 13616:2003 IBAN Registry ISO 15022:1999 Securities – Scheme for messages (Data Field Dictionary) (replaces ISO 7775) ISO 20022-1:2004 and ISO 20022-2:2007 Financial services – Universal Financial Industry message scheme (the BIC and IBAN formats defined by ISO 9362 and ISO 13616 are illustrated in the short validation sketch below). In RFC 3615, urn:swift: was defined as the Uniform Resource Name (URN) namespace for SWIFT FIN. Operations centers The SWIFT secure messaging network is run from three data centers, located in the United States, the Netherlands, and Switzerland. These centers share information in near real-time. In case of a failure in one of the data centers, another is able to handle the traffic of the complete network. SWIFT uses submarine communications cables to transmit its data. Shortly after opening its third data center in Diessenhofen, Switzerland, in 2009, SWIFT introduced a new distributed architecture with two messaging zones, European and Trans-Atlantic, so data from European SWIFT members no longer mirrored the U.S. data center. European zone messages are stored in the Netherlands and in part of the Swiss operating center; Trans-Atlantic zone messages are stored in the United States and in another part of the Swiss operating center that is segregated from the European zone messages. Countries outside of Europe were by default allocated to the Trans-Atlantic zone, but could choose to have their messages stored in the European zone. The Swiss operating center was built in Diessenhofen. SWIFTNet network SWIFT moved to its current IP network infrastructure, known as SWIFTNet, from 2001 to 2005, providing a total replacement of the previous X.25 infrastructure. The process involved the development of new protocols that facilitate efficient messaging, using existing and new message standards. The technology chosen to develop the protocols was XML, which now provides a wrapper around all messages, legacy or contemporary. The communication protocols can be broken down into: InterAct SWIFTNet InterAct Realtime SWIFTNet InterAct Store and Forward FileAct SWIFTNet FileAct Realtime SWIFTNet FileAct Store and Forward Browse SWIFTNet Browse Architecture SWIFT provides a centralized store-and-forward mechanism, with some transaction management. For bank A to send a message to bank B with a copy or authorization involving institution C, it formats the message according to standards and securely sends it to SWIFT. SWIFT guarantees its secure and reliable delivery to B after the appropriate action by C. SWIFT guarantees are based primarily on high redundancy of hardware, software, and people. SWIFTNet Phase 2 During 2007 and 2008, the entire SWIFT network migrated its infrastructure to a new protocol called SWIFTNet Phase 2. 
The main difference between Phase 2 and the former arrangement is that Phase 2 requires banks connecting to the network to use a Relationship Management Application (RMA) instead of the former bilateral key exchange (BKE) system. According to SWIFT's public information database on the subject, RMA software should eventually prove more secure and easier to keep up-to-date; however, converting to the RMA system meant that thousands of banks around the world had to update their international payments systems to comply with the new standards. RMA completely replaced BKE on 1 January 2009. Products and interfaces SWIFT means several things in the financial world: a secure network for transmitting messages between financial institutions; a set of syntax standards for financial messages (for transmission over SWIFTNet or any other network); and a set of connection software and services allowing financial institutions to transmit messages over the SWIFT network. Under the third point above, SWIFT provides turn-key solutions for members, consisting of linkage clients to facilitate connectivity to the SWIFT network and CBTs or "computer-based terminals" which members use to manage the delivery and receipt of their messages. Some of the more well-known interfaces and CBTs provided to their members are: SWIFTNet Link (SNL) software which is installed on the SWIFT customer's site and opens a connection to SWIFTNet. Other applications can only communicate with SWIFTNet through the SNL. Alliance Gateway (SAG) software with interfaces (e.g., RAHA = Remote Access Host Adapter), allowing other software products to use the SNL to connect to SWIFTNet. Alliance WebStation (SAB) desktop interface for the SWIFT Alliance Gateway with several usage options: administrative access to the SAG; direct connection to SWIFTNet via the SAG, to administer SWIFT certificates; and a so-called Browse connection to SWIFTNet (also via the SAG) to use additional services, for example Target2. Alliance Access (SAA) and Alliance Messaging Hub (AMH) are the main messaging software applications by SWIFT, which allow message creation for FIN messages, routing and monitoring for FIN and MX messages. The main interfaces are FTA (files transfer automated, not FTP) and MQSA, a WebSphere MQ interface. The Alliance Workstation (SAW) is the desktop software for administration, monitoring and FIN message creation. Since Alliance Access is not yet capable of creating MX messages, Alliance Messenger (SAM) has to be used for this purpose. Alliance Web Platform (SWP) is a new thin-client desktop interface provided as an alternative to the existing Alliance WebStation, Alliance Workstation (soon) and Alliance Messenger. Alliance Integrator is built on Oracle's Java CAPS, which enables customers' back office applications to connect to Alliance Access or Alliance Entry. Alliance Lite2 is a secure and reliable, cloud-based way to connect to the SWIFT network; it is a light version of Alliance Access specifically targeting customers with a low volume of traffic. Services There are four key areas that SWIFT services fall under in the financial marketplace: securities, treasury & derivatives, trade services, and payments and cash management. 
Securities SWIFTNet FIX (obsolete) SWIFTNet Data Distribution SWIFTNet Funds SWIFTNet Accord for Securities (end of life October 2017) Treasury and derivatives SWIFTNet Accord for Treasury (end of life October 2017) SWIFTNet Affirmations SWIFTNet CLS Third Party Service Cash management SWIFTNet Bulk Payments SWIFTNet Cash Reporting SWIFTNet Exceptions and Investigations Trade services SWIFTNet Trade Services Utility SWIFTREF Swift Ref, the global payment reference data utility, is SWIFT's unique reference data service. Swift Ref sources data directly from data originators, including central banks, code issuers, and banks, making it easy for issuers and originators to maintain data regularly and thoroughly. SWIFTRef constantly validates and cross-checks data across the different data sets. SWIFTNet Mail SWIFT offers a secure person-to-person messaging service, SWIFTNet Mail, which went live on 16 May 2007. SWIFT clients can configure their existing email infrastructure to pass email messages through the highly secure and reliable SWIFTNet network instead of the open Internet. SWIFTNet Mail is intended for the secure transfer of sensitive business documents, such as invoices, contracts and signatories, and is designed to replace existing telex and courier services, as well as the transmission of security-sensitive data over the open Internet. Seven financial institutions, including HSBC, FirstRand Bank, Clearstream, DnB NOR, Nedbank, and Standard Bank of South Africa, as well as SWIFT, piloted the service. U.S. government involvement Terrorist Finance Tracking Program A series of articles published on 23 June 2006 in The New York Times, The Wall Street Journal, and the Los Angeles Times revealed a program, named the Terrorist Finance Tracking Program, which the US Treasury Department, Central Intelligence Agency (CIA), and other United States governmental agencies initiated after the 11 September attacks to gain access to the SWIFT transaction database. After the publication of these articles, SWIFT quickly came under pressure for compromising the data privacy of its customers by allowing governments to gain access to sensitive personal information. In September 2006, the Belgian government declared that these SWIFT dealings with American governmental authorities were a breach of Belgian and European privacy laws. In response, and to satisfy members' concerns about privacy, SWIFT began a process of improving its architecture by implementing a distributed architecture with a two-zone model for storing messages (see Operations centers). Concurrently, the European Union negotiated an agreement with the United States government to permit the transfer of intra-EU SWIFT transaction information to the United States under certain circumstances. Because of concerns about its potential contents, the European Parliament adopted a position statement in September 2009, demanding to see the full text of the agreement and asking that it be fully compliant with EU privacy legislation, with oversight mechanisms in place to ensure that all data requests were handled appropriately. An interim agreement was signed without European Parliamentary approval by the European Council on 30 November 2009, the day before the Lisbon Treaty—which would have prohibited such an agreement from being signed under the terms of the codecision procedure—formally came into effect. 
While the interim agreement was scheduled to come into effect on 1 January 2010, the text of the agreement was classified as "EU Restricted" until translations could be provided in all EU languages and published on 25 January 2010. On 11 February 2010, the European Parliament decided to reject the interim agreement between the EU and the US by 378 to 196 votes. One week earlier, the parliament's civil liberties committee had already rejected the deal, citing legal reservations. In March 2011, it was reported that two mechanisms of data protection had failed: EUROPOL released a report complaining that requests for information from the US had been too vague (making it impossible to make judgments on validity) and that the guaranteed right for European citizens to know whether their information had been accessed by US authorities had not been put into practice. Sanctions against Iran In January 2012, the advocacy group United Against Nuclear Iran (UANI) implemented a campaign calling on SWIFT to end all relations with Iran's banking system, including the Central Bank of Iran. UANI asserted that Iran's membership in SWIFT violated US and EU financial sanctions against Iran as well as SWIFT's own corporate rules. Consequently, in February 2012, the U.S. Senate Banking Committee unanimously approved sanctions against SWIFT aimed at pressuring it to terminate its ties with blacklisted Iranian banks. Expelling Iranian banks from SWIFT would potentially deny Iran access to billions of dollars in revenue transacted through SWIFT, though not to funds moved through informal value transfer systems (IVTS). Mark Wallace, president of UANI, praised the Senate Banking Committee. Initially SWIFT denied that it was acting illegally, but later said that "it is working with U.S. and European governments to address their concerns that its financial services are being used by Iran to avoid sanctions and conduct illicit business". Targeted banks would be—amongst others—Saderat Bank of Iran, Bank Mellat, Post Bank of Iran and Sepah Bank. On 17 March 2012, following agreement two days earlier between all 27 member states of the Council of the European Union and the Council's subsequent ruling, SWIFT disconnected all Iranian banks that had been identified as institutions in breach of current EU sanctions from its international network and warned that even more Iranian financial institutions could be disconnected from the network. In February 2016, most Iranian banks reconnected to the network following the lifting of sanctions under the Joint Comprehensive Plan of Action. U.S. control over transactions within the EU On 26 February 2012, the Danish newspaper Berlingske reported that US authorities had sufficient control over SWIFT to seize money being transferred between two European Union (EU) countries (Denmark and Germany), since they succeeded in seizing around $26,000 that was being transferred from a Danish businessman to a German bank. The transaction was automatically routed through the US, possibly because of the USD currency used in the transaction, which is how the United States was able to seize the funds. The money was a payment for a batch of Cuban cigars previously imported to Germany by a German supplier. As justification for the seizure, the U.S. Treasury stated that the Danish businessman had violated the United States embargo against Cuba. Monitoring by the NSA Der Spiegel reported in September 2013 that the National Security Agency (NSA) widely monitors banking transactions via SWIFT, as well as credit card transactions. 
The NSA intercepted and retained data from the SWIFT network used by thousands of banks to securely send transaction information. SWIFT was named as a "target", according to documents leaked by Edward Snowden. The documents revealed that the NSA spied on SWIFT using a variety of methods, including reading "SWIFT printer traffic from numerous banks". In April 2017, a group known as the Shadow Brokers released files allegedly from the NSA indicating that the agency monitored financial transactions made through SWIFT. Use in sanctions SWIFT had disconnected all Iranian banks from its international network as a sanction against Iran. However, as of 2016, Iranian banks that were no longer on international sanctions lists were reconnected to SWIFT. Even though this enables movement of money from and to these Iranian banks, foreign banks remain wary of doing business with the country. Due to primary sanctions, transactions of U.S. banks with Iran or transactions in U.S. dollars with Iran both remain prohibited. In 2014, SWIFT rejected calls from pro-Palestinian activists to revoke Israeli banks' access to its network. Similarly, in August 2014 the UK planned to press the EU to block Russian use of SWIFT as a sanction due to Russian military intervention in Ukraine. However, SWIFT refused to do so. SPFS, a Russia-based SWIFT equivalent, was created by the Central Bank of Russia as a backup measure. During the 2021–2022 Russo-Ukrainian crisis, the United States developed preliminary possible sanctions against Russia, but excluded banning Russia from SWIFT. Following the 2022 Russian invasion of Ukraine, the foreign ministers of the Baltic states called for Russia to be cut off from SWIFT. However, other EU member states were reluctant, both because European lenders held most of the nearly $30 billion in foreign banks' exposure to Russia and because Russia had developed the SPFS alternative. The European Union, United Kingdom, Canada, and the United States finally agreed to remove select Russian banks from the SWIFT messaging system in response to the 2022 Russian invasion of Ukraine; the governments of France, Germany, Italy and Japan individually released statements alongside the EU. Competitors Alternatives to the SWIFT system include: CIPS – sponsored by China, for trade-related deals in the Chinese currency with Chinese clearing banks SFMS – sponsored by India SPFS – sponsored by Russia, mostly composed of Russian banks INSTEX – sponsored by the European Union, limited to non-USD transactions for trade with Iran, largely unused and ineffective Security In 2016, an $81 million theft from the Bangladesh central bank via its account at the New York Federal Reserve Bank was traced to hacker penetration of SWIFT's Alliance Access software, according to a New York Times report. It was not the first such attempt, the society acknowledged, and the security of the transfer system was undergoing new examination accordingly. Soon after the reports of the theft from the Bangladesh central bank, a second, apparently related, attack was reported to have occurred on a commercial bank in Vietnam. Both attacks involved malware written both to issue unauthorized SWIFT messages and to conceal that the messages had been sent. After the malware sent the SWIFT messages that stole the funds, it deleted the database record of the transfers, then took further steps to prevent confirmation messages from revealing the theft. 
In the Bangladeshi case, the confirmation messages would have appeared on a paper report; the malware altered the paper reports when they were sent to the printer. In the second case, the bank used a PDF report; the malware altered the PDF viewer to hide the transfers. In May 2016, Banco del Austro (BDA) in Ecuador sued Wells Fargo after Wells Fargo honored $12 million in fund transfer requests that had been placed by thieves. In this case, the thieves sent SWIFT messages that resembled recently canceled transfer requests from BDA, with slightly altered amounts; the reports do not detail how the thieves gained access to send the SWIFT messages. BDA asserts that Wells Fargo should have detected the suspicious SWIFT messages, which were placed outside of normal BDA working hours and were of an unusual size. Wells Fargo claims that BDA is responsible for the loss, as the thieves gained access to the legitimate SWIFT credentials of a BDA employee and sent fully authenticated SWIFT messages. In the first half of 2016, an anonymous Ukrainian bank and others—even "dozens" that are not being made public—were variously reported to have been "compromised" through the SWIFT network and to have lost money. See also ABA routing transit number Bilateral key exchange and the new Relationship Management Application (RMA) Cross-Border Interbank Payment System Cryptocurrency / Digital currency Electronic money Indian Financial System Code (IFSC) Structured Financial Messaging System (India) Instrument in Support of Trade Exchanges (INSTEX) International sanctions Internationalization of the renminbi ISO 9362, the SWIFT/BIC code standard ISO 15022 ISO 20022 Organisation for Economic Co-operation and Development (OECD) Single Euro Payments Area (SEPA) Sibos conference SPFS Terrorist Finance Tracking Program TIPANET Value transfer system Further reading Farrell, Henry and Abraham Newman. 2019. Of Privacy and Power: The Transatlantic Struggle over Freedom and Security. Princeton University Press. References External links Financial services companies established in 1973 1973 establishments in Belgium Market data Financial metadata Financial markets software La Hulpe Network architecture Ricardo Bofill buildings
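As an illustrative aside not drawn from the article, the two identifier formats named in the Standards section above, ISO 9362 BICs ("SWIFT codes") and ISO 13616 IBANs, can be checked structurally in a few lines. This is only a sketch of the published formats, not SWIFT software: the function names are hypothetical, the BIC layout assumed is the standard 8- or 11-character form (institution, country, location, optional branch), and the IBAN check is the well-known mod-97 rule. The sample values are the commonly cited Deutsche Bank BIC and a widely used example UK IBAN.

```python
import re

# 4-letter institution code, 2-letter country code, 2-character location
# code, and an optional 3-character branch code (ISO 9362 structure).
BIC_RE = re.compile(r"^[A-Z]{4}[A-Z]{2}[A-Z0-9]{2}([A-Z0-9]{3})?$")

def looks_like_bic(code: str) -> bool:
    """Structural check of a BIC / 'SWIFT code' (hypothetical helper)."""
    return bool(BIC_RE.match(code.upper()))

def iban_checksum_ok(iban: str) -> bool:
    """ISO 13616 mod-97 check: move the first four characters to the end,
    map letters to numbers (A=10 ... Z=35), and the resulting integer must
    leave remainder 1 when divided by 97. Assumes alphanumeric input."""
    s = iban.replace(" ", "").upper()
    rearranged = s[4:] + s[:4]
    digits = "".join(str(int(ch, 36)) for ch in rearranged)
    return int(digits) % 97 == 1

print(looks_like_bic("DEUTDEFF"))        # True: 8-character BIC
print(looks_like_bic("DEUTDEFF500"))     # True: 11 characters with branch code
print(iban_checksum_ok("GB82 WEST 1234 5698 7654 32"))  # True: example IBAN
```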
4396958
https://en.wikipedia.org/wiki/Gus%20Henderson
Gus Henderson
Elmer Clinton "Gloomy Gus" Henderson (March 10, 1889 – December 16, 1965) was an American football coach. He served as the head coach at the University of Southern California (1919–1924), the University of Tulsa (1925–1935), and Occidental College (1940–1942), compiling a career college football record of 126–42–7. Henderson's career winning percentage of .865 at USC is the best of any Trojans football coach, and his 70 wins with the Tulsa Golden Hurricane remain a team record. In between his stints at Tulsa and Occidental, Henderson moved to the professional ranks, helming the Los Angeles Bulldogs of the American Football League in 1937 and the Detroit Lions of the National Football League (NFL) in 1939. Henderson also coached basketball and baseball at USC, each for two seasons. Early life Henderson was born in Oberlin, Ohio on March 10, 1889. He graduated from Oberlin College, and then coached at Broadway High School in Seattle, Washington. USC Henderson arrived at the University of Southern California (USC) in 1919, and set the Trojans football team on its first steps toward national prominence. He led USC to a 6–0 record in 1920, the team's first perfect season of at least three games, and to their first appearance in the Rose Bowl in 1923. In the 1923 Rose Bowl, the first Rose Bowl game to be held in its namesake stadium, USC's faced their first opponent from east of the Rocky Mountains. The Trojans defeated the heavily favored Penn State Nittany Lions, 14–3. Penn State arrived at the game 45 minutes late, and ten minutes after the scheduled kickoff, because of a traffic jam. Henderson accused Penn State coach Hugo Bezdek of doing so intentionally as a psychological tactic, and the coaches nearly began throwing punches. Later, they exchanged public insults after the game. Gordon Campbell, a halfback USC's 1923 Rose Bowl team, said of Henderson, "He put the Trojans on the map. He was a great coach when we needed one most, because we were just growing up." Under Henderson's tenure, USC joined the Pacific Coast Conference in 1922, and in 1923 moved from Bovard Field on campus to play in the Los Angeles Memorial Coliseum. He received his nickname from Los Angeles Times sports editor Paul Lowry because of his tendency to poor-mouth the Trojans' prospects before a game. Gloomy Gus was a character in a popular comic strip of the era, Happy Hooligan. In regard to his offensive tactics which proved successful, Los Angeles Times sports editor Paul Zimmerman noted, "Until someone proves otherwise, it must be assumed that Henderson invented the spread formation, variations of which have become an important form of attack in modern day football." During his time at USC, Henderson also coached the Trojans baseball team in 1920 and 1921 and school's basketball team for two seasons from 1919 to 1921. Henderson left USC following the 1924 season, despite a 45–7 record, in part due to his inability to defeat rival California in five tries. USC's loss to California in 1924 loss followed one week later by an upset at the hands of Saint Mary's. Henderson's contract was bought out at the end of the year. At the time, USC also had strained relations with Cal and Stanford University, who threatened to sever conference ties with USC due to their belief that USC was using cash to recruit players. USC quarterback Chet Dolley was dismissive of the idea, noting, "That was really a joke, because the university didn't have a dime." He stated that Henderson "made his players responsible for bringing in athletes. 
I came from Long Beach, so I was assigned to that area. So, naturally, I was in charge of getting Morley Drury." Among the other players who arrived at USC during Henderson's tenure were the school's first two All-Americans, Brice Taylor and Mort Kaer, as well as future Pro Football Hall of Famer Red Badgro. Taylor recalled of his former coach, "Not only was he a great coach, but he was a wonderful man. He was real people. You know, I'll never forget the day I was standing on a corner, shivering, because it was cold, and Gus drives by in his car. He sees me, stops and backs up, and says, 'What's the matter Brice, are you cold?' And I said, 'I sure am coach.' So he reaches into the back seat and takes out his brand new, blue Chesterfield coat and says, 'Here, take this, it's yours.' You know, years after I left SC, when I was teaching in the South, I was still wearing that coat." USC finished its 1924 regular season with its first-ever regularly scheduled game against an eastern team, winning at home over Syracuse, 16–0. The Trojans ended the year with a 20–7 win over Missouri in the Christmas Festival Bowl, held at the Coliseum. Howard Jones of Iowa succeeded Henderson as USC's head coach in 1925, and controversies quickly abated, although California still canceled its 1925 game against USC, the only year between 1920 and 2020 in which the teams have not met. Tulsa Henderson moved to the University of Tulsa in 1925 and served as the Golden Hurricane's head coach for the next 11 seasons. There he oversaw the construction of Skelly Field, which opened in 1930. Under Henderson, Tulsa captured five conference championships: the Oklahoma Collegiate Conference title in 1925, the Big Four Conference titles in 1929, 1930, and 1932, and the Missouri Valley Conference title in 1935. Henderson's final record at Tulsa was 70–25–5. Later coaching career Henderson returned to Los Angeles and became the head coach of the professional Los Angeles Bulldogs, which operated as an independent team in 1936 before joining the American Football League in 1937 and capturing the conference title with a perfect 8–0 record. The Bulldogs returned to independent play in 1938 when the league folded. In 1939, Henderson was hired as coach of the National Football League's Detroit Lions by team owner Dick Richards, who also owned Los Angeles radio station KMPC. The Lions posted a 6–5 record in 1939, but the team was sold before the 1940 season, and, despite a three-year contract, Henderson was released by new owner Fred Mandel. Again Henderson returned to Los Angeles, this time to take over the football program at Occidental College. As head coach from 1940 to 1942, he posted a record of 11–10–2, but the program was suspended due to World War II and he ended his coaching career. Death Henderson died on December 16, 1965, at age 76 in Desert Hot Springs, California, of complications from pneumonia. He was survived by his wife, Kathryn, and their daughter. His cremated remains were returned to Oberlin, Ohio. He was inducted into the USC Athletic Hall of Fame in 2005. 
Head coaching record College football References 1889 births 1965 deaths Basketball coaches from Ohio Detroit Lions coaches Occidental Tigers football coaches Tulsa Golden Hurricane football coaches USC Trojans baseball coaches USC Trojans football coaches USC Trojans men's basketball coaches High school football coaches in Washington (state) Oberlin College alumni People from Oberlin, Ohio People from Desert Hot Springs, California Sportspeople from Riverside County, California
68743380
https://en.wikipedia.org/wiki/Bangladesh%20Black%20Hat%20Hackers
Bangladesh Black Hat Hackers
Bangladesh Black Hat Hackers, also known as BD Black Hats (Bengali: বাংলাদেশ ব্ল্যাক হ্যাট হ্যাকার্স), is a hacker group based in Bangladesh. It claimed in 2012 to have hacked Indian websites in retaliation for border killings by the Indian Border Security Force and the construction of the Tipaimukh Dam. Although Bangladesh Black Hat Hackers declared this cyber war, two other Bangladesh-based hacker groups, Bangladesh Cyber Army and 3xp1r3 Cyber Army, later joined them. The incident has been described as a cyber war conducted without the involvement of any government personnel. The threat was announced on 9 February 2012, and by 12 February 10,000 websites were claimed to have been hacked; the number later rose to 20,000. Around March 2015, the group also claimed responsibility for hacking Shashi Tharoor's website over negative comments about Bangladesh's cricket win against England. See also Anonymous (hacker group) LulzSec References External links Hacker groups Bangladeshi hacker groups
4707252
https://en.wikipedia.org/wiki/Lockheed%20Martin%20Information%20Technology
Lockheed Martin Information Technology
Lockheed Martin Information Technology (I&TS) (also known as Lockheed Martin Information & Technology Services and Lockheed Martin Technology Services) is a subsidiary of the American company Lockheed Martin that consists of dozens of smaller companies and units that have been acquired and integrated. The company also administers a number of U.S. Government contracts. I&TS includes operations in information technology integration and management, enterprise solutions, application development, aircraft maintenance and modification services, management and logistics services for government and military systems, mission and analysis services, engineering and information services for NASA, and support of nuclear weapons and naval nuclear reactors. The US government accounts for more than 90% of sales. Business components, subdivisions, and affiliations Knolls Atomic Power Laboratory Sandia National Laboratories OAO (acquired in 2001) ACS, which gave its IT assets to LMIT and became Lockheed Martin's benefits provider in 2003 The Sytex Group (acquired in 2005) Aspen Systems Corporation (acquired in January 2006) Contracts Contract to clean up the Hanford Site in Richland, Washington. Interrogator recruitment at Fort Belvoir and Fort Huachuca. In 1999 the British government awarded Lockheed Martin U.K. a contract to manage British census information. In 2002, a 7-year contract for the Centers for Medicare and Medicaid Services (CMS) Consolidated Information Technology Infrastructure Contract (CITIC) program. In 2003, a 7-year, $465,000,000 contract to provide services for the CDC. In 2004 LMIT conducted a test census in Canada and was awarded the contract to administer the upcoming Canada 2006 Census. In 2004 LMIT was part of a $600,000,000 contract with the Air Force Pentagon Communications Agency. As of 2004, a 7-year, $525,000,000 contract with the United States Social Security Administration called the Agency Wide Support Services Contract. In 2004, a $700,000,000 contract to provide services for the EPA. In 2005, an $800,000,000 contract to provide services for HUD was split with EDS after a protracted battle. In 2005, a 10-year contract with the FAA for operating Automated Flight Service Stations (AFSS). In 2006, a 6-year, $305,000,000 contract for the FBI Sentinel program. Products E-STARS workflow management software. CIO-SP2 contract vehicle. Interrogation controversy As part of the ACS and Sytex acquisitions, Lockheed Martin became a contractor for military interrogation. Some of the Sytex interrogators have been linked to the Guantanamo Bay, Bagram, and Abu Ghraib torture and prisoner abuse scandals. In 2004, the GSA was reported to have begun investigating Lockheed's interrogation contracts. 
References External links Acquisitions and Contracts Yahoo Finance Reference "Eagle Industry Day" at LMIT Article on LMIT/FAA contract Lockheed Martin Aspen Systems SSA contract note LM announcement of Sentinel contract LM announcement of British 2000 census contract Details on LM's involvement in British census effort OAO redirector E-STARS testimonial by Adobe Hanford contract LM announcement of CDC contract CIO-SP2 website Article on LMIT/EPA contract Article that mentions a Pentagon contract from 2004 HUD contract LMCO announcement of ACS deal in 2003 ACS deal renewed Interrogation Sytex / Lockheed Martin CorpWatch interrogation article "Meet the New Interrogators: Lockheed Martin" GSA investigates Lockheed "Haunted by Abu Ghraib" "An Interrogator Speaks Out" Lockheed Martin Information technology consulting firms of the United States
16372210
https://en.wikipedia.org/wiki/Mars%20trojan
Mars trojan
The Mars trojans are a group of Trojan objects that share the orbit of the planet Mars around the Sun. They can be found around the two Lagrangian points 60° ahead of and behind Mars. The origin of the Mars trojans is not well understood. One theory suggests that they were primordial objects left over from the formation of Mars that were captured in its Lagrangian points as the Solar System was forming. However, spectral studies of the Mars trojans indicate this may not be the case. Another explanation involves asteroids chaotically wandering into the Mars Lagrangian points later in the Solar System's formation. This is also questionable considering the short dynamical lifetimes of these objects. The spectra of Eureka and two other Mars trojans indicate an olivine-rich composition. Since olivine-rich objects are rare in the asteroid belt, it has been suggested that some of the Mars trojans are captured debris from a large orbit-altering impact on Mars when it encountered a planetary embryo. Presently, this group contains 14 asteroids confirmed to be stable Mars trojans by long-term numerical simulations, but only nine of them are accepted by the Minor Planet Center. Due to close orbital similarities, most of the smaller members of the L5 group are hypothesized to be fragments of Eureka that were detached after it was spun up by the YORP effect (Eureka's rotational period is 2.69 h). The L4 trojan 1999 UJ7 has a much longer rotational period of ~50 h, apparently due to a chaotic rotation that prevents YORP spinup. Of the named members, 5261 Eureka (1990 MB) sits at the trailing (L5) point. See also Trojan (celestial body) Minor planets that orbit near trojan points Earth trojan Jupiter trojan Neptune trojan References
28842378
https://en.wikipedia.org/wiki/TravelSky
TravelSky
TravelSky Technology Limited is the dominant provider of information technology services to the Chinese air travel and tourism industries. Its clients include airlines, airports, air travel suppliers, travel agencies, individual and corporate travel consumers and cargo services. It is listed on the Hong Kong Stock Exchange, and its majority shareholder or parent group is the China TravelSky Holding Company, a State-owned enterprise (SOE). History Originally "a government unit staffed with only dozens of people", the organisation has grown with Chinese aviation reform to represent a group of companies in multiple countries. Corporate structure TravelSky Technology Limited was listed on the Hong Kong Stock Exchange in July 2008 with stock code 0696.HK; its shares also trade in New York under the symbol TSYHY. Its majority shareholder is the China TravelSky Holding Company, which itself is a national enterprise under the State-owned Assets Supervision and Administration Commission of the State Council; TravelSky Technology Limited is the group's core holding. Headquarters TravelSky is headquartered in Beijing, with over 7,255 employees. Subsidiaries TravelSky has 12 branches, 18 subsidiaries (including subsidiaries in Hong Kong, Japan, Singapore and South Korea), and 8 affiliated companies. Customers The Group's travel service distribution network comprises more than 70,000 sales terminals owned by more than 8,000 travel agencies and travel service distributors, with high-level networking and direct links to all Global Distribution Systems (GDSs) around the world and 137 foreign and regional commercial airlines through SITA networks, covering over 400 domestic and overseas cities. The Group rendered technology support and localized services to travel agencies and travel service distributors through more than 40 local distribution centres across China and 9 overseas distribution centres across Asia, Europe, North America and Australia. The network processed over 422.4 million transactions during 2016, with its transaction amount reaching RMB424.8 billion. Services AIT TravelSky's aviation information technology (AIT) services are used by 38 commercial airlines in the PRC and more than 350 foreign and regional commercial airlines. They support electronic travel distribution (ETD), including Inventory Control (ICS) and Computer Reservations (CRS), Airport Passenger Processing (APP), airline alliances, e-tickets and e-commerce, decision-making, and ground operational efficiency. Electronic travel distribution Electronic travel distribution (ETD) services include: Inventory Control System (ICS) Computer Reservation System (CRS) Airport Passenger Processing (APP) ETD services include data services to support decisions of commercial airlines, product services to support airline alliances, solutions for developing commercial airlines' e-ticket and e-commerce business, as well as information management systems to improve ground operational efficiency of commercial airlines and airports. Transaction volume 2010 In August 2010, TravelSky processed 27,664,743 bookings on Chinese commercial airlines, and 1,076,297 bookings on foreign and regional commercial airlines. 2009 In August 2009, TravelSky processed 23,615,669 bookings on Chinese commercial airlines, and 841,025 bookings on foreign and regional commercial airlines. 2006 TravelSky's ETD system processed approximately 173.0 million bookings on domestic and overseas commercial airlines in 2006. 
Electronic tickets sold by domestic airlines through the Company's BSP (Billing and Settlement Plan) electronic ticketing, Airline Directsale electronic ticketing and Airline Online electronic ticketing amounted to approximately 71.6 million segments in 2006. Electronic tickets amounted to 81.4% of all tickets in 2006, making China the second largest user of electronic tickets in the world, after the United States. In 2006, the new generation APP (NewAPP) front system developed by the company was further installed in several domestic airports including Beijing Capital Airport and Pudong Airport in Shanghai. As a result, 45 domestic airports are now using the Company's NewAPP front system, which helped establish its leading position at large and medium domestic airports. In 2006, overseas and regional commercial airlines using the company's APP systems increased to 29. Together with access by overseas and regional commercial airlines to the company's multi-host connecting program, a total of 2.4 million passenger departures were processed. The Group's travel distribution network comprises approximately 58 thousand sales terminals owned by more than 6,500 travel agencies or travel service distributors, with high-level networking and direct links to all GDS around the world and 29 foreign and regional commercial airlines through SITA networks, covering over 400 domestic and overseas cities. The Group rendered technology support and localised services to travel agencies and travel service distributors through more than 30 local distribution centers across China and four overseas distribution centers in Hong Kong, Singapore, Japan and South Korea. In 2006, the group continued to refine its hotel distribution system, cooperating with upstream travel product providers and downstream travel service distributors. Throughout the year, the Company distributed 230.4 thousand hotel room-nights, representing a year-on-year increase of 4.5 times. In 2006, capturing opportunities arising from the increasing demand for aviation information security in China, the Group expanded its information technology integration service to promote its business in the field. On January 1, 2001, InfoSky, the joint venture between the company and SITA, commenced operation. InfoSky introduced and developed a series of creative technical products for air logistics enterprises such as airlines, airport cargo terminals, freight forwarders and logistics service providers. InfoSky offers cargo system services for more than 11 airlines and 15 airports. The H shares of the company were listed on the Stock Exchange of Hong Kong Limited on February 7, 2001, trade code 0696, and the net proceeds from the issuance of H shares amounted to approximately HK$1.2 billion. After the company was listed, it received much recognition from investors and was ranked 'Best Run' by Hong Kong Exchange. Achievements 2002 TravelSky was nominated as one of the Deloitte Touche Tohmatsu Technology Fast 500; with a net income growth rate of 38%, TravelSky was ranked 218th on the list. The United States Securities and Exchange Commission approved the company's Sponsored Level 1 American depositary receipt programme. 2001 In 2001 Forbes recognised TravelSky, along with only seven other listed entities, as one of the 200 best small-scale enterprises in the world. 
Commercial outlook According to its website in 2010, TravelSky is focused on diversification in the travel market, including travel-related rental and hotel reservation. References External links TravelSky Technology - English website Companies listed on the Hong Kong Stock Exchange Online companies of China Travel technology
458524
https://en.wikipedia.org/wiki/EMV
EMV
EMV is a payment method based upon a technical standard for smart payment cards and for payment terminals and automated teller machines which can accept them. EMV originally stood for "Europay, Mastercard, and Visa", the three companies that created the standard. EMV cards are smart cards, also called chip cards, integrated circuit cards, or IC cards which store their data on integrated circuit chips, in addition to magnetic stripes for backward compatibility. These include cards that must be physically inserted or "dipped" into a reader, as well as contactless cards that can be read over a short distance using near-field communication technology. Payment cards which comply with the EMV standard are often called chip and PIN or chip and signature cards, depending on the authentication methods employed by the card issuer, such as a personal identification number (PIN) or digital signature. There are standards based on ISO/IEC 7816 for contact cards, and standards based on ISO/IEC 14443 for contactless cards (Mastercard Contactless, Visa PayWave, American Express ExpressPay). In February 2010, computer scientists from Cambridge University demonstrated that an implementation of EMV PIN entry is vulnerable to a man-in-the-middle attack but only implementations where the PIN was validated offline were vulnerable. History Until the introduction of Chip & PIN, all face-to-face credit or debit card transactions involved the use of a magnetic stripe or mechanical imprint to read and record account data, and a signature for purposes of identity verification. The customer hands their card to the cashier at the point of sale who then passes the card through a magnetic reader or makes an imprint from the raised text of the card. In the former case, the system verifies account details and prints a slip for the customer to sign. In the case of a mechanical imprint, the transaction details are filled in, a list of stolen numbers is consulted, and the customer signs the imprinted slip. In both cases the cashier must verify that the customer's signature matches that on the back of the card to authenticate the transaction. Using the signature on the card as a verification method has a number of security flaws, the most obvious being the relative ease with which cards may go missing before their legitimate owners can sign them. Another involves the erasure and replacement of legitimate signature, and yet another involves the forgery of the correct signature. The invention of the silicon integrated circuit chip in 1959 led to the idea of incorporating it onto a plastic smart card in the late 1960s by two German engineers, Helmut Gröttrup and Jürgen Dethloff. The earliest smart cards were introduced as calling cards in the 1970s, before later being adapted for use as payment cards. Smart cards have since used MOS integrated circuit chips, along with MOS memory technologies such as flash memory and EEPROM (electrically erasable programmable read-only memory). The first standard for smart payment cards was the Carte Bancaire B0M4 from Bull-CP8 deployed in France in 1986, followed by the B4B0' (compatible with the M4) deployed in 1989. Geldkarte in Germany also predates EMV. EMV was designed to allow cards and terminals to be backwardly compatible with these standards. France has since migrated all its card and terminal infrastructure to EMV. EMV originally stood for Europay, Mastercard, and Visa, the three companies that created the standard. 
The standard is now managed by EMVCo, a consortium with control split equally among Visa, Mastercard, JCB, American Express, China UnionPay, and Discover. EMVCo also refers to "Associates," companies able to provide input and receive feedback on detailed technical and operational issues connected to the EMV specifications and related processes. JCB joined the consortium in February 2009, China UnionPay in May 2013, and Discover in September 2013. Differences and benefits There are two major benefits to moving to smart-card-based credit card payment systems: improved security (with associated fraud reduction), and the possibility for finer control of "offline" credit-card transaction approvals. One of the original goals of EMV was to provide for multiple applications on a card: for a credit and debit card application or an e-purse. New issue debit cards in the US contain two applications — a card association (Visa, Mastercard etc.) application, and a common debit application. The common debit application ID is somewhat of a misnomer as each "common" debit application actually uses the resident card association application. EMV chip card transactions improve security against fraud compared to magnetic stripe card transactions that rely on the holder's signature and visual inspection of the card to check for features such as hologram. The use of a PIN and cryptographic algorithms such as Triple DES, RSA and SHA provide authentication of the card to the processing terminal and the card issuer's host system. The processing time is comparable to online transactions, in which communications delay accounts for the majority of the time, while cryptographic operations at the terminal take comparatively little time. The supposed increased protection from fraud has allowed banks and credit card issuers to push through a "liability shift", such that merchants are now liable (as of 1 January 2005 in the EU region and 1 October 2015 in the US) for any fraud that results from transactions on systems that are not EMV-capable. The majority of implementations of EMV cards and terminals confirm the identity of the cardholder by requiring the entry of a personal identification number (PIN) rather than signing a paper receipt. Whether or not PIN authentication takes place depends upon the capabilities of the terminal and programming of the card. When credit cards were first introduced, merchants used mechanical rather than magnetic portable card imprinters that required carbon paper to make an imprint. They did not communicate electronically with the card issuer, and the card never left the customer's sight. The merchant had to verify transactions over a certain currency limit by telephoning the card issuer. During the 1970s in the United States, many merchants subscribed to a regularly-updated list of stolen or otherwise invalid credit card numbers. This list was commonly printed in booklet form on newsprint, in numerical order, much like a slender phone book, yet without any data aside from the list of invalid numbers. Checkout cashiers were expected to thumb through this booklet each and every time a credit card was presented for payment of any amount, prior to approving the transaction, which incurred a short delay. Later, equipment electronically contacted the card issuer, using information from the magnetic stripe to verify the card and authorize the transaction. This was much faster than before, but required the transaction to occur in a fixed location. 
Consequently, if the transaction did not take place near a terminal (in a restaurant, for example) the clerk or waiter had to take the card away from the customer and to the card machine. It was easily possible at any time for a dishonest employee to swipe the card surreptitiously through a cheap machine that instantly recorded the information on the card and stripe; in fact, even at the terminal, a thief could bend down in front of the customer and swipe the card on a hidden reader. This made illegal cloning of cards relatively easy, and a more common occurrence than before. Since the introduction of payment card Chip and PIN, cloning of the chip is not feasible; only the magnetic stripe can be copied, and a copied card cannot be used by itself on a terminal requiring a PIN. The introduction of Chip and PIN coincided with wireless data transmission technology becoming inexpensive and widespread. In addition to mobile-phone-based magnetic readers, merchant personnel can now bring wireless PIN pads to the customer, so the card is never out of the cardholder's sight. Thus, both chip-and-PIN and wireless technologies can be used to reduce the risks of unauthorized swiping and card cloning. Chip and PIN versus chip and signature Chip and PIN is one of the two verification methods that EMV enabled cards can employ. Rather than physically signing a receipt for identification purposes, the user just enters a personal identification number (PIN), typically of 4 to 6 digits in length. This number must correspond to the information stored on the chip. Chip and PIN technology makes it much harder for fraudsters to use a found card, so if someone steals a card, they can't make fraudulent purchases unless they know the PIN. Chip and signature, on the other hand, differentiates itself from chip and PIN by verifying a consumer's identity with a signature. As of 2015, chip and signature cards are more common in the US, Mexico, parts of South America (such as Argentina, Colombia, Peru) and some Asian countries (such as Taiwan, Hong Kong, Thailand, South Korea, Singapore, and Indonesia), whereas chip and PIN cards are more common in most European countries (e.g., the UK, Ireland, France, Portugal, Finland and the Netherlands) as well as in Iran, Brazil, Venezuela, India, Sri Lanka, Canada, Australia and New Zealand. Online, phone, and mail order transactions While EMV technology has helped reduce crime at the point of sale, fraudulent transactions have shifted to more vulnerable telephone, Internet, and mail order transactions—known in the industry as card-not-present or CNP transactions. CNP transactions made up at least 50% of all credit card fraud. Because of physical distance, it is not possible for the merchant to present a keypad to the customer in these cases, so alternatives have been devised, including Software approaches for online transactions that involve interaction with the card-issuing bank or network's website, such as Verified by Visa and Mastercard SecureCode (implementations of Visa's 3-D Secure protocol). 3-D Secure is now being replaced by Strong Customer Authentication as defined in the European Second Payment Services Directive. Creating a one-time virtual card linked to a physical card with a given maximum amount. Additional hardware with keypad and screen that can produce a one-time password, such as the Chip Authentication Program. Keypad and screen integrated into complex cards to produce a one-time password. 
Since 2008, Visa has been running pilot projects using the Emue card where the generated number replaces the code printed on the back of standard cards. Commands ISO/IEC 7816-3 defines the transmission protocol between chip cards and readers. Using this protocol, data is exchanged in application protocol data units (APDUs). This comprises sending a command to a card, the card processing it, and sending a response. EMV uses the following commands: application block application unblock card block external authenticate (7816-4) generate application cryptogram get data (7816-4) get processing options internal authenticate (7816-4) PIN change / unblock read record (7816-4) select (7816-4) verify (7816-4). Commands followed by "7816-4" are defined in ISO/IEC 7816-4 and are interindustry commands used for many chip card applications such as GSM SIM cards. Transaction flow An EMV transaction has the following steps: Application selection Initiate application processing Read application data Processing restrictions Offline data authentication Certificates Cardholder verification Terminal risk management Terminal action analysis First card action analysis Online transaction authorization (only carried out if required by the result of the previous steps; mandatory in ATMs) Second card action analysis Issuer script processing. Application selection ISO/IEC 7816 defines a process for application selection. The intent of application selection was to let cards contain completely different applications—for example GSM and EMV. However, EMV developers implemented application selection as a way of identifying the type of product, so that all product issuers (Visa, Mastercard, etc.) must have their own application. The way application selection is prescribed in EMV is a frequent source of interoperability problems between cards and terminals. Book 1 of the EMV standard devotes 15 pages to describing the application selection process. An application identifier (AID) is used to address an application in the card or Host Card Emulation (HCE) if delivered without a card. An AID consists of a registered application provider identifier (RID) of five bytes, which is issued by the ISO/IEC 7816-5 registration authority. This is followed by a proprietary application identifier extension (PIX), which enables the application provider to differentiate among the different applications offered. The AID is printed on all EMV cardholder receipts. Card issuers can alter the application name from the name of the card network. Chase, for example, renames the Visa application on its Visa cards to "CHASE VISA", and the Mastercard application on its Mastercard cards to "CHASE MASTERCARD". Capital One renames the Mastercard application on its Mastercard cards to "CAPITAL ONE", and the Visa application on its Visa cards to "CAPITAL ONE VISA". The applications are otherwise the same. List of applications: Initiate application processing The terminal sends the get processing options command to the card. When issuing this command, the terminal supplies the card with any data elements requested by the card in the processing options data objects list (PDOL). The PDOL (a list of tags and lengths of data elements) is optionally provided by the card to the terminal during application selection. The card responds with the application interchange profile (AIP), a list of functions to perform in processing the transaction. 
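To make the framing of these commands concrete before turning to the rest of the card's response, the sketch below (Python, a minimal illustration rather than a terminal implementation) builds the two APDUs just described: a SELECT by AID followed by GET PROCESSING OPTIONS. The AID shown is the widely documented Visa value and serves only as an example, and the card is assumed to have returned an empty PDOL, so the GPO data field is just an empty tag 0x83 template.

def build_apdu(cla, ins, p1, p2, data=b"", le=0x00):
    """Frame a short-form command APDU as described in ISO/IEC 7816-4."""
    apdu = bytes([cla, ins, p1, p2])
    if data:
        apdu += bytes([len(data)]) + data      # Lc field, then the command data
    apdu += bytes([le])                        # Le field: 0x00 means "up to 256 bytes expected"
    return apdu

# SELECT the payment application by its AID (RID A0 00 00 00 03 plus PIX 10 10).
aid = bytes.fromhex("A0000000031010")          # example AID only
select_cmd = build_apdu(0x00, 0xA4, 0x04, 0x00, data=aid)

# GET PROCESSING OPTIONS: the PDOL-related data is wrapped in a tag 0x83 template;
# an empty PDOL is answered with the two bytes 83 00.
pdol_related_data = b""                        # assume the card requested nothing
gpo_cmd = build_apdu(0x80, 0xA8, 0x00, 0x00,
                     data=bytes([0x83, len(pdol_related_data)]) + pdol_related_data)

print(select_cmd.hex(" "))                     # 00 a4 04 00 07 a0 00 00 00 03 10 10 00
print(gpo_cmd.hex(" "))                        # 80 a8 00 00 02 83 00 00

A real terminal would instead parse the PDOL out of the SELECT response, fill in each requested data element (amount, terminal country code, unpredictable number, and so on), and place those bytes inside the tag 0x83 template before issuing the command.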
The card also provides the application file locator (AFL), a list of files and records that the terminal needs to read from the card. Read application data Smart cards store data in files. The AFL contains the files that contain EMV data. These all must be read using the read record command. EMV does not specify which files data is stored in, so all the files must be read. Data in these files is stored in BER TLV format. EMV defines tag values for all data used in card processing. Processing restrictions The purpose of the processing restrictions is to see if the card should be used. Three data elements read in the previous step are checked: Application version number, Application usage control (This shows whether the card is only for domestic use, etc.), Application effective/expiration dates checking. If any of these checks fails, the card is not necessarily declined. The terminal sets the appropriate bit in the terminal verification results (TVR), the components of which form the basis of an accept/decline decision later in the transaction flow. This feature lets, for example, card issuers permit cardholders to keep using expired cards after their expiry date, but for all transactions with an expired card to be performed on-line. Offline data authentication (ODA) Offline data authentication is a cryptographic check to validate the card using public-key cryptography. There are three different processes that can be undertaken depending on the card: Static data authentication (SDA) ensures data read from the card has been signed by the card issuer. This prevents modification of data, but does not prevent cloning. Dynamic data authentication (DDA) provides protection against modification of data and cloning. Combined DDA/generate application cryptogram (CDA) combines DDA with the generation of a card's application cryptogram to assure card validity. Support of CDA in devices may be needed, as this process has been implemented in specific markets. This process is not mandatory in terminals and can only be carried out where both card and terminal support it. EMV certificates To verify the authenticity of payment cards, EMV certificates are used. The EMV Certificate Authority issues digital certificates to payment card issuers. When requested, the payment card chip provides the card issuer's public key certificate and SSAD to the terminal. The terminal retrieves the CA's public key from local storage and uses it to confirm trust for the CA and, if trusted, to verify the card issuer's public key was signed by the CA. If the card issuer's public key is valid, the terminal uses the card issuer's public key to verify the card's SSAD was signed by the card issuer. Cardholder verification Cardholder verification is used to evaluate whether the person presenting the card is the legitimate cardholder. There are many cardholder verification methods (CVMs) supported in EMV. They are Signature Offline plaintext PIN Offline enciphered PIN Offline plaintext PIN and signature Offline enciphered PIN and signature Online PIN No CVM required Consumer Device CVM Fail CVM processing The terminal uses a CVM list read from the card to determine the type of verification to perform. The CVM list establishes a priority of CVMs to use relative to the capabilities of the terminal. Different terminals support different CVMs. ATMs generally support online PIN. POS terminals vary in their CVM support depending on type and country. 
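The priority-ordered matching just described can be illustrated with a small sketch. It is a deliberately simplified model under assumed inputs: the CVM list is shown already decoded into method names, the terminal capability sets are invented, and the condition codes, amount thresholds and "apply succeeding CVM" flags that a real kernel must evaluate are ignored.

# Hypothetical, simplified CVM selection: choose the first card-preferred method
# that the terminal supports; real EMV kernels also check each entry's condition
# code and its "apply next CVM on failure" bit.
CARD_CVM_LIST = ["offline_enciphered_pin", "offline_plaintext_pin", "online_pin",
                 "signature", "no_cvm"]

TERMINAL_CAPABILITIES = {
    "attended_pos": {"offline_plaintext_pin", "online_pin", "signature", "no_cvm"},
    "atm": {"online_pin"},
    "unattended_kiosk": {"no_cvm"},
}

def select_cvm(card_cvm_list, terminal_type):
    supported = TERMINAL_CAPABILITIES[terminal_type]
    for method in card_cvm_list:
        if method in supported:
            return method
    return "cvm_failed"                        # no mutually supported method

for terminal in TERMINAL_CAPABILITIES:
    print(terminal, "->", select_cvm(CARD_CVM_LIST, terminal))

Run as written, the attended terminal falls back to offline plaintext PIN (it cannot do enciphered PIN in this invented capability set), the ATM picks online PIN, and the kiosk ends up with no CVM, which mirrors how the same card can be verified differently depending on where it is presented.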
For offline enciphered PIN methods, the terminal encrypts the cleartext PIN block with the card's public key before sending it to the card with the Verify command. For the online PIN method, the cleartext PIN block is encrypted by the terminal using its point-to-point encryption key before sending it to the acquirer processor in the authorization request message. In 2017, EMVCo added support for biometric verification methods in version 4.3 of the EMV specifications. Terminal risk management Terminal risk management is only performed in devices where there is a decision to be made whether a transaction should be authorised on-line or offline. If transactions are always carried out on-line (e.g., ATMs) or always off-line, this step can be skipped. Terminal risk management checks the transaction amount against an offline ceiling limit (above which transactions should be processed on-line). It is also possible to force an occasional transaction online at random (a "1 in n" online counter) and to check the card against a hot card list (which is only necessary for off-line transactions). If the result of any of these tests is positive, the terminal sets the appropriate bit in the terminal verification results (TVR). Terminal action analysis The results of previous processing steps are used to determine whether a transaction should be approved offline, sent online for authorization, or declined offline. This is done using a combination of data objects known as terminal action codes (TACs) held in the terminal and issuer action codes (IACs) read from the card. The TAC is logically OR'd with the IAC, to give the transaction acquirer a level of control over the transaction outcome. Both types of action code take the values Denial, Online, and Default. Each action code contains a series of bits which correspond to the bits in the terminal verification results (TVR), and are used in the terminal's decision whether to accept, decline or go on-line for a payment transaction (a simplified sketch of this combination appears below). The TAC is set by the card acquirer; in practice card schemes advise the TAC settings that should be used for a particular terminal type depending on its capabilities. The IAC is set by the card issuer; some card issuers may decide that expired cards should be rejected, by setting the appropriate bit in the Denial IAC. Other issuers may want the transaction to proceed on-line so that they can in some cases allow these transactions to be carried out. An online-only device such as an ATM always attempts to go on-line with the authorization request, unless declined off-line due to issuer action codes—Denial settings. During IAC—Denial and TAC—Denial processing, for an online-only device, the only relevant terminal verification results bit is "Service not allowed". When an online-only device performs IAC—Online and TAC—Online processing, the only relevant TVR bit is "Transaction value exceeds the floor limit". Because the floor limit is set to zero, the transaction should always go online and all other values in TAC—Online or IAC—Online are irrelevant. Online-only devices do not need to perform IAC-default processing. First card action analysis One of the data objects read from the card in the Read application data stage is CDOL1 (Card Data Object List). This object is a list of tags that the card wants to be sent to it to make a decision on whether to approve or decline a transaction (including transaction amount, but many other data objects too). The terminal sends this data and requests a cryptogram using the generate application cryptogram command. 
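As a rough sketch of the terminal action analysis described above, and of the decision that determines which cryptogram the terminal asks for next, the fragment below ORs each issuer action code with the matching terminal action code and tests the result against the TVR. The bit names and widths are invented for illustration; real TVR, TAC and IAC values are five-byte fields whose bit assignments are defined in the EMV specifications, and the Default action codes (consulted when a terminal that wants to go online cannot) are omitted here.

# Illustrative one-byte flags standing in for the real five-byte TVR/TAC/IAC fields.
ODA_FAILED           = 0b00000001   # offline data authentication failed
CARD_EXPIRED         = 0b00000010
FLOOR_LIMIT_EXCEEDED = 0b00000100
SERVICE_NOT_ALLOWED  = 0b00001000

def terminal_action_analysis(tvr, tac, iac):
    """Return 'decline', 'online' or 'approve_offline' for a given TVR."""
    if tvr & (tac["denial"] | iac["denial"]):
        return "decline"                     # the terminal would ask the card for an AAC
    if tvr & (tac["online"] | iac["online"]):
        return "online"                      # the terminal would ask for an ARQC
    return "approve_offline"                 # the terminal would ask for a TC

tac = {"denial": SERVICE_NOT_ALLOWED, "online": FLOOR_LIMIT_EXCEEDED}
iac = {"denial": CARD_EXPIRED, "online": ODA_FAILED | FLOOR_LIMIT_EXCEEDED}

for tvr in (0, FLOOR_LIMIT_EXCEEDED, CARD_EXPIRED | FLOOR_LIMIT_EXCEEDED):
    print(f"TVR={tvr:08b} -> {terminal_action_analysis(tvr, tac, iac)}")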
Depending on the terminal's decision (offline, online, decline), the terminal requests one of the following cryptograms from the card: Transaction certificate (TC)—Offline approval Authorization Request Cryptogram (ARQC)—Online authorization Application Authentication Cryptogram (AAC)—Offline decline. This step gives the card the opportunity to accept the terminal's action analysis or to decline a transaction or force a transaction on-line. The card cannot return a TC when an ARQC has been asked for, but can return an ARQC when a TC has been asked for. Online transaction authorization Transactions go online when an ARQC has been requested. The ARQC is sent in the authorisation message. The card generates the ARQC. Its format depends on the card application. EMV does not specify the contents of the ARQC. The ARQC created by the card application is a digital signature of the transaction details, which the card issuer can check in real time. This provides a strong cryptographic check that the card is genuine. The issuer responds to an authorization request with a response code (accepting or declining the transaction), an authorisation response cryptogram (ARPC) and optionally an issuer script (a string of commands to be sent to the card). ARPC processing is not performed in contact transactions processed with Visa Quick Chip for EMV and Mastercard M/Chip Fast, and in contactless transactions across schemes because the card is removed from the reader after the ARQC has been generated. Second card action analysis CDOL2 (Card data object list) contains a list of tags that the card wanted to be sent after online transaction authorisation (response code, ARPC, etc.). Even if for any reason the terminal could not go online (e.g., communication failure), the terminal should send this data to the card again using the generate authorisation cryptogram command. This lets the card know the issuer's response. The card application may then reset offline usage limits. Issuer script processing If a card issuer wants to update a card post issuance it can send commands to the card using issuer script processing. Issuer scripts are meaningless to the terminal and can be encrypted between the card and the issuer to provide additional security. Issuer script can be used to block cards, or change card parameters. Issuer script processing is not available in contact transactions processed with Visa Quick Chip for EMV and Mastercard M/Chip Fast, and for contactless transactions across schemes. Control of the EMV standard The first version of EMV standard was published in 1995. Now the standard is defined and managed by the privately owned corporation EMVCo LLC. The current members of EMVCo are American Express, Discover Financial, JCB International, Mastercard, China UnionPay, and Visa Inc. Each of these organizations owns an equal share of EMVCo and has representatives in the EMVCo organization and EMVCo working groups. Recognition of compliance with the EMV standard (i.e., device certification) is issued by EMVCo following submission of results of testing performed by an accredited testing house. EMV Compliance testing has two levels: EMV Level 1, which covers physical, electrical and transport level interfaces, and EMV Level 2, which covers payment application selection and credit financial transaction processing. 
After passing common EMVCo tests, the software must be certified by payment brands to comply with proprietary EMV implementations such as Visa VSDC, American Express AEIPS, Mastercard MChip, JCB JSmart, or EMV-compliant implementations of non-EMVCo members such as LINK in the UK, or Interac in Canada. List of EMV documents and standards As of 2011, since version 4.0, the official EMV standard documents which define all the components in an EMV payment system are published as four "books" and some additional documents: Book 1: Application Independent ICC to Terminal Interface Requirements Book 2: Security and Key Management Book 3: Application Specification Book 4: Cardholder, Attendant, and Acquirer Interface Requirements Common Payment Application Specification EMV Card Personalisation Specification Versions The first EMV standard came into view in 1995 as EMV 2.0. This was upgraded to EMV 3.0 in 1996 (sometimes referred to as EMV '96) with later amendments to EMV 3.1.1 in 1998. This was further amended to version 4.0 in December 2000 (sometimes referred to as EMV 2000). Version 4.0 became effective in June 2004. Version 4.1 became effective in June 2007. Version 4.2 is in effect since June 2008. Version 4.3 is in effect since November 2011. Vulnerabilities Opportunities to harvest PINs and clone magnetic stripes In addition to the track-two data on the magnetic stripe, EMV cards generally have identical data encoded on the chip, which is read as part of the normal EMV transaction process. If an EMV reader is compromised to the extent that the conversation between the card and the terminal is intercepted, then the attacker may be able to recover both the track-two data and the PIN, allowing construction of a magnetic stripe card, which, while not usable in a Chip and PIN terminal, can be used, for example, in terminal devices that permit fallback to magstripe processing for foreign customers without chip cards, and defective cards. This attack is possible only where (a) the offline PIN is presented in plaintext by the PIN entry device to the card, where (b) magstripe fallback is permitted by the card issuer and (c) where geographic and behavioural checking may not be carried out by the card issuer. APACS, representing the UK payment industry, claimed that changes specified to the protocol (where card verification values differ between the magnetic stripe and the chip – the iCVV) rendered this attack ineffective and that such measures would be in place from January 2008. Tests on cards in February 2008 indicated this may have been delayed. Successful attacks Conversation capturing is a form of attack which was reported to have taken place against Shell terminals in May 2006, when they were forced to disable all EMV authentication in their filling stations after more than £1 million was stolen from customers. In October 2008, it was reported that hundreds of EMV card readers for use in Britain, Ireland, the Netherlands, Denmark, and Belgium had been expertly tampered with in China during or shortly after manufacture. For 9 months details and PINs of credit and debit cards were sent over mobile phone networks to criminals in Lahore, Pakistan. United States National Counterintelligence Executive Joel Brenner said, "Previously only a nation state's intelligence agency would have been capable of pulling off this type of operation. It's scary." Data were typically used a couple of months after the card transactions to make it harder for investigators to pin down the vulnerability. 
After the fraud was discovered it was found that tampered-with terminals could be identified as the additional circuitry increased their weight by about 100 g. Tens of millions of pounds sterling are believed to have been stolen. This vulnerability spurred efforts to implement better control of electronic POS devices over their entire life cycle, a practice endorsed by electronic payment security standards like those being developed by the Secure POS Vendor Alliance (SPVA). PIN harvesting and stripe cloning In a February 2008 BBC Newsnight programme Cambridge University researchers Steven Murdoch and Saar Drimer demonstrated one example attack, to illustrate that Chip and PIN is not secure enough to justify passing the liability to prove fraud from the banks onto customers. The Cambridge University exploit allowed the experimenters to obtain both card data to create a magnetic stripe and the PIN. APACS, the UK payments association, disagreed with the majority of the report, saying "The types of attack on PIN entry devices detailed in this report are difficult to undertake and not currently economically viable for a fraudster to carry out." They also said that changes to the protocol (specifying different card verification values between the chip and magnetic stripe – the iCVV) would make this attack ineffective from January 2008. The fraud reported in October 2008 to have operated for 9 months (see above) was probably in operation at the time, but was not discovered for many months. In August 2016, NCR (payment technology company) computer security researchers showed how credit card thieves can rewrite the code of a magnetic strip to make it appear like a chipless card, which allows for counterfeiting. 2010: Hidden hardware disables PIN checking on stolen card On 11 February 2010 Murdoch and Drimer's team at Cambridge University announced that they had found "a flaw in chip and PIN so serious they think it shows that the whole system needs a re-write" that was "so simple that it shocked them". A stolen card is connected to an electronic circuit and to a fake card which is inserted into the terminal ("man-in-the-middle attack"). Any four digits are typed in and accepted as a valid PIN. A team from the BBC's Newsnight programme visited a Cambridge University cafeteria (with permission) with the system, and were able to pay using their own cards (a thief would use stolen cards) connected to the circuit, inserting a fake card and typing in "0000" as the PIN. The transactions were registered as normal, and were not picked up by banks' security systems. A member of the research team said, "Even small-scale criminal systems have better equipment than we have. The amount of technical sophistication needed to carry out this attack is really quite low." The announcement of the vulnerability said, "The expertise that is required is not high (undergraduate level electronics) ... We dispute the assertion by the banking industry that criminals are not sophisticated enough, because they have already demonstrated a far higher level of skill than is necessary for this attack in their miniaturized PIN entry device skimmers." It is not known if this vulnerability has been exploited. 
EMVCo disagreed and published a response saying that, while such an attack might be theoretically possible, it would be extremely difficult and expensive to carry out successfully, that current compensating controls are likely to detect or limit the fraud, and that the possible financial gain from the attack is minimal while the risk of a declined transaction or exposure of the fraudster is significant. When approached for comment, several banks (Co-operative Bank, Barclays and HSBC) each said that this was an industry-wide issue, and referred the Newsnight team to the banking trade association for further comment. According to Phil Jones of the Consumers' Association, Chip and PIN has helped to bring down instances of card crime, but many cases remain unexplained. "What we do know is that we do have cases that are brought forward from individuals which seem quite persuasive." Because submission of the PIN is suppressed, this is the exact equivalent of a merchant performing a PIN bypass transaction. Such transactions can't succeed offline, as a card never generates an offline authorisation without a successful PIN entry. As a result of this, the transaction ARQC must be submitted online to the issuer, who knows that the ARQC was generated without a successful PIN submission (since this information is included in the encrypted ARQC) and hence would be likely to decline the transaction if it were for a high value, out of character, or otherwise outside of the typical risk management parameters set by the issuer. Originally, bank customers had to prove that they had not been negligent with their PIN before getting redress, but UK regulations in force from 1 November 2009 placed the onus firmly on the banks to prove that a customer has been negligent in any dispute, with the customer given 13 months to make a claim. Murdoch said that "[the banks] should look back at previous transactions where the customer said their PIN had not been used and the bank record showed it has, and consider refunding these customers because it could be they are victim of this type of fraud." 2011: CVM downgrade allows arbitrary PIN harvest At the CanSecWest conference in March 2011, Andrea Barisani and Daniele Bianco presented research uncovering a vulnerability in EMV that would allow arbitrary PIN harvesting despite the cardholder verification configuration of the card, even when the supported CVMs data is signed. The PIN harvesting can be performed with a chip skimmer. In essence, a CVM list that has been modified to downgrade the CVM to Offline PIN is still honoured by POS terminals, despite its signature being invalid. PIN bypass In 2020, researchers David Basin, Ralf Sasse, and Jorge Toro from ETH Zurich reported a critical security issue affecting Visa contactless cards. The issue consists of lack of cryptographic protection of critical data sent by the card to the terminal during an EMV transaction. The data in question determines the cardholder verification method (CVM, such as PIN verification) to be used for the transaction. The team demonstrated that it is possible to modify this data to trick the terminal into believing that no PIN is required because the cardholder was verified using their device (e.g. smartphone). The researchers developed a proof-of-concept Android app that effectively turns a physical Visa card into a mobile payment app (e.g. Apple Pay, Google Pay) to perform PIN-free, high-value purchases. 
The attack is carried out using two NFC-enabled smartphones, one held near the physical card and the second held near the payment terminal. The attack might affect cards by Discover and China's UnionPay but this was not demonstrated in practice, in contrast to the case of cards by Visa. In early 2021, the same team disclosed that Mastercard cards are also vulnerable to a PIN bypass attack. They showed that criminals can trick a terminal into transacting with a Mastercard contactless card while believing it to be a Visa card. This card brand mixup has critical consequences since it can be used in combination with the PIN bypass for Visa to also bypass the PIN for Mastercard cards. "Complex systems such as EMV must be analyzed by automated tools, like model checkers"'', researchers point out as the main takeaway of their findings. As opposed to humans, model-checking tools like Tamarin are up to the task since they can deal with the complexity of real-world systems like EMV. Implementation EMV originally stood for "Europay, Mastercard, and Visa", the three companies that created the standard. The standard is now managed by EMVCo, a consortium of financial companies. The most widely known chips of the EMV standard are: VIS: Visa Mastercard chip: Mastercard AEIPS: American Express UICS: China Union Pay J Smart: JCB D-PAS: Discover/Diners Club International Rupay: NPCI Verve Visa and Mastercard have also developed standards for using EMV cards in devices to support card not present transactions (CNP) over the telephone and Internet. Mastercard has the Chip Authentication Program (CAP) for secure e-commerce. Its implementation is known as EMV-CAP and supports a number of modes. Visa has the Dynamic Passcode Authentication (DPA) scheme, which is their implementation of CAP using different default values. In many countries of the world, debit card and/or credit card payment networks have implemented liability shifts. Normally, the card issuer is liable for fraudulent transactions. However, after a liability shift is implemented, if the ATM or merchant's point of sale terminal does not support EMV, the ATM owner or merchant is liable for the fraudulent transaction. Chip and PIN systems can cause problems for travellers from countries that do not issue Chip and PIN cards as some retailers may refuse to accept their chipless cards. While most terminals still accept a magnetic strip card, and the major credit card brands require vendors to accept them, some staff may refuse to take the card, under the belief that they are held liable for any fraud if the card cannot verify a PIN. Non-chip-and-PIN cards may also not work in some unattended vending machines at, for example, train stations, or self-service check-out tills at supermarkets. Africa Mastercard's liability shift among countries within this region took place on 1 January 2006. By 1 October 2010, a liability shift had occurred for all point of sale transactions. Visa's liability shift for points of sale took place on 1 January 2006. For ATMs, the liability shift took place on 1 January 2008. South Africa Mastercard's liability shift took place on 1 January 2005. Asian and Pacific countries Mastercard's liability shift among countries within this region took place on 1 January 2006. By 1 October 2010, a liability shift had occurred for all point of sale transactions, except for domestic transactions in China and Japan. Visa's liability shift for points of sale took place on 1 October 2010. 
For ATMs, the liability shift took place on 1 October 2015, except in China, India, Japan, and Thailand, where the liability shift was on 1 October 2017. Domestic ATM transactions in China are currently not subject to a liability shift deadline. Australia Mastercard required that all point of sale terminals be EMV capable by April 2013. For ATMs, the liability shift took place in April 2012. ATMs must be EMV compliant by the end of 2015. Visa's liability shift for ATMs took place 1 April 2013. Malaysia Malaysia was the first country in the world to completely migrate to EMV-compliant smart cards, two years after its implementation in 2005. New Zealand Mastercard required all point of sale terminals to be EMV compliant by 1 July 2011. For ATMs, the liability shift took place in April 2012. ATMs are required to be EMV compliant by the end of 2015. Visa's liability shift for ATMs was 1 April 2013. Europe Mastercard's liability shift took place on 1 January 2005. Visa's liability shift for points of sale took place on 1 January 2006. For ATMs, the liability shift took place on 1 January 2008. France has cut card fraud by more than 80% since the introduction of chip cards in 1992 (see Carte Bleue). United Kingdom Chip and PIN was trialled in Northampton, England from May 2003, and as a result was rolled out nationwide in the United Kingdom on 14 February 2006 with advertisements in the press and on national television touting the "Safety in Numbers" slogan. During the first stages of deployment, if a fraudulent magnetic swipe card transaction was deemed to have occurred, the retailer was refunded by the issuing bank, as was the case prior to the introduction of Chip and PIN. From 1 January 2005, the liability for such transactions was shifted to the retailer; this acted as an incentive for retailers to upgrade their point of sale (PoS) systems, and most major high-street chains upgraded on time for the EMV deadline. Many smaller businesses were initially reluctant to upgrade their equipment, as it required a completely new PoS system—a significant investment. New cards featuring both magnetic strips and chips are now issued by all major banks. The replacement of pre-Chip and PIN cards was a major issue, as banks simply stated that consumers would receive their new cards "when their old card expires" — despite many people having had cards with expiry dates as late as 2007. The card issuer Switch lost a major contract with HBOS to Visa, as they were not ready to issue the new cards as early as the bank wanted. The Chip and PIN implementation was criticised as designed to reduce the liability of banks in cases of claimed card fraud by requiring the customer to prove that they had acted "with reasonable care" to protect their PIN and card, rather than on the bank having to prove that the signature matched. Before Chip and PIN, if a customer's signature was forged, the banks were legally liable and had to reimburse the customer. Until 1 November 2009 there was no such law protecting consumers from fraudulent use of their Chip and PIN transactions, only the voluntary Banking Code. There were many reports that banks refused to reimburse victims of fraudulent card use, claiming that their systems could not fail under the circumstances reported, despite several documented successful large-scale attacks. The Payment Services Regulations 2009 came into force on 1 November 2009 and shifted the onus onto the banks to prove, rather than assume, that the cardholder is at fault. 
The Financial Services Authority (FSA) said "It is for the bank, building society or credit card company to show that the transaction was made by you, and there was no breakdown in procedures or technical difficulty" before refusing liability. Latin America and the Caribbean Mastercard's liability shift among countries within this region took place on 1 January 2005. Visa's liability shift for points of sale took place on 1 October 2012, for any countries in this region that had not already implemented a liability shift. For ATMs, the liability shift took place on 1 October 2014, for any countries in this region that had not already implemented a liability shift. Brazil Mastercard's liability shift took place on 1 March 2008. Visa's liability shift for points of sale took place on 1 April 2011. For ATMs, the liability shift took place on 1 October 2012. Colombia Mastercard's liability shift took place on 1 October 2008. Mexico Discover implemented a liability shift on 1 October 2015. For pay at the pump at gas stations, the liability shift was on 1 October 2017. Visa's liability shift for points of sale took place on 1 April 2011. For ATMs, the liability shift took place on 1 October 2012. Venezuela Mastercard's liability shift took place on 1 July 2009. Middle East Mastercard's liability shift among countries within this region took place on 1 January 2006. By 1 October 2010, a liability shift had occurred for all point of sale transactions. Visa's liability shift for points of sale took place on 1 January 2006. For ATMs, the liability shift took place on 1 January 2008. North America Canada American Express implemented a liability shift on 31 October 2012. Discover implemented a liability shift on 1 October 2015 for all transactions except pay-at-the-pump at gas stations; those transactions shifted on 1 October 2017. Interac (Canada's debit card network) stopped processing non-EMV transactions at ATMs on 31 December 2012, and mandated EMV transactions at point-of-sale terminals on 30 September 2016, with a liability shift taking place on 31 December 2015. Mastercard implemented domestic transaction liability shift on 31 March 2011, and international liability shift on 15 April 2011. For pay at the pump at gas stations, the liability shift was implemented 31 December 2012. Visa implemented domestic transaction liability shift on 31 March 2011, and international liability shift on 31 October 2010. For pay at the pump at gas stations, the liability shift was implemented 31 December 2012. Over a 5-year period post-EMV migration, domestic card-card present fraudulent transactions significantly reduced in Canada. According to Helcim's reports, card-present domestic debit card fraud reduced 89.49% and credit card fraud 68.37%. United States After widespread identity theft due to weak security in the point-of-sale terminals at Target, Home Depot, and other major retailers, Visa, Mastercard and Discover in March 2012 – and American Express in June 2012 – announced their EMV migration plans for the United States. Since the announcement, multiple banks and card issuers have announced cards with EMV chip-and-signature technology, including American Express, Bank of America, Citibank, Wells Fargo, JPMorgan Chase, U.S. Bank, and several credit unions. In 2010, a number of companies began issuing pre-paid debit cards that incorporate Chip and PIN and allow Americans to load cash as euros or pound sterling. 
United Nations Federal Credit Union was the first United States issuer to offer Chip and PIN credit cards. In May 2010, a press release from Gemalto (a global EMV card producer) indicated that United Nations Federal Credit Union in New York would become the first EMV card issuer in the United States, offering an EMV Visa credit card to its customers. JPMorgan was the first major bank to introduce a card with EMV technology, namely its Palladium card, in mid-2012. As of April 2016, 70% of U.S. consumers have EMV cards and as of December 2016 roughly 50% of merchants are EMV compliant. However, deployment has been slow and inconsistent across vendors. Even merchants with EMV hardware may not be able to process chip transactions due to software or compliance deficiencies. Bloomberg has also cited issues with software deployment, including changes to audio prompts for Verifone machines, which can take several months to release and deploy. Industry experts, however, expect more standardization of software deployment in the United States. Visa and Mastercard have both implemented standards to speed up chip transactions, with a goal of reducing the processing time to under three seconds. These systems are labelled as Visa Quick Chip and Mastercard M/Chip Fast. American Express implemented liability shift for point of sale terminals on 1 October 2015. For pay at the pump at gas stations, the liability shift is 16 April 2021. This was extended from 1 October 2020 due to complications from the coronavirus pandemic. Discover implemented liability shift on 1 October 2015. For pay at the pump at gas stations, the liability shift is 1 October 2020. Maestro implemented its liability shift on 19 April 2013 for international cards used in the United States. Mastercard implemented liability shift for point of sale terminals on 1 October 2015. For pay at the pump at gas stations, the liability shift is formally on 1 October 2020. For ATMs, the liability shift date was on 1 October 2016. Visa implemented liability shift for point of sale terminals on 1 October 2015. For pay at the pump at gas stations, the liability shift is formally on 1 October 2020. For ATMs, the liability shift date was on 1 October 2017. Notes See also Contactless payment Supply chain attack Two-factor authentication MM code References External links
1070569
https://en.wikipedia.org/wiki/Glossary%20of%20cryptographic%20keys
Glossary of cryptographic keys
This glossary lists types of keys as the term is used in cryptography, as opposed to door locks. Terms that are primarily used by the U.S. National Security Agency are marked (NSA). For classification of keys according to their usage see cryptographic key types. 40-bit key - key with a length of 40 bits, once the upper limit of what could be exported from the U.S. and other countries without a license. Considered very insecure. See key size for a discussion of this and other lengths. authentication key - Key used in a keyed-hash message authentication code, or HMAC. benign key - (NSA) a key that has been protected by encryption or other means so that it can be distributed without fear of its being stolen. Also called BLACK key. content-encryption key (CEK) - a key that may be further encrypted using a KEK, where the content may be a message, audio, image, video, executable code, etc. crypto ignition key - An NSA key storage device (KSD-64) shaped to look like an ordinary physical key. cryptovariable - NSA calls the output of a stream cipher a key or key stream. It often uses the term cryptovariable for the bits that control the stream cipher, what the public cryptographic community calls a key. data encryption key (DEK) - used to encrypt the underlying data. derived key - keys computed by applying a predetermined hash algorithm or key derivation function to a password or, better, a passphrase. DRM key - A key used in Digital Rights Management to protect media. electronic key - (NSA) key that is distributed in electronic (as opposed to paper) form. See EKMS. ephemeral key - A key that only exists within the lifetime of a communication session. expired key - Key that was issued for a use in a limited time frame (cryptoperiod in NSA parlance) which has passed and, hence, the key is no longer valid. FIREFLY key - (NSA) keys used in an NSA system based on public key cryptography. key derivation function (KDF) - function used to derive a key from a secret value, e.g. to derive KEK from Diffie-Hellman key exchange. key encryption key (KEK) - key used to protect MEK keys (or DEK/TEK if MEK is not used). key production key (KPK) - Key used to initialize a keystream generator for the production of other electronically generated keys. key fill - (NSA) loading keys into a cryptographic device. See fill device. master key - key from which all other keys (or a large group of keys) can be derived. Analogous to a physical key that can open all the doors in a building. master encryption key (MEK) - Used to encrypt the DEK/TEK key. master key encryption key (MKEK) - Used to encrypt multiple KEK keys. For example, an HSM can generate several KEK and wrap them with an MKEK before export to an external DB - such as OpenStack Barbican. one time pad (OTP or OTPad) - keying material that should be as long as the plaintext and should only be used once. If truly random and not reused it's the most secure encryption method. See one-time pad article. one time password (OTP) - One time password based on a prebuilt single use code list or based on a mathematical formula with a secret seed known to both parties, uses event or time to modify output (see TOTP/HOTP). paper key - (NSA) keys that are distributed in paper form, such as printed lists of settings for rotor machines, or keys in punched card or paper tape formats. Paper keys are easily copied. See Walker spy ring, RED key. poem key - Keys used by OSS agents in World War II in the form of a poem that was easy to remember. See Leo Marks. 
Public/private key - in public key cryptography, separate keys are used to encrypt and decrypt a message. The encryption key (public key) need not be kept secret and can be published. The decryption or private key must be kept secret to maintain confidentiality. Public keys are often distributed in a signed public key certificate. pre-placed key - (NSA) large numbers of keys (perhaps a year's supply) that are loaded into an encryption device allowing frequent key change without refill. RED key - (NSA) symmetric key in a format that can be easily copied, e.g. paper key or unencrypted electronic key. Opposite of BLACK or benign key. revoked key - a public key that should no longer be used, typically because its owner is no longer in the role for which it was issued or because it may have been compromised. Such keys are placed on a certificate revocation list or CRL. session key - key used for one message or an entire communications session. See traffic encryption key. symmetric key - a key that is used both to encrypt and decrypt a message. Symmetric keys are typically used with a cipher and must be kept secret to maintain confidentiality. traffic encryption key (TEK)/data encryption key (DEK) - a symmetric key that is used to encrypt messages. TEKs are typically changed frequently, in some systems daily and in others for every message. See session key. DEK is used to specify any data form type (in communication payloads or anywhere else). transmission security key (TSK) - (NSA) seed for a pseudorandom number generator that is used to control a radio in frequency hopping or direct-sequence spread spectrum modes. See HAVE QUICK, SINCGARS, electronic warfare. seed key - (NSA) a key used to initialize a cryptographic device so it can accept operational keys using benign transfer techniques. Also a key used to initialize a pseudorandom number generator to generate other keys. signature key - public key cryptography can also be used to electronically sign messages. The private key is used to create the electronic signature; the public key is used to verify the signature. Separate public/private key pairs must be used for signing and encryption. The former are called signature keys. stream key - the output of a stream cipher as opposed to the key (or cryptovariable in NSA parlance) that controls the cipher. training key - (NSA) unclassified key used for instruction and practice exercises. Type 1 key - (NSA) keys used to protect classified information. See Type 1 product. Type 2 key - (NSA) keys used to protect sensitive but unclassified (SBU) information. See Type 2 product. Vernam key - Type of key invented by Gilbert Vernam in 1918. See stream key. zeroized key - key that has been erased (see zeroisation). A short Python sketch illustrating how several of these key types relate in practice appears after the references below. See also Specific encryption systems and ciphers have key types associated with them, e.g. PGP key, DES key, AES key, RC4 key, BATON key, Kerberos key, etc. :Category:Cryptographic algorithms :Category:Cryptographic protocols References Schneier, Bruce. Applied Cryptography, Second Edition, John Wiley & Sons, 1996. National Information Assurance (IA) Glossary, Committee on National Security Systems, CNSS Instruction No. 4009, 2010. Link 16 Joint Key Management Plan, CJCSM 6520.01A, 2011. Cryptographic keys Key management
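As an informal illustration of how several of the key types in this glossary relate in practice, the following minimal Python sketch derives a key from a passphrase (a derived key produced by a key derivation function), uses it as an authentication key with HMAC, and then shows a data encryption key (DEK) being wrapped by a key encryption key (KEK). It is a sketch only: the first part uses the Python standard library, the wrapping part assumes the third-party "cryptography" package is installed, and all names and values (the passphrase, kek, dek, the sample payload) are purely illustrative.

import hashlib, hmac, os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party package, assumed available

# Derived key: stretch a passphrase into a fixed-length key with PBKDF2 (a key derivation function).
salt = os.urandom(16)
derived_key = hashlib.pbkdf2_hmac("sha256", b"example passphrase", salt, 200_000)

# Authentication key: authenticate a message with a keyed hash (HMAC).
tag = hmac.new(derived_key, b"message to authenticate", hashlib.sha256).digest()

# Key hierarchy: a data encryption key (DEK) protects the payload,
# and a key encryption key (KEK) protects ("wraps") the DEK itself.
kek = AESGCM.generate_key(bit_length=256)
dek = AESGCM.generate_key(bit_length=256)
n1, n2 = os.urandom(12), os.urandom(12)
ciphertext = AESGCM(dek).encrypt(n1, b"the payload", None)    # data encrypted under the DEK
wrapped_dek = AESGCM(kek).encrypt(n2, dek, None)              # DEK encrypted under the KEK

# To read the data back, unwrap the DEK with the KEK, then decrypt the payload with it.
recovered_dek = AESGCM(kek).decrypt(n2, wrapped_dek, None)
assert AESGCM(recovered_dek).decrypt(n1, ciphertext, None) == b"the payload"

In a real deployment the KEK would typically be held in an HSM or key-management service and the wrapped DEK stored alongside the ciphertext; the sketch only shows the relationships between the key roles, not a production design.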
454486
https://en.wikipedia.org/wiki/Windows%209x
Windows 9x
Windows 9x is a generic term referring to a series of Microsoft Windows computer operating systems produced from 1995 to 2000, which were based on the Windows 95 kernel and its underlying foundation of MS-DOS, both of which were updated in subsequent versions. The first version in the 9x series was Windows 95, which was succeeded by Windows 98 and then Windows Me, which was the third and last version of Windows on the 9x line, until the series was superseded by Windows XP. Windows 9x is predominantly known for its use in home desktops. In 1998, Windows made up 82% of operating system market share. Internal release versions of Windows 9x are 4.x. The internal versions for Windows 95, 98, and Me are 4.0, 4.1, and 4.9, respectively. Previous MS-DOS-based versions of Windows used version numbers of 3.2 or lower. Windows NT, which was aimed at professional users in business and network environments, used a similar but separate version number between 3.1 and 4.0. All versions of Windows from Windows XP onwards are based on the Windows NT codebase. History Windows prior to 95 The first independent version of Microsoft Windows, version 1.0, released on November 20, 1985, achieved little popularity. Its name was initially "Interface Manager", but Rowland Hanson, the head of marketing at Microsoft, convinced the company that the name Windows would be more appealing to consumers. Windows 1.0 was not a complete operating system, but rather an "operating environment" that extended MS-DOS. Consequently, it shared the inherent flaws and problems of MS-DOS. The second installment of Microsoft Windows, version 2.0, was released on December 9, 1987, and used the real-mode memory model, which confined it to a maximum of 1 megabyte of memory. In such a configuration, it could run under another multitasking system like DESQview, which used the 286 protected mode. Microsoft Windows scored a significant success with Windows 3.0, released in 1990. In addition to improved capabilities given to native applications, Windows also allowed users to better multitask older MS-DOS-based software compared to Windows/386, thanks to the introduction of virtual memory. Microsoft developed Windows 3.1, which included several minor improvements to Windows 3.0, but primarily consisted of bugfixes and multimedia support. It also excluded support for real mode, and only ran on an Intel 80286 or better processor. In November 1993 Microsoft also released Windows 3.11, a touch-up to Windows 3.1, which included all of the patches and updates that followed the release of Windows 3.1 in early 1992. Meanwhile, Microsoft continued to develop Windows NT. The main architect of the system was Dave Cutler, one of the chief architects of VMS at Digital Equipment Corporation. Microsoft hired him in August 1988 to create a successor to OS/2, but Cutler created a completely new system instead, based on his MICA project at Digital. Microsoft announced at its 1991 Professional Developers Conference its intentions to develop a successor to both Windows NT and Windows 3.1's replacement (Windows 95, code-named Chicago), which would unify the two into one operating system. This successor was codenamed Cairo. In hindsight, Cairo was a much more difficult project than Microsoft had anticipated and, as a result, NT and Chicago would not be unified until Windows XP. Windows 95 After Windows 3.11, Microsoft began to develop a new consumer-oriented version of the operating system code-named Chicago. 
Chicago was designed to have support for 32-bit preemptive multitasking, like that available in OS/2 and Windows NT, although a 16-bit kernel would remain for the sake of backward compatibility. The Win32 API first introduced with Windows NT was adopted as the standard 32-bit programming interface, with Win16 compatibility being preserved through a technique known as "thunking". A new GUI was not originally planned as part of the release, although elements of the Cairo user interface were borrowed and added as other aspects of the release (notably Plug and Play) slipped. Microsoft did not change all of the Windows code to 32-bit; parts of it remained 16-bit (albeit not directly using real mode) for reasons of compatibility, performance and development time. Additionally, it was necessary to carry over design decisions from earlier versions of Windows for reasons of backwards compatibility, even if these design decisions no longer matched a more modern computing environment. These factors immediately began to impact the operating system's efficiency and stability. Microsoft marketing adopted Windows 95 as the product name for Chicago when it was released on August 24, 1995. Microsoft went on to release five different versions of Windows 95: Windows 95 – original release Windows 95 A – included Windows 95 OSR1 slipstreamed into the installation. Windows 95 B – (OSR2) included several major enhancements, Internet Explorer (IE) 3.0 and full FAT32 file system support. Windows 95 B USB – (OSR2.1) included basic USB support. Windows 95 C – (OSR2.5) included all the above features, plus IE 4.0. This was the last 95 version produced. OSR2, OSR2.1, and OSR2.5 were not released to the general public; rather, they were available only to OEMs that would preload the OS onto computers. Some companies sold new hard drives with OSR2 preinstalled (officially justifying this as needed due to the hard drive's capacity). The first Microsoft Plus! add-on pack was sold for Windows 95. Windows 98 On June 25, 1998, Microsoft released Windows 98. It included new hardware drivers and better support for the FAT32 file system, which allows disk partitions larger than the 2 GB maximum accepted by Windows 95. The USB support in Windows 98 was more robust than the basic support provided by the OEM editions of Windows 95. It also controversially integrated the Internet Explorer 4 browser into the Windows GUI and Windows Explorer file manager. On May 5, 1999, Microsoft released Windows 98 Second Edition, an interim release whose notable features were the addition of Internet Connection Sharing and improved WDM audio and modem support. Internet Connection Sharing is a form of network address translation, allowing several machines on a LAN (Local Area Network) to share a single Internet connection. Windows 98 Second Edition has certain improvements over the original release. Hardware support through device drivers was increased. Many minor problems present in the original Windows 98 were found and fixed which make it, according to many, the most stable release of the Windows 9x family—to the extent that commentators used to say that Windows 98's beta version was more stable than Windows 95's final (gamma) version. Windows Me On September 14, 2000, Microsoft introduced Windows Me (Millennium Edition), which upgraded Windows 98 with enhanced multimedia and Internet features. 
It also introduced the first version of System Restore, which allowed users to revert their system state to a previous "known-good" point in the case of system failure. The first version of Windows Movie Maker was introduced as well. Windows Me was conceived as a quick one-year project that served as a stopgap release between Windows 98 and Whistler (soon to be renamed to Windows XP). Many of the new features were available from the Windows Update site as updates for older Windows versions. As a result, Windows Me was not acknowledged as a distinct operating system along the lines of 95 or 98, and is often included in the Windows 9x series. Windows Me was criticized by users for its instability and unreliability, due to frequent freezes and crashes. A PC World article dubbed Windows Me the "Mistake Edition" and placed it 4th in their "Worst Tech Products of All Time" feature. Because users could no longer easily boot into real-mode MS-DOS, as they could in Windows 95 and 98, many quickly learned how to hack their Windows Me installations to restore that capability. Decline The release of Windows 2000 marked a shift in the user experience between the Windows 9x series and the Windows NT series. Windows NT 4.0 suffered from a lack of support for USB, Plug and Play, and DirectX, preventing its users from playing contemporary games, whereas Windows 2000 featured an updated user interface, and better support for both Plug and Play and USB. The release of Windows XP confirmed the change of direction for Microsoft, bringing the consumer and business operating systems together under Windows NT. One by one, support for the Windows 9x series ended, and Microsoft stopped selling the software to end users, then later to OEMs. By March 2004, it was impossible to purchase any versions of the Windows 9x series. End of service life Microsoft continued to support the use of the Windows 9x series until July 11, 2006, when extended support ended for Windows 98, Windows 98 Second Edition (SE), and Windows Millennium Edition (Me) (extended support for Windows 95 ended on December 31, 2001). Microsoft DirectX, a set of standard gaming APIs, stopped being updated on Windows 95 at version 8.0a. The last version of DirectX supported for Windows 98 and Me is 9.0c. Support for Microsoft Internet Explorer running on any Windows 9x system has also since ended. Internet Explorer 5.5 with Service Pack 2 is the last version of Internet Explorer compatible with Windows 95 and Internet Explorer 6 with Service Pack 1 is the last version compatible with Windows 98 and Me. Internet Explorer 7, the first major update to Internet Explorer 6 in half a decade, was only available for Windows XP SP2 and Windows Vista. The Windows Update website continued to be available for Windows 98, Windows 98SE, and Windows Me after their end of support date (Windows Update was never available for Windows 95); however, during 2011, Microsoft retired the Windows Update v4 website and removed the updates for Windows 98, Windows 98SE, and Windows Me from its servers. Microsoft announced in July 2019 that the Microsoft Internet Games services on Windows Me (and XP) would end on July 31, 2019. The growing number of important updates missed since these products reached the end of their service life has slowly made Windows 9x even less practical for everyday use. Today, even open source projects such as Mozilla Firefox will not run on Windows 9x without rework. 
RetroZilla is a fork of Gecko 1.8.1 aimed at bringing "improved compatibility on the modern web" for versions of Windows as old as Windows 95 and NT 4.0. The latest version, 2.2, was released in February 2019 and added support for TLS 1.2. Design Kernel Windows 9x is a series of hybrid 16/32-bit operating systems. Like most operating systems, Windows 9x consists of kernel space and user space memory. Although Windows 9x features some memory protection, it does not protect the first megabyte of memory from userland applications for compatibility reasons. This area of memory contains code critical to the functioning of the operating system, and by writing into this area of memory an application can crash or freeze the operating system. This was a source of instability as faulty applications could accidentally write into this region, potentially corrupting important operating system memory, which usually resulted in some form of system error and halt. User mode The user-mode parts of Windows 9x consist of three subsystems: the Win16 subsystem, the Win32 subsystem and MS-DOS. Windows 9x/Me set aside two 64 KB memory regions for GDI and heap resources. Running multiple applications, applications with numerous GDI elements, or applications over a long span of time could exhaust these memory areas. If free system resources dropped below 10%, Windows would become unstable and likely crash. Kernel mode The kernel mode parts consist of the Virtual Machine Manager (VMM), the Installable File System Manager (IFSHLP), the Configuration Manager, and in Windows 98 and later, the WDM Driver Manager (NTKERN). As a 32-bit operating system, Windows 9x gives each process a 4 GiB virtual memory space, divided into a lower 2 GiB for applications and an upper 2 GiB for the kernel. Registry Like Windows NT, Windows 9x stores user-specific and configuration-specific settings in a large information database called the Windows registry. Hardware-specific settings are also stored in the registry, and many device drivers use the registry to load configuration data. Previous versions of Windows used files such as AUTOEXEC.BAT, CONFIG.SYS, WIN.INI, SYSTEM.INI and other files with an .INI extension to maintain configuration settings. As Windows became more complex and incorporated more features, .INI files became too unwieldy for the limitations of the then-current FAT filesystem. Backwards-compatibility with .INI files was maintained until Windows XP succeeded the 9x and NT lines. Although Microsoft discourages using .INI files in favor of Registry entries, a large number of applications (particularly 16-bit Windows-based applications) still use .INI files. Windows 9x supports .INI files solely for compatibility with those applications and related tools (such as setup programs). The AUTOEXEC.BAT and CONFIG.SYS files also still exist for compatibility with real-mode system components and to allow users to change certain default system settings such as the PATH environment variable. The registry consists of two files: User.dat and System.dat. In Windows Me, Classes.dat was added. Virtual Machine Manager The Virtual Machine Manager (VMM) is the 32-bit protected mode kernel at the core of Windows 9x. Its primary responsibility is to create, run, monitor and terminate virtual machines. The VMM provides services that manage memory, processes, interrupts and protection faults. 
The VMM works with virtual devices (loadable kernel modules, which consist mostly of 32-bit ring 0 or kernel mode code, but may include other types of code, such as a 16-bit real mode initialisation segment) to allow those virtual devices to intercept interrupts and faults to control the access that an application has to hardware devices and installed software. Both the VMM and virtual device drivers run in a single, 32-bit, flat model address space at privilege level 0 (also called ring 0). The VMM provides multi-threaded, preemptive multitasking. It runs multiple applications simultaneously by sharing CPU (central processing unit) time between the threads in which the applications and virtual machines run. The VMM is also responsible for creating MS-DOS environments for system processes and Windows applications that still need to run in MS-DOS mode. It is the replacement for WIN386.EXE in Windows 3.x, and the file vmm32.vxd is a compressed archive containing most of the core VxDs, including VMM.vxd itself and ifsmgr.vxd (which facilitates file system access without the need to call the real mode file system code of the DOS kernel). Software support Unicode Partial support for Unicode can be installed on Windows 9x through the Microsoft Layer for Unicode. File systems Windows 9x does not natively support NTFS or HPFS, but there are third-party solutions which allow Windows 9x to have read-only access to NTFS volumes. Early versions of Windows 95 did not support FAT32. Like Windows for Workgroups 3.11, Windows 9x provides support for 32-bit file access based on IFSHLP.SYS, and unlike Windows 3.x, Windows 9x has support for the VFAT file system, allowing file names of up to 255 characters instead of 8.3 filenames. Event logging and tracing There is also no support for event logging and tracing or error reporting, which the Windows NT family of operating systems has, although software like Norton CrashGuard can be used to achieve similar capabilities on Windows 9x. Security Windows 9x is designed as a single-user system. Thus, the security model is much less effective than the one in Windows NT. One reason for this is the FAT file systems (including FAT12/FAT16/FAT32), which are the only ones that Windows 9x supports officially, though Windows NT also supports FAT12 and FAT16 (but not FAT32), and Windows 9x can be extended to read and write NTFS volumes using third-party Installable File System drivers. FAT systems have very limited security; every user that has access to a FAT drive also has access to all files on that drive. The FAT file systems provide no access control lists or file-system-level encryption, unlike NTFS. Some operating systems that were available at the same time as Windows 9x are either multi-user or have multiple user accounts with different access privileges, which allows important system files (such as the kernel image) to be immutable under most user accounts. In contrast, while Windows 95 and later operating systems offer the option of having profiles for multiple users, they have no concept of access privileges, making them roughly equivalent to a single-user, single-account operating system; this means that all processes can modify all files on the system that aren't open, in addition to being able to modify the boot sector and perform other low-level hard drive modifications. This enables viruses and other clandestinely installed software to integrate themselves with the operating system in a way that is difficult for ordinary users to detect or undo. 
The profile support in the Windows 9x family is meant for convenience only; unless some registry keys are modified, the system can be accessed by pressing "Cancel" at login, even if all profiles have a password. Windows 95's default login dialog box also allows new user profiles to be created without having to log in first. Users and software can render the operating system unable to function by deleting or overwriting important system files from the hard disk. Users and software are also free to change configuration files in such a way that the operating system is unable to boot or properly function. Installation software often replaced and deleted system files without properly checking if the file was still in use or of a newer version. This created a phenomenon often referred to as DLL hell. Windows Me introduced System File Protection and System Restore to handle common problems caused by this issue. Network sharing Windows 9x offers share-level access control security for file and printer sharing as well as user-level access control if a Windows NT-based operating system is available on the network. In contrast, Windows NT-based operating systems offer only user-level access control, but it is integrated with the operating system's own user account security mechanism. Hardware support Drivers Device drivers in Windows 9x can be virtual device drivers or (starting with Windows 98) WDM drivers. VxDs usually have the filename extension .vxd or .386, whereas WDM-compatible drivers usually use the extension .sys. The 32-bit VxD message server (msgsrv32) is a program that is able to load virtual device drivers (VxDs) at startup and then handle communication with the drivers. Additionally, the message server performs several background functions, including loading the Windows shell (such as Explorer.exe or Progman.exe). Another type of device driver is the .DRV driver. These drivers are loaded in user mode and are commonly used to control devices such as multimedia devices. To provide access to these devices, a dynamic link library is required (such as MMSYSTEM.DLL). Windows 9x retains backwards compatibility with many drivers made for Windows 3.x and MS-DOS. Using MS-DOS drivers can limit performance and stability due to their use of conventional memory and need to run in real mode, which requires the CPU to switch in and out of protected mode. Drivers written for Windows 9x/Windows Me are loaded into the same address space as the kernel. This means that drivers can, by accident or design, overwrite critical sections of the operating system. Doing this can lead to system crashes, freezes and disk corruption. Faulty drivers were a source of instability for the operating system. Other monolithic and hybrid kernels, like Linux and Windows NT, are also susceptible to malfunctioning drivers impeding the kernel's operation. Often the software developers of drivers and applications had insufficient experience with creating programs for the 'new' system, thus causing many errors which have been generally described as "system errors" by users, even if the error was not caused by parts of Windows or DOS. Microsoft has repeatedly redesigned the Windows driver architecture since the release of Windows 95 as a result. CPU and bus technologies Windows 9x has no native support for hyper-threading, Data Execution Prevention, symmetric multiprocessing, or multi-core processors. 
Windows 9x has no native support for SATA host bus adapters (nor did Windows 2000 or Windows XP), or USB drives (except Windows Me). There are, however, many SATA-I controllers for which Windows 98/Me drivers exist, and USB mass storage support has been added to Windows 95 OSR2 and Windows 98 through third-party drivers. Hardware driver support for Windows 98/Me began to decline in 2005, most notably for motherboard chipsets and video cards. Early versions of Windows 95 had no support for USB or AGP acceleration. MS-DOS Windows 95 was able to reduce the role of MS-DOS in Windows much further than had been done in Windows 3.1x and earlier. According to Microsoft developer Raymond Chen, MS-DOS served two purposes in Windows 95: as the boot loader, and as the 16-bit legacy device driver layer. When Windows 95 started up, MS-DOS loaded, processed CONFIG.SYS, launched COMMAND.COM, ran AUTOEXEC.BAT and finally ran WIN.COM. The WIN.COM program used MS-DOS to load the virtual machine manager, read SYSTEM.INI, load the virtual device drivers, and then turn off any running copies of EMM386 and switch into protected mode. Once in protected mode, the virtual device drivers (VxDs) transferred all state information from MS-DOS to the 32-bit file system manager, and then shut off MS-DOS. These VxDs allow Windows 9x to interact with hardware resources directly, providing low-level functionality such as 32-bit disk access and memory management. All future file system operations would get routed to the 32-bit file system manager. In Windows Me, win.com was no longer executed during the startup process; instead, VMM32.VXD was executed directly from IO.SYS. The second role of MS-DOS (as the 16-bit legacy device driver layer) was as a backward compatibility tool for running DOS programs in Windows. Many MS-DOS programs and device drivers interacted with DOS in a low-level way, for example, by patching low-level BIOS interrupts such as int 13h, the low-level disk I/O interrupt. When a program issued an int 21h call to access MS-DOS, the call would go first to the 32-bit file system manager, which would attempt to detect this sort of patching. If it detected that the program had tried to hook into DOS, it would jump back into the 16-bit code to let the hook run. A 16-bit driver called IFSMGR.SYS would previously have been loaded by CONFIG.SYS, the job of which was to hook MS-DOS first before the other drivers and programs got a chance, then jump from 16-bit code back into 32-bit code, when the DOS program had finished, to let the 32-bit file system manager continue its work. According to Windows developer Raymond Chen, "MS-DOS was just an extremely elaborate decoy. Any 16-bit drivers and programs would patch or hook what they thought was the real MS-DOS, but which was in reality just a decoy. If the 32-bit file system manager detected that somebody bought the decoy, it told the decoy to quack." MS-DOS Virtualization Windows 9x can run MS-DOS applications within itself using a method called "virtualization", where an application is run in a virtual DOS machine. MS-DOS Mode Windows 95 and Windows 98 also offer legacy support for DOS applications in the form of the ability to boot into a native "DOS Mode" (MS-DOS can be booted without booting Windows and without putting the CPU in protected mode). Through Windows 9x's memory managers and other post-DOS improvements, the overall system performance and functionality is improved. This differs from the emulation used in Windows NT-based operating systems. 
Some old applications or games may not run properly in a DOS box within Windows and require real DOS Mode. Having a command line mode outside of the GUI also offers the ability to fix certain system errors without entering the GUI. For example, if a virus is active in GUI mode it can often be safely removed in DOS mode, by deleting its files, which are usually locked while infected in Windows. Similarly, corrupted registry files, system files or boot files can be restored from the command line. Windows 95 and Windows 98 can be started from DOS Mode by typing 'WIN' <enter> at the command prompt. The Recovery Console played a similar role in removing viruses for Windows 2000, which as a version of Windows NT lacks a real DOS mode. Because DOS was not designed for multitasking purposes, Windows versions such as 9x that are DOS-based lack file system security, such as file permissions. Further, if the user uses 16-bit DOS drivers, Windows can become unstable. Hard disk errors often plague the Windows 9x series. User interface Users can control a Windows 9x-based system through a command-line interface (or CLI), or a graphical user interface (or GUI). For desktop systems, the default mode is usually the graphical user interface, with the CLI available through MS-DOS windows. The GDI, which is a part of the Win32 and Win16 subsystems, is also a module that is loaded in user mode, unlike Windows NT where the GDI is loaded in kernel mode. Alpha compositing and therefore transparency effects, such as fade effects in menus, are not supported by the GDI in Windows 9x. On desktop machines, Windows Explorer is the default user interface, though a variety of additional Windows shell replacements exist. Other GUIs include LiteStep, bbLean and Program Manager. The GUI provides a means to control the placement and appearance of individual application windows, and interacts with the windowing system. See also Comparison of operating systems Architecture of Windows 9x MS-DOS 7 References External links Computing platforms 9x Discontinued versions of Microsoft Windows
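As an illustration of the configuration mechanisms described in the Registry section above, the short Python sketch below reads the same kind of setting first from an .INI file and then from the Windows registry. It is only a sketch: it must be run on a modern (NT-based) Windows system, and the section names, value names and registry path are hypothetical rather than actual Windows 9x keys.

import configparser, winreg

# Old style: application settings stored in an .INI file.
ini = configparser.ConfigParser()
ini.read_string("""
[Display]
Resolution=800x600
ColorDepth=16
""")
print(ini["Display"]["Resolution"])

# Newer style: an equivalent setting read from the Windows registry.
# The path below is purely illustrative and will only exist if an application created it.
with winreg.OpenKey(winreg.HKEY_CURRENT_USER, r"Software\ExampleApp\Display") as key:
    value, value_type = winreg.QueryValueEx(key, "Resolution")
    print(value)

The practical difference is that .INI files are plain text parsed by each application, while the registry is a single system-wide database of typed values, which is why later Windows versions steer applications towards it.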
2701312
https://en.wikipedia.org/wiki/Efficient%20Probabilistic%20Public-Key%20Encryption%20Scheme
Efficient Probabilistic Public-Key Encryption Scheme
EPOC (Efficient Probabilistic Public Key Encryption) is a probabilistic public-key encryption scheme. EPOC was developed in 1999 by T. Okamoto, S. Uchiyama and E. Fujisaki of NTT Labs in Japan. It is based on the random oracle model, in which a primitive public-key encryption function is converted to a secure encryption scheme by use of a truly random hash function; the resulting scheme is designed to be semantically secure against a chosen ciphertext attack. EPOC's primitive encryption function is the OU (Okamoto–Uchiyama) function, for which inverting the function is proven to be as hard as factoring the composite integer public key. There are three versions of EPOC: EPOC-1 uses a one-way trapdoor function and a random function (hash function); EPOC-2 uses a one-way trapdoor function, two random functions (hash functions) and a symmetric-key encryption scheme (e.g., the one-time pad or a block cipher); EPOC-3 uses the Okamoto–Uchiyama one-way trapdoor function and two random functions (hash functions) as well as any symmetric encryption scheme such as the one-time pad, or any classical block cipher. EPOC-1 is designed for key distribution; EPOC-2 and EPOC-3 are designed for both key distribution and encrypted data transfer. See also Cryptography Computational complexity theory Okamoto–Uchiyama cryptosystem References T. Okamoto, S. Uchiyama and E. Fujisaki (1999). "EPOC: Efficient Probabilistic Public-Key Encryption", Contribution to IEEE – describes EPOC-1 and EPOC-2. T. Okamoto and D. Pointcheval (2000). "EPOC-3: Efficient Probabilistic Public-Key Encryption (Version 2)", Contribution to IEEE – describes EPOC-3. Public-key encryption schemes
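The Okamoto–Uchiyama primitive that EPOC builds on can be sketched in a few lines of Python. The toy example below uses tiny, completely insecure parameters purely to show the structure of the trapdoor (a modulus n = p²q, encryption of m as g^m·h^r mod n, and recovery of m using knowledge of p); it omits the hash functions and padding that the EPOC constructions add on top, and all variable names are illustrative.

import random

# Toy Okamoto-Uchiyama parameters; real keys use primes of roughly 1024 bits or more.
p, q = 11, 7
n = p * p * q                    # public modulus n = p^2 * q

# Choose g whose order modulo p^2 is divisible by p (needed for decryption to work).
g = 2
while pow(g, p - 1, p * p) == 1:
    g += 1
h = pow(g, n, n)                 # public key: (n, g, h); private key: (p, q)

def L(x):                        # L(x) = (x - 1) / p, defined on values congruent to 1 mod p^2's subgroup
    return (x - 1) // p

def encrypt(m):                  # for these toy parameters the message must satisfy 0 <= m < p
    r = random.randrange(1, n)
    return (pow(g, m, n) * pow(h, r, n)) % n

def decrypt(c):
    a = L(pow(c, p - 1, p * p))
    b = L(pow(g, p - 1, p * p))
    return (a * pow(b, -1, p)) % p   # modular inverse via pow(..., -1, p) needs Python 3.8+

m = 9
assert decrypt(encrypt(m)) == m

Recovering m from a ciphertext without knowing p amounts to inverting the OU function, which is the problem shown to be as hard as factoring n.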
32179965
https://en.wikipedia.org/wiki/Floppy%20disk%20variants
Floppy disk variants
The floppy disk is a data storage and transfer medium that was ubiquitous from the mid-1970s well into the 2000s. Besides the 3½-inch and 5¼-inch formats used in IBM PC compatible systems, or the 8-inch format that preceded them, many proprietary floppy disk formats were developed, either using a different disk design or special layout and encoding methods for the data held on the disk. Non-standard media and devices IBM DemiDiskette In the early 1980s, IBM Rochester developed a 4-inch floppy disk drive, the Model 341 and an associated diskette, the DemiDiskette. At about half the size of the original 8-inch floppy disk, its name derived from the prefix demi for "half". This program was driven by aggressive cost goals, but missed the pulse of the industry. The prospective users, both inside and outside IBM, preferred standardization to what by release time were small cost reductions, and were unwilling to retool packaging, interface chips and applications for a proprietary design. The product was announced and withdrawn in 1983 with only a few units shipped. IBM wrote off several hundred million dollars in development and manufacturing facilities. IBM obtained a patent on the media and the drive for the DemiDiskette. Tabor Drivette Another unsuccessful diskette variant was the Drivette, a 3¼-inch diskette drive marketed by Tabor Corporation of Westland, Massachusetts, USA between 1983 and 1985 with media supplied by Dysan, Brown and 3M. The diskettes were named Dysan 3¼" Flex Diskette (P/N 802950) and Tabor 3¼" Flex Diskette (P/N D3251), and were sometimes also nicknamed "Tabor" or "Brown" at tradeshows. The Microfloppy Disk Drive TC 500 was a single-sided quad-density drive with a nominal storage capacity of 500 KB (80 tracks, 140 tpi, 16 sectors, 300 rpm, 250 kbit/s, 9250 bpi with MFM). It could work with standard controllers for 5¼-inch floppy disks. From August 1984, it was used in the Seequa Chameleon 325, an early CP/M-80 & MS-DOS portable computer with both Z80 and 8088 processors. It was also offered in limited quantity with some PDP 11/23-based workstations by General Scientific Corporation. Originally, Educational Microcomputer Systems (EMS) announced a system using this drive as well, but later changed plans to use 3½-inch diskette drives instead. 3-inch "MCD-1 Micro Cassette" A 3-inch magnetic disk in a hard plastic shell was invented in 1973 by an engineer working at the Hungarian Budapest Radio Technology Factory (BRG). It was sanctioned by the socialist government in the following year; however, due to a lack of support by the directors of the factory, the development stalled and working prototypes were only created in 1979. In 1980, the product was announced internationally and Jack Tramiel showed interest in using the technology in his Commodore computers, but negotiations fell through. The product was released to the market in 1982, but was unsuccessful and only about 2000 floppy drives were produced. Versions of the floppy drive were released in minimal quantities for the ZX Spectrum and Commodore 64, and some computers made in East Germany were also equipped with one. The floppies are single-sided and can hold up to 149 KB of data when MFM formatted. The drives were compatible with contemporary floppy controllers. 3-inch "Compact Floppy Disk" / "CF-2" format The 3-inch "Compact Floppy Disk" or "CF-2" was an intended rival to Sony's 3.5" floppy system, introduced by a consortium of manufacturers led by Matsushita. 
Hitachi was a manufacturer of 3-inch disk drives, and stated in advertisements, "It's clear that the 3" floppy will become the new standard." The format was widely used by Amstrad in their CPC and PCW computers, and (after Amstrad took over manufacture of the line) the Sinclair ZX Spectrum +3. It was also adopted by some other manufacturers/systems such as Sega, the Tatung Einstein, and Timex of Portugal in the FDD and FDD-3000 disk drives. Despite this, the format was not a major success. Three-inch diskettes bear much similarity to the 3½-inch size, but with some unique features. One example is the more elongated plastic casing, taller than a 3½-inch disk, but less wide and thicker (i.e. with increased depth). The actual 3-inch magnetic-coated disk occupies less than 50% of the space inside the casing, the rest being used by the complex protection and sealing mechanisms implemented on the disks, which thus are largely responsible for the thickness, length, and relatively high costs of the disks. On the early Amstrad machines (the CPC line and the PCW 8256), the disks are typically flipped over to change the side (acting like 2 separate single-sided disks, comparable to the "flippy disks" of 5¼-inch media) as opposed to being contiguously double-sided. Double-sided mechanisms were introduced on the later PCW 8512 and PCW 9512, thus removing the need to remove, flip, and then reinsert the disk. Quick Disk variants Mitsumi marketed several 3-inch diskette "Quick Disk" formats for OEM use. They used 2.8-inch magnetic discs. The OEM could decide on the outer case of the media, which led to several mechanically incompatible solutions: Famicom Disk System The Japanese Nintendo Famicom Disk System used proprietary 3-inch diskettes called "Disk Cards" between 1986 and 1990. Smith Corona DataDisk Many Smith Corona "CoronaPrint" word-processor typewriters used a proprietary double-sided 3-inch diskette format named "DataDisk". Confusingly, it was labelled 2.8-inch, reflecting the diameter of the magnetic disk itself rather than the media's case. Sharp 2.5-inch floppy disk In 1986, Sharp introduced a 2.5-inch floppy disk format for use with their family of BASIC pocket computers. Two drives were produced: the Sharp CE-1600F and the CE-140F (chassis: FDU-250). Both took turnable diskettes named CE-1650F with a total capacity of 2×64 KB (128 KB) at 65,536 bytes per side (512 byte sectors, 8 sectors/track, 16 tracks (00..15), 48 tpi, 250 kbit/s, 270 rpm with GCR (4/5) recording). 2-inch floppy disks At least two incompatible floppy disks measuring two inches appeared in the 1980s. One of these, officially referred to as a Video Floppy (or VF for short), can be used to store video information for still video cameras such as the original Sony Mavica (not to be confused with later Digital Mavica models) and the Ion and Xapshot cameras from Canon. VF is not a digital data format; each track on the disk stores one video field in the analog interlaced composite video format in either the North American NTSC or European PAL standard. This yields a capacity of 25 images per disk in frame mode and 50 in field mode. Another 2-inch format, the LT-1, is digitally formatted—720 kB, 245 TPI, 80 tracks/side, double-sided, double-density. These disks were used exclusively in the Zenith MinisPORT laptop computer circa 1989. Although the media exhibited nearly identical performance to the 3½-inch disks of the time, they were not very successful. 
This was due in part to the scarcity of other devices using this drive, which made it impractical for software transfer, and to the high media cost, which was much higher than that of 3½-inch and 5¼-inch disks of the time. Much later, another 2-inch (case size: 54.5 mm × 50.2 mm × 2.0 mm) miniature disk format was Iomega's PocketZip (originally named Clik!), introduced in 1999. The disks could store 40 MB. The external drives were available as PC Card Type II and with USB interface. Extended use cases Flippy disks A flippy disk (sometimes known as a "flippy") is a double-sided 5¼-inch floppy disk, specially modified so that the two sides can be used independently (but not simultaneously) in single-sided drives. Many commercial publishers of computer software (mainly, relatively small programs like arcade games that could fit on a single-sided floppy disk) distributed their products on flippy disks formatted for two different brands of computer, e.g. TRS-80 on one side and Apple on the other. Compute! published an article on the topic in March 1981. Generally, there are two levels of modifications: For disk operating systems that do not use the index hole in the disk to mark the beginnings of tracks, the "flippy" modification required only a new write-enable notch to be cut if the disk was designed to be written to. For this purpose, specially designed single-rectangular-hole punchers, commonly known as disk doublers, were produced and sold by third-party computer accessory manufacturers. Many users, however, made do with a standard (round) hole puncher and/or an ordinary pair of scissors for this job. For disk operating systems that do use index sync, a second index hole window has to be punched in both sides of the jacket, and for hard-sectored formats, an additional window must be punched for the sector holes. While cutting a second notch is relatively safe, cutting an additional window into the jacket poses a great peril to the disk itself. A number of floppy-disk manufacturers produced ready-made "flippy" media. As the cost of media went down and double-sided drives became the standard, "flippies" became obsolete. Auto-loaders IBM developed, and several companies copied, an autoloader mechanism that can load a stack of floppies one at a time into a drive. These are very bulky systems, and suffer from media hangups and chew-ups more than standard drives, but they were a partial answer to replication and large removable storage needs. The smaller 5¼- and 3½-inch floppies made this a much easier technology to perfect. Floppy mass storage A number of companies, including IBM and Burroughs, experimented with using large numbers of unenclosed disks to create massive amounts of storage. The Burroughs system uses a stack of 256 12-inch disks, spinning at a high speed. The disk to be accessed is selected by using air jets to part the stack, and then a pair of heads flies over the surface as in some hard disk drives. This approach in some ways anticipated the Bernoulli disk technology implemented in the Iomega Bernoulli Box, but head crashes or air failures were spectacularly messy. The program did not reach production. Standard floppy replacements A number of attempts were made by various companies to introduce newer floppy-disk formats based on the standard 3½-inch physical format. Most of these systems provide the ability to read and write standard DD and HD disks, while at the same time introducing a much higher-capacity format as well. 
None of these ever reached the point where it could be assumed that every current PC would have one, and they have now largely been replaced by optical disc burners and flash storage. Nevertheless, the 5¼- and 3½-inch sizes remain to this day as the standards for drive bays in computer cases, the former used for optical drives (including Blu-ray), and the latter for hard disk drives. The main technological change for the higher-capacity formats was the addition of tracking information on the disk surface to allow the read/write heads to be positioned more accurately. Normal disks have no such information, so the drives use feedforward (blind) positioning by a stepper motor in order to position their heads over the desired track. For good interoperability of disks among drives, this requires precise alignment of the drive heads to a reference standard, somewhat similar to the alignment required to get the best performance out of an audio tape deck. The newer systems generally use position information on the surfaces of the disk to find the tracks, allowing the track width to be greatly reduced. In 1990, an attempt was made to standardize details for a 20 megabyte 3½-inch format floppy. At the time, "three different technologies that are not interchangeable" existed. One major goal was that the to-be-developed standard drive be backward compatible: that it be able to read 720 KB and 1.44 MB floppies. From a conceptual point of view, superfloppies are treated as unpartitioned media. The entire media forms a single volume. Flextra As early as 1987, Brier Technology announced the Flextra BR3020, which boasts 21.4 MB (a value used for marketing: its true size is 21,040 kB, 2 sides × 526 cylinders × 40 sectors × 512 bytes or 25 MB unformatted). Around 1990 it announced the BR3225 drive, which was supposed to double the capacity and also read standard DD, HD and ED 3½-inch disks. However, the drive had still not been released as of 1992. It uses 3½-inch standard disk jackets whose disks have low-frequency magnetic servo information embedded on them for use with the Twin-Tier Tracking technology. Media were manufactured by Verbatim. Quantum sold the drives under the QuadFlextra name. Floptical In 1991, Insite Peripherals introduced the "Floptical", which uses an infra-red LED to position the heads over marks in the disk surface. The original drive stores 21 MB, while also reading and writing standard DD and HD floppies. In order to improve data transfer speeds and make the high-capacity drive usefully quick as well, the drives are attached to the system using a SCSI connector instead of the normal floppy controller. This meant that most PCs were unable to boot from them. This again adversely affected uptake. Insite licensed their technology to a number of companies, who introduced compatible devices as well as even larger-capacity formats. The most popular of these, by far, was the LS-120, mentioned below. Zip drive In 1994, Iomega introduced the Zip drive. Although neither size (the original or the later Pocket Zip drive) conforms to the 3½-inch form factor and hence is not compatible with standard 1.44 MB drives, the original physical size still became the most popular of the "super floppies". The first version boasted 100 MB; later versions boasted 250 MB and then 750 MB of storage, until the PocketZip (formerly known as Clik!) was developed with 40 MB. 
Though Zip drives gained in popularity for several years, they never reached the same market penetration as standard floppy drives, since only some new computers were sold with the drives. The rise of desktop publishing and computer graphics led to much larger file sizes. Zip disks greatly eased the exchange of files that were too big to fit on a standard 3.5-inch floppy or into an email attachment, when there was no high-speed connection to transfer the file to the recipient. Eventually the falling prices of compact disc optical media and, later, flash storage, along with notorious hardware failures (the so-called "click of death"), reduced the popularity of the Zip drive. LS-120/LS-240 Announced in 1995, the "SuperDisk", marketed as the LS-120 drive, often seen with the brand names Matsushita (Panasonic) and Imation, had an initial capacity of 120 MB (120.375 MB). LS in this case stands for laser servo, which uses a very low-power superluminescent LED that generates light with a small focal spot. This allows the drive to align its rotation to precisely the same point each time, allowing far more data to be written due to the absence of conventional magnetic alignment marks. The alignment is based on hard-coded optical alignment marks, which means that a complete format can safely be done. This worked very well at the time, and as a result, failures associated with magnetic fields wiping the Zip drive alignment Z tracks were less of a problem. It was also able to read and write to standard floppy disks about 5 times as fast as standard floppy drives. It was upgraded (as the "LS-240") to 240 MB (240.75 MB). Not only can the drive read and write 1440 kB disks, but the last versions of the drives can write 32 MB onto a normal 1440 kB disk. Unfortunately, popular opinion held the Super Disks to be quite unreliable, though no more so than the Zip drives and SyQuest Technology offerings of the same period, and there were also many reported problems moving standard floppies between LS-120 drives and normal floppy drives. This belief, true or otherwise, crippled adoption. The BIOS of many motherboards even to this day supports LS-120 drives as a boot option. LS-120 drives were available as options on many computers, including desktop and notebook computers from Compaq Computer Corporation. In the case of the Compaq notebooks, the LS-120 drive replaced the standard floppy drive in a multibay configuration. Sony HiFD Sony introduced its own floptical-like system in 1997 as the "150 MB Sony HiFD", which was originally supposed to hold 150 MB (157.3 decimal megabytes) of data. Although by this time the LS-120 had already garnered some market penetration, industry observers nevertheless confidently predicted the HiFD would be the real standard-floppy-killer and finally replace standard floppies in all machines. After only a short time on the market the product was pulled, as it was discovered there were a number of performance and reliability problems that made the system essentially unusable. Sony then reengineered the device for a quick rerelease, but then extended the delay well into 1998 instead, and increased the capacity to "200 MB" (approximately 210 decimal megabytes) while they were at it. By this point the market was already saturated by the Zip disk, so it never gained much market share. Caleb Technology’s UHD144 The UHD144 drive surfaced early in 1998 as the "it" drive, and provides 144 MB of storage while also being compatible with the standard 1.44 MB floppies. 
The drive was slower than its competitors, but the media was cheaper, running about US$8 at introduction and US$5 soon after. Custom formatting types on 3½-inch and 5¼-inch media Commodore 64/128 Commodore started its tradition of special disk formats with the 5¼-inch disk drives accompanying its PET/CBM, VIC-20 and Commodore 64 home computers, such as the 1540 and 1541 drives used with the latter two machines. The standard Commodore Group Coded Recording (GCR) scheme used in the 1541 and compatibles employed four different data rates depending upon track position (see zone bit recording). Tracks 1 to 17 had 21 sectors, 18 to 24 had 19, 25 to 30 had 18, and 31 to 35 had 17, for a disk capacity of 170.75 KB (175 decimal kB). Unique among personal computer architectures, the operating system on the computer itself is unaware of the details of the disk and filesystem; disk operations are handled by Commodore DOS instead, which was implemented with an extra MOS-6502 processor on the disk drive. Many programs such as GEOS bypass Commodore's DOS completely, and replace it with fast-loading (for the time) programs in the 1541 drive. Eventually Commodore gave in to disk format standardization, and made its last 5¼-inch drives, the 1570 and 1571, compatible with Modified Frequency Modulation (MFM), to enable the Commodore 128 to work with CP/M disks from several vendors. Equipped with one of these drives, the C128 is able to access both C64 and CP/M disks, as it needs to, as well as MS-DOS disks (using third-party software), which was a crucial feature for some office work. At least one commercial program, Big Blue Reader by SOGWAP software, was available to perform the task. Commodore also developed a 3½-inch 800 KB disk format for its 8-bit machines with the 1581 disk drive, which uses only MFM. The GEOS operating system uses a disk format that is largely identical to the Commodore DOS format with a few minor extensions; while generally compatible with standard Commodore disks, certain disk maintenance operations can corrupt the filesystem without proper supervision from the GEOS kernel. Atari 8-bit line The combination of DOS and hardware (810, 1050 and XF551 disk drives) for Atari 8-bit floppy usage allows sectors numbered from 1 to 720 (1040 in the 1050 disk drive, 1440 in XF551). For instance, the DOS 2.0 disk bitmap, which provides information on sector allocation, counts from 0 to 719. As a result, sector 720 cannot be written to by the DOS. Some companies used a copy-protection scheme where hidden data was put in sector 720 that could not be copied through the DOS copy option. Another more common early copy-protection scheme simply did not record important sectors as allocated in the VTOC, so the DOS Utility Package (DUP) does not duplicate them. All of these early techniques were thwarted by the first program that simply duplicated all sectors. Later DOS versions (3.0 and later 2.5) and DOSes by third parties (i.e. OSS) accept (and format) disks with up to 1040 sectors, resulting in 130 KB of storage capacity per disk side on drives equipped with double-density controllers (i.e. not the Atari 810) vs. the previous 90 KB. That unusual 130 KB format was introduced by Atari with the 1050 drive and DOS 3.0 in 1983. A true double-density Atari floppy format (from 180K upwards) uses 128-byte sectors for sectors 1-3, then 256-byte sectors for the rest. 
In this true double-density format, the first three sectors typically contain boot code used by the onboard ROM OS; it is up to the resulting boot program (such as SpartaDOS) to recognize the density of the formatted disk structure. While this format was developed by Atari for their DOS 2.0D and their (canceled) 180K Atari 815 floppy drive, that double-density DOS was never widely released, and the format was generally used by third-party DOS products. Under the Atari DOS II scheme, sector 360 is the VTOC sector map, and sectors 361-367 contain the file listing. The Atari-brand DOS II versions and compatibles use three bytes per sector for housekeeping and for a link to the next sector in the file. Later, mostly third-party DOS systems added features such as double-sided drives, subdirectories, and drive types such as 720K, 1.2 MB and 1.44 MB. Well-known third-party Atari DOS products include SmartDOS (distributed with the Rana disk drive), TopDos, MyDos and SpartaDOS. Commodore Amiga The Commodore Amiga computers use an 880 KB format (11×512-byte sectors per track, times 80 tracks, times two sides) on a 3½-inch floppy. Because the entire track is written at once, intersector gaps can be eliminated, saving space. The Amiga floppy controller is basic but much more flexible than the one on the PC: it is free of arbitrary format restrictions, encodings such as MFM and GCR can be done in software, and developers were able to create their own proprietary disk formats. Because of this, foreign formats such as the IBM PC-compatible one can be handled with ease (by use of CrossDOS, which was included with later versions of AmigaOS). With the correct filesystem driver, an Amiga can theoretically read any arbitrary format on the 3½-inch floppy, including those recorded at a slightly different rotation rate. On the PC, however, there is no way to read an Amiga disk without special hardware, such as a CatWeasel, and a second floppy drive. Commodore never upgraded the Amiga chip set to support high-density floppies, but sold a custom drive (made by Chinon) that spun at half speed (150 RPM) when a high-density floppy was inserted, enabling the existing floppy controller to be used. This drive was built into the Amiga 3000, although the later Amiga 1200 was only fitted with the standard DD drive. Amiga HD disks can hold 1760 KB, and using special software programs they can hold even more data. A company named Kolff Computer Supplies also made an external HD floppy drive (the KCS Dual HD Drive) available, which can handle HD format diskettes on all Amiga computer systems. For reasons of storage, emulator use and data preservation, many disks were packed into disk images. Currently popular formats are .ADF (Amiga Disk File), .DMS (DiskMasher) and .IPF (Interchangeable Preservation Format) files. The DiskMasher format is copy-protected and has problems storing particular sequences of bits due to bugs in the compression algorithm, but was widely used in the pirate and demo scenes. ADF has been around for almost as long as the Amiga itself, though it was not initially called by that name. Only with the advent of the internet and Amiga emulators has it become a popular way of distributing disk images. The proprietary IPF files were created to allow the preservation of commercial games that use copy protection, something that ADF and DMS cannot do.
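Returning to the 880 KB figure quoted above: it is simply the product of the track layout, and the gain over a PC-style layout on the same double-density media comes from the two extra sectors per track that gap-free track writing allows. A rough illustration (the nine-sector, 720 KB PC figure is general background on double-density 3½-inch disks, not taken from this article):

def capacity_kb(sectors_per_track, tracks=80, sides=2, sector_bytes=512):
    return sectors_per_track * tracks * sides * sector_bytes / 1024

print(capacity_kb(11))   # Amiga trackdisk layout: 880.0 KB
print(capacity_kb(9))    # typical PC double-density layout: 720.0 KB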
The Amiga is also notorious for the clicking sound made by the floppy drive mechanism when no disk is inserted. The purpose of this clicking is to detect disk changes, and various utilities such as Noclick exist that can disable it, to the relief of many Amiga users. Acorn Electron, BBC Micro, and Acorn Archimedes The British company Acorn Computers used non-standard disk formats in its 8-bit BBC Micro and Acorn Electron, and in their successor, the 32-bit Acorn Archimedes. Acorn, however, used standard disk controllers: initially FM, though it quickly transitioned to MFM. The original disk implementation for the BBC Micro stores 100 KB (40-track) or 200 KB (80-track) per side on 5¼-inch disks in a custom format using the Disc Filing System (DFS). Due to the incompatibility between 40- and 80-track drives, much software was distributed on combined 40/80-track disks. These work by writing the same data in pairs of consecutive tracks in 80-track format, and including a small loader program on track 1 (which is in the same physical position in either format). The loader program detects which type of drive is in use and loads the main software program straight from disk, bypassing the DFS, double-stepping for 80-track drives and single-stepping for 40-track drives. This effectively reduces the capacity to 100 KB however the disk is read, but it allowed distributed software to be compatible with either drive. For their Electron floppy-disk add-on, Acorn chose 3½-inch disks and developed the Advanced Disk Filing System (ADFS). It uses double-density recording and adds the ability to treat both sides of the disk as a single logical disk. This offers three formats: S (small): 160 KB, 40-track single-sided; M (medium): 320 KB, 80-track single-sided; L (large): 640 KB, 80-track double-sided. ADFS provides a hierarchical directory structure, rather than the flat model of DFS. ADFS also stores some metadata about each file, notably a load address, an execution address, owner and public privileges, and a lock bit. Even on the eight-bit machines, load addresses are stored in 32-bit format, since those machines support 16- and 32-bit coprocessors. The ADFS format was later adopted into the BBC line upon release of the BBC Master. The BBC Master Compact marked the move to 3½-inch disks, using the same ADFS formats. The Acorn Archimedes adds D format, which increases the number of objects per directory from 44 to 77 and increases the storage space to 800 KB. The extra space is obtained by using 1024-byte sectors instead of the usual 512-byte sectors, thus reducing the space needed for inter-sector gaps. As a further enhancement, successive tracks are offset by a sector, giving the head time to advance to the next track without missing the first sector, thus increasing bulk throughput. The Archimedes uses special values in the ADFS load/execute address metadata to store a 12-bit filetype field and a 40-bit timestamp. RISC OS 2 introduces E format, which retains the same physical layout as D format but supports file fragmentation and auto-compaction. Post-1991 machines, including the A5000 and Risc PC, add support for high-density disks with F format, storing 1600 KB. However, the PC combo I/O chips used are unable to format disks with sector skew, losing some performance. ADFS and the PC controllers also support extra-high-density (ED) disks as G format, storing 3200 KB, but ED drives were never fitted to production machines. With RISC OS 3, the Archimedes can also read and write disk formats from other machines (for example the Atari ST and the IBM PC, which are largely compatible depending on the ST's OS version).
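The ADFS capacities listed above are internally consistent: dividing each total by its track and side count gives the per-track payload, which also shows the gain the 1024-byte sectors of D and F format provide. A small consistency check, purely illustrative and derived only from the figures given in the text:

adfs = [("S", 160, 40, 1), ("M", 320, 80, 1), ("L", 640, 80, 2),
        ("D", 800, 80, 2), ("F", 1600, 80, 2)]   # (format, total KB, tracks, sides) as given above
for name, kb, tracks, sides in adfs:
    print(name, kb * 1024 // (tracks * sides))   # bytes per track: 4096 for S/M/L, 5120 for D, 10240 for F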
With third-party software the Archimedes can even read the BBC Micro's original single-density 5¼-inch DFS disks. Amiga disks, however, cannot be read by this system, as they omit the usual sector gap markers. The Acorn filesystem design is notable in that all ADFS-based storage devices connect to a module called FileCore, which provides almost all the features required to implement an ADFS-compatible filesystem. Because of this modular design, it is easy in RISC OS 3 to add support for so-called image filing systems. These are used to implement completely transparent support for IBM PC format floppy disks, including the slightly different Atari ST format. Computer Concepts released a package that implements an image filing system to allow access to high-density Macintosh format disks. See also Floppy disk dd (Unix) Disk image Disk storage Don't Copy That Floppy Floppy disk controller Floppy disk format Floppy disk hardware emulator Group coded recording History of the floppy disk List of floppy disk formats Sneakernet 3mode (1.2 MB format on 3.5-inch media) References Bibliography Immers, Richard; Neufeld, Gerald G. (1984). Inside Commodore DOS. The Complete Guide to the 1541 Disk Operating System. Datamost & Reston (Prentice-Hall). External links Programming Floppy Disk Controllers HowStuffWorks: How Floppy Disk Drives Work Computer Hope: Information about computer floppy drives NCITS (mention of ANSI X3.162 and X3.171 floppy standards) Floppy disk drives and media technical information Floppy Disk Formats at the Museum of Obsolete Media Rotating disc computer storage media Legacy hardware American inventions Floppy disk computer storage
16442175
https://en.wikipedia.org/wiki/1870%20Glaukos
1870 Glaukos
1870 Glaukos is a mid-sized Jupiter trojan from the Trojan camp, approximately in diameter. Discovered during the first Palomar–Leiden Trojan survey in 1971, it was later named for Glaucus from Greek mythology. The dark D-type asteroid has a rotation period of 6.0 hours. Discovery Glaukos was discovered on 24 March 1971 by the Dutch astronomer couple Ingrid and Cornelis van Houten at Leiden, on photographic plates taken by astronomer Tom Gehrels at the Palomar Observatory in California. The body's observation arc begins with a precovery of its first recorded observation at Palomar in November 1955, more than 15 years prior to its official discovery observation. This discovery was made in the context of a larger survey of faint Trojans. The trio of Dutch and Dutch–American astronomers also collaborated on the productive Palomar–Leiden survey in the 1960s, using the same procedure as for this (smaller) survey: Tom Gehrels used Palomar's Samuel Oschin telescope (also known as the 48-inch Schmidt Telescope) and shipped the photographic plates to Cornelis and Ingrid van Houten at Leiden Observatory, where the astrometry was carried out. More than 7000 Jupiter trojans have since been discovered. Orbit and classification Glaukos is a dark Jovian asteroid in a 1:1 orbital resonance with Jupiter. It is located in the trailing Trojan camp at the gas giant's Lagrangian point, 60° behind its orbit. It is also a non-family asteroid of the Jovian background population. It orbits the Sun at a distance of 5.1–5.4 AU once every 12.02 years (4,389 days; semi-major axis of 5.25 AU). Its orbit has an eccentricity of 0.03 and an inclination of 7° with respect to the ecliptic. Physical characteristics Glaukos has been characterized as a dark D-type asteroid by the PanSTARRS photometric survey as well as in the SDSS-based taxonomy. It is the most common spectral type among the Jupiter trojans. Lightcurves In 2012 and 2013, three rotational lightcurves of Glaukos in the R- and S-band were obtained by astronomers at the Palomar Transient Factory in California. Lightcurve analysis gave rotation periods of 5.979, 5.980 and 5.989 hours with an amplitude between 0.27 and 0.37 magnitude. In October 2013, photometric observations by American astronomer Robert Stephens at the Center for Solar System Studies gave the best-rated lightcurve so far, with a period of hours and a brightness variation of 0.42 magnitude. Diameter and albedo According to the survey carried out by NASA's Wide-field Infrared Survey Explorer with its subsequent NEOWISE mission, Glaukos measures 47.65 kilometers in diameter and its surface has an albedo of 0.049, while the Collaborative Asteroid Lightcurve Link assumes a standard albedo for a carbonaceous asteroid of 0.057 and calculates a diameter of 42.23 kilometers with an absolute magnitude of 10.6. Naming This minor planet was named after Glaucus (Glaukos) from Greek mythology. In Homer's Iliad, he was a captain of the Lycian contingent during the Trojan War and was killed by Ajax, after whom the Jovian asteroid 1404 Ajax is named. The official naming citation was published by the Minor Planet Center on 1 June 1975.
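The orbital and size figures quoted above can be reproduced with the standard relations used for minor planets: Kepler's third law links the semi-major axis to the orbital period, and the usual magnitude–albedo–diameter relation links absolute magnitude, albedo and size. A minimal illustrative sketch in Python (the 1329 km constant and the formulas are the commonly used ones, not taken from this article):

import math

a = 5.25                                   # semi-major axis in AU, from the article
period = a ** 1.5                          # Kepler's third law in solar units: about 12.0 years

H, albedo = 10.6, 0.057                    # absolute magnitude and CALL's assumed albedo, from the article
diameter = 1329 / math.sqrt(albedo) * 10 ** (-H / 5)   # about 42.2 km
print(round(period, 2), round(diameter, 2))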
Notes References External links Asteroid Lightcurve Database (LCDB), query form (info) Dictionary of Minor Planet Names, Google books Discovery Circumstances: Numbered Minor Planets (1)-(5000) – Minor Planet Center Discoveries by Cornelis Johannes van Houten Discoveries by Ingrid van Houten-Groeneveld Discoveries by Tom Gehrels Minor planets named from Greek mythology Named minor planets
10522
https://en.wikipedia.org/wiki/EDIF
EDIF
EDIF (Electronic Design Interchange Format) is a vendor-neutral format based on S-expressions used to store electronic netlists and schematics. It was one of the first attempts to establish a neutral data exchange format for the electronic design automation (EDA) industry. The goal was to establish a common format from which the proprietary formats of the EDA systems could be derived. When customers needed to transfer data from one system to another, it was necessary to write translators from one format to another. As the number of formats (N) multiplied, the translator issue became an N-squared problem. The expectation was that with EDIF the number of translators could be reduced to the number of involved systems. Representatives of the EDA companies Daisy Systems, Mentor Graphics, Motorola, National Semiconductor, Tektronix, Texas Instruments and the University of California, Berkeley established the EDIF Steering Committee in November 1983. Later Hilary Kahn, a computer science professor at the University of Manchester, joined the team and led the development from version EDIF 2 0 0 until the final version 4 0 0. Syntax The general format of EDIF involves using parentheses to delimit data definitions, and in this way it superficially resembles Lisp. The basic tokens of EDIF 2 0 0 were keywords (like library, cell, instance, etc.), strings (delimited with double quotes), integer numbers, symbolic constants (e.g. GENERIC, TIE, RIPPER for cell types) and "Identifiers", which are reference labels formed from a very restricted set of characters. EDIF 3 0 0 and 4 0 0 dropped the symbolic constants entirely, using keywords instead. So, the syntax of EDIF has a fairly simple foundation. A typical EDIF file looks like this:
(edif fibex (edifVersion 2 0 0) (edifLevel 0) (keywordMap (keywordLevel 0))
  (status (written (timeStamp 1995 1 1 1 1 1) (program "xxx" (version "v1"))))
  (library xxx (edifLevel 0)
    (technology (numberDefinition (scale 1 (e 1 -6) (unit distance))))
    (cell dff_4 (cellType generic)
      (view view1 (viewType netlist)
        (interface (port aset (direction INPUT)) (port clok (direction INPUT)) ...
    (cell yyy (cellType generic)
      (view schematic_ (viewType netlist)
        (interface (port CLEAR (direction INPUT)) (port CLOCK (direction INPUT)) ... )
        (contents
          (instance I_36_1 (viewRef view1 (cellRef dff_4)))
          (instance (rename I_36_3 "I$3") (viewRef view1 (cellRef addsub_4))) ...
          (net CLEAR (joined (portRef CLEAR) (portRef aset (instanceRef I_36_1)) (portRef aset (instanceRef I_36_3)))) ...
Versions The 1 0 0 release of EDIF was made in 1985. EDIF 2 0 0 The first "real" public release of EDIF was version 2 0 0, which was approved in March 1988 as the standard ANSI/EIA-548-1988. It is published in a single volume.
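Because EDIF data is just nested, parenthesised lists, the surface syntax can be read generically with very little code. The following is purely an illustrative sketch in Python (not part of the standard and not a real EDIF parser; string, escape and identifier handling are simplified):

import re

def tokenize(text):
    # Split EDIF-like text into '(', ')', quoted strings and bare atoms.
    return re.findall(r'"[^"]*"|[()]|[^\s()"]+', text)

def parse(tokens):
    # Build nested Python lists from the token stream by recursive descent.
    token = tokens.pop(0)
    if token == '(':
        node = []
        while tokens[0] != ')':
            node.append(parse(tokens))
        tokens.pop(0)            # consume the closing ')'
        return node
    return token                 # an atom: keyword, identifier, number or "string"

tree = parse(tokenize('(edif fibex (edifVersion 2 0 0) (edifLevel 0))'))
# tree == ['edif', 'fibex', ['edifVersion', '2', '0', '0'], ['edifLevel', '0']]

Real EDIF tooling of course has to understand the keyword semantics (libraries, cells, views, nets and so on) on top of this surface syntax, which is where the difficulties described below arose.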
This version has no formal scope statement, but what it tries to capture is indicated by the defined viewTypes: BEHAVIOR to describe the behavior of a cell DOCUMENT to describe the documentation of a cell GRAPHIC to describe a dumb graphics and text representation of displayable or printable information LOGICMODEL to describe the logic-simulation model of the cell MASKLAYOUT to describe an integrated circuit layout NETLIST to describe a netlist PCBLAYOUT to describe a printed circuit board SCHEMATIC to describe the schematic representation and connectivity of a cell STRANGER to describe an as yet unknown representation of a cell SYMBOLIC to describe a symbolic layout The industry tested this release for several years, but in the end only the NETLIST view was widely used, and some EDA tools still support it today for EDIF 2 0 0. To overcome problems with the main 2 0 0 standard, several further documents were released: Electronic Industries Association EDIF Monograph Series, Volume 1, Introduction to EDIF, EIA/EDIF-1, Sept. 1988 EDIF Monograph Series, Volume 2, EDIF Connectivity, EIA/EDIF-2, June 1989 Using EDIF 2 0 0 for schematic transfer, EIA/EDIF/AG-1, July 1989 Documentation from Hilary J. Kahn, Department of Computer Science, University of Manchester "EDIF 2 0 0, An Introductory Tutorial", September 1989 EDIF Questions and answers, volume one, November 1988 EDIF Questions and answers, volume two, February 1989 EDIF Questions and answers, volume three, July 1989 EDIF Questions and answers, volume four, November 1989 EDIF Questions and answers, volume five, June 1991 EDIF 3 0 0 Because of some fundamental weaknesses in the 2 0 0 release, a new, incompatible release, 3 0 0, was issued in September 1993 and given the designation EIA standard EIA-618. It later achieved ANSI and ISO designations. It is published in 4 volumes. The main focus of this version was the NETLIST and SCHEMATIC viewTypes from 2 0 0. MASKLAYOUT, PCBLAYOUT and some other views were dropped from this release and deferred to later releases because the work on them was not fully completed. EDIF 3 0 0 is available from the International Electrotechnical Commission as IEC 61690-1. EDIF 4 0 0 EDIF 4 0 0 was released in late August 1996, mainly to add "Printed Circuit Board" extensions (the original PCBLAYOUT view) to EDIF 3 0 0. This more than doubled the size of EDIF 3 0 0, and it is published in HTML format on CD. EDIF 4 0 0 is available from the International Electrotechnical Commission as IEC 61690-2. Evolution Problems with 2 0 0 To understand the problems users and vendors encountered with EDIF 2 0 0, one first has to picture all the elements and dynamics of the electronics industry. The people who needed this standard were mainly design engineers, who worked for companies whose size ranged from a house garage to multi-billion dollar facilities with thousands of engineers. These engineers worked mainly from schematics and netlists in the late 1980s, and the big push was to generate the netlists from the schematics automatically. The first suppliers were Electronic Design Automation vendors (e.g., Daisy, Mentor, and Valid formed the earliest predominating set). These companies competed vigorously for their shares of this market. One of the tactics used by these companies to "capture" their customers was their proprietary databases. Each had special features that the others did not.
Once a decision was made to use a particular vendor's software to enter a design, the customer was ever after constrained to use no other software. To move from vendor A's to vendor B's systems usually meant a very expensive re-entry of almost all design data by hand into the new system. This expense of "migration" was the main factor that locked design engineers into using a single vendor. But the "customers" had a different desire. They saw immediately that while vendor A might have a really nice analog simulation environment, vendor B had a much better PCB or silicon layout auto-router. And they wished that they could pick and choose amongst the different vendors. EDIF was mainly supported by the electronics design end-users, and their companies. The EDA vendors were involved also, but their motivation was more along the lines of wanting to not alienate their customers. Most of the EDA vendors produced EDIF 2 0 0 translators, but they were definitely more interested in generating high-quality EDIF readers, and they had absolutely no motivation at all to write any software that generated EDIF (an EDIF Writer), beyond threats from customers of mass migration to another vendor's software. The result was rather interesting. Hardly any software vendor wrote EDIF 2 0 0 output that did not have severe violations of syntax or semantics. The semantics were just loose enough that there might be several ways to describe the same data. This began to be known as "flavors" of EDIF. The vendor companies did not always feel it important to allocate many resources to EDIF products, even if they sold a large number of them. There were several stories of active products with virtually no-one to maintain them for years. User complaints were merely gathered and prioritized. The harder it became to export customer data to EDIF, the more the vendors seemed to like it. Those who did write EDIF translators found they spent a huge amount of time and effort on generating sufficiently powerful, forgiving, artificially intelligent readers, that could handle and piece together the poor-quality code produced by the extant EDIF 2 0 0 writers of the day. In designing EDIF 3 0 0, the committees were well aware of the faults of the language, the calumny heaped on EDIF 2 0 0 by the vendors and the frustration of the end users. So, to tighten the semantics of the language, and provide a more formal description of the standard, the revolutionary approach was taken to provide an information model for EDIF, in the information modeling language EXPRESS. This helped to better document the standard, but was done more as an afterthought, as the syntax crafting was done independently of the model, instead of being generated from the model. Also, even though the standard says that if the syntax and model disagree, the model is the standard, this is not the case in practice. The BNF description of the syntax is the foundation of the language inasmuch as the software that does the day-to-day work of producing design descriptions is based on a fixed syntax. The information model also suffered from the fact that it was not (and is not) ideally suited to describing EDIF. It does not describe such concepts as name spaces very well at all, and the differences between a definition and a reference is not clearly describable either. Also, the constructs in EXPRESS for describing constraints might be formal, but constraint description is a fairly complicated matter at times. So, most constraints ended up just being described as comments. 
Most of the others became elaborate formal descriptions that most readers will never be able to decipher and that therefore may not stand up to automated debugging or compiling; just as a program may look good in review yet a compiler, and then an actual run, will still find interesting errors, a constraint description that is never machine-checked can hide mistakes. (Additionally, analogous EXPRESS compilers/executors did not exist when the standard was written, and may still not exist today.) Solutions to EDIF 2 0 0 problems The solution to the "flavor" problem of EDIF 2 0 0 was to develop a more specific semantic description in EDIF 3 0 0 (1993). Indeed, the reported experience of people writing EDIF 3 0 0 translators was that the writers were now much more difficult to get right, due to the great number of semantic restrictions, while the readers were comparatively trivial to develop. The solution to the vendor "conflict of interest" was neutral third-party companies, which could provide EDIF products based on vendor interfaces. This separation of the EDIF products from direct vendor control was critical to providing the end-user community with tools that worked well, and it came about naturally and without much comment. Engineering DataXpress was perhaps the first such company in this realm, with Electronic Tools Company seeming to have captured the market in the mid to late 1990s. Another dynamic in this industry is the size of EDIF itself. Since the later versions have grown rather large, generating readers and writers has become a very expensive proposition. The third-party companies have usually gathered the necessary specialists and can use this expertise to generate the software more efficiently. They are also able to leverage code sharing and other techniques an individual vendor could not. By 2000, almost no major vendor produced its own EDIF tools, choosing instead to OEM third-party tools. Since the release of EDIF 4 0 0, the entire EDIF standards organisation has essentially dissolved. There have been no published meetings of any of the technical subcommittees, the EDIF Experts group, etc. Most of the individuals involved have moved on to other companies or efforts. The newsletter was abandoned, and the Users' Group no longer holds yearly meetings. EDIF 3 0 0 and 4 0 0 are now ANSI, IEC and European (EN) standards. EDIF Version 3 0 0 is IEC/EN 61690-1, and EDIF Version 4 0 0 is IEC/EN 61690-2. EDIF Descendants LKSoft took major concepts from EDIF 2 0 0 to create a proprietary data format with the default extension ".cam" for its CircuitCAM system, offered originally by LPKF Laser & Electronics AG in Garbsen/Hannover, Germany, and today owned by DCT Co., Ltd. in Tianjin, China. To work efficiently with EDIF-like formats, LKSoft developed the EDIF Procedural Interface, an API for the C programming language. Zuken, formerly Racal-Redac Ltd., took concepts from the early EDIF 4 0 0 development to create a new proprietary format called CADIF for its Visula PCB-CAD system. This format is also widely used by third-party vendors. STEP-AP210, a part of ISO 10303, practically inherited all of the EDIF 4 0 0 functionality except for schematics. External links BYU EDIF Tools A Java framework for parsing/manipulating EDIF files, developed and maintained by BYU's Configurable Computing Lab Torc Open-source C++ API for reconfigurable computing, including parsing and manipulation of EDIF 2 0 0, from ISI's Reconfigurable Computing Group EDIF Overview from Elgris Technologies, Inc.
www.edif.org at the Internet Archive Archive of www.edif.org (now defunct) containing an introduction to the EDIF format Computer Aids for VLSI Design - Appendix D: Electronic Design Interchange Format by Steven M. Rubin Professor Hilary Kahn (1943-2007) EDA file formats
6688190
https://en.wikipedia.org/wiki/Lahore%20College%20for%20Women%20University
Lahore College for Women University
The Lahore College for Women University (LCWU) () is a public university in Lahore, Punjab, Pakistan. Prof. Dr. Bushra Mirza is the current Vice Chancellor of LCWU; she took charge of the office of Vice Chancellor on 5 July 2019. Lahore College for Women University, with a full-time enrollment of about 15,000 students and a teaching faculty of more than 600, is one of the most prestigious institutions of Pakistan. It admits students at the Intermediate, Graduate, Masters and Ph.D. levels. At the time of autonomy, in 1990, there were Masters classes in only six areas, but now, with university status, LCWU offers degrees at graduate, postgraduate and doctoral levels. A four-year BS degree program is offered in 39 disciplines, along with five-year Pharm-D and Architecture degrees. The University offers M.A./M.Sc. programs in 6 subjects and MS/M.Phil degrees in 28 subjects. The University also offers Ph.D. programs in 16 disciplines, i.e. Chemistry, Bio-Technology, Botany, Zoology, Computer Science, City & Regional Planning, Mathematics, Environmental Science, Physics, Political Science, Education, Applied Psychology, Islamic Studies, Urdu, Punjabi and Fine Arts. Lahore College for Women University, established in May 1922 as an intermediate residential college, was originally housed in a building on Hall Road, Lahore, with a strength of 60 students (25 boarders) and 13 staff members. By 1950, the College's strength had increased to 600 students, and it was shifted to the present building on Jail Road. By 1922 LCWU was affiliated with the University of the Punjab for undergraduate programs in 18 subjects; within the next two years the institution had graduate programs in 14 subjects. Postgraduate classes in English were initiated in 1940 and Honours classes in five subjects were introduced in 1949. B.Sc. classes started in 1955, while postgraduate classes in the subjects of Economics and Physics started in 1966. By 1979 Islamic Studies, Political Science and Psychology had also been added to the ever-increasing list of programs. The year 1990, when administrative and financial autonomy was given to the institution, proved to be the turning point in the history of LCWU. On 13 August 1999, it was declared a degree-awarding institution, and it was elevated to the status of a women's university on 10 September 2002. The 21st century has brought rapid advances in science. Keeping in line with the importance of the sciences in today's world, LCWU has been fulfilling the demand of female students in the area, as it was long the only institution offering them science subjects at the undergraduate level. This is evident from the fact that the majority of female doctors, serving and retired, have at some stage (F.Sc. or B.Sc.) studied at Lahore College for Women. Since 1922 the college has proved its worth as a leading seat of learning for science subjects. At present Botany, Chemistry, Physics, Zoology, Bio-Technology, Mathematics, Economics, Statistics, Electronics, Environmental Science, Computer Science and Pharmacy are taught at graduate, postgraduate and doctoral levels. LCWU is also cognizant of the significance of the social sciences and liberal arts, since they contribute to the aesthetic sense of human beings and are essential for society. The Department of English, being more than 70 years old, is the oldest post-graduate department of the University. Founded by Prof Mrs. U.K.
Siraj-ud-Din, it is still rooted in the traditions of scholarship and academic excellence. Besides English, programs in Urdu, Punjabi, Islamic Studies, International Relations, Political Science, Fine Arts, Pakistan Studies, Mass Communication and Gender and Development Studies are offered at the graduate and postgraduate levels. The graduates of LCWU take their place in moral, intellectual and professional leadership in all walks of national life. Among its alumni are an extraordinary number of teachers, physicians and professionals in all fields of life. At the F.A./F.Sc. level the Intermediate College of LCWU is affiliated with the Board of Intermediate and Secondary Education, Lahore for the purpose of examinations. This Intermediate College has proved to be a fruitful nursery, preparing women for professional education in the province. Since the establishment of LCWU as a university, the institution has striven for improvement in higher education. MoUs with various national industries and linkages with foreign universities have been established in the fields of Pharmacy, Electronics, Environmental Science, Fine Arts, Economics, Mass Communication and Gender & Development Studies. Current Vice Chancellor - Prof. Dr. Bushra Mirza T.I Prof. Dr. Bushra Mirza is the current Vice Chancellor of LCWU. She took charge of the office of Vice Chancellor on 5 July 2019. Dr. Bushra Mirza obtained both her M.Sc. and M.Phil. degrees with distinction from Quaid-i-Azam University. After completing her Ph.D. at the University of Cambridge on a Cambridge Commonwealth Scholarship, she did a short post-doc at the University of North Carolina, USA. She was a regular faculty member of Quaid-i-Azam University from 1999 until 2019, when she joined Lahore College for Women University as Vice Chancellor. During this period, she worked on several research projects. She pioneered the establishment of a research laboratory to produce transgenic plants at Quaid-i-Azam University. Her research interests include the evaluation of the medicinal activity of various plants and their genetic transformation for improvement. She has also been involved in analysing the medicinal activities of newly synthesized compounds. Twenty-two PhD and 120 M.Phil. students have completed their research work successfully under her supervision. She has published more than 180 papers in refereed journals of international repute, with a total impact factor of more than 300 and more than 2,000 citations. Apart from lab work, she has been interested in the bioethical aspects of biotechnology and has been working at various levels in this regard. In 2001, she represented Pakistan at the Salzburg Seminar "Biotechnology: Legal, ethical and moral issues" held at Salzburg, Austria. As a consequence of that seminar, she contributed to a book entitled "Cross Cultural Biotechnology", published by Rowman & Littlefield Publishers, Inc., Maryland, USA, in 2004. In 2013, she was awarded the honorary position of ISESCO Women Science Chair, in which capacity she has organized several events, especially for female scientists working in OIC countries. In 2017 she participated in the joint meeting of presidents of ISESCO Women Science Chairs at ISESCO's headquarters in Rabat, to develop a roadmap for the promotion of science and technology by focusing on the contribution of women in the region. Furthermore, she has been involved in volunteer work in different capacities.
She has been coordinating, at the national level, the High School Summer Science Research Program, which aims to help high school students develop research aptitude. She is also a member of several international forums, such as the UNESCO-IFAP National Committee and the Global Biodiversity & Health Big Data Alliance, and an executive member of the National Chapter of the Organization for Women in Science for the Developing World (OWSD). In recognition of her research achievements, she was awarded the Best Young Research Scholar Award (2006) by the Higher Education Commission, Pakistan, a gold medal in Biochemistry by the Pakistan Academy of Sciences in 2008, and the Prof. A. R. Shakoori Gold Medal by the Zoology Society of Pakistan in 2010. She also received a Best Research Paper Award from the Higher Education Commission and the Presidential Award Tamgha-i-Imtiaz in 2017. Memoranda of understanding MoUs with national industries and links with foreign universities have been established in the fields of Pharmacy, Electrical Engineering, Environmental Science, Fine Arts, Economics, Mass Communication and Architecture. Research Lahore College for Women University has established a Research Center to provide students with facilities and services for advanced research. Quality enhancement cell A higher education institution needs quality benchmarks in its key performance areas. To institutionalize the process of quality control, the Quality Assurance Agency (QAA) has been established in the HEC. In pursuit of the National Action Plan for performance evaluation, assessment and accreditation of institutions of higher education, a Quality Enhancement Cell was established in Lahore College for Women University. Libraries HEC Digital Library HEC Digital Library is a program to provide researchers at public and private universities in Pakistan and non-profit research and development organizations with access to international scholarly literature based on electronic (online) delivery. University Library The main library of the university (Sciences, Computer Science and Arts) stocks books, magazines and publications as well as national newspapers. Besides the main library there are seminar libraries in the post-graduate departments. Book bank There is a book bank maintained by the Social Work Department; deserving and needy students are provided with textbooks. Student activities To facilitate co-curricular activities and sports, there are student societies and clubs. Presidents and secretaries of these societies are nominated by the teachers, and by participating in these activities the students win prizes. The Shaukat Ara Niazi Gold Medal is awarded to students on the basis of best performance in literary pursuits. The Botanical and Horticultural Society, established by the Department of Botany in 2013, holds inter-university events on plant sciences. Bazm-e-Mushaira develops literary and poetic ability among students by arranging poetry events and competitions; inter-university functions and competitions are also arranged. The Comptech Society is a platform for students of the Computer Science Department to organize their extracurricular activities; students from BS-CS, MS-CS and MIT share their views and arrange events. The Economics Society arranges seminars, workshops, quiz competitions and essay competitions. The English Debating Society allows students to present their views, building confidence and oratorical skills.
The Geography Society's objective is to enhance students' capabilities and knowledge, build confidence among students and promote teamwork. The Home Economics Society organizes workshops and competitions. The Iqbal Society (departments of Philosophy and Persian) organizes Iqbal Day programs, workshops and seminars highlighting the thought and philosophy of Iqbal. The Psychology Society has been revived, and the members to be elected are interviewed; the society arranges events such as workshops, seminars, conferences and lectures. The Punjabi Debating Society arranges inter-class and intercollegiate debate competitions and prepares students of LCWU to participate in debating competitions at other colleges and universities. The Shaukat Ara Niazi English Literary Society is named after Shaukat Ara Niazi, a faculty member of the English Department; the society arranges events, quizzes and essay-writing competitions to polish students' literary skills. LCWU's Sports Society is under the Department of Physical Education. The Statistics Society arranges seminars, workshops, quiz competitions and essay competitions. The Urdu Debating Society deals with debates, declamation and speeches. Under the Urdu Literary Society (Halqa-e-Ehel-e-Qalam), students participate in essay-writing competitions at a national level; college- and university-level essay-writing competitions are arranged to promote students in literary activities. Notable alumni Bushra Ansari Diana Baig Samina Khalid Ghurki Saba Hameed Salima Hashmi Farhat Hashmi Riffat Hassan Kishwar Naheed Maryam Nawaz Anusha Rahman Manmohini Zutshi Sahgal Arfa Sayeda Zehra See also Women's colleges List of educational institutions in Lahore List of current and historical women's universities and colleges References External links LCWU official website Women's universities and colleges in Pakistan Universities and colleges in Lahore Educational institutions established in 1922 Lahore College for Women University 1922 establishments in British India Engineering universities and colleges in Pakistan Public universities and colleges in Punjab, Pakistan Lahore Lahore District
48000575
https://en.wikipedia.org/wiki/Eric%20Gibbs
Eric Gibbs
Eric Gibbs is an American attorney at Gibbs Law Group LLP. He is a member of the Board of Governors of the Consumer Attorneys of California. As a part of the American Association for Justice, he co-chairs the Consumer Privacy and Data Breach Litigation Group and the Lumber Liquidators Litigation Group, and serves as the secretary for the Qui Tam Litigation Group. Early life and education Eric Gibbs started his legal career at Lieff Cabraser, working his way from the mail room to paralegal. After deciding to become an attorney, Eric attended law school at Seattle University School of Law and graduated with his Juris Doctor degree in 1995. He passed the California bar examination and received his license to practice in the state of California. He worked for two years at the Consumer Protection Division of the Washington Attorney General's Office before joining his former colleague from Lieff Cabraser, Daniel Girard, as a named partner at Girard Gibbs LLP. After building his practice with his partner Daniel Girard for 20 years, Eric set out on his own, transitioning his practice to a new firm, Gibbs Law Group. Gibbs Law Group grew to include approximately 20 attorneys, a relatively large size for a plaintiffs' firm. Career Data Breach Practice Eric won an MVP award in 2018 from Law360, a LexisNexis company, for his work in litigating against Anthem Blue Cross Blue Shield over a data breach involving the personal information of approximately 80 million insurance customers. Law360 notes that Eric played a key role in negotiating a $115 million settlement with Anthem, a record settlement in a data breach case. Eric noted that the Anthem medical data breach was a tricky case because data breach litigation was still in its infancy and there was not much precedent to draw upon. "With data breach cases in particular, a lot of what you're doing is uncertain and untested. There's not 20 years of data breach jurisprudence that one can draw on," Eric said. Eric first got involved in data breach cases, he says, "because we were seeing a number of district court opinions dismissing breach cases on standing grounds" and he believed the law was going in the wrong direction. As a result, Eric sought and was awarded Lead Counsel status in a case stemming from the 2013 Adobe data breach. His firm argued the standing issue before federal judge Lucy Koh, the same judge who would later appoint Eric to a leadership position in the Anthem data breach litigation. Judge Koh sided with Eric and the plaintiffs he represented in the Adobe case, in an opinion on data-breach standing that has been cited dozens of times by courts across the country and influenced at least three of the twelve federal appellate courts to adopt pro-standing positions in data breach cases. Eric said the Adobe decision "opened the door" for data breach cases across the country, and that these cases are "incredibly important to the marketplace" and "to the people whose data was taken." In 2019, Eric received a California Lawyer magazine Attorney of the Year (CLAY) award for his work in data breach and privacy litigation. Eric was appointed by the court in the consolidated Equifax data breach lawsuit to help lead the litigation efforts. The Equifax breach compromised the personal information of as many as 145.5 million people.
In 2019, Eric and his firm helped secure a ruling for consumers that they could pursue a nationwide claim for negligence and negligence per se against Equifax for failing to adequately secure their personal information. Eric is also currently litigating data breach cases against Banner Health and Excellus BlueCross BlueShield in court-appointed leadership roles. Privacy Practice Eric worked on a privacy class action against Lenovo, in which his firm was appointed co-lead counsel. The lawsuit alleged that Lenovo violated federal hacking and wiretap laws by pre-installing spyware, made by Superfish, on its computers to track their owners' activities, without their consent. Eric's firm helped achieve a $8.3 million settlement with Lenovo and Superfish on behalf of consumers. In the Vizio privacy litigation, concerning allegations that Vizio smart TVs spied on owners without their consent, the court appointed Eric co-lead counsel. Eric's firm helped secure an early victory by defeating Vizio's motion to dismiss the plaintiffs' Video Privacy Protection Act claim. Eric was then instrumental in negotiating a $17 million settlement with Vizio on behalf of smart TV owners. The settlement also required Vizio to delete all the viewing data it had collected without users' consent. Mass Tort Practice In the Risperdal litigation, the court appointed Eric to a leadership role to help lead the mass tort litigation against Johnson & Johnson over failing to disclose that its antipsychotic drug, Risperdal, could cause growth of breast tissue in men. The mass tort litigation efforts also included representing men and boys who developed male breast tissue from taking the antipsychotic drug Invega, manufactured by Janssen Pharmaceuticals. Eric and the litigation team helped uncover evidence that Johnson & Johnson knew as early as 2001 that taking Risperdal could cause breast development in men. The litigation proceeded to bellwether trials to give the parties a sense of the likely outcomes if all the thousands of cases are litigated to a verdict. In early trials, one plaintiff, who had used Risperdal since he was eight and developed breast tissue as a result, was awarded $2.5 million by the jury. In a 2016 bellwether, the jury awarded a record-setting $70 million to the family of a five-year-old boy who grew breast tissue due to Risperdal use. Eric and the litigation team are gearing up for additional bellwether trials against Johnson & Johnson. Eric Gibbs served in a leadership role in the mass tort litigation against Takeda Pharmaceuticals concerning allegations that the diabetes drug Actos (Pioglitazone) caused bladder cancer. In 2015, Takeda reached a global settlement with the over 9,000 plaintiffs in the litigation, agreeing to pay them $2.4 billion. Eric and his partners took on a leadership position doing legal briefing in the Yaz and Yasmin mass tort litigation, concerning allegations that the birth control pills manufactured by Bayer caused blood clots, deep vein thrombosis, and stroke. Bayer paid over $2 billion in settlements to individuals who experienced these side effects, without taking any cases to trial. Eric served in a leadership position in the GranuFlo and Naturalyte litigation. Fresenius, a major player in the dialysis industry, made a dialysis cocktail that included its patented GranuFlo and Naturalyte drugs. 
Fresenius identified in an internal memo that the dialysis drugs caused elevated bicarbonate levels in patients, raising the risk of heart attack and stroke by a factor of six, but Fresenius did not disclose this risk until a year later, when the FDA issued a Class I recall. In 2017, Fresenius agreed to pay $250 million into a settlement fund to pay patients who had heart attacks or strokes while using GranuFlo and Naturalyte. Consumer Practice Early in his career, Eric served as lead counsel in a class action lawsuit against Apple over iPod batteries dying too quickly. The lawsuit alleged that Apple had promised 8- to 10-hour battery life for early-generation iPods and that the batteries would last the "lifetime" of the iPod, but consumers reported battery performance degrading within 18 months of purchase. Eric negotiated a settlement with Apple that provided new iPods and cash to class members. Commenting on the settlement, Eric said, "At the heart of this settlement is the promise of a replacement iPod for people who bought the third-generation iPod." Consumers who bought first- or second-generation iPods could receive a cash payment of $25 or a coupon for $50 in Apple store credit, which would pay 85% of the cost of purchasing an extended battery warranty from Apple. Eric Gibbs has served as a court-appointed lead counsel, class counsel, and liaison counsel in numerous major class action lawsuits. As a partner at Girard Gibbs LLP, he litigated against General Motors regarding the organic antifreeze DEX-COOL, which was linked to the degradation of engine sealing components. Eric was part of the leadership team in the Chase check loan litigation, which alleged that Chase had issued favorable consumer loans during good economic times and later raised the interest rates. Eric helped settle the case for $100 million. In the end, only one of the plaintiffs' legal theories remained standing, prompting the judge to note that Eric and other plaintiffs' counsel "fought tooth and nail, down to the wire" to achieve "the best settlement that they could under the circumstances." The Consumer Attorneys of California nominated Eric Gibbs as a finalist for the 2013 Consumer Attorney of the Year for his work representing plaintiffs against Chase Bank. Additionally, Girard Gibbs LLP filed a class action suit against Volkswagen over its circumvention of federal emissions tests. Gibbs Law Group LLP In 2014, Eric Gibbs founded Gibbs Law Group, a national law firm representing consumers, employees, whistleblowers and investors in complex lawsuits. Girard Gibbs LLP Girard Gibbs LLP was founded in 1995. The firm represents clients in securities, antitrust, personal injury, consumer protection, whistleblower, and employment law litigation. On October 1, 2018, Girard Gibbs changed its name to Girard Sharp, and Eric Gibbs continued his role as managing partner of Gibbs Law Group. Current court-appointed leadership positions In re Adobe Systems, Inc. Privacy Litigation, No. 13-cv-05226-LHK Velasco v. Chrysler Group LLC, No. 13-cv-08080-DDP-VBK Stedman v. Mazda Motor Corporation, No. 8:14-cv-01608-JVS In re Ford Fusion and C-MAX Fuel Economy Litigation, No. 13-MD-2450 In re Hyundai and Kia Fuel Economy Litigation, MDL No. 2424, No. 13-m-02424 In re JCCP 4775, Risperdal and Invega Product Liability Cases Yaeger v.
Subaru of America, Inc., No 1:14-cv-04490 Honors and awards Top 30 Plaintiff Lawyers in California, 2016 Consumer Protection MVP for 2016, Law360 Top 100 Northern California Super Lawyers, 2012-2013 Northern California Super Lawyers, 2010-2015 Consumer Attorneys of California 2013 Consumer Attorney of the Year Finalist Best Lawyers Guide for Mass Tort Litigation/ Class Actions, 2012-2016 Martindale-Hubbell AV-Preeminent Rating for Highest Ethics and Legal Skills References Living people Year of birth missing (living people) Place of birth missing (living people) Seattle University School of Law alumni People from the San Francisco Bay Area California lawyers
10477267
https://en.wikipedia.org/wiki/NTHU%20College%20of%20Electrical%20Engineering%20and%20Computer%20Science
NTHU College of Electrical Engineering and Computer Science
The College of Electrical Engineering and Computer Science (EECS) of National Tsing Hua University was established on February 1, 1998. The goal of the college is to train high-tech professionals who are ready to meet the needs of national economic and industrial development. Many alumni now work in Hsinchu Science Park, the technological heart of Taiwan. The College of EECS now consists of two departments and four graduate institutes: Department of Electrical Engineering (EE), Ph.D., M.S. and B.S. Department of Computer Science (CS), Ph.D., M.S. and B.S. Institute of Electronics Engineering (ENE), Ph.D. and M.S. Institute of Communications Engineering (COM), Ph.D. and M.S. Institute of Information Systems and Applications (ISA), Ph.D. and M.S. Institute of Photonics Technologies (IPT), Ph.D. and M.S. Currently, the College of Electrical Engineering and Computer Science has 95 full-time faculty members. External links EECS College Website Electrical Engineering Department Computer Science Department Institute of Communications Institute of Information Systems and Applications Institute of Photonics Technologies NTHU Official Website National Tsing Hua University
43651666
https://en.wikipedia.org/wiki/Eye%20Candy%20%28TV%20series%29
Eye Candy (TV series)
Eye Candy is an American thriller television series that premiered on MTV on January 12, 2015. The series was developed by Christian Taylor, and is based on the 2004 novel of the same name by R. L. Stine. Eye Candy stars Victoria Justice as Lindy Sampson, a tech genius who goes on the hunt for a serial killer in New York while searching for her lost sister Sara. On February 11, 2014, Eye Candy was picked up for a 10-episode first season. Justice revealed on April 18, 2015, that the series had been cancelled. Premise Eye Candy centers on tech genius Lindy (Victoria Justice), a 22-year-old woman who is persuaded by her roommate, Sophia (Kiersey Clemons), to begin online dating. Unfortunately, she begins to suspect that one of her suitors might be a deadly cyber stalker. She teams up with her friends, a band of hackers, to solve the murders he committed, while unleashing her own style of justice on the streets of New York City in an attempt to find her sister, Sara (Jordyn DiNatale), who was kidnapped three years earlier by an unknown suspect. Cast and characters Main Victoria Justice is Lindy Sampson, a brilliant 22-year-old hacker, who dropped out of MIT after her sister was kidnapped, moved to New York City to look for her, and is set on finding the Flirtual killer. Casey Deidrick is Detective Tommy Calligan, an officer at the NYPD, who is working with Lindy and Ben's best friend in the series, and one of Lindy's love interests. Harvey Guillén is George Reyes, Lindy's coworker, close friend, and confidant. Kiersey Clemons is Sophia Preston, Lindy's roommate and best friend. John Garet Stoker is Connor North, Sophia's best friend and Lindy's frenemy. Recurring Ryan Cooper is Jake Bolin, one of Lindy's love interests. Melanie Nicholls-King is Sgt. Catherine Shaw, the head of the Cyber Crimes Unit of the NYPD. Eric Sheffer Stevens is Hamish Stone. Marcus Callender is Detective Marco Yeager, Detective Calligan's partner. Rachel Kenney is Detective Pascal. Theodora Woolley (or Theodora Miranne) is Tessa Duran. Nils Lawton is Reiss Hennesy, one of the guys Lindy found on Flirtual and was murdered by the Flirtual killer. Guest stars Daniel Lissing is Ben Miller, Tommy's former partner in the Cyber Crimes Unit, who fell in love with Lindy. He was murdered by the Flirtual killer. Jordyn DiNatale is Sara Sampson, Lindy's sister, who was allegedly abducted. David Carranza is Peter, one of the guys Lindy found on Flirtual, and was murdered by the Flirtual killer. Peter Mark Kendall as Bubonic, a highly intelligent hacker. Taylor Rose is Amy Bryant. Daniel Flaherty is Max Jenner. Ariane Rinehart is Jessica. Erica Sweany is Julia Becker. Ted Sutherland is Jeremy. Erin Wilhelmi is Erika Williams. Ebonée Noel is Mary Robertson, Catherine's niece. Episodes Production and development A pilot episode of Eye Candy was ordered on September 13, 2013, by MTV. The first and unaired pilot of Eye Candy, which starred Victoria Justice, Harvey Guillen, Justin Martin, Lilan Bowden, Nico Tortorella, and Olesya Rulin, was written by Emmy Grinwis and directed by Catherine Hardwicke. On February 11, 2014, the series was announced as picked up for a 10-episode first season, with the first episode being reshot and all the roles being recast except for those of Justice and Guillen. On September 16, 2014, the cast was extended with Casey Deidrick, Kiersey Clemons, and John Garet Stoker all becoming series regulars. Production began on September 15, 2014, and ended on December 20, 2014, in Brooklyn, New York City. 
Reception Eye Candy has received mixed reviews. Tim Stack of Entertainment Weekly stated, "While Justice is a winning actress, she's miscast here and not helped by a story line that feels like one of those old USA TV movies that would have starred Shannen Doherty and Rob Estes." Robert Lloyd of the Los Angeles Times said, "The prologue is well-handled, suspenseful and alarming, but much of what follows seems at least a little bit silly or confused." More positively, Adam Smith of the Boston Herald said, "With the suspenseful Eye Candy, we have a pretty good show, especially for teens who get a thrill out of being creeped out." The series' pilot episode holds a score of 54/100 on review aggregating website Metacritic. References External links 2015 American television series debuts 2015 American television series endings 2010s American crime drama television series 2010s American teen drama television series 2010s American LGBT-related drama television series 2010s American mystery television series English-language television shows MTV original programming Fictional portrayals of the New York City Police Department Television shows based on American novels Television shows filmed in New York (state) Television shows set in New York City
47076878
https://en.wikipedia.org/wiki/Rangaswamy%20Narasimhan
Rangaswamy Narasimhan
Rangaswamy Narasimhan (April 17, 1926 – September 3, 2007) was an Indian computer and cognitive scientist, regarded by many as the father of computer science research in India. He led the team which developed TIFRAC, the first indigenous Indian computer, and was instrumental in the establishment of CMC Limited in 1975, a Government of India company later bought by Tata Consultancy Services. He received the Padma Shri, the fourth-highest Indian civilian award, from the Government of India in 1977. Biography Rangaswamy Narasimhan was born on 17 April 1926 in Chennai, in the south Indian state of Tamil Nadu. He graduated with honours in Telecommunication Engineering from the College of Engineering, Guindy, then part of the University of Madras, in 1947 and moved to the US to obtain a master's degree (MS) in electrical engineering from the California Institute of Technology. He stayed in the US to secure a doctoral degree (PhD) in mathematics from Indiana University. In 1954, he returned to India, accepting Homi J. Bhabha's invitation to join the project team set up by the Tata Institute of Fundamental Research (TIFR), Mumbai, for the development of the first indigenous computer. Five years later, the prototype of the computer was ready, and the machine was inaugurated by the then prime minister of India, Jawaharlal Nehru, who named it the Tata Institute of Fundamental Research Automatic Calculator (TIFRAC). In 1961, he went back to the US to conduct further research on cognitive science at the University of Illinois at Urbana–Champaign, working as a visiting scientist at the university's Digital Computer Laboratory until 1964. His next assignment at TIFR was the establishment of a software development centre, which is reported to have paved the way for the founding of the National Center for Software Development and Computing Techniques (NCSDCT) under TIFR. The institution was later renamed the National Centre for Software Technology and was merged into the Centre for Development of Advanced Computing (C-DAC) in 2003. In August 1963, the Government of India set up an interdepartmental Electronics Committee under the chairmanship of Vikram Sarabhai to find ways to achieve self-sufficiency in the electronics industry, and Narasimhan was made the chairman of one of its subcommittees, entrusted with examining how to reduce dependence on IBM and International Computers Limited. One of the recommendations of the Narasimhan committee was to establish a national organization for the manufacture and maintenance of computers; this was later endorsed by the Electronics Commission, headed by M. G. K. Menon, and Narasimhan was entrusted with the task, which resulted in the formation of the Computer Maintenance Corporation, the later CMC Limited, as a fully owned government company in 1977, with Narasimhan as its founder chairman. He also remained connected with TIFR at its National Centre for Software Development and Computing Techniques from 1975 to 1985. Narasimhan was associated with several agencies and organizations in the course of his research, among them the Industrial Design Centre at the Indian Institute of Technology Bombay, the Speech Pathology Unit of Topiwala National Medical College and Nair Hospital, the All India Institute of Speech and Hearing, the Central Institute of Indian Languages, the Indira Gandhi National Centre for the Arts, and the Centre for Applied Cognitive Science at the Ontario Institute for Studies in Education, Toronto.
He sat on the council of the International Federation for Information Processing as the representative of India from 1975 to 1986 and was a member of the Scientific Advisory Council of the Indo-French Centre for the Promotion of Advanced Research from 1988 to 1990. He retired from TIFR service in 1990 as a professor of eminence but retained his association with CMC as an advisor past his retirement, even after the company was bought by Tata Consultancy Services in 2001. He died on 3 September 2007, at the age of 81, in Bengaluru, Karnataka.

Legacy

Besides his contributions to the development of the first Indian computer and the founding of CMC and the National Centre for Software Development and Computing Techniques, Narasimhan was involved in bringing the Indian computer sector together: he founded the Computer Society of India in 1964 and became its founder president, a post he held till 1969. He was involved in research on the theory of behaviour, extending his studies to first language acquisition, artificial intelligence, computational modelling of behaviour and the modelling of language behaviour, and was reported to be the first to discover an analogy between ″formal grammars of natural languages and the formal structures underlying picture processing″. He carried the research on syntactic pattern recognition begun during his stint at the University of Illinois from 1961 to 1964 into his work at TIFR and ″developed a meta theory and approach to the study of language behaviour″. His argument was that the adaptation of behaviour to specific uses must have been evolutionary and that, as such, use must define the structure or mechanism. This was the topic of his book, Modeling Language Behaviour, which is considered to have offered alternatives to the concepts of Noam Chomsky, drawing comparisons with the American cognitive scientist.

Narasimhan studied the environment a child (9 months to 3 years) is exposed to while acquiring a first language. This ethological study of language behaviour acquisition led him to the discovery that pre-literate oral language behaviour differs from literate language behaviour and that, while the former is genetic, the latter is acquired. He postulated that this difference was analogous to that between connectionist artificial intelligence, which includes non-literate modes of functioning, and rule-based artificial intelligence. His book, Artificial Intelligence and the Study of Agentive Behaviour, released in 2004, details his findings. The book is reported to have propounded a new understanding of the early education of children. Narasimhan's studies at Illinois in the 1960s and 70s on the computational modelling of visual behaviour are known to have proposed a new grammar for analysing the visually given image. To gain a better understanding, his team at Illinois developed a new language named PAX, and the group worked on developing hardware running PAX to analyse the retinal image, but the project was abandoned after a while. Besides several articles in peer-reviewed journals, he published two more books between the release of Modeling Language Behaviour and that of Artificial Intelligence and the Study of Agentive Behaviour, released in 1998 and 2004 respectively. Both books, Language Behaviour: Acquisition and Evolutionary History and Characterising Literacy: A Study of Western and Indian Literacy Experiences, were published by Sage Publications.
He also served as an editor of the book The Dynamics of Technology: Creation and Diffusion of Skills and Knowledge and edited the 1993 special issue of Current Science on artificial intelligence.

Awards and honours

Rangaswamy Narasimhan was an elected fellow of the Indian National Science Academy (INSA), the Indian Academy of Sciences, The National Academy of Sciences, India, and the Computer Society of India, and held the Jawaharlal Nehru Fellowship from 1971 to 1973. He received the Homi J. Bhabha Award from the University Grants Commission in 1976, and the Government of India awarded him the civilian honour of the Padma Shri in 1977. The Om Prakash Bhasin Award was conferred on him in 1988, and Dataquest magazine selected him for its Lifetime Achievement Award in 1994.

See also
Tata Institute of Fundamental Research Automatic Calculator
Centre for Development of Advanced Computing
CMC Limited
University of Illinois at Urbana–Champaign

References

Further reading

Recipients of the Padma Shri in civil service 1926 births 2007 deaths Scientists from Chennai Indian neuroscientists College of Engineering, Guindy alumni California Institute of Technology alumni Indiana University Bloomington alumni University of Illinois at Urbana–Champaign people Fellows of the Indian Academy of Sciences Fellows of the Indian National Science Academy Fellows of The National Academy of Sciences, India Jawaharlal Nehru Fellows 20th-century Indian biologists
33348521
https://en.wikipedia.org/wiki/Spacecraft%20in%20Star%20Trek
Spacecraft in Star Trek
The Star Trek franchise features many spacecraft. Various space vessels make up the primary settings of the Star Trek television series, films, and expanded universe; others help advance the franchise's stories. Throughout the franchise's production, spacecraft have been depicted by numerous physical and computer-generated models. Producers worked to balance often tight budgets with the need to depict convincing, futuristic vessels. Beyond their media appearances, Star Trek spacecraft have been marketed as models, books, and rides. Filming models have sold for thousands of dollars at public auction.

Development and production

Establishing basic designs (1966–1969)

The original Star Trek television series (1966–1969) established key tenets of the Star Trek franchise: an intrepid, diverse crew traveling through space and encountering the unknown. Matt Jefferies designed the crew's spaceship, the USS Enterprise. Jefferies' experience with aviation led to his Enterprise designs being imbued with what he called "aircraft logic". Series creator Gene Roddenberry wanted the ship's design to convey speed, power, a "shirt sleeve" working environment, and readiness for a multiyear mission. Roddenberry insisted the ship not have fins or rockets; Jefferies also avoided repeating the fictional designs of Buck Rogers and Flash Gordon, along with the real-world space exploration work done by Boeing, Douglas Aircraft Company, Lockheed Corporation, NACA, NASA, and Northrop. With Roddenberry's speed requirement in mind, Jefferies decided the ship needed to be instantly recognizable from a distance, and that speed could be conveyed by the ship starting small in the background and growing as it accelerates toward the camera. Jefferies imagined the ship's engines were so powerful they would be dangerous to be near, hence the pair of external warp nacelles. Jefferies initially designed the habitable portion of the ship as a sphere, but it conflicted with the need to suggest the ship's speed. Although Jefferies wanted to avoid the cliche of a "flying saucer", the saucer-shaped upper portion of the hull eventually became part of the final design. Jefferies kept the exterior as plain as possible, both to allow light to play across the model and to suggest that the ship's vital equipment was on the interior, where it could be more readily maintained and repaired. Looking at an early balsa and birchwood model of the Enterprise, Roddenberry thought the vessel would look better upside down, and a TV Guide cover once depicted it as such; ultimately, however, the show used Jefferies' arrangement. The saucer module, engineering hull, and twin warp nacelle design influenced producers' designs of Starfleet vessels throughout the franchise's spin-offs and films. The filming model's constituent parts cost under $600.

The Enterprise is depicted with a registry number of "NCC-1701". Jefferies combined the "NC" of American civilian aircraft registration codes with the "CC CC" of Russian aircraft, deriving "NCC". The "1701" digits were chosen for their readability on television screens. Although the model initially lacked internal lighting, the tight budget ultimately allowed its starboard side to receive illuminated windows. The show's limited budget also affected the Enterprise's support craft: Jefferies wanted to give the show's shuttlecraft a more aerodynamic look than the Enterprise itself, but it was too expensive to build a life-size filming model with a curved hull.
Ultimately, toy model company AMT paid for the construction of the shuttle design in exchange for the rights to sell a model toy. The shuttlecraft became a key plot element in the episode "The Galileo Seven" (1967). The show's tight budget meant that, more often than not, producers recycled models and footage, used cheaper animation techniques, or simply omitted the appearance of spacecraft. As with the Enterprise's design, alien spacecraft design in Star Trek—such as the Klingon starships' resemblance to a manta ray with a bulbous prow, and Romulan vessels' bird-of-prey markings and nomenclature—influenced future television and film productions.

Initial films (1979–1984)

Several years after Star Trek was canceled, Roddenberry and other producers began work on a new series, Star Trek: Phase II. Paramount Pictures, recognizing the market for science-fiction films after the success of Star Wars (1977), instead approved the production of Star Trek: The Motion Picture (1979). Many of the film's designs and models came from Phase II, although they were recreated to provide the higher level of detail needed for a big-screen appearance. Mike Minor, Joe Jennings, Harold Michaelson, Andrew Probert, Douglas Trumbull, and Richard Tyler redesigned the USS Enterprise while retaining the television ship's overall shape. The Motion Picture introduced the rubberband-like "snap" effect for starships going to warp speed. Like the Enterprise, the Klingon vessel retained a design reminiscent of its television appearances. Star Trek II: The Wrath of Khan (1982) and Star Trek III: The Search for Spock (1984) introduced models—a Klingon bird-of-prey, a Federation starbase, a merchant ship, the USS Excelsior, the USS Grissom, and the USS Reliant—that would be reused in at least one Star Trek television spin-off. These models were created by Industrial Light & Magic (ILM), which would continue to generate models and assist with special effects for subsequent films and spin-offs. Producers still used some cost-saving measures when depicting spacecraft, such as reusing footage from previous films. William Shatner's and Leonard Nimoy's demands for "sky-high salaries" for Star Trek IV: The Voyage Home (1986) caused the studio to plan for a new television series. The seven-year production of Star Trek: The Next Generation overlapped with those of Star Trek V: The Final Frontier (1989) and Star Trek VI: The Undiscovered Country (1991)—and while those two films made heavy use of Next Generation sets, few spacecraft model assets were shared between the television and film projects.

Return to television (1987–1994)

Among the first to join the design team of Star Trek: The Next Generation (1987–1994) were Probert, Rick Sternbach, and Michael Okuda. The three had not only worked on the Star Trek films but also had experience working in science and aerospace. Roddenberry envisioned the new series occurring in an era when people were preoccupied with improving the quality of life, and he emphasized this point in calling for a larger, brighter, and less-sterile USS Enterprise than the ship in the original Star Trek. Probert's design of the new Enterprise was based on a "what if?" painting he created after designing the refit Enterprise for The Motion Picture. It suggested a merging of technology and design into a sleeker ship, yet retained the overall shape of a saucer section, engineering hull, and warp engine nacelles from the original television show.
The final Enterprise-D design was revealed to the public in a July 1987 column in Starlog. After rejecting the idea of using CGI for the special effects in favor of shooting miniatures, the producers hired ILM—which had worked extensively on the Star Trek films—to build a pair of Enterprise models. Six modelmakers, led by Star Trek film veteran Greg Jein, built the models for $75,000. Another model was created midway through the third season. ILM also created the distinct "rubberband" effect of the Enterprise going to warp speed—an effect initially created for The Motion Picture.

The new series' creators were concerned that budget constraints could be even more of a problem for The Next Generation than they had been for the original Star Trek. To help avoid them, the producers reused and recycled sets, models, props, and footage created for the movie franchise; the development of Star Trek: Deep Space Nine also saw resources being shared across the two television series. ILM created a catalog of effects shots, which it thought would help the show save money. In practice, however, the catalog was insufficient to meet the show's needs, and using catalog footage as an element in a shot placed constraints on the movement of shooting models added to the shot. By the end of the first season, the producers had moved away from the catalog. Robert Legato, who supervised the show's in-house visual effects, was eventually able to enhance the appearance of shooting models by using a moving camera for effects shots, allowing objects in a shot to move in relation to each other. When The Next Generation depicts combat between spacecraft, it is usually single ship-on-ship; however, there are exceptions, such as a "Star Wars-like" battle in "Preemptive Strike" (1994) between numerous Maquis fighters and a Cardassian ship.

According to Sternbach, there usually was not enough time to design and build a new ship each week; nevertheless, producers created numerous new spacecraft for The Next Generation. Dan Curry said that to conserve the budget for ships to be seen in close-up, small "worker bee" vessels not requiring significant detail were made out of cheap, everyday objects. Additionally, the Next Generation team used models from the first three Star Trek films; the Excelsior, Grissom, and Reliant models were redressed to become various Excelsior-, Oberth-, and Miranda-class starships, respectively. The visual effects shot of the USS Pegasus, for example, used a redress of the Oberth-class model, which had been designed by David Carson and built at Industrial Light & Magic for the 1984 theatrical film Star Trek III: The Search for Spock, where it depicted the USS Grissom; the filming model was auctioned off by Christie's in 2006.

As with the original Star Trek, budget concerns delayed the construction of a full-scale shuttlecraft set until a script made the shuttle an important part of the story. The first Next Generation story to use a shuttle is "Coming of Age" (1988); for this story, Probert designed a shooting model and set designers built one-quarter of the interior space—additional sections were built as the budget allowed. Because the angular interior did not match Probert's curved vessel, a more angular shuttlepod was introduced in "Time Squared" (1989). A full-scale shuttle that matched the angular interiors was introduced in "Darmok" (1991).
Writers had hoped to depict the designed-but-not-built captain's yacht in "Samaritan Snare" (1989), but the budget instead led to the use of a shuttlecraft (the captain's yacht would eventually appear in Star Trek: Insurrection). Several shuttlecraft names are homages to figures from science, such as Marie Curie, Farouk El-Baz, Ferdinand Magellan, and Ellison Onizuka.

A second spin-off, and introducing digital models (1993–1999)

Star Trek: Deep Space Nine (1993–1999) began production as The Next Generation was ending. The eponymous Deep Space Nine space station took Sternbach and Herman Zimmerman several months to design. The show's producers insisted that it look "weird" and distinctly non-Starfleet. Every episode of Deep Space Nine includes shots of the shooting model. Sternbach and Jim Martin designed the show's runabout vessel, conceived as a way to allow the station's crew to continue with Star Trek's main themes of exploration in a show set on an immobile space station. Seven weeks went into the creation of the ship's cockpit—however, when an episode of The Next Generation needed to depict the runabout's living quarters, designer Richard James and set decorator Jim Mees had only nine days to both design and build the set. Martin also designed the USS Defiant under the direction of Gary Hutzel and Zimmerman. The Defiant was introduced in the third season to give the show's characters greater range and capabilities when leaving the station.

Starting with the show's third season, spacecraft exteriors began to be computer-generated. The studio VisionArt created computer models for several Deep Space Nine ships, including the Defiant, the runabouts, and Jem'Hadar vessels. VisionArt also created a CGI model of the Deep Space Nine station, which was used for the final shot of the series finale. Digital Muse and Foundation Imaging also contributed toward Deep Space Nine's special effects and computer modeling. Although the production designers gave the new spin-off a distinct look, Deep Space Nine used numerous ship models created for The Next Generation and, later, Star Trek: First Contact (1996).

Balancing digital and physical models in films (1994–2002)

Even as The Next Generation was ending, the actors and many of the production crew were preparing for their first film, Star Trek Generations (1994). This film saw the widening adoption of—but not sole reliance on—computer-generated vehicle models in the film franchise. The USS Enterprise-B in Generations is a reuse of the Excelsior model from Star Trek III, and its surrounding spacedock a reconstruction—with some flattening alterations—of the frame created for The Motion Picture. The Enterprise-D was filmed with one of the original models created by ILM, although it was stripped down, rewired, and resurfaced to depict the level of detail needed for film. The antagonists' Klingon bird-of-prey had previously appeared in earlier Star Trek productions, as had the rescue shuttles and orbiting rescue ships at the film's end. Producers created new models of a solar observatory and of the Enterprise's saucer section. Scenes involving the Enterprise-B and the Lakul in the Nexus energy ribbon were all computer-generated—in fact, no shooting model was ever made of the ill-fated El-Aurian refugee ship. Shots of the Enterprise-D going to warp were also computer-generated. The trend toward using digital models increased with subsequent films.
Star Trek: First Contact (1996) introduces the Sovereign-class Enterprise-E, conceived by production designer Herman Zimmerman and illustrator John Eaves as a larger, sleeker, faster-looking ship. Based on blueprints created by Sternbach, ILM's John Goodson created a shooting model. Goodson also created a model of the Phoenix, and a physical Borg cube model was needed for close-up shots. First Contact was the last Star Trek film to make heavy use of physical models, and many ships in the film are depicted by computer models. In addition to the physical model, the Enterprise was also built as a computer model. John Knoll worked with visual effects art director Alex Jaeger to design and create a variety of new ships to populate the opening battle against the Borg. Knoll and Jaeger decided the new ships had to be consistent with Star Trek precedent, such as a saucer section and pair of warp nacelles, but also could not look so similar as to be confused with the new Enterprise. With these requirements in mind, Jaeger reduced 16 initial designs down to four, and created computer-generated models of the Akira-, Norway-, Saber-, and Steamrunner-class ships.

ILM was not available to support the next two films, Star Trek: Insurrection (1998) and Star Trek Nemesis (2002). Santa Barbara Studios created CG models of the Enterprise and other new ships for Insurrection, while Digital Domain worked on Nemesis. John Eaves designed new ships for Nemesis, with Doug Drexler creating the computer-generated models. The antagonist's Scimitar was initially conceived as a massive upgrade of the Romulan warbird designed for The Next Generation. In designing the ship, Eaves revisited the Klingon bird-of-prey concept created for Star Trek III, retaining the "hawklike head". For the smaller Scorpion fighter, Eaves instead took inspiration from an F-18 fighter. Although the film largely used computer-generated models, Digital Domain used physical models to depict the collision between the battle-damaged Enterprise and Scimitar; Digital Domain's Mark Forker said building battle-damaged models was at least twice as hard as creating models of pristine starships.

Continuation on television (1995–2005)

By the time production began on Star Trek: Voyager (1995–2001), advances in computing allowed designers to create rough digital three-dimensional models of starships. Until that point, designers could submit only sketches to executive producer Rick Berman and other staffers, but "sketches can be deceiving"; the use of 3D modeling removed a degree of guesswork from the process. Sternbach said the most important change in the process of creating spacecraft for the franchise was the increasing availability of CGI software and access to better-performing computers. Digital Muse, Foundation Imaging, and Eden FX contributed toward Voyager's computer modeling; the latter two also worked on Enterprise. Sternbach and Richard James, who designed the Borg cube for The Next Generation, collaborated over several months to design the Intrepid-class USS Voyager. As with Star Trek and The Next Generation, the show's budget did not immediately allow for the creation of a new shuttlecraft; initially, the show used one of The Next Generation's shuttle miniatures and interiors, with minor alterations to make it look Voyager-specific. Many Voyager plot lines called for a shuttlecraft to be destroyed; the large number of shuttlecraft the stranded starship seemed to have in reserve amused some people and bothered others.
Eventually, Sternbach and James collaborated to create the Delta Flyer, a more resilient shuttlecraft. Doug Drexler took four months to design the eponymous Enterprise for the fifth spin-off, Star Trek: Enterprise (2001–2005). A predecessor to Jefferies' original Enterprise, the ship took some of its elements from the Akira class in First Contact, and its overall compactness was inspired by Deep Space Nine's Defiant. Eden FX created computer-generated models for all four seasons of Enterprise.

Franchise reboot (2009)

Producers of the 2009 Star Trek film balanced paying homage to established Star Trek lore with reinvigorating the franchise. The redesigned Enterprise has a "hot-rod" look while retaining a ship's traditional shape. ILM was given "tremendous" leeway in creating the ship. Concept artist Ryan Church's initial designs were refined and developed into photo-realistic models by Alex Jaeger's team at ILM. ILM's Roger Guyett recalled the original Enterprise being "very static", and added moving components to the film's model. ILM retained subtle geometric forms and patterns to allude back to the original Enterprise. The computer model's digital paint recreates the "interference paint"—which contains small particles of mica to alter the apparent color—used on the model in the first three films.

Film and television re-releases

The 2001 Director's Edition of The Motion Picture includes 90 new and redesigned computer-generated shots produced by Foundation Imaging, many of which include a computer-generated model of the Enterprise. The new shots depict more dynamic lighting and a clearer sense of scale than the original release. In September 2006, CBS began airing remastered episodes of Star Trek. The remastered series, overseen by Michael Okuda, includes updated special effects shots. For example, the alternate universe Enterprise in "Mirror, Mirror" was originally depicted by the "regular" Enterprise filming model; however, in the remastered version, the alternate Enterprise has different markings and hull features. In contrast, Okuda said CBS' release of The Next Generation on Blu-ray would see "sharper [and] clearer" effects shots, but no significant changes. Part of the disparity between the treatment of effects shots for the remastered Star Trek and the Blu-ray release of The Next Generation is due to film archiving. The studio did not store film from each individual effect element in Star Trek; it stored only the final, composite effect. However, the composite prints did not scan well in high definition, leading to the creation of new effects elements. In contrast, Paramount Studios maintained a thorough archive of Next Generation film elements, allowing most of those to transition to Blu-ray with minimal, if any, alterations. Nearly all of the spacecraft elements in the Next Generation Blu-ray releases are from the original film, with few corrections to production or effects errors.

Books and games

Several Star Trek board, roleplaying, and video games take place on and allow players to control various spacecraft. Star Trek Online (2010) developers invited fans to design the Enterprise-F, successor to the USS Enterprise-E of the Next Generation-era films. Adam Ihle submitted the winning design, an Odyssey-class starship that appears in the game. Star Trek Online executive producer Daniel Stahl said Ihle's design inspired the creative team, presenting a familiar silhouette yet evolving the franchise's ship design.
Similarly, Simon & Schuster held a contest to design the USS Titan, a science vessel commanded by William Riker that serves as the setting for a series of novels. Sean Tourangeau's design won the contest, which was scored on originality, execution, consistency with the publisher's concept notes, and consistency with Star Trek's established Starfleet style. Several other Star Trek novel lines have been created that take place on ships and stations other than those depicted in the franchise's film and television fiction.

Impact and critical reaction

The basic design of the original Enterprise "formed the basis for one of sci-fi's most iconic images". In 1992, a National Air and Space Museum curator said "there is no other fantasy more pervasive in the conceptualization of space flight than Star Trek". The Next Generation was nominated for an Emmy for its depiction of the Borg cube in "Q Who". Star Trek: The Experience included a shuttlecraft ride simulator. Spacecraft filming models made up nine of the ten highest-bid items in Christie's Star Trek: The Collection auction.

Merchandising

AMT's model of the original Enterprise's shuttlecraft sold over one million units. In 1989, Ertl released a model kit that included The Next Generation's Ferengi marauder, Klingon bird-of-prey, and Romulan warbird. AMT released a Vor'cha-class model in 1991. Galoob created Micro Machines of various Star Trek starships from 1993 to 1997, and Hallmark created Christmas ornaments of the original series shuttlecraft, the Romulan warbird, and the Klingon bird-of-prey. In 2011, Simon & Schuster published the Starship Spotter, a collection of images of various spacecraft in Star Trek. Since 2002, Star Trek illustrator and designer Doug Drexler has led development of an annual Ship of the Line calendar featuring images and information about various spacecraft from the Star Trek franchise.

See also
Starship Enterprise

References

Notes

Bibliography

External links
Individual lot listings for Christie's Star Trek: The Collection auction, with photos of numerous filming models
Hop aboard the spaceships seen in Star Trek

Star Trek spacecraft
26301345
https://en.wikipedia.org/wiki/Dvorak%20keyboard%20layout
Dvorak keyboard layout
Dvorak is a keyboard layout for English patented in 1936 by August Dvorak and his brother-in-law, William Dealey, as a faster and more ergonomic alternative to the QWERTY layout (the de facto standard keyboard layout). Dvorak proponents claim that it requires less finger motion and as a result reduces errors, increases typing speed, reduces repetitive strain injuries, or is simply more comfortable than QWERTY. Dvorak has not replaced QWERTY as the most common keyboard layout because QWERTY was introduced 60 years earlier and because Dvorak's advantages over QWERTY were not large enough. However, most major modern operating systems (such as Windows, macOS, Linux, Android, Chrome OS, and BSD) allow a user to switch to the Dvorak layout. iOS does not provide a system-wide, touchscreen Dvorak keyboard, although third-party software is capable of adding the layout to iOS, and the layout can be chosen for use with any hardware keyboard, regardless of the characters printed on the keyboard. Several modifications were designed by the team directed by Dvorak or by ANSI. These variations have been collectively or individually termed the Dvorak Simplified Keyboard, the American Simplified Keyboard or simply the Simplified Keyboard, but they all have come to be known commonly as the Dvorak keyboard or Dvorak layout.

Overview

Dvorak was designed with the belief that it would significantly increase typing speeds with respect to the QWERTY layout by alleviating some of its perceived shortcomings, such as:
Many common letter combinations require awkward finger motions.
Some common letter combinations are typed with the same finger (e.g., "ed" and "de").
Many common letter combinations require a finger to jump over the home row.
Many common letter combinations are typed with one hand while the other sits idle (e.g., was, were).
Most typing is done with the left hand, which for most people is not the dominant hand.
About 16% of typing is done on the lower row, 52% on the top row and only 32% on the home row.

August Dvorak studied letter frequencies and the physiology of the hand and created a new layout to alleviate the above problems, based on the following principles:
Letters should be typed by alternating between hands (which makes typing more rhythmic, increases speed, reduces error, and reduces fatigue). On a Dvorak keyboard, vowels and the most used symbol characters are on the left (with the vowels on the home row), while the most used consonants are on the right.
For maximum speed and efficiency, the most common letters and bigrams should be typed on the home row, where the fingers rest, and under the strongest fingers (thus, about 70% of letter keyboard strokes on Dvorak are done on the home row and only 22% and 8% on the top and bottom rows respectively).
The least common letters should be on the bottom row, which is the hardest row to reach.
The right hand should do more of the typing because most people are right-handed.
Digraphs should not be typed with adjacent fingers.
Stroking should generally move from the edges of the board to the middle. An observation of this principle is that, for many people, when tapping fingers on a table, it is easier going from little finger to index than vice versa. This motion on a keyboard is called inboard stroke flow.

The Dvorak layout is intended for the English language. For other European languages, letter frequencies, letter sequences, and bigrams differ from those of English. Also, many languages have letters that do not occur in English.
For non-English use, these differences lessen the alleged advantages of the original Dvorak keyboard. However, the Dvorak principles have been applied to the design of keyboards for other languages, though the primary keyboards used by most countries are based on the QWERTY design.

The layout was completed in 1932 and the patent was granted in 1936. The American National Standards Institute (ANSI) designated the Dvorak keyboard as an alternative standard keyboard layout in 1982 (INCITS 207-1991 R2007; previously X4.22-1983, X3.207:1991), "Alternate Keyboard Arrangement for Alphanumeric Machines". The original ANSI Dvorak layout was available as a factory-supplied option on the original IBM Selectric typewriter.

History

August Dvorak was an educational psychologist and professor of education at the University of Washington in Seattle. Touch typing had come into wide use by that time, and Dvorak became interested in keyboard layout while serving as an advisor to Gertrude Ford, who was writing her master's thesis on typing errors. He quickly concluded that the QWERTY layout needed to be replaced, as QWERTY had been laid out not purely for ease and speed of typing but largely to keep sequential keystrokes physically distant so that the arms of mechanical typewriters did not jam. Dvorak was joined by his brother-in-law William Dealey, a professor of education at the then North Texas State Teacher's College in Denton, Texas. Dvorak and Dealey's objective was to scientifically design a keyboard to decrease typing errors, speed up typing, and lessen typist fatigue. They engaged in extensive research while designing their keyboard layout. In 1914 and 1915, Dealey attended seminars on the science of motion and later reviewed slow-motion films of typists with Dvorak. Dvorak and Dealey meticulously studied the English language, researching the most used letters and letter combinations. They also studied the physiology of the hand. The result in 1932 was the Dvorak Simplified Keyboard.

In 1893, George Blickensderfer had developed a keyboard layout for the Blickensderfer typewriter model 5 that used the letters DHIATENSOR for the home row. Blickensderfer had determined that 85% of English words contained these letters. The Dvorak keyboard uses the same letters in its home row, apart from replacing R with U, and even keeps the consonants in the same order, but moves the vowels to the left: AOEUIDHTNS.

In 1933, Dvorak started entering typists trained on his keyboard into the International Commercial Schools Contest, a typing contest sponsored by typewriter manufacturers consisting of a series of professional and amateur contests. The professional contests had typists sponsored by typewriter companies to advertise their machines. In the 1930s, the Tacoma, Washington, school district ran an experimental typing program designed by Dvorak to determine whether to hold Dvorak layout classes. The experiment put 2,700 high-school students through Dvorak typing classes and found that students learned Dvorak in one-third the time it took to learn QWERTY. When a new school board was elected, however, it chose to terminate the Dvorak classes. During World War II, while in the Navy, Dvorak conducted experiments which he claimed showed that typists could be retrained to Dvorak in a mere 10 days, though he discarded at least two earlier studies whose results are unknown. With such great apparent gains, interest in the Dvorak keyboard layout had increased by the early 1950s.
Numerous businesses and government organizations began to consider retraining their typists on Dvorak keyboards. In this environment, the General Services Administration commissioned Earle Strong to determine whether the switch from QWERTY to Dvorak should be made. Strong retrained a selection of typists from QWERTY to Dvorak and, once the Dvorak group had regained their previous typing speed (which took 100 hours of training, more than was claimed in Dvorak's Navy test), took a second group of QWERTY typists chosen for equal ability to the Dvorak group and retrained them in QWERTY in order to improve their speed at the same time the Dvorak typists were training. The carefully controlled study failed to show any benefit to the Dvorak keyboard layout in typing or training speed. Strong recommended speed training with QWERTY rather than switching keyboards, and attributed the previous apparent benefits of Dvorak to improper experimental design and outright bias on the part of Dvorak, who had designed and directed the previous studies. However, Strong had a personal grudge against Dvorak and had made public statements before his study opposing new keyboard designs. After this study, interest in the Dvorak keyboard waned. Later experiments have shown that many keyboard designs, including some alphabetical ones, allow very similar typing speeds to QWERTY and Dvorak when typists have been trained for them, suggesting that Dvorak's careful design principles may have had little effect because keyboard layout is only a small part of the complicated physical activity of typing. The work of Dvorak paved the way for other optimized keyboard layouts for English, such as Colemak, but also for other languages, such as the German Neo and the French BÉPO.

Original layout

Over the decades, symbol keys were shifted around the keyboard, resulting in variations of the Dvorak design. In 1982, the American National Standards Institute (ANSI) implemented a standard for the Dvorak layout known as ANSI X4.22-1983. This standard gave the Dvorak layout official recognition as an alternative to the QWERTY keyboard. The layout standardized by ANSI differs from the original or "classic" layout devised and promulgated by Dvorak. Indeed, the layout promulgated publicly by Dvorak differed slightly from the layout for which Dvorak and Dealey applied for a patent in 1932, most notably in the placement of Z. Today's keyboards have more keys than the original typewriter did, and other significant differences existed:
The numeric keys of the classic Dvorak layout are ordered: 7 5 3 1 9 0 2 4 6 8 (used today by the Programmer Dvorak layout).
In the classic Dvorak layout, the question mark key [?] is in the leftmost position of the upper row, while the slash key [/] is in the rightmost position of the upper row.
For the classic Dvorak layout, the following symbols share keys (the second symbol being printed when the [shift] key is pressed): colon [:] and question mark [?]; ampersand [&] and slash [/].

Modern U.S. Dvorak layouts almost always place semicolon and colon together on a single key, and slash and question mark together on a single key. Thus, if the keycaps of a modern keyboard are rearranged so that the unshifted symbol characters match the classic Dvorak layout, the result is the ANSI Dvorak layout.

Availability in operating systems

Dvorak is included with all major operating systems (such as Windows, macOS, Linux and BSD).
Since the introduction of iOS 8 in 2014, Apple iPhone and iPad users have been able to install third-party keyboards on their touchscreen devices, which allow for alternative keyboard layouts such as Dvorak on a system-wide basis.

Early PCs

Although some word processors could simulate alternative keyboard layouts by software, this was application-specific; if more than one program was commonly used (e.g., a word processor and a spreadsheet), the user could be forced to switch layouts depending on the application. Occasionally, stickers were provided to place over the keys for these layouts. However, IBM-compatible PCs used an active, "smart" keyboard. Striking a key generated a key "code", which was sent to the computer. Thus, changing to an alternative keyboard layout was accomplished most easily by simply buying a keyboard with the new layout. Because the key codes were generated by the keyboard itself, all software would respond accordingly. In the mid to late 1980s, a small industry for replacement PC keyboards developed; although most of these were concerned with keyboard "feel" and/or programmable macros, there were several with alternative layouts, such as Dvorak.

Amiga

Amiga operating systems from the 1986 version 1.2 onward allow the user to modify the keyboard layout by using the setmap command line utility with "usa2" as an argument, or later in 3.x systems by opening the keyboard input preference widget and selecting "Dvorak". Amiga systems versions 1.2 and 1.3 came with the Dvorak keymap on the Workbench disk. Versions 2.x came with the keymaps available on the "Extras" disk. In 3.0 and 3.1 systems, the keymaps were on the "Storage" disk. By copying the respective keymap to the Workbench disk or installing the system to a hard drive, Dvorak was usable for Workbench application programs.

Microsoft Windows

Versions of Microsoft Windows including Windows 95, Windows NT 3.51 and later have shipped with U.S. Dvorak layout capability. Free updates to use the layout on earlier Windows versions are available for download from Microsoft. Earlier versions, such as DOS 6.2/Windows 3.1, included four keyboard layouts: QWERTY, two-handed Dvorak, right-hand Dvorak, and left-hand Dvorak. In May 2004, Microsoft published an improved version of its Keyboard Layout Creator (MSKLC version 1.3 – current version is 1.4) that allows anyone to easily create any keyboard layout desired, thus allowing the creation and installation of any international Dvorak keyboard layout such as Dvorak Type II (for German), Svorak (for Swedish), etc. Another advantage of the Microsoft Keyboard Layout Creator with respect to third-party programs for installing an international Dvorak layout is that it allows creation of a keyboard layout that automatically switches to standard (QWERTY) after pressing the two hotkeys (SHIFT and CTRL).

Unix-based systems

Many operating systems based on UNIX, including OpenBSD, FreeBSD, NetBSD, OpenSolaris, Plan 9, and most Linux distributions, can be configured to use the U.S. Dvorak layout and a handful of variants. Furthermore, all current Unix-like systems with X.Org and appropriate keymaps installed (and virtually all systems meant for desktop use include them) are able to use any QWERTY-labeled keyboard as a Dvorak one without any problems or additional configuration. This eliminates the burden of producing additional keymaps for every variant of QWERTY provided. Runtime layout switching is also possible.
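As a concrete illustration of the runtime switching mentioned above, the following is a minimal sketch assuming an X.Org desktop session with the standard setxkbmap utility available on the PATH; it simply invokes setxkbmap from Python and is not part of any particular distribution's tooling. The same switch can be made directly from a shell with setxkbmap us -variant dvorak.

    import subprocess

    def set_layout(layout, variant=None):
        """Switch the active X keyboard layout at runtime using setxkbmap."""
        cmd = ["setxkbmap", layout]
        if variant:
            cmd += ["-variant", variant]
        subprocess.run(cmd, check=True)

    set_layout("us", "dvorak")   # any QWERTY-labeled keyboard now produces Dvorak
    set_layout("us")             # revert to the standard U.S. QWERTY layout

    # Print the currently active layout and variant.
    print(subprocess.run(["setxkbmap", "-query"],
                         capture_output=True, text=True).stdout)

Because the remapping happens in the X server rather than in the keyboard hardware, no additional keymaps need to be produced for each physical QWERTY keyboard, which is the point made above.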
Chrome OS

Chrome OS and Chromium OS offer Dvorak, and there are three different ways to switch the keyboard to the Dvorak layout. Chrome OS includes the US Dvorak and UK Dvorak layouts.

Apple computers

Apple has had Dvorak advocates since the company's early (pre-IPO) days. Several engineers devised hardware and software to remap the keyboard, which were used inside the company and even sold commercially.

Apple II

The Apple IIe had a keyboard ROM that translated keystrokes into characters. The ROM contained both QWERTY and Dvorak layouts, but the QWERTY layout was enabled by default. A modification could be made that was reversible and did no damage: by flipping a switch, the user could switch from one layout to the other. This modification was entirely unofficial but was inadvertently demonstrated at the 1984 Comdex show, in Las Vegas, by an Apple employee whose mission was to demonstrate Apple Logo II. The employee had become accustomed to the Dvorak layout and brought the necessary parts to the show, installed them in a demo machine, then did his Logo demo. Viewers, curious that he always reached behind the machine before and after allowing other people to type, asked him about the modification. He spent as much time explaining the Dvorak keyboard as explaining Logo. Apple brought new interest to the Dvorak layout with the Apple IIc, which had a mechanical switch above the keyboard whereby the user could switch back and forth between QWERTY and Dvorak: this was the most official version of the IIe Dvorak mod. The IIc Dvorak layout was even mentioned in 1984 advertisements, which stated that the world's fastest typist, Barbara Blackburn, had set a record on an Apple IIc with the Dvorak layout. Dvorak was also selectable using the built-in control panel applet on the Apple IIGS.

Apple III

The Apple III used a keyboard-layout file loaded from a floppy disk: the standard system-software package included QWERTY and Dvorak layout files. Changing layouts required restarting the machine.

Apple Lisa

The Apple Lisa did not offer the Dvorak keyboard mapping, though it was purportedly available through undocumented interfaces.

Mac OS

In the early days, Macintosh users could only use the Dvorak layout by editing the "System" file using Apple's "RESource EDITor", ResEdit, which allowed users to create and edit keyboard layouts, icons, and other interface components. By 1994, a package named 'Electric Dvorak' by John Rethorst provided an easily user-installable "implementation [that] was particularly good on pre-system 7 Macs" as freeware, and was especially useful for Mac Plus and Mac SE machines running MacOS 6 and 7. Another third-party developer offered a utility program called MacKeymeleon, which put a menu on the menu bar that allowed on-the-fly switching of keyboard layouts. Eventually, Apple Macintosh engineers built the functionality of this utility into the standard system software, along with a few layouts: QWERTY, Dvorak, French (AZERTY), and other foreign-language layouts.

Since about 1998, beginning with Mac OS 8.6, Apple has included the Dvorak layout. It can be activated by opening the Keyboard Control Panel and selecting "Dvorak"; the setting is applied once the Control Panel is closed. Apple also includes a Dvorak variant it calls "Dvorak – Qwerty ⌘". With this layout, the keyboard temporarily becomes QWERTY when the Command (⌘/Apple) key is held down.
By keeping familiar keyboard shortcuts like "close" or "copy" on the same keys as ordinary QWERTY, this lets some people use their well-practiced muscle memory and may make the transition easier. Mac OS and subsequently Mac OS X allow additional "on-the-fly" switching between layouts: a menu-bar icon (by default, a national flag that matches the current language; a 'DV' represents Dvorak and a 'DQ' represents Dvorak – Qwerty ⌘) brings up a drop-down menu, allowing the user to choose the desired layout. Subsequent keystrokes will reflect the choice, which can be reversed the same way. Mac OS X 10.5 "Leopard" and later offer a keyboard identifier program that asks users to press a few keys on their keyboards. Dvorak, QWERTY and many national variations of those designs are available. If multiple keyboards are connected to the same Mac computer, they can be configured to different layouts and used simultaneously. However, should the computer shut down (because of a drained battery, for example), it will revert to QWERTY on reboot, regardless of which layout the administrator was using.

Mobile phones and PDAs

Most mobile phones have software implementations of keyboards on a touch screen. Sometimes the keyboard layout can be changed by means of a freeware third-party utility, such as Hacker's Keyboard for Android, AE Keyboard Mapper for Windows Mobile, or KeybLayout for Symbian OS. The RIM BlackBerry lines offer only QWERTY and its localized variants AZERTY and QWERTZ. Apple's iOS 8.0 and later has the option to install onscreen keyboards from the App Store, which include several free and paid Dvorak layouts. iOS 4.0 and later supports external Dvorak keyboards. Google's Android OS touchscreen keyboard can use Dvorak and other nonstandard layouts natively as of version 4.1.

Comparison of the QWERTY and Dvorak layouts

Keyboard strokes

Touch typing requires typists to rest their fingers on the home row (the QWERTY row starting with "ASDF"). The more strokes there are on the home row, the less movement the fingers must do, thus allowing a typist to type faster, more accurately, and with less strain to the hand and fingers. The majority of the Dvorak layout's key strokes (70%) are done on the home row, claimed to be the easiest row to type because the fingers rest there. Additionally, the Dvorak layout requires the fewest strokes on the bottom row (the most difficult row to type). By contrast, QWERTY requires typists to move their fingers to the top row for a majority of strokes and has only 32% of the strokes done on the home row. Because the Dvorak layout concentrates the vast majority of key strokes on the home row, the Dvorak layout uses about 63% of the finger motion required by QWERTY, which is claimed to make the keyboard more ergonomic. Because the Dvorak layout requires less finger motion from the typist compared to QWERTY, some users with repetitive strain injuries have reported that switching from QWERTY to Dvorak alleviated or even eliminated their injuries; however, no scientific study has been conducted to verify this. The typing loads between hands differ for each of the keyboard layouts. On QWERTY keyboards, 56% of the typing strokes are done by the left hand. As the right hand is dominant for the majority of people, the Dvorak keyboard puts the more often used keys on the right-hand side, thereby having 56% of the typing strokes done by the right hand.

Awkward strokes

Awkward strokes are undesirable because they slow down typing, increase typing errors, and increase finger strain.
The term hurdling refers to an awkward stroke requiring a single finger to jump directly from one row, over the home row, to another row (e.g., typing "minimum" [which often comes out as "minimun" or "mimimum"] on the QWERTY keyboard). In the English language, there are about 1,200 words that require a hurdle on the QWERTY layout. In contrast, there are only a few words requiring a hurdle on the Dvorak layout.

Hand alternation and finger repetition

The QWERTY layout has more than 3,000 words that are typed on the left hand alone and about 300 words that are typed on the right hand alone (the aforementioned word "minimum" is a right-hand-only word). In contrast, with the Dvorak layout, only a few words are typed using only the left hand and even fewer use the right hand alone. This is because most syllables require at least one vowel, and, in the Dvorak layout, all the vowels (and "y") fall on the left side of the keyboard. However, this benefit dwindles for longer words, because one English syllable can contain numerous consonants (as in "schmaltz" or "strengths").

Standard keyboard

QWERTY enjoys advantages over Dvorak because it is the de facto standard keyboard:
Keyboard shortcuts in most major operating systems, including Windows, such as Ctrl-C (Copy) and Ctrl-V (Paste), are designed around QWERTY key positions and can be awkward for some Dvorak users. However, Apple computers have a "Dvorak – Qwerty ⌘" setting, which temporarily changes the keyboard mapping to QWERTY when the command (⌘) key is held, and Windows users can replicate this setting using AutoHotkey scripts.
Some public computers (such as in libraries) will not allow users to change the keyboard to the Dvorak layout.
Some standardized exams will not allow test takers to use the Dvorak layout (e.g., the Graduate Record Examination).
Support for Dvorak in games varies, especially in those that use "WASD"—an ergonomic inverted-T shape on QWERTY but spread out across the keyboard in Dvorak—for in-game movement. Some games will automatically detect that the keyboard is in Dvorak and adjust the keys to the Dvorak equivalent, ",AOE", while others allow the same effect with some manual tweaking; games with hard-coded keybinds that do not allow changing the keys away from WASD become practically impossible to play under Dvorak.
People who can touch type with a QWERTY keyboard will be less productive with alternative layouts until they retrain themselves, even if these are closer to the optimum.
Not all people use keyboard fingerings as specified in touch-typing manuals, due to either preference or anatomical difference. This can change the relative efficiency on alternative layouts.
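The row-usage and hand-balance percentages cited in the comparison above can be roughly estimated from letter frequencies alone. The following minimal sketch is an illustration only and is not drawn from any of the studies discussed in this article: it counts the letters in an arbitrary text file (here assumed to be named sample.txt) and reports the share of letter strokes that fall on each layout's home row, bottom row, and left hand. It ignores digits, punctuation, the Shift keys, and actual fingering, and assigns only the 26 basic Latin letters, so it will only approximate the figures quoted above.

    from collections import Counter

    # Letter rows and hand assignments for the two layouts (letters only).
    LAYOUTS = {
        "QWERTY": {
            "home":   set("asdfghjkl"),
            "bottom": set("zxcvbnm"),
            "left":   set("qwertasdfgzxcvb"),
        },
        "Dvorak": {
            "home":   set("aoeuidhtns"),
            "bottom": set("qjkxbmwvz"),
            "left":   set("pyaoeuiqjkx"),
        },
    }

    def row_and_hand_stats(text):
        counts = Counter(c for c in text.lower() if c.isalpha())
        total = sum(counts.values())
        for name, keys in LAYOUTS.items():
            shares = {
                group: sum(n for c, n in counts.items() if c in members) / total
                for group, members in keys.items()
            }
            print(f"{name}: {shares['home']:.0%} home row, "
                  f"{shares['bottom']:.0%} bottom row, "
                  f"{shares['left']:.0%} left hand")

    with open("sample.txt", encoding="utf-8") as f:
        row_and_hand_stats(f.read())

On a large sample of ordinary English prose, such a count should land near the approximately 70% (Dvorak) and 32% (QWERTY) home-row figures quoted above, though the exact numbers depend on the text chosen.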
Variants

One-handed versions

In the 1960s, Dvorak designed left- and right-handed Dvorak layouts for touch-typing with only one hand. He tried to minimize the need to move the hand from side to side (lateral travel), as well as to minimize finger movement. Each layout has the hand resting near the center of the keyboard, rather than on one side. Because the layouts require less hand movement than layouts designed for two hands, they can be more accessible to single-handed users. The layouts are also used by people with full use of two hands who prefer having one hand free while they type with the other. The left-handed Dvorak and right-handed Dvorak keyboard layouts are mostly each other's mirror image, with the exception of some punctuation keys, some of the less-used letters, and the 'wide keys' (Enter, Shift, etc.). Dvorak arranged the parentheses as ")(" on his left-handed keyboard, but some keyboards place them in the typical "()" reading order. Dvorak's original ")(" placement is the more widely distributed layout, and the version that ships with Windows.

Programmer Dvorak

Programmer Dvorak was developed by Roland Kaufmann and was designed based on code in C, Java, Pascal, Lisp, HTML, CSS and XML. While the letters are in the same places as in the regular Dvorak layout, the numbers and most symbols have been moved. The top row contains brackets, curly brackets, and parentheses positioned in a way which makes opening and closing these symbols more intuitive. Some other common programming symbols are also placed in the top row for easy access: (&, %, =, *, +, !, #). The numbers are in the top row as well, but the Shift key must be used to type them, as on some typewriters. The numbers are arranged with the odds under the left hand and the evens under the right hand, as on Dvorak's original layout. Another notable change is the swap of the semicolon/colon key with the quotation/apostrophe key. Many programming languages require each line to end with a semicolon; therefore, it makes sense to put the semicolon in a spot which is easy to reach. While this layout may be a slight improvement over standard Dvorak, it adds another level of difficulty when it comes to portability. While drivers are available for macOS and Windows, installing them is more of a challenge than just selecting a new layout in settings. Linux users are in a better position here, as Programmer Dvorak comes pre-installed.

Research on efficiency

The Dvorak layout is designed to improve touch-typing, in which the user rests their fingers on the home row. It would have less effect on other methods of typing such as hunt-and-peck. Some studies show favorable results for the Dvorak layout in terms of speed, while others do not show any advantage, with many accusations of bias or lack of scientific rigour among researchers. The first studies were performed by Dvorak and his associates; these showed favorable results and generated accusations of bias. However, research published in 2013 by economist Ricard Torres suggests that the Dvorak layout has definite advantages. In 1956, a study with a sample of 10 people in each group, conducted by Earle Strong of the U.S. General Services Administration, found Dvorak no more efficient than QWERTY and claimed it would be too costly to retrain the employees. The failure of the study to show any benefit to switching, along with its illustration of the considerable cost of switching, discouraged businesses and governments from making the switch. This study was similarly criticised as being biased in favor of the QWERTY control group. In the 1990s, economists Stan Liebowitz and Stephen E. Margolis wrote articles in the Journal of Law and Economics and Reason magazine in which they rejected Dvorak proponents' claims that the dominance of QWERTY is due to market failure brought on by QWERTY's early adoption, writing, "[T]he evidence in the standard history of Qwerty versus Dvorak is flawed and incomplete. [..] The most dramatic claims are traceable to Dvorak himself; and the best-documented experiments, as well as recent ergonomic studies, suggest little or no advantage for the Dvorak keyboard."
Resistance to adoption

Although the Dvorak design is the only other keyboard design registered with ANSI and is provided with all major operating systems, attempts to convert universally to the Dvorak design have not succeeded. The failure of Dvorak to displace QWERTY has been the subject of some studies. A discussion of the Dvorak layout is sometimes used as an exercise by management consultants to illustrate the difficulties of change. The Dvorak layout is often used in economics textbooks as a standard example of network effects, though this practice has been criticized. Most keyboards are based on QWERTY layouts, despite the availability of other keyboard layouts, including the Dvorak.

Other languages

Although the DSK is implemented in many languages other than English, there are still potential issues. Every Dvorak implementation for another language has the same difficulties as the English one for Roman characters. However, other (occidental) language orthographies can have different typing needs for optimization, many of them very different from those of English. Because Dvorak was optimized for the statistical distribution of letters in English text, other languages have different distributions of letter frequencies. Hence, non-QWERTY-derived keyboards for such languages would need a keyboard layout that might be quite different from the Dvorak layout for English.

United Kingdom layouts

Whether Dvorak or QWERTY, a United Kingdom keyboard differs from the U.S. equivalent in these ways: the " and @ are swapped; the backslash/pipe [\ |] key is in an extra position (to the right of the lower left shift key); and there is a taller return/enter key, which places the hash/tilde [# ~] key at its lower left corner. The most notable difference between the U.S. and UK Dvorak layouts is that the [2 "] key remains on the top row, whereas the U.S. [' "] key moves. This means that the slash/question mark [/ ?] key retains its classic Dvorak location, top right, albeit shifted. Interchanging the [/ ?] and [' @] keys more closely matches the U.S. layout, and the use of "@" has increased in the information technology age. These variations, plus keeping the numerals in Dvorak's idealised order, appear in the Classic Dvorak and Dvorak for the Left Hand and Right Hand varieties.

Svorak

The Svorak layout places the three extra Swedish vowels (å, ä and ö) on the leftmost three keys of the upper row, which correspond to punctuation symbols on the English Dvorak layout. This retains the original English Dvorak design goal of keeping all vowels on the left hand, including Y, which is a vowel in Swedish. The displaced punctuation symbols (period and comma) end up at the edges of the keyboard, but every other symbol is in the same place as in the standard Swedish QWERTY layout, facilitating easier re-learning. The Alt-Gr key is required to access some of the punctuation symbols. This major design goal also makes it possible to "convert" a Swedish QWERTY keyboard to Svorak simply by moving keycaps around. Unlike in Norway, there is no standard Swedish Dvorak layout, and the community is fragmented. In Svdvorak, by Gunnar Parment, the punctuation symbols remain as they are in the English version; the first extra vowel (å) is placed at the far left of the top row, while the other two (ä and ö) are placed at the far left of the bottom row.

Others

The Norwegian implementation (known as "NorskDvorak") is similar to Parment's layout, with "æ" and "ø" replacing "ä" and "ö". The Danish layout DanskDvorak is similar to the Norwegian.
An IcelandicDvorak layout exists, created by a student at Reykjavik University. It retains the same basic layout as the standard Dvorak but features special Alt-Gr functions to allow easy typing of common characters such as "þ", "æ" and "ö", and dead keys to allow the typing of characters such as "å" and "ü". A FinnishDAS keyboard layout follows many of Dvorak's design principles, but the layout is an original design based on the most common letters and letter combinations of the Finnish language. Matti Airas has also made another layout for Finnish. Finnish can also be typed reasonably well with the English Dvorak layout if the letters ä and ö are added. The Finnish ArkkuDvorak keyboard layout adds both on a single key and keeps the American placement for every other character. As with DAS, the SuoRak keyboard is designed by the same principles as the Dvorak keyboard, but with the most common letters of the Finnish language taken into account. Contrary to DAS, it keeps the vowels on the left side of the keyboard and most consonants on the right-hand side. The Turkish F keyboard layout is also an original design based on Dvorak's design principles, although it is not clear whether it was inspired by Dvorak. The Turkish F keyboard was standardized in 1955, and the design has been a requirement for imported typewriters since 1963. There are some non-standard Brazilian Dvorak keyboard designs currently in development. The simpler design (also called BRDK) is just a Dvorak layout plus some keys from the Brazilian ABNT2 keyboard layout. Another design, however, was created specifically for Brazilian Portuguese, by means of a study that optimized for typing statistics such as frequent letters, trigraphs and words. The most common German Dvorak layout is the German Type II layout. It is available for Windows, Linux, and macOS. There are also the Neo layout and the de ergo layout, both original layouts that also use many of Dvorak's design principles. Because of the similarity of German and English, even the standard Dvorak layout (with minor modifications) is an ergonomic improvement with respect to the common QWERTZ layout. One such modification puts ß at the shift+comma position and the umlaut dots as a dead key accessible via shift+period (standard German keyboards have a separate less/greater key to the right of the left shift key). For French, there is a Dvorak layout and the Bépo layout, which is founded on Dvorak's method of analysing key frequency. Although Bépo's placement of keys is optimised for French, the scheme also facilitates key combinations for typing characters of other European languages, Esperanto and various symbols. Three Spanish layouts exist. A Romanian version of the Dvorak layout was released in October 2008. It is available for both Windows and Linux. Polish proposals for a national keyboard layout similar to Dvorak were made in the 1950s, but they were not introduced because a new version of the Polish Norm, issued in 1958, adopted a modernized QWERTZ layout. See also Keyboard layout Colemak layout Kinesis contoured keyboard Maltron keyboard Path dependence TypeMatrix keyboard Velotype References External links DvZine.org – A print and webcomic zine advocating Dvorak and teaching its history. A Basic Course in Dvorak – by Dan Wood Dvorak your way – by Dan Wood and Marcus Hayward – Comparison of common optimal keyboard layouts, including Dvorak. Programmer Dvorak – a variant of the Dvorak layout for programmers by Roland Kaufmann. Keyboard layouts Ergonomics Latin-script keyboard layouts
25598595
https://en.wikipedia.org/wiki/Michael%20Hennell
Michael Hennell
Professor Michael A. Hennell (born 9 September 1940) is a British computer scientist who has made leading contributions in the field of software testing. Hennell was a Professor of Mathematical Sciences at the University of Liverpool in England. As part of his leading role in software testing, Hennell was a member of the editorial board of the journal Software Testing, Verification and Reliability (STVR), a major international journal in the field. Hennell's academic research was initially conducted in nuclear physics, and it led him to use computational science to address complex nuclear mathematics. Assessing the quality of the mathematical libraries on which this work depended led Professor Hennell into the world of software testing, specifically the use of static code analysis for quantifying the effectiveness of test data, which in turn led to the development of the Linear Code Sequence and Jump (LCSAJ) concept. In 1975 Professor Hennell founded Liverpool Data Research Associates Ltd. (LDRA) to commercialize the software test-bed designed to analyse numerical software. References 1940 births Academics of the University of Liverpool English computer scientists Software engineers Software testing people Academic journal editors Living people
3078524
https://en.wikipedia.org/wiki/Gryphon%20Software
Gryphon Software
Founded in 1991 by Gabriel Wilensky and Duane Maxwell, Gryphon Software Corporation was a leading software publisher specializing in a broad range of innovative, graphics-oriented software. The company had two product lines: one focused on graphics for video professionals, graphic designers and hobbyists; the other focused on children's software with a strong graphic orientation. The company was consistently singled out as one of the most innovative graphics software companies in the personal computer industry, and its software was used by millions of people. Gryphon's rapid success was due to the introduction of its premier software program, Morph. The first software program to affordably bring cost-prohibitive Hollywood special effects to the personal computer, Morph enabled users to smoothly transform one still image or video into another. The ability to create this level of professional-quality special effects immediately captured the public's attention. Video professionals used Morph in a variety of ads, television commercials, music videos and film productions such as Francis Ford Coppola's Bram Stoker's Dracula, Robin Hood: Men in Tights and Dragon: The Bruce Lee Story. TIME Magazine used the software to illustrate one article and to make two front covers. Morph, for Macintosh and Windows-based computers, also made significant inroads in less obvious forums. Architects used Morph to dramatize "before and after" stages of a historic building's restoration. Anthropologists incorporated Morph into primate studies. The National Center for Missing & Exploited Children used Morph in its age-progression work. Another product Gryphon developed for enthusiast and professional videographers was Gryphon Dynamic Effects, a collection of special-effects plug-ins for Adobe Premiere. Gryphon Software was recognized for the development of this technological innovation, receiving a number of awards, both in the personal computer industry and in the consumer market. In 1994, Gryphon entered the children's software market with the introduction of the first computer-based multimedia activity centers and the Colorforms Computer Fun Set line of software for children. The Aladdin Activity Center and Lion King Activity Center applications launched Disney's successful activity center line and offered children a variety of puzzles, coloring and spelling games based on these popular animated films. Following on these successes, Gryphon developed and published The Adventures of Batman & Robin® Activity Center, Power Rangers® Activity Center and Superman® Activity Center. Gryphon also developed and published Gryphon Bricks, a virtual construction toy for kids. With over 300 brick styles, a palette of 12 colors and several backgrounds to choose from, both kids and adults were able to use Gryphon Bricks to create anything they could imagine. Gryphon Bricks included both Kids and Adults user interfaces; the latter offered more sophisticated functions, allowing the software to grow with the user’s skill and expanding interests. In mid-1997, Gryphon Software was acquired by CUC International (later renamed Cendant Software) and its products were sold under the Knowledge Adventure and Sierra Home banners. 
Awards included Discover Magazine 1993 Technological Innovation Finalist Software Publishers Association (SPA) - 1993 Excellence In Software Codie Awards Best Business Application Best Graphics Application BYTE Magazine 1993 "Award of Distinction" New Media Magazine "New Media Envision 1993 Multimedia Award" MacUser Magazine 1993 "Eddy" Award Finalist Commendation Compute Magazine finalist, 1994 References External links Official Website (archived) Book: CD-Morph 2D animation software Software companies based in California Defunct computer companies based in California Defunct software companies of the United States Companies based in San Diego Software companies established in 1991 Software companies disestablished in 1999 1991 establishments in California 1999 disestablishments in California
26386
https://en.wikipedia.org/wiki/Red%20Hat
Red Hat
Red Hat, Inc. is an American IBM subsidiary software company that provides open source software products to enterprises. Founded in 1993, Red Hat has its corporate headquarters in Raleigh, North Carolina, with other offices worldwide. Red Hat has become associated to a large extent with its enterprise operating system Red Hat Enterprise Linux. With the acquisition of open-source enterprise middleware vendor JBoss, Red Hat also offers Red Hat Virtualization (RHV), an enterprise virtualization product. Red Hat provides storage, operating system platforms, middleware, applications, management products, and support, training, and consulting services. Red Hat creates, maintains, and contributes to many free software projects. It has acquired several proprietary software product codebases through corporate mergers and acquisitions and has released such software under open source licenses. , Red Hat is the second largest corporate contributor to the Linux kernel version 4.14 after Intel. On October 28, 2018, IBM announced its intent to acquire Red Hat for $34 billion. The acquisition closed on July 9, 2019. It now operates as an independent subsidiary. History In 1993, Bob Young incorporated the ACC Corporation, a catalog business that sold Linux and Unix software accessories. In 1994, Marc Ewing created his own Linux distribution, which he named Red Hat Linux (associated with the time Ewing wore a red Cornell University lacrosse hat, given to him by his grandfather, while attending Carnegie Mellon University). Ewing released the software in October, and it became known as the Halloween release. Young bought Ewing's business in 1995, and the two merged to become Red Hat Software, with Young serving as chief executive officer (CEO). Red Hat went public on August 11, 1999, achieving—at the time—the eighth-biggest first-day gain in the history of Wall Street. Matthew Szulik succeeded Bob Young as CEO in December of that year. Bob Young went on to found the online print on demand and self-publishing company, Lulu in 2002. On November 15, 1999, Red Hat acquired Cygnus Solutions. Cygnus provided commercial support for free software and housed maintainers of GNU software products such as the GNU Debugger and GNU Binutils. One of the founders of Cygnus, Michael Tiemann, became the chief technical officer of Red Hat and the vice president of open-source affairs. Later Red Hat acquired WireSpeed, C2Net and Hell's Kitchen Systems. In February 2000, InfoWorld awarded Red Hat its fourth consecutive "Operating System Product of the Year" award for Red Hat Linux 6.1. Red Hat acquired Planning Technologies, Inc. in 2001 and AOL's iPlanet directory and certificate-server software in 2004. Red Hat moved its headquarters from Durham to North Carolina State University's Centennial Campus in Raleigh, North Carolina in February 2002. In the following month Red Hat introduced Red Hat Linux Advanced Server, later renamed Red Hat Enterprise Linux (RHEL). Dell, IBM, HP and Oracle Corporation announced their support of the platform. In December 2005, CIO Insight magazine conducted its annual "Vendor Value Survey", in which Red Hat ranked #1 in value for the second year in a row. Red Hat stock became part of the NASDAQ-100 on December 19, 2005. Red Hat acquired open-source middleware provider JBoss on June 5, 2006, and JBoss became a division of Red Hat. On September 18, 2006, Red Hat released the Red Hat Application Stack, which integrated the JBoss technology and which was certified by other well-known software vendors. 
On December 12, 2006, Red Hat stock moved from trading on NASDAQ (RHAT) to the New York Stock Exchange (RHT). In 2007 Red Hat acquired MetaMatrix and made an agreement with Exadel to distribute its software. On March 15, 2007, Red Hat released Red Hat Enterprise Linux 5, and in June acquired Mobicents. On March 13, 2008, Red Hat acquired Amentra, a provider of systems integration services for service-oriented architecture, business process management, systems development and enterprise data services. On July 27, 2009, Red Hat replaced CIT Group in Standard and Poor's 500 stock index, a diversified index of 500 leading companies of the U.S. economy. This was reported as a major milestone for Linux. On December 15, 2009, it was reported that Red Hat will pay to settle a class action lawsuit related to the restatement of financial results from July 2004. The suit had been pending in U.S. District Court for the Eastern District of North Carolina. Red Hat reached the proposed settlement agreement and recorded a one-time charge of for the quarter that ended Nov. 30. On January 10, 2011, Red Hat announced that it would expand its headquarters in two phases, adding 540 employees to the Raleigh operation, and investing over . The state of North Carolina is offering up to in incentives. The second phase involves "expansion into new technologies such as software virtualization and technology cloud offerings". On August 25, 2011, Red Hat announced it would move about 600 employees from the N.C. State Centennial Campus to the Two Progress Plaza building. A ribbon cutting ceremony was held June 24, 2013, in the re-branded Red Hat Headquarters. In 2012, Red Hat became the first one-billion dollar open-source company, reaching in annual revenue during its fiscal year. Red Hat passed the $2 billion benchmark in 2015. the company's annual revenue was nearly $3 billion. On October 16, 2015, Red Hat announced its acquisition of IT automation startup Ansible, rumored for an estimated US$100 million. In June 2017, Red Hat announced Red Hat Hyperconverged Infrastructure (RHHI) 1.0 software product In May 2018, Red Hat acquired CoreOS. IBM subsidiary On October 28, 2018, IBM announced its intent to acquire Red Hat for US$34 billion, in one of its largest-ever acquisitions. The company will operate out of IBM's Hybrid Cloud division. Six months later, on May 3, 2019, the US Department of Justice concluded its review of IBM's proposed Red Hat acquisition, and according to Steven J. Vaughan-Nichols "essentially approved the IBM/Red Hat deal". The acquisition was closed on July 9, 2019. Fedora Project Red Hat is primary sponsor of Fedora Project, a community-supported free software project that aims to promote the rapid progress of free and open-source software and content. Business model Red Hat operates on a business model based on open-source software, development within a community, professional quality assurance, and subscription-based customer support. They produce open-source code so that more programmers can make adaptations and improvements. Red Hat sells subscriptions for the support, training, and integration services that help customers in using their open-source software products. Customers pay one set price for unlimited access to services such as Red Hat Network and up to 24/7 support. In September 2014, however, CEO Jim Whitehurst announced that Red Hat was "in the midst of a major shift from client-server to cloud-mobile". 
Rich Bynum, a member of Red Hat's legal team, attributes Linux's success and rapid development partially to open-source business models, including Red Hat's. Programs and projects One Laptop per Child Red Hat engineers worked with the One Laptop per Child initiative (a non-profit organization established by members of the MIT Media Lab) to design and produce an inexpensive laptop and try to provide every child in the world with access to open communication, open knowledge, and open learning. The XO-4 laptop, the machine of this project, runs a slimmed-down version of Fedora 17 as its operating system. KVM Avi Kivity began the development of KVM in mid-2006 at Qumranet, a technology startup company that was acquired by Red Hat in 2008. GNOME Red Hat is the largest contributor to the GNOME desktop environment. It has several employees working full-time on Evolution, the official personal information manager for GNOME. systemd Init system and system/service manager for Linux systems. PulseAudio Network-capable sound server program distributed via the freedesktop.org project. Dogtail Dogtail, an open-source automated graphical user interface (GUI) test framework initially developed by Red Hat, consists of free software released under the GNU General Public License (GPL) and is written in Python. It allows developers to build and test their applications. Red Hat announced the release of Dogtail at the 2006 Red Hat Summit. MRG Red Hat MRG is a clustering product intended for integrated high-performance computing (HPC). The acronym MRG stands for "Messaging Realtime Grid". Red Hat Enterprise MRG replaces the Red Hat Enterprise Linux RHEL, a Linux distribution developed by Red Hat, kernel in order to provide extra support for real-time computing, together with middleware support for message brokerage and scheduling workload to local or remote virtual machines, grid computing, and cloud computing. , Red Hat works with the Condor High-Throughput Computing System community and also provides support for the software. The Tuna performance-monitoring tool runs in the MRG environment. Opensource.com Red Hat produces the online publication Opensource.com since January 20, 2010. The site highlights ways open-source principles apply in domains other than software development. The site tracks the application of open-source philosophy to business, education, government, law, health, and life. The company originally produced a newsletter called Under the Brim. Wide Open magazine first appeared in March 2004, as a means for Red Hat to share technical content with subscribers on a regular basis. The Under the Brim newsletter and Wide Open magazine merged in November 2004, to become Red Hat Magazine. In January 2010, Red Hat Magazine became Opensource.com. Red Hat Exchange In 2007, Red Hat announced that it had reached an agreement with some free software and open-source (FOSS) companies that allowed it to make a distribution portal called Red Hat Exchange, reselling FOSS software with the original branding intact. However, by 2010, Red Hat had abandoned the Exchange program to focus their efforts more on their Open Source Channel Alliance which began in April 2009. Red Hat Single Sign On Red Hat Single Sign On is a software product to allow single sign-on with Identity Management and Access Management aimed at modern applications and services. There is an ongoing Open source project alongside Red Hat SSO, that is Keycloak. Keycloak is basically the community version from Red Hat SSO. 
Red Hat Single Sign On 7.3 is the latest version available. Red Hat Subscription Management Red Hat Subscription Management (RHSM) combines content delivery with subscription management. Ceph Storage Red Hat is the largest contributor to the Ceph software-defined storage (SDS) project, on which Red Hat Ceph Storage is based; it provides block, file and object storage running on industry-standard x86 servers and Ethernet IP networks. Ceph aims primarily for completely distributed operation without a single point of failure, scalable to the exabyte level. Ceph replicates data and makes it fault-tolerant, using commodity hardware and requiring no specific hardware support. Ceph's system offers disaster recovery and data redundancy through techniques such as replication, erasure coding, snapshots and storage cloning. As a result of its design, the system is both self-healing and self-managing, aiming to minimize administration time and other costs. In this way, administrators have a single, consolidated system that avoids silos and collects the storage within a common management framework. Ceph consolidates several storage use cases and improves resource utilization. It also lets an organization deploy servers where needed. OpenShift Red Hat operates OpenShift, a cloud computing platform as a service, supporting applications written in Node.js, PHP, Perl, Python, Ruby, Java EE and more. On July 31, 2018, Red Hat announced the release of Istio 1.0, a microservices management program used in tandem with the Kubernetes platform. The software purports to provide "traffic management, service identity and security, policy enforcement and telemetry" services in order to streamline Kubernetes use under the various Fedora-based operating systems. Red Hat's Brian Redbeard Harring described Istio as "aiming to be a control plane, similar to the Kubernetes control plane, for configuring a series of proxy servers that get injected between application components". Red Hat is also the second-largest contributor to the Kubernetes code itself, after Google. OpenStack Red Hat markets a version of OpenStack that helps manage a data center in the manner of cloud computing. CloudForms Red Hat CloudForms provides management of virtual machines, instances and containers based on VMware vSphere, Red Hat Virtualization, Microsoft Hyper-V, OpenStack, Amazon EC2, Google Cloud Platform, Microsoft Azure, and Red Hat OpenShift. CloudForms is based on the ManageIQ project that Red Hat open sourced; the ManageIQ code came from Red Hat's acquisition of ManageIQ in 2012. LibreOffice Red Hat contributes, with several software developers, to LibreOffice, a free and open-source office suite. Other FOSS projects Red Hat has some employees working full-time on other free and open-source software projects that are not Red Hat products, such as two full-time employees working on the free-software radeon driver (David Airlie and Jerome Glisse) and one full-time employee working on the free-software nouveau graphics driver. Another such project is AeroGear, an open-source project that brings security and development expertise to cross-platform enterprise mobile development. Red Hat also organises "Open Source Day" events where multiple partners show their open-source technologies. Xorg Red Hat is one of the largest contributors to the X Window System. 
Utilities and tools Subscribers have access to: Red Hat Developer Toolset (DTS) – performance analysis and development tools Red Hat Software Collections (RHSCL) Over and above Red Hat's major products and acquisitions, Red Hat programmers have produced software programming-tools and utilities to supplement standard Unix and Linux software. Some of these Red Hat "products" have found their way from specifically Red Hat operating environments via open-source channels to a wider community. Such utilities include: Disk Druid – for disk partitioning rpm – for package management sos (son of sysreport) – tools for collecting information on system hardware and configuration. sosreport – reports system hardware and configuration details SystemTap – tracing tool for Linux kernels, developed with IBM, Hitachi, Oracle and Intel NetworkManager The Red Hat website lists the organization's major involvements in free and open-source software projects. Community projects under the aegis of Red Hat include: the Pulp application for software repository management. Subsidiaries Red Hat Czech Red Hat Czech s.r.o. is a research and development arm of Red Hat, based in Brno, Czech Republic. The subsidiary was formed in 2006 and has 1,180 employees (2019). Red Hat chose to enter the Czech Republic in 2006 over other locations due to the country's embrace of open-source. The subsidiary expanded in 2017 to a second location in the Brno Technology Park to accommodate an additional 350 employees. In 2016, Red Hat Czech reported revenue of CZK 1,002 million (FY 2016), and net income of CZK 123 million (FY 2016), with assets of CZK 420 million (FY 2016)|CZK 325 million (FY 2015). The group was named the "Most progressive employer of the year" in the Czech Republic in 2010, and the "Best Employer in the Czech Republic" for large scale companies in 2011 by Aon. Red Hat India In 2000, Red Hat created the subsidiary Red Hat India to deliver Red Hat software, support, and services to Indian customers. Colin Tenwick, former vice president and general manager of Red Hat EMEA, said Red Hat India was opened "in response to the rapid adoption of Red Hat Linux in the subcontinent. Demand for open-source solutions from the Indian markets is rising and Red Hat wants to play a major role in this region." Red Hat India has worked with local companies to enable adoption of open-source technology in both government and education. In 2006, Red Hat India had a distribution network of more than 70 channel partners spanning 27 cities across India. Red Hat India's channel partners included MarkCraft Solutions, Ashtech Infotech Pvt Ltd., Efensys Technologies, Embee Software, Allied Digital Services, and Softcell Technologies. Distributors include Integra Micro Systems and Ingram Micro. Mergers and acquisitions Red Hat's first major acquisition involved Delix Computer GmbH-Linux Div, the Linux-based operating-system division of Delix Computer, a German computer company, on July 30, 1999. Red Hat acquired Cygnus Solutions, a company that provided commercial support for free software, on January 11, 2000 – it was the company's largest acquisition, for . Michael Tiemann, co-founder of Cygnus, served as the chief technical officer of Red Hat after the acquisition. Red Hat made the most acquisitions in 2000 with five: Cygnus Solutions, Bluecurve, Wirespeed Communications, Hell's Kitchen Systems, and C2Net. On June 5, 2006, Red Hat acquired open-source middleware provider JBoss for and integrated it as its own division of Red Hat. 
On December 14, 1998, Red Hat made its first divestment, when Intel and Netscape acquired undisclosed minority stakes in the company. The next year, on March 9, 1999, Compaq, IBM, Dell and Novell each acquired undisclosed minority stakes in Red Hat. Acquisitions Divestitures References External links Red Hat contributions 1993 establishments in North Carolina 1999 initial public offerings 2019 mergers and acquisitions Cloud computing providers Companies based in Raleigh, North Carolina Software companies established in 1993 American companies established in 1993 Companies formerly listed on the Nasdaq Companies formerly listed on the New York Stock Exchange Software companies based in North Carolina Free software companies GNOME companies Linux companies Research Triangle IBM acquisitions IBM subsidiaries Software companies of the United States
2665251
https://en.wikipedia.org/wiki/Time-of-check%20to%20time-of-use
Time-of-check to time-of-use
In software development, time-of-check to time-of-use (TOCTOU, TOCTTOU or TOC/TOU) is a class of software bugs caused by a race condition involving the checking of the state of a part of a system (such as a security credential) and the use of the results of that check. TOCTOU race conditions are common in Unix between operations on the file system, but can occur in other contexts, including local sockets and improper use of database transactions. In the early 1990s, the mail utility of BSD 4.3 UNIX had an exploitable race condition for temporary files because it used the mktemp() function. Early versions of OpenSSH had an exploitable race condition for Unix domain sockets. They remain a problem in modern systems; as of 2019, a TOCTOU race condition in Docker allows root access to the filesystem of the host platform. Examples In Unix, the following C code, when used in a setuid program, has a TOCTOU bug:

    if (access("file", W_OK) != 0) {
        exit(1);
    }

    fd = open("file", O_WRONLY);
    write(fd, buffer, sizeof(buffer));

Here, access is intended to check whether the real user who executed the setuid program would normally be allowed to write the file (i.e., access checks the real user ID rather than the effective user ID). This check-then-use sequence is vulnerable to attack: in the window between the access and the open, an attacker can replace "file" with a symbolic link to a file the real user may not write, such as the system password database, tricking the setuid victim into overwriting an entry in it. TOCTOU races can be used for privilege escalation, to get administrative access to a machine. Although this sequence of events requires precise timing, it is possible for an attacker to arrange such conditions without too much difficulty. The implication is that applications cannot assume the state managed by the operating system (in this case the file system namespace) will not change between system calls. Reliably timing TOCTOU Exploiting a TOCTOU race condition requires precise timing to ensure that the attacker's operations interleave properly with the victim's. In the example above, the attacker must execute the symlink system call precisely between the access and the open. For the most general attack, the attacker must be scheduled for execution after each operation by the victim, also known as "single-stepping" the victim. In the case of the BSD 4.3 mail utility and mktemp(), the attacker can simply keep launching the mail utility in one process, and keep guessing the temporary file names and making symlinks in another process. The attack can usually succeed in less than one minute. Techniques for single-stepping a victim program include file system mazes and algorithmic complexity attacks. In both cases, the attacker manipulates the OS state to control the scheduling of the victim. File system mazes force the victim to read a directory entry that is not in the OS cache, and the OS puts the victim to sleep while it is reading the directory from disk. Algorithmic complexity attacks force the victim to spend its entire scheduling quantum inside a single system call traversing the kernel's hash table of cached file names; the attacker creates a very large number of files with names that hash to the same value as the file the victim will look up. 
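The following is a minimal, illustrative sketch of the attacker's side of this race (not taken from any real exploit); it assumes the victim is the setuid program shown above, that the victim resolves the pathname "file" in the attacker's working directory, and that /tmp/harmless is a file the attacker is allowed to write. The loop keeps swapping "file" between a harmless target (so the victim's access check succeeds) and the protected password file (so the victim's open lands on it), hoping one swap falls inside the check-to-use window:

    /* attacker.c - illustrative sketch of the classic symlink-swap race.
       Assumes the setuid victim runs access("file", W_OK) followed by
       open("file", O_WRONLY) as in the example above. */
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        for (;;) {
            /* Point "file" at something the attacker may write, so the
               victim's access() check (done with the real user ID) passes. */
            unlink("file");
            if (symlink("/tmp/harmless", "file") != 0)
                perror("symlink harmless");

            /* Quickly retarget "file" at the protected file, hoping the
               victim's open() (done with the effective user ID) happens now. */
            unlink("file");
            if (symlink("/etc/passwd", "file") != 0)
                perror("symlink passwd");
        }
        return 0; /* never reached */
    }

In practice an attacker would combine a loop like this with one of the single-stepping techniques described above (file system mazes or algorithmic complexity attacks) to make the timing reliable.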
Preventing TOCTOU Despite their conceptual simplicity, TOCTOU race conditions are difficult to avoid and eliminate. One general technique is to use error handling instead of pre-checking, under the philosophy of EAFP – "it is easier to ask for forgiveness than permission" – rather than LBYL – "look before you leap". In this approach there is no separate check, and a failure of the assumptions to hold is signaled by an error being returned from the operation itself. In the context of file system TOCTOU race conditions, the fundamental challenge is ensuring that the file system cannot be changed between two system calls. In 2004, an impossibility result was published, showing that there was no portable, deterministic technique for avoiding TOCTOU race conditions. Since this impossibility result, libraries for tracking file descriptors and ensuring correctness have been proposed by researchers. An alternative solution proposed in the research community is for UNIX systems to adopt transactions in the file system or the OS kernel. Transactions provide a concurrency control abstraction for the OS, and can be used to prevent TOCTOU races. While no production UNIX kernel has yet adopted transactions, proof-of-concept research prototypes have been developed for Linux, including the Valor file system and the TxOS kernel. Microsoft Windows has added transactions to its NTFS file system, but Microsoft discourages their use and has indicated that they may be removed in a future version of Windows. File locking is a common technique for preventing race conditions for a single file, but it does not extend to the file system namespace and other metadata, it does not work well with networked filesystems, and it cannot prevent TOCTOU race conditions. For setuid binaries, a possible solution is to use the seteuid() system call to change the effective user and then perform the open() (a sketch of this approach appears below). Differences in setuid() between operating systems can be problematic. See also Linearizability References Further reading Computer security exploits software bugs
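Returning to the setuid mitigation mentioned above, here is a minimal sketch, assuming a setuid-root program whose real user ID is the unprivileged invoking user; the file name and buffer contents are illustrative. Instead of pre-checking with access(), the program temporarily drops its effective privileges so that the open() itself is permission-checked as the invoking user, combining check and use into a single step in the spirit of EAFP:

    /* drop_priv_open.c - illustrative sketch, not a hardened implementation.
       The separate access() check is removed; open() is performed with the
       real (invoking) user's privileges, so check and use cannot be split. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/types.h>
    #include <unistd.h>

    int main(void)
    {
        uid_t real_uid = getuid();      /* unprivileged invoking user */
        uid_t saved_euid = geteuid();   /* typically root for a setuid-root binary */
        char buffer[] = "example data\n";

        if (seteuid(real_uid) != 0) {   /* act as the real user for the open */
            perror("seteuid(real_uid)");
            exit(1);
        }

        int fd = open("file", O_WRONLY); /* permission checked here, EAFP style */
        if (fd < 0) {
            perror("open");             /* no TOCTOU window: failure shows up here */
            exit(1);
        }

        if (seteuid(saved_euid) != 0) { /* restore privileges if still required */
            perror("seteuid(saved_euid)");
            exit(1);
        }

        write(fd, buffer, sizeof(buffer) - 1);
        close(fd);
        return 0;
    }

As the article notes, seteuid() semantics differ between operating systems, so a portable program would also need to verify that the privilege drop actually took effect (for example by checking geteuid() afterwards) before trusting the open().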
27902361
https://en.wikipedia.org/wiki/Roberto%20Marroquin
Roberto Marroquin
Roberto Marroquin (born August 21, 1989) is an American boxer in the super bantamweight division and he is signed to Bob Arum's Top Rank. Amateur career During his amateur career Marroquin won silver medals at the 2005 National Junior Olympics, 2006 International Aliyev Cup, 2006 National PAL Championships, a Gold medal at the 2006 National Junior Olympics, and another silver medal at the 2008 U.S. Olympic Team Trials. He also has a win over Gary Russell. Marroquin finished with a record of 165-15. Professional career In the under card of Manny Pacquiao vs. Joshua Clottey at Cowboys Stadium, Marroquin beat veteran Samuel Sanchez in the second round by K.O. Professional record |- style="margin:0.5em auto; font-size:95%;" | style="text-align:center;" colspan="8"|26 Wins (19 knockouts), 4 Losses, 1 Draw |- style="text-align:center; margin:0.5em auto; font-size:95%; background:#e3e3e3;" | style="border-style:none none solid solid; "|Res. | style="border-style:none none solid solid; "|Record | style="border-style:none none solid solid; "|Opponent | style="border-style:none none solid solid; "|Type | style="border-style:none none solid solid; "|Rd., Time | style="border-style:none none solid solid; "|Date | style="border-style:none none solid solid; "|Location | style="border-style:none none solid solid; "|Notes |- align=center |Win || 26-4-1 ||align=left| Jonathan Perez |KO || 4 (8), 2:23 || June 11, 2017 ||align=left| Pioneer Event Center, Lancaster, California |align=left| |- align=center |Loss || 25-4-1 ||align=left| Carlos Diaz Ramirez |UD || 10 || May 14, 2016 ||align=left| Gimnasio UAT, Reynosa, Tamaulipas |align=left| |- align=center |Win || 25-3-1 ||align=left| Kuin Evans |TKO || 2 (8), 2:31 || November 5, 2015 ||align=left| The Empire Room, Dallas, Texas |align=left| |- align=center |Win || 24-3-1 ||align=left| Miguel Soto |RTD || 4 (8), 0:10 || September 26, 2014 ||align=left| Mesquite Arena, Mesquite, Texas |align=left| |- align=center |style="background: #B0C4DE"|Draw || 23-3-1 ||align=left| Alejandro Rodriguez |PTS || 8 || February 1, 2014 ||align=left| Laredo Energy Arena, Laredo, Texas |align=left| |- align=center |Loss || 23-3 ||align=left| Daniel Diaz |SD || 10 || June 29, 2013 ||align=left| WinStar World Casino, Thackerville, Oklahoma |align=left| |- align=center |Win || 23-2 ||align=left| Antonio Escalante |TKO || 3 (10), 0:49 || March 16, 2013 ||align=left| WinStar World Casino, Thackerville, Oklahoma |align=left| |- align=center |Loss || 22-2 ||align=left| Guillermo Rigondeaux |UD || 12 || September 15, 2012 ||align=left| Thomas & Mack Center, Las Vegas, Nevada |align=left| |- align=center |Win || 22-1 ||align=left| Arturo Santiago |KO || 2 (8), 1:32 || June 16, 2012 ||align=left| Sun Bowl Stadium, El Paso, Texas |align=left| |- align=center |Win || 21-1 ||align=left| Carlos Valcárcel |UD || 10 || December 17, 2011 ||align=left| WinStar World Casino, Thackerville, Oklahoma |align=left| |- align=center |Win || 20-1 ||align=left| Jose Angel Beranza |UD || 8 || July 30, 2011 ||align=left| Softball Country Arena, Denver, Colorado |align=left| |- align=center |Loss || 19-1 ||align=left| Francisco Leal |SD || 10 || April 23, 2011 ||align=left| WinStar World Casino, Thackerville, Oklahoma |align=left| |- align=center |Win || 19-0 ||align=left| Gilberto Sanchez Leon |UD || 8 || February 26, 2011 ||align=left| Palms Casino Resort, Las Vegas, Nevada |align=left| |- align=center |Win || 18-0 ||align=left| Eduardo Arcos |TKO || 4 (8), 1:21 || January 22, 2011 ||align=left| Texas 
Station, North Las Vegas, Nevada |align=left| |- align=center |Win || 17-0 ||align=left| Francisco Dominguez |TKO || 1 (6), 1:27 || November 13, 2010 || align=left| Cowboys Stadium, Arlington, Texas |align=left| |- align=center |Win || 16-0 ||align=left| Rafael Cerrillo |UD || 6 || October 16, 2010 || align=left| Estadio de Beisbol, Monterrey, Nuevo León |align=left| |- align=center |Win || 15-0 ||align=left| Jesus Quintero |TKO || 3 (8), 1:40 || August 7, 2010 || align=left| Estadio Hector Espino, Hermosillo, Sonora |align=left| |- align=center |Win || 14-0 ||align=left| Arturo Camargo |KO || 2 (6), 2:20 || May 15, 2010 || align=left| Estadio Centenario, Los Mochis, Sinaloa |align=left| |- align=center |Win || 13-0 ||align=left| Samuel Sanchez |KO || 2 (6), 1:36 || March 13, 2010 || align=left| Cowboys Stadium, Arlington, Texas |align=left| |- align=center |Win || 12-0 || align=left| Robert Guillen |TKO || 1 (6), 2:30 || February 6, 2010 || align=left| Convention Center, McAllen, Texas |align=left| |- align=center |Win || 11-0 || align=left| Anthony Napunyi |TKO || 3 (6), 0:31 || November 13, 2009 || align=left| Mandalay Bay House of Blues, Las Vegas, Nevada |align=left| |- align=center |Win || 10-0 || align=left| Jose Garcia Bernal |UD || 6 || October 17, 2009 || align=left| Whataburger Field, Corpus Christi, Texas |align=left| |- align=center |Win || 9-0 || align=left| Steven Johnson |TKO || 2 (6), 1:40 || August 29, 2009 || align=left| Quick Trip Ballpark, Grand Prairie, Texas |align=left| |- align=center |Win || 8-0 || align=left| Jose Manuel Garcia |TKO || 3 (6), 1:27 || June 19, 2009 || align=left| Dr Pepper Arena, Frisco, Texas |align=left| |- align=center |Win || 7-0 || align=left| Robert DaLuz |UD || 6 || May 16, 2009 || align=left| Buffalo Bill's Star Arena, Primm, Nevada |align=left| |- align=center |Win || 6-0 || align=left| Julio Valadez |KO || 4 (4), 2:15 || May 1, 2009 || align=left| Hard Rock Hotel and Casino, Las Vegas, Nevada |align=left| |- align=center |Win || 5-0 || align=left| Isaac Hidalgo |TKO || 1 (6), 2:46 || December 6, 2008 || align=left| MGM Grand, Las Vegas, Nevada |align=left| |- align=center |Win || 4-0 || align=left| Gino Escamilla |UD || 6 || November 5, 2008 || align=left| Isleta Casino & Resort, Albuquerque, New Mexico |align=left| |- align=center |Win || 3-0 || align=left| Roberto Perez |RTD || 2 (4), 0:10 || July 11, 2008 || align=left| American Bank Center, Corpus Christi, Texas |align=left| |- align=center |Win || 2-0 || align=left| Luis Angel Paneto |TKO || 2 (4), 0:02 || February 29, 2008 || align=left| Municipal Auditorium, Harlingen, Texas |align=left| |- align=center |Win || 1-0 || align=left| Genaro Castorena |RTD || 2 (4), 0:10 || January 18, 2008 || align=left| Jacob Brown Auditorium, Brownsville, Texas |align=left| |- align=center References External links American boxers of Mexican descent Super-bantamweight boxers 1989 births Living people American male boxers
57912174
https://en.wikipedia.org/wiki/Black%20Mirror%20%282017%20video%20game%29
Black Mirror (2017 video game)
Black Mirror is a 2017 gothic adventure horror video game developed by KING Art Games and published by THQ Nordic. The game is a reboot of The Black Mirror series, a trilogy of point-and-click games for Microsoft Windows. Consequently, it is sometimes referred to as Black Mirror IV or Black Mirror 4. Synopsis Setting The game features a new setting and cast of characters from the original title. Set in 1926, players control David Gordon, who travels from his home in the British Raj to his ancestral homeland of Scotland, following the suicide of his estranged father, John Gordon. John was an avid enthusiast of occultism, and the final weeks leading to his death remain a mystery. Given these suspicious circumstances, David begins an investigation into the suicide of John. Plot A man runs in an abandoned village, hearing the screams of spirits. He approaches a ritual ground, where he says a prayer and lights himself on fire. David Gordon receives a letter that his father, John Gordon, has died, and that he must return to his home to begin the process of inheriting John's estate, which his family has lived around since Roman times. The Gordon family butler, Angus McKinnon, escorts him to the castle grounds, called the Sgathan Dubh (Gaelic for "Black Mirror"). Margaret Gordon, David's grandmother, and Andrew Harrison, a lawyer working on dividing the property, greet him personally in the foyer, but are unwilling to answer David's questions about his father's death. Later that night, David sneaks out of his room to investigate the mansion, curious about the circumstances of the grounds and seeking further answers about John's passing. While exploring the mansion library, he overhears and notices that Andrew is looking for a room in the house. While searching, David meets the grounds gardener, Rory Johnstone, maid, Ailsa Crannan, and his cousin Eddie Malorie, who similarly avoid his questioning. Later, an unknown boy appears next to him, who runs away after being spotted. When the child appears again, he is with Edward Gordon, David's deceased grandfather, who pushes the boy down the nearby stairs. David jumps to save the boy but falls down the stairs. Angus, Margaret, Ailsa, and Andrew arrive hearing the fall, but claim that they have not seen anyone. David wins Ailsa's trust by giving her an earring she lost, and she offers to give him help in his quest later that evening. Intermediately, David experiences another vision, showing the boy burying something at a grave in the yard. David goes to the grave and learns it is Cecilia Gordon, his dead aunt. In the chapel nearby, David experiences another vision and goes unconscious, to be saved by Leah Farber, his father's doctor, who is in the area. Leah explains John was locked in a madhouse for his delusions regarding a curse. The pair return to the house where David questions Leah further. During this, Eddie finds Ailsa dead in the basement. Investigating the murder, David experiences more visions, including ones with a young woman, and questions Rory. Having softened up to David, Rory explains the woman David is seeing is Cecilia, who drowned herself in the lake on the property. Suspecting Eddie of murdering Ailsa through his investigation, David confronts Eddie, who asserts that he did not kill Ailsa, but does not deny the evidence presented by David. 
Eddie mentions Rosemary, the mother of Margaret and Edward and the great-grandmother of David, whereupon Margaret immediately breaks off the conversation and orders Angus to bring Eddie to the attic. David follows Eddie there and speaks to a completely distraught Eddie about Rosemary. A delusional Eddie points to a place on the wall that originally had a crucifix cross, triggering another vision in David, which shows Edward beating his mother Rosemary in the attic with a crucifix. David and Leah search the room and uncover a secret passage to the attic, where they find Rosemary, still alive and chained to a bed. Rosemary says Edward chained her there and refuses to leave, saying she is safer here than anywhere else on the property. The two decide to confront Margaret about Rosemary's captivity, but as they head out, they encounter Margaret and Angus, who tell them that they just found out that Eddie has taken Andrew hostage after being accused of murdering Ailsa. While trying to free Andrew, the young boy reappears and causes a swarm of moths to attack Eddie, ending the standoff. Andrew, furious at the events of the night, leaves the castle. Searching the room, the pair find Edward's desk, which contains a diary about an abandoned village that has great power, in which Edward was born. Leah and David interrogate Rosemary for the location, an island across the lake on the property, and Rory sails them there. The two explore the island, filled with ghostly screams, and uncover a hideout. Inside, they find records written by Andrew, who intends to resurrect Edward's ghost and release it using the power of the Black Mirror, which the Gordon family grounds are built upon. It is also learned that Andrew is the son of Cecilia, making him David's first cousin. In another vision, David sees an apparition of his father, showing he is alive in the secret hiding place trying to prevent Andrew from performing the ritual in the abandoned village. After a fight, John escapes, and in another vision, performs a protective ritual, the event seen at the start of the game. Angus appears and attacks David with a knife. David manages to free himself, and Angus attacks Rory, who has come to help. Rory is stabbed, whereupon David attacks and distracts Angus. Rory pulls the knife out of his stomach and stabs Angus in the neck, killing him. The three leave the village and, at Rory's request, go to the greenhouse on the property, where Rory performs a ritual to further strengthen David's connection with his father, who is revealed to be the ghostly boy David has been seeing. Rory succumbs to his injury in the greenhouse and Leah and David go back to the castle, where they find Margaret, who was beaten by Andrew, who has taken Eddie through a secret passage underneath the castle. David and Leah leave, after which Rosemary appears and attacks Margaret for enforcing her captivity; the two die in the fight. David has another vision, which he now knows is him seeing into the past, showing John accusing Edward of killing Cecilia. At this moment John penetrates the darkness of the Black Mirror, so that he pushes Edward to the ground and murders him with a model band. John's ghost appears again David forgives him and thanks his father. David and Leah now make their way through the passage to the Black Mirror in a cave. Leah disappears on the way. After this, John appears again as a ghost and leads David to the Black Mirror cave waiting for him. 
He finds Andrew, who is holding unconscious Eddie and Leah in front of the Black Mirror. Andrew is attempting a resurrection ritual intended to bring Edward back into this world, and needs the blood of three Gordons and a soul. Andrew kills Eddie and uses Leah as a bargaining chip to convince David to cut his hands for the blood offering. David deceives Andrew and snatches the knife from him, whereupon Leah pushes him into the Black Mirror. As Andrew sinks inside, the ghost of Cecilia appears, whom Andrew never met, and takes Andrew into the depths of the Mirror. The cave collapses, and David and Leah escape. Outside, the pair set Rory's body on the boat they had previously used and set it out to the lake, where John happily watches. Gameplay Unlike the original trilogy, which was point-and-click in nature, Black Mirror is a more traditional graphic adventure title. Players are free to explore the mansion at will, interacting with various objects in the environment to collect items and solve puzzles. Various characters can be found within the mansion including members of the Gordon family and staff, who may have information that can aid the investigation. As the story progresses, more areas of the Gordon's property become available. Release The game was announced on August 16, 2017, with a trailer and November 28 release date being given. The title was released on November 28 for Linux, Microsoft Windows, OS X, PlayStation 4, and Xbox One. Physical copies were produced for the console versions, while the Linux, Windows and OS X releases were digital exclusives. Reception Black Mirror received mixed reviews from critics, according to review aggregator Metacritic. The title was generally praised for its story, puzzles and art direction, but received criticism for its short length compared to other games in the series and technical issues, including glitches, bugs, poor graphics, and long load times. The small extent of translation has been criticized, especially in the Czech Republic, where the game originates from, because all previous games have been translated into more languages. References External links Official website 2017 video games Adventure games Black Mirror (video game series) King Art Games games Horror video games Linux games MacOS games PlayStation 4 games Point-and-click adventure games Single-player video games THQ Nordic games Video games developed in Germany Video game sequels Video games set in the 1920s Video games set in Scotland Windows games Xbox One games Works set in castles
40467176
https://en.wikipedia.org/wiki/Optimus%20UI
Optimus UI
Optimus UI is a front-end touch interface developed by LG Electronics with partners, featuring a full touch user interface. It is sometimes incorrectly identified as an operating system. Optimus UI is used internally by LG for sophisticated feature phones and tablet computers, and is not available for licensing by external parties. The latest version of Optimus UI, 4.1.2, has been released on the Optimus K II and the Optimus Neo 3. It features a more refined user interface as compared to the previous version, 4.1.1, which would include voice shutter and quick memo. Optimus UI is used in devices based on Android. Phones running LG Optimus UI Android Smartphones/Phablets LG GT540 Optimus LG Optimus One LG Optimus 2X LG Optimus 4X HD LG Optimus 3D LG Optimus 3D Max LG Optimus Slider LG Optimus LTE LG Optimus LTE 2 LG Optimus Vu LG Optimus Black LG Optimus Chat LG Optimus Chic LG Optimus Net LG Optimus Sol LG Optimus HUB (E510) LG Optimus L3 LG Optimus L5 LG Optimus L5 II LG Optimus L7 LG Optimus L9 LG Optimus L9 II LG Optimus L90 LG Optimus F3 LG Optimus F3Q LG Optimus F5 LG Optimus F6 LG Optimus F7 LG Optimus G LG Optimus G Pro LG G2 LG G Pro 2 LG Vu 3 LG G Pro Lite LG G Flex LG L40 Dual LG L65 Dual LG L70 Dual LG L80 Dual LG L90 Dual LG G3 LG G3S LG Spectrum 2 LG G2 Mini LG G Flex 2 Tablets LG Optimus Pad LG Optimus Pad LTE LG G Pad 7.0 LG G Pad 8.3 Windows Phone LG Optimus 7 LG Quantum References Mobile operating systems LG Electronics Android (operating system) software
37764426
https://en.wikipedia.org/wiki/Outline%20of%20natural%20language%20processing
Outline of natural language processing
The following outline is provided as an overview of and topical guide to natural language processing: Natural language processing – computer activity in which computers are entailed to analyze, understand, alter, or generate natural language. This includes the automation of any or all linguistic forms, activities, or methods of communication, such as conversation, correspondence, reading, written composition, dictation, publishing, translation, lip reading, and so on. Natural language processing is also the name of the branch of computer science, artificial intelligence, and linguistics concerned with enabling computers to engage in communication using natural language(s) in all forms, including but not limited to speech, print, writing, and signing. Natural language processing Natural language processing can be described as all of the following: A field of science – systematic enterprise that builds and organizes knowledge in the form of testable explanations and predictions about the universe. An applied science – field that applies human knowledge to build or design useful things. A field of computer science – scientific and practical approach to computation and its applications. A branch of artificial intelligence – intelligence of machines and robots and the branch of computer science that aims to create it. A subfield of computational linguistics – interdisciplinary field dealing with the statistical or rule-based modeling of natural language from a computational perspective. An application of engineering – science, skill, and profession of acquiring and applying scientific, economic, social, and practical knowledge, in order to design and also build structures, machines, devices, systems, materials and processes. An application of software engineering – application of a systematic, disciplined, quantifiable approach to the design, development, operation, and maintenance of software, and the study of these approaches; that is, the application of engineering to software. A subfield of computer programming – process of designing, writing, testing, debugging, and maintaining the source code of computer programs. This source code is written in one or more programming languages (such as Java, C++, C#, Python, etc.). The purpose of programming is to create a set of instructions that computers use to perform specific operations or to exhibit desired behaviors. A subfield of artificial intelligence programming – A type of system – set of interacting or interdependent components forming an integrated whole or a set of elements (often called 'components' ) and relationships which are different from relationships of the set or its elements to other elements or sets. A system that includes software – software is a collection of computer programs and related data that provides the instructions for telling a computer what to do and how to do it. Software refers to one or more computer programs and data held in the storage of the computer. In other words, software is a set of programs, procedures, algorithms and its documentation concerned with the operation of a data processing system. A type of technology – making, modification, usage, and knowledge of tools, machines, techniques, crafts, systems, methods of organization, in order to solve a problem, improve a preexisting solution to a problem, achieve a goal, handle an applied input/output relation or perform a specific function. It can also refer to the collection of such tools, machinery, modifications, arrangements and procedures. 
Technologies significantly affect human as well as other animal species' ability to control and adapt to their natural environments. A form of computer technology – computers and their application. NLP makes use of computers, image scanners, microphones, and many types of software programs. Language technology – consists of natural language processing (NLP) and computational linguistics (CL) on the one hand, and speech technology on the other. It also includes many application oriented aspects of these. It is often called human language technology (HLT). Prerequisite technologies The following technologies make natural language processing possible: Communication – the activity of a source sending a message to a receiver Language – Speech – Writing – Computing – Computers – Computer programming – Information extraction – User interface – Software – Text editing – program used to edit plain text files Word processing – piece of software used for composing, editing, formatting, printing documents Input devices – pieces of hardware for sending data to a computer to be processed Computer keyboard – typewriter style input device whose input is converted into various data depending on the circumstances Image scanners – Subfields of natural language processing Information extraction (IE) – field concerned in general with the extraction of semantic information from text. This covers tasks such as named entity recognition, coreference resolution, relationship extraction, etc. Ontology engineering – field that studies the methods and methodologies for building ontologies, which are formal representations of a set of concepts within a domain and the relationships between those concepts. Speech processing – field that covers speech recognition, text-to-speech and related tasks. Statistical natural language processing – Statistical semantics – a subfield of computational semantics that establishes semantic relations between words to examine their contexts. Distributional semantics – a subfield of statistical semantics that examines the semantic relationship of words across a corpora or in large samples of data. Related fields Natural language processing contributes to, and makes use of (the theories, tools, and methodologies from), the following fields: Automated reasoning – area of computer science and mathematical logic dedicated to understanding various aspects of reasoning, and producing software which allows computers to reason completely, or nearly completely, automatically. A sub-field of artificial intelligence, automatic reasoning is also grounded in theoretical computer science and philosophy of mind. Linguistics – scientific study of human language. Natural language processing requires understanding of the structure and application of language, and therefore it draws heavily from linguistics. Applied linguistics – interdisciplinary field of study that identifies, investigates, and offers solutions to language-related real-life problems. Some of the academic fields related to applied linguistics are education, linguistics, psychology, computer science, anthropology, and sociology. Some of the subfields of applied linguistics relevant to natural language processing are: Bilingualism / Multilingualism – Computer-mediated communication (CMC) – any communicative transaction that occurs through the use of two or more networked computers. Research on CMC focuses largely on the social effects of different computer-supported communication technologies. 
Many recent studies involve Internet-based social networking supported by social software. Contrastive linguistics – practice-oriented linguistic approach that seeks to describe the differences and similarities between a pair of languages. Conversation analysis (CA) – approach to the study of social interaction, embracing both verbal and non-verbal conduct, in situations of everyday life. Turn-taking is one aspect of language use that is studied by CA. Discourse analysis – various approaches to analyzing written, vocal, or sign language use or any significant semiotic event. Forensic linguistics – application of linguistic knowledge, methods and insights to the forensic context of law, language, crime investigation, trial, and judicial procedure. Interlinguistics – study of improving communications between people of different first languages with the use of ethnic and auxiliary languages (lingua franca). For instance by use of intentional international auxiliary languages, such as Esperanto or Interlingua, or spontaneous interlanguages known as pidgin languages. Language assessment – assessment of first, second or other language in the school, college, or university context; assessment of language use in the workplace; and assessment of language in the immigration, citizenship, and asylum contexts. The assessment may include analyses of listening, speaking, reading, writing or cultural understanding, with respect to understanding how the language works theoretically and the ability to use the language practically. Language pedagogy – science and art of language education, including approaches and methods of language teaching and study. Natural language processing is used in programs designed to teach language, including first and second language training. Language planning – Language policy – Lexicography – Literacies – Pragmatics – Second language acquisition – Stylistics – Translation – Computational linguistics – interdisciplinary field dealing with the statistical or rule-based modeling of natural language from a computational perspective. The models and tools of computational linguistics are used extensively in the field of natural language processing, and vice versa. Computational semantics – Corpus linguistics – study of language as expressed in samples (corpora) of "real world" text. Corpora is the plural of corpus, and a corpus is a specifically selected collection of texts (or speech segments) composed of natural language. After it is constructed (gathered or composed), a corpus is analyzed with the methods of computational linguistics to infer the meaning and context of its components (words, phrases, and sentences), and the relationships between them. Optionally, a corpus can be annotated ("tagged") with data (manually or automatically) to make the corpus easier to understand (e.g., part-of-speech tagging). This data is then applied to make sense of user input, for example, to make better (automated) guesses of what people are talking about or saying, perhaps to achieve more narrowly focused web searches, or for speech recognition. Metalinguistics – Sign linguistics – scientific study and analysis of natural sign languages, their features, their structure (phonology, morphology, syntax, and semantics), their acquisition (as a primary or secondary language), how they develop independently of other languages, their application in communication, their relationships to other languages (including spoken languages), and many other aspects. 
Human–computer interaction – the intersection of computer science and behavioral sciences, this field involves the study, planning, and design of the interaction between people (users) and computers. Attention to human-machine interaction is important because poorly designed human-machine interfaces can lead to many unexpected problems. A classic example of this is the Three Mile Island accident, where investigations concluded that the design of the human–machine interface was at least partially responsible for the disaster. Information retrieval (IR) – field concerned with storing, searching and retrieving information. It is a separate field within computer science (closer to databases), but IR relies on some NLP methods (for example, stemming). Some current research and applications seek to bridge the gap between IR and NLP. Knowledge representation (KR) – area of artificial intelligence research aimed at representing knowledge in symbols to facilitate inferencing from those knowledge elements, creating new elements of knowledge. Knowledge representation research involves analysis of how to reason accurately and effectively and how best to use a set of symbols to represent a set of facts within a knowledge domain. Semantic network – graph-based representation of semantic relations between concepts. Semantic Web – Machine learning – subfield of computer science that examines pattern recognition and computational learning theory in artificial intelligence. There are three broad approaches to machine learning. Supervised learning occurs when the machine is given example inputs and outputs by a teacher so that it can learn a rule that maps inputs to outputs. Unsupervised learning occurs when the machine must discover structure in its inputs without being given example inputs or outputs. Reinforcement learning occurs when a machine must achieve a goal, guided by rewards or penalties rather than by explicit examples from a teacher. Pattern recognition – branch of machine learning that examines how machines recognize regularities in data. As with machine learning, teachers can train machines to recognize patterns by providing them with example inputs and outputs (i.e. supervised learning), or the machines can recognize patterns without being trained on any example inputs or outputs (i.e. unsupervised learning). Statistical classification – assignment of items to one of a set of predefined categories on the basis of a training set of previously categorized items (a minimal text-classification sketch appears below). Structures used in natural language processing Anaphora – type of expression whose reference depends upon another referential element. E.g., in the sentence 'Sally preferred the company of herself', 'herself' is an anaphoric expression in that it is coreferential with 'Sally', the sentence's subject. Context-free language – Controlled natural language – a natural language with a restriction introduced on its grammar and vocabulary in order to eliminate ambiguity and complexity. Corpus – body of data, optionally tagged (for example, through part-of-speech tagging), providing real world samples for analysis and comparison. Text corpus – large and structured set of texts, nowadays usually electronically stored and processed. They are used to do statistical analysis and hypothesis testing, checking occurrences or validating linguistic rules within a specific subject (or domain). Speech corpus – database of speech audio files and text transcriptions. In speech technology, speech corpora are used, among other things, to create acoustic models (which can then be used with a speech recognition engine). In linguistics, spoken corpora are used to do research into phonetics, conversation analysis, dialectology and other fields.
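The statistical classification entry above can be made concrete with a small sketch. Assuming the scikit-learn library is available, the following trains a naive Bayes classifier on bag-of-words counts over a tiny training corpus; the category names and example sentences are hypothetical and chosen only for illustration.

# Minimal text-classification sketch (hypothetical toy data, assumes scikit-learn).
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

train_texts = ["the match ended in a draw", "shares fell sharply today",
               "the striker scored twice", "the market rallied after the report"]
train_labels = ["sports", "finance", "sports", "finance"]

vectorizer = CountVectorizer()                  # bag-of-words feature counts
X_train = vectorizer.fit_transform(train_texts)
classifier = MultinomialNB().fit(X_train, train_labels)

X_new = vectorizer.transform(["the market closed with record gains"])
print(classifier.predict(X_new)[0])             # predicted category (here, "finance")

In practice the same pipeline is applied to far larger annotated corpora with richer features, but the division into feature extraction followed by statistical classification stays the same.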
Grammar – Context-free grammar (CFG) – Constraint grammar (CG) – Definite clause grammar (DCG) – Functional unification grammar (FUG) – Generalized phrase structure grammar (GPSG) – Head-driven phrase structure grammar (HPSG) – Lexical functional grammar (LFG) – Probabilistic context-free grammar (PCFG) – another name for stochastic context-free grammar. Stochastic context-free grammar (SCFG) – Systemic functional grammar (SFG) – Tree-adjoining grammar (TAG) – Natural language – n-gram – sequence of n number of tokens, where a "token" is a character, syllable, or word. The n is replaced by a number. Therefore, a 5-gram is an n-gram of 5 letters, syllables, or words. "Eat this" is a 2-gram (also known as a bigram). Bigram – n-gram of 2 tokens. Every sequence of 2 adjacent elements in a string of tokens is a bigram. Bigrams are used for speech recognition, they can be used to solve cryptograms, and bigram frequency is one approach to statistical language identification. Trigram – special case of the n-gram, where n is 3. Ontology – formal representation of a set of concepts within a domain and the relationships between those concepts. Taxonomy – practice and science of classification, including the principles underlying classification, and the methods of classifying things or concepts. Hyponymy and hypernymy – the linguistics of hyponyms and hypernyms. A hyponym shares a type-of relationship with its hypernym. For example, pigeon, crow, eagle and seagull are all hyponyms of bird (their hypernym); which, in turn, is a hyponym of animal. Taxonomy for search engines – typically called a "taxonomy of entities". It is a tree in which nodes are labelled with entities which are expected to occur in a web search query. These trees are used to match keywords from a search query with the keywords from relevant answers (or snippets). Textual entailment – directional relation between text fragments. The relation holds whenever the truth of one text fragment follows from another text. In the TE framework, the entailing and entailed texts are termed text (t) and hypothesis (h), respectively. The relation is directional because even if "t entails h", the reverse "h entails t" is much less certain. Triphone – sequence of three phonemes. Triphones are useful in models of natural language processing where they are used to establish the various contexts in which a phoneme can occur in a particular natural language. Processes of NLP Applications Automated essay scoring (AES) – the use of specialized computer programs to assign grades to essays written in an educational setting. It is a method of educational assessment and an application of natural language processing. Its objective is to classify a large set of textual entities into a small number of discrete categories, corresponding to the possible grades—for example, the numbers 1 to 6. Therefore, it can be considered a problem of statistical classification. Automatic image annotation – process by which a computer system automatically assigns textual metadata in the form of captioning or keywords to a digital image. The annotations are used in image retrieval systems to organize and locate images of interest from a database. Automatic summarization – process of reducing a text document with a computer program in order to create a summary that retains the most important points of the original document. Often used to provide summaries of text of a known type, such as articles in the financial section of a newspaper. 
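The n-gram, bigram and trigram entries above can be illustrated in a few lines of Python; the sample sentence is made up, and real systems would normally apply proper tokenization first.

# Word-level n-gram extraction for the n-gram entries above (toy example).
def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

tokens = "eat this sandwich before it gets cold".split()
print(ngrams(tokens, 2))   # bigrams such as ('eat', 'this') and ('this', 'sandwich')
print(ngrams(tokens, 3))   # trigrams, e.g. ('eat', 'this', 'sandwich')

Counting how often each n-gram occurs in a large corpus is the starting point for the statistical language models mentioned elsewhere in this outline.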
Types Keyphrase extraction – Document summarization – Multi-document summarization – Methods and techniques Extraction-based summarization – Abstraction-based summarization – Maximum entropy-based summarization – Sentence extraction – Aided summarization – Human aided machine summarization (HAMS) – Machine aided human summarization (MAHS) – Automatic taxonomy induction – automated construction of tree structures from a corpus. This may be applied to building taxonomical classification systems for reading by end users, such as web directories or subject outlines. Coreference resolution – in order to derive the correct interpretation of text, or even to estimate the relative importance of various mentioned subjects, pronouns and other referring expressions need to be connected to the right individuals or objects. Given a sentence or larger chunk of text, coreference resolution determines which words ("mentions") refer to which objects ("entities") included in the text. Anaphora resolution – concerned with matching up pronouns with the nouns or names that they refer to. For example, in a sentence such as "He entered John's house through the front door", "the front door" is a referring expression and the bridging relationship to be identified is the fact that the door being referred to is the front door of John's house (rather than of some other structure that might also be referred to). Dialog system – Foreign-language reading aid – computer program that assists a non-native language user to read properly in their target language. The proper reading means that the pronunciation should be correct and stress to different parts of the words should be proper. Foreign language writing aid – computer program or any other instrument that assists a non-native language user (also referred to as a foreign language learner) in writing decently in their target language. Assistive operations can be classified into two categories: on-the-fly prompts and post-writing checks. Grammar checking – the act of verifying the grammatical correctness of written text, especially if this act is performed by a computer program. Information retrieval – Cross-language information retrieval – Machine translation (MT) – aims to automatically translate text from one human language to another. This is one of the most difficult problems, and is a member of a class of problems colloquially termed "AI-complete", i.e. requiring all of the different types of knowledge that humans possess (grammar, semantics, facts about the real world, etc.) in order to solve properly. Classical approach of machine translation – rules-based machine translation. Computer-assisted translation – Interactive machine translation – Translation memory – database that stores so-called "segments", which can be sentences, paragraphs or sentence-like units (headings, titles or elements in a list) that have previously been translated, in order to aid human translators. Example-based machine translation – Rule-based machine translation – Natural language programming – interpreting and compiling instructions communicated in natural language into computer instructions (machine code). Natural language search – Optical character recognition (OCR) – given an image representing printed text, determine the corresponding text. Question answering – given a human-language question, determine its answer. 
Typical questions have a specific right answer (such as "What is the capital of Canada?"), but sometimes open-ended questions are also considered (such as "What is the meaning of life?"). Open domain question answering – Spam filtering – Sentiment analysis – extracts subjective information, usually from a set of documents, often using online reviews to determine "polarity" about specific objects. It is especially useful for identifying trends of public opinion in social media, for example for marketing purposes (a small polarity-counting sketch appears below). Speech recognition – given a sound clip of a person or people speaking, determine the textual representation of the speech. This is the opposite of text-to-speech and is one of the extremely difficult problems colloquially termed "AI-complete" (see above). In natural speech there are hardly any pauses between successive words, and thus speech segmentation is a necessary subtask of speech recognition (see below). In most spoken languages, the sounds representing successive letters blend into each other in a process termed coarticulation, so the conversion of the analog signal to discrete characters can be a very difficult process. Speech synthesis (Text-to-speech) – Text-proofing – Text simplification – automated editing of a document so that it uses fewer or simpler words while retaining its underlying meaning and information. Component processes Natural language understanding – converts chunks of text into more formal representations, such as first-order logic structures, that are easier for computer programs to manipulate. Natural language understanding involves identifying the intended meaning among the multiple possible interpretations that can be derived from a natural language expression, which usually takes the form of organized notations of natural language concepts. Introducing a language metamodel and ontology is an efficient, though empirical, solution. An explicit formalization of natural language semantics, one that avoids confusion with implicit assumptions such as the closed-world assumption (CWA) versus the open-world assumption, or subjective yes/no versus objective true/false, is expected to provide the basis for formalizing semantics. Natural language generation – task of converting information from computer databases into readable human language. Component processes of natural language understanding Automatic document classification (text categorization) – Automatic language identification – Compound term processing – category of techniques that identify compound terms and match them to their definitions. Compound terms are built by combining two (or more) simple terms; for example, "triple" is a single-word term but "triple heart bypass" is a compound term. Automatic taxonomy induction – Corpus processing – Automatic acquisition of lexicon – Text normalization – Text simplification – Deep linguistic processing – Discourse analysis – includes a number of related tasks. One task is identifying the discourse structure of connected text, i.e. the nature of the discourse relationships between sentences (e.g. elaboration, explanation, contrast). Another possible task is recognizing and classifying the speech acts in a chunk of text (e.g. yes-no questions, content questions, statements, assertions, orders, suggestions, etc.). Information extraction – Text mining – process of deriving high-quality information from text. High-quality information is typically obtained by discovering patterns and trends through means such as statistical pattern learning.
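As a rough illustration of the sentiment analysis task described above, the sketch below counts matches against tiny positive and negative word lists; the lexicon and the sample review are invented, and practical systems use much larger lexicons or trained classifiers.

# Minimal lexicon-based polarity sketch (hypothetical word lists and review text).
POSITIVE = {"good", "great", "excellent", "love", "reliable"}
NEGATIVE = {"bad", "poor", "terrible", "hate", "unreliable"}

def polarity(text):
    words = text.lower().split()
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    return "positive" if score > 0 else "negative" if score < 0 else "neutral"

print(polarity("great camera but the battery is terrible and the app is unreliable"))
# prints "negative": one positive match against two negative matches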
Biomedical text mining – (also known as BioNLP), this is text mining applied to texts and literature of the biomedical and molecular biology domain. It is a rather recent research field drawing elements from natural language processing, bioinformatics, medical informatics and computational linguistics. There is an increasing interest in text mining and information extraction strategies applied to the biomedical and molecular biology literature due to the increasing number of electronically available publications stored in databases such as PubMed. Decision tree learning – Sentence extraction – Terminology extraction – Latent semantic indexing – Lemmatisation – groups together all like terms that share a same lemma such that they are classified as a single item. Morphological segmentation – separates words into individual morphemes and identifies the class of the morphemes. The difficulty of this task depends greatly on the complexity of the morphology (i.e. the structure of words) of the language being considered. English has fairly simple morphology, especially inflectional morphology, and thus it is often possible to ignore this task entirely and simply model all possible forms of a word (e.g. "open, opens, opened, opening") as separate words. In languages such as Turkish, however, such an approach is not possible, as each dictionary entry has thousands of possible word forms. Named entity recognition (NER) – given a stream of text, determines which items in the text map to proper names, such as people or places, and what the type of each such name is (e.g. person, location, organization). Although capitalization can aid in recognizing named entities in languages such as English, this information cannot aid in determining the type of named entity, and in any case is often inaccurate or insufficient. For example, the first word of a sentence is also capitalized, and named entities often span several words, only some of which are capitalized. Furthermore, many other languages in non-Western scripts (e.g. Chinese or Arabic) do not have any capitalization at all, and even languages with capitalization may not consistently use it to distinguish names. For example, German capitalizes all nouns, regardless of whether they refer to names, and French and Spanish do not capitalize names that serve as adjectives. Ontology learning – automatic or semi-automatic creation of ontologies, including extracting the corresponding domain's terms and the relationships between those concepts from a corpus of natural language text, and encoding them with an ontology language for easy retrieval. Also called "ontology extraction", "ontology generation", and "ontology acquisition". Parsing – determines the parse tree (grammatical analysis) of a given sentence. The grammar for natural languages is ambiguous and typical sentences have multiple possible analyses. In fact, perhaps surprisingly, for a typical sentence there may be thousands of potential parses (most of which will seem completely nonsensical to a human). Shallow parsing – Part-of-speech tagging – given a sentence, determines the part of speech for each word. Many words, especially common ones, can serve as multiple parts of speech. For example, "book" can be a noun ("the book on the table") or verb ("to book a flight"); "set" can be a noun, verb or adjective; and "out" can be any of at least five different parts of speech. Some languages have more such ambiguity than others. 
Languages with little inflectional morphology, such as English are particularly prone to such ambiguity. Chinese is prone to such ambiguity because it is a tonal language during verbalization. Such inflection is not readily conveyed via the entities employed within the orthography to convey intended meaning. Query expansion – Relationship extraction – given a chunk of text, identifies the relationships among named entities (e.g. who is the wife of whom). Semantic analysis (computational) – formal analysis of meaning, and "computational" refers to approaches that in principle support effective implementation. Explicit semantic analysis – Latent semantic analysis – Semantic analytics – Sentence breaking (also known as sentence boundary disambiguation and sentence detection) – given a chunk of text, finds the sentence boundaries. Sentence boundaries are often marked by periods or other punctuation marks, but these same characters can serve other purposes (e.g. marking abbreviations). Speech segmentation – given a sound clip of a person or people speaking, separates it into words. A subtask of speech recognition and typically grouped with it. Stemming – reduces an inflected or derived word into its word stem, base, or root form. Text chunking – Tokenization – given a chunk of text, separates it into distinct words, symbols, sentences, or other units Topic segmentation and recognition – given a chunk of text, separates it into segments each of which is devoted to a topic, and identifies the topic of the segment. Truecasing – Word segmentation – separates a chunk of continuous text into separate words. For a language like English, this is fairly trivial, since words are usually separated by spaces. However, some written languages like Chinese, Japanese and Thai do not mark word boundaries in such a fashion, and in those languages text segmentation is a significant task requiring knowledge of the vocabulary and morphology of words in the language. Word sense disambiguation (WSD) – because many words have more than one meaning, word sense disambiguation is used to select the meaning which makes the most sense in context. For this problem, we are typically given a list of words and associated word senses, e.g. from a dictionary or from an online resource such as WordNet. Word-sense induction – open problem of natural language processing, which concerns the automatic identification of the senses of a word (i.e. meanings). Given that the output of word-sense induction is a set of senses for the target word (sense inventory), this task is strictly related to that of word-sense disambiguation (WSD), which relies on a predefined sense inventory and aims to solve the ambiguity of words in context. Automatic acquisition of sense-tagged corpora – W-shingling – set of unique "shingles"—contiguous subsequences of tokens in a document—that can be used to gauge the similarity of two documents. The w denotes the number of tokens in each shingle in the set. Component processes of natural language generation Natural language generation – task of converting information from computer databases into readable human language. Automatic taxonomy induction (ATI) – automated building of tree structures from a corpus. 
While ATI is used to construct the core of ontologies (and doing so makes it a component process of natural language understanding), when the ontologies being constructed are end user readable (such as a subject outline), and these are used for the construction of further documentation (such as using an outline as the basis to construct a report or treatise) this also becomes a component process of natural language generation. Document structuring – History of natural language processing History of natural language processing History of machine translation History of automated essay scoring History of natural language user interface History of natural language understanding History of optical character recognition History of question answering History of speech synthesis Turing test – test of a machine's ability to exhibit intelligent behavior, equivalent to or indistinguishable from, that of an actual human. In the original illustrative example, a human judge engages in a natural language conversation with a human and a machine designed to generate performance indistinguishable from that of a human being. All participants are separated from one another. If the judge cannot reliably tell the machine from the human, the machine is said to have passed the test. The test was introduced by Alan Turing in his 1950 paper "Computing Machinery and Intelligence," which opens with the words: "I propose to consider the question, 'Can machines think?'" Universal grammar – theory in linguistics, usually credited to Noam Chomsky, proposing that the ability to learn grammar is hard-wired into the brain. The theory suggests that linguistic ability manifests itself without being taught (see poverty of the stimulus), and that there are properties that all natural human languages share. It is a matter of observation and experimentation to determine precisely what abilities are innate and what properties are shared by all languages. ALPAC – was a committee of seven scientists led by John R. Pierce, established in 1964 by the U. S. Government in order to evaluate the progress in computational linguistics in general and machine translation in particular. Its report, issued in 1966, gained notoriety for being very skeptical of research done in machine translation so far, and emphasizing the need for basic research in computational linguistics; this eventually caused the U. S. Government to reduce its funding of the topic dramatically. Conceptual dependency theory – a model of natural language understanding used in artificial intelligence systems. Roger Schank at Stanford University introduced the model in 1969, in the early days of artificial intelligence. This model was extensively used by Schank's students at Yale University such as Robert Wilensky, Wendy Lehnert, and Janet Kolodner. Augmented transition network – type of graph theoretic structure used in the operational definition of formal languages, used especially in parsing relatively complex natural languages, and having wide application in artificial intelligence. Introduced by William A. Woods in 1970. Distributed Language Translation (project) – Timeline of NLP software General natural language processing concepts Sukhotin's algorithm – statistical classification algorithm for classifying characters in a text as vowels or consonants. It was initially created by Boris V. Sukhotin. 
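Sukhotin's algorithm, the last entry above, is simple enough to sketch directly. The code below follows the usual description of the algorithm (symmetric adjacency counts between distinct letters, then repeatedly promoting the letter with the largest positive remaining sum to the vowel class); the sample string is arbitrary, and on such a tiny input the result is only a rough approximation.

# Sketch of Sukhotin's vowel/consonant classification (toy input, approximate output).
from collections import defaultdict

def sukhotin(text):
    words = [w for w in text.lower().split() if w.isalpha()]
    adj = defaultdict(int)                       # adjacency counts between distinct letters
    for w in words:
        for a, b in zip(w, w[1:]):
            if a != b:
                adj[(a, b)] += 1
                adj[(b, a)] += 1
    letters = sorted({c for w in words for c in w})
    sums = {c: sum(adj[(c, d)] for d in letters) for c in letters}
    vowels = set()
    while True:
        c = max(sums, key=sums.get)              # letter with the largest remaining sum
        if sums[c] <= 0:
            break
        vowels.add(c)
        for d in letters:
            sums[d] -= 2 * adj[(d, c)]           # discount neighbours of the new vowel
        sums[c] = float("-inf")
    return vowels

print(sorted(sukhotin("the quick brown fox jumps over the lazy dog")))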
T9 (predictive text) – short for "Text on 9 keys", a US-patented predictive text technology for mobile phones (specifically those that contain a 3x4 numeric keypad), originally developed by Tegic Communications, now part of Nuance Communications. Tatoeba – free collaborative online database of example sentences geared towards foreign language learners. Teragram Corporation – fully owned subsidiary of SAS Institute, a major producer of statistical analysis software, headquartered in Cary, North Carolina, USA. Teragram is based in Cambridge, Massachusetts and specializes in the application of computational linguistics to multilingual natural language processing. TipTop Technologies – company that developed TipTop Search, a real-time web, social search engine with a unique platform for semantic analysis of natural language. TipTop Search provides results capturing individual and group sentiment, opinions, and experiences from content of various sorts including real-time messages from Twitter or consumer product reviews on Amazon.com. Transderivational search – search conducted for a fuzzy match across a broad field. In computing, the equivalent function can be performed using content-addressable memory. Vocabulary mismatch – common phenomenon in the usage of natural languages, occurring when different people name the same thing or concept differently. LRE Map – Reification (linguistics) – Semantic Web – Metadata – Spoken dialogue system – Affix grammar over a finite lattice – Aggregation (linguistics) – Bag-of-words model – model that represents a text as a bag (multiset) of its words, disregarding grammar and word sequence but maintaining multiplicity. This model is commonly used to train document classifiers. Brill tagger – Cache language model – ChaSen, MeCab – provide morphological analysis and word splitting for Japanese. Classic monolingual WSD – ClearForest – CMU Pronouncing Dictionary – also known as cmudict, a public-domain pronouncing dictionary designed for use in speech technology, created by Carnegie Mellon University (CMU). It defines a mapping from English words to their North American pronunciations, and is commonly used in speech processing applications such as the Festival Speech Synthesis System and the CMU Sphinx speech recognition system. Concept mining – Content determination – DATR – DBpedia Spotlight – Deep linguistic processing – Discourse relation – Document-term matrix – Dragomir R.
Radev – ETBLAST – Filtered-popping recursive transition network – Robby Garner – GeneRIF – Gorn address – Grammar induction – Grammatik – Hashing-Trick – Hidden Markov model – Human language technology – Information extraction – International Conference on Language Resources and Evaluation – Kleene star – Language Computer Corporation – Language model – Languageware – Latent semantic mapping – Legal information retrieval – Lesk algorithm – Lessac Technologies – Lexalytics – Lexical choice – Lexical Markup Framework – Lexical substitution – LKB – Logic form – LRE Map – Machine translation software usability – MAREC – Maximum entropy – Message Understanding Conference – METEOR – Minimal recursion semantics – Morphological pattern – Multi-document summarization – Multilingual notation – Naive semantics – Natural language – Natural language interface – Natural language user interface – News analytics – Nondeterministic polynomial – Open domain question answering – Optimality theory – Paco Nathan – Phrase structure grammar – Powerset (company) – Production (computer science) – PropBank – Question answering – Realization (linguistics) – Recursive transition network – Referring expression generation – Rewrite rule – Semantic compression – Semantic neural network – SemEval – SPL notation – Stemming – reduces an inflected or derived word into its word stem, base, or root form. String kernel – Natural language processing tools Google Ngram Viewer – graphs n-gram usage from a corpus of more than 5.2 million books Corpora Text corpus (see list) – large and structured set of texts (nowadays usually electronically stored and processed). They are used to do statistical analysis and hypothesis testing, checking occurrences or validating linguistic rules within a specific language territory. Bank of English British National Corpus Corpus of Contemporary American English (COCA) Oxford English Corpus Natural language processing toolkits The following natural language processing toolkits are notable collections of natural language processing software. They are suites of libraries, frameworks, and applications for symbolic, statistical natural language and speech processing. Named entity recognizers ABNER (A Biomedical Named Entity Recognizer) – open source text mining program that uses linear-chain conditional random field sequence models. It automatically tags genes, proteins and other entity names in text. Written by Burr Settles of the University of Wisconsin-Madison. Stanford NER (Named Entity Recognizer) — Java implementation of a Named Entity Recognizer that uses linear-chain conditional random field sequence models. It automatically tags persons, organizations, and locations in text in English, German, Chinese, and Spanish languages. Written by Jenny Finkel and other members of the Stanford NLP Group at Stanford University. Translation software Comparison of machine translation applications Machine translation applications Google Translate DeepL Linguee – web service that provides an online dictionary for a number of language pairs. Unlike similar services, such as LEO, Linguee incorporates a search engine that provides access to large amounts of bilingual, translated sentence pairs, which come from the World Wide Web. As a translation aid, Linguee therefore differs from machine translation services like Babelfish and is more similar in function to a translation memory. Hindi-to-Punjabi Machine Translation System UNL Universal Networking Language Yahoo! 
Babel Fish Reverso Other software CTAKES – open-source natural language processing system for information extraction from electronic medical record clinical free-text. It processes clinical notes, identifying types of clinical named entities: drugs, diseases/disorders, signs/symptoms, anatomical sites and procedures. Each named entity has attributes for the text span, the ontology mapping code, context (family history of, current, unrelated to patient), and negated/not negated. Also known as Apache cTAKES. DMAP – ETAP-3 – proprietary linguistic processing system focusing on English and Russian. It is a rule-based system which uses the Meaning-Text Theory as its theoretical foundation. JAPE – the Java Annotation Patterns Engine, a component of the open-source General Architecture for Text Engineering (GATE) platform. JAPE is a finite state transducer that operates over annotations based on regular expressions. LOLITA – "Large-scale, Object-based, Linguistic Interactor, Translator and Analyzer". LOLITA was developed by Roberto Garigliano and colleagues between 1986 and 2000. It was designed as a general-purpose tool for processing unrestricted text that could be the basis of a wide variety of applications. At its core was a semantic network containing some 90,000 interlinked concepts. Maluuba – intelligent personal assistant for Android devices that uses a contextual approach to search which takes into account the user's geographic location, contacts, and language. METAL MT – machine translation system developed in the 1980s at the University of Texas and at Siemens which ran on Lisp Machines. Never-Ending Language Learning – semantic machine learning system developed by a research team at Carnegie Mellon University, and supported by grants from DARPA, Google, and the NSF, with portions of the system running on a supercomputing cluster provided by Yahoo!. NELL was programmed by its developers to be able to identify a basic set of fundamental semantic relationships between a few hundred predefined categories of data, such as cities, companies, emotions and sports teams. Since the beginning of 2010, the Carnegie Mellon research team has been running NELL around the clock, sifting through hundreds of millions of web pages looking for connections between the information it already knows and what it finds through its search process, making new connections in a manner that is intended to mimic the way humans learn new information. NLTK – Online-translator.com – Regulus Grammar Compiler – software system for compiling unification grammars into grammars for speech recognition systems. S Voice – Siri (software) – Speaktoit – TeLQAS – Weka's classification tools – word2vec – models developed by a team of researchers led by Tomas Mikolov at Google to generate word embeddings: shallow, two-layer neural networks are trained to reconstruct the linguistic contexts of words, mapping each word to a vector in a continuous vector space. Festival Speech Synthesis System – CMU Sphinx speech recognition system – Language Grid – open-source platform for language web services, which can customize language services by combining existing language services. Chatterbots Chatterbot – a text-based conversation agent that can interact with human users through some medium, such as an instant message service. Some chatterbots are designed for specific purposes, while others converse with human users on a wide range of topics. Classic chatterbots Dr.
Sbaitso ELIZA PARRY Racter (or Claude Chatterbot) Mark V Shaney General chatterbots Albert One - 1998 and 1999 Loebner winner, by Robby Garner. A.L.I.C.E. - 2001, 2002, and 2004 Loebner Prize winner developed by Richard Wallace. Charlix Cleverbot (winner of the 2010 Mechanical Intelligence Competition) Elbot - 2008 Loebner Prize winner, by Fred Roberts. Eugene Goostman - 2012 Turing 100 winner, by Vladimir Veselov. Fred - an early chatterbot by Robby Garner. Jabberwacky Jeeney AI MegaHAL Mitsuku, 2013 and 2016 Loebner Prize winner Rose - ... 2015 - 3x Loebner Prize winner, by Bruce Wilcox. SimSimi - A popular artificial intelligence conversation program that was created in 2002 by ISMaker. Spookitalk - A chatterbot used for NPCs in Douglas Adams' Starship Titanic video game. Ultra Hal - 2007 Loebner Prize winner, by Robert Medeksza. Verbot Instant messenger chatterbots GooglyMinotaur, specializing in Radiohead, the first bot released by ActiveBuddy (June 2001-March 2002) SmarterChild, developed by ActiveBuddy and released in June 2001 Infobot, an assistant on IRC channels such as #perl, primarily to help out with answering Frequently Asked Questions (June 1995-today) Negobot, a bot designed to catch online pedophiles by posing as a young girl and attempting to elicit personal details from people it speaks to. Natural language processing organizations AFNLP (Asian Federation of Natural Language Processing Associations) – the organization for coordinating the natural language processing related activities and events in the Asia-Pacific region. Australasian Language Technology Association – Association for Computational Linguistics – international scientific and professional society for people working on problems involving natural language processing. Natural language processing-related conferences Annual Meeting of the Association for Computational Linguistics (ACL) International Conference on Intelligent Text Processing and Computational Linguistics (CICLing) International Conference on Language Resources and Evaluation – biennial conference organised by the European Language Resources Association with the support of institutions and organisations involved in Natural language processing Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL) Text, Speech and Dialogue (TSD) – annual conference Text Retrieval Conference (TREC) – on-going series of workshops focusing on various information retrieval (IR) research areas, or tracks Companies involved in natural language processing AlchemyAPI – service provider of a natural language processing API. Google, Inc. – the Google search engine is an example of automatic summarization, utilizing keyphrase extraction. Calais (Reuters product) – provider of a natural language processing services. Wolfram Research, Inc. developer of natural language processing computation engine Wolfram Alpha. Natural language processing publications Books Connectionist, Statistical and Symbolic Approaches to Learning for Natural Language Processing – Wermter, S., Riloff E. and Scheler, G. (editors). First book that addressed statistical and neural network learning of language. Speech and Language Processing: An Introduction to Natural Language Processing, Speech Recognition, and Computational Linguistics – by Daniel Jurafsky and James H. Martin. Introductory book on language technology. 
Book series Studies in Natural Language Processing – book series of the Association for Computational Linguistics, published by Cambridge University Press. Journals Computational Linguistics – peer-reviewed academic journal in the field of computational linguistics. It is published quarterly by MIT Press for the Association for Computational Linguistics (ACL). People influential in natural language processing Daniel Bobrow – Rollo Carpenter – creator of Jabberwacky and Cleverbot. Noam Chomsky – author of the seminal work Syntactic Structures, which revolutionized linguistics with 'universal grammar', a rule-based system of syntactic structures. Kenneth Colby – David Ferrucci – principal investigator of the team that created Watson, IBM's AI computer that won the quiz show Jeopardy! Lyn Frazier – Daniel Jurafsky – Professor of Linguistics and Computer Science at Stanford University. With James H. Martin, he wrote the textbook Speech and Language Processing: An Introduction to Natural Language Processing, Speech Recognition, and Computational Linguistics. Roger Schank – introduced the conceptual dependency theory for natural language understanding. Jean E. Fox Tree – Alan Turing – originator of the Turing Test. Joseph Weizenbaum – author of the ELIZA chatterbot. Terry Winograd – professor of computer science at Stanford University, and co-director of the Stanford Human-Computer Interaction Group. He is known within the philosophy of mind and artificial intelligence fields for his work on natural language using the SHRDLU program. William Aaron Woods – Maurice Gross – author of the concept of local grammar, taking finite automata as the competence model of language. Stephen Wolfram – CEO and founder of Wolfram Research, creator of the Wolfram Language (a knowledge-based programming language with built-in natural language understanding) and of the natural language processing computation engine Wolfram Alpha. Victor Yngve – See also References Bibliography External links Natural language processing Natural language processing
4739013
https://en.wikipedia.org/wiki/MTR%20%28software%29
MTR (software)
My traceroute, originally named Matt's traceroute (MTR), is a computer program which combines the functions of the traceroute and ping programs in one network diagnostic tool. MTR probes routers on the route path by limiting the number of hops individual packets may traverse, and listening for the responses that report their expiry. It regularly repeats this process, usually once per second, and keeps track of the response times of the hops along the path. History The original Matt's traceroute program was written by Matt Kimball in 1997. Roger Wolff took over maintaining MTR (renamed My traceroute) in October 1998. Fundamentals MTR is licensed under the terms of the GNU General Public License (GPL) and works under modern Unix-like operating systems. It normally works under the text console, but it also has an optional GTK+-based graphical user interface (GUI). MTR relies on Internet Control Message Protocol (ICMP) Time Exceeded (type 11, code 0) packets coming back from routers, or ICMP Echo Reply packets when the packets have hit their destination host. MTR also has a User Datagram Protocol (UDP) mode (invoked with "-u" on the command line or by pressing the "u" key in the curses interface) that sends UDP packets, with the time to live (TTL) field in the IP header increasing by one for each probe sent, toward the destination host. When the UDP mode is used, MTR relies on ICMP port unreachable packets (type 3, code 3) when the destination is reached. MTR also supports IPv6, working in a similar manner but relying on ICMPv6 messages. The tool is often used for network troubleshooting. By showing a list of routers traversed, and the average round-trip time as well as packet loss to each router, it allows users to identify the links between two given routers responsible for certain fractions of the overall latency or packet loss through the network. This can help identify network overuse problems. Examples This example shows MTR running on Linux tracing a route from the host machine (example.lan) to a web server at Yahoo! (p25.www.re2.yahoo.com) across the Level 3 Communications network.

                              My traceroute  [v0.71]
 example.lan                                            Sun Mar 25 00:07:50 2007
                                      Packets               Pings
 Hostname                            %Loss  Rcv  Snt  Last  Best  Avg  Worst
  1. example.lan                       0%    11   11     1     1    1      2
  2. ae-31-51.ebr1.Chicago1.Level3.n  19%     9   11     3     1    7     14
  3. ae-1.ebr2.Chicago1.Level3.net     0%    11   11     7     1    7     14
  4. ae-2.ebr2.Washington1.Level3.ne  19%     9   11    19    18   23     31
  5. ae-1.ebr1.Washington1.Level3.ne  28%     8   11    22    18   24     30
  6. ge-3-0-0-53.gar1.Washington1.Le   0%    11   11    18    18   20     36
  7. 63.210.29.230                     0%    10   10    19    19   19     19
  8. t-3-1.bas1.re2.yahoo.com          0%    10   10    19    18   32    106
  9. p25.www.re2.yahoo.com             0%    10   10    19    18   19     19

An additional example below shows a recent version of MTR running on FreeBSD. MPLS labels are displayed by default when the "-e" switch is used on the command line (or the "e" key is pressed in the curses interface):

                                   My traceroute  [v0.82]
 dax.prolixium.com (0.0.0.0)                                  Sun Jan 1 12:58:02 2012
 Keys:  Help   Display mode   Restart statistics   Order of fields   quit
                                                 Packets               Pings
  Host                                         Loss%   Snt   Last   Avg  Best  Wrst StDev
  1. voxel.prolixium.net                        0.0%    13    0.4   1.7   0.4  10.4   3.2
  2. 0.ae2.tsr1.lga5.us.voxel.net               0.0%    12   10.8   2.9   0.2  10.8   4.3
  3. 0.ae59.tsr1.lga3.us.voxel.net              0.0%    12    0.4   1.7   0.4  16.0   4.5
  4. rtr.loss.net.internet2.edu                 0.0%    12    4.8   7.4   0.3  41.8  15.4
  5. 64.57.21.210                               0.0%    12    5.4  15.7   5.3 126.7  35.0
  6. nox1sumgw1-vl-530-nox-mit.nox.org          0.0%    12  109.5  60.6  23.0 219.5  66.0
     [MPLS: Lbl 172832 Exp 0 S 1 TTL 1]
  7. nox1sumgw1-peer--207-210-142-234.nox.org   0.0%    12   25.0  23.2  23.0  25.0   0.6
  8. B24-RTR-2-BACKBONE-2.MIT.EDU               0.0%    12   23.2  23.4  23.2  24.9   0.5
  9. MITNET.TRANTOR.CSAIL.MIT.EDU               0.0%    12   23.4  23.4  23.3  23.5   0.1
 10. trantor.helicon.csail.mit.edu              0.0%    12   23.7  25.0  23.5  26.5   1.3
 11. zermatt.csail.mit.edu                      0.0%    12   23.1  23.1  23.1  23.3   0.1

Windows versions WinMTR is a Windows GUI application functionally equivalent to MTR. It was originally developed by Appnor MSP S.R.L.; it is now maintained by White-Tiger. Although it is very similar, WinMTR shares no common code with MTR. A console version of MTR does exist for Windows, but it has fewer features than MTR on other platforms. See also traceroute Ping (networking utility) PathPing - a network utility supplied with Windows 2000 and later that combines the functions of ping with those of traceroute, or tracert Bufferbloat References External links MTR manual page MTR, BitWizard's MTR page with Unix downloads WinMTR, the equivalent of MTR for Windows platforms WinMTR (Redux), fork of WinMTR, maintained by René Schümann aka White-Tiger Free network-related software Network analyzers
60766435
https://en.wikipedia.org/wiki/Express%20Data%20Path
Express Data Path
XDP (eXpress Data Path) is an eBPF-based high-performance data path merged into the Linux kernel in version 4.8. Data path The idea behind XDP is to add an early hook in the RX path of the kernel, and let a user-supplied eBPF program decide the fate of the packet. The hook is placed in the network interface controller (NIC) driver just after the interrupt processing, and before any memory allocation needed by the network stack itself, because memory allocation can be an expensive operation. Due to this design, XDP can drop 26 million packets per second per core with commodity hardware. The eBPF program must pass a preverifier test before being loaded, to avoid executing malicious code in kernel space. The preverifier checks that the program contains no out-of-bounds accesses, loops or global variables. The program is allowed to edit the packet data and, after the eBPF program returns, an action code determines what to do with the packet: XDP_PASS: let the packet continue through the network stack XDP_DROP: silently drop the packet XDP_ABORTED: drop the packet with a tracepoint exception XDP_TX: bounce the packet back to the same NIC it arrived on XDP_REDIRECT: redirect the packet to another NIC or to a user-space socket via the AF_XDP address family XDP requires support in the NIC driver but, as not all drivers support it, it can fall back to a generic implementation, which performs the eBPF processing in the network stack, though with slower performance. XDP has infrastructure to offload the eBPF program to a network interface controller which supports it, reducing the CPU load. At the time of writing, only Netronome cards supported it, with Intel and Mellanox working on support. AF_XDP Along with XDP, a new address family entered the Linux kernel starting with version 4.18. AF_XDP, formerly known as AF_PACKETv4 (which was never included in the mainline kernel), is a raw socket optimized for high-performance packet processing and allows zero-copy transfers between the kernel and applications. As the socket can be used for both receiving and transmitting, it supports high-performance network applications purely in user space. References External links XDP documentation on Read the Docs AF_XDP documentation on kernel.org XDP walkthrough at FOSDEM 2017 by Daniel Borkmann, Cilium AF_XDP at FOSDEM 2018 by Magnus Karlsson, Intel eBPF.io - Introduction, Tutorials & Community Resources L4Drop: XDP DDoS Mitigations, Cloudflare Unimog: Cloudflare's edge load balancer, Cloudflare Open-sourcing Katran, a scalable network load balancer, Facebook Cilium's L4LB: standalone XDP load balancer, Cilium Kube-proxy replacement at the XDP layer, Cilium eCHO Podcast on XDP and load balancing Command-line software Firewall software Linux security software Linux kernel features Linux-only free software
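As a rough sketch of how a user-space program loads and attaches an XDP program, the example below uses the BCC Python bindings; the interface name "eth0" is a placeholder, and the embedded eBPF program simply returns XDP_PASS so every packet continues up the stack. This is an illustrative sketch under those assumptions, not a description of any particular production deployment.

# Minimal XDP attachment sketch using BCC (placeholder interface name "eth0").
from bcc import BPF

prog = r"""
#include <uapi/linux/bpf.h>

int xdp_prog(struct xdp_md *ctx) {
    /* Let every packet continue up the stack; returning XDP_DROP here
       would silently drop it instead. */
    return XDP_PASS;
}
"""

device = "eth0"                            # placeholder; pick a real interface
b = BPF(text=prog)
fn = b.load_func("xdp_prog", BPF.XDP)
b.attach_xdp(device, fn, 0)                # flags=0 uses the default attach mode
try:
    input("XDP program attached to %s; press Enter to detach\n" % device)
finally:
    b.remove_xdp(device, 0)

Replacing the return value with XDP_DROP, or adding logic that inspects the packet bytes between ctx->data and ctx->data_end, turns the same skeleton into a simple packet filter of the kind used for DDoS mitigation.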
16727034
https://en.wikipedia.org/wiki/QCDOC
QCDOC
The QCDOC (quantum chromodynamics on a chip) is a supercomputer technology focusing on using relatively cheap low power processing elements to produce a massively parallel machine. The machine is custom-made to solve small but extremely demanding problems in the fields of quantum physics. Overview The computers were designed and built jointly by University of Edinburgh (UKQCD), Columbia University, the RIKEN BNL Brookhaven Research Center and IBM. The purpose of the collaboration was to exploit computing facilities for lattice field theory calculations whose primary aim is to increase the predictive power of the Standard Model of elementary particle interactions through numerical simulation of quantum chromodynamics (QCD). The target was to build a massively parallel supercomputer able to peak at 10 Tflops with sustained power at 50% capacity. There are three QCDOCs in service each reaching 10 Tflops peak operation. University of Edinburgh's Parallel Computing Centre (EPCC). In operation by the UKQCD since 2005 RIKEN BNL Brookhaven Research Center at Brookhaven National Laboratory U.S. Department of Energy Program in High Energy and Nuclear Physics at Brookhaven National Laboratory Around 23 UK academic staff, their postdocs and students, from seven universities, belong to UKQCD. Costs were funded through a Joint Infrastructure Fund Award of £6.6 million. Staff costs (system support, physicist programmers and postdocs) are around £1 million per year, other computing and operating costs are around £0.2 million per year. QCDOC was to replace an earlier design, QCDSP, where the power came from connecting large amounts of DSPs together in a similar fashion. The QCDSP strapped 12.288 nodes to a 4D network and reached 1 Tflops in 1998. QCDOC can be seen as a predecessor to the highly successful Blue Gene/L supercomputer. They share a lot of design traits, and similarities go beyond superficial characteristics. Blue Gene is also a massively parallel supercomputer built with a large amount of cheap, relatively weak PowerPC 440 based SoC nodes connected with a high bandwidth multidimensional mesh. They differ, however, in that the computing nodes in BG/L are more powerful and are connected with a faster, more sophisticated network that scales up to several hundred thousand nodes per system. Architecture Computing node The computing nodes are custom ASICs with about fifty million transistors each. They are mainly made up of existing building blocks from IBM. They are built around a 500 MHz PowerPC 440 core with 4 MB DRAM, memory management for external DDR SDRAM, system I/O for internode communications, and dual Ethernet built in. The computing node is capable of 1 double precision Gflops. Each node has one DIMM socket capable of holding between 128 and 2048 MB of 333 MHz ECC DDR SDRAM. Inter node communication Each node has the capability to send and receive data from each of its twelve nearest neighbors in a six-dimensional mesh at a rate of 500 Mbit/s each. This provides a total off-node bandwidth of 12 Gbit/s. Each of these 24 channels has DMA to the other nodes' on-chip DRAM or the external SDRAM. In practice only four dimensions will be used to form a communications sub-torus where the remaining two dimensions will be used to partition the system. The operating system communicates with the computing nodes using the Ethernet network. This is also used for diagnostics, configuration and communications with disk storage. 
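To make the inter-node topology described above more concrete, the short sketch below computes the twelve nearest neighbours of a node in a periodic six-dimensional mesh (one neighbour in each direction along each axis). The lattice shape used here is a made-up example for illustration only, not an actual QCDOC configuration.

# Illustrative sketch: twelve nearest neighbours of a node in a periodic 6-D mesh.
# The lattice shape is hypothetical, chosen only to show the wrap-around addressing.
def neighbours(coord, shape):
    result = []
    for axis in range(len(shape)):
        for step in (-1, +1):
            n = list(coord)
            n[axis] = (n[axis] + step) % shape[axis]   # periodic (torus) wrap-around
            result.append(tuple(n))
    return result

shape = (4, 4, 4, 4, 4, 4)           # hypothetical 6-D lattice dimensions
node = (0, 0, 0, 0, 0, 0)
print(len(neighbours(node, shape)))  # 12: two neighbours per dimension

Partitioning the machine, as described in the article, amounts to restricting communication to a sub-torus formed from four of these six dimensions, with the remaining two dimensions used to divide the system among users.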
Mechanical design Two nodes are placed together on a daughter card with one DIMM socket and a 4:1 Ethernet hub for off-card communications. The daughter cards have two connectors, one carrying the internode communications network and one carrying power, Ethernet, clock and other housekeeping facilities. Thirty-two daughter cards are placed in two rows on a motherboard that supports 800 Mbit/s off-board Ethernet communications. Eight motherboards are placed in crates with two backplanes supporting four motherboards each. Each crate consists of 512 processor nodes and a 2⁶ hypercube communications network. One node consumes about 5 W of power, and each crate is air and water cooled. A complete system can consist of any number of crates, for a total of up to several tens of thousands of nodes. Operating system The QCDOC runs a custom-built operating system, QOS, which facilitates boot, runtime, monitoring, diagnostics and performance monitoring, and simplifies management of the large number of computing nodes. It uses a custom embedded kernel and provides single-process POSIX ("unix-like") compatibility using the Cygnus newlib library. The kernel includes a specially written UDP/IP stack and NFS client for disk access. The operating system also maintains system partitions so several users can have access to separate parts of the system for different applications. Each partition will only run one client application at any given time. Any multitasking is scheduled by the host controller system, which is a regular computer using a large number of Ethernet ports connected to the QCDOC. See also Norman Christ PowerPC 440 BlueGene/L QPACE Supercomputer References Computational Quantum Field Theory at Columbia – Columbia University Overview of the QCDSP and QCDOC computers – IBM QCDOC Architecture – Columbia University UKQCD – Science and Technology Facilities Council QCDOC: A 10 Teraflops Computer for Tightly-coupled Calculations (BNL) UK supercomputer probes secrets of universe, The Register IBM QPACE (TOP500), Softpedia Computer science institutes in the United Kingdom Parallel computing University of Edinburgh School of Informatics IBM supercomputer platforms
38400538
https://en.wikipedia.org/wiki/Joint%20Computer%20Conference
Joint Computer Conference
The Joint Computer Conferences were a series of computer conferences in the USA held under various names between 1951 and 1987. The conferences were the venue for presentations and papers representing "cumulative work in the [computer] field." Originally a semi-annual pair, the Western Joint Computer Conference (WJCC) was held annually in the western United States, and a counterpart, the Eastern Joint Computer Conference (EJCC), was held annually in the eastern US. Both conferences were sponsored by an organization known as the National Joint Computer Committee (NJCC), composed of the Association for Computing Machinery (ACM), the American Institute of Electrical Engineers (AIEE) Committee on Computing Devices, and the Institute of Radio Engineers (IRE) Professional Group on Electronic Computers. In 1962 the American Federation of Information Processing Societies (AFIPS) took over sponsorship and renamed them Fall Joint Computer Conference (FJCC) and Spring Joint Computer Conference (SJCC). In 1973 AFIPS merged the two conferences into a single annual National Computer Conference (NCC) which ran until discontinued in 1987. The 1967 FJCC in Anaheim, California attracted 15,000 attendees. In 1968 in San Francisco, California Douglas Engelbart presented "The Mother of All Demos" presenting such then-new technologies as the computer mouse, video conferencing, teleconferencing, and hypertext. Conference dates Eastern Joint Computer Conference Western Joint Computer Conference Spring Joint Computer Conference Fall Joint Computer Conference National Computer Conference See also American Federation of Information Processing Societies COMDEX References External links AFIPS conference bibliography, 1951-1987 Computer conferences
22637
https://en.wikipedia.org/wiki/Object%20Management%20Group
Object Management Group
The Object Management Group (OMG) is a computer industry standards consortium. OMG Task Forces develop enterprise integration standards for a range of technologies. Business activities The goal of the OMG was a common portable and interoperable object model with methods and data that work using all types of development environments on all types of platforms. The group provides only specifications, not implementations. But before a specification can be accepted as a standard by the group, the members of the submitter team must guarantee that they will bring a conforming product to market within a year. This is an attempt to prevent unimplemented (and unimplementable) standards. Other private companies or open source groups are encouraged to produce conforming products and OMG is attempting to develop mechanisms to enforce true interoperability. OMG hosts four technical meetings per year for its members and interested nonmembers. The Technical Meetings provide a neutral forum to discuss, develop and adopt standards that enable software interoperability. History Founded in 1989 by eleven companies (including Hewlett-Packard, IBM, Sun Microsystems, Apple Computer, American Airlines, iGrafx, and Data General), OMG's initial focus was to create a heterogeneous distributed object standard. The founding executive team included Christopher Stone and John Slitz. Current leadership includes chairman and CEO Richard Soley, President and COO Bill Hoffman and Vice President and Technical Director Jason McC. Smith. Since 2000, the group's international headquarters has been located in Boston, Massachusetts. In 1997, the Unified Modeling Language (UML) was added to the list of OMG adopted technologies. UML is a standardized general-purpose modeling language in the field of object-oriented software engineering. In June 2005, the Business Process Management Initiative (BPMI.org) and OMG announced the merger of their respective Business Process Management (BPM) activities to form the Business Modeling and Integration Domain Task Force (BMI DTF). In 2006 the Business Process Model and Notation (BPMN) was adopted as a standard by OMG. In 2007 the Business Motivation Model (BMM) was adopted as a standard by the OMG. The BMM is a metamodel that provides a vocabulary for corporate governance and strategic planning and is particularly relevant to businesses undertaking governance, regulatory compliance, business transformation and strategic planning activities. In 2009 OMG, together with the Software Engineering Institute at Carnegie Mellon launched the Consortium of IT Software Quality (CISQ). In 2011 OMG formed the Cloud Standards Customer Council. Founding sponsors included CA, IBM, Kaavo, Rackspace and Software AG. The CSCC is an OMG end user advocacy group dedicated to accelerating cloud's successful adoption, and drilling down into the standards, security and interoperability issues surrounding the transition to the cloud. In September 2011, the OMG Board of Directors voted to adopt the Vector Signal and Image Processing Library (VSIPL) as the latest OMG specification. Work for adopting the specification was led by Mentor Graphics' Embedded Software Division, RunTime Computing Solutions, The Mitre Corporation as well as the High Performance Embedded Computing Software Initiative (HPEC-SI). VSIPL is an application programming interface (API). VSIPL and VSIPL++ contain functions used for common signal processing kernel and other computations. 
These functions include basic arithmetic, trigonometric, transcendental, signal processing, linear algebra, and image processing. The VSIPL family of libraries has been implemented by multiple vendors for a range of processor architectures, including x86, PowerPC, Cell, and NVIDIA GPUs. VSIPL and VSIPL++ are designed to maintain portability across a range of processor architectures. Additionally, VSIPL++ was designed from the start to include support for parallelism. In late 2012 and early 2013, the group's Board of Directors adopted the Automated Function Point (AFP) specification. The push for adoption was led by the Consortium for IT Software Quality (CISQ). AFP provides a standard for automating the popular function point measure according to the counting guidelines of the International Function Point User Group (IFPUG). On March 27, 2014, OMG announced it would be managing the newly formed Industrial Internet Consortium (IIC). Ratified ISO Standards Of the many standards maintained by the OMG, 13 have been ratified as ISO standards. See also DIIOP References External links Standards organizations in the United States Unified Modeling Language
14352002
https://en.wikipedia.org/wiki/Jonathan%20S.%20Turner
Jonathan S. Turner
Jonathan Shields Turner is a senior professor of Computer Science in the School of Engineering and Applied Science at Washington University in St. Louis. His research interests include the design and analysis of high performance routers and switching systems, extensible communication networks via overlay networks, and probabilistic performance of heuristic algorithms for NP-complete problems. Biography Jonathan Shields Turner was born on November 13, 1953, in Boston. Turner started his undergraduate studies at Oberlin College, and later enrolled in the undergraduate engineering program at Washington University. In doing so, he became one of the first dual-degree engineering graduates from Washington University. In 1975, he graduated with a B.A. in Theater from Oberlin College. Then, in 1977, he graduated with a B.S. in Computer Science and a B.S. in Electrical Engineering from Washington University. Once Turner graduated, he began attending Northwestern University for Computer Science graduate school, and simultaneously began working at Bell Labs as a member of their technical staff. In 1979, he received his M.S. in Computer Science from Northwestern, and continued on as a doctoral student under the supervision of Hal Sudborough. From 1981 to 1983, he was the principal system architect for the Fast Packet Switching project at Bell Labs. He received eleven patents for his work on the Fast Packet Switching project. In 1982 he published his doctoral dissertation, receiving his Ph.D. in Computer Science from Northwestern. Turner joined Washington University in 1983 as an assistant professor in the Computer Science and Electrical Engineering departments. In 1986, he published a paper titled "New Directions in Communications (or Which Way to the Information Age)", which forecast the convergence of data, voice, and video traffic on networks, and proposed scalable switching architectures to handle such a traffic load. This paper would later be reprinted in the 50th anniversary issue of the IEEE Communications Magazine as a "landmark article". In 1988 he founded the Advanced Networking Group and co-founded the Applied Research Laboratory with Washington University colleagues Jerome R. Cox and Guru Parulkar. Turner directed the Applied Research Laboratory (ARL) from its inception to 2012, and directed the Advanced Networking group until it was subsumed by the ARL in 1992. He was promoted to full professor by 1990. He became the Computer Science department chair in 1992 and held this position through 1997. In 1998 Turner co-founded a company named Growth Networks—again in collaboration with Professors Jerome Cox and Guru Parulkar—which focused on high performance switching components for Internet routers and Asynchronous Transfer Mode switches. Turner was Chief Scientist at Growth Networks. In 2000 Cisco acquired Growth Networks for $355 million in stock, largely for the intellectual property and engineering talent. At the time of acquisition, Growth Networks had 55 employees. From 2007 to 2008 he again served as department chair of the Computer Science department. Turner retired from Washington University in 2014 after 30 years with the department. He is now a Senior Professor for the department, and still likes to perform research when he is not sailing the Florida coast or playing tennis with his wife. Awards and distinctions Jonathan S. Turner has been awarded 30 patents for his work in switching systems, and has many widely cited publications.
Turner has received honors from a variety of professional organizations. In 1990 he was elected as an IEEE Fellow for "contributions to multipoint switching networks for high-speed packetized information transmission". In 1994 he received the IEEE Koji Kobayashi Computers and Communications Award for "fundamental contributions to communications and computing through architectural innovation in high-speed packet networks". In 2000 he was awarded the IEEE Millennium Medal. In 2001 he was elected as an ACM Fellow for research involving and extending his 1986 seminal paper. In 2002 he was awarded the James B. Eads Award from the St. Louis Academy of Science, for outstanding achievement in engineering or technology. In 2007 he was elected to the National Academy of Engineering. Turner has also received many honors from Washington University. In 1993 he was honored with the Founder's Day Distinguished Faculty Award, which is awarded to faculty who have an "outstanding commitment to the intellectual and personal development of students". In 1994 he became the Henry Edwin Sever Chair of Engineering, which at that time was a new endowed professorship. He held this position until 2006. In 2004 he won the Arthur Holly Compton Faculty Achievement Award, which is similar to the Founder's Day Distinguished Faculty Award but more selective. In 2006 Turner was named the Barbara J. and Jerome R. Cox Professor of Computer Science for "advancing the relationship between theory and practice in the design of digital systems." In 2007 he received an Alumni Achievement Award from the School of Engineering and Applied Science. In 2014 he received the Dean's Award from the Dean of the School of Engineering and Applied Science. Also that year the Computer Science department created the Turner Dissertation Award in recognition of his many achievements and research contributions. References External links Home page Google scholar profile American computer scientists Fellows of the Association for Computing Machinery Fellow Members of the IEEE Members of the United States National Academy of Engineering Researchers in distributed computing American inventors American software engineers Software engineering researchers Washington University in St. Louis faculty Northwestern University alumni McKelvey School of Engineering alumni Oberlin College alumni Scientists from St. Louis 1953 births Living people
2784419
https://en.wikipedia.org/wiki/OPC%20Historical%20Data%20Access
OPC Historical Data Access
This group of standards, created by the OPC Foundation, provides COM specifications for communicating data from devices and applications that provide historical data, such as databases. The specifications provide for access to raw, interpolated and aggregate data (data with calculations). OPC Historical Data Access, also known as OPC HDA, is used to exchange archived process data. This is in contrast to the OPC Data Access (OPC DA) specification that deals with real-time data. OPC technology is based on client/server architecture. Therefore, an OPC client, such as a trending application or spreadsheet, can retrieve data from an OPC compliant data source, such as a historian, using OPC HDA. Similar to the OPC Data Access specification, OPC Historical Data Access also uses Microsoft's DCOM to transport data. DCOM also provides OPC HDA with full security features such as user authentication and authorization, as well as communication encryption services. OPC HDA Clients and Servers can reside on separate PCs, even if they are separated by a firewall. To do this, system integrators must configure DCOM properly as well as open ports in the firewall. If using the Windows firewall, users only need to open a single port. See also OLE for process control OPC Foundation OPC Data Access External links OPC Foundation OPC Historical Data Access specification Automation Computer standards Component-based software engineering
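To make the raw-versus-aggregate distinction above concrete, the sketch below shows the general shape of a historical read from a client's point of view. The HdaClient class, its method names and the endpoint string are hypothetical stand-ins invented for this illustration; the actual specification defines COM/DCOM interfaces exposed by the historian server, not a Python API.

```python
# Hypothetical illustration only: the class, method names and endpoint below
# are invented for this sketch and do not correspond to any real library.
# A real OPC HDA client talks to the historian over COM/DCOM.
from datetime import datetime, timedelta


class HdaClient:
    """Stand-in for an OPC HDA client connection (illustrative only)."""

    def __init__(self, endpoint: str):
        self.endpoint = endpoint  # e.g. an identifier for the historian server

    def read_raw(self, item_id: str, start: datetime, end: datetime):
        # A real client would return every archived value of item_id
        # between start and end.
        raise NotImplementedError("illustrative stub")

    def read_aggregate(self, item_id: str, start: datetime, end: datetime,
                       aggregate: str, interval: timedelta):
        # A real client would ask the server to compute, for example,
        # hourly averages instead of returning the raw samples.
        raise NotImplementedError("illustrative stub")


end = datetime.now()
start = end - timedelta(days=1)
client = HdaClient("historian.example.local")  # hypothetical address
# client.read_raw("Plant/Boiler1/Temperature", start, end)
# client.read_aggregate("Plant/Boiler1/Temperature", start, end,
#                       aggregate="average", interval=timedelta(hours=1))
```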
19971851
https://en.wikipedia.org/wiki/Trojan%20%28mountain%29
Trojan (mountain)
Maja Trojan or Trojan (Serbian/Montenegrin: Trojan) is a mountain in the Accursed Mountains range on the border of Albania and Montenegro. Situated 7 km west of the village Gusinje, its elevation is . From the summit of Trojan there is a wonderful panoramic view of the Accursed Mountains and other ranges in the Dinaric Alps such as Visitor. References Mountains of Albania Mountains of Montenegro Accursed Mountains Albania–Montenegro border
4996092
https://en.wikipedia.org/wiki/Laserfiche
Laserfiche
Laserfiche is a privately owned software development company that creates enterprise content management, business process automation, workflow, records management, document imaging and webform software. Laserfiche is headquartered in Long Beach, California and has offices in Mexico, United Kingdom, Hong Kong, Shanghai and Canada. Laserfiche sells its software through value-added resellers distributed throughout the world. History Nien-Ling Wacker founded Compulink Management Center Inc., a custom software development company, in 1974. By the early 1980s, Wacker had identified an emerging need among her clients for an electronic document repository that would provide both secure storage and instant retrieval by any word or phrase in the document. The concept for a PC-based document management system began in 1981 when a client, a large Japanese auto manufacturer, required litigation support for a large volume of documents. At the time, paralegals had to wade through thousands of pages of depositions, entering keywords into a database. Attorneys were limited to searching on keywords to find relevant testimony. Nien-Ling Wacker realized that if a full-text index of every page were available, the search capabilities would be greatly enhanced, and the amount of physical labor required to index the documents would decrease substantially. With the release of WORM drives that cost "only" $200 for 200 MB of disk space, the conceived system could be made cost effective. The first version of Laserfiche was released in 1987, becoming the first DOS-based document imaging system in the world. The system used commercial off-the-shelf components such as OCR boards from Kurzweil, graphics monitors from Cornerstone, and scanner interface boards by Kofax. Timeline In 1993 Laserfiche released the first PC-based client–server document imaging system, based on the NetWare Loadable Modules platform. In 2002, Laserfiche 6 marked the company's first foray into MSSQL-based document management. That year, the company also introduced Quick Fields, an automated document processing module, and WebLink, which provided read-only, Web-based access to documents stored in Laserfiche. In 2004, Laserfiche 7 marked the company's first offering for Oracle users. In early 2008, the company released Laserfiche 8, with a re-written workflow engine and integration with Microsoft SharePoint. In August 2008, the company launched Laserfiche Rio, an enterprise content management offering with unlimited servers, named user licensing and bundled functionality including content management, business process management and a thin-client interface called Web Access. In 2009, Laserfiche opened an international office in Hong Kong, creating a separate company Laserfiche International. In 2010, Laserfiche filed a lawsuit against SAP for trademark infringement over the disputed trademark phrase "Run Smarter". In 2011, Laserfiche announced that the litigation had been settled amicably and involved a license of the "Run Smarter" trademark. In 2011, Laserfiche released Laserfiche Mobile for iPhone, an app that allows users to capture images with the phone's built-in camera, store them in the Laserfiche repository and include them in digital workflows. Later that year, Laserfiche was listed as a Champion in Info-Tech Research Group's Enterprise Content Management (ECM) for Process Workers Vendor Landscape. According to Info-Tech, "Laserfiche is a stalwart that is exploiting the new capabilities of emerging technology."
In 2012, Laserfiche released Laserfiche Mobile for iPad, an app that extends governance, risk and compliance standards to the iPad. Laserfiche said it was profitable since 1994, growing at what Wacker describes as a managed pace. In 2019, Laserfiche announced its integration with Microsoft Office 365, which allows customers to edit Microsoft Office documents directly on the web. Products Laserfiche has two main product lines: Laserfiche Rio and Laserfiche Avante. Laserfiche Rio is designed to meet the needs of large organizations that have more than 100 users. It combines content management functionality with business process management (BPM), security and auditing, unlimited servers and a thin-client interface. Add-ons include DoD 5015.2-certified records management functionality, public Web portals and production-level document capture and processing. Laserfiche Avante is an ECM suite for small to medium organizations with fewer than 100 users. It combines content management with workflow tools that automate business processes. Built on the Microsoft platform, Laserfiche Avante allows users to drag and drop e-mails from Outlook into Laserfiche. The Laserfiche Institute The Laserfiche Institute's stated mission is to "teach staff, resellers and current and prospective clients how to use Laserfiche most effectively." As a part of this mission, the Institute conducts conferences, web seminars and publishes document management guides, white papers and other educational content. See also Document Management Enterprise Content Management Business Process Management Document Imaging Records Management References External links Bloomberg coverage of the trademark lawsuit against SAP Laserfiche official web site Companies based in Long Beach, California Companies based in Los Angeles County, California Computer companies of the United States Software companies established in 1974 Business software Business software companies Content management systems Document management systems 1974 establishments in California American companies established in 1974
3779936
https://en.wikipedia.org/wiki/MAC%20spoofing
MAC spoofing
MAC spoofing is a technique for changing a factory-assigned Media Access Control (MAC) address of a network interface on a networked device. The MAC address that is hard-coded on a network interface controller (NIC) cannot be changed. However, many drivers allow the MAC address to be changed. Additionally, there are tools which can make an operating system believe that the NIC has the MAC address of a user's choosing. The process of masking a MAC address is known as MAC spoofing. Essentially, MAC spoofing entails changing a computer's identity, for any reason. Motivation Changing the assigned MAC address may allow the user to bypass access control lists on servers or routers, either hiding a computer on a network or allowing it to impersonate another network device. MAC spoofing is done for legitimate and illicit purposes alike. New hardware for existing Internet Service Providers (ISP) Many ISPs register the client's MAC address for service and billing services. Since MAC addresses are unique and hard-coded on network interface controller (NIC) cards, when the client wants to connect a new device or change an existing one, the ISP will detect different MAC addresses and might not grant Internet access to those new devices. This can be circumvented easily by MAC spoofing, with the client only needing to spoof the new device's MAC address so it appears to be the MAC address that was registered by the ISP. In this case, the client spoofs their MAC address to gain Internet access from multiple devices. While this is generally a legitimate case, MAC spoofing of new devices can be considered illegal if the ISP's user agreement prevents the user from connecting more than one device to their service. Moreover, the client is not the only person who can spoof their MAC address to gain access to the ISP. Computer crackers can gain unauthorized access to the ISP via the same technique. This allows them to gain access to unauthorized services, while being difficult to identify and track as they are using the client's identity. This action is considered an illegitimate and illegal use of MAC spoofing. This also applies to customer-premises equipment, such as cable and DSL modems. If leased to the customer on a monthly basis, the equipment has a hard-coded MAC address known to the provider's distribution networks, allowing service to be established as long as the customer is not in billing arrears. In cases where the provider allows customers to provide their own equipment (and thus avoid the monthly leasing fee on their bill), the provider sometimes requires that the customer provide the MAC address of their equipment before service is established. Fulfilling software requirements Some software can only be installed and run on systems with pre-defined MAC addresses as stated in the software end-user license agreement, and users have to comply with this requirement in order to gain access to the software. If the user has to install different hardware due to malfunction of the original device or if there is a problem with the user's NIC card, then the software will not recognize the new hardware. However, this problem can be solved using MAC spoofing. The user has to spoof the new MAC address so that it appears to be the address that was in use when the software was registered. Legal issues might arise if the software is run on multiple devices at once by using MAC spoofing. At the same time, the user can access software for which they have not secured a license. 
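However it is motivated, the change itself is usually made through the operating system rather than the hardware. On Linux it is typically a matter of a few ip link commands from the iproute2 suite, wrapped here in a minimal Python sketch; the interface name and the locally administered address are examples only, and the commands require root privileges.

```python
# Minimal sketch of MAC spoofing on Linux using the standard iproute2
# "ip link" commands. The interface name and address are examples only;
# running this requires root privileges and briefly takes the interface down.
import subprocess

def set_mac(interface: str, new_mac: str) -> None:
    # Bring the interface down, change the address the driver reports,
    # then bring it back up.
    subprocess.run(["ip", "link", "set", "dev", interface, "down"], check=True)
    subprocess.run(["ip", "link", "set", "dev", interface, "address", new_mac], check=True)
    subprocess.run(["ip", "link", "set", "dev", interface, "up"], check=True)

# Example (a locally administered address chosen for illustration):
# set_mac("eth0", "02:11:22:33:44:55")
```

This changes only the address the driver reports, and only until it is set again or the system reboots; the address burned into the NIC itself is untouched, which is exactly the distinction drawn above.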
Contacting the software vendor might be the safest route to take if there is a hardware problem preventing access to the software. Some software may also perform MAC filtering in an attempt to ensure unauthorized users cannot gain access to certain networks which would otherwise be freely accessible with the software. Such cases can be considered illegitimate or illegal activity and legal action may be taken. Identity masking If a user chooses to spoof their MAC address in order to protect their privacy, this is called identity masking. As an example motivation, on Wi-Fi network connections a MAC address is not encrypted. Even the secure IEEE 802.11i-2004 (WPA) encryption method does not prevent Wi-Fi networks from sending out MAC addresses. Hence, in order to avoid being tracked, the user might choose to spoof the device's MAC address. However, computer crackers use the same technique to bypass access control methods such as MAC filtering, without revealing their identity. MAC filtering, used by some networks, prevents access to a network if the MAC address of the device attempting to connect does not match any addresses marked as allowed. Computer crackers can use MAC spoofing to gain access to networks utilising MAC filtering if any of the allowed MAC addresses are known to them, possibly with the intent of causing damage, while appearing to be one of the legitimate users of the network. As a result, the real offender may go undetected by law enforcement. MAC Address Randomization in WiFi To prevent third parties from using MAC addresses to track devices, Android, Linux, iOS, and Windows have implemented MAC address randomization. In June 2014, Apple announced that future versions of iOS would randomize MAC addresses for all WiFi connections. The Linux kernel has supported MAC address randomization during network scans since March 2015, but drivers need to be updated to use this feature. Windows has supported it since the release of Windows 10 in July 2015. Controversy Although MAC address spoofing is not illegal, its practice has caused controversy in some cases. In the 2012 indictment against Aaron Swartz, an Internet hacktivist who was accused of illegally accessing files from the JSTOR digital library, prosecutors claimed that because he had spoofed his MAC address, this showed purposeful intent to commit criminal acts. In June 2014, Apple announced that future versions of their iOS platform would randomize MAC addresses for all WiFi connections, making it more difficult for internet service providers to track user activities and identities, which resurrected moral and legal arguments surrounding the practice of MAC spoofing among several blogs and newspapers. Limitations MAC address spoofing is limited to the local broadcast domain. Unlike IP address spoofing, where senders spoof their IP address in order to cause the receiver to send the response elsewhere, in MAC address spoofing the response is usually received by the spoofing party if the switch is not configured to prevent MAC spoofing. See also MAC address Promiscuous mode IP spoofing ifconfig, a Linux utility capable of changing MAC addresses References Hacking (computer security) Types of cyberattacks
71435
https://en.wikipedia.org/wiki/Universal%20Turing%20machine
Universal Turing machine
In computer science, a universal Turing machine (UTM) is a Turing machine that simulates an arbitrary Turing machine on arbitrary input. The universal machine essentially achieves this by reading both the description of the machine to be simulated as well as the input to that machine from its own tape. Alan Turing introduced the idea of such a machine in 1936–1937. This principle is considered to be the origin of the idea of a stored-program computer used by John von Neumann in 1946 for the "Electronic Computing Instrument" that now bears von Neumann's name: the von Neumann architecture. In terms of computational complexity, a multi-tape universal Turing machine need only be slower by logarithmic factor compared to the machines it simulates. Introduction Every Turing machine computes a certain fixed partial computable function from the input strings over its alphabet. In that sense it behaves like a computer with a fixed program. However, we can encode the action table of any Turing machine in a string. Thus we can construct a Turing machine that expects on its tape a string describing an action table followed by a string describing the input tape, and computes the tape that the encoded Turing machine would have computed. Turing described such a construction in complete detail in his 1936 paper: "It is possible to invent a single machine which can be used to compute any computable sequence. If this machine U is supplied with a tape on the beginning of which is written the S.D ["standard description" of an action table] of some computing machine M, then U will compute the same sequence as M." Stored-program computer Davis makes a persuasive argument that Turing's conception of what is now known as "the stored-program computer", of placing the "action table"—the instructions for the machine—in the same "memory" as the input data, strongly influenced John von Neumann's conception of the first American discrete-symbol (as opposed to analog) computer—the EDVAC. Davis quotes Time magazine to this effect, that "everyone who taps at a keyboard... is working on an incarnation of a Turing machine," and that "John von Neumann [built] on the work of Alan Turing" (Davis 2000:193 quoting Time magazine of 29 March 1999). Davis makes a case that Turing's Automatic Computing Engine (ACE) computer "anticipated" the notions of microprogramming (microcode) and RISC processors (Davis 2000:188). Knuth cites Turing's work on the ACE computer as designing "hardware to facilitate subroutine linkage" (Knuth 1973:225); Davis also references this work as Turing's use of a hardware "stack" (Davis 2000:237 footnote 18). As the Turing Machine was encouraging the construction of computers, the UTM was encouraging the development of the fledgling computer sciences. An early, if not the very first, assembler was proposed "by a young hot-shot programmer" for the EDVAC (Davis 2000:192). Von Neumann's "first serious program ... [was] to simply sort data efficiently" (Davis 2000:184). Knuth observes that the subroutine return embedded in the program itself rather than in special registers is attributable to von Neumann and Goldstine. Knuth furthermore states that "The first interpretive routine may be said to be the "Universal Turing Machine" ... Interpretive routines in the conventional sense were mentioned by John Mauchly in his lectures at the Moore School in 1946 ... Turing took part in this development also; interpretive systems for the Pilot ACE computer were written under his direction" (Knuth 1973:226). 
Davis briefly mentions operating systems and compilers as outcomes of the notion of program-as-data (Davis 2000:185). Some, however, might raise issues with this assessment. At the time (mid-1940s to mid-1950s) a relatively small cadre of researchers were intimately involved with the architecture of the new "digital computers". Hao Wang (1954), a young researcher at this time, made the following observation: Turing's theory of computable functions antedated but has not much influenced the extensive actual construction of digital computers. These two aspects of theory and practice have been developed almost entirely independently of each other. The main reason is undoubtedly that logicians are interested in questions radically different from those with which the applied mathematicians and electrical engineers are primarily concerned. It cannot, however, fail to strike one as rather strange that often the same concepts are expressed by very different terms in the two developments." (Wang 1954, 1957:63) Wang hoped that his paper would "connect the two approaches." Indeed, Minsky confirms this: "that the first formulation of Turing-machine theory in computer-like models appears in Wang (1957)" (Minsky 1967:200). Minsky goes on to demonstrate Turing equivalence of a counter machine. With respect to the reduction of computers to simple Turing equivalent models (and vice versa), Minsky's designation of Wang as having made "the first formulation" is open to debate. While both Minsky's paper of 1961 and Wang's paper of 1957 are cited by Shepherdson and Sturgis (1963), they also cite and summarize in some detail the work of European mathematicians Kaphenst (1959), Ershov (1959), and Péter (1958). The names of mathematicians Hermes (1954, 1955, 1961) and Kaphenst (1959) appear in the bibliographies of both Sheperdson-Sturgis (1963) and Elgot-Robinson (1961). Two other names of importance are Canadian researchers Melzak (1961) and Lambek (1961). For much more see Turing machine equivalents; references can be found at register machine. Mathematical theory With this encoding of action tables as strings, it becomes possible, in principle, for Turing machines to answer questions about the behaviour of other Turing machines. Most of these questions, however, are undecidable, meaning that the function in question cannot be calculated mechanically. For instance, the problem of determining whether an arbitrary Turing machine will halt on a particular input, or on all inputs, known as the Halting problem, was shown to be, in general, undecidable in Turing's original paper. Rice's theorem shows that any non-trivial question about the output of a Turing machine is undecidable. A universal Turing machine can calculate any recursive function, decide any recursive language, and accept any recursively enumerable language. According to the Church–Turing thesis, the problems solvable by a universal Turing machine are exactly those problems solvable by an algorithm or an effective method of computation, for any reasonable definition of those terms. For these reasons, a universal Turing machine serves as a standard against which to compare computational systems, and a system that can simulate a universal Turing machine is called Turing complete. An abstract version of the universal Turing machine is the universal function, a computable function which can be used to calculate any other computable function. The UTM theorem proves the existence of such a function. 
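The idea that one fixed program can read another machine's description (its transition table) and then imitate it step by step is easy to see in miniature. The following sketch is not Turing's encoding, nor an efficient universal machine; it simply stores a simulated machine's table as a Python dictionary and runs it, here on a small binary-increment machine given as an example.

```python
# A miniature "universal" simulator: one fixed routine that takes an
# arbitrary transition table (the simulated machine's description) plus an
# input tape and runs it. The dict encoding is for illustration only and is
# not Turing's string encoding discussed in this article.
from collections import defaultdict

def run(table, tape, state="q0", head=0, halt="halt", max_steps=10_000):
    # table maps (state, symbol) -> (symbol to write, move "L"/"R", next state)
    cells = defaultdict(lambda: "_", enumerate(tape))  # "_" is the blank symbol
    for _ in range(max_steps):
        if state == halt:
            break
        write, move, state = table[(state, cells[head])]
        cells[head] = write
        head += 1 if move == "R" else -1
    return "".join(cells[i] for i in range(min(cells), max(cells) + 1))

# Simulated machine: add 1 to a binary number, with the head starting on the
# least significant (rightmost) bit.
increment = {
    ("q0", "0"): ("1", "R", "halt"),  # no carry: flip 0 to 1 and stop
    ("q0", "1"): ("0", "L", "q0"),    # carry: flip 1 to 0 and move left
    ("q0", "_"): ("1", "R", "halt"),  # carried past the left end: write a new 1
}

print(run(increment, "1011", head=3))  # prints 1100 (11 + 1 = 12)
```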
Efficiency Without loss of generality, the input of a Turing machine can be assumed to be in the alphabet {0, 1}; any other finite alphabet can be encoded over {0, 1}. The behavior of a Turing machine M is determined by its transition function. This function can be easily encoded as a string over the alphabet {0, 1} as well. The size of the alphabet of M, the number of tapes it has, and the size of the state space can be deduced from the transition function's table. The distinguished states and symbols can be identified by their position, e.g. the first two states can by convention be the start and stop states. Consequently, every Turing machine can be encoded as a string over the alphabet {0, 1}. Additionally, we convene that every invalid encoding maps to a trivial Turing machine that immediately halts, and that every Turing machine can have an infinite number of encodings by padding the encoding with an arbitrary number of (say) 1's at the end, just like comments work in a programming language. It should be no surprise that we can achieve this encoding given the existence of a Gödel number and computational equivalence between Turing machines and μ-recursive functions. Similarly, our construction associates to every binary string α, a Turing machine Mα. Starting from the above encoding, in 1966 F. C. Hennie and R. E. Stearns showed that given a Turing machine Mα that halts on input x within N steps, then there exists a multi-tape universal Turing machine that halts on inputs α, x (given on different tapes) in CN log N steps, where C is a machine-specific constant that does not depend on the length of the input x, but does depend on M's alphabet size, number of tapes, and number of states. Effectively this is an O(N log N) simulation, using Donald Knuth's Big O notation. The corresponding result for space-complexity rather than time-complexity is that we can simulate in a way that uses at most CN cells at any stage of the computation, an O(N) simulation. Smallest machines When Alan Turing came up with the idea of a universal machine he had in mind the simplest computing model powerful enough to calculate all possible functions that can be calculated. Claude Shannon first explicitly posed the question of finding the smallest possible universal Turing machine in 1956. He showed that two symbols were sufficient so long as enough states were used (or vice versa), and that it was always possible to exchange states for symbols. He also showed that no universal Turing machine of one state could exist. Marvin Minsky discovered a 7-state 4-symbol universal Turing machine in 1962 using 2-tag systems. Other small universal Turing machines have since been found by Yurii Rogozhin and others by extending this approach of tag system simulation. If we denote by (m, n) the class of UTMs with m states and n symbols, the following tuples have been found: (15, 2), (9, 3), (6, 4), (5, 5), (4, 6), (3, 9), and (2, 18) (Kudlek and Rogozhin, 2002). Rogozhin's (4, 6) machine uses only 22 instructions, and no standard UTM of lesser descriptional complexity is known. However, generalizing the standard Turing machine model admits even smaller UTMs. One such generalization is to allow an infinitely repeated word on one or both sides of the Turing machine input, thus extending the definition of universality and known as "semi-weak" or "weak" universality, respectively. Small weakly universal Turing machines that simulate the Rule 110 cellular automaton have been given for the (6, 2), (3, 3), and (2, 4) state-symbol pairs.
The proof of universality for Wolfram's 2-state 3-symbol Turing machine further extends the notion of weak universality by allowing certain non-periodic initial configurations. Other variants on the standard Turing machine model that yield small UTMs include machines with multiple tapes or tapes of multiple dimensions, and machines coupled with a finite automaton. Machines with no internal states If you allow multiple heads on the Turing machine then you can have a Turing machine with no internal states at all. The "states" are encoded as part of the tape. For example, consider a tape with 6 colours: 0, 1, 2, 0A, 1A, 2A. Consider a tape such as 0,0,1,2,2A,0,2,1 where a 3-headed Turing machine is situated over the triple (2,2A,0). The rules then convert any triple to another triple and move the 3 heads left or right. For example, the rules might convert (2,2A,0) to (2,1,0) and move the head left. Thus in this example the machine acts like a 3-colour Turing machine with internal states A and B (represented by no letter). The case for a 2-headed Turing machine is very similar. Thus a 2-headed Turing machine can be universal with 6 colours. It is not known what the smallest number of colours needed for a multi-headed Turing machine is, or whether a 2-colour universal Turing machine is possible with multiple heads. It also means that rewrite rules are Turing complete since the triple rules are equivalent to rewrite rules. Extending the tape to two dimensions with a head sampling a letter and its 8 neighbours, only 2 colours are needed, as for example, a colour can be encoded in a vertical triple pattern such as 110. Example of universal-machine coding For those who would undertake the challenge of designing a UTM exactly as Turing specified see the article by Davies in Copeland (2004:103ff). Davies corrects the errors in the original and shows what a sample run would look like. He claims to have successfully run a (somewhat simplified) simulation. The following example is taken from Turing (1936). For more about this example, see Turing machine examples. Turing used seven symbols { A, C, D, R, L, N, ; } to encode each 5-tuple; as described in the article Turing machine, his 5-tuples are only of types N1, N2, and N3. The number of each "m-configuration" (instruction, state) is represented by "D" followed by a unary string of A's, e.g. "q3" = DAAA. In a similar manner he encodes the symbols blank as "D", the symbol "0" as "DC", the symbol "1" as "DCC", etc. The symbols "R", "L", and "N" remain as is. After encoding, each 5-tuple is then "assembled" into a string in order as shown in the following table: Finally, the codes for all four 5-tuples are strung together into a code started by ";" and separated by ";" i.e.: ;DADDCRDAA;DAADDRDAAA;DAAADDCCRDAAAA;DAAAADDRDA This code he placed on alternate squares—the "F-squares" – leaving the "E-squares" (those liable to erasure) empty. The final assembly of the code on the tape for the U-machine consists of placing two special symbols ("e") one after the other, then the code separated out on alternate squares, and lastly the double-colon symbol "::" (blanks shown here with "." for clarity): ee.;.D.A.D.D.C.R.D.A.A.;.D.A.A.D.D.R.D.A.A.A.;.D.A.A.A.D.D.C.C.R.D.A.A.A.A.;.D.A.A.A.A.D.D.R.D.A.::...... The U-machine's action-table (state-transition table) is responsible for decoding the symbols.
Turing's action table keeps track of its place with markers "u", "v", "x", "y", "z" by placing them in "E-squares" to the right of "the marked symbol" – for example, to mark the current instruction, z is placed to the right of ";" while x keeps the place with respect to the current "m-configuration" DAA. The U-machine's action table will shuttle these symbols around (erasing them and placing them in different locations) as the computation progresses: ee.; .D.A.D.D.C.R.D.A.A. ; zD.A.AxD.D.R.D.A.A.A.;.D.A.A.A.D.D.C.C.R.D.A.A.A.A.;.D.A.A.A.A.D.D.R.D.A.::...... Turing's action-table for his U-machine is very involved. A number of other commentators (notably Penrose 1989) provide examples of ways to encode instructions for the Universal machine. Like Penrose, most commentators use only binary symbols i.e. only symbols { 0, 1 }, or { blank, mark | }. Penrose goes further and writes out his entire U-machine code (Penrose 1989:71–73). He asserts that it truly is a U-machine code, an enormous number that spans almost 2 full pages of 1's and 0's. For readers interested in simpler encodings for the Post–Turing machine the discussion of Davis in Steen (Steen 1980:251ff) may be useful. Asperti and Ricciotti described a multi-tape UTM defined by composing elementary machines with very simple semantics, rather than explicitly giving its full action table. This approach was sufficiently modular to allow them to formally prove the correctness of the machine in the Matita proof assistant. Programming Turing machines Various higher level languages are designed to be compiled into a Turing machine. Examples include Laconic and Turing Machine Descriptor. See also Alternating Turing machine Von Neumann universal constructor — an attempt to build a self-replicating Turing machine Kleene's T predicate — a similar concept for µ-recursive functions Turing completeness References General references Original Paper Seminal papers Implementation Formal verification Other references External links Turing machine
1054389
https://en.wikipedia.org/wiki/Decapitation%20strike
Decapitation strike
A decapitation strike is a military strategy aimed at removing the leadership or command and control of a hostile government or group. The strategy of shattering or defeating an enemy by eliminating its military and political leadership has long been utilized in warfare. Genocide The deportation of Armenian intellectuals in 1915, considered the start of the Armenian genocide German AB-Aktion in Poland by the Nazis during World War II The Katyn massacre by the Soviet Union against Polish military officers. As Polish law required every university graduate to be a reserve officer, executing the officers among the Polish POWs allowed Lavrentiy Beria to stunt Polish science, culture and leadership. In nuclear warfare In nuclear warfare theory, a decapitation strike is a pre-emptive first strike attack that aims to destabilize an opponent's military and civil leadership structure in the hope that it will severely degrade or destroy its capacity for nuclear retaliation. It is essentially a subset of a counterforce strike, but whereas a counterforce strike seeks to destroy weapons directly, a decapitation strike is designed to remove an enemy's ability to use its weapons. Strategies against decapitation strikes include the following: Distributed command and control structures. Dispersal of political leadership and military leadership in times of tension. Delegation of ICBM/SLBM launch capability to local commanders in the event of a decapitation strike. Distributed and diverse launch mechanisms. A failed decapitation strike carries the risk of immediate, massive retaliation by the targeted opponent. Many countries with nuclear weapons specifically plan to prevent decapitation strikes by employing second-strike capabilities. Such countries may have mobile land-based launch, sea launch, air launch, and underground ballistic missile launch facilities so that a nuclear attack on one area of the country will not totally negate its ability to retaliate. Other nuclear warfare doctrines explicitly exclude decapitation strikes on the basis that it is better to preserve the adversary's command and control structures so that a single authority remains that is capable of negotiating a surrender or ceasefire. Implementing fail-deadly mechanisms can be a way to deter decapitation strikes and respond to successful decapitation strikes. Conventional warfare, assassination and terrorist acts The decapitation strike strategy has been employed in conventional warfare. The 2003 invasion of Iraq began with a decapitation strike against Saddam Hussein and other Iraqi military and political leaders. These air strikes failed to kill their intended targets. The U.S. and its NATO allies have pursued, and continue to pursue, this strategy in their efforts to dismantle militant Islamic fundamentalist networks, such as Al-Qaeda and ISIL, that threaten the United States and allies. Additionally, the term has been used to describe the assassination of a government's entire leadership group or a nation's royal family. April 14, 1865: The assassination of U.S. President Abraham Lincoln by Confederate sympathizer John Wilkes Booth was part of a larger plot to disrupt the presidential line of succession by also killing then-Vice President Andrew Johnson, and Secretary of State William H.
Seward, at the close of the American Civil War February 1, 1908: King Carlos I of Portugal was assassinated along with his son the Crown Prince Luís Filipe by Alfredo Luís da Costa and Manuel Buiça, both connected to the Carbonária (the Portuguese section of the Carbonari) July 17, 1918: Tsar Nicholas II of Russia and the Imperial Family were executed by a Bolshevik firing squad under the command of Yakov Yurovsky November 9, 1939: Attempt on German Führer Adolf Hitler's life in the Bürgerbräukeller in Munich by Swabian carpenter Georg Elser, using a time bomb in order to cripple the Third Reich and its war effort. Several died, but Hitler escaped due to a change in schedule, leaving the rostrum 13 minutes before the explosion. July 20, 1944: Claus von Stauffenberg attempted to assassinate Hitler and his inner circle of advisers by a suitcase bomb as part of a broader military coup d'état against the Nazi government, which ultimately failed. Yemen 1948 Alwaziri coup. In recent warfare, unmanned aerial vehicles, or drones, are popularly used for decapitation strikes against terrorist and insurgent groups. Drones are most effective in areas with inadequate air defense. There are mixed scholarly opinions on whether decapitation strikes via drones effectively degrade the capabilities of these groups. Some military strategists, like General Michael Flynn, have argued that the experience gained by the American and Coalition militaries from fighting the Taliban insurgency in Afghanistan was in support of kill or capture operations, but that they would be ineffective without a full understanding of how they would affect the local political landscape in the country. Robert Pape has argued that decapitation is a relatively ineffective strategy. He writes that decapitation is a seductive strategy as it promises "to solve conflicts quickly and cheaply with... little collateral damage, and minimal or no friendly casualties", but decapitation strikes frequently fail or are not likely to produce the intended consequences even if successful. Counterterrorism theorists Max Abrahms and Jochen Mierau argue that leadership decapitation in a terrorist or rebel group has the tendency to create disorder within the group, but find decapitation ineffective because group disorder can often lead to politically ineffective, unfocused attacks on civilians. The two conclude that "[t]his change in the internal composition of militant groups may affect the quality and hence selectivity of their violence." One tactic that is sometimes used to inform the target selection for decapitation strikes is social network analysis. This tactic involves identifying and eliminating higher ranked members in a hierarchically arranged rebel or terrorist group by targeting lower members first, and using intel gained in initial strikes to identify an organization's leadership. Some strategists, like Generals David Petraeus and Stanley McChrystal, have also called for dedicated task units that are non-hierarchical and can be reorganized, in order to face similar distributed or decentralized terrorist groups. Others, however, argue that decapitation strikes combined with social network analysis are not merely unproductive, but can prolong a conflict due to their habit of eliminating rebel or terrorist leaders who are the most capable peace negotiators or have the potential to advance communities hardest hit by terror campaigns after the cessation of hostilities.
See also Continuity of government Designated survivor Preemptive war Preventive war Samson Option Targeted killing List of military strategies and concepts List of military tactics Operation Looking Glass References Assassinations Military strategy Continuity of government
44192353
https://en.wikipedia.org/wiki/Sprinklr
Sprinklr
Sprinklr is an American software company based in New York City that develops a SaaS customer experience management (CXM) platform. The company's software, also called Sprinklr, combines different applications for social media marketing, social advertising, content management, collaboration, employee advocacy, customer care, social media research, and social media monitoring. Sprinklr was founded in 2009 by technology executive Ragy Thomas. On June 23rd, 2021, the company went public on the New York Stock Exchange under the symbol CXM. History Sprinklr was founded in 2009 by Ragy Thomas, a technology marketing executive previously with email marketing company Bigfoot International. Thomas initially funded the company himself, with servers operating out of the basement of his home. The company's name came from the metaphor of a brand carefully watering their social media presence. Early customers included Cisco, Dell and Virgin America. In March 2012, the company received its first outside funding. In March 2014, Sprinklr acquired Dachis Group, adding abilities for employee advocacy, competitive intelligence, social business consulting services, and content marketing. In May, the company announced a $40 million funding round, bringing it to a $500 million valuation. In August, Sprinklr acquired TBG Digital, one of Facebook's largest ad buying clients, to improve its paid social advertising capability. In September, Sprinklr acquired brand advocacy company Branderati. In March 2015, a $46 million series E funding round gave the company a value of $1.17 billion. Also in March, the company announced the launch of its Experience Cloud platform, a way for companies to manage interactions over 23 social media channels and websites. In June, Sprinklr bought text analytics vendor NewBrand. In November, the company acquired data segmentation firm Booshaka. In April 2016, the company acquired social analytics startup Postano. In July, the company announced a $105 million funding round for a valuation of $1.8 billion. In April 2017, the company expanded from social media management to customer experience management, with the launch of new products for its Experience Cloud platform, ranging from social listening tools to content marketing. In October, the company added eight additional products integrated with Experience Cloud. In April 2018, Sprinklr released artificial intelligence (AI) capabilities called Sprinklr Intuition, allowing automatic collection and analysis of social media data. In May 2019, the company released Product Insights, an AI capability that automatically categorizes customer comments across social media and review sites about product feedback related to design, packaging, performance or features. In December, the company acquired the social advertising business from ad management company Nanigans. In 2020, Sprinklr offered case tracking services to the Kerala government in India, as part of an app to assist with managing the COVID-19 outbreak. In April 2020, the opposition party to the government accused the company of compromising patient data related to COVID-19 patients, and criticized the services for being awarded without following proper procedures. The company denied the charges, claiming that the data used in its platform is owned and controlled by the government and stored in India, in compliance with India's data privacy regulations. 
The government confirmed with the Kerala High Court through an affidavit that the COVID-related data was managed by Kerala's Centre for Development of Imaging Technology (C-DIT) in the Amazon Web Services cloud, and that no Sprinklr employees had any access to the data. In September 2020, Sprinklr raised $200 million from private-equity firm Hellman & Friedman in a deal that valued the customer experience management company at $2.7 billion. On June 23, 2021, Sprinklr began trading as a public company on the New York Stock Exchange under the symbol CXM. On November 17, 2021, a federal jury in Oregon agreed with claims made by the Portland-based Opal Labs that Sprinklr misappropriated trade secrets and breached both a teaming contract and a nondisclosure agreement. The verdict is the latest development in a legal battle between the two companies that has stretched across four years. “The jury’s verdict confirms what Opal has been saying for years: Sprinklr stole the key components of Opal’s software and used them in Sprinklr’s competing product. The jurors unanimously found that Sprinklr stole Opal’s trade secrets and breached its non-disclosure agreements,” Opal attorney Chad Colton said in an email. The two companies will return to court in February 2022 to hash out damages and an outstanding fraud claim. After the verdict, Opal filed a motion to bar Sprinklr from selling all content creation and planning software developed from March 4, 2014 through today. A ruling is expected soon. Acquisition strategy Sprinklr uses its funding to acquire smaller firms that have tools Sprinklr wanted to build itself. To facilitate integration, Sprinklr discards the purchased technology and has the acquired company's employees develop a native Sprinklr version of the software. Products Sprinklr provides a unified SaaS platform of products designed to help companies monitor and interact with customers and prospects over all digital channels, including social media channels, review sites and messaging channels. The products are care, research, marketing and advertising, and sales and engagement. Operations As of October 2018, it was reported that the company had over 1,500 customers. As of December 2019, the company reported over 1,500 employees, and 25 offices in 15 countries, located across North and South America, Europe and Asia-Pacific. Customers Its customers include Wells Fargo Bank, Amazon, Nike, Microsoft and McDonald's. References Marketing companies established in 2009 Companies listed on the New York Stock Exchange Search engine optimization Software companies based in New York City Social software 2021 initial public offerings Software companies established in 2009 2009 establishments in New York City
954075
https://en.wikipedia.org/wiki/Nokia%209210%20Communicator
Nokia 9210 Communicator
The Nokia 9210 Communicator is a third-generation Communicator series smartphone produced by Nokia, announced on 21 November 2000 and released in June 2001. It greatly improved on the second-generation Nokia 9110 Communicator, providing a colour main screen and using an ARM processor. It is one of the few mobile phones able to send and receive faxes. It was the first device to run on the Symbian OS platform, version 6, succeeding version 5 of EPOC. It also introduced Nokia's Series 80 interface, which was the result of Symbian Ltd.'s 'Crystal' design. It is used as a normal though bulky mobile phone in closed mode; when it is flipped open it can be used like a very small notebook computer with a 640 × 200 screen. The earpiece and microphone are located on the back so one must hold it with the front screen and keypad facing out to make a call. The phone also has speakerphone functionality. The 9210 Communicator's success helped Nokia overtake both Palm and Compaq to become the leading 'mobile data device' vendor in Western Europe in the third quarter of 2001, when it had a 28.3 percent share in the market. Specifications Main applications: mobile phone, desk application, messaging (SMS, fax, email), Internet (web, WAP), contacts (address book), calendar, office (word processor, spreadsheet, presentation viewer, file manager) Extra applications: calculator, clock, games, recorder, and unit converter. In addition, 3rd party software developers could freely implement new applications for the Nokia 9210 Communicator and offer them for download by the users. Processor: 32-bit 66 MHz ARM9-based RISC CPU Radio: foldout antenna for improved reception. Operating system: Symbian OS v6.0, Series 80 v1.0 Interface: IrDA but no Bluetooth, Serial port cable for PC. Audio: Stereo-headset, mp3-player software is optional, additional internal speaker for music and full-duplex speakerphone functionality. Includes PC Suite for the Nokia 9210 Communicator, running on Windows platform. Vibrating alert: not implemented. 9210i The 9210i, launched in 2002, increased the internal memory to 40 MB and added video streaming and Flash 5 support to the web browser. The main screen backlight was also changed from a high-voltage CCFL tube to an LED backlight, which was quite new technology at the time. Replacement models Nokia replaced the 9210 in the first quarter of 2005 with: Nokia 9500 – has additional features (Wi-Fi and camera) but is smaller (148 mm × 57 mm × 24 mm) and lighter (222 g), and has an updated Symbian Series 80 operating system. Nokia 9300 – is smaller (132 mm × 51 mm × 21 mm) and lighter (167 g) than Nokia 9210, with similar features and the same operating system as the Nokia 9500. Both new models include other improvements such as: EDGE, colour external displays and Bluetooth. Accessories Camera Hands free car kit Nokia 9290 The American variant is the Nokia 9290, first introduced on 5 June 2001 and eventually, after a year-long delay, released on the continent in June 2002. See also Ericsson R380 Nokia 7650 List of Nokia products References Nokia 9210 info site Smartphones Symbian devices 9210 Mobile phones with an integrated hardware keyboard Mobile phones with infrared transmitter Flip phones
18037090
https://en.wikipedia.org/wiki/Tax%20compliance%20software
Tax compliance software
Tax compliance software is software that assists tax compliance, and may cover income tax, corporate tax, VAT, service tax, customs, sales tax, use tax, or other taxes its users may be required to pay. The software automatically calculates a user's tax liabilities to the government, keeps track of all transactions (in the case of indirect taxes), keeps track of eligible tax credits, etc. The software can also generate the forms or filings needed for tax compliance. The software has pre-defined tax rates and slabs and can allocate income or revenue to the right slab itself. The aim of the software is to provide the user with an easy way to calculate tax payments and to minimize human error. Tax compliance software has long been present in developed countries, mainly in the form of tax calculators for direct taxes such as income tax and corporate tax. Gradually, more complex and customized tax compliance software has been designed and developed by organizations around the globe. Tax compliance software can be divided into two main categories: direct and indirect tax compliance software.

Direct tax compliance software
A direct tax is one paid directly to the government by the persons (juristic or natural) on whom it is imposed (often accompanied by a tax return filed by the taxpayer). Examples include income tax, corporate tax, and transfer taxes such as estate tax and gift tax. Basic software for income tax takes the form of a tax calculator, and such calculators are now widely used. For example, the Government of India provides an income tax calculator on its website. Corporate tax compliance software has also been in existence for years, more often than not within the company's finance and accounting software or the financial module of an ERP. These suites have the facilities to maintain the company's general ledger, cash management, accounts payable, accounts receivable and fixed assets, along with some basic taxes.

Indirect tax compliance software
An indirect tax (such as sales tax, value added tax (VAT), or goods and services tax (GST)) is a tax collected by an intermediary (such as a retail store) from the person who bears the ultimate economic burden of the tax (such as the customer). The intermediary later files a tax return and forwards the tax proceeds to the government with the return. Indirect tax compliance has always been much more complex than direct tax compliance. Many indirect tax compliance programs have separate modules for VAT, service tax, customs, etc.

VAT compliance software
Value added tax (VAT) legislation has a common structure across countries (and states, in the case of India). Software must be customized to each country, however, because of differences in some areas, such as the handling of credit on capital goods, the sale of scrap and second-hand goods, and the formats of mandatory submissions and audit exercises.

Service tax compliance software
Service tax compliance software often includes maintenance of credit registers, handling of reverse charges, and rebate claims on the export of services, along with payment of tax and filing of returns.

See also
Streamlined Sales Tax Project

References

Financial software
Regulatory compliance
12914
https://en.wikipedia.org/wiki/Ghost%20in%20the%20Shell
Ghost in the Shell
Ghost in the Shell is a Japanese cyberpunk media franchise based on the seinen manga series of the same name written and illustrated by Masamune Shirow. The manga, first serialized in 1989 under the subtitle of The Ghost in the Shell, and later published as its own tankōbon volumes by Kodansha, told the story of the fictional counter-cyberterrorist organization Public Security Section 9, led by protagonist Major Motoko Kusanagi, and is set in mid-21st century Japan. Animation studio Production I.G has produced several anime adaptations of the series. These include the 1995 film of the same name and its sequel, Ghost in the Shell 2: Innocence; the 2002 television series, Ghost in the Shell: Stand Alone Complex, and its 2020 follow-up, Ghost in the Shell: SAC_2045; and the Ghost in the Shell: Arise original video animation (OVA) series. In addition, an American-produced live-action film was released on March 31, 2017.

Overview

Title
According to the original editor, Koichi Yuri, the title Ghost in the Shell came from Shirow. When Yuri asked for "something more flashy", Shirow came up with "攻殻機動隊 Koukaku Kidou Tai (Shell Squad)", but he remained attached to including "Ghost in the Shell" as well, even if in smaller type.

Setting
Primarily set in the mid-twenty-first century in a fictional Japanese city, the manga and the many anime adaptations follow the members of Public Security Section 9, a task force consisting of various professionals at solving and preventing crime, mostly with some sort of police background. Political intrigue and counter-terrorism operations are standard fare for Section 9, but the various actions of corrupt officials, companies, and cyber-criminals in each scenario are unique and require the diverse skills of Section 9's staff to prevent a series of incidents from escalating. In this post-cyberpunk iteration of a possible future, computer technology has advanced to the point that many members of the public possess cyberbrains, technology that allows them to interface their biological brain with various networks. The level of cyberization varies from simple minimal interfaces to almost complete replacement of the brain with cybernetic parts, in cases of severe trauma. This can also be combined with various levels of prostheses, with a fully prosthetic body enabling a person to become a cyborg. The main character of Ghost in the Shell, Major Motoko Kusanagi, is such a cyborg, having had a terrible accident befall her as a child that ultimately required her to use a full-body prosthesis to house her cyberbrain. This high level of cyberization, however, opens the brain up to attacks from highly skilled hackers, with the most dangerous being those who will hack a person to bend to their whims.

Media

Literature

Original manga
The original Ghost in the Shell manga ran in Japan from April 1989 to November 1990 in Kodansha's manga anthology Young Magazine, and was released in a tankōbon volume on October 5, 1991. Ghost in the Shell 2: Man-Machine Interface followed in 1997, running for 9 issues in Young Magazine, and was collected in the Ghost in the Shell: Solid Box on December 1, 2000. Four stories from Man-Machine Interface that had not been released in tankōbon format in previous releases were later collected in Ghost in the Shell 1.5: Human-Error Processor, published by Kodansha on July 23, 2003. Several art books have also been published for the manga.
Films

Animated films
Two animated films based on the original manga have been released, both directed by Mamoru Oshii and animated by Production I.G. Ghost in the Shell was released in 1995 and follows the "Puppet Master" storyline from the manga. It was re-released in 2008 as Ghost in the Shell 2.0 with new audio and updated 3D computer graphics in certain scenes. Innocence, otherwise known as Ghost in the Shell 2: Innocence, was released in 2004, with its story based on a chapter from the first manga. On September 5, 2014, Production I.G revealed that a new Ghost in the Shell animated film, in Japanese, would be released in 2015, promising to show the "further evolution [of the series]". On January 8, 2015, a short teaser trailer was revealed for the project, unveiling a redesigned Major more closely resembling her appearance from the older films, and a plot following the Arise continuity of the franchise. The trailer listed Kazuya Nomura as the director, Kazuchika Kise as the general director and character designer, Toru Okubo as the animation director, Tow Ubukata as the screenplay writer and Cornelius as the composer. The film premiered on June 20, 2015, in Japanese theaters.

Live-action film
In 2008, DreamWorks and producer Steven Spielberg acquired the rights to a live-action film adaptation of the original Ghost in the Shell manga. On January 24, 2014, Rupert Sanders was announced as director, with a screenplay by William Wheeler. In April 2016, the full cast was announced, which included Juliette Binoche, Chin Han, Lasarus Ratuere and Kaori Momoi, and Scarlett Johansson in the lead role; the casting of Johansson drew accusations of whitewashing. Principal photography on the film began on location in Wellington, New Zealand, on February 1, 2016. Filming wrapped in June 2016. Ghost in the Shell premiered in Tokyo on March 16, 2017, and was released in the United States on March 31, 2017, in 2D, 3D and IMAX 3D. It received mixed reviews, with praise for its visuals and Johansson's performance but criticism of its script, and it underperformed at the box office.

Television

Stand Alone Complex TV series, film and ONA
In 2002, Ghost in the Shell: Stand Alone Complex premiered on Animax, presenting a new telling of Ghost in the Shell independent from the original manga, focusing on Section 9's investigation of the Laughing Man hacker. It was followed in 2004 by a second season titled Ghost in the Shell: S.A.C. 2nd GIG, which focused on the Individual Eleven terrorist group. The primary storylines of both seasons were compressed into OVAs broadcast as Ghost in the Shell: Stand Alone Complex The Laughing Man in 2005 and Ghost in the Shell: Stand Alone Complex Individual Eleven in 2006. Also in 2006, Ghost in the Shell: Stand Alone Complex - Solid State Society, featuring Section 9's confrontation with a hacker known as the Puppeteer, was broadcast, serving as a finale to the anime series. The extensive score for the series and its films was composed by Yoko Kanno. Kodansha and Production I.G announced on April 7, 2017 that Kenji Kamiyama and Shinji Aramaki would be co-directing a new Kōkaku Kidōtai anime production. On December 7, 2018, Netflix announced that it had acquired the worldwide streaming rights to the original net animation (ONA) anime series, titled Ghost in the Shell: SAC_2045, and that it would premiere on April 23, 2020. The series is animated in 3DCG, with Sola Digital Arts collaborating with Production I.G on the project.
It was later revealed that Ilya Kuvshinov will handle character designs. It was stated that the new series will have two seasons of 12 episodes each. For the first season, the opening theme song music was “Fly with me” as performed by Daiki Tsuneta, while the ending was “Sustain++” as performed by Mili. In addition to the anime, a series of published books, two separate manga adaptations, and several video games for consoles and mobile phones have been released for Stand Alone Complex. Arise OVA, TV series and film In 2013, a new iteration of the series titled Ghost in the Shell: Arise premiered, taking an original look at the Ghost in the Shell world, set before the original manga. It was released as a series of four original video animation (OVA) episodes (with limited theatrical releases) from 2013 to 2014, then recompiled as a 10-episode television series under the title of Kōkaku Kidōtai: Arise - Alternative Architecture. An additional fifth OVA titled Pyrophoric Cult, originally premiering in the Alternative Architecture broadcast as two original episodes, was released on August 26, 2015. Kazuchika Kise served as the chief director of the series, with Tow Ubukata as head writer. Cornelius was brought onto the project to compose the score for the series, with the Major's new voice actress Maaya Sakamoto also providing vocals for certain tracks. Ghost in the Shell: The New Movie, also known as Ghost in the Shell: Arise − The Movie or New Ghost in the Shell, is a 2015 film directed by Kazuya Nomura that serves as a finale to the Ghost in the Shell: Arise story arc. The film is a continuation to the plot of the Pyrophoric Cult episode of Arise, and ties up loose ends from that arc. A manga adaptation was serialized in Kodansha's Young Magazine, which started on March 13 and ended on August 26, 2013. Video games Ghost in the Shell was developed by Exact and released for the PlayStation on July 17, 1997, in Japan by Sony Computer Entertainment. It is a third-person shooter featuring an original storyline where the character plays a rookie member of Section 9. The video game's soundtrack Megatech Body features various techno artists, such as Takkyu Ishino, Scan X or Mijk Van Dijk. Several video games were also developed to tie into the Stand Alone Complex television series, in addition to a first-person shooter by Nexon and Neople titled Ghost in the Shell: Stand Alone Complex - First Assault Online, released in 2016. Legacy Ghost in the Shell influenced a number of prominent filmmakers. The Wachowskis, creators of The Matrix and its sequels, showed it to producer Joel Silver, saying, "We wanna do that for real." The Matrix series took several concepts from the film, including the Matrix digital rain, which was inspired by the opening credits of Ghost in the Shell, and the way characters access the Matrix through holes in the back of their necks. Other parallels have been drawn to James Cameron's Avatar, Steven Spielberg's A.I. Artificial Intelligence, and Jonathan Mostow's Surrogates. James Cameron cited Ghost in the Shell as a source of inspiration, citing it as an influence on Avatar. Bungie's 2001 third-person action game Oni draws substantial inspiration from Ghost in the Shell setting and characters. Ghost in the Shell also influenced video games such as the Metal Gear Solid series, Deus Ex, and Cyberpunk 2077. 
Notes References External links Madman Entertainment's Australian distribution release site Artificial intelligence in fiction Bandai Namco franchises Brain–computer interfacing in fiction Cybernetted society in fiction Cyberpunk Cyberpunk anime and manga Cyborgs in fiction Fiction about consciousness transfer Fiction about memory erasure and alteration IG Port franchises Kodansha franchises Post-apocalyptic fiction Postcyberpunk Prosthetics in fiction Fiction about robots Transhumanism Transhumanism in fiction
14669433
https://en.wikipedia.org/wiki/CyberCIEGE
CyberCIEGE
CyberCIEGE is a serious game designed to teach network security concepts. Its development was sponsored by the U.S. Navy, and it is used as a training tool by agencies of the U.S. government, universities and community colleges. CyberCIEGE covers a broad range of cybersecurity topics. Players purchase and configure computers and network devices to keep demanding users happy (e.g., by providing Internet access), all while protecting assets from a variety of attacks. The game includes a number of different scenarios, some of which focus on basic training and awareness, others on more advanced network security concepts. A "Scenario Development Kit" is available for creating and customizing scenarios. Network security components include configurable firewalls, VPN gateways, VPN clients, link encryptors and authentication servers. Workstations and servers include access control lists (ACLs) and may be configured with operating systems that enforce label-based mandatory access control policies. Players can deploy Public Key Infrastructure (PKI)-based cryptography to protect email, web traffic and VPNs. The game also includes identity management devices such as biometric scanners and card readers to control access to workstations and physical areas. The CyberCIEGE game engine consumes a "scenario development language" that describes each scenario in terms of users (and their goals), assets (and their values), the initial state of the scenario in terms of pre-existing components, and the conditions and triggers that provide flow to the scenario. The game engine is defined with enough fidelity to host scenarios ranging from e-mail attachment awareness to cyber warfare.

Game play
CyberCIEGE scenarios place the player into situations in which the player must make information assurance decisions. The interactive simulation illustrates potential consequences of player choices in terms of attacks on information assets and disruptions to authorized user access to assets. The game employs hyperbole as a means of engaging students in the scenario, and thus the simulation is not intended to always identify the actual consequences of specific choices. The game confronts the student with problems, conflicts and questions that should be considered when developing and implementing a security policy. The game is designed as a "construction and management simulation" set in a three-dimensional virtual world. Players build networks and observe virtual users and their thoughts. Each scenario is divided into multiple phases, and each phase includes one or more objectives the player must achieve before moving on to the next phase. Players view the status of the virtual users' success in achieving goals (i.e., accessing enterprise assets via computers and networks). Unproductive users express unhappy thoughts, utter comic-book-style speech bubbles and bang on their keyboards. Players see the consequences of attacks as lost money, pop-up messages, video clips and burning computers.

Game engine
CyberCIEGE includes a sophisticated attack engine that assesses network topologies, component configurations, physical security, user training and procedural security settings. The attack engine weighs the resulting vulnerabilities against the attacker's motive to compromise assets on the network, and this motive may vary by asset. Thus, some assets might be defended via a firewall, while other assets might require an air gap or high assurance protection mechanisms.
Attack types include Trojan horses, viruses, trap doors, denial of service, insiders (i.e., bribed users who lack background checks), un-patched flaws and physical attacks. The attack engine is coupled with an economy engine that measures the virtual user’s ability to achieve goals (i.e., read or write assets) using computers and networks. This combination supports scenarios that illustrate real-world trade-offs such as the use of air-gaps versus the risks of cross-domain solutions when accessing assets on both classified and unclassified networks. The game engine includes a defined set of assessable conditions and resultant triggers that allow the scenario designer to provide players with feedback, (e.g., bubble speech from characters, screen tickers, pop-up messages, etc.), and to transition the game to new phases. CyberCIEGE Fidelity The fidelity of the game engine is intended to be high enough for players to make meaningful choices with respect to deploying network security countermeasures, but not be so high as to engulf the player with administrative minutiae. CyberCIEGE illustrates abstract functions of technical protection mechanisms and configuration-related vulnerabilities. For example, an attack might occur because a particular firewall port is left open and a specific software service is not patched. CyberCIEGE has been designed to provide a fairly consistent level of abstraction among the various network and computer components and technical countermeasures. This can be seen by considering several CyberCIEGE game components. CyberCIEGE firewalls include network filters that let players block traffic over selected application “ports” (e.g., Telnet). Players can configure these filters for different network interfaces and different traffic directions. This lets players see the consequences of leaving ports open (e.g., attacks). And this allows players to experience the need to open some ports (e.g., one of the characters might be unable to achieve a goal unless the filter is configured to allow SSH traffic). CyberCIEGE includes VPN gateways and computer based VPN mechanisms that players configure to identify the characteristics of the protection (e.g., encryption, authentication or neither) provided to network traffic, depending on its source and destination. This allows CyberCIEGE to illustrate risks associated with providing unprotected Internet access to the same workstation that has a VPN tunnel into the corporate network. Other network components (e.g., workstations) include configuration choices related to the type of component. CyberCIEGE lets players select consequential password policies and other procedural and configuration settings. References External links CyberCIEGE official site Computer network security
2924038
https://en.wikipedia.org/wiki/Ioctl
Ioctl
In computing, ioctl (an abbreviation of input/output control) is a system call for device-specific input/output operations and other operations which cannot be expressed by regular system calls. It takes a parameter specifying a request code; the effect of a call depends completely on the request code. Request codes are often device-specific. For instance, a CD-ROM device driver which can instruct a physical device to eject a disc would provide an ioctl request code to do so. Device-independent request codes are sometimes used to give userspace access to kernel functions which are only used by core system software or still under development. The ioctl system call first appeared in Version 7 of Unix under that name. It is supported by most Unix and Unix-like systems, including Linux and macOS, though the available request codes differ from system to system. Microsoft Windows provides a similar function, named "DeviceIoControl", in its Win32 API. Background Conventional operating systems can be divided into two layers, userspace and the kernel. Application code such as a text editor resides in userspace, while the underlying facilities of the operating system, such as the network stack, reside in the kernel. Kernel code handles sensitive resources and implements the security and reliability barriers between applications; for this reason, user mode applications are prevented by the operating system from directly accessing kernel resources. Userspace applications typically make requests to the kernel by means of system calls, whose code lies in the kernel layer. A system call usually takes the form of a "system call vector", in which the desired system call is indicated with an index number. For instance, exit() might be system call number 1, and write() number 4. The system call vector is then used to find the desired kernel function for the request. In this way, conventional operating systems typically provide several hundred system calls to the userspace. Though an expedient design for accessing standard kernel facilities, system calls are sometimes inappropriate for accessing non-standard hardware peripherals. By necessity, most hardware peripherals (aka devices) are directly addressable only within the kernel. But user code may need to communicate directly with devices; for instance, an administrator might configure the media type on an Ethernet interface. Modern operating systems support diverse devices, many of which offer a large collection of facilities. Some of these facilities may not be foreseen by the kernel designer, and as a consequence it is difficult for a kernel to provide system calls for using the devices. To solve this problem, the kernel is designed to be extensible, and may accept an extra module called a device driver which runs in kernel space and can directly address the device. An ioctl interface is a single system call by which userspace may communicate with device drivers. Requests on a device driver are vectored with respect to this ioctl system call, typically by a handle to the device and a request number. The basic kernel can thus allow the userspace to access a device driver without knowing anything about the facilities supported by the device, and without needing an unmanageably large collection of system calls. Uses Hardware device configuration A common use of ioctl is to control hardware devices. For example, on Win32 systems, ioctl calls can communicate with USB devices, or they can discover drive-geometry information of the attached storage-devices. 
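For illustration, a minimal C sketch of the drive-geometry query just mentioned (assuming a Windows build environment and administrator rights; PhysicalDrive0 is chosen arbitrarily, and error handling is reduced to the essentials):

#include <windows.h>
#include <winioctl.h>
#include <stdio.h>

int main(void)
{
    /* Open a handle to the first physical drive; no read/write access is
       needed for this query, only the right to send the control code. */
    HANDLE hDevice = CreateFileA("\\\\.\\PhysicalDrive0", 0,
                                 FILE_SHARE_READ | FILE_SHARE_WRITE,
                                 NULL, OPEN_EXISTING, 0, NULL);
    if (hDevice == INVALID_HANDLE_VALUE)
        return 1;

    DISK_GEOMETRY geom;
    DWORD bytesReturned = 0;

    /* The request code selects the operation; the output buffer receives
       the disk driver's answer. */
    if (DeviceIoControl(hDevice, IOCTL_DISK_GET_DRIVE_GEOMETRY,
                        NULL, 0, &geom, sizeof(geom),
                        &bytesReturned, NULL)) {
        printf("Cylinders: %lld, bytes per sector: %lu\n",
               (long long)geom.Cylinders.QuadPart,
               (unsigned long)geom.BytesPerSector);
    }

    CloseHandle(hDevice);
    return 0;
}

The same pattern (open a handle, pass a request code, receive the driver's answer in an output buffer) applies to other control codes.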
On OpenBSD and NetBSD, ioctl is used by the pseudo-device driver and the bioctl utility to implement RAID volume management in a unified vendor-agnostic interface similar to ifconfig. On NetBSD, ioctl is also used by the sysmon framework.

Terminals
One use of ioctl in code exposed to end-user applications is terminal I/O. Unix operating systems have traditionally made heavy use of command-line interfaces. The Unix command-line interface is built on pseudo terminals (ptys), which emulate hardware text terminals such as VT100s. A pty is controlled and configured as if it were a hardware device, using ioctl calls. For instance, the window size of a pty is set using the TIOCSWINSZ call. The TIOCSTI (terminal I/O control, simulate terminal input) ioctl function can push a character into a device stream.

Kernel extensions
When applications need to extend the kernel, for instance to accelerate network processing, ioctl calls provide a convenient way to bridge userspace code to kernel extensions. Kernel extensions can provide a location in the filesystem that can be opened by name, through which an arbitrary number of ioctl calls can be dispatched, allowing the extension to be programmed without adding system calls to the operating system.

sysctl alternative
According to an OpenBSD developer, ioctl and sysctl are the two system calls for extending the kernel, with sysctl possibly being the simpler of the two. In NetBSD, the sysmon_envsys framework for hardware monitoring uses ioctl through proplib; whereas OpenBSD and DragonFly BSD instead use sysctl for their corresponding hw.sensors framework. The original revision of envsys in NetBSD was implemented with ioctl before proplib was available, and had a message suggesting that the framework is experimental, and should be replaced by a sysctl(8) interface, should one be developed, which potentially explains the choice of sysctl in OpenBSD with its subsequent introduction of hw.sensors in 2003. However, when the envsys framework was redesigned in 2007 around proplib, the system call remained as ioctl, and the message was removed.

Implementations

Unix
The ioctl system call first appeared in Version 7 Unix, as a renamed stty. An ioctl call takes as parameters:
an open file descriptor
a request code number
either an integer value, possibly unsigned (going to the driver), or a pointer to data (either going to the driver, coming back from the driver, or both).
The kernel generally dispatches an ioctl call straight to the device driver, which can interpret the request number and data in whatever way required. The writers of each driver document request numbers for that particular driver and provide them as constants in a header file.
Some Unix systems, including Linux, have conventions which encode within the request number the size of the data to be transferred to/from the device driver, the direction of the data transfer and the identity of the driver implementing the request. Regardless of whether such a convention is followed, the kernel and the driver collaborate to deliver a uniform error code (denoted by the symbolic constant ENOTTY) to an application which makes a request of a driver which does not recognise it. The mnemonic ENOTTY (traditionally associated with the textual message "Not a typewriter") derives from the earliest systems that incorporated an ioctl call, where only the teletype (tty) device raised this error.
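The Unix calling convention described above can be illustrated with a minimal C sketch (assuming a POSIX-like system; TIOCGWINSZ, which reads the terminal window size into a struct winsize, is used here purely as an example request):

#include <sys/ioctl.h>
#include <stdio.h>
#include <unistd.h>
#include <errno.h>

int main(void)
{
    struct winsize ws;

    /* Third argument is a pointer to data filled in by the driver. */
    if (ioctl(STDOUT_FILENO, TIOCGWINSZ, &ws) == -1) {
        /* If stdout is not a terminal (for example, redirected to a file),
           the call fails with errno set to ENOTTY. */
        if (errno == ENOTTY)
            perror("ioctl");   /* on Linux, typically "Inappropriate ioctl for device" */
        return 1;
    }

    printf("%d rows, %d columns\n", ws.ws_row, ws.ws_col);
    return 0;
}

If standard output is redirected to a regular file rather than a terminal, the same call fails with ENOTTY, the error code discussed above.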
Though the symbolic mnemonic is fixed by compatibility requirements, some modern systems more helpfully render a more general message such as "Inappropriate device control operation" (or a localization thereof). TCSETS exemplifies an ioctl call on a serial port. The normal read and write calls on a serial port receive and send data bytes. An ioctl(fd,TCSETS,data) call, separate from such normal I/O, controls various driver options like handling of special characters, or the output signals on the port (such as the DTR signal).

Win32
A Win32 DeviceIoControl takes as parameters:
an open object handle (the Win32 equivalent of a file descriptor)
a request code number (the "control code")
a buffer for input parameters
length of the input buffer
a buffer for output results
length of the output buffer
an OVERLAPPED structure, if overlapped I/O is being used.
The Win32 device control code takes into consideration the mode of the operation being performed. There are 4 defined modes of operation, impacting the security of the device driver:
METHOD_IN_DIRECT: The buffer address is verified to be readable by the user mode caller.
METHOD_OUT_DIRECT: The buffer address is verified to be writable by the user mode caller.
METHOD_NEITHER: User mode virtual addresses are passed to the driver without mapping or validation.
METHOD_BUFFERED: IO Manager controlled shared buffers are used to move data to and from user mode.

Alternatives

Other vectored call interfaces
Devices and kernel extensions may be linked to userspace using additional new system calls, although this approach is rarely taken, because operating system developers try to keep the system call interface focused and efficient. On Unix operating systems, two other vectored call interfaces are popular: the fcntl ("file control") system call configures open files, and is used in situations such as enabling non-blocking I/O; and the setsockopt ("set socket option") system call configures open network sockets, a facility used to configure the ipfw packet firewall on BSD Unix systems.

Memory mapping

Unix
Device interfaces and input/output capabilities are sometimes provided using memory-mapped files. Applications that interact with devices open a location on the filesystem corresponding to the device, as they would for an ioctl call, but then use memory mapping system calls to tie a portion of their address space to that of the kernel. This interface is a far more efficient way to provide bulk data transfer between a device and a userspace application; individual ioctl or read/write system calls inflict overhead due to repeated userspace-to-kernel transitions, where access to a memory-mapped range of addresses incurs no such overhead.

Win32
Buffered IO methods or named file mapping objects can be used; however, for simple device drivers the standard DeviceIoControl METHOD_ accesses are sufficient.

Netlink
Netlink is a socket-like mechanism for inter-process communication (IPC), designed to be a more flexible successor to ioctl.

Implications

Complexity
ioctl calls minimize the complexity of the kernel's system call interface. However, by providing a place for developers to "stash" bits and pieces of kernel programming interfaces, ioctl calls complicate the overall user-to-kernel API. A kernel that provides several hundred system calls may provide several thousand ioctl calls.
Though the interface to ioctl calls appears somewhat different from conventional system calls, there is in practice little difference between an ioctl call and a system call; an ioctl call is simply a system call with a different dispatching mechanism. Many of the arguments against expanding the kernel system call interface could therefore be applied to ioctl interfaces. To application developers, system calls appear no different from application subroutines; they are simply function calls that take arguments and return values. The runtime libraries of the OS mask the complexity involved in invoking system calls. Unfortunately, runtime libraries do not make ioctl calls as transparent. Simple operations like discovering the IP addresses for a machine often require tangled messes of ioctl calls, each requiring magic numbers and argument structures. Libpcap and libdnet are two examples of third-party wrapper Unix libraries designed to mask the complexity of ioctl interfaces, for packet capture and packet I/O, respectively. Security The user-to-kernel interfaces of mainstream operating systems are often audited heavily for code flaws and security vulnerabilities prior to release. These audits typically focus on the well-documented system call interfaces; for instance, auditors might ensure that sensitive security calls such as changing user IDs are only available to administrative users. ioctl interfaces are more complicated, more diverse, and thus harder to audit than system calls. Furthermore, because ioctl calls can be provided by third-party developers, often after the core operating system has been released, ioctl call implementations may receive less scrutiny and thus harbor more vulnerabilities. Finally, many ioctl calls, particularly for third-party device drivers, are undocumented. Because the handler for an ioctl call resides directly in kernel mode, the input from userspace should be validated carefully. Vulnerabilities in device drivers can be exploited by local users by passing invalid buffers to ioctl calls. Win32 and Unix operating systems can protect a userspace device name from access by applications with specific access controls applied to the device. Security problems can arise when device driver developers do not apply appropriate access controls to the userspace accessible object. Some modern operating systems protect the kernel from hostile userspace code (such as applications that have been infected by buffer overflow exploits) using system call wrappers. System call wrappers implement role-based access control by specifying which system calls can be invoked by which applications; wrappers can, for instance, be used to "revoke" the right of a mail program to spawn other programs. ioctl interfaces complicate system call wrappers because there are large numbers of them, each taking different arguments, some of which may be required by normal programs. Further reading W. Richard Stevens, Advanced Programming in the UNIX Environment (Addison-Wesley, 1992, ), section 3.14. Generic I/O Control operations in the online manual for the GNU C Library "DeviceIoControl Documentation at the Microsoft Developer Network References Unix System calls
39158860
https://en.wikipedia.org/wiki/Viktor%20Bodrogi
Viktor Bodrogi
Viktor Bodrogi (born 28 December 1983) is a Hungarian former swimmer, who specialized in backstroke and butterfly events. He is a two-time Olympian, a five-time All-American honoree, and a multiple-time Hungarian title and record holder in both backstroke and butterfly (50, 100, and 200). He also defended two titles in the same stroke (200 m) at the 2000 and 2001 European Junior Swimming Championships in Dunkerque, France, and in Valletta, Malta, respectively. Bodrogi is a former varsity swimmer for the USC Trojans under head coach Dave Salo, and a graduate of history and social sciences at the University of Southern California in Los Angeles. Bodrogi's Olympic debut came as the youngest male swimmer (aged 16) for the Hungarian squad at the 2000 Summer Olympics in Sydney, competing in two swimming events. In the 200 m butterfly, Bodrogi placed twenty-fourth on the morning prelims. Swimming in heat three, he edged out Greece's Ioannis Drymonakos to take a second spot by a hundredth of a second (0.01) in 2:00.74. In his second event, 200 m backstroke, Bodrogi was disqualified from the fourth heat for passing and breaching the 15-metre start line during the race. At the 2001 FINA World Championships in Fukuoka, Japan, Bodrogi cleared a two-minute barrier to lead a third fastest semifinal time and set a Hungarian record of 1:59.24 in the 200 m backstroke. Bodrogi swam only for the 200 m backstroke at the 2004 Summer Olympics in Athens. He achieved a FINA A-standard of 2:00.13 from the national championships in Székesfehérvár. He challenged seven other swimmers in heat four, including British duo James Goddard and Gregor Tait. He rounded out the field to last place by more than half a second (0.50) behind New Zealand's Cameron Gibson in 2:03.16. Bodrogi failed to advance into the semifinals, as he placed twenty-fourth overall in the preliminaries. References External links Player Bio – USC Trojans 1983 births Living people Hungarian male swimmers Olympic swimmers of Hungary Swimmers at the 2000 Summer Olympics Swimmers at the 2004 Summer Olympics Male backstroke swimmers Male butterfly swimmers USC Trojans men's swimmers University of Southern California alumni Swimmers from Budapest
15085760
https://en.wikipedia.org/wiki/Dave%20Barker
Dave Barker
Dave Barker (born David John Crooks, 10 October 1947, Franklyn Town, Kingston, Jamaica) is a reggae and rocksteady singer who has made a string of solo albums along with recordings as a member of The Techniques and as half of the duo Dave and Ansell Collins. Biography Crooks was born in 1947 and raised by his grandmother and three uncles from the age of 4 after his mother emigrated to England in 1952 (his father emigrated to the United States before he was born). He was beaten by his uncles and teachers. He also had a stammer but began singing as a teenager, inspired by James Brown and Otis Redding, whom he heard on American radio stations. His first group was the Two Tones, formed with friends Brenton Matthews and Fathead, who recorded unsuccessfully for Duke Reid. Barker had a brief stint in Winston Riley's Techniques, singing alongside Riley and Bruce Ruffin, and formed a duo with Glen Brown in the duo Glen and Dave, recording for both Harry J and Coxsone Dodd, while also working in the pressing plant at Studio One. Brown introduced Crooks to Lee "Scratch" Perry, suggesting that Perry get Crooks to voice a track at a recording session. The resulting track was "Prisoner of Love", and led to Crooks becoming a regular vocalist for Perry, who decided Crooks should record as Dave Barker, and encouraged American-style deejay vocals in addition to Barker's usual high tenor singing. Barker had hit singles with "Shocks of a Mighty" and "Spinning Wheel" (with Melanie Jonas), followed in 1970 by his debut album Prisoner of Love. Working with Perry at the same time as The Wailers, Barker toasted over the latter's "Small Axe" for 1971's "Shocks 71". March 1971 brought international fame as part of a duo with Ansell Collins. "Double Barrel" was a number 1 hit in the United Kingdom, and was also the first recording on which drummer Sly Dunbar played. "Double Barrel" was followed in June of the same year with a number 7 UK hit in the form of "Monkey Spanner". The duo failed to sustain success in the UK, and Collins returned to Jamaica, with Barker settling in England and embarking once again on a solo career, releasing 1976's In The Ghetto, which while credited to Dave and Ansell Collins, was a solo effort. He then joined vocal group Chain Reaction along with Bruce Ruffin and Bobby Davis, targeting the soul market, but without great success. Barker has continued to record, including a live album featuring Barker playing with The Selecter. In 2005 Dave Barker performed a couple of tracks with UK ska/ reggae band The Riffs at Club Ska in London. A version of Double Barrel (recorded with The Riffs) was released on The Riffs - Live at Club Ska album the following year on Moon Ska Records. He appeared as a guest on The Radcliffe & Maconie BBC Radio 6 Music show on 20 August 2012 talking extensively about the music he was influenced by and the music had created alongside Ansell and about Jamaica's independence. Discography Albums Prisoner Of Love (1970) Trojan (Dave Barker meets The Upsetters) Double Barrel (1971) Big Tree/Techniques (Dave & Ansell Collins) also issued on RAS with different track listing In The Ghetto (1976) Trojan Roadblock (1979) Bushranger Never Lose Never Win (1976) Gull (Chain Reaction) Change of Action (1983) Vista Chase a Miracle (1983) Vista Classics (1991) Techniques (The Techniques) Monkey Spanner (1997) Trojan Dave And Ansel Collins (2001) Armoury (Dave & Ansell Collins) Kingston Affair (200?) 
Moon Ska (Dave Barker & The Selecter) References External links Dave Barker at Roots Archives Jamaican reggae musicians Musicians from Kingston, Jamaica Living people 1947 births
4684241
https://en.wikipedia.org/wiki/NTP%20server%20misuse%20and%20abuse
NTP server misuse and abuse
NTP server misuse and abuse covers a number of practices which cause damage or degradation to a Network Time Protocol (NTP) server, ranging from flooding it with traffic (effectively a DDoS attack) to violating the server's access policy or the NTP rules of engagement. One incident was branded NTP vandalism in an open letter from Poul-Henning Kamp to the router manufacturer D-Link in 2006. The term was later extended by others to retroactively include other incidents. There is, however, no evidence that any of these problems are deliberate vandalism. They are more usually caused by shortsighted or poorly chosen default configurations. A deliberate form of NTP server abuse came to note at the end of 2013, when NTP servers were used as part of amplification denial-of-service attacks. Some NTP servers would respond to a single "monlist" UDP request packet with packets describing up to 600 associations. By using a request with a spoofed IP address, attackers could direct an amplified stream of packets at a network. This resulted in one of the largest distributed denial-of-service attacks known at the time.

Common NTP client problems
The most troublesome problems have involved NTP server addresses hardcoded in the firmware of consumer networking devices. Because major manufacturers and OEMs have produced hundreds of thousands of devices using NTP, and because customers almost never upgrade the firmware of these devices, NTP query storm problems will persist for as long as the devices are in service. One particularly common NTP software error is to generate query packets at short (less than five second) intervals until a response is received. When placed behind aggressive firewalls that block the server responses, this implementation leads to a never-ending stream of client requests to the variously blocked NTP servers. Such over-eager clients (particularly those polling once per second) commonly make up more than 50% of the traffic of public NTP servers, despite being a minuscule fraction of the total clients. While it might be technically reasonable to send a few initial packets at short intervals, it is essential for the health of any network that client connection re-attempts are generated at logarithmically or exponentially decreasing rates to prevent denial of service. This in-protocol exponential or logarithmic back-off applies to any connectionless protocol, and by extension to many portions of connection-based protocols. Examples of this backing-off method can be found in the TCP specification for connection establishment, zero-window probing, and keepalive transmissions.

Notable cases

Tardis and Trinity College, Dublin
In October 2002, one of the earliest known cases of time server misuse resulted in problems for a web server at Trinity College, Dublin. The traffic was ultimately traced to misbehaving copies of a program called Tardis with thousands of copies around the world contacting the web server and obtaining a timestamp via HTTP. Ultimately, the solution was to modify the web server configuration so as to deliver a customized version of the home page (greatly reduced in size) and to return a bogus time value, which caused most of the clients to choose a different time server. An updated version of Tardis was later released to correct for this problem.

Netgear and the University of Wisconsin–Madison
The first widely known case of NTP server problems began in May 2003, when Netgear's hardware products flooded the University of Wisconsin–Madison's NTP server with requests.
University personnel initially assumed this was a malicious distributed denial of service attack and took action to block the flood at their network border. Rather than abating (as most DDoS attacks do), the flow increased, reaching 250,000 packets per second (150 megabits per second) by June. Subsequent investigation revealed that four models of Netgear routers were the source of the problem. It was found that the SNTP (Simple NTP) client in the routers had two serious flaws. First, it relied on a single NTP server (at the University of Wisconsin–Madison) whose IP address was hard-coded in the firmware. Second, it polled the server at one-second intervals until it received a response. A total of 707,147 products with the faulty client were produced. Netgear has released firmware updates for the affected products (DG814, HR314, MR814 and RP614) which query Netgear's own servers, poll only once every ten minutes, and give up after five failures. While this update fixes the flaws in the original SNTP client, it does not solve the larger problem. Most consumers will never update their router's firmware, particularly if the device seems to be operating properly. The University of Wisconsin–Madison NTP server continues to receive high levels of traffic from Netgear routers, with occasional floods of up to 100,000 packets per second. Netgear has donated $375,000 to the University of Wisconsin–Madison's Division of Information Technology for their help in identifying the flaw.

SMC and CSIRO
Also in 2003, another case forced the NTP servers of the Australian Commonwealth Scientific and Industrial Research Organisation's (CSIRO) National Measurement Laboratory to close to the public. The traffic was shown to come from a bad NTP implementation in some SMC router models where the IP address of the CSIRO server was embedded in the firmware. SMC has released firmware updates for the products: the 7004VBR and 7004VWBR models are known to be affected.

D-Link and Poul-Henning Kamp
In 2005 Poul-Henning Kamp, the manager of the only Danish Stratum 1 NTP server available to the general public, observed a huge rise in traffic and discovered that between 75 and 90% was originating with D-Link's router products. Stratum 1 NTP servers receive their time signal from an accurate external source, such as a GPS receiver, radio clock, or a calibrated atomic clock. By convention, Stratum 1 time servers should only be used by applications requiring extremely precise time measurements, such as scientific applications or Stratum 2 servers with a large number of clients. A home networking router does not meet either of these criteria. In addition, Kamp's server's access policy explicitly limited it to servers directly connected to the Danish Internet Exchange (DIX). The direct use of this and other Stratum 1 servers by D-Link's routers resulted in a huge rise in traffic, increasing bandwidth costs and server load. In many countries, official timekeeping services are provided by a government agency (such as NIST in the U.S.). Since there is no Danish equivalent, Kamp provides his time service "pro bono publico". In return, DIX agreed to provide a free connection for his time server under the assumption that the bandwidth involved would be relatively low, given the limited number of servers and potential clients. With the increased traffic caused by the D-Link routers, DIX requested that he pay a yearly connection fee.
Kamp contacted D-Link in November 2005, hoping to get them to fix the problem and compensate him for the time and money he spent tracking down the problem and the bandwidth charges caused by D-Link products. The company denied any problem, accused him of extortion, and offered an amount in compensation which Kamp asserted did not cover his expenses. On 7 April 2006, Kamp posted the story on his website. The story was picked up by Slashdot, Reddit and other news sites. After going public, Kamp realized that D-Link routers were directly querying other Stratum 1 time servers, violating the access policies of at least 43 of them in the process. On April 27, 2006, D-Link and Kamp announced that they had "amicably resolved" their dispute.

IT providers and swisstime.ethz.ch
For over 20 years ETH Zurich provided open access to the time server swisstime.ethz.ch for operational time synchronization. Due to excessive bandwidth usage, averaging upwards of 20 GB per day, it became necessary to direct external usage to public time server pools, such as ch.pool.ntp.org. Misuse, caused mostly by IT providers synchronizing their client infrastructures, made unusually high demands on network traffic, causing ETH to take effective measures. The availability of swisstime.ethz.ch has since been changed to closed access, and access to the server is now blocked entirely for the NTP protocol.

Snapchat on iOS
In December 2016, the operator community NTPPool.org noticed a significant increase in NTP traffic, starting December 13. Investigation showed that the Snapchat application running on iOS was prone to querying all NTP servers that were hardcoded into a third-party iOS NTP library, and that a request to a Snapchat-owned domain followed the NTP request flood. After Snap Inc. was contacted, their developers resolved the problem within 24 hours of notification with an update to their application. As an apology and to assist in dealing with the load they generated, Snap also contributed timeservers to the Australia and South America NTP pools. As a positive side effect, the NTP library used is open source, and the error-prone default settings were improved after feedback from the NTP community.

Connectivity testing on TP-Link Wi-Fi extenders
Firmware for TP-Link Wi-Fi extenders in 2016 and 2017 hardcoded five NTP servers, including Fukuoka University in Japan and the Australia and New Zealand NTP server pools, and would repeatedly issue one NTP request and five DNS requests every five seconds, consuming 0.72 GB per month per device. The excessive requests were misused to power an Internet connectivity check that displayed the device's connectivity status in its web administration interface. The issue was acknowledged by TP-Link's branch in Japan, which pushed the company to redesign the connectivity test and issue firmware updates addressing the issue for affected devices. The affected devices are unlikely to install the new firmware, as Wi-Fi extenders from TP-Link do not install firmware updates automatically, nor do they notify the owner about firmware update availability. TP-Link firmware update availability also varies by country, even though the issue affects all Wi-Fi range extenders sold globally. The servers of Fukuoka University are reported as being shut down sometime between February and April 2018, and should be removed from the NTP Public Time Server Lists.
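The client behaviour at the root of several of these incidents was retrying at a fixed short interval rather than backing off. A short C sketch of the back-off schedule recommended earlier (the starting interval, cap and attempt count are illustrative choices, not values mandated by the NTP specification):

#include <stdio.h>

int main(void)
{
    int interval = 4;               /* seconds to wait before the first retry */
    const int max_interval = 1024;  /* cap the wait at roughly 17 minutes     */

    /* Print the schedule a well-behaved client would follow while no reply
       arrives; a real client would sleep for 'interval' seconds, resend its
       query, and stop retrying as soon as a response is received. */
    for (int attempt = 1; attempt <= 10; attempt++) {
        printf("attempt %2d: wait %4d s before retrying\n", attempt, interval);
        interval *= 2;              /* exponential back-off, not a fixed 1 s loop */
        if (interval > max_interval)
            interval = max_interval;
    }
    return 0;
}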
Technical solutions
After these incidents, it became clear that apart from stating a server's access policy, a technical means of enforcing a policy was needed. One such mechanism was provided by extending the semantics of the Reference Identifier field in an NTP packet when the Stratum field is 0. In January 2006, RFC 4330 was published, updating details of the SNTP protocol, but also provisionally clarifying and extending the related NTP protocol in some areas. Sections 8 to 11 of RFC 4330 are of particular relevance to this topic (The Kiss-o'-Death Packet, On Being a Good Network Citizen, Best Practices, Security Considerations). Section 8 introduces Kiss-o'-Death (KoD) packets, which a server can send to tell a client to stop sending requests. However, the new requirements of the NTP protocol do not work retroactively, and old clients and implementations of earlier versions of the protocol do not recognize KoD and act on it. For the time being there are no good technical means to counteract misuse of NTP servers. In 2015, due to possible attacks on the Network Time Protocol, a Network Time Security for NTP (Internet Draft draft-ietf-ntp-using-nts-for-ntp-19) was proposed using a Transport Layer Security implementation. On June 21, 2019, Cloudflare started a trial service around the world, based on a previous Internet Draft.

References

External links
The Trinity College incident
SMC/CSIRO Incident
Poul-Henning Kamp's open letter to D-Link (changed on 27 April 2006)
Copy of Poul-Henning Kamp's Original letter to D-Link (from 23 April 2006)
When Firmware Attacks! (DDoS by D-Link) by Richard Clayton
OSnews' article on "NTP vandalism"
Engadget on "NTP vandalism"
FortiGate bug: firewalls sending excessive requests to the NTP Pool
Timesyncd bug: doesn't limit rate of requests

Network time-related software
Denial-of-service attacks
911519
https://en.wikipedia.org/wiki/Freedom%20of%20information
Freedom of information
Freedom of information is freedom of a person or people to publish and consume information. Access to information is the ability for an individual to seek, receive and impart information effectively. This sometimes includes "scientific, indigenous, and traditional knowledge; freedom of information, building of open knowledge resources, including open Internet and open standards, and open access and availability of data; preservation of digital heritage; respect for cultural and linguistic diversity, such as fostering access to local content in accessible languages; quality education for all, including lifelong and e-learning; diffusion of new media and information literacy and skills, and social inclusion online, including addressing inequalities based on skills, education, gender, age, race, ethnicity, and accessibility by those with disabilities; and the development of connectivity and affordable ICTs, including mobile, the Internet, and broadband infrastructures". Public access to government information, including through the open publication of information, and formal freedom of information laws, is widely considered an important basic component of democracy and integrity in government. Michael Buckland defines six types of barriers that have to be overcome in order for access to information to be achieved: identification of the source, availability of the source, price to the user, cost to the provider, cognitive access, and acceptability. While "access to information", "right to information", "right to know" and "freedom of information" are sometimes used as synonyms, the diverse terminology does highlight particular (albeit related) dimensions of the issue. Freedom of information is related to freedom of expression, which can apply to any medium, be it oral, written, printed, electronic, or through art forms. This means that the protection of freedom of speech as a right includes not only the content, but also the means of expression. Freedom of information is a separate concept which sometimes comes into conflict with the right to privacy in the context of the Internet and information technology. As with the right to freedom of expression, the right to privacy is a recognized human right, and freedom of information acts as an extension to this right. The government of the United Kingdom has theorised it as being an extension of freedom of speech, and a fundamental human right. It is recognized in international law. The international and United States Pirate Parties have established political platforms based largely on freedom of information issues.

Overview
There has been a significant increase in access to the Internet, which reached just over three billion users in 2014, amounting to about 42 per cent of the world's population. But the digital divide continues to exclude over half of the world's population, particularly women and girls, and especially in Africa and the least developed countries as well as several Small Island Developing States. Further, individuals with disabilities can either be advantaged or further disadvantaged by the design of technologies or through the presence or absence of training and education.

Context

The digital divide
Access to information faces great difficulties because of the global digital divide. A digital divide is an economic and social inequality with regard to access to, use of, or impact of information and communication technologies (ICT).
The divide within countries (such as the digital divide in the United States) may refer to inequalities between individuals, households, businesses, or geographic areas, usually at different socioeconomic levels or other demographic categories. The divide between differing countries or regions of the world is referred to as the global digital divide, examining this technological gap between developing and developed countries on an international scale.

Racial divide
Although many groups in society are affected by a lack of access to computers or the internet, communities of color are specifically observed to be negatively affected by the digital divide. This is evident when it comes to observing home-internet access among different races and ethnicities. 81% of Whites and 83% of Asians have home internet access, compared to 70% of Hispanics, 68% of Blacks, 72% of American Indian/Alaska Natives, and 68% of Native Hawaiian/Pacific Islanders. Although income is a factor in home-internet access disparities, there are still racial and ethnic inequalities present among those within lower income groups. 58% of low-income Whites are reported to have home-internet access, in comparison to 51% of Hispanics and 50% of Blacks. This information is reported in a report titled "Digital Denied: The Impact of Systemic Racial Discrimination on Home-Internet Adoption", which was published by the DC-based public interest group Free Press. The report concludes that structural barriers and discrimination perpetuating bias against people of different races and ethnicities contribute to the digital divide. The report also concludes that those who do not have internet access still have a high demand for it, and that a reduction in the price of home-internet access would allow for an increase in equitable participation and improve internet adoption by marginalized groups. Digital censorship and algorithmic bias are observed to be present in the racial divide. Hate-speech rules, as well as the hate-speech algorithms used by online platforms such as Facebook, have favored white males and those belonging to elite groups over marginalized groups in society, such as women and people of color. In a collection of internal documents gathered in a project conducted by ProPublica, Facebook's guidelines for distinguishing hate speech and recognizing protected groups revealed slides that identified three groups, each one containing either female drivers, black children, or white men. When the question of which subset group was protected was posed, the correct answer was white men. Minority group language is negatively impacted by automated tools of hate detection due to human bias that ultimately decides what is considered hate speech and what is not. Online platforms have also been observed to tolerate hateful content towards people of color but restrict content from people of color. Aboriginal memes on a Facebook page were posted with racially abusive content and comments depicting Aboriginal people as inferior. While the contents on the page were removed by the originators after an investigation conducted by the Australian Communications and Media Authority, Facebook did not delete the page and has allowed it to remain under the classification of controversial humor. However, a post by an African American woman addressing her discomfort with being the only person of color in a small-town restaurant was met with racist and hateful messages.
When she reported the online abuse to Facebook, her account was suspended by Facebook for three days for posting the screenshots, while those responsible for the racist comments she received were not suspended. Shared experiences between people of color can be at risk of being silenced under removal policies for online platforms. Disability divide Inequities in access to information technologies are present among individuals living with a disability in comparison to those who are not living with a disability. According to Pew Internet, 54% of households with a person who has a disability have home internet access, compared to 81% of households that do not include a person with a disability. The type of disability an individual has, such as quadriplegia or a disability affecting the hands, can prevent them from interacting with computer screens and smartphone screens. There is also a lack of access to technology and home internet among those who have cognitive and auditory disabilities. There is concern about whether the increase in the use of information technologies will increase equality by offering opportunities for individuals living with disabilities, or whether it will only add to present inequalities and lead to individuals living with disabilities being left behind in society. Issues such as the perception of disabilities in society, federal and state government policy, corporate policy, mainstream computing technologies, and real-time online communication have been found to contribute to the impact of the digital divide on individuals with disabilities. People with disabilities are also the targets of online abuse. Online disability hate crimes across the UK increased by 33% within a year, according to a report published by the charity Leonard Cheshire. Accounts of online hate abuse towards people with disabilities were shared during an incident in 2019 when model Katie Price's son was the target of online abuse attributed to his disability. In response to the abuse, Katie Price launched a campaign to ensure that Britain's MPs hold those guilty of perpetrating online abuse towards people with disabilities accountable. Online abuse towards individuals with disabilities is a factor that can discourage people from engaging online, which could prevent them from learning information that could improve their lives. Many individuals living with disabilities face online abuse in the form of accusations of benefit fraud and "faking" their disability for financial gain, which in some cases leads to unnecessary investigations. Gender divide Women's freedom of information and access to information globally are less than men's. Social barriers such as illiteracy and lack of digital empowerment have created stark inequalities in navigating the tools used for access to information, often exacerbating lack of awareness of issues that directly relate to women and gender, such as sexual health. There have also been examples of more extreme measures, such as local community authorities banning or restricting mobile phone use for girls and unmarried women in their communities. According to the Wharton School of Public Policy, the expansion of Information and Communication Technology (ICT) has resulted in multiple disparities that have had an impact on women's access to ICT, with the gender gap being as high as 31% in some developing countries and 12% globally in 2016. 
The socioeconomic barriers that result from these disparities are often referred to as the digital divide. Among low-income countries and low-income regions alike, the high price of internet access presents a barrier to women, since women are generally paid less and face an unequal division between paid and unpaid work. Cultural norms in certain countries may also restrict women's access to the internet and technology, for example by preventing women from attaining a certain level of education or from being breadwinners in their households, resulting in a lack of control over household finances. However, even when women have access to ICT, the digital divide is still prevalent. LGBTQIA divide, and repression by states and tech companies A number of states, including some that have introduced new laws since 2010, notably censor voices from and content related to the LGBTQI community, with serious consequences for access to information about sexual orientation and gender identity. Digital platforms play a powerful role in limiting access to certain content, such as YouTube's 2017 decision to classify non-explicit videos with LGBTQIA themes as 'restricted', a classification designed to filter out "potentially inappropriate content". The internet provides information that can create a safe space for marginalized groups such as the LGBTQIA community to connect with others and engage in honest dialogues and conversations about issues affecting their communities. It can also be viewed as an agent of change for the LGBTQIA community and provide a means of engaging in social justice. It can allow LGBTQIA individuals living in rural or isolated areas to gain access to information that is not available locally, as well as information from other LGBTQIA individuals. This includes information on healthcare, partners, and news. GayHealth provides online medical and health information, and the Gay and Lesbian Alliance Against Defamation publishes online material and news focused on human rights campaigns and LGBTQIA issues. The Internet also allows LGBTQIA individuals to maintain anonymity. A lack of internet access, owing to limited broadband availability in remote rural areas, can hinder these benefits. LGBT Tech has emphasized launching newer technologies such as 5G in order to help close the digital divide that can cause members of the LGBTQIA community to lose access to reliable and fast technology providing information on healthcare, economic opportunities, and safe communities. There are also other factors that can prevent LGBTQIA members from accessing information online or that subject them to having their information abused. Internet filters in public schools and libraries are also used to censor and restrict content related to the LGBTQIA community. Online predators also target LGBTQIA members by seeking out their personal information and providing them with inaccurate information. The internet can provide LGBTQIA individuals with access to information for dealing with societal setbacks, through therapeutic advice, social support systems, and an online environment that fosters the sharing of ideas and concerns and helps LGBTQIA individuals move forward. 
This can be fostered by human service professionals who use the internet, together with evidence and evaluation, to provide information to LGBTQIA individuals dealing with the circumstances of coming out and the repercussions that may follow. The security argument With the evolution of the digital age, application of freedom of speech and its corollaries (freedom of information, access to information) becomes more controversial as new means of communication and new restrictions arise, including government control and commercial methods that put personal information at risk. Digital access Freedom of information (or information freedom) also refers to the protection of the right to freedom of expression with regard to the Internet and information technology. Freedom of information may also concern censorship in an information technology context, i.e. the ability to access Web content without censorship or restrictions. Information and media literacy According to Kuzmin and Parshakova, access to information entails learning in formal and informal education settings. It also entails fostering the competencies of information and media literacy that enable users to be empowered and make full use of access to the Internet. UNESCO's support for journalism education is an example of how UNESCO seeks to contribute to the provision of independent and verifiable information accessible in cyberspace. Promoting access for disabled persons has been strengthened by the UNESCO-convened conference in 2014, which adopted the "New Delhi Declaration on Inclusive ICTs for Persons with Disabilities: Making Empowerment a Reality". Open standards According to the International Telecommunication Union (ITU), "Open Standards" are standards made available to the general public and developed (or approved) and maintained via a collaborative and consensus-driven process; they facilitate interoperability and data exchange among different products or services and are intended for widespread adoption. A UNESCO study considers that adopting open standards has the potential to contribute to the vision of a ‘digital commons’ in which citizens can freely find, share, and re-use information. Promoting open source software, which is both free of cost and freely modifiable, could help meet the particular needs of marginalized users. Other measures include advocacy on behalf of minority groups, such as targeted outreach, better provision of Internet access, tax incentives for private companies and organizations working to enhance access, and addressing underlying issues of social and economic inequality. The Information Society and freedom of expression The World Summit on the Information Society (WSIS) Declaration of Principles adopted in 2003 reaffirms democracy and the universality, indivisibility and interdependence of all human rights and fundamental freedoms. The Declaration also makes specific reference to the importance of the right to freedom of expression for the "Information Society" in stating: We reaffirm, as an essential foundation of the Information Society, and as outlined in Article 19 of the Universal Declaration of Human Rights, that everyone has the right to freedom of opinion and expression; that this right includes freedom to hold opinions without interference and to seek, receive and impart information and ideas through any media and regardless of frontiers. Communication is a fundamental social process, a basic human need and the foundation of all social organisation. 
It is central to the Information Society. Everyone, everywhere should have the opportunity to participate and no one should be excluded from the benefits the Information Society offers. The 2003 WSIS Declaration of Principles also acknowledged that "it is necessary to prevent the use of information resources and technologies for criminal and terrorist purposes, while respecting human rights". Wolfgang Benedek comments that the WSIS Declaration only contains a number of references to human rights and does not spell out any procedures or mechanisms to ensure that human rights are considered in practice. Hacktivismo The digital rights group Hacktivismo, founded in 1999, argues that access to information is a basic human right. The group's beliefs are described fully in the "Hacktivismo Declaration", which calls for the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights (ICCPR) to be applied to the Internet. The Declaration recalls the duty of states party to the ICCPR to protect the right to freedom of expression with regard to the internet and, in this context, freedom of information. The Hacktivismo Declaration recognizes "the importance to fight against human rights abuses with respect to reasonable access to information on the Internet" and calls upon the hacker community to "study ways and means of circumventing state sponsored censorship of the internet" and "implement technologies to challenge information rights violations". The Hacktivismo Declaration does, however, recognise that the right to freedom of expression is subject to limitations, stating "we recognized the right of governments to forbid the publication of properly categorized state secrets, child pornography, and matters related to personal privacy and privilege, among other accepted restrictions." However, the Hacktivismo Declaration also states "but we oppose the use of state power to control access to the works of critics, intellectuals, artists, or religious figures." Global Network Initiative On October 29, 2008, the Global Network Initiative (GNI) was founded upon its "Principles on Freedom of Expression and Privacy". The Initiative was launched in the 60th anniversary year of the Universal Declaration of Human Rights (UDHR) and is based on internationally recognized laws and standards for human rights on freedom of expression and privacy set out in the UDHR, the International Covenant on Civil and Political Rights (ICCPR) and the International Covenant on Economic, Social and Cultural Rights (ICESCR). Participants in the Initiative include the Electronic Frontier Foundation, Human Rights Watch, Google, Microsoft, Yahoo, other major companies, human rights NGOs, investors, and academics. According to reports, Cisco Systems was invited to the initial discussions but did not take part in the initiative. Harrington Investments, which proposed that Cisco establish a human rights board, has dismissed the GNI as a voluntary code of conduct with no impact. Chief executive John Harrington called the GNI "meaningless noise" and instead called for bylaws to be introduced that force boards of directors to accept human rights responsibilities. Internet censorship Jo Glanville, editor of the Index on Censorship, states that "the internet has been a revolution for censorship as much as for free speech". The concept of freedom of information has emerged in response to state-sponsored censorship, monitoring and surveillance of the internet. 
Internet censorship includes the control or suppression of the publishing or accessing of information on the Internet. According to the Reporters Without Borders (RSF) "internet enemy list", the following states engage in pervasive internet censorship: Cuba, Iran, Maldives, Myanmar/Burma, North Korea, Syria, Tunisia, Uzbekistan and Vietnam. A widely publicised example is the so-called "Great Firewall of China" (in reference both to its role as a network firewall and to the ancient Great Wall of China). The system blocks content by preventing IP addresses from being routed through and consists of standard firewall and proxy servers at the Internet gateways. The system also selectively engages in DNS poisoning when particular sites are requested. The government does not appear to be systematically examining Internet content, as this seems to be technically impractical. Internet censorship in the People's Republic of China is conducted under a wide variety of laws and administrative regulations. In accordance with these laws, more than sixty Internet regulations have been made by the People's Republic of China (PRC) government, and censorship systems are vigorously implemented by provincial branches of state-owned ISPs, business companies, and organizations. In 2010, U.S. Secretary of State Hillary Clinton, speaking on behalf of the United States, declared 'we stand for a single internet where all of humanity has equal access to knowledge and ideas'. In her 'Remarks on Internet Freedom' she also drew attention to how 'even in authoritarian countries, information networks are helping people discover new facts and making governments more accountable', while reporting President Barack Obama's pronouncement 'the more freely information flows, the stronger societies become'. Privacy protections Privacy, surveillance and encryption The increasing access to and reliance on digital media to receive and produce information have increased the possibilities for States and private sector companies to track individuals’ behaviors, opinions and networks. States have increasingly adopted laws and policies to legalize monitoring of communication, justifying these practices with the need to defend their own citizens and national interests. In parts of Europe, new anti-terrorism laws have enabled a greater degree of government surveillance and an increase in the ability of intelligence authorities to access citizens’ data. While legality is a precondition for legitimate limitations of human rights, the issue is also whether a given law is aligned with other criteria for justification such as necessity, proportionality and legitimate purpose. International framework The United Nations Human Rights Council has taken a number of steps to highlight the importance of the universal right to privacy online. In 2015, in a resolution on the right to privacy in the digital age, it established a United Nations Special Rapporteur on the Right to Privacy. In 2017, the Human Rights Council emphasized that the ‘unlawful or arbitrary surveillance and/or interception of communications, as well as the unlawful or arbitrary collection of personal data, as highly intrusive acts, violate the right to privacy, can interfere with other human rights, including the right to freedom of expression and to hold opinions without interference’. 
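The DNS poisoning attributed above to national filtering systems such as the "Great Firewall" can, in principle, be observed by comparing the answers that different resolvers return for the same hostname. The following minimal sketch is an illustrative assumption rather than a tool referenced in this article: it relies on the third-party dnspython package, a placeholder address for an in-network resolver and a well-known public resolver, and divergent answers can just as easily reflect ordinary content-delivery-network behaviour, so any result would need careful interpretation.

```python
# Hedged sketch: compare the A records two resolvers return for one hostname.
# Requires the third-party package dnspython (pip install dnspython).
import dns.resolver


def lookup(hostname, nameserver):
    """Return the set of IPv4 addresses a specific nameserver gives for hostname."""
    resolver = dns.resolver.Resolver(configure=False)
    resolver.nameservers = [nameserver]
    try:
        answer = resolver.resolve(hostname, "A", lifetime=5.0)
        return {record.address for record in answer}
    except Exception as exc:  # timeouts, NXDOMAIN, refused queries, etc.
        print(f"{nameserver}: lookup failed ({exc})")
        return set()


if __name__ == "__main__":
    host = "example.org"                  # hypothetical test hostname
    inside = lookup(host, "192.0.2.53")   # placeholder: a resolver inside the network under study
    outside = lookup(host, "8.8.8.8")     # a public resolver outside that network
    if inside and outside and inside.isdisjoint(outside):
        print(f"Answers for {host} diverge: {inside} vs {outside}")
    else:
        print(f"No obvious divergence for {host}")
```

Systematic censorship-measurement projects use many vantage points and control domains before drawing conclusions; a single comparison like this is only a starting point.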
Regional framework There have been a number of regional efforts, particularly through the courts, to establish regulations that deal with data protection, privacy and surveillance, and which affect their relationship to journalistic uses. The Council of Europe’s Convention 108, the Convention for the Protection of Individuals with regard to Automatic Processing of Personal Data, has undergone a modernization process to address new challenges to privacy. Since 2012, four new countries belonging to the Council of Europe have signed or ratified the Convention, as well as three countries that do not belong to the Council, from Africa and Latin America. Regional courts are also playing a noteworthy role in the development of online privacy regulations. In 2015 the European Court of Justice found that the so-called ‘Safe Harbour Agreement’, which allowed private companies to ‘legally transmit personal data from their European subscribers to the US’, was not valid under European law in that it did not offer sufficient protections for the data of European citizens or protect them from arbitrary surveillance. In 2016, the European Commission and United States Government reached an agreement to replace Safe Harbour, the EU-U.S. Privacy Shield, which includes data protection obligations on companies receiving personal data from the European Union, safeguards on United States government access to data, protection and redress for individuals, and an annual joint review to monitor implementation. The European Court of Justice's 2014 decision in the Google Spain case allowed people to claim a "right to be forgotten" or "right to be de-listed" in a much-debated approach to the balance between privacy, free expression and transparency. Following the Google Spain decision, the "right to be forgotten" or "right to be de-listed" has been recognized in a number of countries across the world, particularly in Latin America and the Caribbean. Recital 153 of the European Union General Data Protection Regulation states "Member States law should reconcile the rules governing freedom of expression and information, including journalistic...with the right to the protection of personal data pursuant to this Regulation. The processing of personal data solely for journalistic purposes…should be subject to derogations or exemptions from certain provisions of this Regulation if necessary to reconcile the right to the protection of personal data with the right to freedom of expression and information, as enshrined in Article 11 of the Charter." National framework The number of countries around the world with data protection laws has also continued to grow. According to the World Trends Report 2017/2018, between 2012 and 2016, 20 UNESCO Member States adopted data protection laws for the first time, bringing the global total to 101. Of these new adoptions, nine were in Africa, four in Asia and the Pacific, three in Latin America and the Caribbean, two in the Arab region and one in Western Europe and North America. During the same period, 23 countries revised their data protection laws, reflecting the new challenges to data protection in the digital era. According to Global Partners Digital, only four States have secured in national legislation a general right to encryption, and 31 have enacted national legislation that grants law enforcement agencies the power to intercept or decrypt encrypted communications. 
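End-to-end encryption, which figures in both the legislation discussed above and the private-sector developments described in the next section, means that only the communicating endpoints hold the keys needed to read a message, so an intermediary that intercepts the ciphertext cannot decrypt it. The minimal sketch below uses the third-party PyNaCl library (an assumption; it is not cited in this article) to illustrate the principle with public-key authenticated encryption; it is not the protocol of any particular messaging service, and real systems such as WhatsApp use far more elaborate designs.

```python
# Hedged sketch of the end-to-end principle using PyNaCl (pip install pynacl).
from nacl.public import PrivateKey, Box

# Each party generates a key pair on their own device; only public keys are exchanged.
alice_private = PrivateKey.generate()
bob_private = PrivateKey.generate()

# Alice encrypts for Bob using her private key and Bob's public key.
alice_box = Box(alice_private, bob_private.public_key)
ciphertext = alice_box.encrypt(b"meet at noon")

# A server or interceptor relaying the message sees only the ciphertext.
# Bob decrypts with his private key and Alice's public key.
bob_box = Box(bob_private, alice_private.public_key)
print(bob_box.decrypt(ciphertext))  # b'meet at noon'
```

The point of the sketch is simply that no key material needs to leave the two endpoints, which is why policy debates about interception often turn to the endpoints themselves or to compelling providers to weaken such schemes.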
Private sector implications Since 2010, private sector companies have increasingly adopted measures ‘to increase the protection of the information and communications of their users and to promote trust in their services’. High-profile examples of this have been WhatsApp's implementation of full end-to-end encryption in its messenger service, and Apple's contestation of a law enforcement warrant to unlock an iPhone used by the perpetrators of a terror attack. Protection of confidential sources and whistle-blowing Rapid changes in the digital environment, coupled with contemporary journalistic practice that increasingly relies on digital communication technologies, pose new risks for the protection of journalism sources. Leading contemporary threats include mass surveillance technologies, mandatory data retention policies, and disclosure of personal digital activities by third-party intermediaries. Without a thorough understanding of how to shield their digital communications and traces, journalists and sources can unwittingly reveal identifying information. Employment of national security legislation, such as counter-terrorism laws, to override existing legal protections for sources is also becoming common practice. In many regions, persistent secrecy laws or new cybersecurity laws threaten the protection of sources, such as when they give governments the right to intercept online communications in the interest of overly broad definitions of national security. Developments with regard to source protection laws occurred between 2007 and mid-2015 in 84 (69 per cent) of the 121 countries surveyed. The Arab region had the most notable developments, where 86 per cent of States had demonstrated shifts, followed by Latin America and the Caribbean (85 per cent), Asia and the Pacific (75 per cent), Western Europe and North America (66 per cent) and finally Africa, where 56 per cent of States examined had revised their source protection laws. As of 2015, at least 60 states had adopted some form of whistle-blower protection. At the international level, the United Nations Convention against Corruption entered into force in 2005. By July 2017, the majority of countries around the globe, 179 in total, had ratified the Convention, which includes provisions for the protection of whistleblowers. Regional conventions against corruption that contain protection for whistle-blowers have also been widely ratified. These include the Inter-American Convention Against Corruption, which has been ratified by 33 Member States, and the African Union Convention on Preventing and Combating Corruption, which has been ratified by 36 UNESCO Member States. In 2009, the Organisation for Economic Co-operation and Development (OECD) Council adopted the Recommendation for Further Combating Bribery of Foreign Public Officials in International Business Transactions. Media pluralism According to the World Trends Report, access to a variety of media increased between 2012 and 2016. The internet has registered the highest growth in users, supported by massive investments in infrastructure and significant uptake in mobile usage. Internet mobile The United Nations 2030 Agenda for Sustainable Development, the work of the Broadband Commission for Sustainable Development, co-chaired by UNESCO, and the Internet Governance Forum’s intersessional work on ‘Connecting the Next Billion’ are proof of the international commitments towards providing Internet access for all. 
According to the International Telecommunication Union (ITU), by the end of 2017 an estimated 48 per cent of individuals regularly connected to the internet, up from 34 per cent in 2012. Despite the significant increase in absolute numbers, however, in the same period the annual growth rate of internet users slowed, with five per cent annual growth in 2017, down from a 10 per cent growth rate in 2012. The number of unique mobile cellular subscriptions increased from 3.89 billion in 2012 to 4.83 billion in 2016, equivalent to two-thirds of the world’s population, with more than half of subscriptions located in Asia and the Pacific. The number of subscriptions is predicted to rise to 5.69 billion users in 2020. As of 2016, almost 60 per cent of the world’s population had access to a 4G broadband cellular network, up from almost 50 per cent in 2015 and 11 per cent in 2012. The limits that users face on accessing information via mobile applications coincide with a broader process of fragmentation of the internet. Zero-rating, the practice of internet providers allowing users to access specific content or applications free of charge, has offered some opportunities for individuals to surmount economic hurdles, but has also been criticized as creating a ‘two-tiered’ internet. To address the issues with zero-rating, an alternative model has emerged in the concept of ‘equal rating’ and is being tested in experiments by Mozilla and Orange in Africa. Equal rating prevents prioritization of one type of content and zero-rates all content up to a specified data cap. Some countries in the region had a handful of plans to choose from (across all mobile network operators), while others, such as Colombia, offered as many as 30 pre-paid and 34 post-paid plans. Broadcast media In Western Europe and North America, the primacy of television as a main source of information is being challenged by the internet, while in other regions, such as Africa, television is gaining greater audience share than radio, which has historically been the most widely accessed media platform. Age plays a profound role in determining the balance between radio, television and the internet as the leading source of news. According to the 2017 Reuters Institute Digital News Report, in 36 countries and territories surveyed, 51 per cent of adults 55 years and older consider television as their main news source, compared to only 24 per cent of respondents between 18 and 24. The pattern is reversed when it comes to online media, chosen by 64 per cent of users between 18 and 24 as their primary source, but only by 28 per cent of users 55 and older. According to the Arab Youth Survey, in 2016, 45 per cent of the young people interviewed considered social media as a major source of news. Satellite television has continued to add global or transnational alternatives to national viewing options for many audiences. Global news providers such as the BBC, Al Jazeera, Agence France-Presse, RT (formerly Russia Today) and the Spanish-language Agencia EFE have used the internet and satellite television to better reach audiences across borders and have added specialist broadcasts to target specific foreign audiences. Reflecting a more outward-looking orientation, China Global Television Network (CGTN), the multi-language and multi-channel grouping owned and operated by China Central Television, changed its name from CCTV-NEWS in January 2017. 
After years of budget cuts and shrinking global operations, in 2016 the BBC announced the launch of 12 new language services (in Afaan Oromo, Amharic, Gujarati, Igbo, Korean, Marathi, Pidgin, Punjabi, Telugu, Tigrinya, and Yoruba), branded as a component of its biggest expansion ‘since the 1940s’. Also expanding access to content are changes in usage patterns with non-linear viewing, as online streaming is becoming an important component of users’ experience. Since expanding its global service to 130 new countries in January 2016, Netflix experienced a surge in subscribers, surpassing 100 million subscribers in the second quarter of 2017, up from 40 million in 2012. The audience has also become more diverse, with 47 per cent of users based outside of the United States, where the company began in 1997. Newspaper industry The Internet has challenged the press as an alternative source of information and opinion but has also provided a new platform for newspaper organizations to reach new audiences. Between 2012 and 2016, print newspaper circulation continued to fall in almost all regions, with the exception of Asia and the Pacific, where the dramatic increase in sales in a few select countries has offset falls in historically strong Asian markets such as Japan and the Republic of Korea. Between 2012 and 2016, India’s print circulation grew by 89 per cent. As many newspapers make the transition to online platforms, revenues from digital subscriptions and digital advertising have been growing significantly. How to capture more of this growth remains a pressing challenge for newspapers. International framework UNESCO's work Mandate The 2030 Agenda for Sustainable Development, adopted by the United Nations General Assembly in September 2015, includes Goal 16.10 to ‘ensure public access to information and protect fundamental freedoms, in accordance with national legislation and international agreements’. UNESCO has been assigned as the custodian agency responsible for global reporting on indicator 16.10.2 regarding the ‘number of countries that adopt and implement constitutional, statutory and/or policy guarantees for public access to information’. This responsibility aligns with UNESCO's commitment to promote universal access to information, grounded in its constitutional mandate to ‘promote the free flow of ideas by word and image’. In 2015, UNESCO's General Conference proclaimed 28 September as the International Day for Universal Access to Information. The following year, participants of UNESCO's annual celebration of World Press Freedom Day adopted the Finlandia Declaration on access to information and fundamental freedoms, 250 years after the first freedom of information law was adopted in what is modern-day Finland and Sweden. 
History
38th Session of the General Conference in 2015, Resolution 38 C/70 proclaiming 28 September as the "International Day for the Universal Access to Information"
Article 19 of the Universal Declaration of Human Rights
Article 19 of the International Covenant on Civil and Political Rights
Brisbane Declaration
Dakar Declaration
Finlandia Declaration
Maputo Declaration
New Delhi Declaration
Recommendation concerning the Promotion and Use of Multilingualism and Universal Access to Cyberspace 2003
United Nations Convention on the Rights of Persons with Disabilities
The International Programme for the Development of Communication The International Programme for the Development of Communication (IPDC) is a United Nations Educational, Scientific and Cultural Organization (UNESCO) programme aimed at strengthening the development of mass media in developing countries. Its mandate since 2003 is "... to contribute to sustainable development, democracy and good governance by fostering universal access to and distribution of information and knowledge by strengthening the capacities of the developing countries and countries in transition in the field of electronic media and the printed press." The International Programme for the Development of Communication is responsible for the follow-up of Sustainable Development Goal (SDG) 16 through indicators 16.10.1 and 16.10.2. Every two years, a report containing information from the Member States on the status of judicial inquiries on each of the killings condemned by UNESCO is submitted to the IPDC Council by UNESCO's Director-General. The Journalists' Safety Indicators are a tool developed by UNESCO which, according to UNESCO's website, aims to map the key features that can help assess the safety of journalists, and to help determine whether adequate follow-up is given to crimes committed against them. The IPDC Talks also allow the Programme to raise awareness of the importance of access to information. The IPDC is also the programme that monitors and reports on access to information laws around the world through the United Nations Secretary-General's global report on follow-up to the SDGs. In 2015, during its 38th session, UNESCO proclaimed 28 September as the International Day for the Universal Access to Information. During the International Day, the IPDC organized the "IPDC Talks: Powering Sustainable Development with Access to Information" event, which gathered high-level participants. The annual event aims to highlight the "importance of access to information" for sustainable development. The Internet Universality framework Internet Universality is the concept that "the Internet is much more than infrastructure and applications, it is a network of economic and social interactions and relationships, which has the potential to enable human rights, empower individuals and communities, and facilitate sustainable development. The concept is based on four principles stressing the Internet should be Human rights-based, Open, Accessible, and based on Multistakeholder participation. These have been abbreviated as the R-O-A-M principles. Understanding the Internet in this way helps to draw together different facets of Internet development, concerned with technology and public policy, rights and development." Through the concept of Internet Universality, UNESCO highlights access to information as key to assessing a better Internet environment. The broader principle of social inclusion has special relevance to the Internet. 
This puts forward the role of accessibility in overcoming digital divides, digital inequalities, and exclusions based on skills, literacy, language, gender or disability. It also points to the need for sustainable business models for Internet activity, and to trust in the preservation, quality, integrity, security, and authenticity of information and knowledge. Accessibility is interlinked with rights and openness. Based on the ROAM principles, UNESCO is now developing Internet Universality indicators to help governments and other stakeholders assess their own national Internet environments and to promote the values associated with Internet Universality, such as access to information. The World Bank initiatives In 2010, the World Bank launched the World Bank policy on access to information, which constitutes a major shift in the World Bank's strategy. The principle binds the World Bank to disclose any requested information, unless it is on a "list of exceptions":
Personal information
Communications of Governors and/or Executive Directors’ Offices
Ethics Committee
Attorney-Client Privilege
Security and Safety Information
Separate Disclosure Regimes
Confidential Client/Third Party Information
Corporate Administrative
Deliberative Information
Financial Information
The World Bank also supports open development through its Open Data, Open Finance and Open Knowledge Repository initiatives. The World Summit on the Information Society The World Summit on the Information Society (WSIS) was a two-phase United Nations-sponsored summit on information, communication and, in broad terms, the information society that took place in 2003 in Geneva and in 2005 in Tunis. One of its chief aims was to bridge the global digital divide separating rich countries from poor countries by spreading access to the Internet in the developing world. The conferences established 17 May as World Information Society Day. Regional framework The results from UNESCO monitoring of SDG 16.10.2 show that 112 countries have now adopted freedom of information legislation or similar administrative regulations. Of these, 22 have adopted new legislation since 2012. At the regional level, Africa has seen the highest growth, with 10 countries adopting freedom of information legislation in the last five years, more than doubling the number of countries in the region with such legislation, from nine to 19. A similarly high growth rate has occurred in the Asia-Pacific region, where seven countries adopted freedom of information laws in the last five years, bringing the total to 22. In addition, during the reporting period, two countries in the Arab region, two countries in Latin America and the Caribbean, and one country in Western Europe and North America adopted freedom of information legislation. The vast majority of the world's population now lives in a country with a freedom of information law, and several countries currently have freedom of information bills under consideration. National framework Freedom of information laws As of June 2006, nearly 70 countries had freedom of information legislation applying to information held by government bodies and in certain circumstances to private bodies. In 19 of these countries, the freedom of information legislation also applied to private bodies. Access to information was increasingly recognised as a prerequisite for transparency and accountability of governments, as facilitating consumers' ability to make informed choices, and as safeguarding citizens against mismanagement and corruption. 
This has led an increasing number of countries to enact freedom of information legislation in the past 10 years. In recent years, private bodies have started to perform functions which were previously carried out by public bodies. Privatisation and de-regulation saw banks, telecommunications companies, hospitals and universities being run by private entities, leading to demands for the extension of freedom of information legislation to cover private bodies. While there has been an increase in countries with freedom of information laws, their implementation and effectiveness vary considerably across the world. The Global Right to Information Rating is a programme providing advocates, legislators and reformers with tools to assess the strength of a legal framework. When the strength of the legal framework behind each country's freedom of information law is measured using the Right to Information Rating, one notable trend appears: largely regardless of geographic location, top-scoring countries tend to have younger laws. According to the United Nations Secretary General’s 2017 report on the Sustainable Development Goals, to which UNESCO contributed freedom of information-related information, of the 109 countries with available data on implementation of freedom of information laws, 43 per cent do not sufficiently provide for public outreach and 43 per cent have overly wide definitions of exceptions to disclosure, which run counter to the aim of increased transparency and accountability. Despite the adoption of freedom of information laws, officials are often unfamiliar with the norms of transparency at the core of freedom of information or are unwilling to recognize them in practice. Journalists often do not make effective use of freedom of information laws for a multitude of reasons: official failure to respond to information requests, extensive delays, receipt of heavily redacted documents, arbitrarily steep fees for certain types of requests, and a lack of professional training. Debates around public access to information have also focused on further developments in encouraging open data approaches to government transparency. In 2009, the data.gov portal was launched in the United States, collecting in one place most of the government's open data; in the years that followed, there was a wave of government data opening around the world. As part of the Open Government Partnership, a multilateral network established in 2011, some 70 countries have now issued National Action Plans, the majority of which contain strong open data commitments designed to foster greater transparency, generate economic growth, empower citizens, fight corruption and more generally enhance governance. In 2015 the Open Data Charter was founded in a multistakeholder process in order to establish principles for ‘how governments should be publishing information’. The Charter has been adopted by 17 national governments, half of which were from Latin America and the Caribbean. The 2017 Open Data Barometer, conducted by the World Wide Web Foundation, shows that while 79 out of the 115 countries surveyed have open government data portals, in most cases "the right policies are not in place, nor is the breadth and quality of the data-sets released sufficient". In general, the Open Data Barometer found that government data is usually "incomplete, out of date, of low quality, and fragmented". 
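Open government data portals of the kind discussed above typically expose machine-readable APIs alongside their web interfaces. The sketch below queries a CKAN-style 'package_search' endpoint; data.gov is built on CKAN, but the exact endpoint URL, parameters and response fields shown here are assumptions that may differ between portals and over time. It uses the third-party requests package.

```python
# Hedged sketch: search an open data portal's CKAN API for datasets by keyword.
import requests

# Assumed endpoint; CKAN portals conventionally expose /api/3/action/package_search.
PORTAL = "https://catalog.data.gov/api/3/action/package_search"


def search_datasets(query, rows=5):
    """Return the titles of up to `rows` datasets matching a free-text query."""
    response = requests.get(PORTAL, params={"q": query, "rows": rows}, timeout=10)
    response.raise_for_status()
    payload = response.json()
    return [result["title"] for result in payload["result"]["results"]]


if __name__ == "__main__":
    for title in search_datasets("air quality"):
        print(title)
```

Listing datasets programmatically in this way is a small first step toward the kind of breadth-and-quality assessment that the Open Data Barometer performs at much larger scale.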
Private bodies The following 19 countries had freedom of information legislation that extended to both government bodies and private bodies: Antigua and Barbuda, Angola, Armenia, Colombia, the Czech Republic, the Dominican Republic, Estonia, Finland, France, Iceland, Liechtenstein, Panama, Poland, Peru, South Africa, Turkey, Trinidad and Tobago, Slovakia, and the United Kingdom. The degree to which private bodies are covered under freedom of information legislation varies; in Angola, Armenia and Peru the legislation applies only to private companies that perform what are considered to be public functions. In the Czech Republic, the Dominican Republic, Finland, Trinidad and Tobago, Slovakia, Poland and Iceland, private bodies that receive public funding are subject to freedom of information legislation. Freedom of information legislation in Estonia, France and the UK covers private bodies in certain sectors. In South Africa the access provisions of the Promotion of Access to Information Act have been used by individuals to establish why their loan applications were denied. The access provisions have also been used by minority shareholders in private companies and by environmental groups seeking information on the potential environmental damage caused by company projects. Consumer protection In 1983, the United Nations Commission on Transnational Corporations adopted the United Nations Guidelines for Consumer Protection, stipulating eight consumer rights, including "consumer access to adequate information to enable making informed choices according to individual wishes and needs". Access to information became regarded as a basic consumer right, and preventive disclosure, i.e. the disclosure of information on threats to human lives, health and safety, began to be emphasized. Investors Secretive decision-making by company directors and corporate scandals led to freedom of information legislation being enacted for the benefit of investors. Such legislation was first adopted in Britain in the early 20th century, and later in North America and other countries. Disclosure regimes for the benefit of investors regained attention at the beginning of the 21st century as a number of corporate scandals were linked to accounting fraud and company director secrecy. Starting with Enron, the subsequent scandals involving Worldcom, Tyco, Adelphia and Global Crossing prompted the US Congress to impose new information disclosure obligations on companies with the Sarbanes-Oxley Act of 2002. See also
Access to public information
Action For Economic Reforms
Citizen oversight
Criticism of copyright
Crypto-anarchism
Culture vs. Copyright
Cypherpunk
Digital rights
Directorate-General for Information Society and Media (European Commission)
Freedom of information laws by country
Freedom of panorama
Free-culture movement
Free Haven Project
Foundation for a Free Information Infrastructure
Freenet
Hacking
Hacktivism
Illegal number
International Right to Know Day
I2P
Information
Information activism
Information commissioner
Information ethics
Information privacy
Information wants to be free
Intellectual property
Internet censorship
Internet freedom
Internet privacy
Market for loyalties theory
Medical law
Netsukuku
Openness
Open data
Open Music Model
Pirate Party
Right to know
Stop Online Piracy Act
Tor (anonymity network)
Transparency (humanities)
Virtual private network
References
Sources
Attribution
External links
Freedom of Information: A Comparative Study, a 57 country study by Global Integrity. 
Internet Censorship: A Comparative Study, a 55 country study by Global Integrity.
Right2Info, good law and practice from around the world, including FOI and other relevant laws and constitutional provisions from some 100 countries.
Mike Godwin, Cyber Rights: Defending Free Speech in the Digital Age
Learn about the latest cases from the Information Commissioner and the Tribunal via the Freedom of Information Update Podcasts and Webcasts, http://www.informationlaw.org.uk
Amazon take down of WikiLeaks - Is the Free Internet Dead? Paul Jay of The Real News (TRNN) discusses the topic with Marc Rotenberg, Tim Bray and Rebecca Parsons of ThoughtWorks - December 8, 2010 (video: 43:41)
Rights Freedom of speech Human rights by issue
161784
https://en.wikipedia.org/wiki/Technion%20%E2%80%93%20Israel%20Institute%20of%20Technology
Technion – Israel Institute of Technology
The Technion – Israel Institute of Technology () is a public research university located in Haifa, Israel. Established in 1912 under Ottoman rule, the Technion is the oldest university in the country. The Technion is ranked as the top university in both Israel and the Middle East, and in the top 100 universities in the world in the Academic Ranking of World Universities of 2019. The university offers degrees in science and engineering, and related fields such as architecture, medicine, industrial management, and education. It has 19 academic departments, 60 research centers, and 12 affiliated teaching hospitals. Since its founding, it has awarded more than 100,000 degrees and its graduates are cited for providing the skills and education behind the creation and protection of the State of Israel. Technion's 565 faculty members include three Nobel Laureates in chemistry. Four Nobel Laureates have been associated with the university. The current president of the Technion is Uri Sivan. The selection of Hebrew as the language of instruction, defeating German in the War of the Languages, was an important milestone in Hebrew's consolidation as Israel's official language. The Technion is also a major factor behind the growth of Israel's high-tech industry and innovation, including the country's technical cluster in Silicon Wadi. History The Technikum was conceived in the early 1900s by the German-Jewish fund Ezrah as a school of engineering and sciences. It was to be the only institution of higher learning in the then Ottoman Palestine, other than the Bezalel Academy of Art and Design in Jerusalem (founded in 1907). In October 1913, the board of trustees selected German as the language of instruction, provoking a major controversy known as the War of the Languages. After opposition from American and Russian Jews to the use of German, the board of trustees reversed itself in February 1914 and selected Hebrew as the language of instruction. The German name Technikum was also replaced by the Hebrew name Technion. Technion's cornerstone was laid in 1912, and studies began 12 years later in 1924. In 1923 Albert Einstein visited and planted the now-famous first palm tree, as an initiative of Nobel tradition. The first palm tree still stands today in front of the old Technion building, which is now the MadaTech museum, in the Hadar neighborhood. Einstein founded the first Technion Society, and served as its president upon his return to Germany. In 1924, Arthur Blok became the Technion's first president. In the early 1950s, under the administration of Yaakov Dori, who had served as the Israel Defense Forces’ first chief of staff, the Technion launched a campaign to recruit Jewish and pro-Israel scientists from abroad to establish research laboratories and teaching departments in the natural and exact sciences. Campuses Haifa Technion City generally refers to the 1.2-square-kilometer site located on the pine-covered north-eastern slopes of Mount Carmel. The campus comprises 100 buildings, occupied by thousands of people every day. The Technion has two additional campuses. Its original building in midtown Haifa, in use by the Technion until the mid-1980s, now houses the Israel National Museum of Science, Technology and Space. The Rappaport Faculty of Medicine is located in the neighborhood of Bat Galim, adjacent to Rambam Hospital, the largest medical center in northern Israel. 
Recreational facilities on the main campus include an Olympic-size swimming pool as well as gymnastics, squash, and tennis facilities. The Technion Symphony Orchestra and Choir are composed mainly of Technion students and staff. Each term, the Orchestra offers a series of daytime and evening concerts. Films and live performances by leading Israeli artists take place on campus on a regular basis. Tel Aviv Technion's Division of Continuing Education and External Studies has been operating in the Tel Aviv area since 1958. In July 2013, it moved to a new campus in Sarona. The Technion satellite campus in Sarona includes three buildings in a 1,800 sq. meter area, with a total of 16 modern classrooms. Among the programs taught at Sarona is the Technion's International MBA program, which includes students from around the world and guest lecturers from universities such as London Business School, Columbia University, and INSEAD. Cornell Tech On 19 December 2011, a bid by a consortium of Cornell University and Technion won a competition to establish a new high-tier applied science and engineering institution in New York City. The competition was established by New York City Mayor Michael Bloomberg in order to increase entrepreneurship and job growth in the city's technology sector. The winning bid consisted of a state-of-the-art tech campus to be built on Roosevelt Island, with its first phase to be completed by 2017, and a temporary off-site campus that opened in 2013 at the Google New York City headquarters building at 111 Eighth Avenue. The new 'School of Genius' in New York City has been named the Jacobs Technion-Cornell Institute. Its founding director was Craig Gotsman, Technion's Hewlett-Packard Professor of Computer Engineering. In 2015, AOL announced an investment of $5 million in a video research project at the institute. The project has received largely positive media coverage, along with some small-scale protests from political and environmental activists. Guangdong Technion Israel Institute of Technology In September 2013 the Li Ka Shing Foundation and the Technion announced they would be joining forces to create a new institute for technology at Shantou University, Guangdong province, south-eastern China. The Li Ka Shing Foundation pledged a grant of US$130 million for the creation of the institute. The degrees awarded, including bachelor's, master's and doctorates, will be accredited by the Technion. The total construction costs are $147 million. English will be GTIIT's language of instruction. GTIIT will comprise three units: the College of Engineering; the College of Science; and the College of Life Science. The goal is to have about 5,000 students eventually. The institute will eventually grant Technion engineering degrees at all levels - bachelor's, master's and PhD. Initially the courses offered are Chemical Engineering, Biotechnology and Food Engineering, and Materials Engineering. By 2020 the institute will start teaching other disciplines, from mechanics to aerospace engineering. Faculties Aerospace Engineering Founded in 1954, the Faculty of Aerospace Engineering conducts research and education in a wide range of aerospace disciplines. The Aerospace Research Center consists of the Aerodynamics (wind tunnels) Laboratory, the Aerospace Structures Laboratory, the Combustion and Rocket Propulsion Laboratory, the Turbo and Jet Engine Laboratory, the Flight Control Laboratory and the Design for Manufacturing Laboratory. 
Architecture and Town Planning The Technion Faculty of Architecture awards a BSc in architecture after four years of study and an MArch professional degree after six years. The faculty offers four programs: architecture (undergraduate and graduate), landscape architecture (undergraduate and graduate), industrial design (graduate) and regional and urban planning (graduate). Its undergraduate programs in architecture and landscape architecture accept about 100 students each year. Its graduate programs accept about 70 students each year, along with about 15 doctoral students focusing on subjects related to its four programs. Biology The Faculty of Biology was established in 1971. Advanced research is carried out in 23 research groups, focusing on a variety of aspects of cellular, molecular and developmental biology. The faculty has extensive collaborations with the pharmaceutical and biotechnology industries. The Faculty has around 350 undergraduate students and over 100 graduate students. Biomedical Engineering Established in 1968, the Faculty of Biomedical Engineering has a multidisciplinary scope nurturing research activities that blend medical and biological engineering. Research projects have resulted in the development of patented medical aids. Recent research breakthroughs include the identification of a structured neurological code for syllables, which could let paraplegics "speak" virtually by connecting the brain to a computer. Biotechnology and Food Engineering Unique in Israel, the Faculty of Biotechnology and Food Engineering offers a blend of courses in engineering, life and natural sciences as well as joint degree programs with the Faculties of Biology and Chemistry. The Faculty houses biotechnology laboratories, as well as a large food processing pilot plant and a packaging laboratory. It currently has 350 undergraduates and more than 120 graduate students. Civil and Environmental Engineering In 2002, two of the original Technion faculties – Civil and Agricultural Engineering – were merged to create the Faculty of Civil and Environmental Engineering. Its stated vision is to "maintain and enhance the leading position of the Faculty of Civil & Environmental Engineering amongst the top departments in the world... and to position the Faculty as the national center for research & development and human resources for the sustainable development." The Faculty is the home of Technion's expanding International School of Engineering. Chemical Engineering The Wolfson Faculty of Chemical Engineering is Israel's oldest and largest faculty in the field, educating the vast majority of chemical engineers in Israel's chemical industries. Research activities include materials, complex fluids, processing, transport and surface phenomena and process control. Chemistry The Schulich Faculty of Chemistry offers a variety of joint programs, including with materials engineering, chemical engineering, physics, and food engineering. It also offers a joint degree with the Faculty of Biology leading to a degree in molecular biochemistry. Around 100 research projects at the faculty are sponsored by industry and national and international foundations. It also offers a variety of outreach and youth programs. The Coastal and Marine Engineering Research Institute (CAMERI) Held equally by the Technion and the Israeli Ports Company, CAMERI is Israel's leading applied research institute dedicated to physical oceanography, marine engineering and coastal engineering. 
Founded in 1976, it hosts two unique research facilities (a wave basin and a wave flume), both the biggest of their kind in Israel. Over time, CAMERI has become a national authority on data and research related to the Israeli coastal and marine environments. Computer Science Founded in 1969, this is one of the largest Technion faculties, with over 1,000 undergraduate students and 200 graduate students. The Faculty of Computer Science was ranked 15th among 500 universities in computer science in 2011 and 18th of 500 since 2012. The Faculty is located in the Taub Family Science and Technology Center, thanks to the support of the philanthropist Henry Taub. Education in Technology and Science Founded in 1965, the Department of Education in Technology & Science became a faculty in 2015. The faculty trains undergraduates in the most advanced methods of teaching science and technology in schools. The faculty is home to a research and development center in the field. It has over 100 graduate students and 350 undergraduate students, including second-career engineers and scientists who elect to study toward a career as STEM educators. Electrical Engineering The Faculty of Electrical Engineering claims to be the major source of engineers who lead the development of advanced Israeli technology in the fields of electronics, computers and communications. Some 2,000 undergraduate students study in the department for a BSc degree in electrical engineering, computer engineering, or computer and software engineering, and 400 graduate students study for MSc and PhD degrees. The department has extensive relations with industry as well as academic and industrial special liaison support programs. Humanities and Arts The Department of Humanities and Arts serves the entire Technion community, offering courses taught by renowned visiting and adjunct scholars in philosophy of science, social and political sciences, linguistics, psychology, law and anthropology, as well as an array of theoretical and performing arts courses. The Technion Theater was established in 1986 by Professor Ouriel Zohar. The theater teaches eight courses and has about 150 students per semester. It has presented 52 performances in different styles, including works by Hanoch Levin, Yehoshua Sobol, Molière, Shakespeare, Pierre de Marivaux, Henrik Ibsen, Bernard Shaw and Sławomir Mrożek, among others, as well as plays written by the director and the actors. The theater is invited to many festivals at European universities. Director Scandar Copti, who received the Ophir Award in 2010, played in "End End", directed by Zohar, which was presented at the Jerusalem festival in 2001. Shlomo Plessner participated in the collective plays "Soft Mattress" and "Mix Marriage" in 1986–1987. The play "Transparent Chains" by Sheli Baliti of the Faculty of Chemical Engineering was performed in Bratislava in 2006, and at Granada, the Haifa Cinematheque, the Neve Yosef Festival and the Theatre Department at the University of Haifa in 2007. Ibsen's An Enemy of the People, with main actor Rooney Navon, a professor at the Faculty of Civil and Environmental Engineering, received an honorable nomination at Benevento in Italy in 2009, and was also performed in Isfia near Haifa. The show "Invisible Clothes", written and directed by Ouriel Zohar, was presented at the International Theater Festival at Saint Petersburg State University in 2012. Industrial Engineering and Management The William Davidson Faculty of Industrial Engineering & Management (IE&M) is the oldest such department in Israel. 
IE&M was launched as a Technion academic Department in 1958. The Department grew under the leadership of Pinchas Naor, who served as its founding Dean. Naor's vision was to combine Industrial engineering with Management by creating a large, inherently multidisciplinary unit covering a wide spectrum of activities such as applied engineering, mathematical modeling, economics, behavioral sciences, operations research, and statistics. Materials Science and Engineering Home to Nobel Laureate in Chemistry Distinguished Prof. Dan Shechtman, the Faculty of Materials Engineering is Israel's major study center in materials science. The Faculty houses the Electron Microscopy Center, the X-Ray Diffraction Laboratory, the Atomic force microscopy Laboratory and the Physical and Mechanical Measurements lab. Mathematics The Faculty of Mathematics houses both pure and applied mathematics, and was home to the mathematician Paul Erdős. Founded in 1950, it has around 46 faculty members, 200 undergraduate students and 100 graduate students. It provides instruction for students in all other Technion faculties and organizes mathematics competition for gifted high school students and a summer camp in number theory. Mechanical Engineering Founded in 1948, the Technion Faculty of Mechanical Engineering has over 830 students and 215 graduate students. Research is conducted in the faculty's 36 laboratories across the whole spectrum of mechanical engineering, from nano-scale fields through to applied engineering of national projects. Medicine The Ruth and Bruce Rappaport Faculty of Medicine is home to two Nobel Laureates: Avram Hershko and Aaron Ciechanover. It is one of four state-sponsored medical schools in Israel. It was founded in 1979 through the philanthropy of Bruce Rappaport and is active in basic science research and pre-clinical medical training in anatomy, biochemistry, biophysics, immunology, microbiology, physiology, and pharmacology. Other facilities on the Faculty of Medicine campus include teaching laboratories, a medical library, lecture halls, and seminar rooms. Academic programs lead to Master of Science (M.S.), Doctor of Philosophy (PhD), and Doctor of Medicine (M.D.) degrees. It also offers medical training leading to a M.D. degree to qualified American and Canadian graduates of pre-med programs under the Technion American Medical School Program (TeAMS). The school has developed collaborative research and medical education programs with various institutions in medicine and bio-medical engineering including Harvard University, New York University, Johns Hopkins University, University of Toronto, University of California, Santa Cruz and Mayo Medical School. Physics The Faculty of Physics engages in experimental and theoretical research in the fields of astrophysics, high energy physics, solid state physics and biophysics. Founded in 1960, it includes the Einstein Institute of Physics, the Lidow Physics Complex, The Rosen Solid State Building and the Werksman Physics Building. Multidisciplinary centers Nanotechnology and science The Russell Berrie Nanotechnology Institute (RBNI) was established in January 2005 as a joint endeavour of the Russell Berrie Foundation, the government of Israel, and the Technion. It is one of the largest academic programs in Israel, and is among the largest nanotechnology centers in Europe and the US. RBNI has over 110 faculty members, and approximately 300 graduate students and postdoctoral fellows under its auspices at Technion. 
Its multidisciplinary activities span 14 different faculties. Energy research The Nancy and Stephen Grand Technion Energy Program (GTEP) is a multidisciplinary center of excellence bringing together Technion's top researchers in energy science and technology from over nine different faculties. Founded in 2007, GTEP's 4-point strategy targets research and development of alternative fuels; renewable energy sources; energy storage and conversion; and energy conservation. The GTEP is presently the only center in Israel offering graduate studies in energy science and technology. Space research The Norman and Helen Asher Space Research Institute (ASRI) is a specialized institute dedicated to multidisciplinary scientific research. Established in 1984, its members come from five Technion faculties, and it has a technical staff of Technion scientists in a variety of space-related fields (physics, aerospace engineering, mechanical engineering, electrical engineering, autonomous systems, and computer science). Technion international The Technion International (TI) is a department in the Technion, offering courses taught entirely in English. The TI began its first year in 2009, and now offers a full BSc in Civil Engineering and a BSc in Mechanical Engineering, as well as various study abroad options, all taught in English. Students come from all over the globe – Asia, Africa, North and South America, Europe and Israel. They live on campus and enjoy trips around Israel and activities throughout the year. Technology transfer, partnerships and outreach programs Since 2007, Technion has had a dedicated office to bridge the transition from scientific and technological discovery to successfully commercialized innovation: T3 – Technion Technology Transfer. As of 2011, 424 patents had been granted to Technion innovations, with 845 patents pending. T3's partners include incubators, entrepreneurs, private investors, VCs and angel groups. It has strategic partnerships with Microsoft, IBM, Intel, Philips, Johnson & Johnson and Coca-Cola, among others. Technion offers after-school and summer enrichment courses for young people on subjects ranging from introductory electronics and computer programming to aerospace, architecture, biology, chemistry and physics. Two examples are Scitech and the Math Summer Camp, devoted to number theory. Technion set up the Israeli chapter of Engineers Without Borders, which, among other projects, installed a network of biogas systems in rural Nepal providing sustainable energy and improved sanitation. The Technion empowers students from underrepresented groups such as Haredim and Arabs through scholarships, social programs and financial support. The Technion is one of the main sponsors of the Israeli league of the FIRST robotics competition, which has been a formal project of the Technion since 2013. The percentage of Arab students at the Technion equals the percentage of the general Arab population in Israel: 20%. The Technion and Technische Universität Darmstadt formed a partnership in cyber security, entrepreneurship and materials science. Technion became a partner of Washington University in St. Louis through the McDonnell International Scholars Academy. Rankings In 2019, the Shanghai Academic Ranking rated the Technion as 85th in its list of the top 100 universities in the world. In 2012, the magazine Business Insider ranked Technion among the top 25 engineering schools in the world. 
In 2012, the Center for World University Rankings ranked Technion 51st in the world and third in Israel in its CWUR World University Rankings. In national rankings for 2011, Technion was ranked No. 2 among universities in Israel by ARWU. In global rankings for that year, Technion was ranked #102–150 by ARWU and No. 220 by QS. It was ranked #401–500 by the Times Higher Education World University Rankings in 2020. In 2013, the Technion was the only school outside the United States to make it into the top 10 on a new Bloomberg Rankings list of schools whose graduates are CEOs of top U.S. tech companies. Notable research In 1982, Dan Shechtman discovered a quasicrystal structure: a structure with fivefold (order-5) symmetry, a phenomenon considered impossible until then by the prevailing theories of crystallography. In 2011 he won the Nobel Prize in Chemistry for this discovery. In 2004, two Technion professors, Avram Hershko and Aaron Ciechanover, won the Nobel Prize for the discovery of the biological system responsible for disassembling proteins in the cell. Shulamit Levenberg, 37, was chosen by Scientific American magazine as one of the leading scientists of 2006 for the discovery of a method to transplant skin in a way that the body does not reject. Moussa B.H. Youdim developed Rasagiline, a drug marketed by Teva Pharmaceuticals as Azilect (TM) for the treatment of neurodegenerative diseases, especially Parkinson's disease. In 1998, Technion successfully launched the "Gurwin TechSat II" microsatellite, making Technion one of five universities with a student program that designs, builds, and launches its own satellite. The satellite stayed in orbit until 2010. In the 1970s, computer scientists Abraham Lempel and Jacob Ziv developed the Lempel–Ziv data compression algorithms, the basis of the later Lempel–Ziv–Welch (LZW) algorithm. They won the IEEE Richard W. Hamming Medal, Ziv in 1995 and Lempel in 2007, for pioneering work in data compression and especially for developing the algorithms. In 2019, a team of 12 students won a gold medal at iGEM for developing bee-free honey. Library System The Technion library system is made up of the Elyachar Central Library and research libraries that are located in the faculty buildings. The Central Library determines professional policies and guidelines and provides services for all the Technion libraries, including the library operating systems, the libraries' web portal, acquisitions, cataloging, classification, and interlibrary loans. The faculty research libraries' aim is to focus on the information needs of their students and academic staff. The libraries are transforming from traditional academic libraries to learning commons. The transition includes an ongoing evaluation of the libraries' collections with the aim of identifying items in high demand and use to keep, valuable items to preserve, items in nominal use to archive in the Central Library, rare and precious items to preserve in the historical archive of the Technion at the Central Library, and items to withdraw, according to professional criteria. Nobel Laureates and notable people Nobel Laureates 2004 Avram Hershko, Chemistry 2004 Aaron Ciechanover, Chemistry 2011 Dan Shechtman, Chemistry 2013 Arieh Warshel, Chemistry Select faculty Moshe Arens, professor of aeronautics from 1957 to 1962. 
Eli Biham, cryptanalyst and cryptographer Yaakov Dori, President Avram Hershko and Aaron Ciechanover, recipients of the 2004 Nobel Prize in chemistry for the discovery of ubiquitin-mediated protein degradation Amos Horev, former president, former Chairman of Rafael; member of the Israeli Turkel Commission of Inquiry into the Gaza flotilla raid Abraham Lempel and Jacob Ziv, developers of the Lempel-Ziv (LZW) compression algorithm Liviu Librescu, hero during the Virginia Tech shooting Marcelle Machluf, biotechnology and food engineering Shlomo Moran, computer scientist Yehudit Naot, scientist and politician, Israeli Minister of the Environment Eliahu Nissim (born 1933), Professor of Aeronautical Engineering; President of the Open University of Israel Asher Peres, co-discoverer of quantum teleportation, awarded the 2004 Rothschild Prize in Physics Anat Rafaeli, organisational behaviour researcher Nathan Rosen, co-author with Albert Einstein and Boris Podolsky of physics paper about the EPR paradox in quantum mechanics Rachel Shalon, first woman engineer in Israel Shlomo Shamai, electrical information theorist, winner of the 2011 Shannon Award. Dan Shechtman, first observer of quasicrystals and winner of the 2011 Nobel Prize in chemistry Daniel Weihs (born 1942), Aeronautical Engineering professor Shmuel Zaks, computer scientist and mathematician Mario Livio, astrophysicist and an author of works that popularize science and mathematics Notable alumni Technion graduates have been estimated to constitute over 70 percent of the founders and managers of high-tech businesses in Israel. 80 percent of Israeli NASDAQ companies were founded and/or are led by Technion graduates, and 74 percent of managers in Israel's electronic industries hold Technion degrees. In the book, Technion Nation, Shlomo Maital, Amnon Frenkel and Ilana Debare document the contribution of Technion alumni in building the modern State of Israel. Johny Srouji – Senior Vice President of Hardware Technologies at Apple Inc. reporting to CEO Tim Cook Shai Agassi – IT entrepreneur, former Executive Board member of SAP AG and founder of Better Place Saul Amarel – pioneer in Artificial intelligence Ella Amitay Sadovsky – artist Ron Arad (b. 1958) – Air Force weapon systems officer; classified as missing in action since 1986 Itzhak Bentov – inventor and author Moti Bodek (b. 1961) – architect Andrei Broder – captcha developer, Vice President of Yahoo, formerly vice president of AltaVista Shimshon Brokman (b. 1957) – Olympic sailor Yaron Brook – president and executive director of the Ayn Rand Institute Chaim Elata - professor emeritus of mechanical engineering, President of Ben-Gurion University of the Negev, and Chairman of the Israel Public Utility Authority for Electricity Haim Eshed (b. 1933) – retired brigadier general in the Israel Defense Forces, former director of space programs for the Israel Ministry of Defense and former officer in the Israeli Military Intelligence. Now professor at the Asher Institute for Space Research at Technion. Yona Friedman (b. 1923) – architect Yossi Gross – Medical devices innovator and entrepreneur; founding partner of Rainbow Medical Andi Gutmans – developer of PHP and co-founder of Zend Technologies Abraham H. Haddad – computer scientist Daniel Hershkowitz (born 1953) - politician, mathematician, rabbi, and president of Bar-Ilan University Maor Farid - researcher at MIT, social activist and author Hossam Haick – Scientist and engineer Aharon Isser - aeronautical engineer Ram Karmi (b. 
1931) – architect Shaul Ladany – world-record-holding racewalker, Bergen-Belsen survivor, Munich Massacre survivor, Professor of Industrial Engineering Uzi Landau – politician, Minister of Tourism Daniel M. Lewin – co-founder and CTO of Akamai, holder of two Technion degrees, killed while resisting the hijacking of American Airlines Flight 11 in the 11 September attacks on the United States. Yoelle Maarek - Research VP, Amazon Dov Moran - entrepreneur, inventor and investor, best known as the inventor of the USB flash drive Yuval Ne'eman (1925–2006) - physicist, politician, and President of Tel Aviv University Eliahu Nissim (born 1933) - Professor of Aeronautical Engineering; President of the Open University of Israel Neri Oxman – architect and designer who teaches at MIT, known for her work in environmental design and digital morphogenesis Guillermo Sapiro – contributor to Adobe software including Photoshop and After Effects, and one of the original developers of the LOCO-I lossless image compression algorithm, which is used in the Mars rovers' ICER image file format. Amos Shapira, former President of El Al Airlines, Cellcom, and the University of Haifa Zehev Tadmor (born 1937) - chemical engineer and president of the Technion Yossi Vardi – For over 40 years he has founded and helped build over 60 high-tech companies in a variety of fields, among them software, energy, Internet, mobile, electro-optics and water technology. Arieh Warshel – chemist known for the development of multiscale models for complex chemical systems and the winner of the 2013 Nobel Prize in chemistry. Daniel Zajfman (born 1959) - physicist; president of the Weizmann Institute Shlomo Zilberstein – computer scientist known for his contributions to artificial intelligence, as well as a professor and associate dean at the University of Massachusetts Amherst Yehuda and Zohar Zisapel – co-founders of the RAD Group, "fathers" of Israel's hi-tech industry See also List of universities in Israel Science and technology in Israel Education in Israel References External links Technion LIVE – Division of Public Affairs & Resource Development at Technion – Israel Institute of Technology. Multimedia resource in English. New renderings of Cornell-Technion Campus in New York City Technion – Israel Institute of Technology website Radio 1m – The students' radio station Technion Suing Senior Professor for 50% of His Company Stake 1912 establishments in the Ottoman Empire Buildings and structures in Haifa Educational institutions established in 1912 Multidisciplinary research institutes Research institutes in Israel Science and technology in Israel Technical universities and colleges in Israel
39816912
https://en.wikipedia.org/wiki/Mount%20Olive%20Trojans
Mount Olive Trojans
The Mount Olive Trojans are the athletic teams that represent the University of Mount Olive, located in Mount Olive, North Carolina, in NCAA Division II intercollegiate sporting competitions. UMO's sports teams are known as the Trojans; their colors are green and white. The Trojans participate as a member of Conference Carolinas at the NCAA Division II level in 18 sports: Varsity sports Championships In 2008, Mount Olive won the NCAA Division II Baseball National Championship. The Trojans posted a 58–6 record that year, winning the Conference Carolinas and NCAA II South Atlantic Regional titles. The Trojans defeated Ouachita Baptist (Ark.) 6–5 in the first round of the National Finals and Ashland (Ohio) 18–7 in the second round. Mount Olive defeated Central Missouri 5–3 in the semifinal round and claimed its first-ever national championship with a 6–2 victory over Ouachita Baptist in the title game. The national championship game was televised live on CBS College Sports. The National Finals took place in Sauget, Illinois. Mount Olive teams have made 30 NCAA Division II tournament appearances. Mount Olive teams have won a combined 39 Conference Carolinas regular season and/or tournament championships. Mount Olive was the recipient of the 2011–12 Joby Hawn Cup, awarded to the top athletics program in Conference Carolinas. In addition to the overall award, Mount Olive also captured the Men's Sports Hawn Cup and the Women's Sports Hawn Cup. Team (1) Individual sports Lacrosse Men's and women's lacrosse have been added as the college's newest teams and will begin competing in spring 2013. Basketball In 2005, the Trojan men's basketball team won the NCAA II East Regional and advanced to the Elite 8 in Grand Forks, North Dakota. References
16732092
https://en.wikipedia.org/wiki/Vinzant%20Software
Vinzant Software
Vinzant Software is a privately held company based in Hobart, Indiana. Vinzant Software develops and markets enterprise job scheduling solutions for most platforms, including Windows, Unix, Linux, IBM i and MPE/iX. It was founded in 1988 by David Vinzant and has focused solely on job scheduling since 1995. History Vinzant, Inc. was started in 1987 by Dave Vinzant. Initially, it developed add-on programs to a property management software package called SKYLINE, which Vinzant had been involved in developing. As client–server computing was first getting started in 1988, Vinzant, Inc. worked with Microsoft and Novell. Vinzant developed SQLFile, the first shipping front end for the Ashton-Tate/Microsoft SQL Server, in 1989. SQLFile was expanded to include support for DOS, OS/2 and Windows. It was a flexible development tool that created screens and reports that accessed data stored in Microsoft SQL Server, NetWare SQL and Oracle Server. In 1990, Novell announced the NetWare Loadable Module (NLM) version of NetWare SQL, and the SQLFile System for DOS was shipped to thousands of users worldwide as part of Novell's 'Client Server Starter Kit'. Due to its experience with Oracle and NetWare, Vinzant, Inc. was selected by Oracle to develop the installation program for the Oracle Server for NetWare. Other SQL-based products developed by Vinzant, Inc. include the SQL BASIC Library, which allowed compiled BASIC to be used with Microsoft SQL Server, and the SQL Server Connectivity Pack, which allowed SQL Server to be run on Novell networks. Due to its involvement with companies that were beginning to downsize their operations to PCs, Vinzant, Inc. recognized the need for a mission-critical job scheduling product. With its development of the Event Control Server (ECS) in the early 1990s, Vinzant shifted its focus to the emerging job scheduling and batch processing market. At that time a variety of simple PC utility products were on the market, but the 'downsizers' were looking for a product that could manage mission-critical applications. ECS was developed to provide solutions to some of the problems with other products, such as providing a central point of control for a network of DOS, OS/2, and Windows-based PCs. In 1999, ECS was re-designed to provide support for Unix and Linux systems. The result was the Global Event Control Server (Global ECS or GECS), so named for its IP-based architecture and ability to control computers and processes worldwide. Solutions Business Process Automation Job Scheduling Batch Scheduling Products Global ECS Global ECS is the main product offering from Vinzant Software. It is an event-driven, enterprise-level job scheduling solution. It supports native agents for other platforms, including Unix, Linux, IBM i and MPE/iX, that can all be managed from a single point using either a Windows or browser-based client. In addition to traditional time-based job scheduling, it supports triggers such as the existence of a file or the completion of another job (or jobs) or batch. Global ECS includes user-definable recovery actions with built-in job logic that allows the production flow to self-correct. It also includes flexible exception management that allows for multiple methods of notification. Implementations Job scheduling and batch processing are tools to help manage data processing systems by automating operations, improving quality, reducing costs, and improving resource utilization. Typical uses are scheduling file transfers, database updates, report generation, compilations and backups. 
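The style of event-driven scheduling described above (file-existence triggers, job-completion triggers, and recovery actions) can be illustrated with a minimal, generic sketch. This is not Global ECS configuration syntax or its API; the script names and helper functions (wait_for_file, run_job, nightly_batch) are assumptions invented purely for the example.

```python
# Generic sketch of event-driven batch scheduling: a file-existence trigger,
# a dependent follow-on job, and a recovery/notification action on failure.
# NOT Global ECS syntax; the shell scripts referenced here are hypothetical.
import os
import subprocess
import time

def wait_for_file(path, poll_seconds=30):
    """Trigger: block until the given file exists."""
    while not os.path.exists(path):
        time.sleep(poll_seconds)

def run_job(command):
    """Run one batch job; return True if it exited with code 0."""
    return subprocess.call(command, shell=True) == 0

def nightly_batch():
    wait_for_file("/data/incoming/orders.csv")          # file-existence trigger
    if not run_job("load_orders.sh /data/incoming/orders.csv"):
        run_job("notify_operator.sh 'order load failed'")  # recovery action
        return
    run_job("build_reports.sh")                          # runs only after the load job completes

if __name__ == "__main__":
    nightly_batch()
```

A real scheduler adds calendars, cross-machine agents and exception notification on top of this basic trigger-and-dependency logic, but the control flow shown is the core idea.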
Global ECS is particularly used by businesses whose core product is their information or data, such as financial institutions (banking, insurance, securities, brokerage, retirement and credit cards), government agencies, and information resources. Other industries that use Global ECS include energy, health care, retail and manufacturing. Platforms IBM i AIX HP-UX Linux Solaris Tru64 UnixWare Windows FreeBSD MPE/iX Industry Affiliations Apple Developer Connection HP Developer and Solution Partner Program IBM DeveloperWorks Intel Software Partner Program Microsoft Developer Network Novell Developer Network Red Hat Developer Program SCO Developer Network SGI Global Developer Program Sun Developer Network See also Information Management Business Process Automation Job Scheduling External links Vinzantsoftware.com official company site. Processor Magazine Job Scheduler Matrix. References Job scheduling
1913008
https://en.wikipedia.org/wiki/Jim%20Allchin
Jim Allchin
James Edward Allchin (born 1951, Grand Rapids, Michigan, United States) is an American computer scientist, philanthropist and guitarist. He is a former Microsoft executive. He assisted Microsoft in creating many of the system platform components, including Microsoft Windows, Windows Server, server products such as SQL Server, and developer technologies. He is best known for building Microsoft's server business. He is also known for his leading role in the architecture and development of the directory services technology Banyan VINES. He has won numerous awards in his career, such as the Technical Excellence Person of the Year award in 2001. Jim Allchin led the Platforms division at Microsoft, overseeing the development of the Windows client from Windows 98 to Windows Vista, Windows Server from NT Server 4.0 to Windows Server 2008, several releases of Microsoft server products, and the Windows CE and Windows Embedded product lines. After serving sixteen years at Microsoft, Allchin retired in early 2007 when Microsoft officially released the Windows Vista operating system to consumers. He is now a professional musician. Biography Early years Allchin was born in Grand Rapids, Michigan, in 1951. While he was still an infant, the Allchin family moved to Keysville, Florida, where his parents worked on a farm. Allchin grew up in a tin-roof house built by his father. Later, Allchin and his older brother Keith also worked on the farm to support the family financially. While fixing equipment on the farm, Allchin developed an interest in engineering. He studied electrical engineering at the University of Florida, but dropped out to play in a number of bands. He later returned to the university and graduated with a BS in Computer Science in 1973. After receiving his degree, Allchin joined Texas Instruments, where he helped build a new operating system. Afterwards, he followed a former lecturer, Dick Kiger, to Wyoming, where he helped him start a new company offering computer services nationwide, before moving on to another company in Dallas, Texas. Allchin returned to his studies, earning an MS in Computer Science from Stanford University in 1980. Allchin wrote a language-independent, portable file system while at Stanford. While pursuing his PhD in Computer Science at the Georgia Institute of Technology in the early eighties, he was the primary architect of the Clouds distributed object-oriented operating system; his PhD thesis was entitled "An Architecture for Reliable Decentralized Systems". In 1983, Allchin was recruited to Banyan by founder Dave Mahoney, eventually becoming Senior Vice President and Chief Technology Officer. During his seven years at Banyan, he created the VINES distributed operating system, which included the StreetTalk directory protocol as well as a series of network services based on the Xerox XNS stack. Bill Gates spent a year trying to recruit a reluctant Allchin to Microsoft, and finally convinced him to join in 1990. Gates told him that whatever he created would have a wider customer base through Microsoft than anywhere else. Allchin is known as a prolific computer programmer and engineer. Allchin is also known for debugging systems remotely by having the person on the phone toggle in hexadecimal corrections via the front-panel switches of early computers. Microsoft Initially, Gates put Allchin in charge of revamping LAN Manager, using his networking expertise. However, Allchin scrapped the project, citing the need to start fresh. 
Allchin's first high-profile project at Microsoft was the Cairo technology, which was intended to add on to Windows NT to create the next version of the operating system. At the NT Developer Conference in July 1992, Allchin gave a presentation about the future Microsoft operating system. One of the main goals for Cairo was the ability for users to locate files based on their content as opposed to their name. Users would also have access to files stored on other machines on a network as easily as they had access to files on their own hard drives. Cairo was scheduled to ship as a single package in 1994. After several delays, it was finally released in pieces; the technology was shipped in successive operating system releases. The Cairo and Windows NT groups were combined, and Allchin created a new client and server organization focused on business and IT users. Allchin replaced Dave Cutler as the lead developer on Windows NT from version 3.5 onwards. This led to conflict with Vice President Brad Silverberg, who was in charge of the development of Windows 95, the operating system which was aimed at the personal computer market as opposed to Windows NT's business computer market. In 1999, Microsoft re-organized its corporate structure. The Consumer Division, which maintained versions of Windows for home users, and the Business & Enterprise Division, which maintained Windows NT, were combined into a single operating system division: the Platform Group. Allchin became the vice president of the new combined group. This promotion put him in charge of the development of both the home and business versions of Windows. With the release of Windows XP in 2001, both business and client versions of the operating system utilized the same code base. The server business grew substantially during this period of time. Allchin was diagnosed with cancer in late 2002 and took a leave of absence for part of 2003. Gates suggested that Allchin stay for a while. On September 20, 2005, Microsoft announced that Allchin would become co-president of a new Platform Products and Services Group, which combined the old Platform Group, Server and Tools Group, and the MSN Group. Microsoft also announced that Allchin would retire after Windows Vista shipped, leaving Kevin Johnson as the president. Allchin was a member of the Senior Leadership Team at Microsoft – a small group responsible for developing Microsoft's core direction along with Steve Ballmer and Bill Gates. He was said to be blunt, technical, and "straightforward." Bill Gates complimented him: Controversies Running the most profitable product areas within Microsoft caused Allchin to be involved in many controversies and disputes along with his business and technical leadership responsibilities. Brad Silverberg, the Microsoft executive who had been responsible for Windows 95, emailed several people in Microsoft on September 27, 1991: Jim Allchin replied: During the United States v. Microsoft antitrust trial, emails sent by Allchin to other Microsoft executives were considered as evidence by the government lawyers to back up their claim that the integration of Internet Explorer and Windows had more to do with their competition with Netscape Communications Corporation than innovation. During the Caldera v. Microsoft case, emails from Allchin to Bill Gates were introduced as evidence. One email, from September 1991, included Allchin telling Gates that "We need to slaughter Novell before they get stronger." 
In August 1998, Allchin asked an engineer named Vinod Valloppillil to analyse the open source movement and the Linux operating system. Valloppillil wrote two memos which were intended for Senior Vice-President Paul Maritz, who was the most senior executive at that time after Bill Gates and Steve Ballmer. Both memos were leaked and became popularly known as the Halloween documents. On September 29, 1998, Allchin was deposed to respond to the testimony of Professor Edward Felten. Later he testified in court from February 1 to 4, 1999. Some of his testimony centered on the possibilities of removing Internet Explorer as stated by Felten. While Allchin proved his written testimony was correct in court, a video-taped demonstration created by Microsoft attorneys, which supposedly illustrated Allchin's points, was shown to be misleading. David Boies believed it was an avoidable mistake made by the Microsoft attorneys. In May 2002, Allchin testified before Judge Colleen Kollar-Kotelly during the settlement hearing between Microsoft and the nine states (as well as the District of Columbia) involved in the United States v. Microsoft antitrust trial. Allchin was called to testify on two issues, the first of which gained the most publicity. In relation to the issue of sharing technical API and protocol information used throughout Microsoft products, which the states were seeking, Allchin's testimony discussed how releasing certain information could increase the security risk to consumers. According to exhibits filed in 2006 by the plaintiff in the case of Comes v. Microsoft, Allchin wrote a memo to Bill Gates and Steve Ballmer in January 2004, one which was critical of Microsoft and Longhorn. The letter said that Gates and Ballmer had lost their way and compared them to Apple, which he believed had not. Allchin was also critical of Microsoft relaxing its requirements for computers to carry the 'Vista Capable' badge. The seal, designed to inform customers of a computer's ability to run the Windows Vista operating system, was not initially intended for computers running Intel's 915 chipset. This was overturned, however, after Intel voiced its dissatisfaction with the decision. In an email to Microsoft's Steve Ballmer, Allchin wrote: Post-Microsoft career Today, Allchin devotes his time to music, technology, and charity work. He released his first album, Enigma, in 2009, calling the album a beta test. Then in September 2011, Allchin released his first widely distributed blues-themed album: Overclocked. The album Q.E.D. was released in September 2013 and Decisions followed in June 2017. Prime Blues, his latest album, was released in September 2018. Overclocked, Q.E.D., and Decisions all received widespread acclaim, especially for Allchin's guitar work. Overclocked, Q.E.D., and Decisions were featured on iTunes as New and Noteworthy and all reached the top 10 on Internet Blues Radio. Decisions was the #1 Blues Album in Washington State for 18 weeks starting in June 2017, and was in the top 10 Blues Rock albums nationally for 14 weeks. Prime Blues remained in the top 10 Blues Albums on the national Roots Music charts for 18 straight weeks, staying in the number 1 position for 2 weeks. Prime Blues was the number 1 Contemporary Blues Album nationally for 11 weeks. Bibliography Allchin, James Edward (1982). A suite of algorithms for maintaining replicated data using weak correctness conditions. Georgia Institute of Technology. ISBN B0006YHLIY. Allchin, James Edward (1982). 
Object-based synchronization and recovery. Georgia Institute of Technology. ISBN B0006YLEL4. Allchin, James Edward (1983). Support for objects and actions in CLOUDS: Status Report. Georgia Institute of Technology. ISBN B0006YHLI4. Allchin, James Edward (1983). An architecture for reliable decentralized systems. Georgia Institute of Technology. ISBN B0006YIFDY. Allchin, James Edward (1983). Facilities for supporting atomicity in operating systems. Georgia Institute of Technology. ISBN B0006YHLQG. Allchin, James Edward (1983). How to shadow a shadow. Georgia Institute of Technology. ISBN B00071W5PA. Discography Enigma (2009) Overclocked (2011) QED (2013) Decisions (2017) Prime Blues (2018) References External links Channel 9 Interview with Jim Allchin, August 2005 1951 births Georgia Tech alumni Living people Microsoft employees Stanford University alumni University of Florida College of Engineering alumni Microsoft Windows people Lead guitarists American blues guitarists American male guitarists American rock guitarists American blues singers American rock singers Blues rock musicians Electric blues musicians Musicians from Grand Rapids, Michigan Singers from Michigan Guitarists from Michigan 20th-century American guitarists 20th-century American male musicians 21st-century American guitarists 21st-century American male musicians
37203811
https://en.wikipedia.org/wiki/SS%20Peel%20Castle
SS Peel Castle
The passenger steamer SS Peel Castle was operated by the Isle of Man Steam Packet Company from her purchase in 1912 until she was sold for breaking in 1939. Construction and dimensions Peel Castle was built as Duke of York at Dumbarton by William Denny and Brothers, who also supplied her engines and boilers. She had a registered tonnage of ; length 310 feet; beam 37 feet; draught 16 feet and a design speed of 17 knots. Peel Castle had accommodation for 1,162 passengers, and a crew of 42. Pre-war service As Duke of York she entered service in 1894 on the joint Fleetwood - Belfast service of the Lancashire & Yorkshire Railway and the London & North Western Railway. In 1911, she was sold to the "Turkish Patriotic Committee". In 1912 the Isle of Man Steam Packet Company purchased her and renamed her Peel Castle; the company also purchased Duke of Lancaster, renamed The Ramsey. War service She was requisitioned by the Admiralty at the outbreak of war in 1914. She was fitted out as an Armed Boarding Vessel (ABV) by Cammell Laird in late November 1914. She was to have 100 officers and crew and was fitted out as an auxiliary, capable of carrying boarding parties and prize crews, and was put under the command of Lieutenant-Commander P. E. Haynes RNR Peel Castle sailed under the White Ensign in January 1915, her engine room manned mostly by Steam Packet Company personnel, and became part of the Downs Boarding Flotilla, a section of the Dover Patrol. She remained with the Patrol for three years and one month. She would spend ten days at sea and four days in port at Dover. She was mainly tasked with regulating the large amount of maritime traffic between the Kent coast and the Goodwin Sands and to act as one of the guard ships that would stop and examine all neutral shipping proceeding to Continental ports. For the duration of the war the Dover Strait was closed by anti-submarine nets and minefields. No ship could pass except through the Patrol, and at one stage British warships were intercepting, searching and marshalling up to 115 vessels a day. In this strenuous work, Peel Castle played a busy and successful part. She was first fired upon by a retreating merchantman, and then at least once by British shore batteries. She endured many air raids and sent boarding parties onto many ships. Her crew captured a number of enemy personnel who were trying to get back to Europe hidden in neutral ships, including Franz von Rintelen, an infamous agent of Admiral Tirpitz. In 1916 she was badly damaged by fire and had to be refitted at Chatham. Later she transferred to the Orkney Islands. After having been fitted with depth charge throwers and paravanes and with her boat deck extended as a landing for kite balloons, she patrolled north of Shetland. Peel Castle was next moved to the Humber-Tyne Patrol, an area where shipping was very concentrated and where losses had been heavy. With her observer aloft in the balloon she would cruise up and down the convoys, looking for enemy submarines. The war over, Peel Castle was refitted once more, this time as a troop carrier, work she continued until May 1919, after which she returned to Liverpool and resumed her peacetime Steam Packet Company duties. Post-war service Upon returning to the Steam Packet fleet, Peel Castle was mainly used for subsidiary winter services. She undertook the direct Friday sailing from Liverpool to Ramsey, and also excursion trips and cargo duties. On the morning of Saturday 7 June 1924, Peel Castle went aground in Douglas Bay. 
She had left Liverpool operating the midnight sailing, and had approximately 550 passengers embarked. As she approached Douglas, Peel Castle encountered a bank of fog which forced her to slow her speed. Steaming at minimal speed, Peel Castle continued inbound to Douglas, sounding her whistle continuously, but ran aground on a bank of sand in the middle of Douglas foreshore. The impact was reported to be so slight that passengers were unaware of the situation until members of the ship's company informed them of the development. As the tide receded and the fog began to lift, a large crowd gathered on the shore. Passengers remained on board Peel Castle until she refloated. Shortly after 13:00hrs the vessel was refloated, aided by the Fenella, and then proceeded to Douglas Harbour under her own power. No apparent damage could be seen so Peel Castle left Douglas the following day, bound for the Graving Dock at Cammell Laird's in order to be inspected by representatives of Lloyd's of London. No damage was found on the subsequent inspection, and Peel Castle returned to service. Disposal Peel Castle was retired after a service life of 45 years. She was broken up at Dalmuir on the Clyde in February 1939. References Bibliography Ships of the Isle of Man Steam Packet Company 1894 ships Steamships of the United Kingdom
44044592
https://en.wikipedia.org/wiki/Arlene%20Blencowe
Arlene Blencowe
Arlene Blencowe (born 11 April 1983) is a mixed martial artist and boxer, currently competing in the Women's Featherweight division of Bellator MMA, where she is the first Australian female fighter in the promotion's history. As of October 5, 2021, she is #6 in the Bellator Women's pound-for-pound Rankings and #1 in the Bellator Women's Featherweight Rankings. Boxing career Blencowe began her boxing career in 2012 in her native Australia. She has held world championships with the World Boxing Federation and the Women's International Boxing Association. She has a current record of 4 wins and 4 losses. Mixed martial arts career Early career Blencowe began her professional MMA career in April 2013, a year after she began boxing. Fighting exclusively in her native Australia, she amassed a record of 5 wins and 4 losses over the first year and a half of her career. Bellator MMA On 27 October 2014 it was announced that Blencowe had signed with Bellator MMA. Blencowe made her Bellator debut against Adrienna Jenkins on 15 May 2015 at Bellator 137. She won the fight via TKO in the first round. In her second fight for the promotion, Blencowe faced Marloes Coenen on 28 August 2015 at Bellator 141. She lost the fight via armbar submission in the second round. In her third fight for the promotion, Blencowe faced Gabby Holloway on 20 November 2015 at Bellator 146. She won the fight via split decision. In her fourth fight for the promotion, Blencowe faced Julia Budd on 21 October 2016 at Bellator 162. She lost the bout via majority decision. After picking up two wins outside of Bellator, Blencowe returned to face Sinead Kavanagh on 25 August 2017 at Bellator 182. She won the fight via split decision. Blencowe fought for the Bellator Women's Featherweight Championship against Julia Budd in a rematch on 1 December 2017 at Bellator 189. She lost the fight by split decision. Blencowe faced Amber Leibrock on 29 September 2018 at Bellator 206. She won the fight via technical knockout in the third round. After racking up three straight victories, Blencowe challenged Cris Cyborg for the Bellator Women's Featherweight World Championship at Bellator 249 on 15 October 2020. She lost the bout via second-round submission. In December 2020, Blencowe signed a new multi-fight contract with Bellator. Blencowe faced Dayana Silva on July 16, 2021 at Bellator 262. She won the bout via TKO in the third round. Blencowe faced Pam Sorenson on November 12, 2021 at Bellator 271. She won the bout via unanimous decision. 
Championships and accomplishments Bellator MMA Most fights in Bellator Women's Featherweight division history (12) Most knockout wins in Bellator Women's Featherweight division history (four) Most stoppage wins in Bellator Women's Featherweight division history (four)
Boxing record
4 wins (2 TKOs, 2 decisions), 4 losses (4 decisions), 0 draws:
Win (4–4–0) – Kewarin Boonme, referee's technical decision, 12 December 2015, Sydney, Australia
Win (3–4–0) – Samon Khunurat, TKO, 3 October 2015, Sydney, Australia
Loss (2–4–0) – Erin McGowan, unanimous decision (10 rounds), 21 November 2014, Perth, Australia – for the vacant Women's International Boxing Association (WIBA) World Lightweight Title
Loss (2–3–0) – Tori Nelson, split decision (10 rounds), 27 September 2014, Springfield, Virginia, United States – fought for the WIBA World Welterweight Title
Loss (2–2–0) – Sabrina Ostowari, unanimous decision (8 rounds), 14 February 2014, Queensland, Australia – fought for the vacant Australian female lightweight title
Win (2–1–0) – Daniella Smith, unanimous decision (10 rounds), 13 June 2013, Auckland, New Zealand – fought for the vacant WIBA World Light Welterweight Title and the vacant World Boxing Federation female welterweight title
Loss (1–1–0) – Sarah Howett, decision (6 rounds), 20 October 2012, Victoria, Australia
Win (1–0–0) – Nadine Brown, decision (6 rounds), 8 July 2012, Queensland – professional debut
Mixed martial arts record
Win (15–8) – Pam Sorenson, decision (unanimous), Bellator 271, round 3, 5:00, Hollywood, Florida, United States
Win (14–8) – Dayana Silva, TKO (punches), Bellator 262, round 3, 1:00, Uncasville, Connecticut, United States
Loss (13–8) – Cris Cyborg, submission (rear-naked choke), Bellator 249, round 2, 2:36, Uncasville, Connecticut, United States
Win (13–7) – Leslie Smith, decision (unanimous), Bellator 233, round 3, 5:00, Thackerville, Oklahoma, United States
Win (12–7) – Amanda Bell, KO (punches), Bellator 224, round 1, 0:22, Thackerville, Oklahoma, United States
Win (11–7) – Amber Leibrock, TKO (slam and punches), Bellator 206, round 3, 1:23, San Jose, California, United States
Loss (10–7) – Julia Budd, decision (split), Bellator 189, round 5, 5:00, Thackerville, Oklahoma, United States
Win (10–6) – Sinead Kavanagh, decision (split), Bellator 182, round 3, 5:00, Verona, New York, United States
Win (9–6) – Rhiannon Thompson, TKO, Australian FC 18: Night 2, round 1, Gold Coast, Australia
Win (8–6) – Janay Harding, TKO (punches), Legend MMA 1, round 1, 1:08, Gold Coast, Australia
Loss (7–6) – Julia Budd, decision (majority), Bellator 162, round 3, 5:00, Memphis, Tennessee, United States
Win (7–5) – Gabby Holloway, decision (split), Bellator 146, round 3, 5:00, Thackerville, Oklahoma, United States
Loss (6–5) – Marloes Coenen, submission (armbar), Bellator 141, round 2, 3:23, Temecula, California, United States
Win (6–4) – Adrienna Jenkins, TKO (punches), Bellator 137, round 1, 4:08, Temecula, California, United States
Win (5–4) – Faith Van Duin, KO (knee to the body), Storm MMA - Storm Damage 5, round 3, 3:00, Queensland, Australia
Win (4–4) – Kenani Mangakahia, submission (triangle choke), FightWorld Cup 17, round 1, 4:21, Adelaide, Australia
Win (3–4) – Mae-Lin Leow, TKO (punches), MMA Down Under 4, round 1, 4:21, Adelaide, Australia
Loss (2–4) – Faith Van Duin, decision (split), Storm MMA - Storm Damage 3, round 3, 3:00, Canberra, Australia
Win (2–3) – Maryanne Mullahy, decision (unanimous), Storm MMA - Storm Damage 3, round 3, 5:00, Canberra, Australia
Loss (1–3) – Kate Da Silva, submission (rear-naked choke), Storm MMA - Storm Damage 3, round 2, 3:00, Canberra, Australia
Loss (1–2) – Jessica-Rose Clark, submission (rear-naked choke), Nitro MMA - Nitro 9, round 2, 3:38, Queensland, Australia
Win (1–1) – Kerry Barrett, decision (split), Brace For War 20, round 5, 5:00, Queensland, Australia
Loss (0–1) – Kyra Purcell, submission (armbar), FightWorld Cup 14, round 2, 1:27, Queensland, Australia
See also List of female boxers List of female mixed martial artists References External links Arlene Blencowe at Awakening Fighters Arlene Blencowe Documentary at Flying Machine Films 1983 births Living people Sportswomen from New South Wales Australian female mixed martial artists Australian women boxers Boxers from Sydney Lightweight mixed martial artists Featherweight mixed martial artists Mixed martial artists utilizing boxing World boxing champions World welterweight boxing champions Welterweight boxers Lightweight boxers Bellator female fighters Australian people of Filipino descent
28279836
https://en.wikipedia.org/wiki/Vector%20Fabrics%2C%20B.V.
Vector Fabrics, B.V.
Vector Fabrics, B.V. was a software-development tools vendor that originated in Eindhoven and was later based in Zaltbommel, the Netherlands. The company developed tools for programming multicore platforms. Vector Fabrics said it helped software developers and OEMs that struggle to write error-free and efficient code for multicore and (heterogeneous) manycore processors. Products Vector Fabrics' Pareon Profile is a predictive profiling tool based on dynamic analysis to explore opportunities and bottlenecks for parallel execution of C and C++ code. The product includes a model of the target platform (e.g. ARM Android) to predict the performance and power gains of a proposed code rewrite. It has been used, among others, to optimize Blink and WebKit, the engines underlying the Chrome browser, the Bullet Physics engine, the IdTech4 game engine underlying Doom 3, and a number of video codecs and image processing applications. Vector Fabrics' Pareon Verify uses dynamic analysis to find bugs in C or C++ application code. It has been used to find bugs in various open source software projects such as PicoTCP, VTK, Navit and YARP. vfTasks is an open-source library for writing multi-threaded applications in C and C++. It includes APIs for various synchronization and parallel programming patterns. History In February 2007, Vector Fabrics was founded by three experts in multicore programming from NXP Semiconductors and Philips Research. In November 2012, Vector Fabrics was included in the EE Times 'Silicon 60' list of emerging startups. In June 2012, Vector Fabrics released Pareon Profile, a tool to help programmers optimize software for multicore platforms. In April 2013, Gartner selected Vector Fabrics as a 'Cool Vendor in Embedded Systems & Software' for 2012. In May 2013, Vector Fabrics joined the Multicore Association (MCA). In May 2015, Vector Fabrics moved from the center of Eindhoven, the Netherlands (Province of Brabant) to Zaltbommel, the Netherlands (Province of Gelderland). October 2015 saw the public release of Pareon Verify, a tool to find software bugs via dynamic analysis. Vector Fabrics was declared bankrupt in May 2016. References External links Vector Fabrics website The open-source vfTasks library Parallel computing Compilers Zaltbommel
19358462
https://en.wikipedia.org/wiki/Locator/Identifier%20Separation%20Protocol
Locator/Identifier Separation Protocol
Locator/ID Separation Protocol (LISP) is a "map-and-encapsulate" protocol developed by the Internet Engineering Task Force LISP Working Group. The basic idea behind the separation is that the Internet architecture combines two functions, routing locators (where a client is attached to the network) and identifiers (who the client is), in one number space: the IP address. LISP supports the separation of the IPv4 and IPv6 address space following a network-based map-and-encapsulate scheme. In LISP, both identifiers and locators can be IP addresses or arbitrary elements like a set of GPS coordinates or a MAC address. Historical origin The Internet Architecture Board's October 2006 Routing and Addressing Workshop renewed interest in the design of a scalable routing and addressing architecture for the Internet. Key issues driving this renewed interest include concerns about the scalability of the routing system and the impending exhaustion of IPv4 address space. Since the IAB workshop, several proposals have emerged that attempted to address the concerns expressed at the workshop. All of these proposals are based on a common concept: the separation of Locator and Identifier in the numbering of Internet devices, often termed the "Loc/ID split". Current Internet Protocol Architecture The current namespace architecture used by the Internet Protocol uses IP addresses for two separate functions: as an end-point identifier to uniquely identify a network interface within its local network addressing context, and as a locator for routing purposes, to identify where a network interface is located within a larger routing context. LISP There are several advantages to decoupling Location and Identifier, and to LISP specifically. Improved routing scalability BGP-free multihoming in active-active configuration Address family traversal: IPv4 over IPv4, IPv4 over IPv6, IPv6 over IPv6, IPv6 over IPv4 Inbound traffic engineering Mobility Simple deployability No host changes are needed Customer driven VPN provisioning replacing MPLS-VPN Network virtualization Customer operated encrypted VPN based on LISP/GETVPN replacing IPsec scalability problems High availability for seamless communication sessions through (constraint-based) multihoming Several LISP use cases have been discussed in the literature. The IETF has an active workgroup establishing standards for LISP. As of 2016, the LISP specifications are on the experimental track. The LISP workgroup started to move the core specifications onto the standards track in 2017; as of June 2021, three revisions (of RFC 6830, RFC 6833, and RFC 8113) are ready for publication as RFCs, but they await completion of work on a revision of RFC 6834 and the LISP Security Framework. Terminology Routing Locator (RLOC): An RLOC is an IPv4 or IPv6 address of an egress tunnel router (ETR). An RLOC is the output of an EID-to-RLOC mapping lookup. Endpoint ID (EID): An EID is an IPv4 or IPv6 address used in the source and destination address fields of the first (innermost) LISP header of a packet. Egress Tunnel Router (ETR): An ETR is a device that is the tunnel endpoint; it accepts an IP packet where the destination address in the "outer" IP header is one of its own RLOCs. ETR functionality does not have to be limited to a router device; a server host can be the endpoint of a LISP tunnel as well. 
Ingress Tunnel Router (ITR): An ITR is a device that is the tunnel start point; it receives IP packets from site end-systems on one side and sends LISP-encapsulated IP packets, across the Internet to an ETR, on the other side. Proxy ETR (PETR): A LISP PETR implements ETR functions on behalf of non-LISP sites. A PETR is typically used when a LISP site needs to send traffic to non-LISP sites but the LISP site is connected through a service provider that does not accept nonroutable EIDs as packet sources. Proxy ITR (PITR): A PITR is used for inter-networking between non-LISP and LISP sites; it acts like an ITR but does so on behalf of non-LISP sites which send packets to destinations at LISP sites. xTR: An xTR refers to a device which functions both as an ITR and an ETR (which is typical), when the direction of data flow is not part of the context description. Re-encapsulating Tunnel Router (RTR): An RTR is used for connecting LISP-to-LISP communications within environments where direct connectivity is not supported. Examples include: 1) joining LISP sites connected to "disjointed locator spaces" – for example, a LISP site with IPv4-only RLOC connectivity and a LISP site with IPv6-only RLOC connectivity; and 2) creating a data plane 'anchor point' by a LISP-speaking device behind a NAT box to send and receive traffic through the NAT device. The LISP mapping system In the Locator/Identifier Separation Protocol, the network elements (routers) are responsible for looking up the mapping between endpoint identifiers (EIDs) and routing locators (RLOCs), and this process is invisible to the Internet end-hosts. The mappings are stored in a distributed database called the mapping system, which responds to the lookup queries. The LISP beta network initially used a BGP-based mapping system called LISP ALternative Topology (LISP+ALT), but this has now been replaced by a DNS-like indexing system called DDT inspired by LISP-TREE. The protocol design made it easy to plug in a new mapping system when a different design proved to have benefits. Some proposals have already emerged and have been compared. Implementations Cisco has released public IOS, IOS XR, IOS XE and NX-OS images which support LISP. A team of researchers from the Université catholique de Louvain and T-Labs/TU Berlin have written a FreeBSD implementation called OpenLISP. The LIP6 lab of UPMC, France, has implemented a fully featured control-plane (MS/MR, DDT, xTR) for OpenLISP. Historically, LISPmob was an open source implementation of LISP for Linux, OpenWRT and Android maintained at the Polytechnic University of Catalonia. It could act as an xTR or a LISP Mobile Node. Recently, this implementation has been further developed into a full open source LISP router called the "Open Overlay Router" or OOR. AVM added LISP support in firmware for its FRITZ!Box devices starting from FRITZ!OS version 6.00. LANCOM Systems supports LISP in its router operating system. HPE supports LISP in its Comware 7 platform based routers (under the marketing names FlexNetwork MSR and VSR). This platform is developed by H3C Technologies and sold in China under its own brand. OpenDaylight supports LISP flow mappings. ONOS develops a distributed LISP control plane as an SDN application. Lispers.net provides an open source, feature-complete implementation of LISP. fd.io also supports LISP through the Overlay Network Engine (ONE). A simple LISP Mapping System implementation is also available in Java. 
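To make the map-and-encapsulate idea concrete, the following is a minimal sketch, in Python, of the control logic an ITR conceptually applies: look up the destination EID in the mapping system, pick an RLOC of the destination site's ETR, and encapsulate the packet toward that locator (or forward natively if the destination is not a LISP site). It is an illustration only, not code from any of the implementations listed above; the mapping entries, addresses and function names are assumptions made for the example.

```python
# Conceptual ITR logic: EID-to-RLOC lookup followed by encapsulation.
# Illustrative only; the mapping table below stands in for a real mapping system.
import ipaddress

# Toy mapping system: EID prefix -> RLOCs (locators of the destination site's ETRs).
MAPPING_SYSTEM = {
    ipaddress.ip_network("203.0.113.0/24"): ["198.51.100.1", "192.0.2.77"],
    ipaddress.ip_network("2001:db8:beef::/48"): ["192.0.2.10"],
}

def lookup_rlocs(eid):
    """EID-to-RLOC lookup: return the RLOCs of the longest matching EID prefix."""
    addr = ipaddress.ip_address(eid)
    candidates = [p for p in MAPPING_SYSTEM
                  if p.version == addr.version and addr in p]
    if not candidates:
        return []  # no mapping: treat destination as a non-LISP site
    best = max(candidates, key=lambda p: p.prefixlen)
    return MAPPING_SYSTEM[best]

def itr_forward(packet_bytes, destination_eid):
    """Encapsulate toward the first RLOC, or forward natively if no mapping exists."""
    rlocs = lookup_rlocs(destination_eid)
    if not rlocs:
        return ("native", packet_bytes)
    outer = f"outer IP dst={rlocs[0]} / LISP header"  # stand-in for the real outer header
    return ("encapsulated", outer, packet_bytes)

print(itr_forward(b"inner packet", "203.0.113.5"))
```

A production ITR additionally caches map-replies, probes RLOC reachability and applies the priority/weight values carried in the mapping, but the lookup-then-encapsulate flow above is the essence of the data plane.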
LISP beta network A testbed has been developed to gain real-life experience with LISP. Participants include Google, Facebook, NTT, Level3, InTouch N.V. and the Internet Systems Consortium. As of January 2014, around 600 companies, universities and individual contributors from 34 countries were involved. The geographical distribution of participating routers, and the prefixes they are responsible for, can be observed on the LISPmon project website (updated daily). The multi-company, LISP-community initiative LISP4.net/LISP6.net publishes relevant information about this beta network on http://www.lisp4.net/ and http://www.lisp6.net/. Since March 2020 the LISP Beta Network has no longer been maintained. LISP-Lab consortium research network The LISP-Lab project, coordinated by UPMC/LIP6, aims at building a LISP network experimentation platform exclusively built using open source LISP nodes (OpenLISP) acting as ITR/ETR tunnelling routers, MS/MR mapping servers/resolvers, DDT root and Proxy ITR/ETR. Partners include two academic institutions (UPMC, TPT), two cloud networking SMEs (Alphalink, NSS), two network operators (Renater, Orange), two SMEs in access/edge networking (Border 6, Ucopia) and one Internet eXchange Point (Rezopole). More information is available at https://web.archive.org/web/20190508220217/http://www.lisp-lab.org/. The platform was expected to be opened to external partners in 2014/2015 and is already interconnected to the LISP Beta Network with an OpenLISP DDT root. Future use of LISP ICAO is considering Ground-Based LISP as a candidate technology for the next-generation Aeronautical Telecommunications Network (ATN). The solution is under further development as part of the SESAR (Single European Sky ATM Research) FCI activities. Other approaches Several other approaches to separating the two functions and allowing the Internet to scale better have been proposed, for instance GSE/8+8 as a network-based solution and SHIM6, HIP and ILNP as host-based solutions. See also Host Identity Protocol (HIP) Identifier/Locator Network Protocol (ILNP) Proxy Mobile IPv6 (PMIPv6) References External links The LISP Network (Book) LISP Network Deployment and Troubleshooting (Book) LISP Online Bibliography IETF Workgroup: Locator/ID Separation Protocol (lisp) http://www.openoverlayrouter.org/ https://web.archive.org/web/20190508220217/http://www.lisp-lab.org/ ODL LISP Flow Mapping User Guide Data Communication Lectures of Manfred Lindner Part LISP LISP assignment information at RIPE LISP routing group on Facebook LISP group on LinkedIn Internet architecture Multihoming Internet Protocol Internet layer protocols
26874183
https://en.wikipedia.org/wiki/Endeavour%20Software%20Project%20Management
Endeavour Software Project Management
Endeavour Software Project Management is an open-source solution for managing large-scale enterprise software projects in an iterative and incremental development process. History Endeavour Software Project Management was founded in September 2008 with the intention of developing an easy-to-use, intuitive and realistic replacement for expensive and complex project management systems, eliminating features considered unnecessary. In September 2009 the project was registered on SourceForge, and in April 2010 the project was featured on SourceForge's blog, with an average of 210 weekly downloads. Features The major features include support for the following software artifacts: Projects Use cases Iterations Project plans Change requests Defect tracking Test cases Test plans Tasks Actors Document management Project glossary Project Wiki Developer management Reports (assignments, defects, cumulative flow) SVN browser integration with Svenson Continuous Integration with Hudson Email notifications Fully internationalizable System requirements Endeavour Software Project Management can be deployed on any Java EE-compliant application server with any relational database, running under a variety of operating systems. Its cross-browser capability allows it to run in most popular web browsers. Usage Software project management Iterative and incremental development Use-case-driven Issue tracking Test-case management software Integrated wiki See also Integrated Computer Solutions Project management software List of project management software Notes References Lee Schlesinger. Social media specialist at SourceForge.net blog post about Endeavour Software Project Management http://www.softpedia.com/get/Programming/Coding-languages-Compilers/Endeavour-Software-Project-Management.shtml http://freshmeat.net/projects/endeavour-software-project-management https://web.archive.org/web/20100504041659/http://www.federalarchitect.com/2009/07/21/new-open-source-project-management-tool-for-large-scale-enterprise-systems/ External links http://sourceforge.net/projects/endeavour-mgmt/reviews Software project management Software requirements Bug and issue tracking software Software development process Free software programmed in Java (programming language) Free project management software
17351293
https://en.wikipedia.org/wiki/John%20Archer%20%28basketball%29
John Archer (basketball)
John Archer (died October 13, 1998) was the head basketball coach of the Troy State Trojans from 1956 to 1973. He was the third coach in the history of the Troy basketball program and accumulated a record of 304-185 in his seventeen seasons as head coach. He guided the program to three NAIA National Tournament appearances and three Alabama Collegiate Conference titles. He came to Troy State Teachers College in 1956 as head basketball and tennis coach, line coach of the football team, and as an instructor in the physical education program. When his coaching days were over, Archer remained on the Troy State staff as a physical education instructor and intramural director. In 1969, Archer's ability to select winners came to national attention when he was named to the U.S. Olympic Men's Basketball Committee. He joined other college basketball experts from around the country in selecting the members of the team that represented the United States in the 1972 Summer Olympics and the Pan American Games. References Year of birth missing 1998 deaths Basketball coaches from Alabama Troy Trojans football coaches Troy Trojans men's basketball coaches
4886277
https://en.wikipedia.org/wiki/Royal%20Australian%20Survey%20Corps
Royal Australian Survey Corps
The Royal Australian Survey Corps (RA Svy) was a Corps of the Australian Army, formed on 1 July 1915 and disbanded on 1 July 1996. As one of the principal military survey units in Australia, the role of the Royal Australian Survey Corps was to provide the maps, aeronautical charts, hydrographical charts and geodetic and control survey data required for land combat operations. Functional responsibilities associated with this role were: theatre wide geodetic survey for – artillery, naval gunfire and close air support – mapping and charting – navigation systems – command and control, communications, intelligence, reconnaissance and surveillance systems; map production and printing for new maps and charts, plans, overprints, battle maps, air photo mosaics and photomaps, rapid map and chart revision; map holding and map distribution; production, maintenance and distribution of digital topographic information and products. RA Svy survey and mapping information was, and still is, a key information source for geospatial intelligence. The operational doctrine was that the combat force deployed into the area of operations with topographic products adequate for planning, force insertion and initial conduct of tactical operations, that new products and broad area updates of the topographic base would be provided by the support area and communication zone survey forces, and that the combat support survey force in the area of operations would update the topographic base, add tactical operational and intelligence information and provide the value-added products required by the combat force. The Historical Collection of the Survey Corps is maintained by the Australian Army Museum of Military Engineering at Holsworthy Barracks, south-west Sydney, New South Wales. Survey Corps Associations of ex-members, family and friends are located in Adelaide, Bendigo, Brisbane, Canberra, Perth and Sydney. Many wartime maps produced by the Survey Corps are in the Australian War Memorial collection, while all of the maps produced by the Corps are also in the national collection at the National Library of Australia. All of these are available to the public and some are on-line. History Origins Australia's first surveyor, Lieutenant August Alt, was an Army officer of the 8th (The King's) Regiment of Foot, which arrived in Australia with the First Fleet in January 1788. Eighteen years before him, Lieutenant James Cook, Royal Navy, used his knowledge and skills of topographic survey by plane-table for his surveys and charting of the east coast. This graphical method of topographic survey, first used before 1600, was the mainstay of the Australian Survey Corps for the first 20 years and used in the two world wars and occasionally much later. Cook had learned surveying in Canada from Royal Engineer Samuel Holland who then (1758) was the first Surveyor-General of British North America. For 113 years after the arrival of the First Fleet, much of the mapping of Australia, mainly for colonial exploration, settlement and development, was supervised and conducted by naval and military officers. 
These officers included the well known explorers and surveyors: Captain Matthew Flinders, Royal Navy; Lieutenant William Dawes (Royal Marines officer), (New South Wales); Lieutenant Philip Parker King, Royal Navy (mainly Tasmania, Western Australia and Northern Territory), Lieutenant John Oxley, Royal Navy, (New South Wales); Lieutenant Colonel Sir Thomas Mitchell (New South Wales, Victoria and Queensland); Captain Charles Sturt (New South Wales and South Australia); Lieutenant John Septimus Roe, Royal Navy (Western Australia), and Colonel William Light (South Australia). After 100 years of settlement some topographic maps had been produced for both public and private use but only a few maps were of any real military utility. The colonial part-time Defence Forces prepared small-area training manoeuvre maps and some colonies had produced small systematic topographic surveys for defence of the main ports of trade and commerce in the 1880s and 1890s. That is not to say that the need for topographic mapping was not a government and public concern but there were only minor attempts to allocate appropriate public resources. After Federation in 1901, the Defence Act 1903 made provision for military survey but nothing was immediately done to address the matter. Most recently a royal commission into the way the British Army had conducted itself in South Africa (Boer War) found that the troops had to fight without adequate topographic information. Indeed, accurate maps of the Boer republics did not exist. The Times' history of the Boer War 1899–1902 included: 'The chief deduction to be made in the matter is that no efforts during a war will compensate for the lack of a proper topographical survey made in peace time. Maps are a necessity to a modern army, and the expense of making them is very small compared with the cost of a campaign.' In early 1907, Colonel William Bridges then Chief of Intelligence and the senior Australian military officer recommended to Defence Minister Sir Thomas Ewing, member for Richmond, New South Wales, that a Chief of the General Staff should be appointed in place of Chief of Intelligence and that a General Staff be established. The Minister, who was a licensed surveyor, was unimpressed by the advice preferring to continue for the time being with the arrangements of a Military Board. He was concerned that there were no plans for mobilization, or local defence, noting that "there were few reliable maps, even of the most important localities". These were the important national defence issues that Ewing recommended to Bridges for his immediate attention. The Department of Defence and the Government considered a number of options to address the important and urgent need for a military survey, finally deciding in late 1907 to raise the Australian Intelligence Corps (AIC) manned by part-time Citizen Military Force (CMF) officers who were trained in surveying or draughting and whose duties included the preparation of strategic and tactical maps and plans. In Victoria the military survey was the responsibility of Lieutenant-Colonel John Monash who was then the senior AIC militia officer in the 3rd Military District. By 1909 the limitations of this arrangement were evident, and after advice from a senior British Army Royal Engineer Survey Officer, it was decided to embark on a systematic military survey conducted full-time by a Survey Section, Royal Australian Engineers (RAE) (Permanent Forces), allotted for duty under the supervision of the AIC. 
The Section was to be commanded by an Australian survey officer and staffed by Australian warrant officer draughtsmen and non-commissioned officer (NCO) topographers, together with four NCO topographers on an initial two-year loan from the British Army Royal Engineers. The Royal Engineer topographers (Corporal Lynch, Lance-Corporals Barrett, Davies and Wilcox) arrived in Melbourne on 11 April 1910, and on 16 April 1910 draughtsman Warrant Officer John J Raisbeck became the first Australian appointed to the Survey Section; soon after he was joined by draughtsman Warrant Officer Class 1 George Constable. Raisbeck was a survey draughtsman from the Department of Mines, Bendigo, and a Second-Lieutenant in the CMF 9th Light Horse Regiment. He relinquished his commission to enlist in the Permanent Military Forces. He was granted the honorary rank of Second-Lieutenant and was retired with the rank of Lieutenant-Colonel 33 years later at age 63, having served in France in World War I and in World War II. Fitzgerald said that JJ Raisbeck was the 'pioneer of military mapping in Australia'. The first officer appointed to the position of Lieutenant, Survey Section RAE (Permanent), was Lieutenant William Lawrence Whitham, a licensed surveyor from South Australia who had been highly recommended by the Surveyor-General of South Australia. He assumed his appointment on 1 July 1910, at which time there were seven members of the Section. All of the men were professional surveyors and draughtsmen. CMF officers of the AIC in the Military District Headquarters supervised the work of the Section. In 1912 the Queensland, New South Wales, Victoria and South Australia State Governments each agreed to loan a Lands Department Survey Officer to the Survey Section RAE for two years to supervise the work of the Section in each State. Each Survey Officer was a militia officer in the AIC. The Section was divided into two sub-sections and employed in New South Wales, Victoria, South Australia and Western Australia producing scale one-mile-to-one-inch military maps (the 'one-mile' map), mainly of areas around cities and key infrastructure. By mid-1913 the Section had completed topographic field sheets by plane-tabling for eight maps around Newcastle, Melbourne and Canberra. The initial method of using parish plans to position the topographic detail was soon found to be inadequate, and a geodetic sub-section was established in 1914 to provide surveys by geodetic triangulation as spatial frameworks for the field sheets produced by the topographers. The triangulation work started at Werribee south-west of Melbourne and proceeded west along the coast to Warrnambool. This work included the 1860 Geodetic Survey of Victoria where appropriate – much of which was produced by Royal Engineer officers. The AIC was disbanded in 1914 but the work of the Survey Section continued under the supervision and control of the Intelligence Section of the General Staff, under the general direction of the Chief of the General Staff (CGS). 
However, by 17 August the Draughting sub-Section of the Survey Section RAE in Melbourne had prepared a map of the north-east frontier of France for reproduction of 1,000 copies by photo-lithography by the Victorian Lands Department for issue to the Australian Imperial Force. On 3 July 1915, Military Order 396 of 1915 promulgated that His Excellency the Governor-General has been pleased to approve of: 'A Corps to be called the Australian Survey Corps being raised as a unit of the Permanent Military Forces. All officers, warrant officers, non-commissioned officers and men now serving in the Survey Section of the Royal Australian Engineers being transferred to the Australian Survey Corps with their present ranks and seniority.' The effective date of the foundation of the Corps was 1 July 1915 – on that day there were two officers and seventeen warrant officers and NCOs of the Corps (at that stage there were no sappers in the Corps). The Australian Survey Corps was placed fourth in the Order of Precedence of Corps after the Royal Australian Engineers. Raising the Australian Survey Corps had nothing to do with supporting the AIF then at Gallipoli, but provided for the key tasks of military survey of the high defence priority areas of Australia without direction or control of Intelligence Staff or the Royal Australian Engineers. The first Officer Commanding Australian Survey Corps was Honorary Captain Cecil Verdon Quinlan who had been appointed Lieutenant Survey Section RAE in March 1913 after Lieutenant Whitham resigned. Quinlan later gave credit to the creation of the Australian Survey Corps to the then General Staff Director of Military Operations – Major Brudenell White (later General Sir, Chief of the General Staff). Survey Corps members started to enlist in the AIF in November 1915, with three Warrant Officer draughtsmen (Murray, Shiels and MacDonald) working with the Headquarters of the Egyptian Expeditionary Force by late 1916. Then in late 1917, General Headquarters in France/Belgium requested survey personnel for topographic surveys in that theatre and a volunteer Australian Survey Corps AIF draft of four officers and seven NCOs arrived in France early in 1918. Military Survey in Australia then came to a virtual standstill when the AIF members departed for the Middle East and the Western Front. 2nd Lieutenant Raisbeck, Sergeants Anderson and Clements and Corporal Watson served with the Australian Corps Topographic Section (not a unit of the Australian Survey Corps) in France/Belgium and Lieutenants Vance and Lynch, 2nd Lieutenant Davies, Sergeants Clews and Rossiter and Corporals Blaikie and Roberts served with General Headquarters Royal Engineer Survey Companies employed amongst other things on evaluating the suitability of the French triangulation networks for the purposes of military survey and mapping. In late 1916, Warrant Officer Class 1 Hector E McMurtrie was enlisted into the Corps as a draughtsman for duty with the survey staff (not Australian Survey Corps) working on land title surveys for the military administration by the Australian Naval and Military Expeditionary Force in the New Guinea area. In 1919 he died from a service related illness, and is commemorated on the Australian War Memorial Roll of Honour. He was the first Australian Survey Corps member to die in war. In early 1918 fourteen Corps members were serving with the AIF and eight were in Australia working on the military survey. 
One Survey Corps member (Warrant Officer Class 1 Alan Stewart Murray) and one Topographic Section Topographer, Corporal Stafford, were awarded the Distinguished Conduct Medal for mapping under enemy fire; Topographic Section Topographer Sergeant Finlason was recommended for a Military Medal but was awarded the French Croix de Guerre; Topographic Section Draughtsman Sergeant Wightman was awarded the Meritorious Service Medal; the Officer Commanding the 1st ANZAC Topographic Section in 1917 and the Australian Corps Topographic Section in 1918, Lieutenant Buchanan, was awarded the Belgian Croix de Guerre; and Topographic Section Lithographer Sergeant Dunstan was awarded the French Croix de Guerre. Warrant Officer Murray's citation for his work in Sinai and Palestine read "for conspicuous gallantry and dedication to duty. For a prolonged period this Warrant Officer was engaged on surveying the area between the lines repeatedly working under machine gun fire and sniping. In order not to attract attention he usually worked alone with his plane table and instruments. Owing to his energy and coolness he has mapped a piece of country accurately and his work has been most valuable." In March 1919 Warrant Officer Class 1 Norman Lyndhurst Shiels was mentioned in the despatches of the General Officer Commanding Egyptian Expeditionary Force, possibly for his work with the Royal Air Force in mapping from aerial photography. Between the World Wars When the AIF members returned to Australia in 1919 they were discharged and most were re-appointed to the Survey Section RAE (Permanent) or the Australian Survey Corps (Permanent). Military reorganisation after World War I, and the general reduction in military capability, saw the numbers reduced to fourteen all ranks. In 1920 the Corps was suspended and reverted, in title, to the Survey Section, RAE (Permanent). Mapping continued, mainly the 'one-mile' maps, albeit at a reduced rate consistent with the allocated resources, with priority given to areas around Brisbane, Sydney and Melbourne. By 1929 the Section had produced fifty-four 'one-mile' maps including five revised maps. Technical developments provided for improved efficiencies. These included the use of single photographs and strips of air photographs to supplement ground surveys by plane-table for topographic detail. The first military map produced with a significant component of aerial photography, flown by the Royal Australian Air Force, was the Albury 'one-mile' map published in 1933. Compilation of this map highlighted inconsistencies in State triangulation systems and once again demonstrated the need for a national geodetic survey, which had been identified by colonial survey officers in the 1890s. The Survey Corps undertook this field and computation work, linking QLD, NSW, VIC and SA with a 1st Order trigonometric network by 1939. In 1932, the Army decided that the Order in Council which created the Corps in 1915 had never been revoked, and so the title of the military survey unit was once again the Australian Survey Corps (Permanent Forces), with the personnel establishment of fourteen all ranks as it had been for twelve years. Recruiting recommenced and in 1935 the Corps establishment was raised to twenty-five all ranks. Increments allowed for a Survey (Topographic) Section to be based in each of three states, New South Wales, Victoria and Queensland, a Survey (Geodetic) Section and the Survey (Draughting) Section in Melbourne. 
Then in 1938, with war clouds once again on the horizon, a three-year Long Range Mapping Programme was approved, with additional funding bringing the total to ninety-seven all ranks. These extra resources would provide for a total of about thirty-five new '1 mile' maps each year. However, at the outbreak of the Second World War there were only fifty members in the Australian Survey Corps. Most of the men were professionally qualified, and the Corps was very knowledgeable about emerging technical developments which would improve efficiencies in map production. That there were no militia units of the Corps was due mainly to the fact that little progress in the mapping programme was achieved with part-time effort. The Corps was well equipped for the range of field survey and office mapping tasks with the exception of printing presses. Maps were printed by the Victorian State Government Printer. At this stage, just before the Second World War, the total coverage of military 'one-mile' maps was eighty-one maps, or only about 40% of the identified military mapping requirement. Sections (all Permanent Forces) from the end of the First World War to pre-Second World War were: Survey Section, RAE No 1 Survey (Draughting) Section (Melbourne) No 1 Survey (Topographic) Section (Queensland, South Australia and Victoria) No 2 Survey (Topographic) Section (Victoria) No 3 Survey (Topographic) Section (New South Wales) No 4 Survey (Geodetic) Section (1st Order triangulation Southern Queensland – New South Wales – Victoria – South Australia) Second World War In July 1939 'Instructions for War – Survey' were issued. These outlined the military survey organisation required to undertake an Emergency Mapping Programme to complete the outstanding Long Range Mapping Programme and to be the nucleus for expansion to war establishment. The Emergency Mapping Programme was initially for strategic mapping at scale eight-miles-to-one-inch of inland Australia, four-miles-to-one-inch ('four-mile' maps) covering a coastal strip 200 miles (300 km) inland from Townsville to Port Augusta and 100 miles (200 km) inland from Albany to Geraldton and key strategic areas in Tasmania and around Darwin, and 'one-mile' maps of populated places. Map production was from existing State Lands Departments' information and was conducted jointly between State Lands Departments and Survey Corps units. The programme expanded to include more of Australia, New Guinea, New Britain and New Ireland and, although many maps were of a preliminary standard only, they provided general coverage that was critical at the time. Initially, the Australian Survey Corps continued its mapping for the defence of Australia, proceeded with basic survey triangulation and provided training cadres for new field and training units. In January 1940, Field Survey Units RAE (Militia) were established in the 1st Military District – Queensland, 2nd Military District – New South Wales and 3rd Military District – Victoria to accelerate the mapping effort. Deputy Assistant Directors – Survey were appointed to Military District Headquarters to advise on survey needs and to liaise with State agencies. In April 1940 the 2/1 Corps Field Survey Company RAE was raised as part of the 2nd Australian Imperial Force for service overseas and a Survey staff was appointed to the formation (Corps) headquarters. 
In September 1940 significant expansion of the Australian Survey Corps was approved by Cabinet, to include a Survey Directorate on Army Headquarters (AHQ) and higher formation headquarters, an AHQ Survey Company, an AHQ Cartographic Company, regional command field survey companies, a Survey Section 7th Military District (Darwin) and a Corps Survey Mobile Reproduction Section. Four militia Field Survey Companies RAE were established in the regional commands (1 Field Survey Coy RAE – Queensland, 2 Field Survey Coy RAE – New South Wales, 3 Field Survey Coy RAE – Victoria, 4 Field Survey Coy RAE – Western Australia), absorbing the Australian Survey Corps Sections and the Field Survey Units RAE (M) in the Commands. In early 1941 the 2/1 Corps Field Survey Company RAE sailed with the 2nd Australian Imperial Force to provide survey and mapping to the Australian Corps in the Middle East theatre, including Greece, Egypt, Cyrenaica and the border zones of Palestine, Syria, Trans-Jordan and Turkey. In response to the Japanese late-1941 and early-1942 offensives in South-East Asia and the Pacific, the 2/1 Corps Field Survey Company RAE returned to Australia in early 1942 with a large part of I Aust Corps. Over the next four years fifteen survey units with various roles relating to the production of topographic maps provided survey and mapping support to military operations in the South West Pacific Area theatre of the war, including the Northern Territory, Papua, New Guinea, New Britain, New Ireland, Bougainville, Dutch New Guinea, Borneo and the States of Australia, in particular northern Australia. Women of the Australian Women's Army Service served in Survey units and formation headquarters sections in Australia and in New Guinea. In June 1943 all topographic survey related units were concentrated in the Australian Survey Corps, being transferred from the Royal Australian Engineers. Acknowledgement of the value of survey support organic to the combat forces increased as the war progressed. Early doctrine was that survey support was at Army Corps level, but additional support was added at Army and Force levels, and by the end of the war survey sections of 5th Field Survey Company were assigned to both the 7th Australian Infantry Division and the 9th Australian Infantry Division for the large scale amphibious landings at Labuan and Balikpapan in Borneo. Later, the General Staff of Headquarters 1 Australian Corps said "...never in this war have Australian troops been so well provided with accurate maps, sketches and photo reproductions..." More than 1,440 new maps of the theatres of war were produced (Middle East 28, Australian mainland 708, New Guinea area 364, Borneo 147, Mindanao, Philippines 200), with more than 15 million copies printed. Many of the Second World War maps are available on-line from the National Library of Australia. The highly valued efforts of the survey units did not go unnoticed by senior commanders. On 13 November 1942, Lieutenant-General Edmund Herring, General Officer Commanding New Guinea Force, wrote to the Director of Survey, Advanced Land Headquarters, thanking him for the noteworthy efficiency and splendid cooperation in having an urgently needed map of Buna, Papua, sent to the 2/1st Army Topographic Survey Company RAE in Toowoomba, QLD, for printing and then returned for issue to forward troops 48 hours later. On 1 June 1943, Lieutenant-General John Northcott, Chief of the General Staff, wrote a letter of appreciation of the work of the Survey units in New Guinea to the Director of Survey, Advanced Land Headquarters. 
On 19 October 1943, soon after the successful assaults on Lae and Finschhafen in New Guinea, General Douglas MacArthur, Commander-in-Chief, South West Pacific Area, wrote a letter of high commendation of the performance of the 2/1st Aust Army Topographic Survey Company, 3rd Aust Field Survey Company and 8th Field Survey Section to General Thomas Blamey, Commander, Allied Land Forces, South West Pacific Area. At Morotai, the 1st Mobile Lithographic Section was given the privilege of preparing the Instrument of Surrender signed by the Commander, Second Japanese Army and countersigned by General Blamey, Commander-in-Chief, Australian Military Forces. The unit then printed thousands of copies of the surrender document as souvenirs. At the end of the war more than half of the Corps strength of 1,700 were on active service outside Australia. Colonel Fitzgerald noted that 'One of the most satisfying tasks followed immediately after the cessation of hostilities. It was the preparation of maps to assist in the recovery of our prisoners of war in SWPA. It was an urgent commitment readily undertaken.' The Corps' contributions to the nation during the Second World War were greater than at any other time during its existence. This was duly recognised in 1948 when King George VI granted the title 'Royal' to the Australian Survey Corps. Sixteen members of the Australian Survey Corps, or soldiers serving with Corps units, died during the war and are commemorated on the Australian War Memorial Roll of Honour. Five members of the Corps were formally recognised with awards and twenty-one members were mentioned in despatches. Awards - Colonel L Fitzgerald OBE, Captain HAJ Fryer MBE US Legion of Merit, Major HA Johnson MBE, Lieutenant-Colonel AF Kurrle MBE twice Mentioned in Despatches, Lieutenant-Colonel D Macdonald US Medal of Freedom Mentioned in Despatches. Others Mentioned in Despatches - Lieutenant NT Banks, Lieutenant FD Buckland, Captain TM Connolly, Sergeant HJ Curry, Captain FJ Cusack, Lieutenant LN Fletcher, Lance-Corporal BA Hagan, Lieutenant HM Hall, Warrant Officer Class 2 LG Holmwood, Lieutenant JD Lines, Captain LJ Lockwood, Lieutenant NG McNaught, Captain JE Middleton, Captain RE Playford, Lieutenant LB Sprenger, Captain CR Stoddart, Captain AJ Townsend, Warrant Officer Class 2 JE Turnbull, Lieutenant NLG Williams. 
Australian Survey Corps units (from June 1943, earlier being RAE units) during the Second World War were: 2nd/1st Australian Army Topographical Survey Company, formerly 2nd/1st Corps Field Survey Company RAE – Middle East, Papua, New Guinea, Hollandia and Morotai No 6 Aust Army Topographical Survey Company, formerly No 2 Army Topographical Survey Company, absorbed Army/Land Headquarters Survey Company – Victoria, Northern Territory, Western Australia, Queensland, New Guinea, New Britain Land Headquarters Cartographic Company, formerly Army Headquarters Cartographic Company – Melbourne and Bendigo, Victoria No 2 Aust Field Survey Company – absorbed the 2 MD Field Survey Unit RAE(M) and Australian Survey Corps (P) No 3 Survey Sect – New South Wales, Queensland, Dutch New Guinea, New Guinea, New Britain, Bougainville No 3 Aust Field Survey Company – absorbed the 3 MD Field Survey Unit RAE(M) and Australian Survey Corps (P) No 1 Survey Sect – Victoria, Papua, New Guinea, Queensland No 4 Aust Field Survey Company – Western Australia No 5 Aust Field Survey Company, formerly 1 Aust Field Survey Company which absorbed the 1 MD Field Survey Unit RAE(M) – Queensland, Dutch New Guinea, Labuan and Balikpapan in Borneo No 7 Field Survey Section, formerly 7th Military District Survey Section, Northern Territory Force Field Survey Section and 1 Aust Field Survey Section – Northern Territory No 8 Aust Field Survey Section, formerly New Guinea Field Survey Section and No 2 Field Survey Section – Papua, New Guinea No 1 Mobile Lithographic Section, formerly 2 Army Survey Mobile Reproduction Section – Melbourne, Brisbane and Morotai No 11 Aust Field Survey Depot – Bendigo, Victoria No 12 Aust Field Survey Depot – Queensland and Morotai No 13 Aust Field Survey Depot – Sydney, New Guinea Field Survey Training Depot – Bacchus Marsh and Melbourne, Victoria In addition there were Survey Directorates and Survey Staff on formation headquarters: General Headquarters South-West Pacific Area Advanced Land Headquarters – Melbourne, Brisbane, Hollandia and Morotai Headquarters First Australian Army - New Guinea Headquarters I Australian Corps – Middle East, Morotai Headquarters II Australian Corps - New Guinea Headquarters New Guinea Force Headquarters Northern Territory Force Headquarters of Queensland, Victoria, South Australia and Tasmania Lines of Communication areas Headquarters Kanga Force - New Guinea Headquarters Merauke Force - Dutch New Guinea At the end of the Second World War, which by the nature of the total war demanded the best collective capability that the nation could assemble, the Australian Survey Corps was the best organised, best manned and best equipped geodetic and topographic survey and mapping organisation in Australia. The Director of Survey expected that the organisation would play a major role in the future survey and mapping of Australia. Post Second World War Defence survey and mapping programmes (not including Defence international cooperation) After the Second World War the Corps reverted to its peace time role of the military survey of Australia retaining a capability in the Permanent Force Interim Army in 1946, and the Australian Regular Army from 1947 with a desired Corps strength of 460 all ranks, although a strength of 400 was not reached until the early 1960s. By 1950 the all ranks numbers were down to 210 as civilian employment opportunities significantly improved. The Australian Survey Corps structure and size was both the force-in-being and the base for expansion in war. 
In the early post-war years the Corps continued mainly with the 'one-mile' map military survey programme and provided Defence assistance to nation building projects for water conservation and settlement in the Burdekin River basin in Queensland (194x–1949), investigative surveys for the Snowy River Diversion Scheme (1946–1949) in New South Wales and Victoria, surveys for water flows between the Murrumbidgee and Murray Rivers near Urana, New South Wales (1946), production of maps for the 1947 Australian Census and survey and mapping projects for the Woomera Rocket Range (1946–1953) in South Australia and Western Australia and the atomic test range at Maralinga in South Australia. Corps units were established in each of the regional military districts, except Tasmania and Northern Territory, and a survey school was established in Victoria. In 1956 Australia adopted decimal/metric scales for mapping, largely for military interoperability with major allies and the global trend. After the 'one-mile' military maps were discontinued in 1959, and although Defence preferred 1:50,000 maps for tactical operations, it recognised the resources needed for such a programme over large regional areas of Australia and accepted the 1:100,000 map as a practical substitute with 1:50,000 maps over specific areas of interest and military training areas. Then in 1983 Defence endorsed a program for more than 2600 scale 1:50,000 maps in Defence priority areas in north and north-west Australia, its territories and the main land communication routes. By 1996 the Corps had completed more than 1900 of these maps by – field surveys using mainly US TRANSIT and GPS satellite positioning systems and aircraft mounted laser terrain profiling, new coverage aerial photography, aerotriangulation, compilation and cartographic completion on computer assisted mapping systems and produced both digital and printed topographic products. Production of the military specification Joint Operation Graphics scale 1:250,000 is mentioned in the section "National Survey and Mapping Programmes" below. In addition, there was a miscellany of surveys, maps and information produced by the Corps for units of each of the Armed Services. These included: production of air navigation charts for the Royal Australian Air Force covering Australia and a large area of its region totalling about eight percent of the earth's surface; printing hydrographic charts for the Hydrographer Royal Australian Navy; joint and single service training area special surveys, maps and models including live fire requirements; vital asset protection maps; safeguarding maps for ammunition depots; digital terrain data and models for command and control, communications, intelligence, surveillance, reconnaissance, simulation, weapons and geographic information systems; photomaps using air and satellite photography. 
Corps units, officers and soldiers were deployed on operations and conflicts of various types including: 1946 – staff posted to British Commonwealth Occupation Force, Japan 1955–1960 – Malayan Campaign, officers posted to Force Headquarters 1965 – 1st Topographical Survey Troop raised in Sydney NSW based at Randwick 1966–1971 Detachment 1st Topographical Survey Troop deployed with the 1st Australian Task Force in Vietnam, in 1967 re-designated A Section 1st Topographical Survey Troop 1975 – Cyclone Tracy – rapid response production of orthophotomaps and a senior Corps officer seconded as Staff Officer to the Army General commanding the emergency operations 1987 – Operation Morris Dance – rapid response mapping of Fiji 1988 – Operation Sailcloth – rapid response mapping of Vanuatu 1990–1991 – Operations Desert Shield and Desert Storm, Corps officers posted in United States and United Kingdom mapping agencies and units on operations 1992 – support to ADF elements of UN peacekeeping force in Western Sahara 1993 – support to ADF elements of UN peacekeeping force in Somalia 1993 – Corps soldiers posted to UN Transitional Authority in Cambodia 1995 – Corps officers posted to UN Protection Force Headquarters in Bosnia-Herzegovina Defence international cooperation Commencing in 1954, the Corps was again involved in surveys and mapping the New Guinea area, initially in cooperation with the United States Army Map Service for two years, and again as solely an Australian force from the early 1960s. The 1956 survey (Project Cutlass) of ship-to-shore triangulation included a 300 kilometre theodolite and chain traverse on New Ireland. From 1962 the Corps resumed geodetic surveys as part of the National Geodetic Survey, linking to the global US HIRAN survey and the high order Division of National Mapping traverse and triangulation through the PNG highlands. There was a continuous Survey Corps presence in Papua New Guinea (PNG) from 1971 to 1995, with 8th Field Survey Squadron raised in PNG and based at Popondetta, Wewak and Port Moresby for geodetic surveys, topographic surveys, map compilation, field completion of compiled maps, to support the Papua New Guinea Defence Force and to provide advice to PNG National Mapping Bureau. During this period the Corps completed the national/defence mapping programme of scale 1:100,000 topographic maps covering the entire country, the derived 1:250,000 Joint Operations Graphic – Ground and Air charts, large scale military city orthophotomaps and participated with the Royal Australian Navy in beach surveys of most of the coastline. In the 1970s up to 60% of the Corps' capability was engaged on PNG surveys and mapping. This mapping programme was based on high altitude (40,000 feet) air photography, acquired by the Royal Australian Air Force using Canberra bombers fitted with Wild RC10 mapping cameras (Operation Skai Piksa), supported by Survey Corps surveyors and photogrammetrists to plan the photography requirements and for quality control to ensure that the photography met the high technical standards for the subsequent mapping processes. Under the Defence Cooperation Programme, the Corps completed many cooperative and collaborative projects with nations in Australia's area of strategic interest. These projects included ground surveys, definition of geodetic datums, air photography, assistance with definition of Exclusive Economic Zones, mapping, provision of equipment and technology transfer and training of officers and technicians. 
Projects commenced in 1970 in Indonesia and expanded over 25 years to include Solomon Islands, Fiji, Tonga, Kiribati, Nauru, Tuvalu, Vanuatu and Western Samoa. Technical Advisers were posted to national survey and mapping organisations in Fiji, Indonesia, Malaysia, Papua New Guinea, Solomon Islands and Vanuatu. All field survey operations outside of Australia, and indeed in Australia, would not have been possible without essential support of most other Army Corps (Engineers; Signals; Aviation – Cessna, Porter, Nomad, Sioux, Kiowa; Chaplains, Medical, Dental, Transport, Ordnance, Electrical and Mechanical Engineers, Pay, Catering, Service), the Royal Australian Navy (Hydrographic Service, Landing Craft, Patrol Boats) and at times the Royal New Zealand Navy (Hydrographic Service), the Royal Australian Air Force (Canberra, Hercules, Caribou, Iroquois) and civil charter fixed wing and helicopters for aerial survey work and transport. Two members of the Australian Defence Force died on military survey operations in the 1970s in Papua New Guinea and Indonesia and are commemorated on the Australian War Memorial Roll of Honour. Associations with other Armies commenced during the First World War and for more than 50 years after the Second World War the Corps participated in mapping, charting and geodesy projects for standardisation and interoperability with major allies including Canada, New Zealand, United Kingdom and United States. From the 1960s, the Corps participated in cooperative and collaborative geodetic satellite programs with the United States, firstly astro-triangulation of passive satellites Echo and Pageos being observed with Wild BC4 cameras at Thursday Island in Queensland, Narrabri in New South Wales, Perth in Western Australia and Cocos Island, a TRANET fixed station at Smithfield in South Australia in the global network observing US Navy Navigation Satellites (TRANSIT) from 1976 to 1993 and the first observations of US Global Positioning System (GPS) satellites in Australia, at Smithfield, during the development and early operational phases of the system from 1981 to 1994. The Corps managed bi-lateral Defence and Army map exchange arrangements with major allies and regional nations. Personnel exchange programs included Canada, United Kingdom and United States. Participation in national survey and mapping programmes In 1945 the National Mapping Council (NMC) of Australia, comprising Commonwealth and State authorities, was formed to coordinate survey and mapping activities after the Second World War. Despite the huge wartime mapping achievements of producing 224 'four-mile' strategic maps and 397 'one-mile' tactical maps, there was much to be done for a basic coverage of reliable topographic maps for national development and defence. In 1947 a National Mapping Section in the Department of Interior was established and together with the Survey Corps commenced work on the 1954 Cabinet approved general purpose (national development and defence) national topographic map programme, initially the 'four-mile' map then soon after scale 1:250,000 maps (series R502). Army agreed that when not required for solely military purposes, Survey Corps units would be available to work in the Defence priority areas in the Government approved national geodetic survey and topographic mapping programmes. 
This programme involved control surveys by astronomical fixes, theodolite and chain triangulation and traverse by theodolite and electro-magnetic distance measurement and all aspects of map compilation from aerial photography, final cartography and map printing. The Corps' geodetic surveys were integrated with other Commonwealth and State Government surveys to create the NMC sponsored Australian Geodetic Datum 1966 (AGD66) and the associated Australian Map Grid 1966 (AMG66) of Australia and Papua New Guinea and the Australian Height Datum 1971 (AHD71). By 1968 the Corps had completed its commitment of about half of the 540 series R502 maps and it then embarked on the Defence priority part of the 1965 Cabinet endorsed national programme of general purpose scale 1:100,000 topographic maps. This programme required densification of the national geodetic and height survey networks with mapping quality control surveys of Cape York, Gulf of Carpentaria, northern Northern Territory and north-west Western Australia using mainly airborne electromagnetic distance measurement systems (Aerodist). The Corps completed its commitment of 862 of these maps in 1982. In areas of higher defence interest the Survey Corps replaced the series R502 1:250,000 maps with the military specification 1:250,000 Joint Operation Graphic (JOG) Ground and the companion Air version using materials from the 1:100,000 and Defence 1:50,000 mapping programmes and from other suitable sources. Units and command staff post-Second World War Army Headquarters Directorate of Survey – Army Headquarters Field Force Command – Senior Staff Officer and Survey Section Army Survey Regiment based at Bendigo, Victoria, formerly AHQ Survey Regiment and Southern Command Field Survey Section, AHQ Cartographic Unit, LHQ Cartographic Company and AHQ Cartographic Company 1st Field Survey Squadron based at Gaythorne and Enoggera Barracks Brisbane, Qld, formerly Northern Command Field Survey Section and Northern Command Field Survey Unit – Queensland, Territory Papua and New Guinea 1st Topographic Survey Squadron, now part of the Royal Australian Engineers, based at Enoggera Barracks in Brisbane, formed from 1st Field Survey Squadron and 1st Division Survey Section 2nd Field Survey Squadron based at Sydney, NSW, formerly Eastern Command Field Survey Section and Eastern Command Field Survey Unit – New South Wales, Indonesia, Papua New Guinea, nations of South West Pacific 4th Field Survey Squadron including a Reserve component, based at Keswick Barracks, Adelaide, South Australia, formerly Central Command Field Survey Section and Central Command Field Survey Unit – South Australia, Northern Territory, Papua and New Guinea, Solomon Islands and Vanuatu 5th Field Survey Squadron including a Reserve component, based Perth, formerly Western Command Field Survey Section and Western Command Field Survey Unit – Western Australia, Indonesia New Guinea Field Survey Unit 8th Field Survey Squadron raised and disbanded in Papua New Guinea based at Popondetta, Wewak, Port Moresby 1st Topographical Survey Company (CMF) based in Sydney, NSW 2nd Topographical Survey Company (CMF) based in Melbourne, VIC (an element was absorbed by Army Survey Regiment when the company was disbanded) 1st Topographical Survey Troop – raised and based in Sydney NSW 1st Topographic Survey Troop – Detachment thereof later re designated A Sect in Vietnam, B Sect based in Sydney, NSW 9th Topographic Survey Troop (CMF) based in Sydney, NSW 7th Military Geographic Information Section based at 
Darwin, NT ANZUK Survey Map Depot, Singapore – formerly 16 Field Survey Depot formerly AHQ Field Survey Depot Detachment Army Map Depot, formerly AHQ Field Survey Depot and Army Field Survey Depot – Victoria School of Military Survey – initially based at Balcombe, Victoria and subsequently at Bonegilla, Victoria. In 1996 the School was integrated into the School of Military Engineering as the Geomatic Engineering Wing now at Holsworthy Barracks in Sydney, New South Wales Joint Intelligence Organisation Printing Section Re-integration with the Royal Australian Engineers The Survey Corps was subject to many Government and Defence reviews since the 1950s, with seven from the early 1980s. Review outcomes led to many reorganisations. In the late 1980s and early 1990s efficiency reviews led to an Army direction that the non-core strategic mapping functions of the Corps were to be tested as part of the Defence Commercial Support Program. Army decisions as part of that review were that: the combat support survey force (1st Topographical Survey Squadron) would be increased significantly; the non-core work, mainly systematic mapping of Australia would be performed by a new Army agency with civilian personnel; and, that the core strategic mapping would be retained by Army until it could be transferred to Defence Intelligence. This last component was achieved in 2000 when the Defence Imagery and Geospatial Organisation (now Australian Geospatial-Intelligence Organisation) was formed. These changes meant that the majority of Survey Corps staff positions would be removed, and so in September 1995 the Chief of the General Staff (CGS) decided that the remaining combat support force and training force functions of the Corps would be once again integrated with the combat force and training force of the Royal Australian Engineers. At the integration parade of the two Corps on 1 July 1996, 81 years after the formation of the Australian Survey Corps, the CGS said that "Since 1915 the Survey Corps has not just been a major contributor to the tactical success of the Australian Army in two World Wars and other conflicts, it has played an outstanding role in the building of this nation – the Commonwealth of Australia – and the building of other nations such as Papua New Guinea". In 2014, the 1st Topographical Survey Squadron RAE came under the command of the 1st Intelligence Battalion. In 2018, the Army geospatial capability, less the field surveying capability, was transferred from RAE to the Australian Army Intelligence Corps. As a part of this process, the 1st Topographical Survey Squadron RAE was retitled 5 Company, 1st Intelligence Battalion while the Geomatic Engineering Wing was retitled the Geospatial Intelligence Wing and transferred from School of Military Engineering to Defence Force School of Intelligence. Corps Appointments and its people Her Majesty Queen Elizabeth II approved the appointment, on 1 July 1988, of Her Royal Highness the Princess of Wales as Colonel-in-Chief of the Royal Australian Survey Corps. Colonels Commandant (honorary appointment), Royal Australian Survey Corps: Brigadier D. Macdonald (Retd), AM (August 1967 – January 1973) Brigadier F.D. Buckland (Retd), OBE (January 1973 – January 1976) Colonel J.L. Stedman (Retd) (September 1978 – February 1983) Lieutenant-Colonel T.C. Sargent (Retd) (February 1983 – February 1989) Colonel N.R.J. Hillier (Retd) (February 1989 – January 1993) Colonel D.G. 
Swiney (Retd), MBE (January 1993 – January 1996) Officers Commanding, Survey Section RAE: Lieutenant W.L. Whitham (July 1910 – September 1912) Captain C.V. Quinlan (March 1913 – June 1915) Officers Commanding, Australian Survey Corps: Captain C.V. Quinlan (July 1915 – January 1916) Captain J. Lynch (January 1916 – May 1934) Major T.A. Vance (March 1936 – December 1940) Directors of Military Survey (or Survey – Army): Lieutenant-Colonel T.A. Vance (January 1941 – June 1942) Colonel L. Fitzgerald, OBE (June 1942 – January 1960) Colonel D. Macdonald (January 1960 – March 1967) Colonel F.D. Buckland, OBE (March 1967 – August 1972) Colonel J.K. Nolan (August 1972 – June 1975) Colonel J.L. Stedman (July 1975 – February 1978) Colonel N.R.J. Hillier (March 1978 – July 1983) Colonel A.W. Laing (July 1983 – November 1988) Colonel D.G. Swiney, MBE (November 1988 – January 1991) Colonel S.W. Lemon (January 1991 – June 1996) The high reputation and esteem in which the Corps was held within the Australian Defence Force, the surveying and mapping profession and amongst Australia's military allies and friends was based on its achievements, which were possible largely because of the quality of its people. This was greatly enhanced by the camaraderie and esprit de corps of the members of the Corps, who knew the high military value and high quality of the work that they produced. After World War II eight Corps officers were later appointed Surveyors-General or Directors of Survey/Mapping/Lands in the States or Commonwealth organisations. Many personnel went on to leadership positions in professional institutions. The first five members of the Institution of Surveyors, Australia, to be recognised for outstanding service to the profession and awarded a special medal, with a gold medal for exceptional service, had all been Second World War officers of the Survey Corps (Brigadier L Fitzgerald OBE, Lieutenant-Colonel JG Gillespie MBE (gold medal), Lieutenant-Colonel HA Johnson MBE, Captain SE Reilly MBE, Brigadier D Macdonald AM). From the 1960s, most Corps officers were tertiary educated, with many at the post-graduate level in either mapping or computer disciplines, and had military command and staff training. This was the key to understanding the potential, application and implementation of emerging technologies and techniques across all aspects of Corps capability. Corps soldier training was both broad within a trade and across Corps trades, and specific to specialised equipment, with military training for various levels of leadership. Officers and soldiers posted outside Corps positions were highly regarded. Until the 1970s the Corps sponsored and trained soldiers in trades other than the mapping related trades essential to its operations. These included drivers, storemen and clerks. After some rationalisation the Corps retained career and training responsibility for all mapping related trades, and also for photographers (non-public relations), illustrators and projectionists, who were posted mainly to training institutions and headquarters. More than 6,300 people served in the Survey Section (RAE), Australian Survey Corps units and Royal Australian Survey Corps units from 1910 to 1996, including more than 580 women from 1942 to 1996. A Survey Corps Nominal Roll 1915-1996 may be accessed from the front page of the website of the Royal Australian Survey Corps Association linked under External Links below. 
The Corps participated in the national service scheme in the 1950s, training and maintaining two Citizen Military Force topographic survey companies in Sydney and Melbourne from 1951 to 1957, mainly for national servicemen to complete their obligations. National servicemen then served with the Survey Corps in Vietnam from 1966 to 1971. The Army Audio Visual Unit was the only Corps unit not to have a mapping related role. Equipment, Technology and Techniques RA Svy had the enviable military, international and national reputation of leading innovation, development and implementation of many generations of state-of-the-art technology and techniques across all areas of surveying, mapping and printing, striving to improve efficiencies. Significant examples of these include: 1910–1915: established the standard for the Australian Military Map Series, based on United Kingdom Ordnance Survey maps; mainly 'one-mile-to-one-inch' maps (known as the 'one-mile' map) produced from field survey sheets 'one-mile-to-two-inches' using plane-tabling and parish plans for position, scale and orientation, final map compilation by ink fair drawing at 'one-mile-to-two-inches' using a polyconic projection and lithographic draughting for colour separation (seven colours) and photographic reduction to 'one-mile-to-one-inch' for printing by the Victorian Government Printer. The Military Survey sheet numbering system was that of the International Map Congress of 1912. Standard map sheet extent was 30 min longitude x 15 min latitude. Standard height contour interval was 50 feet. 1914: commenced geodetic triangulation (angles by theodolite, azimuth by astronomy and scale by baselines measured with metal tapes) replacing parish plans as the basis for topographic mapping 1923–1927: used No 1 Squadron, Royal Australian Air Force air photography to complement topographic survey by plane-tabling 1930–1933: the first map produced from a significant use of air photography for topographic compilation using graphical methods of perspective rectification – Albury, New South Wales, '1 mile' map. Work on this map highlighted the disparity between the Victoria and New South Wales state survey triangulation networks. The grid on this sheet was most likely the first instance of cartographic scribing in Australia, done by Warrant Officer Harry Raisbeck engraving the emulsion of a glass plate negative using one tip of a broken ruling pen for the thicker 10,000 yard grid lines and a steel needle to scribe the 1,000 yard grid. This was 23 years before the first map was fully scribed. 1933: adopted Sydney Observatory as the geodetic datum for the eastern states, the Clarke 1858 reference ellipsoid and a British modified map grid based on the Transverse Mercator map projection with Australian zones. The first map with the British modified grid, for artillery purposes, was Albury '1 mile' although it was on the polyconic projection. This grid system was used for Australian topographic mapping until 1966. 1934-1939: undertook a 1st Order geodetic survey triangulation program to connect Queensland, New South Wales, Victoria and South Australia into one coherent network. Cooke, Troughton and Sims Tavistock 5 1/2 inch theodolites, reading direct to half-second for 1st Order work, replaced the larger and heavier 8 inch instruments and 3 1/2 inch Tavistocks reading direct to one second were used for second order work. 
Baselines of four to six miles in length were placed every 200 to 250 miles and measured with invar tapes standardised by steel bands. The use of field thermometers to estimate temperature corrections for thermal expansion of the measuring bands was replaced with a system of measuring electrical resistance to estimate the temperature coefficient of expansion, achieving accuracies of about 1 in 1 million (1 mm in 1 km) over the measured baselines. This world-class research and development was undertaken in conjunction with Professor Kerr Grant, Department of Physics, University of Adelaide. Baselines for connecting the eastern Australia trigonometric network were measured at Jondaryn (QLD), Somerton (NSW), Benambra (VIC) and Tarlee (SA).
1936: the first map produced on the Transverse Mercator projection and the British modified grid – Helidon, Queensland 'one-mile'
1936: the first map compiled entirely from overlapping strips of air photography and graphical methods of rectification (Arundel method of radial line plotting) – Sale, Victoria 'one-mile'.
Second World War: many innovative adaptations of equipment and processes for surveying, aerial photography for topographic compilation, cartography, photo-lithography, various printing methods and battlefield terrain modelling, in base and field mobile situations, to provide rapid-response military survey/mapping support under adverse conditions of extreme heat, cold, high humidity, dust and rain in desert and jungle environments, and at times under enemy attack
1952: topographic mapping by multi-projector (Multiplex anaglyph) stereoplotting from overlapping air photography, replacing graphical methods of rectification for map compilation
1953: large format Klimsch Commodore cartographic camera (remained in continuous use until December 1978)
1956: changed to decimal (also known as metric) scale mapping, largely as part of standardisation with allies in the South East Asia Treaty Organisation, and adopted an improved Australian spheroid of reference for mapping; first map Mildura 1:50,000. The 'one-mile' map was discontinued in 1959.
1956: cartographic scribing of map detail replaced fair drawing with ink, first map Mildura 1:50,000
1957: helicopter transport of survey parties revolutionised transport in remote areas
1957: Geodimeter light-based electromagnetic distance measurement (EDM) equipment
1958: Tellurometer MRA1 microwave EDM (and later models) man-portable systems improved geodetic survey efficiency, allowing rapid network extension and densification by EDM and theodolite traverse in place of triangulation, sometimes using Bilby Towers to extend line lengths
1960: adoption of the '165 Spheroid' in Australia (same as the World Geodetic System 1960 spheroid)
1961: manual hill-shading of 1:250,000 maps by photographing carved wax terrain models from the north-west corner on the Klimsch camera
1962: Wild A9, B9 and B8 optical/mechanical photogrammetric plotters for topographic compilation from super-wide-angle (focal length 88.5 mm) Wild RC9 air photography started to replace Multiplex plotters; presensitised lithographic printing plates
1963: Zeiss (Jena) Stecometer analytic stereocomparator for air photography; block aerotriangulation by digital computer; Aristo coordinatograph for grid production; radar airborne profile recorder (Canadian Applied Research Ltd, Mark V, Airborne Profile Recorder)
1964: vehicle-mounted Johnston ground elevation meter; Aerodist MRC2 airborne EDM system for topographic surveys over long distances by trilateration, replacing traverse, which required survey station intervisibility
1966–1971: adopted the Australian Geodetic Datum 1966 (AGD66)/Australian Map Grid 1966 (AMG66) (Transverse Mercator projection – Universal Transverse Mercator Grid) and the Australian Height Datum 1971 (AHD71) for all mapping of Australia and Papua New Guinea. The local datum AGD/AMG66 was used for surveying and mapping until it was replaced by the local datum AGD84 and later the geocentric datum World Geodetic System 1984 (WGS84)/Geodetic Datum Australia 1994 (GDA94).
1970: Calcomp 718 digital coordinatograph flat-bed plotter for grids, graticules and base compilation sheets with aerial-triangulated model control
1971: Wild RC10 super-wide-angle air survey cameras with virtually distortion-free lenses for supplementary, spot and special photography
1972: Aerodist MRB3/201 computer-assisted second generation airborne EDM for topographic surveys
1972–1973: IBM 1130 computer; OMI/Nistri AP/C-3 analytical plotter with coordinatograph and OP/C orthophoto projector, and Zeiss Planimat D2 stereoplotters with SG-1/GZ-1 orthophoto projectors, for orthophoto production from colour and monochrome film air photography
1974–1975: Magnavox AN/PRR-14 portable Doppler satellite (US Navy Navigation Satellite System – TRANSIT) receivers and computing system provided independent three-dimensional point positions anywhere in the world, at any time, in any weather, accurate to about 1.5 metres with precise satellite ephemerides (station coordinates computed using program DOPPLR at Directorate of Survey – Army in Canberra ACT), replacing geodetic astronomy for absolute positioning and Aerodist airborne EDM; the Australian-developed WREMAPS II airborne laser terrain profile recorder to replace terrain heighting by barometry for 1:100,000 mapping; grid and graticule production on Footscray Ammunition Factory's Gerber flatbed plotter
1975: AUTOMAP 1 computer-assisted cartography and mapping system (Input Sub-System of four Wild B8s and three Gradicon digitising tables, Optical Line Following Sub-System – Gerber OLF, Verification Sub-System – Gerber 1442 drum plotter, General Purpose Sub-System – HP21MX computer and Output Sub-System – Gerber 1232 flatbed plotter); the first map was published in 1978 (Strickland 3665-3, 1:50,000)
1977: PDP 11/70 computer and OMI/C-3T and AP/C-4 analytical plotters
1978: new cartographic specifications (SYMBAS – Symbolisation All Scales) for map and air chart production by digital cartographic methods
1982: Schut's Bundle analytic adjustment was set up on the PDP 11/70 to augment the Schut polynomial strip adjustment of block air photography triangulation; Magnavox MX1502 second generation TRANSIT receivers for relative positioning
1983: Kongsberg flatbed plotter for air triangulation output and associated grids
1983–1984: AUTOMAP 2 second generation computer-assisted cartography and mapping system as a precursor to collection of digital geographic information and creation of geographic information systems in support of emerging digital military systems. Supplied by Intergraph Pty Ltd, it comprised superimposition of compiled graphics in the optical train of Wild B8 stereoplotters, dual-screen interactive graphic edit workstations, a raster scanner/plotter and VAX computers (the first map published was De Grey 2757 1:100,000, including screens and stipples)
1986–1988: Texas Instruments TI4100 portable Global Positioning System (GPS) geodetic receivers and Ferranti FILS3 helicopter- and vehicle-mounted Inertial Positioning System to replace TRANSIT satellite receivers
1988–1990: established the baseline for a GPS-controlled air camera and photogrammetric system to significantly reduce the requirement for ground survey to accurately control air photography for topographic mapping
1988–1992: adopted the World Geodetic System 1984 (WGS84 – used by GPS) as the reference framework and spheroid for all military geospatial products of Australia and the rest of the world.
In 1994 the Australian Government adopted the Geodetic Datum Australia 1994, which for practical purposes is coincident with the World Geodetic System 1984 (WGS84). The associated map grids remain based on the Transverse Mercator projection.
1990: Heidelberg Speedmaster 102 five-colour printing press; AUTOMAP 2 upgrade to increase storage capacity and computer memory to speed up data processing, to process all forms of remotely sensed imagery, to install the in-house developed automated masking and stippling system, to enhance production of Digital Elevation Models, and to further develop the aeronautical chart database; large format film colour processor; large format automatic printing plate processor for positive and negative processing; investigated techniques for rapid kinematic GPS surveys
1990–1992: participation with military allies, Canada, United Kingdom and United States, in research and development of digital geospatial product standards to produce the Digital Chart of the World (DCW) and associated standards, which became the baseline for international exchange of digital geospatial information
1991: Wild RC10 air mapping camera, pod mounted in a chartered Air Scan Aust Pty Ltd 35A Lear Jet, operated by the RA Svy Aerial Photography Team; established a digital topographic data Portable Demonstration System; Optical Disk Storage and Retrieval System for mainly AUTOMAP 2 data; desktop and laptop computers with mapping software for field survey, topographic survey and military geographic information sections; printing quality control stations; large format plan printers for topographic survey sections; air photography film processors for field units
1993–1995: high capacity large format process print press for rapid response map printing and print on demand
Acknowledgement of Corps History
On 1 July 2015, on the occasion of the Centenary of the Royal Australian Survey Corps Wreathlaying Ceremony at the Australian War Memorial, His Excellency General the Honourable Sir Peter Cosgrove AK MC (Retd), Governor-General of the Commonwealth of Australia, delivered the commemorative address. He acknowledged the essential nature of mapping for military operations, the work that the Survey Corps did in conflicts around the world and also for the nation building of Australia. He said "But it is the active service, the sacrifices and the contributions made by the men and women of the Royal Australian Survey Corps that we commemorate here today. On this 100th anniversary, we pay tribute to those whose skill and passion for surveying became integral to the work of the Australian military. And of course we offer our deepest respects to the 20 men who have given their lives serving with the Survey Corps or as members of the ADF on military survey operations. It was their duty to serve and it is our duty to remember them—and that is what we do today, and every day."
On 9 July 2007, His Excellency Major General Michael Jeffery AC CVO MC, Governor-General of the Commonwealth of Australia, unveiled a plaque at the Australian War Memorial to commemorate Royal Australian Survey Corps units which served in war. In his address the Governor-General praised the efforts of all personnel of the Corps over its 81 years of service to the nation in both war and peace.
On the occasion of the 75th Anniversary of the Corps in 1990, the Survey Corps' contribution to the effectiveness of the ADF was acknowledged in a Notice of Motion from the Senate of the Australian Parliament.
Moved by Senator MacGibbon on 31 May 1990, the Notice states: "I give notice that, on the next day of sitting, I shall move: That the Senate – (a) notes that 1 July 1990 marks the 75th anniversary of the foundation of the Royal Australian Survey Corps which produces maps and aeronautical charts required by Australia's defence forces; (b) notes that from the time of the explorer Sir Thomas Mitchell, Surveyor General NSW, who as a Lieutenant on Wellington's staff served as a surveyor in the Peninsular War, military surveying has played a vital role in the mapping of and development of Australia; (c) acknowledges the Corps' contribution to the knowledge of Australia's geography, topography and environment; (d) notes that the Royal Australian Survey Corps with its high level of professionalism, has served Australia well in war and in peace; (e) acknowledges the valuable mapping service rendered to New Guinea, Indonesia and the south west Pacific by the Survey Corps as part of Australia's overseas aid program; and (f) congratulates the Royal Australian Survey Corps on its meritorious achievements through the 75 years of existence."
Also on the 75th anniversary, Australia Post issued a commemorative first day issue prestamped envelope of the Royal Australian Survey Corps.
In his official history of the Royal Australian Survey Corps, part of the Australian Army History Series, the much-published author and highly regarded military historian Chris D. Coulthard-Clark concluded that "Australians as a whole might still be blissfully unaware and hence unappreciative of the debt of gratitude owed to the generations of surveyors who have helped make possible the enviable standard of living generally enjoyed today across the country. Should that situation ever change, and the story receive the wider recognition that it deserves, then the part within that tale occupied by military mapmakers is worthy of special acclaim by a grateful nation."
Gallery – Corps badges and unit colour patches
On the left is the coloured badge of the Australian Survey Corps 1915–1948. In the middle is the unit colour patch of Survey Corps units in the 2nd AIF (Second World War) from 1943 – it is based on the colour patch of the First World War 1st ANZAC and Australian Corps Topographic Sections. The triangle shape shows that Survey Corps units were generally assigned at higher formation (Corps) level; the colour purple (Engineers) acknowledges the heritage link to the Royal Australian Engineers and the central vertical white stripe completed the Survey patch; the grey background was that of the 2nd AIF. This patch (minus the AIF grey background) was the basis of Survey Corps unit colour patches when the Army reintroduced unit colour patches in 1987. On the right is the coloured badge of the Royal Australian Survey Corps 1952–1996. The badge of 1948–1952 was similar except for the King's Crown.
See also
Royal Australian Engineers
Australian Army Intelligence Corps
References
Footnotes
Citations
Further reading
External links
Royal Australian Survey Corps Association A link to a Survey Corps Nominal Roll 1915-1996 is on the front page of this website.
Australian Army Museum of Military Engineering
Military units and formations established in 1915
Military units and formations disestablished in 1996
Survey
32649440
https://en.wikipedia.org/wiki/DeVeDe
DeVeDe
DeVeDe is a free and open-source DVD and CD authoring utility. DeVeDe produces disc images ready to be written to CD or DVD, and allows the user to burn them to CD/DVD discs. The source material may be in any of a number of audio and video formats, and DeVeDe automatically converts the material to formats compatible with the audio CD and video DVD standards, as used by CD and DVD player devices. DeVeDe uses other software packages, including MPlayer, MEncoder/FFmpeg, DVDAuthor, VCDImager and mkisofs, to perform the format conversions, and can use K3b or Brasero to burn an ISO image on Ubuntu, or a variety of other software on Windows. DeVeDe can handle source material in many popular video file formats, including .avi, .mp4, .mpg, and .mkv.
Features
Creates video DVD, VCD, SVCD and China Video Disc images, as well as MPEG-4 ASP (playable on "DivX players") and H.264 video files.
Option to create only the converted video files rather than disc images.
PAL and NTSC support.
Optimization for dual-core CPUs.
Menu creation for video DVDs.
Subtitle support.
Audio shifting (delay adjustment).
Basic video editing operations (rotating, deinterlacing, etc.).
Support for a large variety of input formats.
See also
DVD-Video
DVD authoring
List of DVD authoring applications
References
External links
Development Website
Ubuntu Development Website
Windows Development Website
Free DVD burning software
Free optical disc authoring software
Linux CD/DVD writing software
Optical disc authoring software
Windows CD/DVD writing software
31034526
https://en.wikipedia.org/wiki/AirDrop
AirDrop
AirDrop is a proprietary ad hoc service in Apple Inc.'s iOS and macOS operating systems, introduced in Mac OS X Lion (Mac OS X 10.7) and iOS 7, which can transfer files among supported Macintosh computers and iOS devices by means of close-range wireless communication. This communication takes place over Apple Wireless Direct Link 'Action Frames' and 'Data Frames' using generated link-local IPv6 addresses instead of the Wi-Fi chip's fixed MAC address. Prior to OS X Yosemite (OS X 10.10), and under OS X Lion, Mountain Lion, and Mavericks (OS X 10.7–10.9, respectively), the AirDrop protocol in macOS was different from the AirDrop protocol of iOS, and the two were therefore not interoperable. OS X Yosemite and later support the iOS AirDrop protocol, which is used for transfers between a Mac and an iOS device as well as between two 2012 or newer Mac computers, and which uses both Wi-Fi and Bluetooth. Legacy mode for the old AirDrop protocol (which only uses Wi-Fi) between a 2012 or older Mac computer (or a computer running OS X Lion through OS X Mavericks) and another Mac computer was also available until macOS Mojave. Apple has not stated a limit on the size of file that AirDrop can transfer.
Routine
iOS
On iOS 7 and later, AirDrop can be accessed by either tapping on Settings > General > AirDrop, or via the Control Center. Both Wi-Fi and Bluetooth are automatically switched on when AirDrop is enabled, as both are used. Options for controlling AirDrop discovery by other devices include:
No one can see the device (AirDrop disabled)
Only contacts can see the device
Everyone can see the device
In iOS 7 or later, if an application implements AirDrop support, it is available through the share button. AirDrop is subject to a number of restrictions on iOS, such as the inability to share music or videos from the native apps.
macOS
On Macs running OS X 10.7 or later, AirDrop is available in the Finder window sidebar. On Macs running OS X 10.8.1 or later, it can also be accessed through the menu option Go → AirDrop or by pressing ++. Wi-Fi must be turned on in order for AirDrop to recognize the other device. The other device must also have AirDrop selected in a Finder window sidebar to be able to transfer files. Furthermore, files are not automatically accepted; the receiving user must accept the transfer. This is done to improve security and privacy.
System limitations
Transfer between two iOS devices
Running iOS 7 or later:
iPhone 5 or newer
iPad (4th generation) or newer
iPad Air: all models
iPad Pro: all models
iPad Mini: all models
iPod Touch (5th generation) or newer
AirDrop can be enabled unofficially on the iPad (3rd generation). Although not supported by default, AirDrop can be enabled by jailbreaking the device and installing "AirDrop Enabler 7.0+" from Cydia. This procedure is not supported or recommended by Apple, as jailbreaking can cause software instability and can introduce viruses.
Transfer between two Mac computers
Running Mac OS X Lion (10.7) or later:
MacBook Pro: Late 2008 or newer, excluding the late 2008 17-inch
MacBook Air: Late 2010 or newer
Aluminum MacBook: Late 2008
MacBook and iMac: Early 2009 or newer
Mac Mini: Mid 2010 or newer
Mac Pro: Early 2009 with AirPort Extreme card, or mid 2010 or newer
Transfer between a Mac and an iOS device
To transfer files between a Mac and an iPhone, iPad or iPod touch, the following minimum requirements have to be met:
Any iOS device that supports AirDrop, running iOS 8 or later
A Mac running OS X Yosemite (10.10) or later:
MacBook Air: Mid 2012 or newer
MacBook (Retina): all models
MacBook Pro: Mid 2012 or newer
iMac: Late 2012 or newer
iMac Pro: all models
Mac Mini: Late 2012 or newer
Mac Pro: Late 2013 or newer
Bluetooth and Wi-Fi have to be turned on for both the Mac and the iOS device. (The two devices are not required to be connected to the same Wi-Fi network.)
Security and privacy
AirDrop uses TLS encryption over a direct Apple-created peer-to-peer Wi-Fi connection for transferring files. The Wi-Fi radios of the source and target devices communicate directly without using an Internet connection or Wi-Fi access point. The technical details of AirDrop and the proprietary peer-to-peer Wi-Fi protocol called Apple Wireless Direct Link (AWDL) have been reverse engineered, and the resulting open-source implementations have been published as OWL and OpenDrop. There have been numerous reported cases where iOS device users with AirDrop privacy set to "Everyone" have received unwanted files from nearby strangers; the phenomenon has been termed "cyber-flashing". Users can control their AirDrop settings and limit who can send them files, with options for "Everyone", "Contacts Only", or "Off". During the initial handshake, devices exchange full SHA-256 hashes of users' phone numbers and email addresses, which might be used by attackers to infer the phone numbers and, in some cases, the email addresses themselves.
See also
Nearby Share, a similar technology for Android smartphones
Bonjour, the service discovery protocol employed
Shoutr, a free P2P multi-user solution for sharing files among multiple people (Wi-Fi)
Wi-Fi Direct, a similar technology
Zapya, a free file transfer solution over Wi-Fi
References
External links
How-to: Use AirDrop to send content from your Mac
How-to: How to use AirDrop with your iPhone, iPad, or iPod touch
MacOS
IOS
MacOS file sharing software
23762317
https://en.wikipedia.org/wiki/Unisys%202200%20Series%20system%20architecture
Unisys 2200 Series system architecture
The figure shows a high-level architecture of the OS 2200 system identifying major hardware and software components. The majority of the Unisys software is included in the subsystems and applications area of the model. For example, the database managers are subsystems and the compilers are applications.
System Basics
The details of the system architecture are covered in Unisys publication 3850 7802, Instruction Processor Programming Reference Manual. Also see UNIVAC 1100/2200 series.
The 1100 Series has used a 36-bit word with 6-bit characters since 1962. 36-bit computing was driven by a desire to process 10-digit positive and negative numbers. The military also needed to calculate accurate trajectories, design bridges, and perform other engineering and scientific calculations, and for that they needed more than 32 bits of precision. A 32-bit floating point number only provided about 6 digits of accuracy, while a 36-bit number provided the 8 digits of accuracy that were accepted as the minimum requirement. Since memory and storage space and costs drove the system, going to 64 bits was simply not acceptable in general. These systems use ones' complement arithmetic, which was not unusual at the time. Almost all computer manufacturers of the time delivered 36-bit systems with 6-bit characters, including IBM, DEC, General Electric, and Sylvania.
The 6-bit character set used by the 1100 Series is also a DoD-mandated set. It was defined by the Army Signal Corps and called Fieldata (data returned from the field). The 1108 provided a 9-bit character format in order to support ASCII and later the ISO 8-bit sets, but they were not extensively used until the 1980s, again because of space constraints.
The 2200 Series architecture provides many registers. Base registers logically contain a virtual address that points to a word in a code or data bank (segment). They may point to the beginning of the bank or to any word within the bank. Index registers are used by instructions to modify the offset of the specified or assumed base register. Simple arithmetic (add, subtract) may be performed on all index registers. In addition, index registers consist of a lower offset portion and an upper increment portion. An instruction may both use the offset value in an index register as part of an address and specify that the increment is to be added to the offset. This allows loops to be accomplished with fewer instructions, as incrementing the index by the step size can be done without a separate instruction.
Arithmetic registers allow the full set of computational instructions including all floating point operations. Some of those instructions work on adjacent pairs of registers to perform double-precision operations. There are no even-odd constraints; any two adjacent registers may be used as a double-precision value. Four of the arithmetic registers are also index registers (the sets overlap – index register X12 is arithmetic register A0). This allows the full range of calculations to be performed on indexes without having to move the results. The rest of the registers, known as R registers, are used as fast temporary storage and for certain special functions. R1 holds the repeat count for those instructions that may be repeated (block transfer, execute repeated, etc.). R2 holds a bit mask for a few instructions that perform a bitwise logical operation in addition to some other functions (e.g., masked load upper). There are two full sets of registers (A, X, R, and B).
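Before turning to how the two register sets are divided between user and Exec code, the split index-register behaviour described above can be made concrete with a small sketch. The following Python fragment is purely illustrative: the 18-bit offset and 18-bit increment halves are an assumed field layout (the text above only says "lower offset portion and upper increment portion"), and the class is invented for the example.

MASK18 = (1 << 18) - 1  # assumed width of each half of a 36-bit index register

class IndexRegister:
    # Toy model of a 2200-style index register:
    # the upper half is an increment, the lower half is the offset used in addressing.
    def __init__(self, offset=0, increment=0):
        self.offset = offset & MASK18
        self.increment = increment & MASK18

    def use_and_step(self):
        # Return the current offset and, as part of the same "instruction",
        # add the increment to it - the way a loop can index and advance in one step.
        current = self.offset
        self.offset = (self.offset + self.increment) & MASK18
        return current

# Summing a five-element array with a step of one, without a separate index-increment instruction:
data = [10, 20, 30, 40, 50]
x = IndexRegister(offset=0, increment=1)
total = sum(data[x.use_and_step()] for _ in range(len(data)))
print(total)  # 150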
One set, the user registers, is used by all applications and most portions of the operating system. It is saved and restored as part of activity (thread) state. The other set, the Exec registers, is used by interrupt processing routines and some other portions of the operating system that want to avoid having to save and restore user registers. The Exec registers are not writable by user applications, although some user code can read them. As a result, the Exec is carefully designed never to leave private, secure, or confidential information in registers. Instruction interpretation chooses the appropriate register set to use based on a bit in the Processor State Register. This bit is always set (changed to privileged) on an interrupt. All registers are also visible in the address space, but the Exec portion is protected and a reference by non-privileged code will result in a fault interrupt.
The 2200 Series uses a 36-bit segmented virtual address space; the addressing architecture is described below. The 2200 Series is a CISC architecture system. Not only are there a large number of instructions (the current count is about 245), but many of them have addressing variants. Some of the variants are encoded directly in the instruction format (partial word references) and some are dependent on Processor State Register settings. Many instructions also perform very complex functions, such as one that implements a large part of the COBOL EDIT verb.
The above figure shows some of the building blocks of the architecture. "Data" and "COMM" are two of the primary examples of software subsystems that live in a protection ring between that of a user application and the Exec. There are many other such subsystems, and users write their own.
Memory and Addressing
Level
As was mentioned earlier, the 2200 Series uses a 36-bit segmented virtual address. The original notion of a segmented space came from the earliest implementation, which emphasized code and data separation for performance and the use of shared code banks. Over the years this expanded to provide greater flexibility in levels of sharing and far greater protection for security and reliability. Controlled access to shared data was also introduced.
A virtual address consists of three parts. The high-order 3 bits define the sharing level. This is the heart of the entire addressing and protection scheme. Every thread has eight Bank Descriptor Tables (Segment Descriptor Tables in industry terms) based on B16-B23. The tables are indexed by level – level 0 refers to the Bank Descriptor Table (BDT) based on B16, level 2 the BDT based on B18, etc. The level 0 and level 2 BDTs are common to all threads in the system. Every run (process) has its own level 4 BDT, and that BDT is common to all threads in the run. Every user thread has its own unshared level 6 BDT.
Activity
Each extended-mode activity (thread) always has six banks (segments) which are unique to it. One is the Return Control Stack, which holds information about the calling structure including any security-relevant privilege and state changes. It is not accessible by the thread except through the use of the CALL, RETURN, and similar instructions. This is a major part of the protection and reliability mechanism. Applications cannot cause bad effects by changing the return addresses or overwriting the return control stack. Another unique bank is the automatic storage bank (Activity Local Store stack). This is used by the compilers to hold local variables created within a block.
It is also used to hold all parameter lists passed on a call. One of the checks made by the operating system, both on its own behalf and when a call is made to a protected subsystem, is to ensure that the operands are on the thread-local stack and that the thread has the right to access the memory region referenced by any parameters. Because the parameters are kept in thread-local space, there is no chance that some other thread may change them during or after validation. It is the responsibility of the called procedure to perform similar checks on any secondary parameters that may exist in shared space (i.e., where the primary parameter points to a structure that contains pointers). The procedure is expected to copy any such pointers to its own local space before validating them and then to use only that internally held, validated pointer.
Activities may create additional segments up to the limit of the available address space (2^33 words = 8 GW, or about 36 GB). This is a convenient way for multi-threaded applications to get large amounts of memory space, knowing that it is totally thread-safe and that they are not taking any space away from the rest of what is available to the program. Each activity in a program has its own independent space, meaning that an application with, say, 100 activities is able to use over 800 GW (more than 3 TB) of virtual space. Basic-mode activities do not start out with any such banks, as basic-mode programs are not aware of the virtual address space, but any calls to extended-mode subsystems will cause those banks to be created.
Programs
OS 2200 does not implement programs in exactly the same way that UNIX, Linux, and Windows implement processes, but that is the closest analogy. The most obvious difference is that OS 2200 only permits a single program per Run (Job, Session) to be executing at a time. A program may have hundreds of threads, but cannot spawn other programs to run concurrently.
There are several banks at the Program level that contain a mixture of Run (job, session) information and program information. These are control structures for the operating system, to which the program has either no access or read-only access. Programs may retrieve information from some of these structures for debugging purposes or to retrieve things like the user-id and terminal-id without the overhead of a system call, but they cannot write to them. They contain things like the thread state save areas, file control blocks, and accounting information.
The rest of the banks are used by the program. When a program object file is executed, the operating system obtains the bank information from the file, creates banks as needed, and loads the initial bank state from the file. The simplest program has a single bank containing code and data. This is considered very bad form, but is permitted for compatibility with old applications; such an application can only be created with assembly language. The standard compilers create one or more code banks and one or more data banks. Normally the code banks are marked as read-only as a debugging and reliability aid. There are no security concerns either way, as the program can only affect itself. Each program thus has its own address space, distinct from all other programs in the system. Nothing a program can do can change the contents of any other program's memory. The OS and shared subsystems are protected by other mechanisms which will be discussed later. In almost all cases, even read access to OS and subsystem memory is prohibited for code in a program.
It is possible to create a shared subsystem which is generally readable, or even writable, by multiple programs, but it must be explicitly installed that way by a privileged system administrator. Programs are initially created with just the banks specified in the object file and with a single activity. They may use system calls to create additional banks within their own program level and additional activities.
Subsystems
The closest analogy to a shared subsystem is a .dll. A subsystem is much like a program in many respects except that it does not have any activities associated with it. Instead it is accessed by other programs and subsystems, typically via a CALL instruction. In fact, a program is a subsystem plus one or more activities. Every activity belongs to a "home" subsystem, which is the program that created it. This subsystem concept is important as an encapsulation of access rights and privilege. Within their home subsystem, activities typically share common access rights to code and data banks. Code banks in the home subsystem are usually read-only, or even execute-only if they contain no constant data, but all activities will have the right to execute them.
Subsystems are also combinations of banks and may contain data banks as well as code banks. All globally shared subsystems must be installed in the system by someone with appropriate administrator privileges. Subsystems may also open files. The database manager is a subsystem which opens all the database files for its use, typically with exclusive access rights. The operating system will attach its own banks to a subsystem to hold the file control tables.
OS
The OS level contains the banks of the Exec. These banks are never directly accessible by either programs or global subsystems. Entry points to the OS are all handled in the same way as those of a protected subsystem. Calls made to the OS are always via "gates," via instructions that exist for that purpose (ER = Executive Request), or via interrupts.
The Bank Descriptor Index (BDI)
The next part of the virtual address is the BDI, or Bank Descriptor Index. The Level field selects a particular bank descriptor table base register (B16-B23). Base registers B16-B23 are part of the activity state and are maintained by the Exec with no direct access by the activity. The Bank Descriptor Tables for the program and activity levels exist within the program-level banks that belong to the operating system.
The BDI is simply an index into a Bank Descriptor Table. Each entry in the table contains information about a bank. Each such entry describes up to 1 MB (256 KW) of virtual address space. When a larger contiguous space is needed, consecutive entries are logically combined to create a larger bank, up to the maximum of 2^30 words. The Bank Descriptor Table entry (Bank Descriptor – BD) gives the size of the bank (small = up to 256 KW, large = up to 16 MW, very large = up to 1 GW). A small bank is always represented by a single BD. Large banks are represented by up to 64 consecutive BDs and a very large bank by up to 4096 BDs. Large and very large banks need not use all 64 or 4096 consecutive BDs; they only use as many as needed to provide the virtual address space required. The entry also contains upper and lower limits of allowable offsets within the bank. Virtual addresses that are outside the limits generate a fault interrupt.
This allows small banks, for example one containing a message, to have only the virtual space reserved that they actually need, and it provides a debugging check against bad pointers and indices.
The BD also contains a key value and access control fields. The fields indicate whether read, write, or execute permission is granted to the instruction processor (3 bits). The Special Access Permissions (SAP) apply only to activities executing within the owning subsystem (really only those with a matching key value). The General Access Permissions (GAP) apply to everyone else and are usually zero (no access). The Exec sets a key value in the state of each activity, which may be changed by gate and interrupt transitions.
Protection Mechanisms
The 2200 Series protection architecture uses three pieces of activity state that are reflected in the hardware state: Processor Privilege (PP), Ring, and Domain.
Processor Privilege controls the ability to execute privileged instructions and access protected registers and other state. PP=0 is used by the Exec and gives full access to all instructions and privileged state. Exec activities and user activities that have used a gate to access an Exec API run at PP=0. PP=1 restricts most privileged instructions but does allow reading of the day clocks and reading the contents of some of the privileged registers. None of the privileged registers contain any truly sensitive information, but allowing general read access could easily lead to undetected errors in user programs. Basically, at PP=1 instructions that can change the addressing environment, change the clocks, change instrumentation state, or perform I/O are all restricted. PP=1 is rarely used. PP=2 is the normal user mode and is the state in which all other code executes; it is a further restriction of PP=1. There is also a PP=3, which further restricts the instructions a user program can execute, but it is not currently in use as too many existing programs were using some of those instructions. The intent was to restrict access to instructions that may be system-model dependent.
The Domain mechanism is the heart of the protection mechanism. Each BD (bank descriptor) has a lock field consisting of a ring number and a domain number. There is also a key field in the state of each activity. If the key matches the lock, or the ring in the key is less than the ring in the lock, the activity has Special Access Permission. Otherwise, the activity has General Access Permission.
Ring allows overriding the Domain protection mechanism. User applications run at Ring=3. Protected subsystems run at Ring=2. This gives them access to their own data while still allowing them to access parameters and data in the calling user's space. Note that it is still not possible for a thread to cause the protected subsystem to access some other user's space, as only this thread's Bank Descriptor Tables are in use. Ring=0 is used by the OS and allows it to access its own data while still being able to access parameters passed from either user programs or protected subsystems.
Gates are another part of the protection mechanism. A gate is a data structure that controls transitions between domains. A gate lives in a gate bank, and the hardware enforces that all references to gates must be to addresses at a proper offset (a multiple of the gate size) within a gate bank. A gate contains the target address, new values for PP, Ring, and Domain, and may contain a hidden parameter to be passed to the target.
Protected subsystems are not directly accessible to other subsystems. Instead, a subsystem must request that a gate be built in its gate bank for access to that subsystem. This permits the operating system to perform any access control checks. The linking system will then find the gate address associated with an entry point. In fact, the whole mechanism is usually handled transparently within the linking system. The hidden parameter permits, for example, a file I/O gate to contain the address or handle of the file control block. Since this is guaranteed to be correct, as it was created by the OS when the user opened the file, many error checks can be eliminated from the path length to do I/O.
Instruction Processors
OS 2200 is designed to handle up to 32 instruction processors (or CPUs). A great deal of design has been done over the years to optimize for this environment. For example, OS 2200 makes almost no use of critical sections in its design; there is too high a probability of multiple processors executing the same code. Instead it uses data locking at the finest granularity possible. Generally locks deal with a single instance of a data object (e.g., an activity control structure or file control block) and are contained within the data structure of the object. This minimizes the likelihood of conflicts. When more global locks have to be set, as when updating a list of objects, the lock is set only as long as it takes to update the links in the list. Even dispatching is done with separate locks for different priority levels. A check can be made for an empty priority level without setting a lock; the lock need only be set when adding or removing an item from the queue.
The register set is in the visible address space. Registers appear to exist in the first 128 words (200 octal) of the current instruction bank (B0) when referenced as a data item. This does impose a restriction on compilers not to place any data constants in the first 128 words of a code bank. The result of this is an expansion of the instruction set without requiring additional operation codes: register-to-register operations are accomplished with the register-storage operation codes.
Typical instructions contain a function code, the target (or source) register, an index register, a base register and a displacement field. When the function code with its qualifier indicates immediate data, the displacement, base, i, and h fields combine to form a single 18-bit immediate value. This allows loading, adding, multiplying, etc. by small constants, eliminating a memory reference and the associated storage.
Processor state, as captured on a stack at an interrupt, contains the information needed both to return control to the interrupted activity and to determine the type of the interrupt. Interrupts may occur in the middle of long instructions, and the state deals with that possibility.
Basic mode is another whole form of instruction formats and addressing. Basic mode provides compatibility with previous systems back to the 1108. For all practical purposes, the hardware architecture defines the rules by which addresses and instructions are converted to the above forms. The most obvious difference in basic mode is the lack of explicit B registers in instructions. Instead there are four implicitly used B registers (B12-B15). A complex algorithm selects which of these applies, using the limits of the banks represented by those B registers, the operand address, and the B register within which the current instruction is found.
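As a recap of the addressing and protection model described in the sections above, the following Python sketch walks through what a memory reference conceptually involves: splitting a 36-bit virtual address into level, Bank Descriptor Index and offset, indexing the level's Bank Descriptor Table, checking the bank's offset limits, and selecting Special or General Access Permission from the key/lock comparison. It is purely illustrative; the exact field widths (a 15-bit BDI and an 18-bit offset alongside the 3-bit level) and the Python data structures are assumptions made for the sketch, not the actual hardware layout.

from dataclasses import dataclass, field

@dataclass
class BankDescriptor:
    lower_limit: int            # lowest valid offset within the bank
    upper_limit: int            # highest valid offset within the bank
    ring: int                   # lock: ring number
    domain: int                 # lock: domain number
    sap: set = field(default_factory=set)   # Special Access Permissions, e.g. {"read", "write"}
    gap: set = field(default_factory=set)   # General Access Permissions (often empty)

@dataclass
class ActivityState:
    key_ring: int               # key set by the Exec in the activity state
    key_domain: int
    bdts: dict                  # level -> list of BankDescriptors (stand-in for the B16-B23 tables)

def split_address(va):
    level = (va >> 33) & 0o7    # high-order 3 bits: sharing level
    bdi = (va >> 18) & 0x7FFF   # assumed 15-bit Bank Descriptor Index
    offset = va & 0x3FFFF       # assumed 18-bit offset within the bank
    return level, bdi, offset

def check_access(activity, va, wanted):
    level, bdi, offset = split_address(va)
    bd = activity.bdts[level][bdi]
    # Out-of-range offsets would raise a fault interrupt on the real hardware.
    if not bd.lower_limit <= offset <= bd.upper_limit:
        raise MemoryError("offset outside bank limits (fault interrupt)")
    # Key/lock comparison: a matching key, or a stronger (numerically lower) ring,
    # grants Special Access Permission; everyone else gets General Access Permission.
    key_matches = activity.key_ring == bd.ring and activity.key_domain == bd.domain
    if key_matches or activity.key_ring < bd.ring:
        return wanted in bd.sap
    return wanted in bd.gap

# A user activity (Ring=3 key) touching a level-4 bank locked to ring 2, domain 7:
bank = BankDescriptor(0, 0o777, ring=2, domain=7, sap={"read", "write"})
user = ActivityState(key_ring=3, key_domain=1, bdts={4: [bank]})
print(check_access(user, (4 << 33) | (0 << 18) | 0o100, "read"))   # False: GAP is empty

A user activity whose key does not match a bank's lock therefore falls back to the bank's GAP field, which for most banks is zero, matching the default "no access" behaviour described above.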
The most interesting instructions in the 2200 repertoire are the locking and synchronization instructions. Conditional Replace is familiar and quite similar to Compare and Swap in the Intel architecture. These instructions always gain exclusive use of the memory/cache line holding the referenced word. TS and TSS check a bit in the referenced word. If the bit is clear, they set it and continue (TS) or skip (TSS). If the bit is set, they either interrupt (TS) or fall through to the next instruction (TSS). On a TS interrupt the OS takes one of several actions depending on the instruction sequence and activity priority. Real-time and Exec activities simply get control back to allow a retry, unless there is an even higher-priority activity waiting; the presumption is that the lock is set on another processor and will soon be cleared. If it is a user activity not running at real-time priority, it may have its priority temporarily reduced and be placed back in the dispatching queues. Alternatively, the code sequence may indicate that Test & Set Queuing is being used. In this case, the OS places the activity in a wait state and chains it to the end of the list of activities waiting for that particular lock. Activities clearing such a lock check to see if any are waiting and, if so, notify the OS to allow one or more to try again. Test & Set Queuing is typically used for synchronization within subsystems, such as the database manager, where activities from many programs may be executing. The result of these mechanisms is very efficient, low-overhead synchronization among activities (a simplified sketch of this queued test-and-set behaviour is given below, after the I/O processor overview).
The queuing architecture is another interesting special case. It was specifically designed to allow very efficient handling of messaging, where the number of messages waiting for processing could be almost unlimited. It is also aimed at reducing one of the primary costs of messaging, namely having to constantly move messages around in memory. Even moving them from the communication manager to the message queue subsystem to the processing program is eliminated. Instead, each message is placed in a small bank of its own. Instructions allow placing the bank descriptors of these banks in a queue and removing them from a queue. When a message is placed in a queue, the sending program or subsystem no longer has any access to it; that bank is removed from its address space. When a message is retrieved from a queue, the bank becomes part of the receiver's address space. The queuing instructions also provide activity synchronization functions (e.g., wait for a message). Only "pointers" are moved, and they are moved in a way that ensures security and integrity. Once moved, the data in the message is only visible to the recipient.
I/O Processors
All I/O on 2200 Series systems is handled by I/O processors. These processors offload large portions of the I/O path length and recovery and, by fully isolating the main system from I/O faults, interrupts, bus errors, etc., greatly improve reliability and availability. The I/O processors come in three different types (Storage, Communications, Clustering), but the only real difference is the firmware load. All I/O processors are controlled by the operating system. OS 2200 does provide a raw mode for I/O called "arbitrary device I/O," but even there the OS validates that the program is accessing an allowed device and handles all interrupts and faults before passing appropriate status on to the program.
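Returning to the locking instructions described above, the following Python sketch shows the general shape of Test & Set Queuing. It is purely illustrative: the class and method names are invented, and Python threading primitives stand in for the hardware TS instruction and the Exec's wait-list handling. A failed test-and-set parks the activity on a per-lock wait list instead of spinning, and the activity that clears the lock asks the "OS" to let one waiter try again.

import threading
from collections import deque

class TestAndSetQueue:
    # Toy model of Test & Set Queuing: one lock bit plus an Exec-managed wait list.
    def __init__(self):
        self._bit = False                 # the bit a TS/TSS instruction would test and set
        self._guard = threading.Lock()    # models exclusive use of the memory/cache line
        self._waiters = deque()           # activities parked by the "Exec"

    def acquire(self):
        while True:
            with self._guard:
                if not self._bit:         # TS finds the bit clear: set it and continue
                    self._bit = True
                    return
                parked = threading.Event()
                self._waiters.append(parked)   # TS finds the bit set: park this activity
            parked.wait()                 # sleep until an unlocking activity notifies the "OS"

    def release(self):
        with self._guard:
            self._bit = False             # clear the lock bit
            if self._waiters:             # any activities waiting? allow one to try again
                self._waiters.popleft().set()

# Several "activities" incrementing a shared counter under the queued lock:
counter = 0
lock = TestAndSetQueue()

def activity():
    global counter
    for _ in range(1000):
        lock.acquire()
        counter += 1
        lock.release()

threads = [threading.Thread(target=activity) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 4000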
Programs must be granted privileges by the security officer to access devices in arbitrary mode, and that access may be limited by both the security officer and the system operator to specific devices. Arbitrary I/O is not allowed to a device that is also in use by any other program or by the system; the device must be exclusively allocated to the program. The OS takes very general calls from programs and generates command packets with real memory and device addresses, which are then passed to the I/O processor. Firmware in the I/O processor actually creates the device-specific (e.g., SCSI) packets, sets up the DMA, issues the I/O, and services the interrupts.
References
Instruction set architectures