https://en.wikipedia.org/wiki/Parrot%20OS
Parrot OS
Parrot OS is a Linux distribution based on Debian with a focus on security, privacy, and development. Core Parrot is based on Debian's "testing" branch with a Linux 5.10 kernel, and it follows a rolling release development model. The desktop environments are MATE and KDE, and the default display manager is LightDM. The system is certified to run on devices with a minimum of 256 MB of RAM, and it is suitable for both 32-bit (i386) and 64-bit (amd64) processor architectures; the project is also available for the ARMv7 (armhf) architecture. In June 2017, the Parrot Team announced that they were considering changing from Debian to Devuan, mainly because of problems with systemd. On January 21, 2019, the Parrot team began phasing out development of their 32-bit (i386) ISO. In August 2020, Parrot OS added official support for the lightweight Xfce desktop. Editions Parrot has multiple editions that are based upon Debian, with various desktop environments available. Parrot Security Parrot Security is intended to provide a suite of penetration testing tools for attack mitigation, security research, forensics, and vulnerability assessment, as well as anonymous web browsing. Parrot Home Parrot Home is the base edition of Parrot designed for daily use, targeting regular users who need a "lightweight" system on their laptops or workstations. It also includes programs to chat privately, encrypt documents, and browse the internet anonymously, and it can serve as a starting point for building a system with a custom set of security tools. Parrot ARM Parrot ARM is a lightweight Parrot release for embedded systems, currently available for Raspberry Pi devices.
Parrot OS Tools There are multiple tools in Parrot OS that are specially designed for security researchers and penetration testing. A few of them are listed below; more can be found on the official website. Tor Tor, also known as The Onion Router, is a distributed network that anonymizes Internet browsing. It is designed so that the IP address of a client using Tor is hidden from the server the client is visiting, and the data and other details are hidden from the client's Internet Service Provider (ISP). The Tor network uses a chain of encrypted hops between the client and the server. The Tor network and Tor Browser are pre-installed and configured in Parrot OS. OnionShare OnionShare is an open-source utility that can be used to share files of any size over the Tor network securely and anonymously. OnionShare generates a long random URL that the recipient can use to download the file over the Tor network using Tor Browser. AnonSurf AnonSurf is a utility that routes the operating system's communication over Tor or other anonymizing networks. According to Parrot, AnonSurf secures your web browser and anonymizes your IP address. Release frequency The development team has not specified an official release timeline, but based on release changelogs and the notes included in official reviews of the distribution, the project is released roughly monthly. See also BackBox BlackArch Devuan Kali Linux List of digital forensics tools Security-focused operating system External links Official Website Blog & Release Notes DistroWatch Debian Derivatives Census
https://en.wikipedia.org/wiki/Khimera
Khimera
Khimera is a software product from Kintech Lab intended for calculating the kinetic parameters of microscopic processes, the thermodynamic and transport properties of substances and their mixtures in gases and plasmas, and heterogeneous processes. The development of a kinetic mechanism is a key stage of present-day technologies for the creation of hi-tech devices and processes in a wide range of fields, such as microelectronics, the chemical industry, and the design and optimization of combustion engines and power stations. Khimera, together with Chemical WorkBench, another software product from Kintech Lab, allows both the development of complex physical and chemical mechanisms and their validation. An essential feature of Khimera is its user-friendly interface for importing and utilizing the results of quantum-chemical calculations to estimate the rate constants of elementary processes and thermodynamic and transport properties. Fields of application Khimera incorporates up-to-date models of a wide range of elementary physicochemical processes; these models are of particular importance for hi-tech applications in: microelectronics materials science chemical industry automobile and aviation industry power engineering. Basic capabilities The computation modules of Khimera allow one to calculate the kinetic parameters of elementary processes and thermodynamic and transport properties from data on molecular structures and properties obtained from quantum-chemical calculations or from experiment. The molecular properties and the parameters of molecular interactions can be calculated using quantum-chemical software (Gaussian, GAMESS, Jaguar, ADF) and imported into Khimera automatically. The results of calculations can be presented visually and exported for further use in kinetic modeling and CFD packages.
External links Kintech Lab homepage Khimera's description KHIMERA (Cool tool for creating your own rxn rate constants) in "Video Review of Chemical Workbench-Tool for Modeling Reactive Flows and Developing Chemical Mechanisms"
https://en.wikipedia.org/wiki/Amstrad
Amstrad
Amstrad was a British electronics company, founded in 1968 by Alan Sugar at the age of 21. The name is a contraction of Alan Michael Sugar Trading. It was first listed on the London Stock Exchange in April 1980. During the late 1980s, Amstrad had a substantial share of the PC market in the UK. Amstrad was once a FTSE 100 Index constituent, but since 2007 has been wholly owned by Sky UK. In its later years, Amstrad's main business was manufacturing Sky UK interactive boxes. In 2010, Sky integrated Amstrad's satellite division into Sky so that it could make its own set-top boxes in-house. The company had offices in Kings Road, Brentwood, Essex. History 1960s and 1970s Amstrad (also known as AMSTrad) was founded in 1968 by Alan Sugar at the age of 21, the name of the original company being AMS Trading (Amstrad) Limited, derived from its founder's initials (Alan Michael Sugar). Amstrad entered the consumer electronics market. During the 1970s they were at the forefront of low-priced hi-fi, TV and car stereo cassette technologies. Lower prices were achieved by injection moulding plastic hi-fi turntable covers, undercutting competitors who used the vacuum forming process. Amstrad expanded to the marketing of low-cost amplifiers and tuners, imported from the Far East and badged with the Amstrad name for the UK market. Their first electrical product was the Amstrad 8000 amplifier. 1980s In 1980, Amstrad went public on the London Stock Exchange, and doubled in size each year during the early 1980s. Amstrad began marketing its own home computers in an attempt to capture the market from Commodore and Sinclair, starting with the Amstrad CPC range in 1984. The CPC 464 was launched in the UK, Ireland, France, Australia, New Zealand, Germany, Spain and Italy. It was followed by the CPC 664 and CPC 6128 models. Later "Plus" variants of the 464 and 6128, launched in 1990, increased their functionality slightly.
In 1985, the popular Amstrad PCW range was introduced, which were principally word processors, complete with printer, running the LocoScript word processing program. They were also capable of running the CP/M operating system. The Amsoft division of Amstrad was set up to provide in-house software and consumables. On 7 April 1986 Amstrad announced it had bought from Sinclair Research "the worldwide rights to sell and manufacture all existing and future Sinclair computers and computer products, together with the Sinclair brand name and those intellectual property rights where they relate to computers and computer related products", which included the ZX Spectrum, for £5 million. This included Sinclair's unsold stock of Sinclair QLs and Spectrums. Amstrad made more than £5 million on selling these surplus machines alone. Amstrad launched two new variants of the Spectrum: the ZX Spectrum +2, based on the ZX Spectrum 128, with a built-in cassette tape drive (like the CPC 464) and, the following year, the ZX Spectrum +3, with a built-in floppy disk drive (similar to the CPC 664 and 6128), taking the 3" disks that many Amstrad machines used. In 1986 Amstrad entered the IBM PC-compatible arena with the PC1512 system. In standard Amstrad livery and priced at £399 it was a success, capturing more than 25% of the European computer market. It was MS-DOS-based, but with the GEM graphics interface, and later Windows. In 1988 Amstrad attempted to make the first affordable portable personal computer with the PPC512 and 640 models, introduced a year before the Macintosh Portable. They ran MS-DOS on an 8 MHz processor, and the built-in screen could emulate the Monochrome Display Adapter or Color Graphics Adapter. Amstrad's final (and ill-fated) attempts to exploit the Sinclair brand were based on the company's own PCs; a compact desktop PC derived from the PPC 512, branded as the Sinclair PC200, and the PC1512 rebadged as the Sinclair PC500. 
Amstrad's second generation of PCs, the PC2000 series, was launched in 1989. However, due to a problem with the Seagate ST277R hard disk shipped with the PC2386 model, these had to be recalled and fitted with Western Digital controllers. Amstrad later successfully sued Seagate, but following bad press over the hard disk problems, Amstrad lost its lead in the European PC market. 1990s In the early 1990s, Amstrad began to focus on portable computers rather than desktop computers. In 1990, Amstrad tried to enter the video game console market with the Amstrad GX4000, similar to what Commodore did at the same time with the C64 GS. The console, based on the Amstrad 464 Plus hardware, was a commercial failure, because it used outdated technology and most games available for it were straight ports of CPC games that could be purchased for much less in their original format. In 1993, Amstrad was licensed by Sega to produce a system similar to the Sega TeraDrive, the Amstrad Mega PC, to try to regain their image in the gaming market. The system did not succeed as well as expected, mostly due to its high initial retail price of £999. In the same year, Amstrad released the PenPad, a PDA similar to the Apple Newton, released only weeks before it. It was a commercial failure, and had several technical and usability problems. It lacked most features that the Apple Newton included, but had a lower price at $450. As Amstrad began to concentrate less on computers and more on communications, they purchased several telecommunications businesses, including Betacom, Dancall Telecom, Viglen Computers and Dataflex Design Communications, during the early 1990s. Amstrad was a major supplier of set top boxes to UK satellite TV provider Sky from its launch in 1989.
Amstrad was key to the introduction of Sky, as the company was responsible for finding methods to produce the requisite equipment at an attractive price for the consumer - Alan Sugar famously approached "someone who bashes out dustbin lids" to manufacture satellite dishes cheaply. Ultimately, it was the only manufacturer producing receiver boxes and dishes at the system's launch, and it continued to manufacture set top boxes for Sky, from analogue to digital, later including Sky's Sky+ digital video recorder. By 1996, Alan Sugar was reported as having been looking for a buyer for Amstrad "for some time". Amongst the group's assets, the Dancall subsidiary was of particular interest to potential acquirer Psion, producer of handheld computer products, for its expertise in "GSM digital mobile phone functionality" and the potential to integrate such functionality into Psion's own product range. Despite "long drawn out negotiations", the parties failed to agree a price and a strategy to dispose of the group's other assets. In 1997, Amstrad PLC was wound up, its shares being split into Viglen and Betacom. Betacom PLC was then renamed Amstrad PLC. The same year, Amstrad supplied set top boxes to Australian broadcaster Foxtel, and in 2004 to Italian broadcaster Sky Italia. 21st Century In 2000, Amstrad released the first of its combined telephony and e-mail devices, called the E-m@iler. This was followed by the E-m@iler Plus in 2002, and the E3 Videophone in 2004. Amstrad's UK E-m@iler business is operated through a separate company, Amserve Ltd, which is 89.8% owned by Amstrad and 10.2% owned by DSG International plc (formerly Dixons plc). Amstrad also produced a variety of home entertainment products over its history, including hi-fi, televisions, VCRs, and DVD players. BSkyB takeover In July 2007, BSkyB announced a takeover of Amstrad for £125m, a 23.7% premium on its market capitalisation.
BSkyB had been a major client of Amstrad, accounting for 75% of sales for its 'set top box' business. Having supplied BSkyB with hardware since its inception in 1988, market analysts had noted the two companies becoming increasingly close. Sugar commented that he wished to play a part in the business, saying: "I turn 60 this year and I have had 40 years of hustling in the business, but now I have to start thinking about my team of loyal staff, many of whom have been with me for many years." It was announced on 2 July 2008 that Sugar had stepped down as Chairman of Amstrad, which had been planned since BSkyB took over in 2007. Amstrad was taken off the Stock Exchange on 9 October 2008. Amstrad has ceased operations as a trading company, and exists in name only. Under Sky, Amstrad currently only produce satellite receivers for Sky, as doing so allows them to reduce costs by cutting out the middleman. Amstrad's former offices are now a Premier Inn Hotel. Sky bought Amstrad so they could have their own hardware development division to develop new Satellite boxes (Sky Q) made in-house. 
Computer product lines

Home computers
- CPC464 (64 KB RAM, cassette drive)
- CPC472 (same as CPC464 but with 72 KB instead of 64 KB)
- CPC664 (3 inch internal disk variant of CPC464)
- CPC6128 (128 KB version of the CPC664 with 3 inch disk)
- 464 Plus (CPC464 with enhanced graphics and sound)
- 6128 Plus (CPC6128 with enhanced graphics and sound)
- GX4000 (games console based on 464 Plus)
- Sinclair ZX Spectrum +2 (re-engineered ZX Spectrum 128 with tape drive)
- Sinclair ZX Spectrum +3 (as ZX Spectrum +2 but with 3 inch disk drive instead of tape drive)

Word processors
- PCW8256 (Z80, 3.5 MHz, 256 KB RAM, single 180 KB 3" floppy drive, dot-matrix printer, green screen)
- PCW8512 (same as PCW8256 but with 512 KB RAM, 180 KB 3" A: drive, 720 KB 3" B: drive)
- PCW9512 (Z80, 3.5 MHz, 512 KB RAM, single or dual 720 KB 3" floppy drives, daisywheel printer, "paper white" screen)
- PcW9256 (Z80, 3.5 MHz, 256 KB RAM, single 720 KB 3.5" floppy drive, dot-matrix printer, "paper white" screen)
- PcW9512+ (same as PCW9512 but with single 3.5" 720 KB floppy drive)
- PcW10 (same as PcW9256 but with 512 KB RAM and a built-in parallel port)
- PcW16 (Z80, 16 MHz, single 1.44 MB 3.5" floppy drive, new machine not directly compatible with old PCWs)

Notepad computers
- NC100 (Z80, 64 KB RAM, 80×8 character LCD)
- NC150 (NC100 with 128 KB RAM, floppy disk interface and NC200 firmware, sold in France and Italy)
- NC200 (Z80, 128 KB RAM, adjustable 80×16 character LCD, 3.5 in floppy disk drive)

PC compatibles
- PC1512 (Intel 8086, 8 MHz, 512 KB RAM, CGA graphics), marketed in the United States as the PC5120
- PC1640 (Intel 8086, 8 MHz, 640 KB RAM, MDA/Hercules/CGA/EGA colour graphics), marketed in the United States as the PC6400
- PPC512 (portable using NEC V30 processor, 512 KB RAM, non-backlit supertwist CGA, one or two 720 KB 3.5" floppy drives), released around the same time as the PC1512
- PPC640 (portable using NEC V30 processor, 640 KB RAM, non-backlit supertwist CGA, one or two 720 KB 3.5" floppy drives, internal modem), released around the same time as the PC1640
- Sinclair PC200 (integral desktop PC for home computer market based on PPC512)
- PC-20 (the Australian version of the Sinclair PC200)
- Sinclair PC500 (rebadged PC1512)
- PC1286
- PC1386 (Intel 80386SX CPU, 20 MHz, 1 MB RAM)
- PC2086 (Intel 8086 CPU, 8 MHz, 640 KB RAM, VGA graphics), launched 1989
- PC2286 (Intel 80286 CPU, 12.5 MHz, 1 MB RAM, VGA graphics), launched 1989
- PC2386 (Intel 80386DX CPU, 20 MHz, 4 MB RAM, VGA graphics), launched 1989
- PC3086 (8 MHz 8086 CPU, 640 KB RAM)
- PC3286 (16 MHz 80286 CPU, 1 MB RAM)
- PC3386SX (20 MHz 80386SX CPU, 1 MB RAM)
- PC4386SX (20 MHz 80386SX CPU, 4 MB RAM)
- PC5086 (8 MHz 8086 CPU, 640 KB RAM)
- PC5286 (16 MHz 80286 CPU, 1 MB RAM)
- PC5386SX (20 MHz 80386SX CPU, 2 MB RAM, VGA graphics), launched 1991
- PC6486SX
- PC7000 series: PC7286, PC7386SX, PC7486SLC
- PC8486
- PC9486 (25 or 33 MHz 80486SX, or 50 MHz 80486DX2)
- PC9486i (66 MHz 80486DX2 CPU, 4 MB RAM)
- PC9555i (120 MHz Pentium)
- Amstrad Mega PC (Intel 80386SX CPU, 25 MHz, integrated Mega Drive)
- ALT286 (laptop; 16 MHz 80286 CPU, 1 MB RAM)
- ALT386SX (laptop; 16 MHz 80386SX CPU, 1 MB RAM)
- ACL386SX (laptop; 20 MHz 80386SX CPU, 1 MB RAM, colour TFT LCD)
- ANB386SX (notebook; 80386SX CPU, 1 MB RAM)

PC accessories
- Amstrad DMP1000 9-pin dot matrix printer
- Amstrad DMP3000, DMP3160, DMP3250di 9-pin dot matrix printers (differing printing speeds), the special model 3250di (dual interface) having both serial and parallel ports
- Amstrad SM2400 2400 baud internal modem (came with Mirror software)

PDA
- PDA 600 Pen Pad (1993, Z8S180 CPU)

Set-top boxes
- Amstrad/Fidelity Satellite Systems SRX100 (1989), SRX200 (1989), SRD400 (1990)
- Amstrad Sky box DRX100 (2001), DRX200 (2001), DRX300 (2003), DRX400 (2004), DRX500 (2004), DRX550 (2006)
- Amstrad Sky+ box DRX180 (2003), DRX280 (2003)
- Amstrad Sky+HD box DRX780 (2007), DRX890, DRX895 (2009)
- Amstrad Sky HD Multiroom Receiver DRX595 (2011)

See also Amsoft PC-1512 Amstrad Action Amstrad NC150 Amstrad NC200 Amstrad NC100

Further reading Sugar, Alan. What You See Is What You Get: My Autobiography (2010), hardback. Thomas, David. Alan Sugar: The Amstrad Story (1991), paperback.
https://en.wikipedia.org/wiki/Game%20theory
Game theory
Game theory is the study of mathematical models of strategic interactions among rational agents. It has applications in all fields of social science, as well as in logic, systems science and computer science. Originally, it addressed two-person zero-sum games, in which each participant's gains or losses are exactly balanced by those of the other participants. In the 21st century, game theory applies to a wide range of behavioral relations; it is now an umbrella term for the science of logical decision making in humans, animals, and computers. Modern game theory began with the idea of mixed-strategy equilibria in two-person zero-sum games and its proof by John von Neumann. Von Neumann's original proof used the Brouwer fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by the 1944 book Theory of Games and Economic Behavior, co-written with Oskar Morgenstern, which considered cooperative games of several players. The second edition of this book provided an axiomatic theory of expected utility, which allowed mathematical statisticians and economists to treat decision-making under uncertainty. Game theory was developed extensively in the 1950s by many scholars. It was explicitly applied to evolution in the 1970s, although similar developments go back at least as far as the 1930s. Game theory has been widely recognized as an important tool in many fields. As of 2020, with the Nobel Memorial Prize in Economic Sciences going to game theorists Paul Milgrom and Robert B. Wilson, fifteen game theorists have won the economics Nobel Prize. John Maynard Smith was awarded the Crafoord Prize for his application of evolutionary game theory. History Precursors Discussions on the mathematics of games began long before the rise of modern mathematical game theory.
Cardano's work on games of chance in Liber de ludo aleae (Book on Games of Chance), which was written around 1564 but published posthumously in 1663, formulated some of the field's basic ideas. In the 1650s, Pascal and Huygens developed the concept of expectation in reasoning about the structure of games of chance, and Huygens published his gambling calculus in De ratiociniis in ludo aleæ (On Reasoning in Games of Chance) in 1657. In 1713, a letter attributed to Charles Waldegrave analyzed a game called "le Her"; Waldegrave was an active Jacobite and uncle to James Waldegrave, a British diplomat. In this letter, Waldegrave provided a minimax mixed-strategy solution to a two-person version of the card game le Her, and the problem is now known as the Waldegrave problem. In his 1838 Recherches sur les principes mathématiques de la théorie des richesses (Researches into the Mathematical Principles of the Theory of Wealth), Antoine Augustin Cournot considered a duopoly and presented a solution that is the Nash equilibrium of the game. In 1913, Ernst Zermelo published Über eine Anwendung der Mengenlehre auf die Theorie des Schachspiels (On an Application of Set Theory to the Theory of the Game of Chess), which proved that the optimal chess strategy is strictly determined. This paved the way for more general theorems. In 1938, the Danish mathematical economist Frederik Zeuthen proved that the mathematical model had a winning strategy by using Brouwer's fixed point theorem. In his 1938 book Applications aux Jeux de Hasard and earlier notes, Émile Borel proved a minimax theorem for two-person zero-sum matrix games only when the pay-off matrix is symmetric, and provided a solution to a non-trivial infinite game (known in English as the Blotto game). Borel conjectured the non-existence of mixed-strategy equilibria in finite two-person zero-sum games, a conjecture that was proved false by von Neumann.
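The minimax mixed strategies discussed above can be computed in closed form for 2×2 zero-sum games. Since le Her's full payoff table is not reproduced here, this minimal Python sketch uses matching pennies, a standard game with no saddle point, as a stand-in:

```python
from fractions import Fraction

# Row player's payoffs in matching pennies; the column player's payoffs are
# the negation, which is what makes the game zero-sum.
A = [[1, -1],
     [-1, 1]]
a, b = A[0]
c, d = A[1]

# With no saddle point, the row player mixes with probability p on the first
# row so that the column player is indifferent between her two columns:
#   p*a + (1-p)*c == p*b + (1-p)*d  =>  p = (d - c) / ((a - b) + (d - c))
p = Fraction(d - c, (a - b) + (d - c))
value = p * a + (1 - p) * c  # the minimax value of the game

print(p, value)  # prints: 1/2 0
```

For matching pennies each player randomizes half-and-half and the value is zero, illustrating von Neumann's result that finite two-person zero-sum games always have a mixed-strategy equilibrium.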
Birth and early developments Game theory did not exist as a unique field until John von Neumann published the paper On the Theory of Games of Strategy in 1928. Von Neumann's original proof used Brouwer's fixed-point theorem on continuous mappings into compact convex sets, which became a standard method in game theory and mathematical economics. His paper was followed by his 1944 book Theory of Games and Economic Behavior co-authored with Oskar Morgenstern. The second edition of this book provided an axiomatic theory of utility, which reincarnated Daniel Bernoulli's old theory of utility (of money) as an independent discipline. Von Neumann's work in game theory culminated in this 1944 book. This foundational work contains the method for finding mutually consistent solutions for two-person zero-sum games. Subsequent work focused primarily on cooperative game theory, which analyzes optimal strategies for groups of individuals, presuming that they can enforce agreements between them about proper strategies. In 1950, the first mathematical discussion of the prisoner's dilemma appeared, and an experiment was undertaken by notable mathematicians Merrill M. Flood and Melvin Dresher, as part of the RAND Corporation's investigations into game theory. RAND pursued the studies because of possible applications to global nuclear strategy. Around this same time, John Nash developed a criterion for mutual consistency of players' strategies known as the Nash equilibrium, applicable to a wider variety of games than the criterion proposed by von Neumann and Morgenstern. Nash proved that every finite n-player, non-zero-sum (not just two-player zero-sum) non-cooperative game has what is now known as a Nash equilibrium in mixed strategies. Game theory experienced a flurry of activity in the 1950s, during which the concepts of the core, the extensive form game, fictitious play, repeated games, and the Shapley value were developed. 
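Nash's mutual-consistency criterion can be checked by brute force in the pure-strategy case: a profile is an equilibrium when no player gains by unilaterally deviating. A minimal Python sketch for the prisoner's dilemma, using conventional illustrative payoffs (the specific numbers are a standard textbook choice, not taken from this article):

```python
from itertools import product

# Conventional prisoner's dilemma payoffs as (player 1, player 2) utilities;
# "C" = cooperate, "D" = defect.
payoffs = {
    ("C", "C"): (-1, -1),
    ("C", "D"): (-3, 0),
    ("D", "C"): (0, -3),
    ("D", "D"): (-2, -2),
}
actions = ("C", "D")

def is_nash(a1, a2):
    """A profile is a Nash equilibrium if neither player can gain by
    unilaterally deviating to another action."""
    u1, u2 = payoffs[(a1, a2)]
    return (all(payoffs[(d, a2)][0] <= u1 for d in actions)
            and all(payoffs[(a1, d)][1] <= u2 for d in actions))

equilibria = [prof for prof in product(actions, actions) if is_nash(*prof)]
print(equilibria)  # prints: [('D', 'D')], i.e. mutual defection
```

The unique pure equilibrium is mutual defection, even though both players would be better off cooperating, which is exactly what made the game interesting to the RAND researchers.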
The 1950s also saw the first applications of game theory to philosophy and political science. Prize-winning achievements In 1965, Reinhard Selten introduced his solution concept of subgame perfect equilibria, which further refined the Nash equilibrium. Later he would introduce trembling hand perfection as well. In 1994 Nash, Selten and Harsanyi became Economics Nobel Laureates for their contributions to economic game theory. In the 1970s, game theory was extensively applied in biology, largely as a result of the work of John Maynard Smith and his evolutionarily stable strategy. In addition, the concepts of correlated equilibrium, trembling hand perfection, and common knowledge were introduced and analyzed. In 2005, game theorists Thomas Schelling and Robert Aumann followed Nash, Selten, and Harsanyi as Nobel Laureates. Schelling worked on dynamic models, early examples of evolutionary game theory. Aumann contributed more to the equilibrium school, introducing equilibrium coarsening and correlated equilibria, and developing an extensive formal analysis of the assumption of common knowledge and of its consequences. In 2007, Leonid Hurwicz, Eric Maskin, and Roger Myerson were awarded the Nobel Prize in Economics "for having laid the foundations of mechanism design theory". Myerson's contributions include the notion of proper equilibrium, and an important graduate text: Game Theory, Analysis of Conflict. Hurwicz introduced and formalized the concept of incentive compatibility. In 2012, Alvin E. Roth and Lloyd S. Shapley were awarded the Nobel Prize in Economics "for the theory of stable allocations and the practice of market design". In 2014, the Nobel went to game theorist Jean Tirole. Game types Cooperative / non-cooperative A game is cooperative if the players are able to form binding commitments externally enforced (e.g. through contract law). A game is non-cooperative if players cannot form alliances or if all agreements need to be self-enforcing (e.g. 
through credible threats). Cooperative games are often analyzed through the framework of cooperative game theory, which focuses on predicting which coalitions will form, the joint actions that groups take, and the resulting collective payoffs. It is opposed to the traditional non-cooperative game theory, which focuses on predicting individual players' actions and payoffs and analyzing Nash equilibria. The focus on individual payoff can result in a phenomenon known as the tragedy of the commons, where resources are used to a collectively inefficient level. The lack of formal negotiation leads to the deterioration of public goods through over-use and under-provision stemming from private incentives. Cooperative game theory provides a high-level approach as it describes only the structure, strategies, and payoffs of coalitions, whereas non-cooperative game theory also looks at how bargaining procedures will affect the distribution of payoffs within each coalition. As non-cooperative game theory is more general, cooperative games can be analyzed through the approach of non-cooperative game theory (the converse does not hold) provided that sufficient assumptions are made to encompass all the possible strategies available to players due to the possibility of external enforcement of cooperation. While using a single theory may be desirable, in many instances insufficient information is available to accurately model the formal procedures available during the strategic bargaining process, or the resulting model would be too complex to offer a practical tool in the real world. In such cases, cooperative game theory provides a simplified approach that allows analysis of the game at large without having to make any assumption about bargaining powers. Symmetric / asymmetric A symmetric game is a game where the payoffs for playing a particular strategy depend only on the other strategies employed, not on who is playing them.
That is, if the identities of the players can be changed without changing the payoff to the strategies, then a game is symmetric. Many of the commonly studied 2×2 games are symmetric. The standard representations of chicken, the prisoner's dilemma, and the stag hunt are all symmetric games. Some scholars would consider certain asymmetric games as examples of these games as well. However, the most common payoffs for each of these games are symmetric. The most commonly studied asymmetric games are games where there are not identical strategy sets for both players. For instance, the ultimatum game and similarly the dictator game have different strategies for each player. It is possible, however, for a game to have identical strategies for both players, yet be asymmetric. Zero-sum / non-zero-sum Zero-sum games are a special case of constant-sum games in which choices by players can neither increase nor decrease the available resources. In zero-sum games, the total benefit to all players, for every combination of strategies, always adds to zero (more informally, a player benefits only at the equal expense of others). Poker exemplifies a zero-sum game (ignoring the possibility of the house's cut), because one wins exactly the amount one's opponents lose. Other zero-sum games include matching pennies and most classical board games including Go and chess. Many games studied by game theorists (including the famed prisoner's dilemma) are non-zero-sum games, because the outcome has net results greater or less than zero. Informally, in non-zero-sum games, a gain by one player does not necessarily correspond with a loss by another. Constant-sum games correspond to activities like theft and gambling, but not to the fundamental economic situation in which there are potential gains from trade.
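Both the symmetry condition and the zero-sum condition are mechanical checks on the two players' payoff matrices: a game is symmetric when the column player's matrix is the transpose of the row player's, and zero-sum when the two matrices cancel entry by entry. A short Python sketch (the payoff numbers are conventional textbook values, not taken from this article):

```python
def is_symmetric(A, B):
    """Symmetric: swapping the players' identities leaves payoffs
    unchanged, i.e. B equals the transpose of A."""
    n = len(A)
    return all(B[i][j] == A[j][i] for i in range(n) for j in range(n))

def is_zero_sum(A, B):
    """Zero-sum: the two players' payoffs cancel in every outcome."""
    n = len(A)
    return all(A[i][j] + B[i][j] == 0 for i in range(n) for j in range(n))

# Prisoner's dilemma: symmetric but not zero-sum.
pd_A = [[-1, -3], [0, -2]]
pd_B = [[-1, 0], [-3, -2]]

# Matching pennies: zero-sum but not symmetric.
mp_A = [[1, -1], [-1, 1]]
mp_B = [[-1, 1], [1, -1]]

print(is_symmetric(pd_A, pd_B), is_zero_sum(pd_A, pd_B))  # prints: True False
print(is_symmetric(mp_A, mp_B), is_zero_sum(mp_A, mp_B))  # prints: False True
```

The two examples show that the properties are independent: the prisoner's dilemma is symmetric yet not zero-sum, while matching pennies is zero-sum yet not symmetric.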
It is possible to transform any constant-sum game into a (possibly asymmetric) zero-sum game by adding a dummy player (often called "the board") whose losses compensate the players' net winnings. Simultaneous / sequential Simultaneous games are games where both players move simultaneously, or, if they do not move simultaneously, where the later players are unaware of the earlier players' actions (making the moves effectively simultaneous). Sequential games (or dynamic games) are games where later players have some knowledge about earlier actions. This need not be perfect information about every action of earlier players; it might be very little knowledge. For instance, a player may know that an earlier player did not perform one particular action, while not knowing which of the other available actions the first player actually performed. The difference between simultaneous and sequential games is captured in the different representations discussed above. Often, normal form is used to represent simultaneous games, while extensive form is used to represent sequential ones. The transformation of extensive form to normal form is many-to-one: multiple extensive form games correspond to the same normal form. Consequently, notions of equilibrium for simultaneous games are insufficient for reasoning about sequential games; see subgame perfection. Cournot Competition The Cournot competition model involves firms independently and simultaneously choosing the quantity of a homogeneous product to produce, where marginal cost can differ between firms and each firm's payoff is its profit. Production costs are public information, and each firm aims to find its profit-maximising quantity based on what it believes the other firm will produce. In this game the firms would like to produce at the monopoly quantity, but each has a strong incentive to deviate and produce more, which decreases the market-clearing price.
For example, firms may be tempted to deviate from the monopoly quantity if there is a low monopoly quantity and high price, with the aim of increasing production to maximise profit. However, this option does not provide the highest payoff, as a firm's ability to maximise profits depends on its market share and the elasticity of market demand. The Cournot equilibrium is reached when each firm operates on its reaction function with no incentive to deviate, as each has the best response to the other firm's output. Within the game, firms reach the Nash equilibrium when the Cournot equilibrium is achieved. Bertrand Competition The Bertrand competition model assumes homogeneous products and a constant marginal cost, with players choosing prices. The equilibrium of price competition is where the price is equal to marginal cost, assuming complete information about the competitors' costs. Away from this equilibrium, firms have an incentive to undercut each other, because the firm offering the homogeneous product at the lower price gains all of the market share, known as a cost advantage. Perfect information and imperfect information An important subset of sequential games consists of games of perfect information. A game is one of perfect information if all players, at every move in the game, know the moves previously made by all other players. In reality, this can be applied to firms and consumers having information about the price and quality of all the available goods in a market. An imperfect information game is played when the players do not know all the moves already made by their opponents, as in a simultaneous move game. Most games studied in game theory are imperfect-information games. Examples of perfect-information games include tic-tac-toe, checkers, chess, and Go. Many card games are games of imperfect information, such as poker and bridge. Perfect information is often confused with complete information, which is a similar concept.
Complete information requires that every player know the strategies and payoffs available to the other players but not necessarily the actions taken, whereas perfect information is knowledge of all aspects of the game and players. Games of incomplete information can, however, be reduced to games of imperfect information by introducing "moves by nature". Bayesian game One of the assumptions of the Nash equilibrium is that every player has correct beliefs about the actions of the other players. However, there are many situations in game theory where participants do not fully understand the characteristics of their opponents. Negotiators may be unaware of their opponent's valuation of the object of negotiation, companies may be unaware of their opponent's cost functions, combatants may be unaware of their opponent's strengths, and jurors may be unaware of their colleagues' interpretation of the evidence at trial. In some cases, participants may know the character of their opponent well, but may not know how well their opponent knows his or her own character. A Bayesian game is a strategic game with incomplete information. As in a strategic game, the decision makers are players, and every player has a set of actions. A core part of the incomplete information specification is the set of states. Every state completely describes a collection of characteristics relevant to a player, such as that player's preferences and other details about them. There must be a state for every set of features that some player believes may exist. Consider, for example, a situation where Player 1 is unsure whether Player 2 would rather date her or get away from her, while Player 2 understands Player 1's preferences as before.
To be specific, suppose that Player 1 believes that Player 2 wants to date her with probability 1/2 and wants to get away from her with probability 1/2 (this evaluation likely comes from Player 1's experience: she faces players who want to date her half of the time in such a case and players who want to avoid her half of the time). Because of the probability involved, the analysis of this situation requires understanding the players' preferences over such random draws, even if one is interested only in pure-strategy equilibria. Combinatorial games Games in which the difficulty of finding an optimal strategy stems from the multiplicity of possible moves are called combinatorial games. Examples include chess and Go. Games that involve imperfect information may also have a strong combinatorial character, for instance backgammon. There is no unified theory addressing combinatorial elements in games. There are, however, mathematical tools that can solve particular problems and answer general questions. Games of perfect information have been studied in combinatorial game theory, which has developed novel representations, e.g. surreal numbers, as well as combinatorial and algebraic (and sometimes non-constructive) proof methods to solve games of certain types, including "loopy" games that may result in infinitely long sequences of moves. These methods address games with higher combinatorial complexity than those usually considered in traditional (or "economic") game theory. A typical game that has been solved this way is Hex. A related field of study, drawing from computational complexity theory, is game complexity, which is concerned with estimating the computational difficulty of finding optimal strategies. Research in artificial intelligence has addressed both perfect and imperfect information games that have very complex combinatorial structures (like chess, go, or backgammon) for which no provable optimal strategies have been found.
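By contrast, some combinatorial games are completely solved. A classic instance is normal-play Nim, where Bouton's theorem reduces optimal play to a parity computation: the player to move wins exactly when the XOR ("nim-sum") of the pile sizes is nonzero. A short sketch:

```python
from functools import reduce
from operator import xor

def nim_value(piles):
    """Bouton's theorem: the XOR ('nim-sum') of the pile sizes.
    Nonzero means the player to move has a winning strategy."""
    return reduce(xor, piles, 0)

def winning_move(piles):
    """Return a (pile_index, new_size) move that leaves nim-sum 0,
    or None if the position is already lost under perfect play."""
    s = nim_value(piles)
    if s == 0:
        return None
    for i, p in enumerate(piles):
        target = p ^ s
        if target < p:          # a legal move must remove stones
            return i, target
    return None

print(nim_value([3, 4, 5]))     # 2 -> the first player wins
print(winning_move([3, 4, 5]))  # (0, 1): reduce pile 0 from 3 to 1
```

After the suggested move the position [1, 4, 5] has nim-sum 0, so every reply returns the advantage to the first player.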
The practical solutions involve computational heuristics, like alpha–beta pruning or use of artificial neural networks trained by reinforcement learning, which make games more tractable in computing practice. Infinitely long games Games, as studied by economists and real-world game players, are generally finished in finitely many moves. Pure mathematicians are not so constrained, and set theorists in particular study games that last for infinitely many moves, with the winner (or other payoff) not known until after all those moves are completed. The focus of attention is usually not so much on the best way to play such a game, but on whether one player has a winning strategy. (It can be proven, using the axiom of choice, that there are games (even games with perfect information, and where the only outcomes are "win" or "lose") for which neither player has a winning strategy.) The existence of such strategies, for cleverly designed games, has important consequences in descriptive set theory. Discrete and continuous games Much of game theory is concerned with finite, discrete games that have a finite number of players, moves, events, outcomes, etc. Many concepts can be extended, however. Continuous games allow players to choose a strategy from a continuous strategy set. For instance, Cournot competition is typically modeled with players' strategies being any non-negative quantities, including fractional quantities. Differential games Differential games such as the continuous pursuit and evasion game are continuous games where the evolution of the players' state variables is governed by differential equations. The problem of finding an optimal strategy in a differential game is closely related to optimal control theory. In particular, there are two types of strategies: open-loop strategies are found using the Pontryagin maximum principle, while closed-loop strategies are found using Bellman's dynamic programming method.
A particular case of differential games is games with a random time horizon. In such games, the terminal time is a random variable with a given probability distribution function. Therefore, the players maximize the mathematical expectation of the cost function. It was shown that the modified optimization problem can be reformulated as a discounted differential game over an infinite time interval. Evolutionary game theory Evolutionary game theory studies players who adjust their strategies over time according to rules that are not necessarily rational or farsighted. In general, the evolution of strategies over time according to such rules is modeled as a Markov chain with a state variable such as the current strategy profile or how the game has been played in the recent past. Such rules may feature imitation, optimization, or survival of the fittest. In biology, such models can represent evolution, in which offspring adopt their parents' strategies and parents who play more successful strategies (i.e. corresponding to higher payoffs) have a greater number of offspring. In the social sciences, such models typically represent strategic adjustment by players who play a game many times within their lifetime and, consciously or unconsciously, occasionally adjust their strategies. Stochastic outcomes (and relation to other fields) Individual decision problems with stochastic outcomes are sometimes considered "one-player games". These situations are not considered game theoretical by some authors. They may be modeled using similar tools within the related disciplines of decision theory, operations research, and areas of artificial intelligence, particularly AI planning (with uncertainty) and multi-agent systems. Although these fields may have different motivations, the mathematics involved are substantially the same, e.g. using Markov decision processes (MDPs).
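As a sketch of such a one-player stochastic decision problem, the code below solves a tiny, entirely hypothetical MDP by value iteration; the states, actions, rewards, and transition probabilities are assumptions invented for the example.

```python
# A tiny two-state MDP solved by value iteration; the model below is
# invented purely for illustration.
gamma = 0.9  # discount factor

# transitions[state][action] = list of (probability, next_state, reward)
transitions = {
    "low":  {"wait":   [(1.0, "low", 0.0)],
             "invest": [(0.6, "high", -1.0), (0.4, "low", -1.0)]},
    "high": {"wait":   [(0.8, "high", 2.0), (0.2, "low", 2.0)],
             "invest": [(1.0, "high", 1.0)]},
}

# Repeatedly apply the Bellman optimality operator until convergence.
V = {s: 0.0 for s in transitions}
for _ in range(500):
    V = {s: max(sum(p * (r + gamma * V[s2]) for p, s2, r in outs)
                for outs in transitions[s].values())
         for s in transitions}

# Read off the greedy (optimal) policy from the converged values.
policy = {s: max(transitions[s],
                 key=lambda a: sum(p * (r + gamma * V[s2])
                                   for p, s2, r in transitions[s][a]))
          for s in transitions}
print(policy)   # {'low': 'invest', 'high': 'wait'}
```

Note there is no opponent here: the "uncertainty" is a fixed probability distribution, which is exactly the contrast with the adversarial minimax modeling discussed next.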
Stochastic outcomes can also be modeled in terms of game theory by adding a randomly acting player who makes "chance moves" ("moves by nature"). This player is not typically considered a third player in what is otherwise a two-player game, but merely serves to provide a roll of the dice where required by the game. For some problems, different approaches to modeling stochastic outcomes may lead to different solutions. For example, the difference in approach between MDPs and the minimax solution is that the latter considers the worst case over a set of adversarial moves, rather than reasoning in expectation about these moves given a fixed probability distribution. The minimax approach may be advantageous where stochastic models of uncertainty are not available, but may also overestimate extremely unlikely (but costly) events, dramatically swaying the strategy if it is assumed that an adversary can force such an event to happen. (See Black swan theory for more discussion on this kind of modeling issue, particularly as it relates to predicting and limiting losses in investment banking.) General models that include all elements of stochastic outcomes, adversaries, and partial or noisy observability (of moves by other players) have also been studied. The "gold standard" is considered to be the partially observable stochastic game (POSG), but few realistic problems are computationally feasible in POSG representation. Metagames These are games the play of which is the development of the rules for another game, the target or subject game. Metagames seek to maximize the utility value of the rule set developed. The theory of metagames is related to mechanism design theory. The term metagame analysis is also used to refer to a practical approach developed by Nigel Howard, whereby a situation is framed as a strategic game in which stakeholders try to realize their objectives by means of the options available to them.
Subsequent developments have led to the formulation of confrontation analysis. Pooling games These are games prevailing over all forms of society. Pooling games are repeated plays with a changing payoff table, in general over an experienced path, and their equilibrium strategies usually take the form of evolutionary social conventions and economic conventions. Pooling game theory emerged to formally recognize the interaction between optimal choice in one play and the emergence of the forthcoming payoff-table update path, to identify the existence and robustness of invariance, and to predict variance over time. The theory is based upon a topological transformation classification of payoff-table updates over time to predict variance and invariance, and is also within the jurisdiction of the computational law of reachable optimality for ordered systems. Mean field game theory Mean field game theory is the study of strategic decision making in very large populations of small interacting agents. This class of problems was considered in the economics literature by Boyan Jovanovic and Robert W. Rosenthal, in the engineering literature by Peter E. Caines, and by mathematicians Pierre-Louis Lions and Jean-Michel Lasry. Representation of games The games studied in game theory are well-defined mathematical objects. To be fully defined, a game must specify the following elements: the players of the game, the information and actions available to each player at each decision point, and the payoffs for each outcome. (Eric Rasmusen refers to these four "essential elements" by the acronym "PAPI".) A game theorist typically uses these elements, along with a solution concept of their choosing, to deduce a set of equilibrium strategies for each player such that, when these strategies are employed, no player can profit by unilaterally deviating from their strategy.
These equilibrium strategies determine an equilibrium to the game, a stable state in which either one outcome occurs or a set of outcomes occurs with known probability. Most cooperative games are presented in the characteristic function form, while the extensive and the normal forms are used to define noncooperative games. Extensive form The extensive form can be used to formalize games with a time sequencing of moves. Such games are played on trees (as pictured here), where each vertex (or node) represents a point of choice for a player. The player is specified by a number listed by the vertex. The lines out of the vertex represent the possible actions for that player. The payoffs are specified at the bottom of the tree. The extensive form can be viewed as a multi-player generalization of a decision tree. To solve any extensive form game, backward induction must be used. It involves working backward up the game tree to determine what a rational player would do at the last vertex of the tree, what the player with the previous move would do given that the player with the last move is rational, and so on until the first vertex of the tree is reached. The game pictured consists of two players. The way this particular game is structured (i.e., with sequential decision making and perfect information), Player 1 "moves" first by choosing either F or U (fair or unfair). Next in the sequence, Player 2, who has now seen Player 1's move, chooses one of their two available actions. Once Player 2 has made their choice, the game is considered finished and each player gets their respective payoff. Suppose the players' choices lead to the outcome in which Player 1 gets a payoff of "eight" (which in real-world terms can be interpreted in many ways, the simplest of which is in terms of money, but could mean things such as eight days of vacation or eight countries conquered or even eight more opportunities to play the same game against other players) and Player 2 gets a payoff of "two".
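The backward-induction procedure just described can be sketched in code. The tree below is a hypothetical fair/unfair game in the spirit of the pictured example; apart from the (8, 2) outcome mentioned above, the payoffs and labels are invented for illustration.

```python
# Backward induction on a small extensive-form game.  Each internal node
# names the player to move; leaves hold (player1, player2) payoffs.
# The tree and most payoffs are assumptions made for this sketch.

def backward_induction(node):
    """Return (payoffs, plan) for the subgame rooted at `node`."""
    if "payoffs" in node:                      # leaf: nothing left to decide
        return node["payoffs"], []
    player = node["player"]                    # 0 -> Player 1, 1 -> Player 2
    best = None
    for action, child in node["children"].items():
        payoffs, plan = backward_induction(child)
        if best is None or payoffs[player] > best[0][player]:
            best = (payoffs, [action] + plan)
    return best

game = {
    "player": 0,
    "children": {
        "fair":   {"player": 1, "children": {
            "accept": {"payoffs": (5, 5)},
            "reject": {"payoffs": (0, 0)}}},
        "unfair": {"player": 1, "children": {
            "accept": {"payoffs": (8, 2)},
            "reject": {"payoffs": (0, 0)}}},
    },
}

payoffs, path = backward_induction(game)
print(payoffs, path)   # (8, 2) ['unfair', 'accept']
```

Working up from the last vertices, Player 2 accepts either offer (something beats nothing), so Player 1, anticipating this, chooses the unfair split.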
The extensive form can also capture simultaneous-move games and games with imperfect information. To represent it, either a dotted line connects different vertices to represent them as being part of the same information set (i.e. the players do not know at which point they are), or a closed line is drawn around them. (See example in the imperfect information section.) Normal form The normal (or strategic form) game is usually represented by a matrix which shows the players, strategies, and payoffs (see the example to the right). More generally it can be represented by any function that associates a payoff for each player with every possible combination of actions. In the accompanying example there are two players; one chooses the row and the other chooses the column. Each player has two strategies, which are specified by the number of rows and the number of columns. The payoffs are provided in the interior. The first number is the payoff received by the row player (Player 1 in our example); the second is the payoff for the column player (Player 2 in our example). Suppose that Player 1 plays Up and that Player 2 plays Left. Then Player 1 gets a payoff of 4, and Player 2 gets 3. When a game is presented in normal form, it is presumed that each player acts simultaneously or, at least, without knowing the actions of the other. If players have some information about the choices of other players, the game is usually presented in extensive form. Every extensive-form game has an equivalent normal-form game; however, the transformation to normal form may result in an exponential blowup in the size of the representation, making it computationally impractical. Characteristic function form In games that possess transferable utility, separate rewards are not given; rather, the characteristic function determines the payoff of each coalition. The idea is that a coalition that is 'empty', so to speak, does not receive a reward at all.
The origin of this form is to be found in John von Neumann and Oskar Morgenstern's book; when looking at these instances, they observed that when a coalition forms, it plays against the remaining fraction of players as if two individuals were playing a normal game. The balanced payoff of C is a basic function. Although there are differing examples that help determine coalitional amounts from normal games, not all of them appear to be derivable in characteristic-function form from such games. Formally, a characteristic function form game is given as a pair (N, v), where N represents the set of players and v is a characteristic function assigning a worth to each coalition. Such characteristic functions have been extended to describe games where there is no transferable utility. Alternative game representations Alternative game representation forms are used for some subclasses of games or adjusted to the needs of interdisciplinary research. In addition to classical game representations, some of the alternative representations also encode time-related aspects. General and applied uses As a method of applied mathematics, game theory has been used to study a wide variety of human and animal behaviors. It was initially developed in economics to understand a large collection of economic behaviors, including behaviors of firms, markets, and consumers. The first use of game-theoretic analysis was by Antoine Augustin Cournot in 1838 with his solution of the Cournot duopoly. The use of game theory in the social sciences has expanded, and game theory has been applied to political, sociological, and psychological behaviors as well. Although pre-twentieth-century naturalists such as Charles Darwin made game-theoretic kinds of statements, the use of game-theoretic analysis in biology began with Ronald Fisher's studies of animal behavior during the 1930s. This work predates the name "game theory", but it shares many important features with this field. The developments in economics were later applied to biology largely by John Maynard Smith in his 1982 book Evolution and the Theory of Games.
In addition to being used to describe, predict, and explain behavior, game theory has also been used to develop theories of ethical or normative behavior and to prescribe such behavior. In economics and philosophy, scholars have applied game theory to help in the understanding of good or proper behavior. Game-theoretic arguments of this type can be found as far back as Plato. An alternative version of game theory, called chemical game theory, represents the player's choices as metaphorical chemical reactant molecules called "knowlecules".  Chemical game theory then calculates the outcomes as equilibrium solutions to a system of chemical reactions. Description and modeling The primary use of game theory is to describe and model how human populations behave. Some scholars believe that by finding the equilibria of games they can predict how actual human populations will behave when confronted with situations analogous to the game being studied. This particular view of game theory has been criticized. It is argued that the assumptions made by game theorists are often violated when applied to real-world situations. Game theorists usually assume players act rationally, but in practice, human behavior often deviates from this model. Game theorists respond by comparing their assumptions to those used in physics. Thus while their assumptions do not always hold, they can treat game theory as a reasonable scientific ideal akin to the models used by physicists. However, empirical work has shown that in some classic games, such as the centipede game, guess 2/3 of the average game, and the dictator game, people regularly do not play Nash equilibria. There is an ongoing debate regarding the importance of these experiments and whether the analysis of the experiments fully captures all aspects of the relevant situation. Some game theorists, following the work of John Maynard Smith and George R. Price, have turned to evolutionary game theory in order to resolve these issues. 
These models presume either no rationality or bounded rationality on the part of players. Despite the name, evolutionary game theory does not necessarily presume natural selection in the biological sense. Evolutionary game theory includes both biological as well as cultural evolution and also models of individual learning (for example, fictitious play dynamics). Prescriptive or normative analysis Some scholars see game theory not as a predictive tool for the behavior of human beings, but as a suggestion for how people ought to behave. Since a strategy corresponding to a Nash equilibrium of a game constitutes one's best response to the actions of the other players – provided they are in (the same) Nash equilibrium – playing a strategy that is part of a Nash equilibrium seems appropriate. This normative use of game theory has also come under criticism. Economics and business Game theory is a major method used in mathematical economics and business for modeling competing behaviors of interacting agents. Applications include a wide array of economic phenomena and approaches, such as auctions, bargaining, mergers and acquisitions pricing, fair division, duopolies, oligopolies, social network formation, agent-based computational economics, general equilibrium, mechanism design, and voting systems; and across such broad areas as experimental economics, behavioral economics, information economics, industrial organization, and political economy. This research usually focuses on particular sets of strategies known as "solution concepts" or "equilibria". A common assumption is that players act rationally. In non-cooperative games, the most famous of these is the Nash equilibrium. A set of strategies is a Nash equilibrium if each represents a best response to the other strategies. If all the players are playing the strategies in a Nash equilibrium, they have no unilateral incentive to deviate, since their strategy is the best they can do given what others are doing.
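This best-response condition can be checked mechanically for small games in normal form. The sketch below enumerates the pure-strategy Nash equilibria of a bimatrix game, using the standard Prisoner's Dilemma payoffs as the illustration:

```python
# Enumerate pure-strategy Nash equilibria of a bimatrix game by checking
# the best-response condition directly.  The payoff matrices below are the
# Prisoner's Dilemma (row player's payoffs in A, column player's in B),
# with strategies ordered (Cooperate, Defect).
A = [[-1, -3],
     [ 0, -2]]
B = [[-1,  0],
     [-3, -2]]

def pure_nash(A, B):
    eq = []
    for i in range(len(A)):
        for j in range(len(A[0])):
            row_best = all(A[i][j] >= A[k][j] for k in range(len(A)))
            col_best = all(B[i][j] >= B[i][k] for k in range(len(A[0])))
            if row_best and col_best:   # neither player gains by deviating
                eq.append((i, j))
    return eq

print(pure_nash(A, B))   # [(1, 1)]: mutual defection is the only equilibrium
```

Mutual cooperation at (0, 0) fails the test because each player improves from −1 to 0 by deviating, which is the familiar dilemma.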
The payoffs of the game are generally taken to represent the utility of individual players. A prototypical paper on game theory in economics begins by presenting a game that is an abstraction of a particular economic situation. One or more solution concepts are chosen, and the author demonstrates which strategy sets in the presented game are equilibria of the appropriate type. Economists and business professors suggest two primary uses (noted above): descriptive and prescriptive. The Chartered Institute of Procurement & Supply (CIPS) promotes knowledge and use of game theory within the context of business procurement. CIPS and TWS Partners have conducted a series of surveys designed to explore the understanding, awareness and application of game theory among procurement professionals. Some of the main findings in their third annual survey (2019) include:
application of game theory to procurement activity has increased – at the time it was at 19% across all survey respondents
65% of participants predict that use of game theory applications will grow
70% of respondents say that they have "only a basic or a below basic understanding" of game theory
20% of participants had undertaken on-the-job training in game theory
50% of respondents said that new or improved software solutions were desirable
90% of respondents said that they do not have the software they need for their work.
Project management Sensible decision-making is critical for the success of projects. In project management, game theory is used to model the decision-making process of players, such as investors, project managers, contractors, sub-contractors, governments and customers. Quite often, these players have competing interests, and sometimes their interests are directly detrimental to other players, making project management scenarios well-suited to be modeled by game theory. Piraveenan (2019) in his review provides several examples where game theory is used to model project management scenarios.
For instance, an investor typically has several investment options, and each option will likely result in a different project, and thus one of the investment options has to be chosen before the project charter can be produced. Similarly, any large project involving subcontractors, for instance a construction project, has a complex interplay between the main contractor (the project manager) and subcontractors, or among the subcontractors themselves, which typically has several decision points. For example, if there is an ambiguity in the contract between the contractor and subcontractor, each must decide how hard to push their case without jeopardizing the whole project, and thus their own stake in it. Similarly, when projects from competing organizations are launched, the marketing personnel have to decide the best timing and strategy to market the project, or its resultant product or service, so that it can gain maximum traction in the face of competition. In each of these scenarios, the required decisions depend on the decisions of other players who, in some way, have interests that compete with those of the decision-maker, and thus can ideally be modeled using game theory. Piraveenan summarises that two-player games are predominantly used to model project management scenarios, and based on the identity of these players, five distinct types of games are used in project management:
Government-sector–private-sector games (games that model public–private partnerships)
Contractor–contractor games
Contractor–subcontractor games
Subcontractor–subcontractor games
Games involving other players
In terms of types of games, cooperative as well as non-cooperative, normal-form as well as extensive-form, and zero-sum as well as non-zero-sum games are used to model various project management scenarios.
Political science The application of game theory to political science is focused in the overlapping areas of fair division, political economy, public choice, war bargaining, positive political theory, and social choice theory. In each of these areas, researchers have developed game-theoretic models in which the players are often voters, states, special interest groups, and politicians. Early examples of game theory applied to political science are provided by Anthony Downs. In his 1957 book An Economic Theory of Democracy, he applies the Hotelling firm location model to the political process. In the Downsian model, political candidates commit to ideologies on a one-dimensional policy space. Downs first shows how the political candidates will converge to the ideology preferred by the median voter if voters are fully informed, but then argues that voters choose to remain rationally ignorant which allows for candidate divergence. Game theory was applied in 1962 to the Cuban Missile Crisis during the presidency of John F. Kennedy. It has also been proposed that game theory explains the stability of any form of political government. Taking the simplest case of a monarchy, for example, the king, being only one person, does not and cannot maintain his authority by personally exercising physical control over all or even any significant number of his subjects. Sovereign control is instead explained by the recognition by each citizen that all other citizens expect each other to view the king (or other established government) as the person whose orders will be followed. Coordinating communication among citizens to replace the sovereign is effectively barred, since conspiracy to replace the sovereign is generally punishable as a crime. 
Thus, in a process that can be modeled by variants of the prisoner's dilemma, during periods of stability no citizen will find it rational to move to replace the sovereign, even if all the citizens know they would be better off if they were all to act collectively. A game-theoretic explanation for democratic peace is that public and open debate in democracies sends clear and reliable information regarding their intentions to other states. In contrast, it is difficult to know the intentions of nondemocratic leaders, what effect concessions will have, and if promises will be kept. Thus there will be mistrust and unwillingness to make concessions if at least one of the parties in a dispute is a non-democracy. However, game theory predicts that two countries may still go to war even if their leaders are cognizant of the costs of fighting. War may result from asymmetric information; two countries may have incentives to misrepresent the amount of military resources they have on hand, rendering them unable to settle disputes agreeably without resorting to fighting. Moreover, war may arise because of commitment problems: if two countries wish to settle a dispute via peaceful means, but each wishes to go back on the terms of that settlement, they may have no choice but to resort to warfare. Finally, war may result from issue indivisibilities. Game theory could also help predict a nation's responses when there is a new rule or law to be applied to that nation. One example is Peter John Wood's (2013) research looking into what nations could do to help reduce climate change. Wood thought this could be accomplished by making treaties with other nations to reduce greenhouse gas emissions. However, he concluded that this idea could not work because it would create a prisoner's dilemma for the nations. Biology Unlike those in economics, the payoffs for games in biology are often interpreted as corresponding to fitness.
In addition, the focus has been less on equilibria that correspond to a notion of rationality and more on ones that would be maintained by evolutionary forces. The best-known equilibrium in biology is known as the evolutionarily stable strategy (ESS), first introduced by John Maynard Smith and George R. Price in 1973. Although its initial motivation did not involve any of the mental requirements of the Nash equilibrium, every ESS is a Nash equilibrium. In biology, game theory has been used as a model to understand many different phenomena. It was first used to explain the evolution (and stability) of the approximate 1:1 sex ratios: Ronald Fisher suggested that the 1:1 sex ratios are a result of evolutionary forces acting on individuals who could be seen as trying to maximize their number of grandchildren. Additionally, biologists have used evolutionary game theory and the ESS to explain the emergence of animal communication. The analysis of signaling games and other communication games has provided insight into the evolution of communication among animals. For example, the mobbing behavior of many species, in which a large number of prey animals attack a larger predator, seems to be an example of spontaneous emergent organization. Ants have also been shown to exhibit feed-forward behavior akin to fashion (see Paul Ormerod's Butterfly Economics). Biologists have used the game of chicken to analyze fighting behavior and territoriality. According to Maynard Smith, in the preface to Evolution and the Theory of Games, "paradoxically, it has turned out that game theory is more readily applied to biology than to the field of economic behaviour for which it was originally designed". Evolutionary game theory has been used to explain many seemingly incongruous phenomena in nature. One such phenomenon is known as biological altruism. This is a situation in which an organism appears to act in a way that benefits other organisms and is detrimental to itself.
This is distinct from traditional notions of altruism because such actions are not conscious, but appear to be evolutionary adaptations to increase overall fitness. Examples can be found in species ranging from vampire bats that regurgitate blood they have obtained from a night's hunting and give it to group members who have failed to feed, to worker bees that care for the queen bee for their entire lives and never mate, to vervet monkeys that warn group members of a predator's approach, even when it endangers that individual's chance of survival. All of these actions increase the overall fitness of a group, but occur at a cost to the individual. Evolutionary game theory explains this altruism with the idea of kin selection. Hamilton's rule explains the evolutionary rationale behind this selection with the inequality rb > c: the cost c to the altruist must be less than the benefit b to the recipient multiplied by the coefficient of relatedness r. Altruists discriminate between the individuals they help and favor relatives. The more closely related two organisms are, the more altruism is favored, because they share many of the same alleles. This means that the altruistic individual, by ensuring that the alleles of its close relative are passed on through survival of its offspring, can forgo the option of having offspring itself because the same number of alleles are passed on. For example, helping a sibling (in diploid animals) has a coefficient of 1/2, because (on average) an individual shares half of the alleles in its sibling's offspring. Ensuring that enough of a sibling's offspring survive to adulthood precludes the necessity of the altruistic individual producing offspring. 
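Hamilton's rule can be checked mechanically. The following sketch (with hypothetical cost and benefit numbers, not drawn from the article) tests whether rb > c holds for a few relatedness coefficients:

```python
# Illustrative sketch of Hamilton's rule; the cost/benefit values are
# hypothetical examples, not empirical data.

def altruism_favored(r, b, c):
    """Hamilton's rule: kin-directed altruism can evolve when r * b > c,
    where r is the coefficient of relatedness, b the benefit to the
    recipient, and c the cost to the altruist."""
    return r * b > c

# Full sibling in a diploid species: r = 1/2, so the benefit must be
# more than twice the cost for helping to be favored.
assert altruism_favored(r=0.5, b=3.0, c=1.0)        # 1.5 > 1.0
assert not altruism_favored(r=0.5, b=1.5, c=1.0)    # 0.75 > 1.0 fails

# First cousin: r = 1/8, so the benefit must exceed eight times the cost.
assert not altruism_favored(r=0.125, b=3.0, c=1.0)
```

The rule captures why altruism toward close kin is far more common than altruism toward distant relatives: as r shrinks, the benefit required to offset a given cost grows proportionally.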
The coefficient values depend heavily on the scope of the playing field; for example, if the choice of whom to favor includes all genetic living things, not just all relatives, and we assume the discrepancy between all humans only accounts for approximately 1% of the diversity in the playing field, a coefficient that was 1/2 in the smaller field becomes 0.995. Similarly, if it is considered that information other than that of a genetic nature (e.g. epigenetics, religion, science, etc.) persisted through time, the playing field becomes larger still, and the discrepancies smaller. Computer science and logic Game theory has come to play an increasingly important role in logic and in computer science. Several logical theories have a basis in game semantics. In addition, computer scientists have used games to model interactive computations. Also, game theory provides a theoretical basis to the field of multi-agent systems. Separately, game theory has played a role in online algorithms; in particular, the k-server problem, which has in the past been referred to as games with moving costs and request-answer games. Yao's principle is a game-theoretic technique for proving lower bounds on the computational complexity of randomized algorithms, especially online algorithms. The emergence of the Internet has motivated the development of algorithms for finding equilibria in games, markets, computational auctions, peer-to-peer systems, and security and information markets. Algorithmic game theory and within it algorithmic mechanism design combine computational algorithm design and analysis of complex systems with economic theory. Philosophy Game theory has been put to several uses in philosophy. Responding to two papers by W. V. O. Quine, David Lewis used game theory to develop a philosophical account of convention. In so doing, he provided the first analysis of common knowledge and employed it in analyzing play in coordination games. 
In addition, he first suggested that one can understand meaning in terms of signaling games. This later suggestion has been pursued by several philosophers since Lewis. Following Lewis's game-theoretic account of conventions, Edna Ullmann-Margalit (1977) and Bicchieri (2006) have developed theories of social norms that define them as Nash equilibria that result from transforming a mixed-motive game into a coordination game. Game theory has also challenged philosophers to think in terms of interactive epistemology: what it means for a collective to have common beliefs or knowledge, and what are the consequences of this knowledge for the social outcomes resulting from the interactions of agents. Philosophers who have worked in this area include Bicchieri (1989, 1993), Skyrms (1990), and Stalnaker (1999). In ethics, some authors (most notably David Gauthier, Gregory Kavka, and Jean Hampton) have attempted to pursue Thomas Hobbes' project of deriving morality from self-interest. Since games like the prisoner's dilemma present an apparent conflict between morality and self-interest, explaining why cooperation is required by self-interest is an important component of this project. This general strategy is a component of the general social contract view in political philosophy (for examples, see and ). Other authors have attempted to use evolutionary game theory in order to explain the emergence of human attitudes about morality and corresponding animal behaviors. These authors look at several games including the prisoner's dilemma, stag hunt, and the Nash bargaining game as providing an explanation for the emergence of attitudes about morality (see, e.g., and ). Retail and consumer product pricing Game theory applications are often used in the pricing strategies of retail and consumer markets, particularly for the sale of inelastic goods. 
With retailers constantly competing against one another for consumer market share, it has become a fairly common practice for retailers to discount certain goods, intermittently, in the hopes of increasing foot traffic in brick-and-mortar locations (website visits for e-commerce retailers) or increasing sales of ancillary or complementary products. Black Friday, a popular shopping holiday in the US, is when many retailers focus on optimal pricing strategies to capture the holiday shopping market. In the Black Friday scenario, retailers using game theory applications typically ask "what is the dominant competitor's reaction to me?" In such a scenario, the game has two players: the retailer and the consumer. The retailer is focused on an optimal pricing strategy, while the consumer is focused on the best deal. In this closed system, there often is no dominant strategy as both players have alternative options. That is, retailers can find a different customer, and consumers can shop at a different retailer. Given the market competition that day, however, the dominant strategy for retailers lies in outperforming competitors. The open system assumes multiple retailers selling similar goods, and a finite number of consumers demanding the goods at an optimal price. A blog by a Cornell University professor provided an example of such a strategy, when Amazon priced a Samsung TV $100 below retail value, effectively undercutting competitors. Amazon made up part of the difference by increasing the price of HDMI cables, as it has been found that consumers are less price-sensitive when it comes to the sale of secondary items. Retail markets continue to evolve strategies and applications of game theory when it comes to pricing consumer goods. 
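The competitor-versus-competitor side of the Black Friday game can be sketched as a small payoff matrix. The profit numbers below are hypothetical, chosen only to illustrate how discounting can be a dominant strategy even when mutual restraint would leave both retailers better off:

```python
# Two rival retailers each choose to Discount or Hold prices on Black Friday.
# Payoffs (profits) are hypothetical, illustrative values only.
payoffs = {
    # (row choice, column choice): (row profit, column profit)
    ("discount", "discount"): (3, 3),
    ("discount", "hold"):     (6, 1),
    ("hold",     "discount"): (1, 6),
    ("hold",     "hold"):     (5, 5),
}

def best_response(opponent_choice):
    """Return the row retailer's profit-maximizing reply to a given move."""
    return max(("discount", "hold"),
               key=lambda mine: payoffs[(mine, opponent_choice)][0])

# "Discount" is dominant here: it is the best reply to either opponent move,
# even though (hold, hold) would give both retailers a higher profit.
assert best_response("discount") == "discount"
assert best_response("hold") == "discount"
```

With these assumed payoffs the game has the prisoner's-dilemma structure: both retailers discounting is the unique equilibrium, which matches the observation that the dominant strategy on the day lies in outperforming competitors.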
Comparisons between simulations in a controlled environment and real-world retail experience show that the applications of such strategies are more complex, as each retailer has to find an optimal balance between pricing, supplier relations, brand image, and the potential to cannibalize the sale of more profitable items. Epidemiology Since the decision to take a vaccine for a particular disease is often made by individuals, who may consider a range of factors and parameters in making this decision (such as the incidence and prevalence of the disease, perceived and real risks associated with contracting the disease, mortality rate, perceived and real risks associated with vaccination, and financial cost of vaccination), game theory has been used to model and predict vaccination uptake in a society. In popular culture Based on the 1998 book by Sylvia Nasar, the life story of game theorist and mathematician John Nash was turned into the 2001 biopic A Beautiful Mind, starring Russell Crowe as Nash. The 1959 military science fiction novel Starship Troopers by Robert A. Heinlein mentioned "games theory" and "theory of games". In the 1997 film of the same name, the character Carl Jenkins described his military intelligence assignment as "games and theory". The 1964 film Dr. Strangelove satirizes game theoretic ideas about deterrence theory. For example, nuclear deterrence depends on the threat to retaliate catastrophically if a nuclear attack is detected. A game theorist might argue that such threats can fail to be credible, in the sense that they can lead to subgame imperfect equilibria. The movie takes this idea one step further, with the Soviet Union irrevocably committing to a catastrophic nuclear response without making the threat public. 
The 1980s power pop band Game Theory was founded by singer/songwriter Scott Miller, who described the band's name as alluding to "the study of calculating the most appropriate action given an adversary... to give yourself the minimum amount of failure". Liar Game, a 2005 Japanese manga and 2007 television series, presents the main characters in each episode with a game or problem that is typically drawn from game theory, as demonstrated by the strategies applied by the characters. The 1974 novel Spy Story by Len Deighton explores elements of Game Theory in regard to cold war army exercises. The 2008 novel The Dark Forest by Liu Cixin explores the relationship between extraterrestrial life, humanity, and game theory. The prime antagonist Joker in the movie The Dark Knight presents game theory concepts—notably the prisoner's dilemma in a scene where he asks passengers in two different ferries to bomb the other one to save their own. See also Applied ethics Chainstore paradox Collective intentionality Glossary of game theory Intra-household bargaining Kingmaker scenario Law and economics Outline of artificial intelligence Parrondo's paradox Precautionary principle Quantum refereed game Risk management Self-confirming equilibrium Tragedy of the commons Wilson doctrine (economics) Lists List of cognitive biases List of emerging technologies List of games in game theory Notes References and further reading Textbooks and general references . , Description. . Suitable for undergraduate and business students. https://b-ok.org/book/2640653/e56341. . Suitable for upper-level undergraduates. . Suitable for advanced undergraduates. Published in Europe as . . Presents game theory in formal way suitable for graduate level. Joseph E. Harrington (2008) Games, strategies, and decision making, Worth, . Textbook suitable for undergraduates in applied fields; numerous examples, fewer formalisms in concept presentation. 
Maschler, Michael; Solan, Eilon; Zamir, Shmuel (2013), Game Theory, Cambridge University Press, . Undergraduate textbook. . Suitable for a general audience. . Undergraduate textbook. . A modern introduction at the graduate level. . A leading textbook at the advanced undergraduate level. Consistent treatment of game types usually claimed by different applied fields, e.g. Markov decision processes. Historically important texts reprinted edition: Shapley, L.S. (1953), A Value for n-person Games, In: Contributions to the Theory of Games volume II, H. W. Kuhn and A. W. Tucker (eds.) Shapley, L.S. (1953), Stochastic Games, Proceedings of National Academy of Science Vol. 39, pp. 1095–1100. English translation: "On the Theory of Games of Strategy," in A. W. Tucker and R. D. Luce, ed. (1959), Contributions to the Theory of Games, v. 4, p. 42. Princeton University Press. Other print references Allan Gibbard, "Manipulation of voting schemes: a general result", Econometrica, Vol. 41, No. 4 (1973), pp. 587–601. , (2002 edition) . A layman's introduction. . External links James Miller (2015): Introductory Game Theory Videos. Paul Walker: History of Game Theory Page. David Levine: Game Theory. Papers, Lecture Notes and much more stuff. Alvin Roth: — Comprehensive list of links to game theory information on the Web Adam Kalai: Game Theory and Computer Science — Lecture notes on Game Theory and Computer Science Mike Shor: GameTheory.net — Lecture notes, interactive illustrations and other information. Jim Ratliff's Graduate Course in Game Theory (lecture notes). Don Ross: Review Of Game Theory in the Stanford Encyclopedia of Philosophy. Bruno Verbeek and Christopher Morris: Game Theory and Ethics Elmer G. Wiens: Game Theory — Introduction, worked examples, play online two-person zero-sum games. Marek M. Kaminski: Game Theory and Politics — Syllabuses and lecture notes for game theory and political science. 
Websites on game theory and social interactions Kesten Green's — See Papers for evidence on the accuracy of forecasts from game theory and other methods. McKelvey, Richard D., McLennan, Andrew M., and Turocy, Theodore L. (2007) Gambit: Software Tools for Game Theory. Benjamin Polak: Open Course on Game Theory at Yale videos of the course Benjamin Moritz, Bernhard Könsgen, Danny Bures, Ronni Wiersch, (2007) Spieltheorie-Software.de: An application for Game Theory implemented in JAVA. Antonin Kucera: Stochastic Two-Player Games. Yu-Chi Ho: What is Mathematical Game Theory; What is Mathematical Game Theory (#2); What is Mathematical Game Theory (#3); What is Mathematical Game Theory (#4)-Many person game theory; What is Mathematical Game Theory ?( #5) – Finale, summing up, and my own view Artificial intelligence Formal sciences Mathematical economics John von Neumann
West Area Computers
The West Computers (West Area Computing Unit, West Area Computers) were the African American female mathematicians who worked as human computers at the Langley Research Center of NACA (predecessor of NASA) from 1943 through 1958. These women were a subset of the hundreds of female mathematicians who began careers in aeronautical research during World War II. To offset the loss of manpower as men joined the war effort, many U.S. organizations began hiring, and actively recruiting, more women and minorities during the 1940s. In 1935, the Langley Research Center had five female human computers on staff. By 1946, the Langley Research Center had recruited about 400 female human computers. The West Computers were originally subject to Virginia's Jim Crow laws and got their name because they worked at Langley's West Area, while the white mathematicians worked in the East section. In order to work at NACA, the applicants had to pass a civil service exam. Despite Executive Order 8802 outlawing discriminatory hiring practices in defense industries, the Jim Crow laws of Virginia prevailed in practice, making it more difficult for African American women to be hired than white women. Black applicants also had to complete a chemistry course at the nearby Hampton Institute. Even though they did the same work as the white female human computers at Langley, the West Computers were required to use segregated work areas, bathrooms, and cafeterias. In 1958, when the NACA made the transition to NASA, segregated facilities, including the West Computing office, were abolished. The work of the human computers at Langley varied, but most of it involved reading, analyzing, and plotting data, all done by hand. They would work one-on-one with engineers or in computing sections. 
The computers played major roles in aircraft testing, supersonic flight research, and the space program. Although the female computers were as skilled as their male counterparts, they were officially hired as "subprofessionals" while males held "professional" status. The professional status allowed newly-hired males to start at a higher annual salary than newly-hired females, whose pay was limited by their subprofessional title. According to an unpublished study by Beverly E. Golemba of Langley's early computers, a number of other women did not know about the West Computers. That said, both the black and white women Golemba interviewed recalled that when computers from both groups were assigned to a project together, "everyone worked well together." On November 8, 2019, the Congressional Gold Medal was awarded "In recognition of all the women who served as computers, mathematicians, and engineers at the National Advisory Committee for Aeronautics and the National Aeronautics and Space Administration (NASA) between the 1930s and the 1970s." Notable members In 1949, Dorothy Vaughan was put in charge of supervising the West Computers. She was the first African American manager at NASA. Vaughan was a mathematician who worked at Langley from 1943 through her retirement in 1971. She was an excellent programmer in FORTRAN, a popular computer programming language that is especially suited to numeric computation and scientific computing. Mary Jackson was involved in fluid dynamics (air streams) and flight tests. Her job was to get relevant data from experiments and conduct tests. Mathematician Katherine Johnson, who in 2015 was named a Presidential Medal of Freedom recipient, joined the West Area Computing group in 1953. She was subsequently reassigned to Langley's Flight Research Division, where she performed notable work including providing the trajectory analysis for astronaut John Glenn's MA-6 Project Mercury orbital spaceflight. 
Johnson started her career working with flight test data; later, a portion of her mathematical work and research was compiled into a lecture series called Notes on Space Technology, which was taught to many students. These lectures were given by engineers who later formed the Space Task Group, which guided NASA's early human spaceflight efforts. Mary Jackson also worked in the West Area Computing Unit, and the work of all three women (Vaughan, Johnson, and Jackson) is featured in the 2016 film Hidden Figures. Note that this film incorrectly depicts NASA as segregated; desegregation occurred in 1958 in the transition from NACA to NASA. Protesting Segregation Some of the West Computers engaged in small acts of protest against segregation at Langley. Many small protests occurred in the segregated dining room, since colored women were forbidden to enter the white cafeteria. Miriam Mann repeatedly removed signs denoting where "coloured girls" could sit for their meals. Both Katherine Johnson and Mary Winston Jackson refused to use the segregated cafeterias and exclusively ate at their desks. Katherine Johnson also refused to use segregated restrooms, since they were on the opposite side of the campus, and instead used an unmarked restroom. After discovering that the males on her team were attending meetings to share important information about their current tasks, Katherine Johnson also began attending these meetings despite no other women being invited to participate. She participated heavily during these meetings by frequently asking questions and engaging in discussions. Christine Darden, after demonstrating that she possessed all the skills of the male engineers, if not more, asked to be moved to the engineering pool rather than continue as a computer, and became an engineer. See also Melba Roy Mouton References Sex segregation
Empire (1977 video game)
Empire is a 1977 turn-based wargame with simple rules. The game was conceived by Walter Bright starting in 1971, based on various war movies and board games, notably Battle of Britain and Risk. The game was ported to many platforms in the 1970s and 80s. Several commercial versions were also released, often adding basic graphics to the originally text-based user interface. The basic gameplay is strongly reminiscent of several later games, notably Civilization, which was partly inspired by Empire. Gameplay At the start of a new game, a random game map is generated on a square grid basis. The map normally consists of numerous islands, although a variety of algorithms were used in different versions of the game, producing different styles of maps. Randomly distributed on the land are a number of cities. The players start the game controlling one of these cities each. The area immediately around the city is visible, but the rest of the world map is blacked out. The city can be set to build armies, aircraft, and various types of ships. Cities take a particular number of turns to produce the various units, with the armies typically being the most rapid. Players move these units on the map to explore the world, typically seeing the land within a one square radius around the unit. As they explore they will find other cities, initially independent, and can capture them with their armies. The captured cities are then set to produce new units as well. As the player's collection of cities expands, they are able to set aside some to produce more time-consuming types, like battleships. Ultimately they have to use these forces to take all the cities on the map, including those of the other players, who are often run by the computer's game engine. History and development Walter Bright created Empire as a board wargame as a child, inspired by Risk, Stratego, and the film Battle of Britain. 
He found gameplay tedious, but later realized that a computer could handle the gameplay and serve as the computer opponent. The initial version of computer Empire was written in BASIC, before being re-written around 1977 in the FORTRAN programming language for the PDP-10 computer at Caltech. This version was spread virally to other PDP-10s, which were common timesharing systems at the time. Later, Bright recoded this in assembly language on a Heathkit H11 and made it available commercially. He sold two copies. At some point, someone broke through the security systems at Caltech and took a copy of the source code for the FORTRAN/PDP-10 version of the game. This code was continually modified, being passed around from person to person. Eventually, it was found on a computer in Massachusetts by Herb Jacobs and Dave Mitton. They ported the code to the VAX/VMS operating system and, under the alias of "Mario DeNobili and Paulson", submitted the program to DECUS, a large users' group. DECUS programs were often installed on new DEC computers at the time of delivery, and so Empire propagated further. Eventually, Bright heard of this, and in 1983 contacted DECUS, who subsequently credited Bright in the catalog description of the program and re-added his name to the source code. In 1984, Bob Norby from Fort Lauderdale, Florida, ported the DECUS version from the VAX to the PC as shareware. In 1987, Chuck Simmons re-implemented the game in C using the UNIX curses library for its support of many character-cell terminals. Eric S. Raymond maintains a copy of this version, which has been shared with open-source projects. In 1996, Computer Gaming World declared the original Empire the 8th-best computer game ever released. The magazine's wargame columnist Terry Coleman named it his pick for the second-best computer wargame released by late 1996, behind Panzer General. Empire: Wargame of the Century After this, Bright recoded the game in C on an IBM PC. 
With low commercial expectations, he submitted an announcement to the January 1984 issue of BYTE magazine's "Software Received" section, and received a flood of orders. After writing to many software companies (including Brøderbund, Sirius Software, Simon & Schuster, subLOGIC, Epyx and MicroProse), he licensed the game to a small software company named Interstel. Mark Baldwin was brought in to co-author the game, redesigning it for the commercial market. Starting around 1987, Empire: Wargame of the Century was produced for the Atari ST, Amiga, Commodore 64, Apple II, Macintosh and DOS. Empire Deluxe In the early 1990s, Mark Baldwin and Bob Rakowsky rewrote the game, calling it Empire Deluxe for DOS, Mac OS, and Windows, released in 1993 with New World Computing as the publisher. Empire Deluxe retained the old gameplay of Interstel's version in its standard game, while adding a basic version for beginners, and an advanced game with new units such as the Bomber and Armor and map sizes up to 200x200. An expansion pack, Empire Deluxe Scenarios, was produced later in 1993, including a map and scenario statistics tool, a map randomiser tool (as random maps were present in the Interstel version, but lacking from Empire Deluxe), upgrade patches for both DOS and Windows versions and a collection of 37 scenarios (with accompanying maps) from "celebrity" designers, many of them famous in the games industry, including Will Wright, Jerry Pournelle, Jim Dunnigan, Johnny Wilson (Computer Gaming World editor), Gordon Walton, Don Gilman (Harpoon series architect), Trevor Sorensen (Star Fleet series designer), and the game's authors Mark Baldwin and Bob Rakosky. Computer Gaming World in 1993 called Empire Deluxe "a welcome addiction (sic) to the library of every serious strategy gamer". A 1993 survey in the magazine of wargames gave the game four stars out of five, noting flaws but stating that "Yet, I keep on playing". 
It enjoyed great success, and was noted as one of Gamespy's Greatest Games of All Time. Empire Deluxe was reviewed in 1993 in Dragon #195 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 4 out of 5 stars. In 1994, PC Gamer US named Empire Deluxe the 35th best computer game ever. The editors called it "an elegant and adaptable game system that [...] allows almost endless replayability." Computer Gaming World in 1993 stated that Empire Deluxe Scenarios offered "a lot of value" to the game's fans. Killer Bee Software In the Winter of 2002, Mark Kinkead of Killer Bee Software purchased the rights for Empire Deluxe from Mark Baldwin and Bob Rakowsky, and in 2003 produced a new version called Empire Deluxe Internet Edition a.k.a. EDIE for Windows. This was essentially a port of the code Baldwin and Rakowsky produced in 1993, with few changes, such as a slightly increased map size (255x255), but did not add any new rules. A year later, Kinkead would create an "Enhanced" version with new units and rules, including artillery, engineers and orbital units. The company produced several other editions for Windows, Android, and iOS. Sequel In 1995, New World Computing published a sequel named Empire II: The Art of War. While the original had been a turn-based strategy, Empire II was shifted towards turn-based tactics: there was no more empire-building and production of units, but the complexity and realism of battles were enhanced with features such as morale rules and various degrees of damage. The playable campaigns consisted of a collection of diverse historical or fictional battles. The game editor feature was enhanced by allowing the user to design not only new maps and campaigns, but also new units with new graphics and sounds. Legacy There are ports and source code for modern PC operating systems available for free download at Walter Bright's Classic Empire webpage. 
References External links Walter Bright's Empire website EDEE Publisher Page - Killer Bee Software DOS version of Empire: Wargame of the Century (port by Bob Norby) at an abandonware site Empire for the PDP-11 Source Code 1977 video games Computer wargames Mainframe games Turn-based strategy video games Video games with textual graphics Multiplayer and single-player video games Amiga games Apple II games Atari ST games Commodore 64 games DOS games Linux games Classic Mac OS games Windows games Play-by-email video games Commercial video games with freely available source code Video games developed in the United States
Standard Interchange Protocol
The Standard Interchange Protocol is a proprietary standard for communication between library computer systems and self-service circulation terminals. Although owned and controlled by 3M, the protocol is published and is widely used by other vendors. Version 2.0 of the protocol, known as "SIP2", is a de facto standard for library self-service applications. History SIP version 1.0 was published by 3M in 1993. The first version of the protocol supported basic check in and check out operations, but had minimal support for more advanced operations. Version 2.0 of the protocol was published in 2006 and added support for flexible, more user-friendly notifications, and for the automated processing of payments for late fees. SIP2 was widely adopted by library automation vendors, including ODILO, Bibliotheca, Nedap, Checkpoint, Envisionware, FE Technologies, Meescan and open source integrated library system software such as Koha and Evergreen. The standard was the basis for the NISO Circulation Interchange Protocol (NCIP) standard which is eventually intended to replace it. Description SIP is a simple protocol in which requests to perform operations are sent over a connection, and responses are sent in return. The protocol explicitly does not define how a connection between the two devices is established; it is limited to specifying the format of the messages sent over the connection. There are no "trial" transactions; each operation will be attempted immediately and will either be permitted or not. The protocol specifies messages to check books in and out, to manage fee payments, to request holds and renewals, and to carry out the other basic circulation operations of a library. Encryption and authentication SIP has no built in encryption, so steps need to be taken to send the connection through some sort of encrypted tunnel. Two common methods are to use either stunnel or SSH to add a layer of encryption and/or an extra level of authentication. 
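SIP2 messages are flat ASCII strings: a two-digit command identifier, fixed-length fields, then tagged variable-length fields, optionally ending with an error-detection checksum. The sketch below illustrates the checksum scheme commonly implemented by SIP2 clients (sum the ASCII values of the message through the "AZ" field identifier, then append the 16-bit two's complement as four hex digits); the example message and field values are hypothetical:

```python
# Illustrative sketch of SIP2 message framing and its error-detection
# checksum; the example "99" SC Status request below is hypothetical.

def sip2_checksum(message_through_az: str) -> str:
    """Sum the ASCII values of every character up to and including the
    'AZ' checksum field identifier, then take the 16-bit two's complement
    and render it as four uppercase hex digits."""
    total = sum(ord(ch) for ch in message_through_az) & 0xFFFF
    return format((-total) & 0xFFFF, "04X")

# Hypothetical SC Status request: command "99", status code, max print
# width, protocol version, a sequence number (AY) and the AZ checksum tag.
body = "9900302.00AY1AZ"
message = body + sip2_checksum(body) + "\r"

# Receiver-side check: the character sum of the body plus the numeric
# checksum value is congruent to 0 modulo 2**16.
assert (sum(ord(c) for c in body) + int(sip2_checksum(body), 16)) & 0xFFFF == 0
```

Because the checksum only detects transmission errors, it provides no integrity or confidentiality guarantees; this is why, as noted above, deployments tunnel SIP2 over stunnel or SSH.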
References Library automation Network protocols
WIPO Copyright and Performances and Phonograms Treaties Implementation Act
The WIPO Copyright and Performances and Phonograms Treaties Implementation Act, is a part of the Digital Millennium Copyright Act (DMCA), a 1998 U.S. law. It has two major portions, Section 102, which implements the requirements of the WIPO Copyright Treaty, and Section 103, which arguably provides additional protection against the circumvention of copy prevention systems (with some exceptions) and prohibits the removal of copyright management information. Section 102 Section 102 gives the act its name, which is based on the requirements of the WIPO Copyright Treaty concluded at Geneva, Switzerland, on 20 December 1996. It modifies US copyright law to include works produced in the countries which sign the following treaties: the Universal Copyright Convention the Geneva Phonograms Convention (Convention for the Protection of Producers of Phonograms Against Unauthorized Duplication of Their Phonograms, Geneva, Switzerland, 29 October 1971) the Berne Convention for the Protection of Literary and Artistic Works the WTO Agreement (as defined in the Uruguay Round Agreements Act) the WIPO Copyright Treaty signed at Geneva, Switzerland on 20 December 1996 the WIPO Performances and Phonograms Treaty concluded at Geneva, Switzerland on 20 December 1996 any other copyright treaty to which the United States is a party Section 103 Section 103 provoked most of the controversy which resulted from the act. It is often called DMCA anti-circumvention provisions. It restricts the ability to make, sell, or distribute devices which circumvent Digital Rights Management systems, adding Chapter 12 (sections 1201 through 1205) to US copyright law. Section 1201 makes it illegal to: (1) "circumvent a technological measure that effectively controls access to a work" except as allowed after rulemaking procedures administered by the Register of Copyrights every three years. 
(The exemptions made through the three-yearly review do not apply to the supply of circumvention devices, only to the act of circumvention itself.) (2) "manufacture, import, offer to the public, provide, or otherwise traffic in" a device, service or component which is primarily intended to circumvent "a technological measure that effectively controls access to a work," and which either has limited commercially significant other uses or is marketed for the anti-circumvention purpose. (3) "manufacture, import, offer to the public, provide, or otherwise traffic in" a device, service or component which is primarily intended to circumvent "protection afforded by a technological measure that effectively protects a right of a copyright owner," and which either has limited commercially significant other uses or is marketed for the anti-circumvention purpose. (4) sell any VHS VCR, 8 mm analogue video tape recorder, Beta video recorder or other analogue video cassette recorder which is not affected by automatic gain control copy protection (the basis of Macrovision), with some exceptions. The act creates a distinction between access-control measures and copy-control measures. An access-control measure limits access to the contents of the protected work, for example by encryption. A copy-control measure only limits the ability of a user to copy the work. Though the act makes it illegal to distribute technology to circumvent either type of copy protection, only the action of circumventing access-control measures is illegal. The action of circumventing a copy-control measure is not prohibited, though any copies made are still subject to other copyright law. The section goes on to limit its apparent reach. 
The statute says that: it will not affect rights, remedies, limitations, or defenses to copyright infringement, including fair use; manufacturers are not required to design their products specifically to accommodate copy protection systems; "nothing in this section shall enlarge or diminish any rights of free speech or the press for activities using consumer electronics, telecommunications, or computing products"; circumvention for law enforcement, intelligence collection, and other government activities is allowed; reverse engineering to achieve interoperability of computer programs is allowed; encryption research is allowed; circumvention of systems that prevent minors from accessing some internet content is allowed; circumvention to protect personal information by disabling part of a system is allowed; and security testing is allowed. In addition, the statute has a "primary intent" requirement, which creates evidentiary problems for those seeking to prove a violation. For a violation to be proved, it must be shown that the alleged violator primarily intended to circumvent copyright protection. However, if the primary intent is to achieve interoperability of software or devices, the circumvention is permitted and no violation has occurred. Section 1202 prohibits the removal of copyright management information. On balance, it is difficult to say whether the Act expands copyright enforcement powers or limits them. Because it does not affect the underlying substantive copyright protections, the Act can be viewed as merely changing the penalties and procedures available for enforcement. Because it grants safe harbors in various situations for research, reverse engineering, circumvention, security, and protection of minors, the Act in many ways limits the scope of copyright enforcement. Section 103 cases Judicial enforcement of the statute and the treaty has not been nearly as far-reaching as was originally hoped by its advocates. 
Here are a handful of notable instances where advocates of proprietary encryption techniques sought to use the law to their advantage: DVDs are often encrypted with the Content Scrambling System (CSS). To play a CSS DVD, it must be decrypted. Jon Johansen and two anonymous colleagues wrote DeCSS, a program that performed this decryption, so they could watch DVDs in Linux. US servers distributing this software were asked to stop on the theory that they were violating this law. Mr. Johansen was tried in his native Norway under that country's analogous statute. The Norwegian courts ultimately acquitted Mr. Johansen because he was acting in a manner consistent with interoperability and he could not be held responsible for others' motives. The software is now widely available. 2600 Magazine was sued under this law for distributing a list of links to websites where DeCSS could be downloaded. The court found that the "primary purpose" of the defendants' actions was to promote redistribution of DVDs, in part because the defendants admitted as much. See Universal v. Reimerdes, 111 F. Supp. 2d 346 (S.D.N.Y. 2000). The finding was upheld by the Second Circuit Court of Appeals on the specific facts of the case, but the appellate court left open the possibility that different facts could change the result. See Universal City Studios, Inc. v. Corley, 273 F.3d 429 (2d Cir. 2001), at footnotes 5 and 16. A similar program, also by Johansen, decrypted iTunes Music Store files so they could be played on Linux. Apple had the software taken down from several servers for violating this law. However, Apple Computer has since reversed its stance and begun advocating encryption-free distribution of content. Dmitry Sklyarov, a Russian programmer, was jailed under this law when he visited the U.S., because he had written a program in Russia which allowed users to access documents for which they had forgotten the password. 
(He was eventually acquitted by a jury of all counts, reportedly because the jury thought the law was unfair—a phenomenon known as jury nullification.) aibohack.com was a website which distributed tools to make Sony's AIBO robotic pet do new tricks, like dancing jazz. Sony alleged that the tools violated this law, and asked for them to be taken down. (After negative press, they changed their mind.) A company selling mod chips for Sony PlayStations, which allowed the systems to play video games from other countries, was raided by the US government and its products were seized under this law. Smart cards, while they have many other purposes, are also used by DirecTV to decrypt their television satellite signals for paying users. Distributors of smart card readers, which could create smart cards (including ones that could decrypt DirecTV signals), were raided by DirecTV, and their products and customer lists were seized. DirecTV then sent a letter to over 100,000 purchasers of the readers and filed lawsuits against over 5,000. It offered not to file, or to drop, a suit in exchange for $3,500, less than litigating the case would cost. (The suits are ongoing.) Lexmark sued Static Control Components, which made recycled replacement toner cartridges for Lexmark printers, under this law. Lexmark initially won a preliminary injunction, but that injunction was vacated by the Court of Appeals for the Sixth Circuit. The Chamberlain Group sued Skylink Technologies under this law for creating garage door openers that opened Chamberlain's garage doors. (The lawsuit is ongoing, though the Court of Appeals for the Federal Circuit has issued a ruling casting serious doubt on Chamberlain's likelihood of success.) Prof. Edward Felten and several colleagues were threatened with a lawsuit under this law if they presented a paper at a technical conference describing how they participated in the Secure Digital Music Initiative (SDMI) decryption challenge. 
(After Felten sued for declaratory judgment, the threat was dropped.) Secure Network Operations (SNOsoft), a group of security researchers, published a security flaw in HP's Tru64 operating system after HP refused to fix it. HP threatened to sue them under this law. (After negative press they dropped the threat.) Blackboard Inc. filed a civil complaint against university students Billy Hoffman and Virgil Griffith who were researching security holes in the Blackboard Transaction System. A judge issued an injunction on the two students to prevent them from publishing their research. Blackboard Inc. had previously sent a complaint to the students saying they were violating this law. Since that time, however, Blackboard has pledged to cooperate with open-source developers. On February 1, 2007, Blackboard announced via press release "The Blackboard Patent Pledge". In this pledge to the open source and do-it-yourself course management community, the company vows to forever refrain from asserting its patent rights against open-source developers, except when it is itself sued for patent infringement. Princeton student J. Alex Halderman was threatened by SunnComm under this law for explaining how Mediamax CD-3 CD copy protection worked. Halderman explained that the copy protection could be defeated by holding down the shift key when inserting the CD into Windows (this prevented autorun, which installed the Mediamax protection software). After press attention SunnComm withdrew their threat. Blizzard Entertainment threatened the developers of bnetd, a freely available clone of battle.net, a proprietary server system used by all Blizzard games on the Internet. Blizzard claims that these servers allow circumvention of its CD key copy protection scheme. (The Electronic Frontier Foundation is currently negotiating a settlement.) 
The Advanced Access Content System Licensing Administrator, LLC sent violation notices to a number of sites that had published the encryption key to HD DVDs. The key and the software with which to decrypt the discs had been published by an anonymous programmer. When Digg took down references to the key, its users revolted and began distributing it in many creative ways. Eventually, Digg was unable to stop its users and gave up. AACS executives have vowed to fight on. See the AACS encryption key controversy. Open-source software to decrypt content scrambled with the Content Scrambling System presents an intractable problem with the application of this law. Because the decryption is necessary to achieve interoperability of open source operating systems with proprietary operating systems, the circumvention is protected by the Act. However, the nature of open source software makes the decryption techniques available to those who wish to violate copyright laws. Consequently, the exception for interoperability effectively swallows the rule against circumvention. Criticisms Large industry associations like the MPAA and RIAA say the law is necessary to prevent copyright infringement in the digital era, while a growing coalition of open source software developers and Internet activists argue that the law stifles innovation while doing little to stop copyright infringement. Because the content must ultimately be decrypted in order for users to understand it, near-perfect copying of the decrypted content always remains possible for pirates. Meanwhile, developers of open source and other next-generation software must write complex and sophisticated software routines to ensure interoperability of their software with legacy Windows technology. Thus, the opponents are angry at having to bear the costs of technology that results in no benefit. Some proponents of the law claim it was necessary to implement several WIPO treaties. 
Opponents respond that the law was not necessary; that even if it was, it went far beyond what the treaties require; and that the treaties were written and passed at the urging of the same industry lobbyists who wanted to pass this law. They also note that the severe ambiguities in the law, its difficulty of enforcement, and its numerous exceptions make it ineffective in achieving its stated goal of protecting copyright holders. Others claim that the law is necessary to prevent online copyright infringement using perfect digital copies. Opponents note that copyright infringement was already illegal, and that the DMCA does not outlaw infringement but only otherwise legal uses like display and performance. Opponents of the law charge that it violates the First Amendment on its face, because it restricts the distribution of computer software, like DeCSS. The Second Circuit rejected this argument in MPAA v. 2600, suggesting that software was not really speech. Under the specific facts of the case, however, the constitutional decision was not controlling: the defendants' ultimate purpose was to make possible the copying of copyrighted content, not to publish their own speech. Most other circuits that have considered the issue have concluded that software is speech, but have not considered this law. Opponents also say the law creates serious chilling effects that stifle legitimate First Amendment speech. For example, John Wiley & Sons changed their mind and decided not to publish a book by Andrew Huang about security flaws in the Xbox because of this law. After Huang tried to self-publish, his online store provider dropped support because of similar concerns. (The book is now being published by No Starch Press.) Opponents also argue that the law might be read to give full control to copyright holders over what uses are and are not permitted, essentially eliminating fair use. 
For example, ebook readers protected by this law can prevent the user from copying short excerpts from the book, printing a couple of pages, or having the computer read the book aloud—all of which are legal under copyright law, but this law could be expanded to prohibit building a tool to do what is otherwise legal. However, other legal scholars note that the law's emphasis on violations of preexisting rights of copyright holders ensures that the DMCA does not expand those rights. If the purpose of the activity is not to violate a preexisting right, the activity is not illegal. Fair use, the scholars say, would still be protected. Copyright Office rulemaking procedures As required by the DMCA, in 1999 the U.S. Copyright Office launched a public appeal for comments on the DMCA in order "to determine whether there are particular classes of works as to which users are, or are likely to be, adversely affected in their ability to make noninfringing uses due to the prohibition on circumvention of access controls". The entire set of written submissions, testimonial transcripts, and final recommendations and rulings for all three rulemakings (2000, 2003, and 2006) are available from the Copyright Office. References External links Text of the law Copyright Office: Rulemaking procedures EFF: Unintended Consequences: Five Years under the DMCA Bill D. Herman and Oscar H. Gandy, Jr., Catch 1201: A Legislative History and Content Analysis of the DMCA Exemption Proceedings United States federal copyright legislation Acts of the 105th United States Congress
https://en.wikipedia.org/wiki/Dru%20Lavigne
Dru Lavigne
Dru Lavigne is a network and systems administrator, IT instructor, technical writer, and a director at the FreeBSD Foundation. She has been using FreeBSD since 1996, has authored several BSD books, and spent over 10 years developing training materials and providing training on the administration of FreeBSD systems. She has written for O'Reilly, TechRepublic, DNSStuff, and OpenLogic, contributed to Linux Hacks and Hacking Linux Exposed, and is the author of BSD Hacks and The Best of FreeBSD Basics. Her third and latest book, The Definitive Guide to PC-BSD, was released in March 2010. She has over a decade of experience administering and teaching NetWare, Microsoft, Cisco, Checkpoint, SCO, Solaris, Linux and BSD systems. She is the founder and current chair of the BSD Certification Group Inc., a non-profit organization with a mission to create the standard for certifying BSD system administrators. She is also community manager for both the PC-BSD and FreeNAS projects, making her responsible for dealing with issues relating to community relations and the administration of the projects' forums. She is also the principal author and executive editor of most of the documentation for both projects. Since 22 January 2013 she has been a committer in the "doc" category at the FreeBSD Project. References External links Dru Lavigne's Blog Interview: The BSD Certification Group's Dru Lavigne Q&A : Networking Expert Dru Lavigne (Circuit Cellar 2014-02-24) FreeBSD people Free software people Living people Canadian women computer scientists Canadian computer scientists Year of birth missing (living people)
https://en.wikipedia.org/wiki/O2%20wireless%20box
O2 wireless box
The O2 Wireless Box is a wireless residential gateway router distributed by O2. The latest version is based on the 802.11n standard and also supports 802.11g and 802.11b devices. The device connects to the Internet using either an ADSL2+ or ADSL connection. Features The O2 Wireless Box is a wireless residential gateway router. It supports wireless internet access through 802.11n, 802.11g and 802.11b. Comparison The original O2 Wireless Box (available to new O2 customers from 2006, now discontinued) was a rebadged Thomson SpeedTouch 780WL. The O2 Wireless Box II (available to new O2 customers from 2008) is a rebadged Thomson SpeedTouch TG585v7 and supports the 802.11b/g interface types. It supports 13 channels (Europe region) and WEP and WPA-PSK encryption. Unlike the original Thomson TG585v7, it does not allow changing the ADSL username and password, so it cannot be used to connect to any other service provider. The O2 Wireless Box IV (available to new "O2 Premium" customers during 2010) is a rebadged Thomson SpeedTouch TG587nv2, which has additional support for 802.11n and dual antennas, and also has two USB ports for connecting an external USB hard disk for sharing files directly to the Wi-Fi or Ethernet network, or for printer sharing. The O2 Wireless Box V (the new "standard" router) is a rebadged Thomson SpeedTouch TG582n. It has 802.11n support, an internal antenna, and one USB port. References External links Routers (computing) Wireless box
https://en.wikipedia.org/wiki/National%20Computer%20Center%20for%20Higher%20Education%20%28France%29
National Computer Center for Higher Education (France)
The National Computer Center for Higher Education (CINES), based in Montpellier, is a French public institution of an administrative character placed under the supervision of the Ministry of Higher Education, Research and Innovation (MESRI), and created by decree in 1999. It provides IT services used for public research in France and is one of the major national suppliers of computing power for French research. It has three missions: high-performance computing on supercomputers; the permanent archiving of electronic documents; and the hosting of national computer equipment. History The National University Center for Computation (CNUSC) in Montpellier was created in 1981, responsible for hosting scientific applications for the research community, as well as applications in the field of librarianship. At the end of December 1999, the CNUSC was transformed into the current CINES, created by Decree No. 99-318 of 20 April 1999. The change brought new missions and a change of status. On 6 March 2014, the statutes of CINES were amended by a decree published in the Official Journal. This decree introduced a new mission: the hosting of computer equipment at a national level. Over the years, the number of employees within the institution has remained at around forty technicians and engineers. Calculating mission CINES has provided computing resources to the French research community since its creation, and its machine fleet evolves regularly. In this role it cooperates with GENCI (Grand équipement national de calcul intensif, the national large-scale intensive computing facility). The Occigen supercomputer was ranked 70th in the world in the June 2018 TOP500 list of supercomputers. 
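The node and core counts quoted for Occigen below are arithmetically self-consistent, and the 3.5-petaflop peak can be roughly reproduced from them. A quick sanity check in Python (note that the 2.6 GHz base clock and the 16 double-precision FLOPs per core per cycle figure for AVX2-with-FMA are assumptions about the Xeon v3/v4 parts, not numbers stated in this article):

```python
# Node and core counts as published for Occigen:
# 2,106 dual-socket Haswell nodes (12 cores per socket) and
# 1,260 dual-socket Broadwell nodes (14 cores per socket).
haswell_cores = 2106 * 2 * 12    # 50,544
broadwell_cores = 1260 * 2 * 14  # 35,280

total_nodes = 2106 + 1260                      # 3,366 nodes
total_cores = haswell_cores + broadwell_cores  # 85,824 cores

# Rough theoretical peak, assuming a 2.6 GHz clock and 16 DP FLOPs
# per core per cycle (assumed, not from the article):
peak_flops = total_cores * 2.6e9 * 16  # about 3.57e15, i.e. ~3.5 petaflops
print(total_nodes, total_cores, peak_flops)
```

Both the 3,366-node and 85,824-core totals match the figures in the text, and the estimated peak lands close to the quoted 3.5 petaflops.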
The Occigen machine (BullX), with a peak performance of 3.5 petaflops, comprises 3,366 nodes and 85,824 compute cores: 2,106 dual-socket nodes with Intel Xeon E5-2690 v3 "Haswell" processors (12 cores each), 1,260 dual-socket nodes with Intel Xeon E5-2690 v4 "Broadwell" processors (14 cores each), an InfiniBand FDR 4x (56 Gbit/s) interconnect, and 5 PB of Lustre disk storage. Old computing machines In January 2011 CINES operated several machines for high-performance computing: an SGI Altix ICE 8200 EX machine; an IBM P1600 + POWER5 cluster; and a Bull cluster (with GPUs for hybrid computing). The Jade supercomputer (SGI Altix ICE 8200 EX), with a peak performance of 267 teraflops, comprised 1,536 dual-socket nodes with Intel Xeon E5472 processors and 32 GB of RAM, 1,344 dual-socket nodes with Intel Xeon X5560 "Nehalem" processors and 36 GB of RAM, a dual-plane InfiniBand DDR and QDR 4x interconnect, and 700 TB of Lustre disk storage. The IBM machine, with a peak performance of 1.85 + 0.6 teraflops, comprised 9 POWER4 nodes with 32/64 GB of RAM and a Federation switch, 4 TB of GPFS disk storage, and 5 P575 nodes with 32 GB of RAM and InfiniBand. The Jade supercomputer was ranked 27th in the November 2010 TOP500 list; it was then the sixth-ranked European machine and the first French machine dedicated to public research. Top500 ranking Every six months the world TOP500 supercomputer ranking is published; the table below shows the best positions achieved by the various CINES systems. Permanent archiving mission Archiving digital data is a key factor in the success of any digitization and information-sharing policy. The second national strategic mission given to CINES is the development and implementation of a robust solution for long-term preservation of the digital heritage. In addition to exceptional resources and equipment in the field of supercomputing, CINES therefore operates one of the largest French production platforms dedicated to archiving digital data. 
The documents that are candidates for long-term archiving in PAC are: scientific data from observations, measurements, simulations or calculations; heritage data such as the theses defended in France, pedagogical data, publications (notably articles published on the HAL platform) and collections of digitized scientific journals; and administrative data from universities. To address the risks inherent in long-term digital archiving (risks that cannot be eliminated and whose impact must be mitigated by procedures prepared in advance), CINES relies on national standards (NF Z42-013; AFNOR X578:2005; the documentation guide FD X50-176 on process management; the standard exchange format for archival data; etc.) and international standards (the OAIS model, ISO 14721; ISO 9001; Dublin Core; ISAD(G); ISAAR(CPF); the ITIL methodology; etc.), as well as a quality approach based on proactive risk management and a drive toward certification of the service. CINES also maintains an expert unit on data formats, and its extensive computing and archival experience makes it one of the leaders in long-term digital archiving in Europe. Hosting computer equipment mission The newest mission of CINES is to host large computing environments for research organizations that lack the space to operate them. The goal is to let organizations short of machine-room space increase their capacity in racks. Operating multiple racks, with all their dependencies (power, air conditioning, round-the-clock maintenance), is often difficult; CINES can provide this technical environment because its facilities are designed to host computing on an industrial scale. See also Supercomputing in Europe Research and Technology Computing Center (France) External links CINES homepage Supercomputer sites Computer science institutes in France 1999 establishments in France
https://en.wikipedia.org/wiki/Spam%20Prevention%20Early%20Warning%20System
Spam Prevention Early Warning System
The Spam Prevention Early Warning System (SPEWS) was an anonymous service which maintained a list of IP address ranges belonging to Internet service providers (ISPs) which hosted spammers and took little action to prevent their abuse of other networks' resources. It could be used by Internet sites as an additional source of information about the senders of unsolicited bulk email, better known as spam. SPEWS is no longer active. A successor, the Anonymous Postmaster Early Warning System (APEWS), appeared in January 2007. Overview SPEWS itself published a large text file containing its listings, and operated a database where web users could query the reasons for a listing. Users of SPEWS could access these data via DNS for use by software for DNSBL anti-spam techniques. For instance, many mail sites used the SPEWS data provided at spews.relays.osirusoft.com. All DNSBLs hosted by Osirusoft were shut down on August 27, 2003 after several weeks of denial-of-service attacks. A number of other mirrors existed based on the SPEWS data, which remained accessible to the public. SORBS, for example, provided a mirror of SPEWS data until early 2007. There was a certain degree of controversy regarding SPEWS' anonymity and its methods. By remaining anonymous, the SPEWS admins presumably wanted to avoid the harassment and lawsuits which have hampered other anti-spam services such as the MAPS RBL and ORBS. Some ISP clients whose providers were listed on SPEWS took umbrage that their own IP addresses were associated with spamming, and that their mail might be blocked by users of the SPEWS data; often they did not understand that it was their provider that was listed. Sometimes, the only solution was to leave the blacklisted provider, as SPEWS was not willing to cut holes in a listing for a clean user in an otherwise dirty IP block. 
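The DNSBL lookup mechanism mentioned above is simple enough to sketch: a client reverses the octets of an IPv4 address, prepends them to the list's zone name, and issues an ordinary DNS A-record query; any answer (conventionally an address in 127.0.0.0/8) means the queried address is listed. A minimal sketch in Python (the Osirusoft zone named here has been dead since 2003, so running this against real mail would require substituting a live DNSBL):

```python
import socket

def dnsbl_query_name(ip: str, zone: str) -> str:
    """Build the DNSBL query name: reversed octets prepended to the zone."""
    return ".".join(reversed(ip.split("."))) + "." + zone

def is_listed(ip: str, zone: str = "spews.relays.osirusoft.com") -> bool:
    """Return True if the DNSBL zone returns an A record for the address."""
    try:
        socket.gethostbyname(dnsbl_query_name(ip, zone))
        return True   # an answer means "listed"
    except socket.gaierror:
        return False  # NXDOMAIN (or lookup failure) means "not listed"

print(dnsbl_query_name("192.0.2.99", "spews.relays.osirusoft.com"))
```

A mail server consuming such a list would typically perform this check at SMTP connection time and reject or flag mail from listed addresses; as the criticism section notes, the decision to block rests with the operator using the list, not with the list publisher.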
There was no way for either the customer or the provider to contact SPEWS, and SPEWS claimed that the listings would be removed only when the associated abuse stopped. The SPEWS database has not been updated since August 24, 2006; dnsbl.com lists its status as dead. Since SPEWS became inactive, the Anonymous Postmaster Early Warning System (APEWS) has taken its place, using similar listing criteria and a nearly identical web page. Process The precise process by which SPEWS gathered data about spam sources is unknown to the public, and it is likely that its operators used multiple techniques. SPEWS seemed to collect some information from honeypots—mail servers or single email addresses to which no legitimate mail is received. These may be dummy addresses which have never sent any email (and therefore could not have requested to be subscribed to any legitimate mailing list). They may also be placed as bait in the header of a Usenet post or on a Web page, where a spammer might discover them and choose to spam them. The SPEWS Website made it clear that when spam was received, the operators filed a complaint with the ISP or other site responsible for the spam source. Only if the spam continued after this complaint was the source listed. However, SPEWS was anonymous—when these complaints were sent, they were not marked as being from SPEWS, and the site was not told that ignoring the complaint would result in a listing. This had the effect of determining the ISP's response to a normal user's spam complaint, and also discouraged listwashing—continuing to spam, but with the complaining address removed from the target list. If the spam did not stop over time, SPEWS increased the size of the address range listed through a process referred to as "escalation". 
This process was repeated, conceivably until the entire netblock owned by the offending service provider was listed or the block was large enough that the service provider was encouraged to take action by the complaints of its paying customers. Criteria for listing SPEWS criteria were based on "spam support": when a network operation provided any services to the identified spammers, the resources involved were listed. For instance, part of an ISP's network may have been listed in SPEWS for providing DNS service to a domain mentioned in a piece of e-mail spam, even if the messages weren't sent from that provider's mail servers. Listing data or evidence files IP addresses listed in SPEWS were mentioned in "evidence files". These were plain text files, which appeared to have been edited by hand, in which those IP addresses, along with the technical evidence backing the listing, were shown. The contents of those evidence files might seem rather cryptic to readers who were not intimately familiar with the technical jargon of the Internet. Criticism of SPEWS No one knows how many service providers used the SPEWS list to reject mail. Contacting SPEWS One common criticism is that there was no way to contact SPEWS. According to the SPEWS FAQ: "Q41: How does one contact SPEWS? A41: One does not..." Having no way to contact SPEWS is seen as a way for SPEWS to avoid having to deal with complaints—even if they are legitimate—and to be immune from many consequences of mistakes, bad policies, or other problems. This caused SPEWS itself to be listed on some other DNSBLs, such as those formerly maintained at RFC-Ignorant. A countervailing view is that SPEWS adopted the policy as a response to vexatious litigation against, e.g., the MAPS RBL. There was nothing on the SPEWS web site to indicate that the operators did not care about legitimate issues, just that they didn't want to deal with specious complaints from spammers. 
Criticism SPEWS critics claimed it blocked sites for reasons they considered unfair. They argued that an ordinary customer of an ISP should not be held responsible for the actions of other customers of that ISP. Counter argument Supporters responded that SPEWS was a list of ISPs with spam problems. The ISP was listed, not the customers. This was often argued with an analogy of pizza delivery companies who will not deliver to high crime areas. It's a bad situation for someone "stuck" in a bad area, but supporters argue that this also provides encouragement for a good citizen to unstick themselves and move to an ISP without a spam problem. The bad ISP loses revenue and the good ISP gets more customers, further encouraging bad ISPs to clean up. Supporters of SPEWS often pointed to the claim that SPEWS "blocked" email from sites as a misconception. A SPEWS listing only caused mail to be refused if the recipient of the email (or their ISP) chose to block based on the SPEWS IP list. This counter argument has been criticized on the grounds that SPEWS spread information in a way conducive for blocking, with the knowledge that people are using it to block. According to this criticism, SPEWS should then have been considered partly responsible for any blocking that was done and could be legitimately blamed if the blocking was inappropriate. In this view, the claims that lists such as SPEWS are advisory and that SPEWS itself did not block were seen as attempts to evade responsibility for SPEWS's own actions. Delisting According to the SPEWS FAQ, listings were removed when the spam or spam-support has stopped. Just as they did not solicit nominations for listings, the SPEWS operators did not solicit requests for delistings. There was no contact information published on the SPEWS Website. There was no mail server, and the operators of SPEWS did not receive email under the SPEWS name. 
It is believed that the operators read certain Usenet newsgroups related to spam and email abuse. However, no poster has claimed to be a SPEWS operator and no regular of the newsgroups claimed to know their identity. By the accounts of many of those regulars, SPEWS could detect automatically when such support stopped, but this was not supported by any information in the SPEWS FAQ. See also News.admin.net-abuse.email (NANAE) References External links SPEWS and APEWS websites (Last good archive. Domain is now owned by another entity.) APEWS.org Advice by others First Post To NANAE Newsgroup What to do if you are listed on APEWS (dnsbl.com) Listed on APEWS: what to do (and what definitely not to do!) Spamming Early warning systems Internet properties disestablished in 2007 History of the Internet
46628
https://en.wikipedia.org/wiki/Automated%20teller%20machine
Automated teller machine
An automated teller machine (ATM) or cash machine (in British English) is an electronic telecommunications device that enables customers of financial institutions to perform financial transactions, such as cash withdrawals, deposits, funds transfers, balance inquiries or account information inquiries, at any time and without the need for direct interaction with bank staff. ATMs are known by a variety of names, including automatic teller machine (ATM) in the United States (sometimes redundantly as "ATM machine"). In Canada, the term automated banking machine (ABM) is also used, although ATM is also very common, with many Canadian organizations preferring ATM over ABM. In British English, the terms cashpoint, cash machine and hole in the wall are most widely used. Other terms include any time money, cashline, tyme machine, cash dispenser, cash corner, bankomat, or bancomat. ATMs that are not operated by a financial institution are known as "white-label" ATMs. Using an ATM, customers can access their bank deposit or credit accounts in order to make a variety of financial transactions, most notably cash withdrawals and balance checking, as well as transferring credit to and from mobile phones. ATMs can also be used to withdraw cash in a foreign country. If the currency being withdrawn from the ATM is different from that in which the bank account is denominated, the money will be converted at the financial institution's exchange rate. Customers are typically identified by inserting a plastic ATM card (or some other acceptable payment card) into the ATM, with the customer authenticating by entering a personal identification number (PIN), which must match the PIN stored in the chip on the card (if the card is so equipped), or in the issuing financial institution's database. According to the ATM Industry Association (ATMIA), there were close to 3.5 million ATMs installed worldwide. 
However, the use of ATMs is gradually declining with the increase in cashless payment systems. History The idea of out-of-hours cash distribution developed from bankers' needs in Japan, Sweden, and the United Kingdom. In 1960 Luther George Simjian invented an automated deposit machine (accepting coins, cash and cheques) although it did not have cash dispensing features. His US patent was first filed on 30 June 1960 and granted on 26 February 1963. The roll-out of this machine, called Bankograph, was delayed by a couple of years, due in part to Simjian's Reflectone Electronics Inc. being acquired by Universal Match Corporation. An experimental Bankograph was installed in New York City in 1961 by the City Bank of New York, but removed after six months due to the lack of customer acceptance. In 1962 Adrian Ashfield invented the idea of a card system to securely identify a user and control and monitor the dispensing of goods or services. This was granted UK Patent 959,713 in June 1964 and assigned to Kins Developments Limited. A Japanese device called the "Computer Loan Machine" supplied cash as a three-month loan at 5% p.a. after inserting a credit card. The device was operational in 1966. However, little is known about the device. A cash machine was put into use by Barclays Bank in its Enfield Town branch in North London, United Kingdom, on 27 June 1967. This machine was inaugurated by English comedy actor Reg Varney. This instance of the invention is credited to the engineering team led by John Shepherd-Barron of printing firm De La Rue, who was awarded an OBE in the 2005 New Year Honours. Transactions were initiated by inserting paper cheques issued by a teller or cashier, marked with carbon-14 for machine readability and security, which in a later model were matched with a six-digit personal identification number (PIN). Shepherd-Barron stated "It struck me there must be a way I could get my own money, anywhere in the world or the UK. 
I hit upon the idea of a chocolate bar dispenser, but replacing chocolate with cash." The Barclays–De La Rue machine (called De La Rue Automatic Cash System or DACS) beat the Swedish savings banks' machine, built by a company called Metior (a device called Bankomat), by a mere nine days and Westminster Bank's–Smiths Industries–Chubb system (called Chubb MD2) by a month. The online version of the Swedish machine is reported to have been operational on 6 May 1968, and it has been claimed to be the first online ATM in the world, ahead of similar claims by IBM and Lloyds Bank in 1971, and Oki in 1970. A fourth machine was developed through a collaboration between a small start-up called Speytec and Midland Bank, and it was marketed after 1969 in Europe and the US by the Burroughs Corporation. The patent for this device (GB1329964) was filed in September 1969 (and granted in 1973) by John David Edwards, Leonard Perkins, John Henry Donald, Peter Lee Chappell, Sean Benjamin Newcombe, and Malcom David Roe. Both the DACS and MD2 accepted only a single-use token or voucher which was retained by the machine, while the Speytec worked with a card with a magnetic stripe at the back. They used principles including carbon-14 and low-coercivity magnetism in order to make fraud more difficult. The idea of a PIN stored on the card was developed by a group of engineers working at Smiths Group on the Chubb MD2 in 1965; it has been credited to James Goodfellow (patent GB1197183, filed on 2 May 1966 with Anthony Davies). The essence of this system was that it enabled the verification of the customer with the debited account without human intervention. This patent is also the earliest instance of a complete "currency dispenser system" in the patent record. This patent was filed on 5 March 1968 in the US (US 3543904) and granted on 1 December 1970. It had a profound influence on the industry as a whole. 
Not only did future entrants into the cash dispenser market such as NCR Corporation and IBM licence Goodfellow's PIN system, but a number of later patents reference this patent as "Prior Art Device". Propagation Devices designed by British (e.g. Chubb, De La Rue) and Swedish (e.g. Asea Metior) manufacturers quickly spread. For example, given its link with Barclays, Bank of Scotland deployed a DACS in 1968 under the 'Scotcash' brand. Customers were given personal code numbers to activate the machines, similar to the modern PIN. They were also supplied with £10 vouchers. These were fed into the machine, and the corresponding amount debited from the customer's account. A Chubb-made ATM appeared in Sydney in 1969. This was the first ATM installed in Australia. The machine only dispensed $25 at a time and the bank card itself would be mailed to the user after the bank had processed the withdrawal. Asea Metior's Bancomat was the first ATM installed in Spain on 9 January 1969, in central Madrid by Banesto. This device dispensed 1,000 peseta bills (one to five at a time). Each user had to enter a personal security key using a combination of the ten numeric buttons. In March of the same year, an advertisement with instructions on how to use the Bancomat was published in the press. Docutel in the United States After looking firsthand at the experiences in Europe, Donald Wetzel, a department head at a company called Docutel, pioneered the ATM in the U.S. in 1968. Docutel was a subsidiary of Recognition Equipment Inc of Dallas, Texas, which was producing optical scanning equipment and had instructed Docutel to explore automated baggage handling and automated gasoline pumps. On 2 September 1969, Chemical Bank installed a prototype ATM in the U.S. at its branch in Rockville Centre, New York. The first ATMs were designed to dispense a fixed amount of cash when a user inserted a specially coded card. A Chemical Bank advertisement boasted "On Sept. 
2 our bank will open at 9:00 and never close again." Chemical's ATM, initially known as a Docuteller, was designed by Donald Wetzel and his company Docutel. Chemical executives were initially hesitant about the electronic banking transition given the high cost of the early machines. Additionally, executives were concerned that customers would resist having machines handling their money. In 1995, the Smithsonian National Museum of American History recognised Docutel and Wetzel as the inventors of the networked ATM. To show confidence in Docutel, Chemical installed the first four production machines in a marketing test that proved that they worked reliably, that customers would use them, and that they would even pay a fee for usage. Based on this, banks around the country began to experiment with ATM installations. By 1974, Docutel had acquired 70 percent of the U.S. market; but as a result of the early 1970s worldwide recession and its reliance on a single product line, Docutel lost its independence and was forced to merge with the U.S. subsidiary of Olivetti. In 1973, Wetzel was granted U.S. Patent # 3,761,682; the application had been filed in October 1971. However, the U.S. patent record cites at least three previous applications from Docutel, all relevant to the development of the ATM and where Wetzel does not figure, namely U.S. Patent # 3,662,343, U.S. Patent # 3,651,976 and U.S. Patent # 3,68,569. These patents are all credited to Kenneth S. Goldstein, MR Karecki, TR Barnes, GR Chastian and John D. White. Further advances In April 1971, Busicom began to manufacture ATMs based on the first commercial microprocessor, the Intel 4004. Busicom manufactured these microprocessor-based automated teller machines for several buyers, with NCR Corporation as the main customer. Mohamed Atalla invented the first hardware security module (HSM), dubbed the "Atalla Box", a security system which encrypted PIN and ATM messages, and protected offline devices with an un-guessable PIN-generating key. 
In March 1972, Atalla filed for his PIN verification system, which included an encoded card reader and described a system that utilized encryption techniques to assure telephone link security while entering personal ID information that was transmitted to a remote location for verification. He founded Atalla Corporation (now Utimaco Atalla) in 1972, and commercially launched the "Atalla Box" in 1973. The product was released as the Identikey. It was a card reader and customer identification system, providing a terminal with plastic card and PIN capabilities. The Identikey system consisted of a card reader console, two customer PIN pads, intelligent controller and built-in electronic interface package. The device consisted of two keypads, one for the customer and one for the teller. It allowed the customer to type in a secret code, which was transformed by the device, using a microprocessor, into another code for the teller. During a transaction, the customer's account number was read by the card reader. This process replaced manual entry and avoided possible key stroke errors. It allowed users to replace traditional customer verification methods such as signature verification and test questions with a secure PIN system. The success of the "Atalla Box" led to the wide adoption of hardware security modules in ATMs. Its PIN verification process was similar to the later IBM 3624. Atalla's HSM products protected 250 million card transactions every day as of 2013, and secured the majority of the world's ATM transactions as of 2014. The IBM 2984 was a modern ATM and came into use at Lloyds Bank, High Street, Brentwood, Essex, in the UK in December 1972. The IBM 2984 was designed at the request of Lloyds Bank. The 2984 Cash Issuing Terminal was a true ATM, similar in function to today's machines and named by Lloyds Bank: Cashpoint. Cashpoint is still a registered trademark of Lloyds Banking Group in the UK but is often used as a generic trademark to refer to ATMs of all UK banks. 
All were online and issued a variable amount which was immediately deducted from the account. A small number of 2984s were supplied to a U.S. bank. Well-known historical models of ATMs include the Atalla Box, IBM 3614, IBM 3624 and 473x series, Diebold 10xx and TABS 9000 series, NCR 1780 and earlier NCR 770 series. The first switching system to enable shared automated teller machines between banks went into production operation on 3 February 1979, in Denver, Colorado, in an effort by Colorado National Bank of Denver and Kranzley and Company of Cherry Hill, New Jersey. In 2012, a new ATM at Royal Bank of Scotland allowed customers to withdraw cash up to £130 without a card by inputting a six-digit code requested through their smartphones. Location ATMs can be placed at any location but are most often placed near or inside banks, shopping centers/malls, airports, railway stations, metro stations, grocery stores, petrol/gas stations, restaurants, and other locations. ATMs are also found on cruise ships and on some US Navy ships, where sailors can draw out their pay. ATMs may be on- and off-premises. On-premises ATMs are typically more advanced, multi-function machines that complement a bank branch's capabilities, and are thus more expensive. Off-premises machines are deployed by financial institutions and independent sales organisations (ISOs) where there is a simple need for cash, so they are generally cheaper single-function devices. In the US, Canada and some Gulf countries, banks may have drive-thru lanes providing access to ATMs using an automobile. In recent times, countries such as India and some countries in Africa have been installing solar-powered ATMs in rural areas. The world's highest ATM is located at the Khunjerab Pass in Pakistan. Installed by the National Bank of Pakistan, it is designed to work in temperatures as low as −40 degrees Celsius. 
Financial networks Most ATMs are connected to interbank networks, enabling people to withdraw and deposit money from machines not belonging to the bank where they have their accounts or in the countries where their accounts are held (enabling cash withdrawals in local currency). Some examples of interbank networks include NYCE, PULSE, PLUS, Cirrus, AFFN, Interac, Interswitch, STAR, LINK, MegaLink, and BancNet. ATMs rely on the authorization of a financial transaction by the card issuer or other authorizing institution on a communications network. This is often performed through an ISO 8583 messaging system. Many banks charge ATM usage fees. In some cases, these fees are charged solely to users who are not customers of the bank that operates the ATM; in other cases, they apply to all users. In order to allow a more diverse range of devices to attach to their networks, some interbank networks have passed rules expanding the definition of an ATM to be a terminal that either has the vault within its footprint or utilises the vault or cash drawer within the merchant establishment, which allows for the use of a scrip cash dispenser. ATMs typically connect directly to their host or ATM Controller via ADSL or a dial-up modem over a telephone line, or directly via a leased line. Leased lines are preferable to plain old telephone service (POTS) lines because they require less time to establish a connection. Less-trafficked machines will usually rely on a dial-up modem on a POTS line rather than using a leased line, since a leased line may be comparatively more expensive to operate compared to a POTS line. That dilemma may be solved as high-speed Internet VPN connections become more ubiquitous. Common lower-level layer communication protocols used by ATMs to communicate back to the bank include SNA over SDLC, TC500 over Async, X.25, and TCP/IP over Ethernet. 
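The ISO 8583 messages mentioned above follow a compact MTI + bitmap + data-element layout. The sketch below is a hypothetical, heavily simplified illustration of that layout (the field encodings shown are assumptions, not the full specification, and the PAN and amount values are invented for the example):

```python
# Toy sketch of an ISO 8583-style authorization request.
# Real implementations handle variable-length encodings, secondary bitmaps,
# and binary packing; this only shows the MTI + bitmap + fields idea.

def build_bitmap(fields):
    """Build the 64-bit primary bitmap as 16 hex digits (bit 1 = MSB)."""
    bits = 0
    for f in fields:
        bits |= 1 << (64 - f)
    return f"{bits:016X}"

def pack_message(mti, fields):
    """Concatenate MTI, primary bitmap, and data elements in field order."""
    body = "".join(fields[f] for f in sorted(fields))
    return mti + build_bitmap(fields) + body

# A toy 0100 authorization request: PAN (DE 2), processing code (DE 3),
# and transaction amount (DE 4). Values are illustrative only.
msg = pack_message("0100", {
    2: "16" + "4000001234567899",  # LLVAR PAN: 2-digit length + digits
    3: "010000",                   # processing code: cash withdrawal
    4: "000000002000",             # amount: 20.00 in minor units
})
```

With fields 2, 3 and 4 present, the primary bitmap comes out as 7000000000000000, so the message begins 0100 followed by that bitmap and the packed data elements.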
In addition to methods employed for transaction security and secrecy, all communications traffic between the ATM and the Transaction Processor may also be encrypted using methods such as SSL. Global use There are no hard international or government-compiled figures for the total number of ATMs in use worldwide. Estimates developed by ATMIA place the number of ATMs currently in use at 3 million units, or approximately 1 ATM per 3,000 people in the world. To simplify the analysis of ATM usage around the world, financial institutions generally divide the world into seven regions, based on the penetration rates, usage statistics, and features deployed. Four regions (USA, Canada, Europe, and Japan) have high numbers of ATMs per million people. Despite the large number of ATMs, there is additional demand for machines in the Asia/Pacific area as well as in Latin America. Macau may have the highest density of ATMs at 254 ATMs per 100,000 adults. ATMs have yet to reach high numbers in the Near East and Africa. 
Hardware An ATM is typically made up of the following devices: CPU (to control the user interface and transaction devices) Magnetic or chip card reader (to identify the customer) a PIN pad (EPP4) for accepting and encrypting the personal identification number (similar in layout to a touch-tone or calculator keypad), manufactured as part of a secure enclosure Secure cryptoprocessor, generally within a secure enclosure Display (used by the customer for performing the transaction) Function key buttons (usually close to the display) or a touchscreen (used to select the various aspects of the transaction) Record printer (to provide the customer with a record of the transaction) Vault (to store the parts of the machinery requiring restricted access) Housing (for aesthetics and to attach signage to) Sensors and indicators Due to heavier computing demands and the falling price of personal computer–like architectures, ATMs have moved away from custom hardware architectures using microcontrollers or application-specific integrated circuits and have adopted the hardware architecture of a personal computer, such as USB connections for peripherals, Ethernet and IP communications, and use personal computer operating systems. Business owners often lease ATMs from service providers. However, based on the economies of scale, the price of equipment has dropped to the point where many business owners are simply paying for ATMs using a credit card. New ADA voice and text-to-speech guidelines, imposed in 2010 but not required until March 2012, have forced many ATM owners either to upgrade non-compliant machines or to dispose of them if they are not upgradable, and to purchase new compliant equipment. This has created an avenue for hackers and thieves to obtain ATM hardware at junkyards from decommissioned machines that were improperly disposed of. The vault of an ATM is within the footprint of the device itself and is where items of value are kept. Scrip cash dispensers do not incorporate a vault. 
Mechanisms found inside the vault may include: Dispensing mechanism (to provide cash or other items of value) Deposit mechanism including a cheque processing module and bulk note acceptor (to allow the customer to make deposits) Security sensors (magnetic, thermal, seismic, gas) Locks (to control access to the contents of the vault) Journaling systems; many are electronic (a sealed flash memory device based on in-house standards) or a solid-state device (an actual printer) which accrues all records of activity including access timestamps, number of notes dispensed, etc. This is considered sensitive data and is secured in similar fashion to the cash as it is a similar liability. ATM vaults are supplied by manufacturers in several grades. Factors influencing vault grade selection include cost, weight, regulatory requirements, ATM type, operator risk avoidance practices and internal volume requirements. Industry standard vault configurations include Underwriters Laboratories UL-291 "Business Hours" and Level 1 Safes, RAL TL-30 derivatives, and CEN EN 1143-1 - CEN III and CEN IV. ATM manufacturers recommend that a vault be attached to the floor to prevent theft, though there is a record of a theft conducted by tunnelling into an ATM floor. Software With the migration to commodity Personal Computer hardware, standard commercial "off-the-shelf" operating systems and programming environments can be used inside ATMs. Typical platforms previously used in ATM development include RMX or OS/2. Today, the vast majority of ATMs worldwide use Microsoft Windows. In early 2014, 95% of ATMs were running Windows XP. A small number of deployments may still be running older versions of the Windows OS, such as Windows NT, Windows CE, or Windows 2000, even though Microsoft still supports only Windows 8.1, Windows 10 and Windows 11. 
There is a computer industry security view that general public desktop operating systems (OSs) carry greater risks when used as operating systems for cash-dispensing machines than other types of operating systems, such as (secure) real-time operating systems (RTOS). RISKS Digest has many articles about ATM operating system vulnerabilities. Linux is also finding some reception in the ATM marketplace. An example of this is Banrisul, the largest bank in the south of Brazil, which has replaced the MS-DOS operating systems in its ATMs with Linux. Banco do Brasil is also migrating ATMs to Linux. India-based Vortex Engineering is manufacturing ATMs that operate only with Linux. Common application layer transaction protocols, such as Diebold 91x (911 or 912) and NCR NDC or NDC+ provide emulation of older generations of hardware on newer platforms with incremental extensions made over time to address new capabilities, although companies like NCR continuously improve these protocols, issuing newer versions (e.g. NCR's AANDC v3.x.y, where x.y are subversions). Most major ATM manufacturers provide software packages that implement these protocols. Newer protocols such as IFX have yet to find wide acceptance by transaction processors. With the move to a more standardised software base, financial institutions have been increasingly interested in the ability to pick and choose the application programs that drive their equipment. WOSA/XFS, now known as CEN XFS (or simply XFS), provides a common API for accessing and manipulating the various devices of an ATM. J/XFS is a Java implementation of the CEN XFS API. While the perceived benefit of XFS is similar to Java's "write once, run anywhere" mantra, often different ATM hardware vendors have different interpretations of the XFS standard. The result of these differences in interpretation means that ATM applications typically use a middleware to even out the differences among various platforms. 
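The role of middleware in evening out vendor differences can be illustrated with a small adapter sketch. The vendor classes and method names below are hypothetical stand-ins, not real XFS service providers; the point is that the application codes against one interface while thin adapters absorb each vendor's quirks:

```python
# Sketch: an ATM application talks to one Dispenser interface; adapters
# translate that interface onto differing (hypothetical) vendor APIs.
from abc import ABC, abstractmethod

class VendorADispenser:
    def dispense_notes(self, count):           # hypothetical vendor A call
        return f"A dispensed {count} notes"

class VendorBDispenser:
    def cash_out(self, note_count):            # hypothetical vendor B call
        return f"B dispensed {note_count} notes"

class Dispenser(ABC):
    """Middleware-facing interface the ATM application codes against."""
    @abstractmethod
    def dispense(self, count: int) -> str: ...

class VendorAAdapter(Dispenser):
    def __init__(self):
        self._dev = VendorADispenser()
    def dispense(self, count):
        return self._dev.dispense_notes(count)

class VendorBAdapter(Dispenser):
    def __init__(self):
        self._dev = VendorBDispenser()
    def dispense(self, count):
        return self._dev.cash_out(count)

def withdraw(dispenser: Dispenser, count: int) -> str:
    # Application logic stays identical regardless of the vendor hardware.
    return dispenser.dispense(count)
```

Swapping hardware then means swapping the adapter, not rewriting the application, which is essentially the value proposition of XFS middleware described above.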
With the onset of Windows operating systems and XFS on ATMs, the software applications have the ability to become more intelligent. This has created a new breed of ATM applications commonly referred to as programmable applications. These types of applications allow for an entirely new host of applications in which the ATM terminal can do more than only communicate with the ATM switch; it is now able to connect to other content servers and video banking systems. Notable ATM software that operates on XFS platforms includes Triton PRISM, Diebold Agilis EmPower, NCR APTRA Edge, Absolute Systems AbsoluteINTERACT, KAL Kalignite Software Platform, Phoenix Interactive VISTAatm, Wincor Nixdorf ProTopas, Euronet EFTS and Intertech inter-ATM. With the move of ATMs to industry-standard computing environments, concern has risen about the integrity of the ATM's software stack. Impact on labor The number of human bank tellers in the United States increased from approximately 300,000 in 1970 to approximately 600,000 in 2010. Counter-intuitively, a contributing factor may be the introduction of automated teller machines. ATMs let a branch operate with fewer tellers, making it cheaper for banks to open more branches. This likely resulted in more tellers being hired to handle non-automated tasks, but further automation and online banking may reverse this increase. Security Security, as it relates to ATMs, has several dimensions. ATMs also provide a practical demonstration of a number of security systems and concepts operating together and how various security concerns are addressed. Physical Early ATM security focused on making the terminals invulnerable to physical attack; they were effectively safes with dispenser mechanisms. A number of attacks resulted, with thieves attempting to steal entire machines by ram-raiding. 
Since the late 1990s, criminal groups operating in Japan have refined ram-raiding by stealing a truck loaded with heavy construction machinery and using it to demolish or uproot an entire ATM, housing and all, in order to steal its cash. Another attack method, plofkraak, is to seal all openings of the ATM with silicone and fill the vault with a combustible gas or to place an explosive inside, attached, or near the machine. This gas or explosive is ignited and the vault is opened or distorted by the force of the resulting explosion and the criminals can break in. This type of theft has occurred in the Netherlands, Belgium, France, Denmark, Germany, Australia, and the United Kingdom. These types of attacks can be prevented by a number of gas explosion prevention devices, also known as gas suppression systems. These systems use an explosive gas detection sensor to detect explosive gas and to neutralise it by releasing a special explosion suppression chemical which changes the composition of the explosive gas and renders it ineffective. Several attacks in the UK (at least one of which was successful) have involved digging a concealed tunnel under the ATM and cutting through the reinforced base to remove the money. Modern ATM physical security, as with other modern money-handling security, concentrates on denying the use of the money inside the machine to a thief, by using different types of Intelligent Banknote Neutralisation Systems. A common method is to simply rob the staff filling the machine with money. To avoid this, the schedule for filling them is kept secret, varying and random. The money is often kept in cassettes, which will dye the money if incorrectly opened. Transactional secrecy and integrity The security of ATM transactions relies mostly on the integrity of the secure cryptoprocessor: the ATM often uses general commodity components that sometimes are not considered to be "trusted systems". 
Encryption of personal information, required by law in many jurisdictions, is used to prevent fraud. Sensitive data in ATM transactions are usually encrypted with DES, but transaction processors now usually require the use of Triple DES. Remote Key Loading techniques may be used to ensure the secrecy of the initialisation of the encryption keys in the ATM. Message Authentication Code (MAC) or Partial MAC may also be used to ensure messages have not been tampered with while in transit between the ATM and the financial network. Customer identity integrity There have also been a number of incidents of fraud by Man-in-the-middle attacks, where criminals have attached fake keypads or card readers to existing machines. These have then been used to record customers' PINs and bank card information in order to gain unauthorised access to their accounts. Various ATM manufacturers have put in place countermeasures to protect the equipment they manufacture from these threats. Alternative methods of verifying cardholder identities, such as finger and palm vein patterns, iris recognition, and facial recognition technologies, have been tested and deployed in some countries. Cheaper mass-produced equipment has been developed and is being installed in machines globally that detect the presence of foreign objects on the front of ATMs; current tests have shown 99% detection success for all types of skimming devices. Device operation integrity Openings on the customer side of ATMs are often covered by mechanical shutters to prevent tampering with the mechanisms when they are not in use. Alarm sensors are placed inside ATMs and their servicing areas to alert their operators when doors have been opened by unauthorised personnel. To protect against hackers, ATMs have a built-in firewall. Once the firewall has detected malicious attempts to break into the machine remotely, the firewall locks down the machine. 
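The DES/Triple DES-protected PIN handling described under transactional secrecy starts from a standardized clear PIN block. A minimal sketch of ISO 9564 "format 0" construction is shown below; the PIN and PAN values are invented for the example, and in a real device this block is formed and encrypted only inside the secure cryptoprocessor, never exposed in the clear:

```python
# Sketch: build an ISO 9564 format-0 clear PIN block (before encryption).
# The block is the XOR of a padded PIN field with a PAN-derived field.

def pin_block_iso0(pin: str, pan: str) -> str:
    """Return the 16-hex-digit clear PIN block for a 4-12 digit PIN."""
    # PIN field: '0' + PIN length + PIN digits, right-padded with 'F'
    pin_field = f"0{len(pin)}{pin}".ljust(16, "F")
    # PAN field: '0000' + the 12 rightmost PAN digits excluding the check digit
    pan_field = "0000" + pan[:-1][-12:]
    return f"{int(pin_field, 16) ^ int(pan_field, 16):016X}"

block = pin_block_iso0("1234", "4000001234567899")
```

Binding the PIN to the PAN in this way means an intercepted encrypted block cannot simply be replayed against a different card number.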
Rules are usually set by the government or ATM operating body that dictate what happens when integrity systems fail. Depending on the jurisdiction, a bank may or may not be liable when an attempt is made to dispense a customer's money from an ATM and the money either gets outside of the ATM's vault, or was exposed in a non-secure fashion, or they are unable to determine the state of the money after a failed transaction. Customers have often commented that it is difficult to recover money lost in this way, but this is often complicated by the policies regarding suspicious activities typical of the criminal element. Customer security In some countries, multiple security cameras and security guards are a common feature. In the United States, The New York State Comptroller's Office has advised the New York State Department of Banking to have more thorough safety inspections of ATMs in high crime areas. Consultants of ATM operators assert that the issue of customer security should have more focus by the banking industry; it has been suggested that efforts are now more concentrated on the preventive measure of deterrent legislation than on the problem of ongoing forced withdrawals. At least as far back as 30 July 1986, industry consultants have advocated the adoption of an emergency PIN system for ATMs, where the user is able to send a silent alarm in response to a threat. Legislative efforts to require an emergency PIN system have appeared in Illinois, Kansas and Georgia, but none has succeeded yet. In January 2009, Senate Bill 1355 was proposed in the Illinois Senate that revisits the issue of the reverse emergency PIN system. The bill was again supported by the police and opposed by the banking lobby. In 1998, three towns outside Cleveland, Ohio, in response to an ATM crime wave, adopted legislation requiring that an emergency telephone number switch be installed at all outdoor ATMs within their jurisdiction. 
In the wake of a homicide in Sharon Hill, Pennsylvania, the city council passed an ATM security bill as well. In China and elsewhere, many efforts to promote security have been made. On-premises ATMs are often located inside the bank's lobby, which may be accessible 24 hours a day. These lobbies have extensive security camera coverage, a courtesy telephone for consulting with the bank staff, and a security guard on the premises. Bank lobbies that are not guarded 24 hours a day may also have secure doors that can only be opened from outside by swiping the bank card against a wall-mounted scanner, allowing the bank to identify which card enters the building. Most ATMs will also display on-screen safety warnings and may also be fitted with convex mirrors above the display allowing the user to see what is happening behind them. As of 2013, the only claim available about the extent of ATM-connected homicides is that they range from 500 to 1,000 per year in the US, covering only cases where the victim had an ATM card and the card was used by the killer after the known time of death. Jackpotting The term jackpotting describes one method criminals use to steal money from an ATM. The thieves gain physical access through a small hole drilled in the machine. They disconnect the existing hard drive and connect an external drive using an industrial endoscope. They then depress an internal button that reboots the device so that it is now under the control of the external drive. They can then have the ATM dispense all of its cash. Encryption In recent years, many ATMs also encrypt the hard disk. This makes creating the software for jackpotting more difficult and provides more security for the ATM. Uses ATMs were originally developed as cash dispensers, and have evolved to provide many other bank-related functions: Paying routine bills, fees, and taxes (utilities, phone bills, social security, legal fees, income taxes, etc.) 
Printing or ordering bank statements Updating passbooks Cash advances Cheque processing Paying (in full or partially) the credit balance on a card linked to a specific current account Transferring money between linked accounts Deposit currency recognition, acceptance, and recycling In some countries, especially those which benefit from a fully integrated cross-bank network (e.g.: Multibanco in Portugal), ATMs include many functions that are not directly related to the management of one's own bank account, such as: Loading monetary value into stored-value cards Adding pre-paid cell phone / mobile phone credit Purchasing concert tickets, gold, lottery tickets, movie tickets, postage stamps, train tickets, and shopping mall gift certificates Donating to charities Increasingly, banks are seeking to use the ATM as a sales device to deliver pre-approved loans and targeted advertising using products such as ITM (the Intelligent Teller Machine) from NCR's Aptra Relate. ATMs can also act as an advertising channel for other companies. However, several different ATM technologies have not yet reached worldwide acceptance, such as: Videoconferencing with human tellers, known as video tellers Biometrics, where authorization of transactions is based on the scanning of a customer's fingerprint, iris, face, etc. Cheque/cash acceptance, where the machine accepts and recognises cheques and/or currency without using envelopes; expected to grow in importance in the US through Check 21 legislation Bar code scanning On-demand printing of "items of value" (such as movie tickets, traveler's cheques, etc.) Dispensing additional media (such as phone cards) Co-ordination of ATMs with mobile phones Integration with non-banking equipment Games and promotional features CRM through the ATM Videoconferencing teller machines are currently referred to as Interactive Teller Machines. 
Benton Smith, in the Idaho Business Review, writes: "The software that allows interactive teller machines to function was created by a Salt Lake City-based company called uGenius, a producer of video banking software. NCR, a leading manufacturer of ATMs, acquired uGenius in 2013 and married its own ATM hardware with uGenius' video software." Pharmacy dispensing units Reliability Before an ATM is placed in a public place, it typically has undergone extensive testing with both test money and the backend computer systems that allow it to perform transactions. Banking customers also have come to expect high reliability in their ATMs, which provides incentives to ATM providers to minimise machine and network failures. Financial consequences of incorrect machine operation also provide high degrees of incentive to minimise malfunctions. ATMs and the supporting electronic financial networks are generally very reliable, with industry benchmarks typically producing 98.25% customer availability for ATMs and up to 99.999% availability for host systems that manage the networks of ATMs. If ATM networks do go out of service, customers could be left unable to make transactions until their bank next opens. That said, not all errors are to the detriment of customers; there have been cases of machines giving out money without debiting the account, or giving out higher-value notes as a result of banknotes of the wrong denomination being loaded in the money cassettes. The result of receiving too much money may be influenced by the card holder agreement in place between the customer and the bank. Errors that can occur may be mechanical (such as card transport mechanisms; keypads; hard disk failures; envelope deposit mechanisms); software (such as operating system; device driver; application); communications; or purely down to operator error.
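The availability benchmarks quoted above are easier to interpret when converted into permitted downtime per year. The following sketch shows the conversion; the helper name is illustrative, not part of any industry standard.

```python
# Illustrative conversion of availability percentages into downtime per year.
HOURS_PER_YEAR = 365.25 * 24  # = 8766 hours, averaging over leap years

def downtime_hours_per_year(availability: float) -> float:
    """Hours per year a system may be unavailable at a given availability."""
    return (1.0 - availability) * HOURS_PER_YEAR

atm_downtime = downtime_hours_per_year(0.9825)    # ATMs: ~153 hours/year
host_downtime = downtime_hours_per_year(0.99999)  # hosts: ~0.09 hours, about 5 minutes/year
```

The gap illustrates why host systems, which serve whole networks of ATMs, are engineered to a far stricter standard than individual machines.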
To aid in reliability, some ATMs print each transaction to a roll-paper journal that is stored inside the ATM, which allows its users and the related financial institutions to settle disputes based on the records in the journal. In some cases, transactions are posted to an electronic journal to remove the cost of supplying journal paper to the ATM and for more convenient searching of data. Improper money checking can result in a customer receiving counterfeit banknotes from an ATM. While bank personnel are generally better trained at spotting and removing counterfeit cash, the resulting ATM money supplies used by banks provide no guarantee of proper banknotes, as the Federal Criminal Police Office of Germany has confirmed that there are regular incidents of false banknotes being dispensed through ATMs. Some ATMs may be stocked and wholly owned by outside companies, which can further complicate this problem. Bill validation technology can be used by ATM providers to help ensure the authenticity of the cash before it is stocked in the machine; machines with cash recycling capabilities include this technology. In India, whenever a transaction fails at an ATM due to network or technical issues, and the amount is not dispensed despite the account being debited, the banks are required to return the debited amount to the customer within seven working days from the day of receipt of a complaint. Banks are also liable to pay late fees in case of delay in repayment of funds beyond seven days. Fraud As with any device containing objects of value, ATMs and the systems they depend on to function are the targets of fraud. Fraud against ATMs and people's attempts to use them takes several forms. The first known instance of a fake ATM was installed at a shopping mall in Manchester, Connecticut, in 1993.
By modifying the inner workings of a Fujitsu model 7020 ATM, a criminal gang known as the Bucklands Boys stole information from cards inserted into the machine by customers. WAVY-TV reported an incident in Virginia Beach in September 2006 where a hacker, who had probably obtained a factory-default administrator password for a filling station's white-label ATM, caused the unit to assume it was loaded with US$5 bills instead of $20s, enabling himself—and many subsequent customers—to walk away with four times the money withdrawn from their accounts. This type of scam was featured on the TV series The Real Hustle. ATM behaviour can change during what is called "stand-in" time, where the bank's cash dispensing network is unable to access databases that contain account information (possibly for database maintenance). In order to give customers access to cash, customers may be allowed to withdraw cash up to a certain amount that may be less than their usual daily withdrawal limit, but may still exceed the amount of available money in their accounts, which could result in fraud if the customers intentionally withdraw more money than they had in their accounts. Card fraud In an attempt to prevent criminals from shoulder surfing the customer's personal identification number (PIN), some banks draw privacy areas on the floor. For a low-tech form of fraud, the easiest is to simply steal a customer's card along with its PIN. A later variant of this approach is to trap the card inside of the ATM's card reader with a device often referred to as a Lebanese loop. When the customer gets frustrated by not getting the card back and walks away from the machine, the criminal is able to remove the card and withdraw cash from the customer's account, using the card and its PIN. This type of fraud has spread globally. 
Although somewhat replaced in terms of volume by skimming incidents, a re-emergence of card trapping has been noticed in regions such as Europe, where EMV chip and PIN cards have increased in circulation. Another simple form of fraud involves attempting to get the customer's bank to issue a new card and its PIN and stealing them from their mail. By contrast, a newer high-tech method of operating, sometimes called card skimming or card cloning, involves the installation of a magnetic card reader over the real ATM's card slot and the use of a wireless surveillance camera or a modified digital camera or a false PIN keypad to observe the user's PIN. Card data is then cloned into a duplicate card and the criminal attempts a standard cash withdrawal. The availability of low-cost commodity wireless cameras, keypads, card readers, and card writers has made it a relatively simple form of fraud, with comparatively low risk to the fraudsters. In an attempt to stop these practices, countermeasures against card cloning have been developed by the banking industry, in particular by the use of smart cards which cannot easily be copied or spoofed by unauthenticated devices, and by attempting to make the outside of their ATMs tamper evident. Older chip-card security systems include the French Carte Bleue, Visa Cash, Mondex, Blue from American Express and EMV '96 or EMV 3.11. The most actively developed form of smart card security in the industry today is known as EMV 2000 or EMV 4.x. EMV is widely used in the UK (Chip and PIN) and other parts of Europe, but when it is not available in a specific area, ATMs must fall back to using the easy–to–copy magnetic stripe to perform transactions. This fallback behaviour can be exploited. However, the fallback option has been removed on the ATMs of some UK banks, meaning if the chip is not read, the transaction will be declined. 
Card cloning and skimming can be detected by the implementation of magnetic card reader heads and firmware that can read a signature embedded in all magnetic stripes during the card production process. This signature, known as a "MagnePrint" or "BluPrint", can be used in conjunction with common two-factor authentication schemes used in ATM, debit/retail point-of-sale and prepaid card applications. The concept and various methods of copying the contents of an ATM card's magnetic stripe onto a duplicate card to access other people's financial information were well known in the hacking communities by late 1990. In 1996, Andrew Stone, a computer security consultant from Hampshire in the UK, was convicted of stealing more than £1 million by pointing high-definition video cameras at ATMs from a considerable distance and recording the card numbers, expiry dates, etc. from the embossed detail on the ATM cards along with video footage of the PINs being entered. After getting all the information from the videotapes, he was able to produce clone cards which not only allowed him to withdraw the full daily limit for each account, but also allowed him to sidestep withdrawal limits by using multiple copied cards. In court, it was shown that he could withdraw as much as £10,000 per hour by using this method. Stone was sentenced to five years and six months in prison. Related devices A talking ATM is a type of ATM that provides audible instructions so that people who cannot read a screen can independently use the machine, therefore effectively eliminating the need for assistance from an external, potentially malevolent source. All audible information is delivered privately through a standard headphone jack on the face of the machine. Alternatively, some banks such as the Nordea and Swedbank use a built-in external speaker which may be invoked by pressing the talk button on the keypad. 
Information is delivered to the customer either through pre-recorded sound files or via text-to-speech speech synthesis. A postal interactive kiosk may share many components of an ATM (including a vault), but it only dispenses items related to postage. A scrip cash dispenser may have many components in common with an ATM, but it lacks the ability to dispense physical cash and consequently requires no vault. Instead, the customer requests a withdrawal transaction from the machine, which prints a receipt or scrip. The customer then takes this receipt to a nearby sales clerk, who then exchanges it for cash from the till. A teller assist unit (TAU) is distinct in that it is designed to be operated solely by trained personnel and not by the general public, does integrate directly into interbank networks, and usually is controlled by a computer that is not directly integrated into the overall construction of the unit. A Web ATM is an online interface for ATM card banking that uses a smart card reader. All the usual ATM functions are available, except for withdrawing cash. Most banks in Taiwan provide these online services. See also ATM Industry Association (ATMIA) Automated cash handling Banknote counter Cash register EFTPOS Electronic funds transfer Financial cryptography Key management Payroll Phantom withdrawal RAS syndrome Security of Automated Teller Machines Self service Teller system Verification and validation References Further reading Ali, Peter Ifeanyichukwu. "Impact of automated teller machine on banking services delivery in Nigeria: a stakeholder analysis." Brazilian Journal of Education, Technology and Society 9.1 (2016): 64–72. online Bátiz-Lazo, Bernardo. Cash and Dash: How ATMs and Computers Changed Banking (Oxford University Press, 2018). online review Batiz-Lazo, Bernardo. "Emergence and evolution of ATM networks in the UK, 1967–2000." Business History 51.1 (2009): 1-27. online Batiz-Lazo, Bernardo, and Gustavo del Angel. 
The Dawn of the Plastic Jungle: The Introduction of the Credit Card in Europe and North America, 1950-1975 (Hoover Institution, 2016), abstract Bessen, J. Learning by Doing: The Real Connection between Innovation, Wages, and Wealth (Yale UP, 2015) Hota, Jyotiranjan, Saboohi Nasim, and Sasmita Mishra. "Drivers and Barriers to Adoption of Multivendor ATM Technology in India: Synthesis of Three Empirical Studies." Journal of Technology Management for Growing Economies 9.1 (2018): 89-102. online McDysan, David E., and Darren L. Spohn. ATM theory and applications (McGraw-Hill Professional, 1998). Mkpojiogu, Emmanuel OC, and A. Asuquo. "The user experience of ATM users in Nigeria: a systematic review of empirical papers." Journal of Research in National Development (2018). online Primary sources "Interview with Mr. Don Wetzel, Co-Patentee of the Automatic Teller Machine" (1995) online External links The Money Machines: An account of US cash machine history, by Ellen Florian, Fortune.com World Map and Chart of Automated Teller Machines per 100,000 Adults by Lebanese-economy-forum, World Bank data Computer-related introductions in 1967 Automation Banking equipment Banking technology Embedded systems American inventions English inventions Payment systems Articles containing video clips 1967 in economics 20th-century inventions
41429794
https://en.wikipedia.org/wiki/Anastasia%20Ailamaki
Anastasia Ailamaki
Anastasia Ailamaki is a Professor of Computer Sciences at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and the Director of the Data-Intensive Applications and Systems (DIAS) lab. She is also the co-founder of RAW Labs SA, a Swiss company developing real-time analytics infrastructures for heterogeneous big data. Formerly, she was an associate professor of computer science at Carnegie Mellon School of Computer Science. Ailamaki's research interests are in the broad area of database systems and applications, with emphasis on database system behavior on modern processor hardware and disks. Education Ailamaki studied computer science at the University of Patras, and earned her first master's degree at the Technical University of Crete, followed by a second from the University of Rochester. She received her Ph.D. in computer science from the University of Wisconsin-Madison in 2000. Career She is the recipient of ten Best Paper and Best Demo awards and was awarded the Young Investigator Award by the European Science Foundation. In 2013 she received an ERC Consolidator Award for the ViDa: Transforming raw data into information through virtualization project. She is a Fellow of the IEEE and ACM, a member of Academia Europaea, and the Vice Chair of the Special Interest Group on Management of Data (SIGMOD) within the Association for Computing Machinery. She is a member of the Expert Network of the World Economic Forum and a CRA-W mentor. Ailamaki is the author of over 200 peer-reviewed articles published in venues such as the Conference on Innovative Data Systems Research, VLDB, SIGMOD, and ACM Transactions on Database Systems. Honors and awards ACM SIGMOD Edgar F. Codd Innovations Award (2019): the SIGMOD Edgar F. Codd Innovations Award is given for innovative and highly significant contributions of enduring value to the development, understanding, or use of database systems.
NEMITSAS Prize 2018 in Computer Science: the President of the Republic of Cyprus, on behalf of the Takis and Louki Nemitsas Foundation, presents the Nemitsas Prize to one laureate for contributions in his/her scientific field which have been recognized at an international level. IEEE Fellow (since 01/2018): “For contributions to hardware-conscious database systems and scientific data management” ACM Fellow (since 01/2015): “For contributions to the design, implementation, and evaluation of modern database systems” References External links Living people 20th-century births American women computer scientists Greek women computer scientists University of Wisconsin–Madison College of Letters and Science alumni École Polytechnique Fédérale de Lausanne faculty Year of birth missing (living people) Fellows of the Association for Computing Machinery Members of Academia Europaea American computer scientists 21st-century American scientists 21st-century American women scientists Cypriot scientists American women academics
2267488
https://en.wikipedia.org/wiki/Mule%20%28smuggling%29
Mule (smuggling)
A mule or courier is someone who personally smuggles contraband across a border (as opposed to sending by mail, etc.) for a smuggling organization. The organizers employ mules to reduce the risk of getting caught themselves. Methods of smuggling include hiding the goods in vehicles or carried items, attaching them to one's body, or using the body as a container. In the case of transporting illegal drugs, the term drug mule applies. Other slang terms include Kinder Surprise and Easter Egg. Small-scale operations, in which one courier carries one piece or a very small quantity, are sometimes called the ant trade. Techniques Concealment Methods of smuggling include hiding the goods in a large vehicle, luggage, or clothes. In a vehicle, the contraband is hidden in secret compartments. Sometimes the goods are hidden in the bag or vehicle of an innocent person, who does not know about the contraband, for the purpose of retrieving the goods elsewhere. Some contraband is legal to possess but is subject to taxes or other import restrictions, such as second-hand clothes and computers, and the purpose of the smuggling is to get around these restrictions. In this case, smuggling may be done in plain sight, in smaller quantities, so that a suitcase full of used clothes or a new computer can be passed off as a personal possession rather than an importing business. Body packing The practice of transporting goods outside or inside of the body is called body packing. This is done by a person usually called a mule or bait. The contraband is attached to the outside of the body using adhesive tape, glue, or straps, often in such places as between the cheeks of the buttocks or between rolls of fat. Other inconspicuous places, like the soles of cut out shoes, inside belts, or the rim of a hat, were used more often prior to the early 1990s. Due to increased airport security the "body packing" method is rarely used any more. 
Some narcotics-trafficking organizations, such as the Mexican cartels, will deliberately send one or two people with drugs on the outside of their body to be caught, so that the authorities are preoccupied while dozens of mules pass by undetected with drugs inside their body. However, even these diversionary tactics are becoming less prevalent as airport security increases. Swallowing This is often done using a mule's gastrointestinal tract or other body cavities as containers. Swallowing has been used for the transportation of heroin, cocaine, and MDMA/Ecstasy. A swallower typically fills tiny balloons with small quantities of a drug. The balloons may be made with multilayered condoms, fingers of latex gloves, or more sophisticated hollow pellets. One smuggling method involves swallowing the balloons, which are recovered later from the excreted feces. Alternatively, the balloons may be hidden in other natural or artificial body cavities – such as rectum, colostomy, vagina, and mouth – although this method is far more vulnerable to body cavity searches. A drug mule may swallow dozens upon dozens of balloons. The swallower then attempts to cross international borders, excrete the balloons, and sell the drugs. It is most common for the swallower to be making the trip on behalf of a drug lord or drug dealer. Swallowers are often impoverished and agree to transport the drugs in exchange for money or other favors. In rarer cases, the drug dealers can attempt extortion against people by threatening physical harm against friends or family, but the more common practice is for swallowers to willingly accept the job in exchange for big payoffs. As reported in Lost Rights by James Bovard: "Nigerian drug lords have employed an army of 'swallowers', those who will swallow as many as 150 balloons and smuggle drugs into the United States. Given the per capita yearly income of Nigeria is $2,100, Nigerians can collect as much as $15,000 per trip."
Swallowers have been apprehended from a variety of age groups, including adults, teens, and children. Detection and medical treatment Routine detection of the smuggled packets is extremely difficult, and many cases come to light because a packet has ruptured or because of intestinal obstruction. Unruptured packets may sometimes be detected by rectal or vaginal examination, but the only reliable way is by X-ray of the abdomen. Hashish appears denser than stool, cocaine is approximately the same density as stool, while heroin looks like air. An increasingly popular type of swallowing involves having the drug in the form of liquid-filled balloons or condoms/packages. These are impossible to detect unless the airport has high-sensitivity X-Ray equipment, as a liquid mixture of water and the drug will most likely not be detected using a standard X-Ray machine. Most of the major airports in Europe, Canada, and the US have the more sensitive machines. In most cases, it is only necessary to wait for the packets to pass normally, but if a packet ruptures or if there is intestinal obstruction, then it may be necessary to operate and surgically remove the packets. Oil-based laxatives should never be used, as they can weaken the latex of condoms and cause packets to rupture. Emetics like syrup of ipecac, enemas, and endoscopic retrieval all carry a risk of packet rupture and should not be used. Repeat imaging is only necessary if the mule does not know the packet count. Ruptured packets can be fatal and often require treatment as for a drug overdose and may require admission to an intensive care unit. Body packers are not always reliable sources of information about the contents of the packages (either because of fears about information being passed on to law enforcement agencies or because the mule genuinely does not know). Urine toxicology may be necessary to determine what drugs are being carried and what antidotes are needed. 
International incidents China Electronic products such as iPhones sell for less in Hong Kong, one of China's Special Administrative Regions where the tax laws are relaxed. Smugglers buy them in Hong Kong and employ mules who strap iPhones around their waists and ankles, and smuggle them across the border from Hong Kong to Shenzhen. According to Customs Law of China and Smuggling Penalties, a person shall be subject to a criminal charge if found smuggling small quantities of goods three times in one year. The maximum jail sentence is three years. United States The U.S. Supreme Court dealt with body packing in United States v. Montoya De Hernandez. In Hernandez, a woman attempted to smuggle 88 balloons of cocaine in her gastrointestinal tract. She had been detained for over 16 hours by customs inspectors before she finally passed some of the balloons. She was being held because her abdomen was noticeably swollen (she claimed to be pregnant), and a search of her body had revealed that she was wearing two pairs of elastic underpants and had lined her crotch area with paper towels. This is done because balloon swallowing makes bowel movements difficult to control. The woman claimed her Fourth Amendment rights had been violated, but the court found in favor of the border authorities. With regard to traffic from South America to the US, the US Drug Enforcement Administration reports: "Unlike cocaine, heroin is often smuggled by people who swallow large numbers of small capsules (50–90), allowing them to transport up to 1.5 kilograms of heroin." United Kingdom In 2003, over 50% of foreign female prisoners in UK jails were drug mules from Jamaica. Nigerian women make a large contribution to the remaining figure. In all, around 18% of the UK's female jail population are foreigners, 60% of whom are serving sentences for drug-related offences – most of them drug mules. See also Drug Enforcement Administration Illegal drug trade in Colombia Money mule U.S.
Immigration and Customs Enforcement (ICE) United States Border Patrol War on Drugs References Illegal occupations Smuggling Drug control law Illegal drug trade techniques
12162886
https://en.wikipedia.org/wiki/Fuser%20%28Unix%29
Fuser (Unix)
The Unix command fuser is used to show which processes are using a specified computer file, file system, or Unix socket.

Example
For example, to check process IDs and users accessing a USB drive:

 $ fuser -m -u /mnt/usb1
 /mnt/usb1: 1347c(root) 1348c(guido) 1349c(guido)

The command displays the process identifiers (PIDs) of processes using the specified files or file systems. In the default display mode, each PID is followed by a letter denoting the type of access:

 c current directory
 e executable being run
 f open file
 F open file for writing
 r root directory
 m mmap'ed file or shared library

Only the PIDs are written to standard output. Additional information is written to standard error. This makes it easier to process the output with computer programs. The command can also be used to check what processes are using a network port:

 $ fuser -v -n tcp 80
              USER    PID  ACCESS COMMAND
 80/tcp:      root    3067 F.... (root)httpd
              apache  3096 F.... (apache)httpd
              apache  3097 F.... (apache)httpd

The command returns a non-zero code if none of the files are accessed or in case of a fatal error. If at least one access has succeeded, fuser returns zero. The output of "fuser" may be useful in diagnosing "resource busy" messages arising when attempting to unmount filesystems.

Options
POSIX defines the following options:

 -c Treat the file as a mount point.
 -f Only report processes accessing the named files.
 -u Append user names in parentheses to each PID.

psmisc adds the following options, among others:

 -k, --kill Kill all processes accessing a file by sending a SIGKILL. Use e.g. -HUP or -1 to send a different signal.
 -l, --list-signals List all supported signal names.
 -i, --interactive Prompt before killing a process.
 -v, --verbose Verbose mode.
 -a, --all Display all files. Without this option, only files accessed by at least one process are shown.
 -m, --mount Same as -c. Treat all following path names as files on a mounted file system or block device.
All processes accessing files on that file system are listed. Related commands The list of all open files and the processes that have them open can be obtained through the lsof command. The equivalent command on BSD operating systems is . References External links Unix SUS2008 utilities Unix process- and task-management-related software
24883
https://en.wikipedia.org/wiki/CD-i
CD-i
The Compact Disc-Interactive (CD-I, later CD-i) is a digital optical disc data storage format that was mostly developed and marketed by Dutch company Philips. It was created as an extension of CDDA and CD-ROM and specified in the Green Book, co-developed by Philips and Sony, to combine audio, text and graphics. The two companies initially expected to impact the education/training, point of sale, and home entertainment industries, but CD-i eventually became best known for its video games. CD-i media physically have the same dimensions as CD, but with up to 744 MB of digital data storage, including up to 72 minutes of full motion video. CD-i players were usually standalone boxes that connect to a standard television; some less common setups included integrated CD-i television sets and expansion modules for personal computers. Most players were created by Philips; the format was licensed by Philips and Microware for use by other manufacturers, notably Sony who released professional CD-i players under the "Intelligent Discman" brand. Unlike CD-ROM drives, CD-i players are complete computer systems centered around dedicated Motorola 68000-based microprocessors and its own operating system called CD-RTOS, which is an acronym for "Compact Disc – Real Time Operating System". Media released on the format included video games and "edutainment" and multimedia reference titles, such as interactive encyclopedias and museum tours – which were popular before public Internet access was widespread – as well as business software. Philips's CD-i system also implemented Internet features, including subscriptions, web browsing, downloading, e-mail, and online play. Philips's aim with its players was to introduce interactive multimedia content for the general public by combining features of a CD player and game console, but at a lower price than a personal computer with a CD-ROM drive. 
Authoring kits for the format were first released in 1988, and the first player aimed at home consumers, Philips's CDI 910/205, arrived at the end of 1991, initially priced around US$1,000, and capable of playing interactive CD-i discs, Audio CDs, CD+G (CD+Graphics), Photo CDs and Video CDs (VCDs), though the latter required an optional "Digital Video Card" to provide MPEG-1 decoding. Initially marketed to consumers as "home entertainment systems", and in later years as a "gaming platform", CD-i did not manage to find enough success in the market, and was mostly abandoned by Philips in 1996. The format continued to be supported for licensees for a few more years after. Specifications Development of the "Compact Disc-Interactive" format began in 1984 (two years after the launch of Compact Disc) and it was first publicly announced by Philips and Sony – two of the largest electronics companies of the time – at Microsoft's CD-ROM Conference in Seattle in March 1986. Microsoft's CEO Bill Gates had no idea beforehand that the format was under development. The Green Book, formally known as the "CD-i Full Functional Specification", defined the format for interactive, multimedia compact discs designed for CD-i players. The Green Book specification also defines a whole hardware set built around the Motorola 68000 microprocessor family, and an operating system called CD-RTOS based on OS-9, a product of Microware. The standard was originally not freely available and had to be licensed from Philips. However, the 1994 version of the standard was eventually made available free by Philips. CD-i discs conform to the Red Book specification of audio CDs (CD-DA). Tracks on a CD-i's program area can be CD-DA tracks or CD-i tracks, but the first track must always be a CD-i track, and all CD-i tracks must be grouped together at the beginning of the area.
CD-i tracks are structured according to the CD-ROM XA specification (using either Mode 2 Form 1 or Mode 2 Form 2 modes), and have different classes depending on their contents ("data", "video", "audio", "empty" and "message"). "Message" sectors contain audio data to warn users of CD players that the track they are trying to listen to is a CD-i track and not a CD-DA track. The CD-i specification also defines a file system similar to (but not compatible with) ISO 9660 to be used on CD-i tracks, as well as certain specific files that are required to be present in a CD-i compatible disc. Compared to the Yellow Book (specification for CD-ROM), the Green Book CD-i standard solves synchronisation problems by interleaving audio and video information on a single track. The format quickly gained interest from large manufacturers, and received backing from many, particularly Matsushita. Although a joint effort, Philips eventually took over the majority of CD-i development at the expense of Sony. Philips invested many millions in developing titles and players based on the CD-i specification. Initially branded "CD-I", the name was changed in 1991 to "CD-i" with a lowercase i. The CD-i Ready format is a type of bridge format, also designed by Philips, that defines discs compatible with CD Digital audio players and CD-i players. This format puts CD-i software and data into the pregap of Track 1. The CD-i Bridge format, defined in Philips' White Book, is a transitional format allowing bridge discs to be played both on CD-ROM drives and on CD-i players. The CD-i Digital Video format was launched in 1993 containing movies that could be played on CD-i players with a Digital Video Cartridge add-on. The format was incompatible with Video CD (VCD), although a CD-i unit with the DVC could play both formats. Only about 20 movies were released on the format and it was stopped in 1994 in favor of VCD.
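The Mode 2 Form 1 / Form 2 distinction above is signalled per sector by the 4-byte CD-ROM XA subheader (file number, channel number, submode, coding information), where bit 5 of the submode byte selects Form 2. The sketch below illustrates how a reader could classify a sector from that subheader; the function name and return format are illustrative, not taken from the Green Book.

```python
# Illustrative classification of a Mode 2 (CD-ROM XA) sector from its
# 4-byte subheader: [file number, channel number, submode, coding info].
def classify_xa_sector(subheader: bytes) -> dict:
    if len(subheader) < 4:
        raise ValueError("an XA subheader is 4 bytes (stored twice in each sector)")
    submode = subheader[2]
    form2 = bool(submode & 0x20)  # submode bit 5: Form 2 when set
    return {
        "form": 2 if form2 else 1,
        # Form 1 carries 2048 user bytes plus extra error correction;
        # Form 2 drops the ECC in favour of 2324 user bytes, which suits
        # real-time audio/video streams.
        "user_bytes": 2324 if form2 else 2048,
        "video": bool(submode & 0x02),      # submode bit 1
        "audio": bool(submode & 0x04),      # submode bit 2
        "data": bool(submode & 0x08),       # submode bit 3
        "real_time": bool(submode & 0x40),  # submode bit 6
    }

# Example: a real-time Form 2 audio sector, as used for interleaved
# ADPCM audio on a CD-i track (submode 0x64 = real-time | form 2 | audio).
info = classify_xa_sector(bytes([0x00, 0x01, 0x64, 0x7F]))
```

This per-sector signalling is what lets the Green Book interleave audio, video and data sectors on one track, as noted above.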
Commercial software Applications were developed using authoring software produced by OptImage. This included OptImage's Balboa Runtime Libraries and MediaMogul. The second company that produced authoring software was Script Systems; they produced ABCD-I. Much of the CD-i software was promoted and/or published by American Interactive Media (AIM), a joint venture between Philips and its subsidiary PolyGram formed in Los Angeles in 1986, before its public debut, to publish CD-i based consumer software. Similarly in Europe, Philips Interactive Media was launched. Philips at first marketed CD-i as a family entertainment product, and avoided mentioning video games so as not to compete with game consoles. Early software releases focused heavily on educational, music, and self-improvement titles, with only a few games, many of them adaptations of board games such as Connect Four. However, the system was handily beaten in the market for multimedia devices by cheap low-end PCs, and the games were the best-selling software. By 1993 Philips encouraged MS-DOS and console developers to create games, introduced a $250 peripheral with more memory and support for full-motion video, and added a second controller port for multiplayer games to new consoles. The attempts to develop a foothold in the games market were unsuccessful, as the system was designed strictly as a multimedia player and thus was under-powered compared to other gaming platforms on the market in most respects. Earlier CD-i games included entries in popular Nintendo franchises, although those games were not developed by Nintendo. Specifically, a Mario game (titled Hotel Mario), and three Legend of Zelda games were released: Zelda: The Wand of Gamelon, Link: The Faces of Evil and Zelda's Adventure.
Nintendo and Philips had established an agreement to co-develop a CD-ROM enhancement for the Super Nintendo Entertainment System after licensing disagreements between Nintendo and its previous partner Sony (an agreement that produced a prototype console called the SNES-CD). While Philips and Nintendo never released such a CD-ROM add-on, Philips was still contractually allowed to continue using Nintendo characters. As announced at CES 1992, a large number of full motion video titles such as Dragon's Lair and Mad Dog McCree appeared on the system. One of these, Burn:Cycle, is considered one of the stronger CD-i titles and was later ported to PC. The February 1994 issue of Electronic Gaming Monthly remarked that the CD-i's full motion video capabilities were its strongest point, and that nearly all of its best software required the MPEG upgrade card. Philips also released several versions of popular TV game shows for the CD-i, including versions of Jeopardy! (hosted by Alex Trebek), Name That Tune (hosted by Bob Goen), and two versions of The Joker's Wild (one for adults hosted by Wink Martindale and one for kids hosted by Marc Summers). All CD-i games in North America (with the exception of Name That Tune) had Charlie O'Donnell as announcer. A Dutch version of Lingo was also released on the CD-i in 1994. In 1993, American musician Todd Rundgren created the first music-only fully interactive CD, No World Order, for the CD-i. This application allows the user to completely arrange the whole album in their own personal way with over 15,000 points of customization. Dutch eurodance duo 2 Unlimited released a CD-i compilation album in 1994 called "Beyond Limits", which contains standard CD tracks as well as CD-i-exclusive media on the disc. CD-i also had a series of learning games ("edutainment") targeted at children from infancy to adolescence. 
Those intended for a younger audience included Busytown, The Berenstain Bears and various others, which usually had vivid cartoon-like settings accompanied by music and logic puzzles. By mid-1996 the U.S. market for CD-i software had dried up and Philips had given up on releasing titles there, but it continued to publish CD-i games in Europe, where the system still held some popularity from a video gaming perspective. With the home market exhausted, Philips tried with some success to position the technology as a solution for kiosk applications and industrial multimedia. Some homebrew developers have released video games on the CD-i format in later years, such as Frog Feast (2005) and Super Quartet (2018). Player models CD-i compatible models had been released (as of April 1995) in the U.S., Canada, Benelux, France, Germany, the UK, Japan, Singapore and Hong Kong. It was reported that the format would launch in Brazil, India and Australia in the "coming months", with plans to also introduce it in China, South Africa, Indonesia and the Philippines. Philips models In addition to consumer models, professional and development players were sold by Philips Interactive Media Systems and their VARs. The first CD-i system was produced by Philips in collaboration with Kyocera in 1988 – the Philips 180/181/182 modular system. Philips marketed several CD-i player models, as described below. The CD-i player 100 series consisted of the three-unit 180/181/182 professional system, first demonstrated at the CD-ROM Conference in March 1988. The CD-i player 200 series includes the 205, 210, and 220 models. Models in the 200 series were designed for general consumption, and were available at major home electronics outlets around the world. The Philips CDI 910 is the American version of the CDI 205, the most basic model in the series and the first Philips CD-i model, released in December 1991. Originally priced at about $799, within a year the price dropped to $599. 
The CD-i player 300 series includes the 310, 350, 360, and 370 models. The 300 series consists of portable players designed for the professional market and not marketed to home consumers. A popular use was multimedia sales presentations, such as those used by pharmaceutical companies to provide product information to physicians, as the devices could be easily transported by sales representatives. The CD-i player 400 series includes the 450, 470, and 490 models. The 400 models are slimmed-down units aimed at the console and educational markets. The CDI 450 player, for instance, is a budget model designed to compete with game consoles. In this version, an infrared remote controller is not standard but optional, as this model is more gaming-oriented. This series was introduced at CES Chicago in June 1994, and the 450 player retailed at ƒ 799 in the Netherlands. The CD-i player 500 series includes the 550 model, which was essentially the same as the 450 with an installed digital video cartridge. It was introduced at CES Chicago in June 1994. The CD-i player 600 series includes the 601, 602, 604, 605, 615, 660, and 670 models. The 600 series is designed for professional applications and software development. Units in this line generally include support for floppy disk drives, keyboards and other computer peripherals. Some models can also be connected to an emulator and have software testing and debugging features. The CD-i player 700 series consists of the 740 model, the most advanced player, which features an RS-232 port. It was only released in limited quantities. There are also a number of hard-to-categorize models, such as the FW380i, an integrated mini-stereo and CD-i player; the 21TCDi30, a television with a built-in CD-i device; and the CD-i/PC 2.0, a CD-i module with an ISA interface for IBM-compatible 486 PCs. 
Other manufacturers In addition to Philips, several manufacturers produced CD-i players, some of which were still on sale years after Philips itself abandoned the format. Manufacturers included: Magnavox (a Philips subsidiary), which made rebranded players for the American market; GoldStar / LG Electronics, whose LG GDI-700 (c. 1997) was a professional player with a Motorola 68341 processor, faster than the Philips models, and which also made portable players, including a small one without an LCD screen; Digital Video Systems; Memorex; Grundig; Kyocera, which made the portable Pro 1000S model; Maspro Denkoh, which released a GPS car navigation system with a built-in CD-i player in Japan in 1992; Saab Electric; Sony, which produced two models branded Intelligent Discman, hybrid home/portable CD-i players released in 1990-1991 for professional use only; NBS; International Interactive Media (I2m), which in 1995 released a CD-i PCI expansion card for 486 and Pentium PCs as well as 68k- and PowerPC-based Macintosh computers; Vobis Highscreen; Manna Space, whose branded CD-i models (based on Magnavox's or GoldStar's version of the Philips CDI 450) were made for a Japanese travel agency of the same name in 1995; and Bang & Olufsen, which produced a high-end television with a built-in CD-i device (Beocenter AV5), on the market from 1997 to 2001. Before the actual commercial debut of the CD-i format, some other companies had interest in building players, and some made prototypes that were never released – these included Panasonic (originally a major backer of the format), Pioneer, JVC, Toshiba, Epson, Ricoh, Fujitsu, Samsung and Yamaha. In addition, Sanyo showed a prototype portable CD-i player in 1992. Hardware specifications TeleCD-i and CD-MATICS Recognizing the growing need among marketers for networked multimedia, Philips partnered in 1992 with Amsterdam-based CDMATICS to develop TeleCD-i (also TeleCD). 
In this concept, the CD-i player is connected to a network such as the PSTN or the Internet, enabling data communication and rich media presentation. Dutch grocery chain Albert Heijn and mail-order company Neckermann were early adopters and introduced award-winning TeleCD-i applications for their home-shopping and home-delivery services. CDMATICS also developed the special Philips TeleCD-i Assistant and a set of software tools to help the worldwide multimedia industry develop and implement TeleCD-i. TeleCD-i was, at the time of its introduction, the world's first networked multimedia application. In 1996, Philips acquired the source code rights from CDMATICS. CD-Online Internet services on CD-i devices were facilitated by the use of an additional hardware modem and a "CD-Online" disc (renamed Web-i in the US), which Philips initially released in Britain in 1995 for US$150. This service provided the CD-i with full internet access (with a 14.4k modem), including online shopping, email, and support for networked multiplayer gaming on select CD-i games. The service required a CD-i player with a DV cartridge and an "Internet Starter Kit", which initially retailed for £99.99. It was advertised as bringing "full Internet access to the living room on TV screens", and was described by Andy Stout, a writer for the official CD-i magazine. The CD-Online service went live in the UK on October 25, 1995 and in the Netherlands in March 1996 (for 399 guilders), and was also released in Belgium. The system was reportedly scheduled to launch in the US as "Web-i" in August 1996. The domain cd-online.co.uk, which was used for the British CD-Online service, went offline in 2000. The Dutch domain cd-online.nl also stopped updating but remained online until 2007. Only one game was released that supported CD-Online, the first-person shooter RAM Raid. Players from any country in the world could compete against each other as long as they had a copy of the game. 
Reception and market performance Philips had invested heavily in the CD-i format and system, and it was often compared with the Commodore CDTV as a single combination of computer, CD, and television. The product was touted as a single machine for home entertainment connected to a standard TV and controlled by a regular remote control – although the format was noted to have various non-entertainment business applications too, such as travel and tourism or the military. In 1990, Peugeot used CD-i for a point-of-sale application promoting its then-new 605 automobile; it was also used at the time by fellow car manufacturer Renault for staff training programmes, and in Japan by the Ministry of Trade and Industry for an exhibition there. A Philips executive, Gaston Bastiaens, said in 1990 that "CD-I will be 'the medium' for entertainment, education and information in the 90's." Sony introduced its three portable CD-i players in June 1990, pitching them as "picture books with sound". The ambitious CD-i format had initially created much interest after its 1986 announcement, both in the West and in Japan, buoyed by the success of the CD. However, after repeated delays (the hardware was originally intended to be ready and shipped by Christmas 1987) interest was slowly lost. Electronic Arts, for instance, was enthusiastic about CD-i and formed a division for the development of video game titles on the format, but the effort was halted with the intention of resuming once CD-i players reached the market; in the end, the company never resumed CD-i software development after the format's release. The delay also drew more attention to the hyped Digital Video Interactive (DVI), which in 1987 demonstrated full screen, full motion video (FMV) using a compression chip on an IBM PC/AT computer. Amid the attention around its potential rival DVI, Philips and Sony decided to find a way to add full screen FMV capabilities to the CD-i standard, causing further delay. 
Meanwhile, the Microsoft-backed CD-ROM standard was improving and solved certain video playback issues that were present on the CD-i – CD-ROM format products were already on the market by 1987. In the end, the CD-ROM standard benefited from the CD-i and DVI mishaps, and by the time CD-i players for consumers were released in 1991, CD-ROM had already become known and established. Ron Gilbert commented in early 1990: "The CD-I specifications look great, but where are the machines? If they'd come out four years ago, they'd have been hot, but now they're behind the times." Another reason interest faded pre-launch was the fact that CD-i players would not launch with FMV but instead receive it later through a purchasable add-on cartridge (it was originally expected to come built-in) – as well as the obsolete Motorola processor, the OS-9 software, and a launch price considered high. Although Philips had aggressively promoted its CD-i products in the U.S., by August 1993 Computer Gaming World reported that "skepticism persists about its long-term prospects" compared to other platforms like IBM PC compatibles, Apple Macintosh, and Sega Genesis. The magazine stated in January 1994 that despite Philips' new emphasis on games "CD-i is still not the answer for hardcore gamers", but the console "may yet surprise us all in the future". It recommended the CD-i with video cartridge for those needing to buy a new console as "The price is right and there is more software to support it", but said the 3DO Interactive Multiplayer was probably better for those who could wait a few months. The August 1994 issue of Electronic Entertainment noted that neither the CD-i nor the Atari Jaguar had an "effective, let alone innovative" game library to compete against the then newly released Sega CD. After being outsold in the market by cheaper multimedia PCs, in 1994 Philips attempted to emphasize the CD-i as a game playing machine, but this did not help the situation. 
An early 1995 review of the system in GamePro stated that "inconsistent game quality puts the CD-i at a disadvantage against other high-powered game producers." A late 1995 review in Next Generation criticized both Philips's approach to marketing the CD-i and the hardware itself ("The unit excels at practically nothing except FMV, and then only with the addition of a $200 digital video cartridge"). The magazine noted that while Philips had not yet officially discontinued the CD-i, it was dead for all intents and purposes, citing as evidence the fact that though Philips had a large booth at the 1995 Electronic Entertainment Expo, there was no CD-i hardware or software on display. Next Generation scored the console one out of five stars. Another problem for Philips in 1995 was the formation of HDCD, which promised better-quality video compared to Video CD's (VCD) MPEG-1 compression method – Philips had heavily promoted the CD-i's VCD playback capabilities. Philips Media consolidated its CD-i activities from its Los Angeles office in March 1996. It was reported in October 1996 that Philips was ready to "call it quits" in the American market. Sales In October 1994, Philips claimed an installed base of one million units for the CD-i worldwide. In 1996, The Wall Street Journal reported that total US sales amounted to 400,000 units. In the Netherlands, about 60,000 CD-i players had been sold by the end of December 1994. Legacy Although extensively marketed by Philips, notably via infomercial, consumer interest in CD-i titles remained low. By 1994, sales of CD-i systems had begun to slow, and in 1998 the product line was dropped. Plans for a second-generation CD-i system did exist, and Argonaut Software was even commissioned to design chip sets for the CD-i's successor. However, then-president Cor Boonstra saw no future for Philips in the media business, and the company sold off its media operations, including the subsidiary PolyGram. 
The Dutch half of Philips Media was sold to Softmachine, which released The Lost Ride as the last product for the CD-i. Philips then also sold the French half of its gaming subsidiary, Philips Media BV, to French publisher Infogrames in 1997, along with the entire CD-i library. A CD-ROM add-on for the Super NES, announced for development with Nintendo in 1991, was never made. The last CD-i game was made by Infogrames, which released Solar Crusade in 1999. After its discontinuation, the CD-i was overwhelmingly panned by critics, who blasted its graphics, games, and controls. Microsoft CEO Bill Gates admitted that initially he "was worried" about the CD-i due to Philips' heavy support for the device and its two-pronged attack on both the games console and PC markets, but that in retrospect, "It was a device that kind of basically got caught in the middle. It was a terrible game machine, and it was a terrible PC." The CD-i's various controllers were ranked the fifth worst video game controller by IGN editor Craig Harris. PC World ranked it fourth on its list of "The 10 Worst Video Game Systems of All Time". GamePro.com listed it as number four on its list of The 10 Worst-Selling Consoles of All Time. In 2008, CNET listed the system on its list of the worst game consoles ever, and in 2007, GameTrailers ranked the Philips CD-i as the fourth worst console of all time in its Top 10 Worst Consoles lineup. In later years, the CD-i has become best known, infamously, for its video games, particularly those from the Nintendo-licensed The Legend of Zelda series, considered by many to be of poor quality. The games most heavily criticized include Hotel Mario, Link: The Faces of Evil, Zelda: The Wand of Gamelon, and Zelda's Adventure. EGM's Seanbaby rated The Wand of Gamelon as one of the worst video games of all time. However, Burn:Cycle was positively received by critics and has often been held up as the standout title for the CD-i. 
See also CD-i Ready High Sierra Format 3DO Interactive Multiplayer MiniDisc CD-ROM Video CD Super NES CD-ROM Digital Video Interactive Commodore CDTV Pioneer LaserActive Sega CD FM Towns Tandy Video Information System NEC TurboDuo References External links Official Philips CD-I FAQ CD-i history CD-i hardware 1990s toys Audio storage CD-ROM-based consoles Compact disc Computer-related introductions in 1990 Home video game consoles Fourth-generation video game consoles Joint ventures Philips products Sony products Products introduced in 1990 Products and services discontinued in 1998 Regionless game consoles Video storage 68k architecture
https://en.wikipedia.org/wiki/Trishneet%20Arora
Trishneet Arora
Trishneet Arora (born 2 November 1993) is the founder and chief executive officer of TAC Security, a cyber security company. Arora has written books on cyber security, ethical hacking and web defence. He was named in Forbes' 30 Under 30 2018 Asia list and Fortune magazine's 40 Under 40 2019 list of India's brightest business minds. Career Arora founded TAC Security, a cyber security company that provides protection to corporations against network vulnerabilities and data theft. Some of his clients are Reliance Industries, the Central Bureau of Investigation, the Punjab Police and the Gujarat Police. He helps the Punjab and Gujarat police investigate cyber crimes, and has conducted training sessions with officials. Arora's company mainly provides vulnerability assessment and penetration testing services. According to Arora, there has been an increase in the number of attacks against company portals. Arora is also a member of the Forbes Technology Council. Awards and recognition Biographical film Film-maker Sunil Bohra is working on a biographical film about Arora, expected to be released in 2019. Hansal Mehta will direct the movie and is currently working on the story. References External links Official Facebook 1993 births Living people Businesspeople from Ludhiana English-language writers from India Indian technology writers Punjabi people Chief executives of computer security organizations Forbes 30 Under 30 recipients
https://en.wikipedia.org/wiki/Deep%20packet%20inspection
Deep packet inspection
Deep packet inspection (DPI) is a type of data processing that inspects in detail the data being sent over a computer network, and may take actions such as alerting, blocking, re-routing, or logging it accordingly. Deep packet inspection is often used for baselining application behavior, analyzing network usage, troubleshooting network performance, ensuring that data is in the correct format, checking for malicious code, eavesdropping, and internet censorship, among other purposes. There are multiple headers for IP packets; network equipment only needs to use the first of these (the IP header) for normal operation, but use of the second header (such as TCP or UDP) is normally considered to be shallow packet inspection (usually called stateful packet inspection) despite this definition. There are multiple ways to acquire packets for deep packet inspection. Using port mirroring (sometimes called a Span Port) is a very common way, as is physically inserting a network tap, which duplicates and sends the data stream to an analyzer tool for inspection. Deep packet inspection (and filtering) enables advanced network management, user service, and security functions as well as internet data mining, eavesdropping, and internet censorship. Although DPI has been used for Internet management for many years, some advocates of net neutrality fear that the technique may be used anticompetitively or to reduce the openness of the Internet. DPI is used in a wide range of applications: at the so-called "enterprise" level (corporations and larger institutions), in telecommunications service providers, and in governments. Background DPI technology has a long and technologically advanced history, starting in the 1990s, before the technology entered what are seen today as common, mainstream deployments. 
The technology traces its roots back over 30 years, when many of the pioneers contributed their inventions for use among industry participants, through common standards and early innovations such as RMON, Sniffer, and Wireshark. Essential DPI functionality includes analysis of packet headers and protocol fields. For example, Wireshark offers essential DPI functionality through its numerous dissectors that display field names and content and, in some cases, offer interpretation of field values. Some security solutions that offer DPI combine the functionality of an intrusion detection system (IDS) and an intrusion prevention system (IPS) with a traditional stateful firewall. This combination makes it possible to detect certain attacks that neither the IDS/IPS nor the stateful firewall can catch on their own. Stateful firewalls, while able to see the beginning and end of a packet flow, cannot catch events on their own that would be out of bounds for a particular application. While IDSs are able to detect intrusions, they have very little capability to block such an attack. DPI is used to prevent attacks from viruses and worms at wire speeds. More specifically, DPI can be effective against buffer overflow attacks, denial-of-service attacks (DoS), sophisticated intrusions, and the small percentage of worms that fit within a single packet. DPI-enabled devices have the ability to look at Layer 2 and beyond Layer 3 of the OSI model; in some cases, DPI can be invoked to look through Layers 2–7 of the OSI model. This includes headers and data protocol structures as well as the payload of the message. DPI functionality is invoked when a device looks at, or takes other action based on, information beyond Layer 3 of the OSI model. DPI can identify and classify traffic based on a signature database that includes information extracted from the data part of a packet, allowing finer control than classification based only on header information. 
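The two ideas above – looking beyond the second header, and classifying traffic from a signature database built on packet contents – can be sketched together. This is a simplified illustration on a hand-built IPv4/TCP packet, not any real DPI product's code, and the signature table is invented for the example:

```python
import struct

# Illustrative payload signatures -> application labels (not a real database).
SIGNATURES = [
    (b"GET ", "http"),
    (b"\x16\x03", "tls"),                     # TLS handshake record
    (b"BitTorrent protocol", "bittorrent"),
]

def deep_inspect(packet: bytes) -> dict:
    """Shallow (stateful) inspection stops after the IP and TCP headers;
    DPI continues into the payload and classifies it by signature."""
    ihl = (packet[0] & 0x0F) * 4                  # IP header length in bytes
    proto = packet[9]                             # 6 = TCP
    src_port, dst_port = struct.unpack_from("!HH", packet, ihl)
    tcp_len = (packet[ihl + 12] >> 4) * 4         # TCP data offset
    payload = packet[ihl + tcp_len:]              # beyond the second header
    app = next((label for sig, label in SIGNATURES if sig in payload[:68]),
               "unknown")
    return {"proto": proto, "dst_port": dst_port, "app": app}

# Hand-built 20-byte IP header + 20-byte TCP header + HTTP payload.
ip = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 0, 0, 0, 64, 6, 0,
                 bytes([10, 0, 0, 1]), bytes([10, 0, 0, 2]))
tcp = struct.pack("!HHIIBBHHH", 49152, 80, 0, 0, 0x50, 0x18, 8192, 0, 0)
info = deep_inspect(ip + tcp + b"GET / HTTP/1.1\r\n")
print(info)   # {'proto': 6, 'dst_port': 80, 'app': 'http'}
```

Note that a port-only (shallow) classifier would label anything on port 80 as web traffic; the signature match on the payload is what lets DPI catch, say, BitTorrent tunnelled over that port.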
End points can utilize encryption and obfuscation techniques to evade DPI actions in many cases. A classified packet may be redirected, marked/tagged (see quality of service), blocked, rate limited, or reported to a reporting agent in the network. In this way, HTTP errors of different classifications may be identified and forwarded for analysis. Many DPI devices can identify packet flows (rather than performing packet-by-packet analysis), allowing control actions based on accumulated flow information. At the enterprise level Initially, security at the enterprise level was just a perimeter discipline, with a dominant philosophy of keeping unauthorized users out and shielding authorized users from the outside world. The most frequently used tool for accomplishing this has been a stateful firewall. It can permit fine-grained control of access from the outside world to pre-defined destinations on the internal network, as well as permitting access back to other hosts only if a request to the outside world has been made previously. Vulnerabilities exist at network layers, however, that are not visible to a stateful firewall. Also, the increased use of laptops in enterprises makes it more difficult to prevent threats such as viruses, worms, and spyware from penetrating the corporate network, as many users connect their laptops to less-secure networks such as home broadband connections or wireless networks in public locations. Firewalls also do not distinguish between permitted and forbidden uses of legitimately-accessed applications. DPI enables IT administrators and security officials to set policies and enforce them at all layers, including the application and user layers, to help combat those threats. Deep packet inspection is able to detect a few kinds of buffer overflow attacks. DPI may be used by enterprises for data leak prevention (DLP). 
When an e-mail user tries to send a protected file, the user may be given information on how to get the proper clearance to send the file. At network/Internet service providers In addition to using DPI to secure their internal networks, Internet service providers also apply it on the public networks provided to customers. Common uses of DPI by ISPs are lawful intercept, policy definition and enforcement, targeted advertising, quality of service, offering tiered services, and copyright enforcement. Lawful interception Service providers are required by almost all governments worldwide to enable lawful intercept capabilities. Decades ago, in a legacy telephone environment, this was met by creating a traffic access point (TAP) using an intercepting proxy server that connected to the government's surveillance equipment. The acquisition component of this functionality may be provided in many ways, including DPI; DPI-enabled products that are "LI- or CALEA-compliant" can be used – when directed by a court order – to access a user's datastream. Policy definition and enforcement Service providers obligated by service-level agreements with their customers to provide a certain level of service, and at the same time enforce an acceptable use policy, may make use of DPI to implement certain policies that cover copyright infringements, illegal materials, and unfair use of bandwidth. In some countries ISPs are required to perform filtering, depending on the country's laws. DPI allows service providers to "readily know the packets of information you are receiving online—from e-mail, to websites, to sharing of music, video and software downloads". Policies can be defined that allow or disallow connection to or from an IP address, certain protocols, or even heuristics that identify a certain application or behavior. 
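A policy of the kind just described – allow or disallow by address, protocol, or DPI-derived application label – can be sketched as an ordered rule list evaluated first-match-wins. The rule set and field names here are hypothetical:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network
from typing import Optional

@dataclass
class Rule:
    action: str                      # "allow" or "block"
    network: str = "0.0.0.0/0"       # destination prefix the rule covers
    app: Optional[str] = None        # DPI classification label, if any

# Hypothetical policy: block P2P everywhere, block one subnet, allow the rest.
POLICY = [
    Rule("block", app="bittorrent"),
    Rule("block", network="203.0.113.0/24"),
    Rule("allow"),
]

def decide(dst_ip: str, app: str) -> str:
    """First matching rule wins; the trailing catch-all makes the default explicit."""
    for rule in POLICY:
        if rule.app is not None and rule.app != app:
            continue
        if ip_address(dst_ip) not in ip_network(rule.network):
            continue
        return rule.action
    return "allow"

print(decide("198.51.100.7", "http"))        # allow
print(decide("198.51.100.7", "bittorrent"))  # block
print(decide("203.0.113.5", "http"))         # block
```

Ordering matters: putting the catch-all "allow" first would short-circuit every other rule, which is why real policy engines evaluate rules in a defined priority order.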
Targeted advertising Because ISPs route the traffic of all of their customers, they are able to monitor web-browsing habits in a very detailed way, allowing them to gain information about their customers' interests, which can be used by companies specializing in targeted advertising. At least 100,000 United States customers have been tracked this way, and as many as 10% of U.S. customers may have been tracked in this way. Technology providers include NebuAd, Front Porch, and Phorm. U.S. ISPs monitoring their customers include Knology and Wide Open West. In addition, the United Kingdom ISP British Telecom has admitted testing solutions from Phorm without its customers' knowledge or consent. Quality of service DPI can be used against net neutrality. Applications such as peer-to-peer (P2P) traffic present increasing problems for broadband service providers. Typically, P2P traffic is used by applications that do file sharing. These may be any kind of files (e.g. documents, music, videos, or applications). Due to the frequently large size of media files being transferred, P2P drives increasing traffic loads, requiring additional network capacity. Service providers say a minority of users generate large quantities of P2P traffic and degrade performance for the majority of broadband subscribers, who use applications such as e-mail or Web browsing, which consume less bandwidth. Poor network performance increases customer dissatisfaction and leads to a decline in service revenues. DPI allows operators to oversell their available bandwidth while ensuring equitable bandwidth distribution to all users by preventing network congestion. Additionally, a higher priority can be allocated to a VoIP or video conferencing call, which requires low latency, versus web browsing, which does not. This is the approach that service providers use to dynamically allocate bandwidth according to traffic that is passing through their networks. 
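The prioritization just described – latency-sensitive VoIP and video calls ahead of web browsing, bulk P2P last – can be sketched as a class-to-priority mapping applied with a stable sort, so packets within the same class keep their arrival order. The class names and priority values here are illustrative, not a standard:

```python
# Lower number = scheduled sooner (illustrative classes, not a standard).
PRIORITY = {"voip": 0, "video-conference": 0, "web": 1, "email": 1, "p2p": 2}

def schedule(queue):
    """Order packets by traffic class. Python's sort is stable, so
    equal-priority packets stay in arrival order."""
    return sorted(queue, key=lambda pkt: PRIORITY.get(pkt["app"], 1))

queue = [{"id": 1, "app": "p2p"}, {"id": 2, "app": "web"},
         {"id": 3, "app": "voip"}, {"id": 4, "app": "p2p"}]
print([pkt["id"] for pkt in schedule(queue)])   # [3, 2, 1, 4]
```

Real schedulers use weighted queues rather than re-sorting a buffer, but the idea is the same: the DPI classification label, not the arrival order alone, decides which traffic goes out first.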
Tiered services Mobile and broadband service providers use DPI as a means to implement tiered service plans, to differentiate "walled garden" services from "value added", "all-you-can-eat" and "one-size-fits-all" data services. By being able to charge for a "walled garden", per application, per service, or "all-you-can-eat" rather than a "one-size-fits-all" package, the operator can tailor its offering to the individual subscriber and increase its average revenue per user (ARPU). A policy is created per user or user group, and the DPI system in turn enforces that policy, allowing the user access to different services and applications. Copyright enforcement ISPs are sometimes requested by copyright owners or required by courts or official policy to help enforce copyrights. In 2006, one of Denmark's largest ISPs, Tele2, was given a court injunction and told it must block its customers from accessing The Pirate Bay, a launching point for BitTorrent. Instead of prosecuting file sharers one at a time, the International Federation of the Phonographic Industry (IFPI) and the big four record labels EMI, Sony BMG, Universal Music, and Warner Music have sued ISPs such as Eircom for not doing enough to protect their copyrights. The IFPI wants ISPs to filter traffic to remove illicitly uploaded and downloaded copyrighted material from their networks, despite European directive 2000/31/EC clearly stating that ISPs may not be put under a general obligation to monitor the information they transmit, and directive 2002/58/EC granting European citizens a right to privacy of communications. The Motion Picture Association of America (MPAA), which enforces movie copyrights, has taken the position with the Federal Communications Commission (FCC) that network neutrality could hurt anti-piracy techniques such as deep packet inspection and other forms of filtering. Statistics DPI allows ISPs to gather statistical information about use patterns by user group. 
For instance, it might be of interest whether users with a 2 Mbit connection use the network differently from users with a 5 Mbit connection. Access to trend data also helps network planning. By governments In addition to using DPI for the security of their own networks, governments in North America, Europe, and Asia use DPI for various purposes such as surveillance and censorship. Many of these programs are classified. United States FCC adopts Internet CALEA requirements: The FCC, pursuant to its mandate from the U.S. Congress, and in line with the policies of most countries worldwide, has required that all telecommunication providers, including Internet services, be capable of supporting the execution of a court order to provide real-time communication forensics of specified users. In 2006, the FCC adopted new Title 47, Subpart Z, rules requiring Internet access providers to meet these requirements. DPI was one of the platforms essential to meeting this requirement and has been deployed for this purpose throughout the U.S. The National Security Agency (NSA), with cooperation from AT&T Inc., has used deep packet inspection to make internet traffic surveillance, sorting, and forwarding more intelligent. DPI is used to determine which packets are carrying e-mail or a Voice over Internet Protocol (VoIP) telephone call. Traffic associated with AT&T's Common Backbone was "split" between two fibers, dividing the signal so that 50 percent of the signal strength went to each output fiber. One of the output fibers was diverted to a secure room; the other carried communications on to AT&T's switching equipment. The secure room contained Narus traffic analyzers and logic servers; Narus states that such devices are capable of real-time data collection (recording data for consideration) and capture at 10 gigabits per second. Certain traffic was selected and sent over a dedicated line to a "central location" for analysis. According to an affidavit by expert witness J. 
Scott Marcus, a former senior advisor for Internet Technology at the US Federal Communications Commission, the diverted traffic "represented all, or substantially all, of AT&T’s peering traffic in the San Francisco Bay area", and thus, "the designers of the ... configuration made no attempt, in terms of location or position of the fiber split, to exclude data sources primarily of domestic data". Narus's Semantic Traffic Analyzer software, which runs on IBM or Dell Linux servers using DPI, sorts through IP traffic at 10Gbit/s to pick out specific messages based on a targeted e-mail address, IP address or, in the case of VoIP, telephone number. President George W. Bush and Attorney General Alberto R. Gonzales have asserted that they believe the president has the authority to order secret intercepts of telephone and e-mail exchanges between people inside the United States and their contacts abroad without obtaining a FISA warrant. The Defense Information Systems Agency has developed a sensor platform that uses Deep Packet Inspection. China The Chinese government uses Deep Packet Inspection to monitor and censor network traffic and content that it claims is harmful to Chinese citizens or state interests. This material includes pornography, information on religion, and political dissent. Chinese network ISPs use DPI to check whether any sensitive keywords are passing through their networks; if so, the connection is cut. People within China often find themselves blocked while accessing Web sites containing content related to Taiwanese and Tibetan independence, Falun Gong, the Dalai Lama, the Tiananmen Square protests and massacre of 1989, political parties that oppose that of the ruling Communist party, or a variety of anti-Communist movements, as those materials have already been flagged as DPI-sensitive keywords. China previously blocked all VoIP traffic in and out of the country, but many VoIP applications now function in China. 
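The keyword-matching approach described above can be sketched in a few lines. This is a purely illustrative toy (the keyword list and function name are invented for the example); real deployments reassemble TCP streams and cut matching connections, typically by injecting TCP RST packets, all of which is omitted here.

```python
# Toy sketch of keyword-based DPI filtering. The keyword list is invented
# for illustration; a real system would reassemble TCP streams and reset
# matching connections rather than scan isolated payloads.
SENSITIVE_KEYWORDS = [b"banned-topic-1", b"banned-topic-2"]

def should_cut_connection(payload: bytes) -> bool:
    """Return True if the payload contains a sensitive keyword,
    i.e. the connection would be cut."""
    return any(keyword in payload for keyword in SENSITIVE_KEYWORDS)
```

For example, a request whose path contains a listed keyword would be flagged, while unrelated traffic passes through unmodified.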
Voice traffic in Skype is unaffected, although text messages are subject to filtering, and messages containing sensitive material, such as curse-words, are simply not delivered, with no notification provided to either participant in the conversation. China also blocks visual media sites such as YouTube.com and various photography and blogging sites. Iran The Iranian government purchased a system, reportedly for deep packet inspection, in 2008 from Nokia Siemens Networks (NSN), a joint venture of Siemens AG, the German conglomerate, and Nokia Corp., the Finnish cell telephone company (NSN has since become Nokia Solutions and Networks), according to a report in the Wall Street Journal in June 2009, quoting NSN spokesperson Ben Roome. According to unnamed experts cited in the article, the system "enables authorities to not only block communication but to monitor it to gather information about individuals, as well as alter it for disinformation purposes". The system was purchased by the Telecommunication Infrastructure Co., part of the Iranian government's telecom monopoly. According to the Journal, NSN "provided equipment to Iran last year under the internationally recognized concept of 'lawful intercept,' said Mr. Roome. That relates to intercepting data for the purposes of combating terrorism, child pornography, drug trafficking, and other criminal activities carried out online, a capability that most if not all telecom companies have, he said.... The monitoring center that Nokia Siemens Networks sold to Iran was described in a company brochure as allowing 'the monitoring and interception of all types of voice and data communication on all networks.' The joint venture exited the business that included the monitoring equipment, what it called 'intelligence solution,' at the end of March, by selling it to Perusa Partners Fund 1 LP, a Munich-based investment firm, Mr. Roome said. He said the company determined it was no longer part of its core business. 
The NSN system followed on purchases by Iran from Secure Computing Corp. earlier in the decade. David Isenberg, an independent Washington, D.C.-based analyst and Cato Institute adjunct scholar, has questioned the reliability of the Journal report, noting that Mr. Roome has denied the quotes attributed to him and that Isenberg had similar complaints about one of the same Journal reporters in an earlier story. NSN has issued the following denial: NSN "has not provided any deep packet inspection, web censorship or Internet filtering capability to Iran". A concurrent article in The New York Times stated the NSN sale had been covered in a "spate of news reports in April [2009], including The Washington Times," and reviewed censorship of the Internet and other media in the country, but did not mention DPI. According to Walid Al-Saqaf, the developer of the internet censorship circumventor Alkasir, Iran was using deep packet inspection in February 2012, bringing internet speeds in the entire country to a near standstill. This briefly eliminated access to tools such as Tor and Alkasir. Russian Federation DPI is not yet mandated in Russia. Federal Law No.139 enforces blocking websites on the Russian Internet blacklist using IP filtering, but does not force ISPs into analyzing the data part of packets. Yet some ISPs still use different DPI solutions to implement blacklisting. For 2019, the governmental agency Roskomnadzor planned a nationwide rollout of DPI following a pilot project in one of the country's regions, at an estimated cost of 20 billion roubles (US$300M). Some human rights activists consider deep packet inspection contrary to Article 23 of the Constitution of the Russian Federation, though a legal process to prove or refute that has never taken place. Singapore The city-state reportedly employs deep packet inspection of Internet traffic. 
Syria The state reportedly employs deep packet inspection of Internet traffic, to analyze and block forbidden transit. Malaysia The incumbent Malaysian government, headed by Barisan Nasional, was said to be using DPI against a political opponent during the run-up to the 13th general elections held on 5 May 2013. The purpose of DPI, in this instance, was to block and/or hinder access to selected websites, e.g. Facebook accounts, blogs and news portals. Egypt Egypt reportedly began employing DPI in 2015, though officials of the Egyptian National Telecom Regulatory Authority (NTRA) consistently denied it. The practice came to public attention when the country blocked the encrypted messaging app Signal, as announced by the application's developer. In April 2017, all VoIP applications, including FaceTime, Facebook Messenger, Viber, WhatsApp calls and Skype, were blocked in the country. Vietnam Vietnam launched its network security center and required ISPs to upgrade their hardware systems to use deep packet inspection to block Internet traffic. Net neutrality People and organizations concerned about privacy or network neutrality find inspection of the content layers of the Internet protocol to be offensive, saying for example, "the 'Net was built on open access and non-discrimination of packets!" Critics of network neutrality rules, meanwhile, call them "a solution in search of a problem" and say that net neutrality rules would reduce incentives to upgrade networks and launch next-generation network services. Deep packet inspection is considered by many to undermine the infrastructure of the internet. Encryption and tunneling subverting DPI With increased use of HTTPS and privacy tunneling using VPNs, the effectiveness of DPI is coming into question. In response, many web application firewalls now offer HTTPS inspection, where they decrypt HTTPS traffic to analyse it. 
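In essence, the inspecting middlebox decrypts the client-facing session, examines the plaintext, and re-encrypts for the server-facing session. A toy model of that pipeline is sketched below, with a trivial XOR "cipher" standing in for the two TLS sessions; every name here is invented for illustration and none of this resembles a real TLS implementation.

```python
# Toy model of HTTPS inspection: a trivial XOR "cipher" stands in for the
# client<->firewall and firewall<->server TLS sessions. Purely illustrative.

def toy_crypt(data: bytes, key: int) -> bytes:
    """XOR is symmetric, so this both 'encrypts' and 'decrypts'."""
    return bytes(b ^ key for b in data)

CLIENT_KEY, SERVER_KEY = 0x21, 0x42  # stand-ins for the two session keys

def firewall_forward(ciphertext_from_client: bytes) -> bytes:
    # Decrypt the client-side session, inspect the plaintext, then
    # re-encrypt for the server-side session.
    plaintext = toy_crypt(ciphertext_from_client, CLIENT_KEY)
    if b"blocked" in plaintext:          # policy check on cleartext content
        raise ValueError("policy violation")
    return toy_crypt(plaintext, SERVER_KEY)
```

The point of the sketch is that the middlebox sees full plaintext even though both hops are "encrypted", which is exactly why the client must trust the firewall's own CA certificate.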
The WAF can either terminate the encryption, so the connection between WAF and client browser uses plain HTTP, or re-encrypt the data using its own HTTPS certificate, which must be distributed to clients beforehand. The techniques used in HTTPS/SSL inspection (also known as HTTPS/SSL interception) are the same as those used by man-in-the-middle (MiTM) attacks. It works like this:
1. The client wants to connect to https://www.targetwebsite.com
2. Traffic goes through the firewall or security product
3. The firewall works as a transparent proxy
4. The firewall creates an SSL certificate signed by its own "CompanyFirewall CA"
5. The firewall presents this "CompanyFirewall CA"-signed certificate to the client (not the targetwebsite.com certificate)
6. At the same time, the firewall connects to https://www.targetwebsite.com on its own
7. targetwebsite.com presents its officially signed certificate (signed by a trusted CA)
8. The firewall checks the certificate trust chain on its own
9. The firewall now works as a man-in-the-middle: traffic from the client is decrypted (with key exchange information from the client), analysed (for harmful traffic, policy violations or viruses), encrypted (with key exchange information from targetwebsite.com) and sent to targetwebsite.com; traffic from targetwebsite.com is likewise decrypted (with key exchange information from targetwebsite.com), analysed (as above), re-encrypted (with key exchange information from the client) and sent to the client.
The firewall product can read all information exchanged between the SSL client and the SSL server (targetwebsite.com). This can be done with any TLS-terminated connection (not only HTTPS), as long as the firewall product can modify the TrustStore of the SSL client. Infrastructure security Traditionally, the mantra that has served ISPs well has been to operate only at layer 4 and below of the OSI model. This is because simply deciding where packets go and routing them is comparatively easy to handle securely. 
This traditional model still allows ISPs to accomplish required tasks safely, such as restricting bandwidth depending on the amount of bandwidth that is used (layer 4 and below) rather than per protocol or application type (layer 7). There is a strong and often ignored argument that ISP action above layer 4 of the OSI model provides what are known in the security community as 'stepping stones', or platforms from which to conduct man-in-the-middle attacks. This problem is exacerbated by ISPs often choosing cheaper hardware with poor security track records for the very difficult, and arguably impossible to secure, task of deep packet inspection. OpenBSD's packet filter specifically avoids DPI for the very reason that it cannot be done securely with confidence. This means that DPI-dependent security services, such as TalkTalk's former HomeSafe implementation, trade the security of a few (who are often already protectable in many more effective ways) for decreased security for all, while leaving users far less able to mitigate the risk. The HomeSafe service in particular is opt-in for blocking, but its DPI cannot be opted out of, even for business users. Software nDPI (a fork of OpenDPI, now end-of-life, maintained by the developers of ntop) is the open-source version for non-obfuscated protocols. PACE, another such engine, includes obfuscated and encrypted protocols, which are the types associated with Skype or encrypted BitTorrent. As OpenDPI is no longer maintained, an OpenDPI fork named nDPI has been created; it is actively maintained and extended with new protocols, including Skype, Webex, Citrix and many others. L7-Filter is a classifier for Linux's Netfilter that identifies packets based on application-layer data. It can classify packets such as Kazaa, HTTP, Jabber, Citrix, Bittorrent, FTP, Gnucleus, eDonkey2000, and others. It classifies streaming, mailing, P2P, VoIP protocols, and gaming applications. 
The software has been retired and replaced by the open source Netify DPI Engine. Hippie (Hi-Performance Protocol Identification Engine) is an open source project which was developed as a Linux kernel module. It was developed by Josh Ballard. It supports both DPI as well as firewall functionality. The SPID (Statistical Protocol IDentification) project is based on statistical analysis of network flows to identify application traffic. The SPID algorithm can detect the application layer protocol (layer 7) by signatures (a sequence of bytes at a particular offset in the handshake), by analyzing flow information (packet sizes, etc.) and payload statistics (how frequently byte values occur, as a way to measure entropy) from pcap files. It is a proof-of-concept application and currently supports approximately 15 applications/protocols such as eDonkey Obfuscation traffic, Skype UDP and TCP, BitTorrent, IMAP, IRC, MSN, and others. Tstat (TCP STatistic and Analysis Tool) provides insight into traffic patterns and gives details and statistics for numerous applications and protocols. Libprotoident introduces Lightweight Packet Inspection (LPI), which examines only the first four bytes of payload in each direction. This minimizes privacy concerns while decreasing the disk space needed to store the packet traces necessary for the classification. Libprotoident supports over 200 different protocols and the classification is based on a combined approach using payload pattern matching, payload size, port numbers, and IP matching. A French company called Amesys designed and sold an intrusive, mass internet monitoring system, Eagle, to Muammar Gaddafi. Comparison A comprehensive comparison of various network traffic classifiers, which depend on Deep Packet Inspection (PACE, OpenDPI, 4 different configurations of L7-filter, NDPI, Libprotoident, and Cisco NBAR) is shown in the Independent Comparison of Popular DPI Tools for Traffic Classification. 
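Two of the classification techniques described above, Libprotoident's first-four-bytes matching and SPID-style byte-frequency entropy, can be sketched as follows. The signature table is illustrative only, not either tool's actual rule set.

```python
import math
from collections import Counter

# Illustrative 4-byte payload signatures (not Libprotoident's real rules).
SIGNATURES = {
    b"\x13Bit": "BitTorrent handshake",  # 0x13 length prefix + "Bit..."
    b"GET ": "HTTP request",
    b"SSH-": "SSH banner",
}

def classify_first_bytes(payload: bytes) -> str:
    """Lightweight Packet Inspection: look only at the first four bytes."""
    return SIGNATURES.get(payload[:4], "unknown")

def payload_entropy(payload: bytes) -> float:
    """Shannon entropy in bits per byte, computed from byte frequencies.
    Encrypted or compressed payloads score close to 8."""
    counts = Counter(payload)
    n = len(payload)
    return -sum(c / n * math.log2(c / n) for c in counts.values())
```

High payload entropy on an unexpected port is one heuristic signal that a flow is an obfuscated or encrypted protocol, which is why statistical classifiers combine it with signature and flow-size features.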
Hardware Greater emphasis is being placed on deep packet inspection in light of the rejection of the SOPA and PIPA bills. Many current DPI methods are slow and costly, especially for high-bandwidth applications. More efficient methods of DPI are being developed. Specialized routers are now able to perform DPI; routers armed with a dictionary of programs can help identify the purposes behind the LAN and internet traffic they are routing. Cisco Systems is now on its second iteration of DPI-enabled routers, with its announcement of the Cisco ISR G2 router. See also
Common carrier
Data Retention Directive
Deep content inspection
ECHELON
Firewall
Foreign Intelligence Surveillance Act
Golden Shield
Intrusion prevention system
Network neutrality
NSA warrantless surveillance controversy
Packet analyzer
Stateful firewall
Theta Networks
Wireshark
References External links
What is "Deep Inspection"? by Marcus J. Ranum. Retrieved 10 December 2018.
A collection of essays from industry experts
What Is Deep Packet Inspection and Why the Controversy
White Paper "Deep Packet Inspection – Technology, Applications & Net Neutrality"
Egypt's cyber-crackdown aided by US Company - DPI used by Egyptian government in recent internet crackdown
Deep Packet Inspection puts its stamp on an evolving Internet
Deep Packet Inspection Using Quotient Filter
Computer network security Internet censorship in China Internet censorship Internet privacy Net neutrality Packets (information technology)
9952783
https://en.wikipedia.org/wiki/PGA%20Tour%20Golf%20Team%20Challenge
PGA Tour Golf Team Challenge
PGA Tour Golf Team Challenge is a trackball-based golf arcade game series manufactured by Global VR of San Jose, California. Based on the PC version of EA Sports' PGA Tour game, the game is run from a computer within the cabinet which has an Intel Pentium 4 processor and Nvidia GeForce video card. The current - and final - edition of the PGA series is titled PGA Tour Golf ‘Team Challenge’. A player can select from a number of real PGA pro golfers & PGA tour courses in either 9 or 18 holes, against up to 3 other players or in team play of up to 4 players per team. The game has 3 main modes of play: ‘Play Golf’ mode (does not require a players’ card or online capabilities) has 13 eighteen-hole courses, but if the machine is online/tournament enabled, there's a 14th ‘bonus’ course which changes every month. There are training courses and a driving range here as well as a front 9 ‘course’ and back 9 ‘course’ made up of various Fantasy course holes. ‘World Tour’ mode provides access to all 24 courses, each with two hidden/secret skill-shots which unlock customizable items for your created golfer. This mode keeps track of your personal best course scores, and once all 24 courses are complete, a ranking is assigned using your combined average score - Rookie, Amateur, Scratch, Club Pro, Pro, Champion & Legend. You can continue to improve upon your scores to gain a higher ranking. ‘Tournaments’ run weekly, featuring one of the 24 courses within the game. Tournament leaderboards are displayed during the machine's ‘attract mode’ screens. Online play (Tournaments and World Tour) requires the Global VR 2nd generation "Smart Card" reader (recognizable by two LED lights on the front) and accompanying Players’ Card Format for saving created golfers as well as World Tour stats & unlockable items. The original magnetic stripe card reader from the first three editions of PGA Tour Golf (lacks LED lights on the front) is not compatible with ‘Challenge’ and ‘Team Challenge’ editions. 
History
2002 - EA Sports' PGA Tour Golf is released, running on a DFI computer featuring 512 MB RAM, 20 GB hard drive, P4 processor and Nvidia GeForce 4 video card. 7 courses included: Pebble Beach, The Prince Course, Royal Birkdale, Spyglass Hill, TPC Sawgrass, Timber Hill & Scorpion Ridge. Tournaments are introduced with the version 1.2 update, October ‘02.
August 2003 - Championship edition adds seven new courses (Sahalee, St. Andrews, Poppy Hills, TPC Scottsdale, Highlands, Predator, Timber Hill 2, Scorpion Ridge 2), weather effects, time of day, and lefty golfers.
September 2003 - Global VR offers conversion kits for sale to operators who are unhappy with the frequent and expensive Golden Tee upgrades, to convert their cabinets to PGA Tour.
November 2003 - Championship II adds 5 new courses (including Bay Hill). The upgrade computer, ‘Everlast’, is introduced. Operators can run their own tournaments as well as Global VR tournaments.
March 2004 - Championship III adds 4 new courses (Kapalua, Coeur d’Alene, Red Mountain Creek, Emerald Dragon), bringing the count to 19, and introduces World Tour mode. The top WT players go to the $100,000 North American Championship in Las Vegas. Tournaments are increased to bi-weekly instead of monthly.
April 2005 - Challenge Edition brings the course number up to 22 (Great White North, Bethpage Black, Edgewood Tahoe, Harbour Town, Pinehurst 2, Sherwood, TPC Avenel, Troon North Monument, Turnberry). Introduces new card readers which utilize the smart card chip in players' cards. Adds the Game-Face feature and Plus + Points. Also, a new dedicated cabinet is offered which has the Everlast computer included.
2006 - Team Challenge 2006 brings the course count up to 24. Adds team play.
2007 - Global VR ends the PGA Golf series due to the expensive licensing agreement with EA Sports as well as years of litigation costs defending against Golden Tee's makers in court. 
July 2008 - Tournament support is ended and the All-Access Pass is offered in its place, unlocking all 24 courses in Team Challenge without the need for machines to be tournament enabled.
Courses Note: Some courses are only available to tournament (online) enabled machines.
Fantasy Courses
The Greek Isles
The Great White North
Red Mountain Creek
Black Rock Cove
Emerald Dragon
The Highlands
The Predator
Real Courses
Bethpage Black
Edgewood Tahoe
Harbour Town Golf Links
Pumpkin Ridge
Coghill
Pasatiempo Golf Club
Reflection Bay
Troon North Golf Club - Monument Course
Turnberry Golf Club - Ailsa Course
Bay Hill Club & Lodge
Coeur d’Alene Resort Course
Colonial Country Club
Kapalua Plantation Course
Sahalee Country Club
St Andrews Links
TPC at Sawgrass
Troon North Golf Club - Pinnacle Course
Golfers Available
Stuart Appleby
Retief Goosen
Natalie Gulbis
Justin Rose
John Daly
Jim Furyk
Colin Montgomerie
Vijay Singh
Since the game has the PC version of Tiger Woods Golf embedded within, pros who were unlicensed for the coin-op version can be substituted in place of an existing golfer with simple modifications within the game's files (Tiger Woods, Jack Nicklaus, Ben Hogan, Adam Scott and Jesper Parnevik, for example). Hardware & Upgrades The series originally launched with a beige ‘DFI’ computer within. As the series continued, Global VR provided operators with new software on compact discs (to be installed on the computer's hard drive) featuring the latest edition of PGA Tour Golf. The hardware demands of each new edition increased, and Global VR provided a computer upgrade option: the superior black ‘Everlast’ computer. Many operators passed on spending on hardware upgrades, so subsequent editions of the series may have seen noticeable lag in gameplay as a result. The DFI computer has an nb32 motherboard with a maximum capable processor upgrade of a 2.8 GHz Intel Pentium 4 SL7EY 512/400 MHz socket 478N. 
The Everlast computer has a PS35-BL motherboard with a maximum capable processor upgrade of a 3.4 GHz Intel Pentium 4 SL7PP 865/875 socket 478. Both computers require an AGP (not PCI-e) type video card; an Nvidia GeForce 5700 - 7900 is recommended. Also, both computers will perform better if the onboard memory is increased to 1 GB of RAM or more. The compatible memory for the DFI is PC100/PC133 SDRAM, at 512 MB per memory stick; the DFI motherboard has 3 memory slots, for up to 1.5 GB of memory. References 2006 video games Arcade video games Arcade-only video games EA Sports games Golf video games Multiplayer and single-player video games PGA Tour Trackball video games Video games developed in the United States
24529387
https://en.wikipedia.org/wiki/InterSystems
InterSystems
InterSystems Corporation is a privately held vendor of software systems and technology for high-performance database management, rapid application development, integration, and healthcare information systems. The vendor's products include InterSystems IRIS Data Platform, Caché Database Management System, the InterSystems Ensemble integration platform, the HealthShare healthcare informatics platform and TrakCare healthcare information system, which is sold outside the United States. InterSystems is based in Cambridge, Massachusetts. The company's revenue was $727 million in 2019. History InterSystems was founded in 1978 by Phillip T. (Terry) Ragon, its current CEO. The firm was one of the original vendors of M-technology (aka MUMPS) systems, with a product called ISM. Over the years, it acquired several other MUMPS implementations: DTM from Data Tree (1993); DSM from Digital (1995); and MSM from Micronetics (1998); making InterSystems the dominant M technology vendor. The firm eventually started combining features from these products into one they called OpenM, then consolidated the technologies into a product, Caché, in 1997. At that time they stopped new development for all of their legacy M-based products (although the company still supports existing customers). They launched Ensemble, an integration platform, in 2003 and HealthShare, a scalable health informatics platform, in 2006. In 2007, InterSystems purchased TrakHealth, an Australian vendor of TrakCare, a modular healthcare information system based on InterSystems technology. In May 2011, the firm launched Globals as a free database based on the multi-dimensional array storage technology used in Caché. In September 2011, InterSystems purchased Siemens Health Services (SHS) France from its parent company, Siemens. 
In September 2017, InterSystems announced InterSystems IRIS Data Platform, which, the company said, combines database management capabilities together with interoperability and analytics, as well as technologies such as sharding for performance. Products The company's products include the following:
InterSystems IRIS data platform, a hybrid multi-model database management system for real-time transactions and analytics that is available as a private or public fully managed cloud platform.
InterSystems IRIS for Health, a data platform that supports healthcare messaging protocols such as FHIR, HL7, and IHE.
HealthShare, a healthcare informatics platform that supports the creation of and secure access to unified care records.
TrakCare, a web-based healthcare information system, available outside the U.S.
InterSystems Caché, a multi-model database management system and application server.
InterSystems Ensemble, a rapid integration and application development platform.
In 2020, InterSystems was named a Visionary in Gartner’s Magic Quadrant for cloud database management systems for its InterSystems IRIS technology. Customers Epic Systems, a privately held health records vendor, is the company’s largest customer and has been using InterSystems technology for more than 40 years. Epic originally built its electronic medical records software on InterSystems Caché but used InterSystems IRIS data platform as the foundation of a new release of its software launched in 2020. As of 2015, Epic EMR software held the records of 54% of all U.S. patients and 2.5% of patients globally. In July 2020, the U.S. Department of Veterans Affairs launched a HealthShare-based platform called InterSystems Veterans Data Integration and Federation Enterprise Platform (VDIF EP) for developing longitudinal patient records. VDIF EP enables care providers both within and outside the Veterans Health Administration to access veterans’ patient records. 
The VA has used VDIF EP for tracking COVID-19 infections among veterans and VA medical personnel and for managing resource deployment across 172 VA medical centers and more than 1,000 outpatient clinics. Other major InterSystems customers include Credit Suisse, whose trading platform uses InterSystems Caché; the European Space Agency, which used InterSystems Caché for its Gaia mission to create a 3D map of the Milky Way; Partners Healthcare, which built its electronic health records system using InterSystems Caché and Ensemble; and the national health services of England, Scotland, and Wales, which use TrakCare for sharing patient health information and e-prescribing. 3M, BNY Mellon, Canon, Franklin Templeton, HSBC, MSC Mediterranean Shipping Company, Olympus, Ricoh, SPAR, and TD Ameritrade also use InterSystems software. Microsoft dispute On August 14, 2008, the Boston Globe reported that InterSystems was filing a lawsuit against Microsoft Corporation, another tenant in its Cambridge, Mass., headquarters, seeking to prevent Microsoft from expanding in the building. InterSystems also filed a lawsuit against building owner Equity Office Partners, a subsidiary of the Blackstone Group, "contending that it conspired with Microsoft to lease space that InterSystems had rights to, and sought to drive up rents in the process". In 2010, CEO Terry Ragon led a coalition in Cambridge called Save Our Skyline to protest a city zoning change that would have allowed more signs on top of commercial buildings, partly in response to Microsoft's desire to put a sign on top of their shared building. Both disputes were eventually settled, and Microsoft and InterSystems agreed to both put low signs only in front of the building at street level. 
References External links InterSystems website Software companies based in Massachusetts Companies based in Cambridge, Massachusetts Privately held companies based in Massachusetts Relational database management systems Object-oriented database management systems Proprietary software Electronic health record software companies Software companies of the United States
59860985
https://en.wikipedia.org/wiki/Container%20Linux
Container Linux
Container Linux (formerly CoreOS Linux) is a discontinued open-source lightweight operating system based on the Linux kernel and designed for providing infrastructure to clustered deployments, while focusing on automation, ease of application deployment, security, reliability and scalability. As an operating system, Container Linux provided only the minimal functionality required for deploying applications inside software containers, together with built-in mechanisms for service discovery and configuration sharing. Container Linux shares foundations with Gentoo Linux, Chrome OS, and Chromium OS through a common software development kit (SDK). Container Linux adds new functionality and customization to this shared foundation to support server hardware and use cases. CoreOS was developed primarily by Alex Polvi, Brandon Philips and Michael Marineau, with its major features available as a stable release. The CoreOS team announced the end-of-life for Container Linux on May 26, 2020, offering Fedora CoreOS and RHEL CoreOS as its replacements, both based on Red Hat technology. Overview Container Linux provides no package manager as a way for distributing payload applications, requiring instead all applications to run inside their containers. Serving as a single control host, a Container Linux instance uses the underlying operating-system-level virtualization features of the Linux kernel to create and configure multiple containers that perform as isolated Linux systems. That way, resource partitioning between containers is performed through multiple isolated userspace instances, instead of using a hypervisor and providing full-fledged virtual machines. This approach relies on the Linux kernel's cgroups and namespaces functionalities, which together provide abilities to limit, account and isolate resource usage (CPU, memory, disk I/O, etc.) for the collections of userspace processes. 
Initially, Container Linux exclusively used Docker as a component providing an additional layer of abstraction and interface to the operating-system-level virtualization features of the Linux kernel, as well as providing a standardized format for containers that allows applications to run in different environments. In December 2014, CoreOS released and started to support rkt (initially released as Rocket) as an alternative to Docker, providing through it another standardized format of the application-container images, the related definition of the container runtime environment, and a protocol for discovering and retrieving container images. CoreOS provides rkt as an implementation of the so-called app container (appc) specification that describes required properties of the application container image (ACI); CoreOS initiated appc and ACI as an independent committee-steered set of specifications, aiming at having them become part of the vendor- and operating-system-independent Open Container Initiative (OCI; initially named the Open Container Project or OCP) containerization standard, which was announced in June 2015. Container Linux uses ebuild scripts from Gentoo Linux for automated compilation of its system components, and uses systemd as its primary init system with tight integration between systemd and various Container Linux's internal mechanisms. Updates distribution Container Linux achieves additional security and reliability of its operating system updates by employing FastPatch as a dual-partition scheme for the read-only part of its installation, meaning that the updates are performed as a whole and installed onto a passive secondary boot partition that becomes active upon a reboot or kexec. This approach avoids possible issues arising from updating only certain parts of the operating system, ensures easy rollbacks to a known-to-be-stable version of the operating system, and allows each boot partition to be signed for additional security. 
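The dual-partition update scheme described above can be modeled as a tiny state machine. This sketch invents its own names and omits image signing, signature verification, and the actual reboot/kexec mechanics; it only shows why whole-image updates to a passive partition make rollbacks trivial.

```python
# Minimal model of an A/B ("dual-partition") update scheme; names are
# illustrative, not Container Linux's actual tooling.

class ABUpdater:
    def __init__(self):
        self.partitions = {"A": "v1.0", "B": None}  # read-only OS images
        self.active = "A"

    @property
    def passive(self) -> str:
        return "B" if self.active == "A" else "A"

    def stage_update(self, version: str) -> None:
        # The whole new image is written to the passive partition;
        # the running (active) partition is never modified.
        self.partitions[self.passive] = version

    def reboot(self) -> None:
        # Booting switches to the freshly written partition...
        self.active = self.passive

    def rollback(self) -> None:
        # ...and if that boot fails, flipping back to the other partition
        # restores the known-to-be-stable image untouched.
        self.active = self.passive
```

Because the update is applied as a whole image rather than file by file, the system can never be left in a half-updated state.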
The root partition and its root file system are automatically resized to fill all available disk-space upon reboots; while the root partition provides read-write storage space, the operating system itself is mounted read-only under . To ensure that only a certain part of the cluster reboots at once when the operating system updates are applied, preserving that way the resources required for running deployed applications, CoreOS provides locksmith as a reboot manager for Container Linux. Using locksmith, one can select between different update strategies that are determined by how the reboots are performed as the last step in applying updates; for example, one can configure how many cluster members are allowed to reboot simultaneously. Internally, locksmith operates as the daemon that runs on cluster members, while the command-line utility manages configuration parameters. Locksmith is written in the Go language and distributed under the terms of the Apache License 2.0. The updates distribution system employed by Container Linux is based on Google's open-source Omaha project, which provides a mechanism for rolling out updates and the underlying request–response protocol based on XML. Additionally, CoreOS provides CoreUpdate as a web-based dashboard for the management of cluster-wide updates. Operations available through CoreUpdate include assigning cluster members to different groups that share customized update policies, reviewing cluster-wide breakdowns of Container Linux versions, stopping and restarting updates, and reviewing recorded update logs. CoreUpdate also provides a HTTP-based API that allows its integration into third-party utilities or deployment systems. Cluster infrastructure Container Linux provides etcd, a daemon that runs across all computers in a cluster and provides a dynamic configuration registry, allowing various configuration data to be easily and reliably shared between the cluster members. 
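The reboot coordination that locksmith performs can be thought of as a cluster-wide counting semaphore (held in etcd in the real system). The toy in-memory version below invents its own names and skips distribution, leases and failure handling entirely; it only illustrates the "at most N members reboot at once" policy.

```python
# Toy, in-memory model of a locksmith-style reboot semaphore; the real
# implementation stores the semaphore in etcd so all members see it.

class RebootLock:
    def __init__(self, max_holders: int):
        self.max_holders = max_holders      # update-strategy parameter
        self.holders: set[str] = set()

    def acquire(self, member: str) -> bool:
        """A member must hold the lock before rebooting to apply an update."""
        if len(self.holders) < self.max_holders:
            self.holders.add(member)
            return True
        return False                        # wait and retry later

    def release(self, member: str) -> None:
        # Called after the member has rebooted and rejoined the cluster.
        self.holders.discard(member)
```

With `max_holders=1`, members take turns rebooting, so the cluster never loses more than one node's capacity to an OS update at a time.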
Since the key–value data stored within etcd is automatically distributed and replicated with automated master election and consensus establishment using the Raft algorithm, all changes in stored data are reflected across the entire cluster, while the achieved redundancy prevents failures of single cluster members from causing data loss. Besides configuration management, etcd also provides service discovery by allowing deployed applications to announce themselves and the services they offer. Communication with etcd is performed through an exposed REST-based API, which internally uses JSON on top of HTTP; the API may be used directly (through curl or wget, for example), or indirectly through etcdctl, a specialized command-line utility also supplied by CoreOS. etcd is also used in the Kubernetes software.
Container Linux also provides fleet, a cluster manager that controls Container Linux's separate systemd instances at the cluster level. As of 2017, fleet is no longer actively developed and is deprecated in favor of Kubernetes. By using fleet, Container Linux creates a distributed init system that ties together separate systemd instances and a cluster-wide deployment; internally, the fleet daemon communicates with local systemd instances over D-Bus, and with the deployment through its exposed API. Using fleet allows the deployment of single or multiple containers cluster-wide, with more advanced options including redundancy, failover, deployment to specific cluster members, dependencies between containers, and grouped deployment of containers. A command-line utility called fleetctl is used to configure and monitor this distributed init system; internally, it communicates with the fleet daemon using a JSON-based API on top of HTTP, which may also be used directly. When used locally on a cluster member, fleetctl communicates with the local fleet instance over a Unix domain socket; when used from an external host, SSH tunneling is used, with authentication provided through public SSH keys.
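The JSON-over-HTTP style of etcd's API described above can be sketched by constructing (but not sending) such a request. The key path, value, and TTL below are illustrative; sending the request would require a running etcd member.

```python
# Sketch of a request against etcd's v2 key-space HTTP API, the kind of call
# that curl, wget, or etcdctl would issue. The request is only constructed
# here, not sent, so the key name and value are purely illustrative.
import urllib.parse
import urllib.request

# PUT /v2/keys/services/web announces a service under a key that other
# cluster members can read or watch (service discovery via the registry);
# the ttl field makes the announcement expire unless it is refreshed.
body = urllib.parse.urlencode({"value": "10.0.0.5:8080", "ttl": 30}).encode()
req = urllib.request.Request(
    "http://127.0.0.1:2379/v2/keys/services/web",  # etcd client port
    data=body,
    method="PUT",
)

print(req.get_method())  # PUT
print(req.full_url)
# Actually sending it, e.g. urllib.request.urlopen(req), requires a live
# etcd daemon; etcdctl wraps exactly this sort of HTTP exchange.
```

Because the API is plain HTTP with JSON responses, it can be driven from any language or from shell tools, which is what makes the "directly or indirectly" usage described above possible.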
All of the above-mentioned daemons and command-line utilities are written in the Go language and distributed under the terms of the Apache License 2.0.
Deployment
When running on dedicated hardware, Container Linux can be either permanently installed to local storage, such as a hard disk drive (HDD) or solid-state drive (SSD), or booted remotely over a network using Preboot Execution Environment (PXE) in general, or iPXE as one of its implementations. CoreOS also supports deployments on various hardware virtualization platforms, including Amazon EC2, DigitalOcean, Google Compute Engine, Microsoft Azure, OpenStack, QEMU/KVM, Vagrant and VMware. Container Linux may also be installed on Citrix XenServer, for which a CoreOS "template" exists. Container Linux can also be deployed through its commercial distribution called Tectonic, which additionally integrates Google's Kubernetes as a cluster management utility; Tectonic was initially planned to be offered as beta software to select customers. Furthermore, CoreOS provides Flannel as a component implementing an overlay network required primarily for the integration with Kubernetes. Container Linux supports only the x86-64 architecture.
Derivatives
Following its acquisition of CoreOS, Inc. in January 2018, Red Hat announced that it would be merging CoreOS Container Linux with Red Hat's Project Atomic to create a new operating system, Red Hat CoreOS, while aligning the upstream Fedora Project open source community around Fedora CoreOS, combining technologies from both predecessors. On March 6, 2018, Kinvolk GmbH announced Flatcar Container Linux, a derivative of CoreOS Container Linux. This tracks the upstream CoreOS alpha/beta/stable channel releases, with an experimental Edge release channel added in May 2019.
Reception
LWN.net reviewed CoreOS in 2014.
See also
Application virtualization – software technology that encapsulates application software from the operating system on which it is executed
Comparison of application virtualization software – various portable and scripting-language virtual machines
Comparison of platform virtualization software – various emulators and hypervisors, which emulate whole physical computers
LXC (Linux Containers) – an environment for running multiple isolated Linux systems (containers) on a single Linux control host
Operating-system-level virtualization – implementations based on the operating system kernel's support for multiple isolated userspace instances
Software as a service (SaaS) – a software licensing and delivery model that hosts the software centrally and licenses it on a subscription basis
Virtualization – a general concept of providing virtual versions of computer hardware platforms, operating systems, storage devices, etc.
References
External links
Official and GitHub source code repositories
First glimpse at CoreOS, September 3, 2013, by Sébastien Han
CoreOS: Linux for the cloud and the datacenter, ZDNet, July 2, 2014, by Steven J. Vaughan-Nichols
What's CoreOS? An existential threat to Linux vendors, InfoWorld, October 9, 2014, by Matt Asay
Understanding CoreOS distributed architecture, March 4, 2015, a talk with Alex Polvi by Aaron Delp and Brian Gracely
CoreOS fleet architecture, August 26, 2014, by Brian Waldon et al.
Running CoreOS on Google Compute Engine, May 23, 2014
CoreOS moves from Btrfs to Ext4 + OverlayFS, Phoronix, January 18, 2015, by Michael Larabel
Containers and persistent data, LWN.net, May 28, 2015, by Josh Berkus
Containerization software Linux containerization Operating systems based on the Linux kernel Red Hat software Software using the Apache license Virtualization-related software for Linux X86-64 operating systems
8441513
https://en.wikipedia.org/wiki/Bachchu%20Kadu
Bachchu Kadu
Omprakash Babarao Kadu is an independent Member of the Legislative Assembly from Achalpur, Maharashtra, India. The Achalpur assembly constituency is a part of the Amravati Lok Sabha constituency. He has been elected to the Maharashtra Legislative Assembly for four consecutive terms, from 2004 to 2019. On 19 October 2014, Kadu won the assembly election, defeating Congress candidate Bablu Deshmukh by more than 10,000 votes; this was the first time in Achalpur assembly elections that a candidate had won three times in succession. In the 2019 election he was returned for a fourth consecutive term, again defeating Congress candidate Bablu Deshmukh, this time by a margin of about 8,000 votes, with a total of 81,252 votes, 44% of the votes cast; he is the first candidate to be elected for a fourth consecutive term in Achalpur assembly elections. He is the leader of the Prahar Janshakti Party, which is part of the newly formed Maha Vikas Aghadi (MVA). His party also won one seat from the Melghat constituency, where Rajkumar Patel was elected on its ticket. Kadu took oath as Minister of State in the Uddhav Thackeray-led Maha Vikas Aghadi government, from the Shiv Sena quota. He helps poor patients by taking them to Mumbai for advanced medical treatment, providing much-needed medical aid of up to 18 lakh for major surgeries, and regularly organizes blood donation camps in his constituency.
Early life
Initially, he was influenced by Balasaheb Thackeray's aggressive politics. Hailing from Belora in Chandurbazar taluka, Kadu first launched an agitation against Jalsa in his village, along with his schoolmates, when he was studying in Class 9. In 1997, he successfully contested Panchayat Samiti elections and became Chairman of the Chandurbazar Panchayat Samiti. He exposed the corruption in the toilet scheme.
After clashes with the guardian minister, he quit Shiv Sena when he felt that the party did not support him. When he learned about the high incidence of blindness among the old rural folk, he organised a cataract operation camp on the day of an election. He and his team donated 350 bottles of blood to KEM Hospital in Mumbai on the occasion of his brother's marriage. He contested the assembly election from Achalpur in 1999 and lost by a slender margin of 2,231 votes. Despite not holding power, Bachchu Kadu continued to fight for the common people. He launched a series of agitations, such as auctioning the chairs of inefficient government servants, the half-burial agitation, Virugiri, the Saap Chhodo andolan and the Sutli bomb agitation, and secured justice for the common people. He again contested the assembly election in 2004 and defeated the former guardian minister; his agitations now had political power behind them. He won the election for the third time in a row in 2014 as an independent candidate. He has continued his series of agitations even after becoming an MLA and has created an image as 'Aapla Manoos'; his supporters call him 'Apna Bhidu Bachchu Kadu'.
Controversies
In December 2006, he climbed an overhead water tank with his supporters and threatened to jump down if the police apprehended them. The 'Sholay'-styled agitation was Kadu's attempt to draw the attention of the central government to Vidarbha farmers' suicides. After senior ministers gave an assurance that Kadu's demands would be taken up for discussion in the state cabinet within a month, Kadu and his followers agreed to end their agitation, 24 hours after they had started it. Kadu ended the agitation only after warning Deputy Chief Minister R.R. Patil that if his demands were not considered within a month, he would resume it. On Friday, 14 January 2011, Kadu slapped a Health Ministry clerk, Chandravdan Hagavane. The Maharashtra Secretariat employee union responded with a strike.
The police station registered a complaint against Kadu. He defeated Indian National Congress candidate Wasudha Deshmukh in the assembly election. Kadu and his followers had worked under a banner called Prahar Yuvashakti Sanghatana, which became the Prahar Party. On 22 April 2017, Bharatiya Janata Party MP Hema Malini said she would take action against Kadu for making derogatory comments towards her days earlier. In 2019, he entered a bank's premises along with his supporters and threatened and abused a bank manager. This created an outrage among bankers, who started a social media campaign against Bachchu Kadu exposing his misbehaviour and uncivilised conduct.
Positions held
1997: Elected as a member of a Panchayat Samiti
1997: Elected as Chairman of the Chandurbazar Panchayat Samiti
2004: Elected to Maharashtra Legislative Assembly (1st term)
2009: Re-elected to Maharashtra Legislative Assembly (2nd term)
2014: Re-elected to Maharashtra Legislative Assembly (3rd term)
2019: Re-elected to Maharashtra Legislative Assembly (4th term)
2019: Sworn in as Minister of State in the Uddhav Thackeray government
2019: Appointed as Minister of State for Water Resources (Irrigation) & Command Area Development, School Education, Women & Child Development, Labour, and OBC-SEBC-SBC-VJNT Welfare
2020: Appointed as guardian minister of Akola district
See also
Uddhav Thackeray ministry
External links
MLA Award For Bacchu Kadu
Government officer beaten by MLA due to corruption
Times Story Bacchu Kadu Passport
Winter Session news Bacchu Kadu
References
Living people Marathi politicians Maharashtra MLAs 2004–2009 Maharashtra MLAs 2009–2014 Independent politicians in India Maharashtra MLAs 2014–2019 People from Amravati district Prahar Janshakti Party politicians 1970 births
3477909
https://en.wikipedia.org/wiki/Internet%20Explorer%20for%20UNIX
Internet Explorer for UNIX
Internet Explorer for UNIX is a discontinued version of the Internet Explorer graphical web browser that was available free of charge and produced by Microsoft for use in the X Window System on Solaris or HP-UX. Development ended with a version of Internet Explorer 5 in 2001, and support for it was completely discontinued in 2002.
Development history
In May 1996, it was reported that Steven Guggenheimer confirmed that Microsoft was looking into porting Internet Explorer to run on UNIX-like platforms, but was still investigating how exactly it should be done. It was further reported that Steve Ballmer, then executive vice president of Microsoft, had shown an interest earlier in the month in a Microsoft browser running on Unix as part of the strategy to wage the browser wars:
In pursuit of a larger share of the mammoth browser market, Microsoft has been dealing with PC and workstation makers to have its IE browser bundled with newly shipping hardware. Ballmer hinted, however, that not having a Unix browser was posing an obstacle to this OEM-based strategy to try and catch up with No. 1 browser maker Netscape Communications Corp., which holds some 85 percent of the worldwide browser market with its Navigator product line. "We might just have to get one of those", Ballmer said of a Unix-based browser.
In June, Microsoft entered into a contract with Bristol Technology to develop a version of Bristol's porting application Wind/U (archived) to port IE for Windows to Unix. At this time Bristol also had a contract with Microsoft allowing it access to Windows source code from September 1994 to September 1997.
The project was officially announced by Microsoft at the end of July 1996, promising that a native version of IE for "Solaris and other popular variants of UNIX" would be finished by the end of the year, with "equivalent functionality as that provided in Microsoft Internet Explorer 3.0", thus "delivering on its commitment to provide full-featured Web browser support on all major operating system platforms" as well as "supporting and promoting open standards, including HTML, ActiveX and Java". However, following a dispute in March 1997 concerning each other's performance, and after contract negotiations with Bristol over access to Windows source code beyond September 1997 failed, Microsoft reversed course and decided to directly port the Windows version in-house using the MainWin XDE (eXtended Development Environment) application from Mainsoft, the main competitor to Bristol Technology. (Microsoft would later also use MainWin to port Windows Media Player and Outlook Express to Unix.) Now well behind schedule, the 3.0 branch was apparently scrapped in favor of 4.0 (which had been released for Windows half a year earlier), which used the new MSHTML (Trident) browser engine. A beta of the Solaris version was made available on November 5, 1997, with a final version expected by March 1998. Tod Nielsen, general manager of Microsoft's developer relations group, jokingly declared that he wanted to hold the launch of the browser at the Ripley's Believe It or Not museum in San Francisco due to the skepticism of those who believed the project was vaporware. It was further reported that versions for HP-UX, IBM AIX, and Irix were planned (note that at the time MainWin XDE 3.0 was only available for the "Solaris SPARC 2.51 platform", while MainWin XDE 2.1 was "available on Solaris SPARC 2.51, Solaris Intel 5.5.1, SunOS 4.1.4, Irix 5.3, Irix 6.2, HP UX 10.2 and IBM AIX 4.1.5"). IE 4.0 for Unix on Solaris was released on March 4, 1998. Later that year a version for HP-UX was released.
On March 5, 1998, Microsoft reached a settlement with Bristol which "provided mutual releases for any claims arising out of the IE Agreement". In 1999, IE 5.0 for Unix was released for Solaris and HP-UX. In 2001, IE 5.0 for Unix Service Pack 1 was released for Solaris and HP-UX.
Versions
Microsoft officially listed nine versions. It is not known why Microsoft omitted references to the other versions from its official list.
5.0 Readme highlights
Notable items from the IE for Unix 5.0 Readme: "Internet Explorer 5 for UNIX supports most of the features and technologies of Internet Explorer for Windows, but also differs in some respects. For example, Internet Explorer for UNIX does not support downloadable ActiveX controls or browsing and organizing your local files and folders within the browser window. Other unsupported features include filters/transitions in CSS, the DHTML Editing component, and HTML Applications (HTAs). [...] Internet Explorer for UNIX offers some features not found on the Windows version as well, such as Emacs-style keyboard shortcuts and external program associations." Microsoft had a newsgroup named "microsoft.public.inetexplorer.unix" on its public news server msnews.microsoft.com. The Readme also noted: "The User Agent String for Internet Explorer 5 is static except for the third field which depends on the Operating System and the processor you are using."
Disappearance
The homepage for IE for Unix was removed from Microsoft's website in the third quarter of 2002 without explanation, replaced with the message: "We sincerely apologize, but Internet Explorer technologies for UNIX are no longer available for download." It was noted, however, that while the homepage had been removed, the actual download page remained up for a time.
The reason given by Microsoft's PR firm was that "low customer demand for this download did not justify the resources required for continued development".
Successors
Microsoft's Internet Explorer for Mac OS X was the last browser the company released for a UNIX-related platform until the release of Microsoft Edge for macOS and Linux in 2020.
See also
List of web browsers
List of web browsers for Unix and Unix-like operating systems
Comparison of web browsers
References
External links
Archived HP-UX version and mirror
Archived Solaris versions and mirror
POSIX web browsers Internet Explorer Discontinued Microsoft software Discontinued web browsers
3428968
https://en.wikipedia.org/wiki/Citadel/UX
Citadel/UX
Citadel/UX (typically referred to simply as "Citadel") is a collaboration suite (messaging and groupware) descended from the Citadel family of programs, which became popular in the 1980s and 1990s as a bulletin board system platform. It is designed to run on open-source operating systems such as Linux or BSD. Although it is still used for many bulletin board systems, in 1998 the developers began to expand its functionality into a general-purpose groupware platform. In order to modernize the Citadel platform for the Internet, the Citadel/UX developers added functionality such as shared calendars, instant messaging, and built-in implementations of Internet protocols such as SMTP, IMAP, Sieve, POP3, GroupDAV and XMPP. All protocols offer OpenSSL encryption for additional security. Users of Citadel/UX systems also have available to them a web-based user interface which employs Ajax-style functionality to allow application-like interaction with the system. Citadel uses the Berkeley DB database for all of its data stores, including the message base. Citadel/UX became free and open-source software subject to the terms of the GPL-2.0-or-later license in 1998. In 2006 Citadel was relicensed to the GPL-2.0-only license, and in 2007 it was relicensed again to the GPL-3.0-only license.
References
External links
Review of Citadel by Carla Schroder of Enterprise Networking Planet: Part 1 and Part 2
Mod_auth_citadel, an Apache Web Server authentication module for Citadel by Stuart Cianos
Free groupware Bulletin board system software Collaborative software for Linux Ajax (programming) Free email server software Free email software Free content management systems Formerly proprietary software
432163
https://en.wikipedia.org/wiki/Window%20Maker
Window Maker
Window Maker is a free and open-source window manager for the X Window System, allowing graphical applications to be run on Unix-like operating systems. It is designed to emulate NeXTSTEP's GUI as an OpenStep-compatible environment. Window Maker is part of the GNU Project.
Overview
Window Maker has been characterized as reproducing "the elegant look and feel of the NeXTSTEP GUI" and is noted as "easy to configure and easy to use." A graphical tool called WPrefs is included and can be used to configure most aspects of the UI. The interface tends towards a minimalist, high-performance environment, directly supporting XPM, PNG, JPEG, TIFF, GIF and PPM icons with an alpha channel, and featuring a right-click, sliding-scrolling application menu system which can throw off pinnable menus, along with window-icon miniaturization and other animations on multiple desktops. Menus and preferences can be changed without restarting. As with most window managers it supports themes, and many are available. Owing to its NeXT inspiration, Window Maker has a dock like macOS, but Window Maker's look and feel hews mostly to that of its NeXT forebear.
Architecture
Window Maker has window hints which allow seamless integration with the GNUstep, GNOME, KDE, Motif and OpenLook environments. Significantly, it has almost complete ICCCM compliance and internationalization support for at least 11 locales. Window Maker uses the lightweight WINGs widget set, which was built specifically for Window Maker as a way to skirt what its developers said would have been the "overkill" (or bloat) of using GNUstep. WINGs is common to other applications, including a login display manager called WINGs Display Manager (WDM) and many dockapps. Window Maker dock and clip applets are compatible with those from AfterStep's wharf.
History
Window Maker was written from scratch primarily by Brazilian programmer Alfredo Kojima as a window manager for the GNUstep desktop environment, originally meant as an improved take on the AfterStep window manager's design concept. The first release was in 1997. For a time it was included as a standard window manager in several Linux distributions and is also available in the FreeBSD and OpenBSD ports collections. Since the goal of the project has been to closely emulate the design of the defunct NeXTSTEP and OpenStep GUIs, further development has been light. In late 2007 the widely available, stable release version was 0.92, from July 2005, with subsequent maintenance updates having been made to some distribution packages and ports. In late June 2008 a post on the project's website said active development would resume, noting, "...we are working very hard to revitalize Window Maker's presence on X Window (and perhaps beyond) desktops... We expect to once again provide the de-facto minimalist yet extremely functional window manager to the world." On 29 January 2012, Window Maker 0.95.1 was released, making it the first official release in almost seven years. This was followed by a number of releases; the latest release was 0.95.9, released on 4 April 2020.
Name
The program's original name was WindowMaker (camel-cased and without the space), but a naming conflict arose with an older product called Windowmaker from Windowmaker Software Ltd, a UK company producing software for companies that manufacture windows and doors. A 1998 agreement between the developers of Window Maker and Windowmaker Software specified that Window Maker (in the X sense) should never be written as a single word.
Usage
Though adhering closely to the NeXT interface, the default appearance can be confusing to someone expecting a Microsoft Windows-style taskbar and start menu.
All applications can be accessed by right-clicking on the desktop background to access the fully configurable main menu. The menu can also be displayed using the keyboard, with one shortcut for the application menu and another for a window menu. Window Maker can be configured by double-clicking the screwdriver icon on the dock. An icon depicting a computer monitor is used to launch a command window, and a paperclip icon is used to cycle between workspaces. Any icon in Window Maker, including application icons, can be easily changed. Icons representing running applications appear at the bottom of the screen (the user can extend application windows to cover these). By default, the dock appears at upper right. Icons can be dragged onto the dock to make them permanent. The edge of an icon can be right-clicked to adjust its settings. A separate, dockable application called wmdrawer features a slide-out drawer which can hold application and file launching icons.
Basic apps
While any X application can be docked in Window Maker, the archetypical WM dockable applications are called dockapps. These tend to be clocks and system monitoring applications. There are many clock implementations, including wmcalclock, wmclock (a NeXTSTEP-like calendar clock clone) and wmclockmon. Monitoring applets include wmmon and wmnet, among others. Many other dockapps are available, typically ones intended to interact with other "full fledged" applications. The WPrefs configuration tool enables tuning of most Window Maker preferences. wmakerconf was developed to provide more configuration options, notably theme customization. Configuration files are typically stored in ~/GNUstep/. The background can be changed from the command line with wmsetbg -s -u [filename.jpg] (wmsetbg stands for "window maker set background"). FSViewer is a separate, configurable Miller columns file browser developed for Window Maker in 1998 by George Clernon as a visual and functional analogy to NeXTSTEP's Workspace Manager.
In 2002, it was adapted to later versions of the WINGs libraries and Window Maker by Guido Scholz. aterm is an rxvt-based terminal emulator developed for AfterStep mainly for visual appeal, featuring a NeXTSTEP-style scrollbar (which matches Window Maker's look and feel) along with pseudo-transparency.
Menu
The application menu can be edited graphically with much versatility. The configuration is recorded in ~/GNUstep/Defaults/WMRootMenu as a text file which can be easily read and edited (in versions after 0.94.0 it can also be automatically generated from a list of installed applications using a program called wmgenmenu). Menu items can be set to:
Launch a program or application with or without a filename and other arguments
Launch a command-line interface with or without further arguments
Run a WM command, such as exiting a Window Maker session or listing windows and workspaces
List a submenu containing any of the above tasks
Many Linux distributions define their own applications menu for Window Maker. This cannot usually be edited using the configuration tool (which will instead offer to replace it with a generic default menu which can be edited).
Mascot
Amanda the Panda is the mascot of Window Maker. She was designed by Agnieszka Czajkowska.
See also
Extended Window Manager Hints
List of computing mascots
:Category:Computing mascots
References
External links
Window Maker Mailing Lists
Window Maker Live, an installable Debian/Wheezy-based Linux live CD using Window Maker as its default graphical interface
"WINGsman", WINGs documentation
1997 software Free X window managers GNU Project software GNUstep
57055637
https://en.wikipedia.org/wiki/Singularity%20%28software%29
Singularity (software)
Singularity is a free, cross-platform and open-source computer program that performs operating-system-level virtualization, also known as containerization. One of the main uses of Singularity is to bring containers and reproducibility to scientific computing and the high-performance computing (HPC) world. The need for reproducibility requires the ability to use containers to move applications from system to system. Using Singularity containers, developers can work in reproducible environments of their choosing and design, and these complete environments can easily be copied and executed on other platforms.
History
Singularity began as an open-source project in 2015, when a team of researchers at Lawrence Berkeley National Laboratory, led by Gregory Kurtzer, developed the initial version and released it under the BSD license. By the end of 2016, many developers from different research facilities joined forces with the team at Lawrence Berkeley National Laboratory to further the development of Singularity.
Singularity quickly attracted the attention of computing-heavy scientific institutions worldwide:
Stanford University Research Computing Center deployed Singularity on their XStream and Sherlock clusters
National Institutes of Health installed Singularity on Biowulf, their 95,000+ core/30 PB Linux cluster
Various sites of the Open Science Grid Consortium, including Fermilab, started adopting Singularity; by April 2017, Singularity was deployed on 60% of the Open Science Grid network.
For two years in a row, in 2016 and 2017, Singularity was recognized by HPCwire editors as "One of five new technologies to watch". In 2017 Singularity also won first place in the category "Best HPC Programming Tool or Technology".
Based on data entered on a voluntary basis in a public registry, the Singularity user base was estimated to be greater than 25,000 installations, including users at academic institutions such as Ohio State University and Michigan State University, as well as top HPC centers like the Texas Advanced Computing Center, the San Diego Supercomputer Center, and Oak Ridge National Laboratory.
Features
Singularity natively supports high-performance interconnects, such as InfiniBand and Intel Omni-Path Architecture (OPA). Similar to its support for InfiniBand and Intel OPA devices, Singularity can support any PCIe-attached device within the compute node, such as graphics accelerators. Singularity also has native support for the Open MPI library by utilizing a hybrid MPI container approach in which Open MPI exists both inside and outside the container. These features make Singularity increasingly useful in areas such as machine learning, deep learning and most data-intensive workloads, where applications benefit from the high bandwidth and low latency characteristics of these technologies.
Integration
HPC systems traditionally already have resource management and job scheduling systems in place, so container runtime environments must be integrated into the existing system resource manager. Using other enterprise container solutions like Docker in HPC systems would require modifications to the software. Docker containers can be automatically converted to stand-alone Singularity image files, which can then be submitted to HPC resource managers.
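The Docker-to-Singularity conversion mentioned above is typically driven from the command line. The sketch below only constructs the command and runs it if the singularity binary happens to be installed; the image name and output file are illustrative, and the syntax shown is the Singularity 3.x form (older 2.x releases used "singularity pull docker://...").

```python
# Sketch of converting a Docker image into a stand-alone Singularity image
# file (.sif). Purely illustrative: the command is built as data and only
# executed if a singularity binary is actually present on this machine.
import shutil
import subprocess

# "build" fetches the Docker image layers and flattens them into one file.
cmd = ["singularity", "build", "lolcow.sif", "docker://sylabsio/lolcow"]

if shutil.which("singularity"):
    subprocess.run(cmd, check=True)   # performs the actual conversion
else:
    print("singularity not found; would run:", " ".join(cmd))

# The resulting .sif is a single file that can be copied to a cluster and
# submitted through the site's resource manager, e.g. from a SLURM batch
# script along the lines of: srun singularity exec lolcow.sif <command>
```

Packing the whole environment into one file is what makes the hand-off to traditional HPC schedulers straightforward: the scheduler only ever sees an ordinary executable invocation plus a file on shared storage.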
Singularity seamlessly integrates with many resource managers, including:
HTCondor
Oracle Grid Engine (SGE)
SLURM (Simple Linux Utility for Resource Management)
TORQUE (Terascale Open-source Resource and QUEue Manager)
PBS Pro (PBS Professional)
HashiCorp Nomad (a simple and flexible workload orchestrator)
See also
Grid computing
OverlayFS
TOP500
References
Further reading
Proceedings of the 10th International Conference on Utility and Cloud Computing: Is Singularity-based Container Technology Ready for Running MPI Applications on HPC Clouds?
Singularity prepares version 3.0, nears 1 million containers served daily
Dell HPC: Containerizing HPC Applications with Singularity
Intel HPC Developer Conference 2017: Introduction to High-Performance Computing HPC Containers and Singularity
HPCwire Reveals Winners of the 2017 Readers' and Editors' Choice Awards at SC17 Conference in Denver: Singularity awarded for Best HPC Programming Tool or Technology category
External links
Free software programmed in Go Linux containerization Operating system technology Operating system security Software using the Apache license Software using the BSD license Virtualization software Virtualization-related software for Linux
230360
https://en.wikipedia.org/wiki/Windows%20Embedded%20Compact
Windows Embedded Compact
Windows Embedded Compact, formerly Windows Embedded CE, Windows Powered and Windows CE, is an operating system subfamily developed by Microsoft as part of its Windows Embedded family of products. Its mainstream support ended in 2018, and its extended support will end in 2023. Unlike Windows Embedded Standard, which is based on Windows NT, Windows Embedded Compact uses a different hybrid kernel. Microsoft licenses it to original equipment manufacturers (OEMs), who can modify and create their own user interfaces and experiences, with Windows Embedded Compact providing the technical foundation to do so. The current version of Windows Embedded Compact supports x86 and ARM processors with board support packages (BSPs) directly. The MIPS and SHx architectures had support prior to version 7.0, though 7.0 still works on the MIPS II architecture. Originally, Windows CE was designed for minimalistic and small computers; however, CE had its own kernel, whereas editions such as Windows XP Embedded are based on NT. Windows CE was a modular/componentized operating system that served as the foundation of several classes of devices, such as the Handheld PC, Pocket PC, Auto PC, Windows Mobile, Windows Phone 7 and more.
Features
Windows CE is optimized for devices that have minimal memory; a Windows CE kernel may run with one megabyte of memory. Devices are often configured without disk storage, and may be configured as a "closed" system that does not allow for end-user extension (for instance, it can be burned into ROM). Windows CE conforms to the definition of a real-time operating system, with a deterministic interrupt latency. From version 3 onward, the system supports 256 priority levels and uses priority inheritance for dealing with priority inversion. The fundamental unit of execution is the thread. This helps to simplify the interface and improve execution time.
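The priority-inheritance mechanism mentioned above can be illustrated with a small simulation. This is only a sketch of the general technique, not Windows CE's actual scheduler or API: the thread names and priority values are invented, and no real scheduling takes place (note that in Windows CE a lower number means a higher priority, which the sketch follows).

```python
# Illustrative sketch of priority inheritance, the standard cure for
# priority inversion: while a low-priority thread holds a lock that a
# high-priority thread is waiting on, the holder temporarily runs at the
# waiter's priority, so no medium-priority thread can preempt it.

class Thread:
    def __init__(self, name: str, priority: int):
        self.name = name
        self.base_priority = priority
        self.priority = priority          # effective (possibly boosted) priority

class Mutex:
    def __init__(self):
        self.owner = None

    def acquire(self, thread: Thread) -> bool:
        if self.owner is None:
            self.owner = thread           # uncontended: thread takes the lock
            return True
        # Contended: boost the owner to the waiter's priority if higher
        # (lower number = higher priority, as in Windows CE).
        if thread.priority < self.owner.priority:
            self.owner.priority = thread.priority
        return False                      # caller would block here

    def release(self):
        self.owner.priority = self.owner.base_priority   # drop any boost
        self.owner = None

low = Thread("low", priority=200)
high = Thread("high", priority=10)
m = Mutex()
m.acquire(low)        # low-priority thread takes the lock first
m.acquire(high)       # high-priority thread blocks on it...
print(low.priority)   # 10: ...and "low" inherits the waiter's priority
m.release()
print(low.priority)   # 200: the boost is removed with the lock
```

Without the boost, a medium-priority thread could preempt "low" indefinitely while "high" stays blocked, which is exactly the inversion scenario the mechanism exists to prevent.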
The first version, known during development by the code name "Pegasus", featured a Windows-like GUI and a number of Microsoft's popular apps, all trimmed down for the smaller storage, memory, and speed of the palmtops of the day. Since then, Windows CE has evolved into a component-based, embedded, real-time operating system. It is no longer targeted solely at hand-held computers. Many platforms have been based on the core Windows CE operating system, including Microsoft's AutoPC, Pocket PC 2000, Pocket PC 2002, Windows Mobile 2003, Windows Mobile 2003 SE, Windows Mobile 5, Windows Mobile 6, Smartphone 2002, Smartphone 2003, Portable Media Center, Zune, Windows Phone 7 and many industrial devices and embedded systems. Windows CE even powered select games for the Sega Dreamcast and was the operating system of the Gizmondo handheld. A distinctive feature of Windows CE compared to other Microsoft operating systems is that large parts of it are offered in source code form. First, source code was offered to several vendors, so they could adjust it to their hardware. Then products like Platform Builder (an integrated environment for Windows CE OS image creation and integration, or customized operating system designs based on CE) offered several components in source code form to the general public. However, a number of core components that do not need adaptation to specific hardware environments (other than the CPU family) are still distributed in binary-only form. Windows CE 2.11 was the first embedded Windows release to support a console and a Windows CE version of cmd.exe. History Windows Embedded Compact was formerly known as Windows CE. According to Microsoft, "CE" is not an explicit acronym for anything, although it implies a number of notions that Windows developers had in mind, such as "compact", "connectable", "compatible", "companion" and "efficient".
The name changed once in 2006, with the release of Windows Embedded CE 6.0, and again in 2011, with the release of Windows Embedded Compact 7. Windows CE was originally announced by Microsoft at the COMDEX expo in 1996 and was demonstrated on stage by Bill Gates and John McGill. Microsoft had been testing Pegasus in early 1995 and released a strict reference platform to several hardware partners. The devices had to have the following minimum hardware specifications:
SH3, MIPS 3000 or MIPS 4000 CPU
Minimum of 4 MB of ROM
Minimum of 2 MB of RAM with a backup power source, such as a CR2032 coin cell battery
Powered by two AA batteries
Weight of less than 1 lb
A physical QWERTY keyboard including Ctrl, Alt and Shift keys
An LCD display of 480×240 pixels with four shades of gray and two bits per pixel, with a touchscreen that could be operated by either stylus or finger
An infrared transceiver
A serial port
A PC Card slot
A built-in speaker
Devices of the time mainly had 480×240 pixel displays, with the exception of the Hewlett-Packard "Palmtop PC", which had a 640×240 display. Each window took over the full display. Navigation was done by tapping or double tapping on an item. A contextual menu was also available by the user pressing the ALT key and tapping on the screen. Unlike Windows 95 and Windows NT 4.0, Windows CE 1.0 did not include a cascading Start menu. Microsoft released the Windows CE 1.0 Power Toys, which included a cascading menu icon that appeared in the system tray. Also bundled were several other utilities, most notably a sound applet for the system tray, enabling the user to quickly mute or unmute their device or adjust the volume, and a "pocket" version of Paint. The release of Windows CE 2.0 was well received. Microsoft learned its lessons from consumer feedback of Windows CE 1.0 and made many improvements to the operating system. The Start menu was a cascading menu, identical to those found on Windows 95 and Windows NT 4.0.
Color screens were also supported and manufacturers raced to release the first color H/PC. The first to market, however, was Hewlett-Packard with the HP 620LX. Windows CE 2.0 also supported a broader range of CPU architectures. Programs could also be installed directly in the OS by double-clicking on CAB files. Due to the nature of the ROMs that contained the operating system, users were not able to flash their devices with the newer operating system. Instead, manufacturers released upgrade ROMs that users had to physically install in their devices, after removing the previous version. This would usually wipe the data on the device and present the user with the setup wizard upon first boot. In November 1999, it was reported that Microsoft was planning to rename Windows CE to Windows Powered. The name only appeared in branding for Handheld PC 2000 and a build of Windows 2000 Advanced Server (which bears no relation to Windows CE). Various Windows CE 3.0 products announced at CES 2001 were marketed under a "Windows Powered" umbrella name. Development tools Visual Studio Microsoft Visual Studio 2012, 2013, and 2015 support apps and Platform Builder development for Windows Embedded Compact 2013. Microsoft Visual Studio 2008 and earlier support projects for older releases of Windows CE/Windows Mobile, producing executable programs and platform images either as an emulator or attached by cable to an actual mobile device. A mobile device is not necessary to develop a CE program. The .NET Compact Framework supports a subset of the .NET Framework with projects in C# and Visual Basic .NET, but not Managed C++. "Managed" apps employing the .NET Compact Framework also require devices with significantly larger memories (8 MB or more), while unmanaged apps can still run successfully on smaller devices. In Visual Studio 2010, the Windows Phone Developer Tools are used as an extension, allowing Windows Phone 7 apps to be designed and tested within Visual Studio.
Free Pascal and Lazarus Free Pascal introduced a Windows CE port in version 2.2.0, targeting the ARM and x86 architectures. Later, the Windows CE header files were translated for use with Lazarus, a rapid application development (RAD) software package based on Free Pascal. Windows CE apps are designed and coded in the Lazarus integrated development environment (IDE) and compiled with an appropriate cross compiler. Platform Builder This programming tool is used for building the platform (BSP + kernel), device drivers (shared source or custom made) and also the apps. It is a one-stop environment to get the system up and running. One can also use Platform Builder to export an SDK (software development kit) for the target microprocessor (SuperH, x86, MIPS, ARM, etc.) to be used with another associated tool set named below. Others Embedded Visual C++ (eVC) is a tool for development of embedded apps for Windows CE. It can be used standalone, using the SDK exported from Platform Builder, or using Platform Builder's Platform Manager connectivity setup. The CeGcc project provides GNU development tools, such as GNU C, GNU C++ and binutils, targeting Windows CE; two SDKs are available: a standard Windows CE platform SDK based on MinGW, and a newlib-based SDK which may make it easier to port programs from POSIX systems. CodeGear Delphi Prism runs in Visual Studio and also supports the .NET Compact Framework, and thus can be used to develop mobile apps. It employs the Oxygene compiler created by RemObjects Software, which targets .NET, the .NET Compact Framework, and Mono. Its command-line compiler is available free of charge. Basic4ppc, a programming language similar to Visual Basic, targets the .NET Compact Framework and supports Windows CE and Windows Mobile devices. GLBasic, a very easy to learn and use BASIC dialect, compiles for many platforms, including Windows CE and Windows Mobile. It can be extended by writing inline C/C++ code.
LabVIEW, a graphical programming language, supports many platforms, including Windows CE. MortScript is a semi-standard, extremely lightweight automation SDK popular with GPS enthusiasts. It uses scripts written in its own language, with a syntax akin to VBScript or JScript. AutoHotkey, a port of the open-source macro-creation and automation software utility, is available for Windows CE; the port, which allows the construction of macros and simple GUI apps, was developed by systems analyst Jonathan Maxian Timkang. Relationship to Windows Mobile, Pocket PC, and SmartPhone Often Windows CE, Windows Mobile, and Pocket PC are used interchangeably, in part due to their common origin. This practice is not entirely accurate. Windows CE is a modular/componentized operating system that serves as the foundation of several classes of devices. Some of these modules provide subsets of other components' features (e.g. varying levels of windowing support; DCOM vs COM), others are separate (bitmap or TrueType font support), and others add features to another component. One can buy a kit (the Platform Builder) which contains all these components and the tools with which to develop a custom platform. Apps such as Excel Mobile (formerly Pocket Excel) are not part of this kit. The older Handheld PC version of Pocket Word and several other older apps are included as samples, however. Windows Mobile is best described as a subset of platforms based on a Windows CE underpinning. Currently, Pocket PC (now called Windows Mobile Classic), SmartPhone (Windows Mobile Standard), and Pocket PC Phone Edition (Windows Mobile Professional) are the three main platforms under the Windows Mobile umbrella. Each platform uses different components of Windows CE, plus supplemental features and apps suited for their respective devices.
Pocket PC and Windows Mobile are Microsoft-defined custom platforms for general PDA use, consisting of a Microsoft-defined set of minimum profiles (Professional Edition, Premium Edition) of software and hardware that is supported. The rules for manufacturing a Pocket PC device are stricter than those for producing a custom Windows CE-based platform. The defining characteristics of the Pocket PC are the touchscreen as the primary human interface device and its extremely portable size. CE v3.0 is the basis for Pocket PC 2002; its successor is Windows CE .NET. "PocketPC [is] a separate layer of code on top of the core Windows CE OS... Pocket PC is based on Windows CE, but it's a different offering." And licensees of Pocket PC are forbidden to modify the WinCE part. The SmartPhone platform is a feature-rich OS and interface for cellular phone handsets. SmartPhone offers productivity features to business users, such as email, and multimedia abilities for consumers. The SmartPhone interface relies heavily on joystick navigation and PhonePad input. Devices running SmartPhone do not include a touchscreen interface. SmartPhone devices generally resemble other cellular handset form factors, whereas most Phone Edition devices use a PDA form factor with a larger display. Releases See also ActiveSync Handheld PC Handheld PC Explorer List of Windows CE Devices Microsoft Kin Modular Windows Palm-size PC Pocket PC Portable Media Center Tablet PC Windows Phone Zune HD Dreamcast References External links Benchmarking Real-time Determinism in Microsoft Windows CE A Brief History of Windows CE, by HPC:Factor, with screenshots of the various versions Archived copy of website hosted by Handheld PC Windows XP Embedded on MSDN Mike Hall's Windows Embedded Blog
15691578
https://en.wikipedia.org/wiki/Earth%20sciences%20graphics%20software
Earth sciences graphics software
Earth sciences graphics software is plotting and image-processing software used in atmospheric sciences, meteorology, climatology, oceanography and other Earth science disciplines. Earth sciences graphics software includes the capability to read specialized data formats such as netCDF, HDF and GRIB. Such software is sometimes able to access data from remote data centers. Examples of applications include satellite data processing, analysis of output from complex meteorological models and display of time series of data. Graphics capabilities range from simple line plots to complex three-dimensional visualizations. This type of graphics software is often used to display results from Earth sciences numerical models. External links List of many graphical packages which use netCDF, giving a glimpse of graphical packages used in the Earth sciences.
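Readers of these formats can tell them apart by their leading magic numbers. The Python sketch below is illustrative of the kind of dispatch such software performs before choosing a decoder; the byte signatures come from the published netCDF classic, HDF5 and GRIB format specifications.

```python
# Illustrative sniffing of the file formats named above by their magic numbers.

def sniff(header: bytes) -> str:
    if header.startswith(b"CDF"):                # netCDF classic (CDF\x01) / 64-bit offset (CDF\x02)
        return "netCDF"
    if header.startswith(b"\x89HDF\r\n\x1a\n"):  # 8-byte HDF5 superblock signature
        return "HDF5"
    if header.startswith(b"GRIB"):               # GRIB indicator section (editions 1 and 2)
        return "GRIB"
    return "unknown"

print(sniff(b"CDF\x01" + bytes(12)))   # netCDF
print(sniff(b"GRIB" + bytes(12)))      # GRIB
print(sniff(bytes(4)))                 # unknown
```

Note that netCDF-4 files are HDF5 containers, so they match the HDF5 signature rather than the classic "CDF" one; real tools then hand the file to the appropriate library.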
32095607
https://en.wikipedia.org/wiki/ENX%20Association
ENX Association
The ENX Association is an association of European vehicle manufacturers, suppliers and organisations. History The Association The ENX Association, founded in 2000, is an association according to the French law of 1901. Its headquarters are in Boulogne-Billancourt (France) and Frankfurt am Main. The 15 members of the association, all of which are also represented on the so-called ENX board, are Audi, BMW, Bosch, Continental, Daimler, DGA, Ford, Renault, Volkswagen, as well as the automotive associations ANFAC (Spain), GALIA (France), SMMT (UK), and VDA (Germany). The association can decide to accept additional members upon request; however, the association rules state that the total number of members is limited. Fields of activity The ENX Association is a non-profit organisation that acts as a legal and organisational umbrella for the ENX network standard. It provides the participating companies with a platform for the exchange of information and for the initiation of pre-competitive project cooperations in the field of information technology. The main driver behind the German and French industries creating the standard was to protect intellectual property while at the same time reducing the costs and complexity of data exchange within the automotive industry. One cited benefit of the creation of a "trusted community" for branches of industry is that, although companies protect their own infrastructures, problems occur where encryption or authentication solutions are used across different companies and yet must be acknowledged as confidential. An impasse is often reached when both sides seek to implement their own mechanisms. This is demonstrated by the example of email encryption, with its clash of safety regulations in view of shared application use and thousands of unencrypted data connections. A shared, confidential infrastructure provides a remedy here.
Ford cites the use of ENX to communicate with suppliers as an example of how considerable savings can be made through consolidation and standardisation. The implementation of industrial requirements for IT security between companies represents a further sphere of activity. The following are described as subject areas here: secure cloud computing (between companies); protecting intellectual property during development cooperations (e.g. using Enterprise Rights Management, ERM). The ENX Association is a member of the ERM.Open project by ProSTEP iViP e.V. and was active in the forerunner project SP2 together with Adobe, BMW, FH Augsburg, Continental, Daimler, Fraunhofer IGD, Microsoft, PROSTEP, Siemens PLM, TU Darmstadt, TAC, Volkswagen and ZF Friedrichshafen. The SkIdentity project, which the ENX Association is involved with, was named as one of 12 winners of the BMWi technology competition "Secure cloud computing for medium-sized businesses and the public sector - Trusted Cloud" by the Federal Ministry of Economics and Technology (BMWi) on 1 March 2011 at the IT exhibition CeBIT in Hanover. The BMWi has set up the Trusted Cloud programme to promote "the development and testing of innovative, secure and legally compliant cloud solutions". Presidents of the ENX Association The presidents of the ENX Association have been: Philippe Ludet (since July 2019); Clive Johnson (since April 2013); Prof. Dr. Armin Vornberger (October 2005 - April 2013); Hans-Joachim Heister, Ford-Werke GmbH (July 2001 - October 2005); Dr. Gunter Zimmermeyer, Verband der Automobilindustrie e.V. (July 2000 - July 2001). ENX Association memberships The ENX Association is a member of the following associations and organisations: Automotive Industry Action Group (AIAG), Southfield, Michigan; Bundesverband Informationswirtschaft, Telekommunikation und neue Medien e.V. (BITKOM); ProSTEP iViP e.V.
RIPE NCC In addition, there are two-way affiliations with ANFAC, GALIA, and SMMT. Use of the ENX network Usage scenarios The European automotive industry's communication network of the same name is based on the standards set by the ENX Association concerning security, availability and interoperability. The so-called industry network guarantees the secure exchange of development, production control and logistical data within the European automotive industry. The automotive industry is shaped by strong international cooperation and the necessity for companies to coordinate closely linked processes, which require precise alignment and seamless exchange of data between partners. This makes "integrated global network concepts" necessary. ENX is described as a platform which creates the foundation for these types of cooperative production models. Realignment began at the end of 2002. The aim was to bring the technical development in line with user requirements on a consistent basis, particularly for small and medium-sized businesses. The implementation took several years. In June 2004, French users complained about the lack of cost-effective entry-level solutions in the France Telecom portfolio. In March 2011, over 1,500 companies within the automotive and other industries were using the network, which is available worldwide, in over 30 countries. The network can be used for all IP-compatible protocols and applications. The range of uses extends from classic EDI data exchange, to access to databases and secure email exchange, to the carrying out of video conferences. The use of EDI transfer protocols such as OFTP (Odette File Transfer Protocol), OFTP2 and AS2 is widespread in the ENX network. OFTP2, which was developed from 2004, allows for use via the public Internet. According to the trade press, some vehicle manufacturers have been demanding the use of OFTP2 over the Internet since 2010. "Tens of thousands of suppliers" are affected.
In this medium, which is accessible to everyone, substantially more security is required for the transfer of sensitive data; it is difficult to estimate the implementation costs. Registration as a pre-requisite for use Companies must register with the ENX Association in order to use the ENX network. Registration can either be completed directly with the ENX Association or via one of its representatives. Representatives of the ENX Association In some countries and industries, ENX is represented by industrial associations and organisations (so-called ENX Business Centres). These organisations act as contact points in the relevant local language, process registration applications and take responsibility for the initial authorisation of new users in their area of representation. The ENX Association has chosen this model of representation to allow industrial associations and similar organisations to manage user groups on an independent basis. Operating the ENX network Operating the network and the data links Operation by certified service providers The ENX network fulfills the quality and security requirements found in company-owned networks, while also being as open and flexible for participating vehicle manufacturers, suppliers, and their development partners as the public Internet. Data exchange between ENX users takes place via the network of a communication service provider, certified for this role by the ENX Association, using an encrypted Virtual Private Network (VPN). The first certified communication service provider was the Deutsche Telekom subsidiary T-Systems. This was followed by Orange, Telefónica, Infonet and, in 2007, Verizon Business. In 2010, three additional companies successfully acquired ENX certification, namely ANXeBusiness, BCC and Türk Telekom. According to information from the ENX Association, Open Systems AG is an additional service provider currently going through the certification process.
The services provided by the certified service providers are interoperable, and are provided in a competitive environment. Overview of the service providers certified in line with the ENX standard Certification process According to the ENX Association, certification is a two-stage process. The first stage, the so-called concept phase, sees the ENX Association testing whether the service provider's ENX operating model fulfills the technical ENX specifications. The second stage sees the service provider putting its operating model into practice. Besides inspecting the internal organisation, the IPSec interoperability is also tested in the so-called "ENX IPSecLab". In addition, the ENX encryption is implemented and the connection is made to providers already certified via private peering points, so-called "ENX Points of Interconnection". Once this has been completed, the implementation of and adherence to the ENX specifications is tested in a pilot run. With suitable preparation by the service provider, the chargeable certification can be completed within approximately three to four months. Central operational elements behind the scenes Central services are provided on behalf of and under the control of the ENX Association. These services provide a simplified connection ("interconnectivity") between the individually certified service providers and ensure the interoperability of the encryption hardware used. They include the so-called Points of Interconnection ("ENX POIs"), the IPSec Interoperability Laboratory ("ENX IPSec Lab") and the Public Key Infrastructure ("ENX PKI") in the ENX Trust Centre. The Points of Interconnection have a geographically redundant structure, are interconnected, and are operated in data processing centres in the following regions: the Rhine-Main region, Germany; Île-de-France, France; and the east coast of the United States. These central operational elements are not visible to the individual users.
The customer sources their own connection, including IP router, encryption hardware, key material, uninterrupted end-to-end encryption of each communication, and individual service level agreements, directly from the certified telecommunications service provider of their choice. Global availability JNX industry network and ANXeBusiness in North America The Japanese automotive industry has an industry network that is similar to ENX in terms of technology and organisation, namely the Japanese Network Exchange (JNX). The network is controlled from the JNX Centre, which is tied to the Japanese automotive associations JAMA and JAPIA. JNX and ENX are not linked. In contrast, there are considerable technical, organisational and commercial differences between the ENX standard and the American ANX, which was developed back in the 1990s. Connection between Europe and North America ENX as a mutual standard since 2010 On 26 April 2010, the ENX Association and ANX eBusiness announced that they were going to connect their networks to create a global standard in the automotive industry. The connection resulted in a transatlantic industry network with more than 1,500 connected companies. The network went live with the completion of the pilot stage on 26 May 2010. According to concordant statements by the ENX Association and the ANX eBusiness Corp., only the ENX standard is used for transatlantic connections, both in Europe and in North America. In their announcements, ANX and ENX have described the interconnection as being free of charge for individual users. Differences between ENX and ANX The network for North America, the so-called Automotive Network Exchange (ANX), is operated by the ANXeBusiness Corp. Although, like ENX, it was originally initiated by the automotive industry and operated by a consortium, in contrast to ENX it was later sold and has since been operated as a classic profit-making service company. ANX is a physical network.
Availability stands at the forefront: ANX is still based on the operation of fixed connections with high uptime guarantees. With the additional product "TunnelZ", ANX also offers optional VPN tunnel management, which is not used by all of the manufacturers and suppliers connected to the network. In the classic ANX network, key management takes place using pre-shared keys (PSK), while the encryption strength is limited to DES. ENX is set up as a managed security service, which consistently incorporates standardised tunnel management, a trust-centre-based Public Key Infrastructure (PKI), and authentication and encryption mechanisms based on various networks (from private to public). While the ANX network has one provider for its customers, namely the ANXeBusiness company itself, ENX services are provided by various companies that are in competition with one another. In order to link the networks despite this, ANXeBusiness continues to operate its own network separately from and untouched by ENX, but provides every ANX user who wants the service with an active native ENX connection, including all required security and service features, via its own physical network. ANX has undergone certification and monitoring by the ENX Association for this purpose, and acts as an ENX-certified service provider. Summary With the certification of ANXeBusiness as an ENX provider, ENX and ANX use the aforementioned organisational differences between a non-profit-making industrial consortium (ENX) on the one hand and a service provider (ANX) on the other to connect the two networks. This is not a case of mutual interoperability, as ANX has adopted the ENX standard. There are likely to be new market perspectives for ANX as a result of the potential access to all ENX users. At the same time, it can be assumed that the bridge to ANX will make it easier for other ENX service providers to operate in the USA and, as a result, will generate competition.
References External links Communication networks within the automotive industry (non-profit organisations): ENX Association (worldwide); JNX Center (Japan). Information from service providers certified in line with ENX standards (commercial solutions): ANXeBusiness Corp.; BCC: ENX Connect; KPN: ENX – Automotive Industry Services; Open Systems: ENX Global Connect; Numlog – Orange Business Services Expert Partner for ENX data links; ICDSC – Orange Business Services Expert Partner for ENX data links; Türk Telekom: TT ENX; T-Systems: Extranet Solution - Securely integrate partners and suppliers; Verizon: Certification through ENX. ENX member organisations: ANFAC; GALIA; SMMT; VDA.
639165
https://en.wikipedia.org/wiki/LocoScript
LocoScript
LocoScript is a word processing software package created by Locomotive Software and first released with the Amstrad PCW, a personal computer launched in 1985. Early versions of LocoScript were noted for combining a wide range of facilities with outstanding ease of use. This and the low price of the hardware made it one of the best-selling word processors of the late 1980s. Four major versions of LocoScript were published for the PCW, and two for IBM-compatible PCs running MS-DOS. LocoScript's market share did not expand with the PC versions, which were not released until after Windows had become the dominant PC operating system. Background and reception LocoScript's developers, Locomotive Software, had produced Locomotive BASIC for Amstrad's CPC 464 home computer, introduced in 1984. For the Amstrad PCW, introduced in 1985, Locomotive produced the LocoScript word processor and Mallard BASIC, and also wrote the PCW's User Guide. These programs and a dot matrix printer were included in the price of the PCW, which was £399 plus VAT for the base model. The PCW, regarded as extremely good value for money, gained 60% of the UK home computer market, and 20% of the European personal computer market. According to Personal Computer World, the PCW "got the technophobes using computers". LocoScript was regarded as easier to use than WordStar and WordPerfect, which in the mid-1980s were the dominant word processors on IBM-compatible PCs, and many users needed no additional information beyond what the manual's "first 20 minutes" introductory chapter provided. The PCW's keyboard offered clearly labelled, one-press special keys for many common LocoScript functions, including cut, copy, and paste, while LocoScript's competitors required a wide range of key combinations that the user had to remember. Most of the program's other features were presented via a pull-down menu bar in which the top-level options were activated by function keys.
The menu system had two structures, one for beginners and the other for experienced users. Locomotive Software's slogan for the product was "Everything you need, nothing you don't." However, LocoScript version 1 was regarded as relatively slow. When the PCW product line was discontinued in 1998, The Daily Telegraph said that the range of independently produced add-on software for LocoScript had contributed to the series' longevity. LocoScript faded into obscurity because its developers were slow to produce a version for IBM-compatible PCs. By the time they released a version that ran under MS-DOS, Windows was becoming the dominant operating system. The developers of WordPerfect made a similar mistake, releasing their first Windows version in 1991, shortly after the second Windows version of Microsoft Word. As late as 1993, a journalist found "special characters" much easier to produce on LocoScript than on PC word processing software. Versions and capabilities LocoScript LocoScript was the principal software included with Amstrad's PCW 8256 and PCW 8512, both of which launched in 1985. LocoScript did not run under the control of an operating system; instead, it was the operating system: the computer was booted from the LocoScript floppy disk, and LocoScript ran exclusively on the system. The user had to reboot in order to run any other program (a variety of CP/M applications were supplied on a separate disk). In later years a third-party utility called "Flipper" became available, restricted to those PCWs with the most RAM, which could divide the larger memory between LocoScript and CP/M, allowing both to run without the need to reboot. On start-up LocoScript displayed a file management menu, like WordStar but unlike WordPerfect, Microsoft Word and other modern word processors, which start with an empty document.
LocoScript enabled users to divide documents into groups, display all the groups on a disk and then the documents in the selected group, and set up a template for each group. File names were restricted to the "8.3" format, but the edit facilities enabled users to add summaries up to 90 characters long, which they could display from the file menu. The "limbo file" facility enabled users to recover accidentally deleted documents until the disk ran out of space (there was no hard disk, all files were stored on floppy disks), when the software would permanently delete files from "limbo" to make room for new ones. Journalist Dave Langford published a collection of his articles about the PCW, and titled it "The Limbo Files". LocoScript was designed to accommodate add-on programs, which could be selected via the file manager. LocoScript supported 150 characters. For each language supported by the PCW, the keyboard and LocoScript were configured so that users could easily type all of the normal character set. Various other languages' characters could be typed by holding down the ALT or EXTRA key, along with the SHIFT key if capitals were required. LocoScript could also display mathematical and technical symbols. All these characters and symbols could be printed, unless the printer was a daisy wheel unit. LocoScript's menu system enabled users to add, singly or in combination, a range of sophisticated typographical effects: monospaced or proportional character spacing; normal or double width characters and spacing; various font sizes; bold, underline, italics, subscript or superscript, and reverse video. All of these except those that affected font size and spacing were displayed on the screen. Reverse video was an on-screen reminder to the user and was never printed, while the other effects were printed, except on daisy wheel printers. 
Users could optionally define up to two page headers and footers, and could tell LocoScript whether to use one header or footer on odd pages and the other on even pages, one header or footer for the first or last page and another for all the rest, or to omit a header or footer on the first or last page. The program provided codes for the current page number and total number of pages, for aligning them to the left, centre, or right, and for decorations such as leading and trailing hyphens (e.g. "-9-"). LocoScript automatically avoided widows and orphans, ensuring that, if a paragraph of four or more lines split across pages, at least two lines appeared on each page. Users could also tell LocoScript to keep a group of lines or paragraphs together on the same page, or to avoid splitting paragraphs throughout a document, and could force page breaks. Users could control placement of text by means of: margins; indentation; normal tab stops; decimal tab stops, which set the position of the decimal point rather than the start of a number; and left, right or full justification. Different combinations of these settings, called "layouts", were automatically numbered, which made it possible to re-use layouts and to make changes that applied to all parts of a document where a specified layout was used. These facilities could be used for presenting tables. LocoScript's cut, copy and paste facility provided 10 paste buffers ("blocks"), each of which was designated by a number and could be saved for re-use in a different document. Users could also save up to 26 short phrases, identified by letters, although the size of individual phrases and of the whole collection of phrases was limited. Both phrases and paste blocks could be inspected via a menu option. In addition, users could insert whole files, which could be either LocoScript documents or ASCII text files. 
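The widow-and-orphan rule described above lends itself to a simple algorithmic statement. The following Python sketch illustrates the rule as described in the text; it is not LocoScript's actual code, and the function name and interface are invented for the example:

```python
def split_paragraph(num_lines, space_on_page):
    """Decide how many lines of a paragraph to place on the current page,
    following the widow/orphan rule described in the text: a paragraph of
    four or more lines may be split only if at least two lines end up on
    each page.

    Returns the number of lines to keep on the current page
    (0 means push the whole paragraph to the next page)."""
    if num_lines <= space_on_page:
        return num_lines  # fits entirely: no split needed
    if num_lines < 4:
        return 0          # short paragraphs are never split
    # Keep as many lines as fit, but at least two must carry over
    # to the next page, and at least two must stay behind.
    keep = min(space_on_page, num_lines - 2)
    return keep if keep >= 2 else 0
```

For instance, a five-line paragraph with room for four lines left on the page would keep three lines and carry two over, never leaving a lone line on either page.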
The "find" and "find and replace" facilities could operate on a whole document, or small sections of one, and "find and replace" ("exchange" in the manual's terminology) had an option to confirm each change or just go ahead. The program did not immediately reflow text after major insertions or deletions, but did this when the user pressed the RELAY key, or automatically if the user moved the cursor through the changed passage. LocoScript allowed the user to edit one document while printing another, so that the relative slowness of the bundled dot matrix printer seldom caused difficulties. Users could ask for all of a document to be printed or a range of pages, set the print quality to "high quality" or "draft", and set the paper used to single-sheet or continuous stationery. LocoScript automatically adjusted the size of margins so that the same number of lines per page appeared on both single-sheet and continuous stationery. Since the printer only accepted one sheet of single-sheet paper at a time, LocoScript displayed a prompt at the end of each page when in single-sheet mode. The program also had the ability to resume at a specified page after a paper jam. In addition to printing LocoScript documents, the program had a "direct printing" mode which operated like a typewriter, printing each piece of text after the user pressed RETURN. This could be used for completing forms. The original LocoScript version 1 had no spell checker or mail merge facilities. Both were available by December 1986. Despite the sophistication of the software, the great drawback of the PCWs was the exclusive reliance of the early models (the PCW 8256 and 8512) on a poor quality dot matrix printer, coupled with the eventual introduction (with the 9512) of a high quality daisy wheel printer that could not print any of the wide range of non-alphanumeric symbols which the LocoScript software was capable of producing. 
The software was seriously hamstrung by the poor quality of the hardware, but this was due in large part to a commercial decision not to provide any support for third-party printers so long as the software remained exclusive to the PCW format. LocoScript 2 LocoScript 2 was bundled with the Amstrad PCW 9512, introduced in 1987. This version was significantly faster, included the LocoSpell spell checker, and came bundled with a high-quality daisy wheel printer in addition to supporting the original dot matrix printer. The software increased its character set to 400 and allowed users to define up to 16 of their own characters. It could also format, copy and verify disks by itself, instead of requiring the user to switch to CP/M and use the Disc Kit utility. Copying via LocoScript, however, could be much slower: because the word processor occupied more RAM than the CP/M system, leaving less usable space on the RAM disk, copying had to be done in smaller, more numerous stages. Besides the addition of LocoSpell, LocoScript 2 was also the earliest version to support the optional LocoFile add-on, providing database functionality. LocoScript PC LocoScript PC, later known as LocoScript PC Easy, was the first version of LocoScript to be incompatible with the Amstrad PCW, instead targeting IBM-compatible PCs running MS-DOS 3.0 and above. Released in 1990, the program's feature set was largely in line with LocoScript 2, supporting LocoSpell, LocoMail, and LocoFile, while also adding a few new features, such as support for mixing text with different fonts and new text styling options, as well as a substantially increased range of compatible third-party printers. LocoScript Professional Released in May 1992, LocoScript Professional, or simply Script, was the second major release of LocoScript to run on MS-DOS and included a number of new features and improvements over LocoScript PC. 
It supported printers that connected using the then-standard parallel port, such as most HP DeskJets and some Brother HL-series laser printers (which could be run under DOS using the generic LaserJet4 driver). It did not support printers which required a USB connection or which were labelled "Windows only". There are some compatibility issues with Windows XP, Vista, 7, 8, and 10 when the program is run in those systems' DOS mode or under the DOSBox emulator; for example, WYSIWYG functionality is lost. However, version 2.51 can be made to work in Windows 10 under DOSBox MB6, which maintains printing via the parallel printer port when running in text screen mode; within LocoScript, this is set by selecting F9 (Settings), then F5 (Screen Mode), then "Text Screen Direct", then OK and F10. LocoScript 3 LocoScript 3 included the ability to print text at any size using scalable "LX" fonts, and to use multiple fonts in a document. According to the vendor, LocoScript 3 also had the ability to include pictures and draw boxes within documents, a facility to print odd-numbered and even-numbered pages separately, and a word counter. The vendor recommended LocoScript 3 only for use with PCW models that had 512 KB of RAM. Four add-on utilities were included: LocoSpell (a spell checker), LocoMail (a mail merge program), LocoFile (a database program), and a Printer Support Pack, but (unlike in the earlier LocoScript versions) these utilities were no longer sold separately. LocoScript 4 The last major version of LocoScript for the PCW, LocoScript 4, was released in 1996 by LocoScript Software, a new company created by former Locomotive Software employees after the sale of Locomotive to Demon Internet. It added a wider range of fonts, support for colour printing, a label-printing facility, and optional support for hundreds of printers. Version 4 also supported the mail merge program, LocoMail, and the LocoFile database program. 
See also Amstrad CP/M Plus character set References External links LocoScript 2 box contents Word processors DOS software Amstrad PCW software
910726
https://en.wikipedia.org/wiki/Browser%20Helper%20Object
Browser Helper Object
A Browser Helper Object (BHO) is a DLL module designed as a plugin for the Microsoft Internet Explorer web browser to provide added functionality. BHOs were introduced in October 1997 with the release of version 4 of Internet Explorer. Most BHOs are loaded once by each new instance of Internet Explorer. However, in the case of Windows Explorer, a new instance is launched for each window. BHOs are still supported as of Windows 10, through Internet Explorer 11, but are not supported in Microsoft Edge. Implementation Each time a new instance of Internet Explorer starts, it checks the Windows Registry for the key HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Explorer\Browser Helper Objects. If Internet Explorer finds this key in the registry, it looks for CLSID keys listed beneath it. The CLSID keys under Browser Helper Objects tell the browser which BHOs to load. Removing the registry key prevents the BHO from being loaded. For each CLSID that is listed below the BHO key, Internet Explorer calls CoCreateInstance to start an instance of the BHO in the same process space as the browser. If the BHO is started and implements the IObjectWithSite interface, it can control and receive events from Internet Explorer. BHOs can be created in any language that supports COM. Examples Some modules enable the display of different file formats not ordinarily interpretable by the browser. The Adobe Acrobat plug-in that allows Internet Explorer users to read PDF files within their browser is a BHO. Other modules add toolbars to Internet Explorer, such as the Alexa Toolbar that provides a list of web sites related to the one currently being viewed, or the Google Toolbar that adds a toolbar with a Google search box to the browser user interface. The Conduit toolbars are based on a BHO that can be used on Internet Explorer 7 and up. This BHO provides a search facility that connects to Microsoft's Bing search. 
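The load sequence described under Implementation can be sketched as a small simulation. This is illustrative only: it uses an in-memory dictionary in place of the real Windows registry (actual code would use the winreg module against HKEY_LOCAL_MACHINE), and the CLSIDs below are made up:

```python
# Registry path Internet Explorer checks at start-up (under HKEY_LOCAL_MACHINE).
BHO_KEY = (r"SOFTWARE\Microsoft\Windows\CurrentVersion"
           r"\Explorer\Browser Helper Objects")

# In-memory stand-in for the registry; the CLSIDs are invented for the example.
MOCK_REGISTRY = {
    BHO_KEY: [
        "{11111111-1111-1111-1111-111111111111}",
        "{22222222-2222-2222-2222-222222222222}",
    ],
}

def bhos_to_load(registry):
    """Return the CLSID subkeys a new IE instance would load as BHOs.
    If the Browser Helper Objects key is absent, nothing is loaded,
    which is why deleting the key prevents a BHO from running."""
    return list(registry.get(BHO_KEY, []))
```

For each CLSID returned, Internet Explorer would then call CoCreateInstance in its own process space and query the resulting object for IObjectWithSite.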
Concerns The BHO API exposes hooks that allow the BHO to access the Document Object Model (DOM) of the current page and to control navigation. Because BHOs have unrestricted access to the Internet Explorer event model, some forms of malware have also been created as BHOs. For example, the Download.ject malware is a BHO that is activated when a secure HTTP connection is made to a financial institution, then begins to record keystrokes for the purpose of capturing user passwords. The MyWay Searchbar tracks users' browsing patterns and passes the information it records to third parties. The C2.LOP malware adds links and popups of its own to web pages in order to drive users to pay-per-click websites. Many BHOs introduce visible changes to a browser's interface, such as installing toolbars in Internet Explorer and the like, but others run without any change to the interface. This renders it easy for malicious coders to conceal the actions of their browser add-on, especially since, after being installed, the BHO seldom requires permission before performing further actions. For instance, variants of the ClSpring trojan use BHOs to install scripts to provide a number of instructions to be performed such as adding and deleting registry values and downloading additional executable files, all completely transparently to the user. The DyFuCA spyware even replaces Internet Explorer's general error page with an ad page. In response to the problems associated with BHOs and similar extensions to Internet Explorer, Microsoft debuted an Add-on Manager in Internet Explorer 6 with the release of Service Pack 2 for Windows XP (updating it to IE6 Security Version 1, a.k.a. SP2). This utility displays a list of all installed BHOs, browser extensions and ActiveX controls, and allows the user to enable or disable them at will. There are also free tools (such as BHODemon) that list installed BHOs and allow the user to disable malicious extensions. 
Spybot S&D advanced mode has a similar tool built in to allow the user to disable installed BHOs. See also Browser extension Plug-in (computing) HTML Components Add-on (Mozilla) Google Chrome Extensions References External links Sites.google.com Microsoft sites IEHelper: Attaching to Internet Explorer 4.0 by Using a Browser Helper Object Control Internet Explorer Add-ons with Add-on Manager, an article on Microsoft.com that explains this new feature of Windows XP Service Pack 2 Building Browser Helper Objects with Visual Studio 2005, an October 2006 MSDN article by Tony Schreiner and John Sudds Listings and examples CLSID List, a master list created by Tony Kleinkramer which attempts to record and identify every BHO available (previously located at the now-defunct castlecops.com); it also includes Toolbar, Explorer Bar and URLSearchHook GUIDs C++ example code for a BHO C# example code for a BHO Internet Explorer
47896187
https://en.wikipedia.org/wiki/Volkswagen%20emissions%20scandal
Volkswagen emissions scandal
The Volkswagen emissions scandal, sometimes known as Dieselgate or Emissionsgate, began in September 2015, when the United States Environmental Protection Agency (EPA) issued a notice of violation of the Clean Air Act to German automaker Volkswagen Group. The agency had found that Volkswagen had intentionally programmed turbocharged direct injection (TDI) diesel engines to activate their emissions controls only during laboratory emissions testing, which caused the vehicles' NOx output to meet US standards during regulatory testing, while they emitted up to 40 times more NOx in real-world driving. Volkswagen deployed this software in about 11 million cars worldwide, including 500,000 in the United States, in model years 2009 through 2015. Background Introduction In 2014, the California Air Resources Board (CARB) commissioned from the International Council on Clean Transportation (ICCT) a study on emissions discrepancies between European and US models of vehicles, which summed up the data on 15 vehicles from three sources. Among those recruited to this task was a group of five scientists at the West Virginia University Center for Alternative Fuels Engines and Emissions (CAFEE), who used a Japanese on-board emission testing system and detected additional emissions during live road tests on two out of three diesel cars. ICCT also purchased data from two other sources. The new road testing data and the purchased data were generated using portable emissions measurement systems (PEMS), a technology developed by multiple individuals in the mid-to-late 1990s, and the findings were published in May 2014. Regulators in multiple countries began to investigate Volkswagen, and its stock price fell by a third in the days immediately after the news. Volkswagen Group CEO Martin Winterkorn resigned, and the head of brand development Heinz-Jakob Neusser, Audi research and development head Ulrich Hackenberg, and Porsche research and development head Wolfgang Hatz were suspended. 
Volkswagen announced plans in April 2016 to spend €16.2 billion (US$18.3 billion at April 2016 exchange rates) on rectifying the emissions issues, and planned to refit the affected vehicles as part of a recall campaign. In January 2017, Volkswagen pleaded guilty to criminal charges and signed an agreed Statement of Facts, which drew on the results of an investigation Volkswagen had itself commissioned from US lawyers Jones Day. The statement set out how engineers had developed the defeat devices, because diesel models could not pass US emissions tests without them, and deliberately sought to conceal their use. In April 2017, a US federal judge ordered Volkswagen to pay a $2.8 billion criminal fine for "rigging diesel-powered vehicles to cheat on government emissions tests". The "unprecedented" plea deal formalized the punishment which Volkswagen had agreed to. Winterkorn was charged in the United States with fraud and conspiracy on 3 May 2018. As of 2020, the scandal had cost VW $33.3 billion in fines, penalties, financial settlements and buyback costs. Various government and civil actions are under way in the U.S., as well as the European Union, where most of the affected vehicles are located; while the vehicles remain legal to drive there, consumer groups and governments seek to make sure Volkswagen has compensated their owners appropriately, as it had to do in the United States. The scandal raised awareness over the higher levels of pollution emitted by all diesel-powered vehicles from a wide range of car makers, which under real-world driving conditions exceeded legal emission limits. A study conducted by ICCT and ADAC showed the biggest deviations from Volvo, Renault, Jeep, Hyundai, Citroën and Fiat, resulting in investigations opening into other diesel emissions scandals. The scandal also sparked a discussion about software-controlled machinery being generally prone to cheating, with one proposed remedy being to open-source such software for public scrutiny. 
Volkswagen Diesel anti-pollution system In general, three-way catalytic converter technology, which has been very effective since the early 1980s at reducing nitrogen oxide (NOx) in petrol engine exhaust, does not work well for diesel vehicles, which emit 20 times more NOx unless somehow treated. To deal with this fact, in 2005 some managers at Volkswagen intended to purchase the rights to Mercedes' bulky, expensive, high-maintenance urea-based BlueTec selective catalytic reduction system for treating diesel exhaust pollution. Other managers at Volkswagen rejected BlueTec, and preferred to develop their own inexpensive "lean trap" system. The "lean trap" team won, but their solution did not actually work. Actual compliance is very challenging – since 2016, 38 out of 40 diesel cars of all brands tested by ADAC failed a NOx test based on government standards. Nonetheless, Volkswagen promoted the technological miracle of fast, cheap, and green diesel vehicles – but the impression projected to outsiders did not reflect the reality. In reality, the system failed to combine lower fuel consumption with compliant NOx emissions, and Volkswagen chose around 2006 to program the Engine Control Unit (ECU) to switch from lower fuel consumption and high NOx emissions to a low-emission compliant mode when it detected an emissions test, particularly for the EA 189 engine. This caused the engine to emit NOx levels above limits in daily operation, but comply with US standards when being tested, constituting a defeat device. In 2015 the news magazine Der Spiegel reported that at least 30 people at management level in Volkswagen had known about the deceit for years, which Volkswagen denied in 2015. Starting in the 2009 model year, Volkswagen Group began migrating its light-duty passenger vehicles' turbocharged direct injection (TDI) diesel engines to a common-rail fuel injection system. 
This system allows for higher-precision fuel delivery using electronically controlled fuel injectors and higher injection pressure, theoretically leading to better fuel atomization, better air/fuel ratio control, and by extension, better control of emissions. Volkswagen described the diesel engines as being as clean as or cleaner than US and Californian requirements, while providing good fuel economy and performance. Due to the good fuel economy provided by its diesel fleet, in 2014 Volkswagen registered an impressive Corporate Average Fuel Economy (CAFE) figure. The low emissions levels of Volkswagen vehicles tested with the defeat device in operation enabled the company to receive green car subsidies and tax exemptions in the US. Early warnings 1998– In 1998, a Swedish researcher criticized the New European Driving Cycle standard for allowing large emission differences between test and reality. The Washington Post also reported that in the late 1990s, EPA engineers at Virginia Testing Laboratory had built a system called ROVER, designed to test a car's emissions on the road. The project was shut down in 2001, despite preliminary tests indicating gaps of about 10 to 20 percent between emissions from lab tests and real world tests. In 2011, the European Commission's Joint Research Centre published a report which found that all tested diesel vehicles emitted 0.93 ± 0.39 g/km of NOx and that the tested Euro 5 diesel vehicles emitted 0.62 ± 0.19 g/km, which substantially exceeded the respective Euro 3–5 NOx emission limits. In 2013, the research centre issued a further warning. The European Commission and European governments could not agree upon who was responsible for taking action. In the United Kingdom, the Department for Transport received a report from the International Council on Clean Transportation (ICCT) in October 2014, which stated there was a "real world nitrogen oxides compliance issue" with diesel passenger cars. 
The UK's DEFRA research indicated a significant reduction in NOx and particulate matter from 1983 to 2014. Respirable suspended particles with a diameter of 10 micrometres – also known as PM10 (including diesel particulates) – had halved since 1996, despite the increased number and size of diesel cars in the UK. European discrepancies, 2014 The independent body International Council on Clean Transportation (ICCT) commissioned a study in 2014 and obtained data on 15 vehicles from three sources. John German, co-lead of the US branch of ICCT, said the idea for the "very ordinary" test came from Peter Mock, managing director of ICCT in Europe. Mr. German said they chose to put US vehicles through on-the-road tests because their emissions regulations are more stringent than those in the European Union. The ICCT expected the cars to pass, and thought they would be able to use the results to demonstrate to Europeans that it was possible to run diesel cars with cleaner emissions. The study found emissions discrepancies in the diesel VW Passat and VW Jetta, and no discrepancies in a BMW X5. They wanted to test a Mercedes as well, but could not obtain one. Emission testing, US 2014 A group of scientists at West Virginia University submitted a proposal to ICCT, and John German awarded them a US$50,000 grant for a study to conduct tests on three diesel cars: a Volkswagen Passat, a Volkswagen Jetta, and a BMW X5. ICCT also purchased data from Emissions Analytics, a UK-based emissions consultancy, and from stakeholders in the Real Driving Emissions – Light Duty Vehicle working group in charge of amending Euro 6 regulations. In early 2014, two professors and two students began testing emissions from the three vehicles under road conditions, using a portable emissions measurement system, making it possible to collect real world driving emissions data for comparison with laboratory dynamometer testing. 
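The core comparison in such a study is simple arithmetic: on-road emissions logged by the PEMS divided by the applicable certification limit. A minimal sketch, using the 0.62 g/km Euro 5 diesel average reported in the JRC study quoted earlier against the standard Euro 5 diesel NOx limit of 0.18 g/km (the function itself is invented for illustration):

```python
def exceedance_factor(road_nox_g_per_km, limit_g_per_km):
    """Ratio of measured on-road NOx to the certification limit;
    1.0 means the vehicle is exactly at the limit."""
    return road_nox_g_per_km / limit_g_per_km

# JRC's reported Euro 5 diesel average (0.62 g/km) against the
# Euro 5 diesel NOx limit (0.18 g/km): roughly 3.4 times the limit.
factor = exceedance_factor(0.62, 0.18)
```

The "factor of 15 to 35" and "factor of 5 to 20" figures reported for the road tests are exactly this kind of ratio, computed against the much stricter US limits.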
The three vehicles were all certified at a California Air Resources Board facility before the tests as falling below the emissions limits when using the standard laboratory testing protocols. They accumulated road mileage on the Jetta and X5. For their final test, they wanted to put even more mileage on the Passat and drove it from Los Angeles to Seattle and back again, covering virtually the entire West Coast of the United States. The BMW was "at or below the standard … with exception of rural-up/downhill driving conditions". But the researchers found that under real-world driving conditions the Jetta exceeded US emissions limits "by a factor of 15 to 35" while the Passat exceeded the limit "by a factor of 5 to 20". The emissions far exceeded legal limits set by both European and US standards. One of the testers said, "... we did so much testing that we couldn't repeatedly be doing the same mistake again and again." John German said the deceit required more effort than merely adding some code to the engine software, as the code would also have to be validated. The US test results confirmed the ICCT's findings in Europe. The West Virginia scientists did not identify the defeat device, but they reported their findings in a study they presented to the EPA and CARB in May 2014. In May 2014 Colorado's RapidScreen real-world emissions test data reinforced the suspected abnormally high emissions levels. After a year-long investigation, an international team of investigators identified the defeat device as a piece of code labelled "acoustic condition" which activated emissions-curbing systems when the car's computer identified it was undergoing a test. Underlying U.S. and EU emission standards The Volkswagen and Audi cars identified as violators had been certified to meet either the US EPA Tier 2 / Bin 5 emissions standard or the California LEV-II ULEV standard. 
Either standard requires that nitrogen oxide emissions not exceed 0.07 grams per mile for engines at full useful life, which is defined as either 120,000 miles or 150,000 miles depending on the vehicle and optional certification choices. This standard for nitrogen oxide emissions is among the most stringent in the world. For comparison, the contemporary European standards known as Euro 5 (2008 "EU5 compliant", 2009–2014 models) and Euro 6 (2015 models) only limit nitrogen oxide emissions to 0.18 g/km and 0.08 g/km respectively. Defeat devices are forbidden in the EU. The use of a defeat device is subject to a penalty. Note: The vehicles tested were anonymous in the original study. Emissions are listed on pages 64–65, limits on page 5, and NOx treatment on page 9. 20 percent of European city dwellers are exposed to unhealthy levels of nitrogen dioxide. In London, where diesel road traffic is responsible for 40 percent of NOx emissions, air pollution causes more than 3,000 deaths a year. A Channel 4 documentary in January 2015 referred to the UK government moving to a CO2 emission band system for road tax, which favoured diesel power, as the "great car con", with Barry Gardiner MP, former member of the Blair government, stating that the policy, which lowered CO2 emissions yet increased NOx pollution, was a mistake. EPA Notice of Violation, 2015 On 18 September 2015, the US EPA served a Notice of Violation (NOV) on Volkswagen Group alleging that approximately 480,000 Volkswagen and Audi automobiles equipped with 2-litre TDI engines, and sold in the US between 2009 and 2015, had an emissions-compliance "defeat device" installed. A Notice of Violation is a notification to the recipient that the EPA believes it has committed violations and is not a final determination of liability. Volkswagen's "defeat device" is specially written engine-management-unit firmware that detects "the position of the steering wheel, vehicle speed, the duration of the engine's operation, and barometric pressure" when the car is positioned on a dynamometer using the FTP-75 test schedule. 
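As a purely illustrative sketch, and emphatically not Volkswagen's firmware, the kind of mode switch the NOV describes amounts to a predicate over the signals just listed; every threshold and name below is invented:

```python
def looks_like_dyno_test(steering_angle_deg, speed_kmh_history, barometric_kpa):
    """Hypothetical test-cycle detector: on a dynamometer the steering
    wheel stays centred while the speed trace follows a fixed schedule.
    All thresholds here are invented for illustration."""
    wheel_centred = abs(steering_angle_deg) < 1.0
    # The real device reportedly compared the speed trace against the
    # FTP-75 schedule; here we only check that a trace is present.
    following_schedule = len(speed_kmh_history) > 0
    plausible_pressure = 80.0 <= barometric_kpa <= 110.0
    return wheel_centred and following_schedule and plausible_pressure

def emissions_mode(steering_angle_deg, speed_kmh_history, barometric_kpa):
    """Return which calibration a hypothetical ECU would select."""
    if looks_like_dyno_test(steering_angle_deg, speed_kmh_history, barometric_kpa):
        return "compliant"  # full NOx controls active during the test
    return "road"           # controls curtailed in normal driving
```

When the predicate fails, i.e. under ordinary road driving with the wheel turning, the sketch falls back to the calibration with curtailed NOx controls.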
These criteria very closely match the EPA's required emissions testing protocol, which allowed the vehicles to comply with emissions regulations by properly activating all emissions controls during testing. The EPA's NOV alleged that under normal driving conditions, the software suppressed the emissions controls, allowing better fuel economy, at the expense of emitting up to 40 times more nitrogen oxides than allowed by law. Intelligence agencies, 2015 In February 2017, Der Spiegel reported that in February 2015, former Israeli diplomat Avi Primor had shown Ferdinand Piëch, at the time Volkswagen's chairman of the board, a document in which US agencies had warned CEO Martin Winterkorn early on about the manipulation. During this meeting at the end of February 2015, Primor introduced Piëch to his friend Yuval Diskin, who, after retiring as director of the Israeli internal security service Shin Bet, had founded a cybersecurity company. Shin Bet apparently knew about the scandal early. Primor confirmed that the meeting took place, but both Primor and Diskin denied tipping off Piëch. In early March 2015, Piëch asked Winterkorn whether there had been a warning by US agencies, which Winterkorn denied. Volkswagen's response Initial response, August and September 2015 According to the EPA, Volkswagen had insisted for a year before the outbreak of the scandal that discrepancies were mere technical glitches. Volkswagen fully acknowledged that they had manipulated the vehicle emission tests only after being confronted with evidence regarding the "defeat device". The first sign that Volkswagen was ready to come clean reportedly occurred on 21 August 2015 at a conference on green transportation in Pacific Grove, California, where an unnamed company representative approached Christopher Grundler, director of the EPA Office of Transportation and Air Quality, and surprised him by informally admitting that the company had been deceiving regulators. 
A CARB official was standing next to Grundler at the time. Formal acknowledgement of the deception was made by Volkswagen executives in Germany and the United States to EPA and California officials during a 3 September conference call, during which Volkswagen executives discussed written materials provided to the participants demonstrating how Volkswagen's diesel engine software circumvented US emissions tests. That admission came after the EPA threatened to withhold approval for the company's 2016 Volkswagen and Audi diesel models. Volkswagen's CEO Martin Winterkorn said: "I personally am deeply sorry that we have broken the trust of our customers and the public." Winterkorn was in charge at Volkswagen from the start of 2008 to September 2015. He attributed the admitted wrongdoing to "the terrible mistakes of a few people". Winterkorn initially resisted calls to step down from his leadership role at VW, but then resigned as CEO on 23 September 2015. Volkswagen Group of America CEO Michael Horn was more direct, saying, "We've totally screwed up." Horn added, "Our company was dishonest with the EPA, and the California Air Resources Board and with all of you." Olaf Lies, a Volkswagen board member and economy minister of Lower Saxony, later told the BBC that the people "who allowed this to happen, or who made the decision to install this software" acted criminally, and must be held personally accountable. He also said the board found out about the problems only "shortly before the media did", and expressed concerns over "why the board wasn't informed earlier about the problems when they were known about over a year ago in the United States". Volkswagen announced that 11 million cars were involved in the falsified emission reports, and that over seven billion dollars would be earmarked to deal with the costs of rectifying the software at the heart of the pollution statements. 
The newly appointed CEO of Volkswagen, Matthias Müller, stated that the software was activated in only a part of those 11 million cars, though the exact number had yet to be determined. The German tabloid Bild claimed that top management had been aware of the software's use to manipulate exhaust settings as early as 2007. Bosch provided the software for testing purposes and warned Volkswagen that it would be illegal to use the software to avoid emissions compliance during normal driving. Der Spiegel followed Bild with an article dated 30 September 2015 stating that some groups of people had been aware of this in 2005 or 2006. Süddeutsche Zeitung had similarly reported that Heinz-Jakob Neusser, one of Volkswagen's top executives, had ignored at least one engineer's warnings over "possibly illegal" practices in 2011. On 28 September 2015, it was reported that Volkswagen had suspended Heinz-Jakob Neusser, head of brand development at its core Volkswagen brand; Ulrich Hackenberg, the head of research and development at its brand Audi, who oversaw technical development across the Volkswagen group; and Wolfgang Hatz, research and development chief at its sports-car brand Porsche, who also headed engine and transmissions development for the Volkswagen group. On the same day it was reported that besides the internal investigation of the incidents, the supervisory board of Volkswagen had hired the American law firm Jones Day to carry out an independent investigation. Computerworld suggested that a software audit trail and test logs were ways to investigate what took place when. In February 2016 Volkswagen also contracted three public relations firms (Kekst in the United States, Hering Schuppener in Germany, Finsbury in Britain), in addition to its usual US-retained firm Edelman. 
To further help deal with the scandal, Volkswagen hired ex-FBI director Louis Freeh, alongside former German constitutional judge Christine Hohmann-Dennhardt, previously employed by Daimler and, as of 2016, on Volkswagen's board as its director of integrity and legal affairs. Other irregularities, November 2015 CO2 emissions On 3 November 2015, Volkswagen revealed that its internal investigation had found that CO2 emissions and fuel consumption figures were also affected by "irregularities". These new issues, first estimated to cost up to €2 billion to repair, involved mainly diesel, but also some petrol models, with initial estimates suggesting that approximately 800,000 vehicles equipped with 1.4, 1.6 and 2.0 litre motors from VW, Skoda, Audi and SEAT might be affected. On 9 December 2015, Volkswagen revised these estimates, saying that only around 36,000 vehicles were affected by the irregularities, while also affirming that it had found no evidence of unlawful changing of emissions data. The news prompted a 7.3 percent increase in Volkswagen preference shares on the same day. In November 2016, California regulators claimed to have discovered software installed on some Audi models that allowed the manufacturer to cheat emissions during standard testing, thereby also masking the cars' contribution to global warming. 3.0 litre TDI emissions On 20 November 2015, the EPA said Volkswagen officials told the agency that all 3.0-litre TDI diesel engines sold in the US from 2009 through 2015 were also fitted with emissions-cheating software, in the form of "alternate exhaust control devices". These are prohibited in the United States; the software is, however, legal in Europe. Volkswagen acknowledges these devices' existence, but maintains that they were not installed with a "forbidden purpose". 
On 4 January 2016, the US Department of Justice filed a complaint in a federal court against VW, alleging that the respective 3.0-litre diesel engines met the legal emission requirements only in a "temperature conditioning" mode that is automatically switched on under testing conditions, while at "all other times, including during normal vehicle operation, the vehicles operate in a 'normal mode' that permits emissions of up to nine times the federal standard". The complaint covers around 85,000 3.0 litre diesel vehicles sold in the United States since 2009, including the Volkswagen Touareg, Porsche Cayenne, Audi A6 Quattro, Audi A7 Quattro, Audi A8, Audi A8L, Audi Q5, and Audi Q7 models. Affected Volkswagen and Audi TDI models Vehicle recall and consequences On 29 September 2015, Volkswagen announced plans to refit up to 11 million affected vehicles fitted with Volkswagen's EA 189 diesel engines, including 5 million at the Volkswagen brand, 2.1 million at Audi, 1.2 million at Škoda and 1.8 million light commercial vehicles. SEAT said that 700,000 of its diesel models were affected. In Europe alone, a total of 8 million vehicles were affected. In Germany, 2.8 million vehicles would have to be recalled, followed by the UK with 1.2 million. In France, 984,064 vehicles were affected, in Austria around 360,000, and in the Czech Republic 148,000 vehicles were involved (of which 101,000 were Škodas). In Portugal, Volkswagen said it had sold 94,400 vehicles with the software. The repair may not require a formal recall; in the UK, for example, the company will simply offer to repair the cars free of charge, as a recall is required only "when a defect is identified that... could result in serious injury". 
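The dual-mode behaviour alleged in the DOJ complaint can be illustrated with a deliberately simplified sketch. All names and thresholds here are invented for illustration; the actual ECU logic was far more complex and keyed on detecting test-cycle conditions:

```python
# Hypothetical illustration of the dual-mode behaviour alleged in the DOJ
# complaint: full emission controls run only in a "temperature conditioning"
# mode engaged under test-like conditions, while a dirtier "normal mode"
# runs otherwise. Numbers are normalised so 1.0 equals the legal NOx limit.

FEDERAL_NOX_LIMIT = 1.0  # normalised: 1.0 = the federal standard

def select_mode(test_conditions_detected: bool) -> str:
    """Return the emissions mode the ECU would run in (illustrative only)."""
    return "temperature_conditioning" if test_conditions_detected else "normal"

def nox_output(mode: str) -> float:
    """NOx emissions relative to the legal limit, per the complaint's claim
    of 'up to nine times the federal standard' during normal operation."""
    return 1.0 if mode == "temperature_conditioning" else 9.0

# During a lab test the car appears compliant...
assert nox_output(select_mode(True)) <= FEDERAL_NOX_LIMIT
# ...while in normal driving emissions can reach nine times the standard.
assert nox_output(select_mode(False)) == 9 * FEDERAL_NOX_LIMIT
```

The point of the sketch is only that the same vehicle can report legal emissions under test conditions while exceeding the standard on the road, which is what made the behaviour hard to detect with laboratory testing alone.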
As the rules violation involved enabling emission controls during testing but turning them off under normal conditions to improve performance or fuel mileage, it was speculated that the software update might make cars perform less efficiently and impair fuel economy; according to VW, however, its proposed solutions would be designed to achieve legal EU emissions compliance without impairing engine performance or consumption. It was unclear whether the repair would include hardware modifications, such as selective catalytic reduction (SCR) upgrades. The recall was scheduled to start in January 2016, with all affected cars projected to be fixed by the end of the year. The company also announced a review of all of its brands and models, including its supercar marque Bugatti. On 8 October 2015, Volkswagen US CEO Michael Horn said in testimony before the US Congress that it could take years to repair all the cars, especially the older models, owing to the complex hardware and software changes required. He said that the fixes would likely preserve fuel economy ratings but that "there might be a slight impact on performance". On 12 October 2015, Paul Willis, Volkswagen UK managing director, told the Commons Transport Select Committee that about 400,000 Volkswagen cars in the UK would need fuel injectors altered as well as a software fix. The vehicles requiring the hardware fix were the 1.6 litre diesel models; the 1.2 litre and 2.0 litre diesel models would require only a software fix. On the same day, Volkswagen announced it would overhaul its entire diesel strategy, saying that in Europe and North America it would switch "as soon as possible" to the use of selective catalytic reduction technology to improve diesel emissions. It also announced plans to accelerate the development of electric cars and plug-in hybrids, as well as petrol instead of diesel engines for smaller cars. 
On 12–13 October 2015, Volkswagen Group vehicle drivers in the UK started receiving notification letters to "rectify the issue". Volkswagen later announced a timeline for UK diesel recalls, citing March 2016 for 2.0-litre engines, June 2016 for 1.2-litre engines, and October 2016 for 1.6-litre engines. At the beginning of October 2015, Volkswagen suggested letting car owners decide whether their cars would be recalled for repair. However, the German Federal Motor Transport Authority (Kraftfahrt-Bundesamt, or KBA) regarded the software as illegal and ordered a full recall of all affected cars in Germany. Volkswagen then decided to recall around 8.5 million cars in Europe, about a third of all its car deliveries since 2009. The KBA required Volkswagen to submit a recall plan before the end of October for 2.0-litre cars, and before the end of November for 1.2 and 1.6-litre cars; if the KBA approved a plan, Volkswagen could then start repairing the cars. The German authorities required that Volkswagen remove the software and ensure that emission rules were fulfilled. Media estimated that the KBA procedure would set a precedent for how authorities in other countries handled the case. On 18 November 2015, Autoblog reported that the KBA was reviewing a Volkswagen fix for the affected 1.6 diesel engine. On 25 November 2015, Volkswagen said the fix involved a minor hardware modification to the car's air intake system, alongside a software update. This low-cost solution contradicted earlier speculation regarding the possible fitting of new injection nozzles and catalytic converters. In December 2015, Volkswagen said that the affected 1.2-litre and 2.0-litre diesel engines needed only a software update. As of November 2015, the KBA had approved the fixes, with the first recalls likely to begin in January 2016. According to VW, the measures aimed to achieve legal EU emissions compliance without impairing engine output, fuel consumption, or performance. 
The simple fixes, using inexpensive parts and software, were possible by then, though not when the engines were developed, because understanding of engine technology and intake-flow simulation capabilities had matured in the meantime, allowing the burning of diesel and air mixtures to be addressed via intake flow shaping. As of December 2015, owing to stricter environmental legislation, fixes for US vehicles were expected to take longer to produce and be more technically complex. As of February 2016, there were three sizes of affected diesel engines, and more than a dozen variations of the repairs existed, prompting Volkswagen to roll out the recalls in waves for each cluster of vehicles; the first model to be repaired was the low-volume Volkswagen Amarok. Classified as a light commercial vehicle, the Amarok pickup has a higher Euro 5 emissions limit than the passenger cars, which were yet to have an available approved fix. The German motoring journal Auto Motor und Sport tested two Amarok TDI pickups before and after the software update and found that while engine power had remained the same, fuel consumption had increased by 0.5 litres/100 km. This in turn is believed to have delayed the next wave of updates, for the larger-volume Passat model, which had been expected to start on 29 February 2016; the delay was attributed to further testing of the update by the KBA. Volkswagen confirmed on 11 April 2016 that the Passat recall would be delayed, as testing had revealed higher fuel consumption. In 2017 the Swedish auto journal Teknikens Värld performed tests on 10 different models, and most of them showed a reduction in power output and an increase in fuel consumption after the update was applied. Advertising, 2015 In France, the MediaCom media agency, which buys advertising for Volkswagen, warned French newspapers on 22 September 2015 that it would cancel planned Volkswagen and Audi campaigns if they covered the emission violations. 
Given the scale that the scandal had already taken on by that time, the threat had little effect on its coverage. On the occasion of German Unity Day, Volkswagen launched an ad campaign in German Sunday newspapers expressing its joy at the 25th anniversary of German reunification, its pride in having helped shape the country together with all its people over those 25 years, its thanks for the confidence its customers had shown during all that time, and its gratitude to all its employees and trade partners in Germany, adding in one sentence that it "would do everything to win back the confidence of its customers". New orders, September 2015 In September 2015, Volkswagen's Belgian importer, D'Ieteren, announced that it would offer free engine upgrades to 800 customers who had ordered a vehicle with a diesel engine likely to have been fitted with illegal software. As of October 2015, sales of vehicles with EA 189 engines were halted in some European countries, including Spain, Switzerland, Italy, the Netherlands, Belgium and the UK. In the United States, Volkswagen withdrew its application for emissions certification for its 2016 diesel models, leaving thousands of vehicles stranded at ports in October 2015; the company said these vehicles contained software that should have been disclosed to and certified by the EPA. The EPA quarantined some 2016 models until it became clear that their catalysts performed the same on the road as they did in tests. US Congressional Testimony, October 2015 On 8 October 2015, Volkswagen US CEO Michael Horn testified before the United States House Committee on Energy and Commerce, stating: "This was not a corporate decision, from my point of view, and to my best knowledge today. This was a couple of software engineers who put this in for whatever reason... some people have made the wrong decisions in order to get away with something that will have to be found out." The response was widely ridiculed. 
Compensation, November 2015 On 9 November 2015, Volkswagen announced that, in addition to the US$2,000 it was offering current Volkswagen owners for trade-ins, 482,000 diesel Audi and Volkswagen owners in the United States would be eligible to receive US$1,000 in vouchers. On 18 November 2015, Volkswagen said that approximately one quarter of the affected vehicle owners had applied to the program, which was estimated to cost at least $120 million in benefits. Volkswagen confirmed that it was also offering vouchers to customers in Canada. Volkswagen America said that accepting the gift cards did not prevent owners from filing lawsuits. Volkswagen also created a claims fund, managed by the well-known mediation attorney Kenneth Feinberg, which would offer full compensation packages (in the form of cash, buy-backs, repairs or replacement cars) to the approximately 600,000 United States owners affected by the scandal. Despite earlier hints to the contrary, in December 2015 Volkswagen CEO Matthias Müller said that customers outside the US and Canada should also expect some type of compensation package: "we are working on an attractive package, let's call it compensation, for reduction in residual values in our cars". However, on 11 January 2016, a Volkswagen spokesman said "there won't be compensation. All the indications are that residual values are unaffected"; the company, which continued to face pressure from EU officials to compensate European drivers as well, blamed the confusion on "a slight mistranslation". EU commissioner Elżbieta Bieńkowska said Volkswagen was treating European consumers unfairly, and Volkswagen responded that the situation in the US and Canadian markets, where confidence in diesel technology was "severely shaken" and clients needed to wait longer for an engine fix due to tougher emissions standards, was not "automatically comparable" with other markets. 
On 21 April 2016, the federal district court for the Northern District of California, which had been designated in December 2015 to oversee almost all of the US litigation, including claims filed by vehicle owners and state governments, announced that Volkswagen would offer its US customers "substantial compensation" and buy back nearly 500,000 2.0-litre vehicles as part of a settlement in North America. The court appointed former FBI Director Robert Mueller as a mediator to oversee the negotiations between claimants, regulators, and Volkswagen, with the aim of producing a final "consent decree" by late June 2016. European actions, 2015–2020 Following its admission and recall plans in the United States, Volkswagen also started to establish similar plans in the European Union, where an estimated 8.5 million of the 11 million diesel vehicles affected by the scandal were located. The European Union warned Volkswagen in 2018 that it did not believe the company was moving fast enough to issue repairs on the recalled cars, provide consumers with appropriate information on the steps Volkswagen was taking to resolve the problem, and explain what compensation it was offering affected consumers. In 2018, Volkswagen agreed to a fine imposed by Germany for failing to monitor the employees who had modified the software behind the scandal. In Germany, over 60,000 civil lawsuits of various kinds, representing about 450,000 citizens, were filed from 2015 through 2019 by Volkswagen owners seeking compensation similar to what Volkswagen had given United States drivers. A case led by the Federation of German Consumer Organizations (VZBV) was brought against Volkswagen. At the Braunschweig Oberlandesgericht (Higher Regional Court), Volkswagen argued that whereas the United States had banned the affected cars, no EU member state had banned the affected vehicles, and thus there was no basis for any compensation. 
However, Judge Michael Neef rejected a summary judgement for Volkswagen in September 2019, allowing what was anticipated to be a multi-year case to go forward. Volkswagen settled with the VZBV in February 2020 for about , providing between and to approximately 260,000 Volkswagen owners through the VZBV. Many consumers were angered by this settlement, which represented only a fraction of what Volkswagen had paid to United States owners. One of the other civil cases, serving as a template for those not covered by the VZBV case, reached the Federal Court of Justice, Germany's highest court, which ruled in May 2020 that the consumer was entitled to the full market value of the car, several times larger than what the settlement would have offered. It is unclear how much Volkswagen will owe as a result of the remaining civil lawsuits. A similar class-action suit against Volkswagen, representing more than 91,000 owners, is currently underway in the United Kingdom, seeking greater compensation for owners sold vehicles known by Volkswagen to be defective. The High Court of Justice gave preliminary findings in the case in April 2020, finding a likelihood that Volkswagen did sell vehicles with a "defeat device" and had attempted to abuse the process, and allowing the trial to go forward. Consequences Health consequences Deaths A peer-reviewed study published in Environmental Research Letters estimated that approximately 59 premature deaths will be caused by the excess pollution produced between 2008 and 2015 by vehicles equipped with the defeat device in the United States, the majority due to particulate pollution (87 percent) with the remainder due to ozone (13 percent). The study also found that making these vehicles emissions compliant by the end of 2016 would avert an additional 130 early deaths. 
Earlier non-peer-reviewed studies published in media sources quoted estimates ranging from 10 to 350 excess deaths in the United States related to the defeat devices, based on varying assumptions. A 2022 study by economists found that each cheating Volkswagen car per 1,000 cars caused a 1.9 percent increase in the low birth weight rate and a 1.7 percent increase in the infant mortality rate. Non-fatal health impacts Since NO2 is a precursor to ground-level ozone, it may cause respiratory problems "including asthma, bronchitis and emphysema". Nitrogen oxides amplify the effect of fine particulate matter soot, which causes heart problems, a form of air pollution estimated to kill 50,000 people in the United States annually. A peer-reviewed study published in Environmental Pollution estimated that the fraudulent emissions would be associated with 45 thousand disability-adjusted life years (DALYs) and a value of life lost of at least 39 billion US dollars. In June 2016, Axel Friedrich, formerly with the German equivalent of the EPA and a co-founder of the International Council on Clean Transportation, stated "It's not just fraud – it's physical assault." Environmental consequences NO2 also contributes to acid rain and to visibly brown clouds or smog, due to both the visible nature of NO2 and the ground-level ozone created by NOx. NO and NO2 are not greenhouse gases, whereas N2O is. NO2 is a precursor to ground-level ozone. Legal and financial repercussions Government actions Australia In October 2015, the Australian Competition and Consumer Commission announced that it would not be investigating Volkswagen for possible violations of emissions standards, reasoning that a reasonable consumer would not be concerned about the tailpipe emissions of their vehicle, which would hence not be a deciding factor in a purchase. In March 2017, the Sydney Morning Herald reported that Audi and Volkswagen had issued a voluntary recall for affected cars, with software updates and in some cases hardware updates having begun in December 2016. 
Several class action suits against Volkswagen, Audi and Skoda were subsequently dropped. In December 2019 Volkswagen AG was fined A$125 million for making false and misleading representations about compliance with Australian diesel emissions standards. Belgium In October 2015, the Belgian Chamber of Representatives set up a special Dieselgate committee. It finalized a consensus report in March 2016 for the government to implement its recommendations, with near-unanimous approval on 28 April 2016. In January 2016, the public broadcaster VRT reported that Opel Zafira cars had lower emissions after an update than before receiving it. Opel denied deploying software updates that influenced emissions, and the Economic Inspection of the Federal Government started an investigation at the request of Minister of Consumer Protection Kris Peeters. Brazil As of October 2015, Volkswagen Brazil confirmed that 17,057 units of its Amarok mid-size pickup produced between 2011 and 2012 and sold in Brazil were equipped with the emissions-cheating software. The Brazilian Institute for the Environment and Renewable Natural Resources (Ibama) launched an investigation, warning that Volkswagen could face fines of up to . In September 2017, Volkswagen Brazil was ordered to pay to the 17,000 owners of the Amarok pickups equipped with defeat devices, as decided by the 1st Business Court of the Court of Justice of Rio de Janeiro. The automaker could still appeal the decision. The total amount reaches ( at the September 2017 exchange rate), and each consumer will receive () for material damages and another () for moral damages. In addition, the magistrate ordered the automaker to pay an additional into the National Consumer Protection Fund. According to the judge, the purpose was "to compensate the Brazilian society as a collective moral damage of a pedagogical and punitive nature because of the collective fraud caused in the domestic motor vehicle market". 
Canada In September 2015, Environment Canada announced that it had begun an investigation to determine whether "defeat devices" had been installed in Volkswagen vehicles to bypass emission control tests in Canada. On 15 December 2016 an agreement was reached which allowed buybacks or trade-ins based on market value as of 18 September 2015, or the fitting of an approved emissions modification. All three options also added a cash payment of between and . Ontario provincial authorities executed a search warrant at Volkswagen Canada offices in the Toronto area on 19 September 2017 as part of their investigation into the emissions scandal that had rocked the company two years earlier. The Ministry of the Environment and Climate Change charged Volkswagen AG with one count under the province's Environmental Protection Act, alleging that the German company did not comply with Ontario emission standards. The allegations have not been proven in court. In July 2018, Volkswagen Group Canada announced plans for its new Electrify Canada subsidiary to launch a network of public fast-charging stations in major cities and along major highways, starting with 32 charging sites in the four most-populated provinces: Ontario, Quebec, British Columbia and Alberta. On 9 December 2019, Volkswagen AG was charged with 60 counts of contravening the Canadian Environmental Protection Act, 1999. On 22 January 2020, Volkswagen pleaded guilty to all charges and was fined . China In October 2015, China's General Administration of Quality Supervision, Inspection and Quarantine announced the recall of 1,946 imported Tiguan SUVs and four imported Passat B6 sedans in order to fix the emissions software problems. European Union In September 2015, government regulatory agencies and investigators initiated proceedings in France, Italy, Germany, Switzerland, Spain, the Netherlands, the Czech Republic and Romania. Several countries called for a Europe-wide investigation. 
In October 2015 Werner Hoyer, President of the European Investment Bank (EIB), said the bank was considering recalling Volkswagen loans, and announced its own investigation into the matter. On 27 October 2015, the European Parliament passed a resolution urging the bloc to establish a federal authority to oversee car emissions, following reports in the press that top EU environmental officials had warned since early 2013 that manufacturers were tweaking vehicles to perform better in the lab than on the road. The resolution urged that tougher emissions tests be fully implemented in 2017, instead of being phased in between 2017 and 2019 as originally planned. However, the European Commission proceeded with passing legislation that allowed the car industry an extra year before having to comply with the newer regulation. It was also revealed that the new "realistic" EU driving emissions test would continue to allow cars to emit more than twice the legal limit of nitrogen oxides (NOx) from 2019, and up to 50 percent more from 2021. The legislation, opposed only by the Netherlands, was considered a great victory for the car industry and drew stern criticism from other MEPs. Dutch MEP Bas Eickhout referred to the new test as "a sham", while the liberal democrat MEP Catherine Bearder described the legislation as "a disgraceful stitch-up by national governments, who are once again putting the interests of carmakers ahead of public health". In December 2015, the EU Parliament voted to establish a special committee to investigate whether regulators and executive officials, including the European Commission, had failed to oversee the car industry and its pollution-testing regimes. In June 2016, documents leaked to the press indicated that in 2010, European Commission officials had been warned by their in-house science team that at least one car manufacturer was possibly using a NOx-related defeat device in order to bypass emission regulation. 
Kathleen Van Brempt, the chair of the EU inquiry into the scandal, found the documents "shocking" and suggested that they raised serious concerns with regard to the future of commission officials: "These documents show that there has been an astonishing collective blindness to the defeat device issue in the European commission, as well as in other EU institutions". In September 2020, European Union law changed to give the European Commission the right to check cars' conformity with emission standards and to order recalls when needed; fines can be up to per car. France Renault's and Peugeot's headquarters were raided by fraud investigators in January and April 2016, respectively. As of January 2016, Renault had recalled 15,000 cars for emission testing and fixing. French authorities opened an inquiry into Volkswagen in March 2016 over the rigging of emission tests, with prosecutors investigating suspicions of "aggravated deception". Germany In September 2015 former Volkswagen chief executive Martin Winterkorn resigned over the scandal, saying he had no knowledge of the manipulation of emissions results. One week later German prosecutors launched an investigation against him. On 1 October a German prosecutor clarified that it was looking into allegations of fraud from unidentified individuals, but that Winterkorn was not under formal investigation. On 8 October 2015 police raided Volkswagen headquarters. As of 16 October 2015, twenty investigators were working on the case, targeting "more than two, but a lot fewer than 10" Volkswagen staff. As of November 2015 the Kraftfahrt-Bundesamt (KBA) had tested 50 cars from different manufacturers, both in the laboratory and on the road with PEMS. In May 2016, German transport minister Alexander Dobrindt said that Volkswagen, Audi, Mercedes-Benz, Opel and Porsche would all adjust settings that increased emission levels, such as of nitrogen dioxide, in some diesel cars. 
On 16 March 2017, German authorities raided the headquarters of Audi in Bavaria and Volkswagen in Wolfsburg. On 15 April 2019 Winterkorn and four other executives were charged by prosecutors in Braunschweig, Germany. In August 2019, a district court ruled that the updated software did not properly address the emissions, citing a tested Tiguan turbodiesel engine that only reduced emissions in the ambient temperature range of . Audi's then-CEO Rupert Stadler was taken into German custody in June 2018 and released in October 2018, when he was also removed as CEO. In July 2019, Stadler was charged with fraud in Munich over the scandal. Hong Kong The Hong Kong Environmental Protection Department banned the Volkswagen Caddy on 16 October 2015. As of 16 October 2015 the department had also tested the Amarok and Transporter commercial diesel vehicles but found them to be free of the defeat device. India The Indian government directed the Automotive Research Association of India (ARAI) to investigate whether Volkswagen's vehicles had circumvented Indian laws and regulations on vehicle emission testing. On 22 September 2015 the Indian Foundation of Transport, Research and Training (IFTRT) demanded a probe into Volkswagen's Confirmation of Production process for vehicles sold in India. The Government of India later extended its deadline for the test results to the end of October 2015. On 11 January 2017, ARAI's investigation into defeat devices was published and revealed that Volkswagen India had installed a derivation of the software used in the US to defeat emission testing procedures in the Volkswagen group's entire product range in India with the EA 189 engine series. This included 1.2-L, 1.5-L, 1.6-L and 2.0-L diesel engine variants across three different brands: Audi, Skoda and Volkswagen. The report called the defeat device "not a product failure but a clear case of cheating". 
Italy On 6 October 2015 Italy's competition regulator announced plans to investigate whether Volkswagen had engaged in "improper commercial practices" when promoting its affected diesel vehicles. On 15 October 2015, Italian police raided Volkswagen offices in Verona and Volkswagen's Lamborghini offices in Bologna, placing six executives under investigation. Netherlands In December 2016 the Dutch consumer authority ACM decided to investigate whether Dutch laws had been broken and consumers misled, with a report due by June 2017. 5,000 Dutch Volkswagen owners signed up for a class action lawsuit. The Netherlands had spent billions of euros in recent years on subsidies for energy-efficient cars. Jesse Klaver of the political party GroenLinks responded that the Netherlands must claim back money from the car manufacturers if it emerged that they had committed fraud in the Netherlands. Norway Norway's prosecutors opened a criminal investigation into possible economic crimes committed by VW. In May 2016, Norway's sovereign wealth fund, the world's largest ($850 bn) and also one of the company's biggest investors, announced legal action against Volkswagen, to be filed in Germany as part of a class-action lawsuit being prepared there. Romania On 1 October 2015 the Romanian Automotive Register (RAR) stopped issuing registration documents for Volkswagen vehicles equipped with Euro 5 diesel engines. South Africa On 28 September 2015, the departments of Environmental Affairs and Transport and the National Regulator for Compulsory Specifications said they still needed to determine whether local cars had been affected by the rigging of US vehicle emissions tests. South Korea As of 19 January 2016 South Korea, the world's eighth-largest diesel-car market, planned a criminal case against Volkswagen executives. On 22 September 2015 South Korean authorities announced pollution control investigations into cars manufactured by Volkswagen and other European car manufacturers. 
Park Pan-kyu, a deputy director at South Korea's environment ministry, said: "If South Korean authorities find problems in the Volkswagen diesel cars, the probe could be expanded to all German diesel cars". In November 2015, after defeat devices had been found in some Volkswagen models, the Environment Minister issued a fine of and ordered the cars to be recalled. As of 20 January 2016, the country's environmental agency had filed criminal charges against VW, seeking up to $48 billion in penalties. Johannes Thammer, managing director of Audi Volkswagen Korea, was placed under investigation and faced up to five years in prison and a fine of up to . Volkswagen's recall plan for South Korea, submitted on 6 January 2016, was rejected by the authorities, as it failed to meet a number of key legal requirements. Authorities were also reported to have rejected a revised plan on 23 March 2016 for the same reasons. In May 2016, following a wider investigation of 20 diesel-powered cars, South Korean authorities accused Nissan of using a defeat device to manipulate emissions data for the British-built Nissan Qashqai, allegations which the Japanese carmaker denied. In August 2019, the government announced a ban on eight VW Group diesel models for cheating emissions regulations. Spain As of 28 October 2015, a Spanish court had opened a criminal probe against Volkswagen AG to establish whether the company's actions broke any local laws. Sweden As of 29 September 2015, Sweden's chief prosecutor was considering starting a preliminary investigation into Volkswagen's emissions violations. Switzerland On 26 September 2015 Switzerland banned sales of Volkswagen diesel cars, marking the most severe step taken by a government up to that point in reaction to the emissions crisis. United Kingdom The Department for Transport announced on 24 September 2015 that it would begin re-testing cars from a variety of manufacturers to ensure the use of "defeat devices" was not industry-wide. 
The UK Parliamentary Transport Select Committee opened an inquiry into the Volkswagen emissions violations, with evidence sessions on 12 October 2015 and 25 January 2016. The Select Committee published a letter of 21 December 2015 from Paul Willis, managing director of Volkswagen Group UK Ltd, stating: "In very simple terms, the software did amend the NOx characteristics in testing. The vehicles did meet EU5 standards, so it clearly contributed to meeting the EU5 standards in testing". A government-commissioned report on "real world" tests, published in April 2016, showed emissions from 37 diesel engines up to 14 times higher than had been claimed, with every vehicle exceeding the legal limit of nitrogen oxide emissions. Only Volkswagen group vehicles were found to have test-cycle detection software. In January 2017, an action group announced that it had 25,000 vehicle owners seeking compensation of £3,000–4,000 per vehicle. United States VW suspended sales of TDI-equipped cars in the US on 20 September 2015. On 21 September 2015 the EPA announced that, should the allegations be proven, Volkswagen Group could face fines of up to per vehicle (about in total). In addition to possible civil fines, the United States Department of Justice Environment and Natural Resources Division was conducting a criminal probe of Volkswagen AG's conduct. On 22 September 2015 the United States House Energy Subcommittee on Oversight and Investigations announced that it would hold a hearing into the Volkswagen scandal, while New York Attorney General Eric Schneiderman said that his investigation was already underway. As of 29 October 2015, over 25 other states' attorneys general, and the Federal Bureau of Investigation in Detroit, were involved in similar investigations. On 12 November 2015, the FBI confirmed to the engineering magazine Ingeniøren that it had an ongoing investigation, after previous unconfirmed reports. 
As of 6 October 2015, the EPA decided to broaden its investigations to cover 28 diesel-powered models made by BMW, Chrysler, General Motors, Land Rover and Mercedes-Benz. The agency would initially focus on one used vehicle of each model, and widen the probe if it encountered suspicious data. The EPA has described the hidden Volkswagen pollution as "knowing endangerment". In May 2016, Daimler AG, the owner of Mercedes-Benz, confirmed that the US Justice Department had asked it to run an internal investigation into its diesel emissions testing as well. On 4 January 2016, the Justice Department, on behalf of the EPA, brought suit against Volkswagen in the United States District Court for the Eastern District of Michigan in Detroit. The complaint, seeking up to $46 billion in penalties for Clean Air Act violations, alleged that Volkswagen equipped certain 2.0 and 3.0-litre diesel-engine vehicles with emissions cheating software, causing pollution to exceed EPA's standards during normal driving conditions. It further claimed that Volkswagen entities provided misleading information, and that material omissions impeded and obstructed "efforts to learn the truth about the (excess) emissions", while "so far recall discussions with the company have not produced an acceptable way forward". On 9 January 2016, US officials criticized Volkswagen for citing German law in order to withhold documents from a group of states investigating the company's actions. Schneiderman also complained over Volkswagen's slowness in producing documents from its US files, claiming the company "has sought to delay responses until it completes its 'independent investigation' several months from now". On 12 January 2016, US regulators rejected Volkswagen's recall plans for its affected 2.0-litre diesel engines, submitted to CARB in December 2015, claiming that these "do not adequately address overall impacts on vehicle performance, emissions and safety".
Volkswagen confirmed that its discussions with CARB would continue, and said that the company was working on bringing "a package together which satisfies our customers first and foremost and then also the regulators". The states of Arizona, West Virginia, New Mexico, and Texas, as well as Harris County, Texas, all filed separate lawsuits seeking restitution from VW. The company also faced investigations by 48 US state attorneys general. On 29 March 2016, Volkswagen was additionally sued by the United States Federal Trade Commission for false advertising, due to fraudulent claims made by the company in its promotion of the affected models, which touted the "environmental and economic advantages" of diesel engines and contained claims of low emissions output. The suit was consolidated into existing litigation over the matter in San Francisco, which would allow the FTC to participate in global settlements over the matter. The Ninth U.S. Circuit Court of Appeals ruled on 1 June 2020 that Volkswagen was liable for further damages lawsuits brought by state and local governments in the emissions fraud. The unanimous ruling by the court paved the way for two counties in Florida and Utah to proceed with litigation against Volkswagen, as well as potential further cases brought by jurisdictions in the US. By June 2020, VW had already expended $33.3 billion in settlements and other costs, including buybacks of the excessively polluting diesel vehicles. In a statement, VW said it would ask the circuit court to review the ruling, and that the company would if necessary take the case to the U.S. Supreme Court.
Charges against Volkswagen engineering/management

On 9 September 2016, James Robert Liang, a Volkswagen engineer working at Volkswagen's testing facility in Oxnard, California, admitted as part of a plea deal with the US Department of Justice that the defeat device had been purposely installed in US vehicles with the knowledge of his engineering team: "Liang admitted that beginning in about 2006, he and his co-conspirators started to design a new 'EA 189' diesel engine for sale in the United States. ... When he and his co-conspirators realized that they could not design a diesel engine that would meet the stricter US emissions standards, they designed and implemented [the defeat device] software". On 7 January 2017, Oliver Schmidt, the former top emissions compliance manager for Volkswagen in the US, was arrested by the FBI on a charge of conspiracy to defraud the United States. On 11 January 2017, Volkswagen pleaded guilty to weaving a vast conspiracy to defraud the US government and obstructing a federal investigation, and agreed to pay a US$2.8 billion criminal fine and US$1.5 billion in civil penalties. In addition, six executives were criminally charged. On 3 May 2018, former Volkswagen CEO Martin Winterkorn was indicted on fraud and conspiracy charges in the emissions scandal case. He has repeatedly denied any knowledge of the rigged emissions tests.

Settlement

On 25 October 2016, a final settlement was approved by a judge. About 475,000 Volkswagen owners in the US were given the choice between a buyback or a free fix and compensation, if a repair becomes available. Volkswagen would begin administering the settlement immediately, having already devoted several hundred employees to handling the process. Buybacks range in value from $12,475 to $44,176, including restitution payments, and vary based on mileage.
People who opt for a fix approved by the Environmental Protection Agency will receive payouts ranging from $5,100 to $9,852, depending on the book value of their car. Of the buybacks, 138,000 had been completed by 18 February 2017, with 150,000 more to be returned; 52,000 owners chose to keep their cars. 67,000 diesel cars from model year 2015 were cleared for repairs, but this left uncertainty about the future of 325,000 "Generation One" diesel VWs from the 2009–2014 model years, which use the "lean trap" and would be harder to repair. In March 2018, Reuters reported that 294,000 cars from the buyback program had been stored at 37 regional US staging sites; some of the first reported sites included Colorado Springs, Colorado; Pontiac, Michigan; Baltimore, Maryland; San Bernardino, California; and Gary, Indiana. Volkswagen will also pay $2.7 billion for environmental mitigation and another $2 billion for clean-emissions infrastructure. Toward that end, Volkswagen formed a U.S. subsidiary called Electrify America, LLC, based in Reston, Virginia, that will manage the $2 billion brand-neutral zero-emission vehicle infrastructure programs and marketing campaigns for the next ten years. The group will get four installments of $500 million, at -year intervals, subject to California Air Resources Board and U.S. EPA approval. Volkswagen plans to install hundreds of chargers with 50, 150 and even some ultra-fast 320 kW charge rates, beginning in California in 2017. Competing charge networks (and automakers) saw the effort as controversial. In August 2018, Electrify America launched the first national media advertising campaign to promote electric vehicles; it featured the Chevy Bolt, with other EVs in cameo roles.

Securities and Exchange Commission lawsuit

On 14 March 2019, the U.S.
Securities and Exchange Commission filed a complaint against Volkswagen and its former CEO Martin Winterkorn, alleging that they defrauded investors by selling corporate bonds and asset-backed securities while knowingly making false and misleading statements to government regulators, underwriters, and consumers as to the quality of their automobiles.

Private actions

By 27 September 2015, at least 34 class-action lawsuits had been filed in the United States and Canada on behalf of Volkswagen and Audi owners, accusing Volkswagen of breach of contract, fraudulent concealment, false advertising, and violations of federal and state laws, and positing the "diminished value" of diesels that will be fixed to conform with pollution regulations, due to possible reductions in horsepower and fuel efficiency. According to Reuters, one reason class action lawyers were able to mobilize so fast is that the company's marketing to upscale professionals, including jurists, had backfired. At least one investor lawsuit seeking class action status for holders of Volkswagen American Depositary Receipts had been filed in the United States, seeking compensation for the drop in stock value due to the emissions scandal. On 7 October 2015, the Los Angeles Times reported that the number of class-action lawsuits filed had grown to more than 230. On 19 November 2015, ABC News Australia reported that more than 90,000 VW, Audi and Skoda diesel vehicle owners had filed a class action lawsuit against Volkswagen in the country's Federal Court. On 8 December 2015, the United States Judicial Panel on Multidistrict Litigation issued an order consolidating over 500 class actions against Volkswagen into a single multidistrict litigation, captioned In re: Volkswagen 'Clean Diesel' Marketing, Sales Practices, and Products Liability Litigation, MDL No. 2672, and transferred the entire MDL to Judge Charles R. Breyer of the federal district court for the Northern District of California.
On 21 January 2016, Judge Breyer held a hearing on the requests by over 150 plaintiffs' attorneys for some kind of leadership role in the gigantic Volkswagen MDL, of which over 50 sought to serve as lead counsel or to chair the plaintiffs' steering committee. More than 100 of those attorneys tried to squeeze into his San Francisco courtroom to argue their requests in person, and some of them had to stand in the aisles or in the outside hallway. That afternoon, Judge Breyer issued an order naming 22 attorneys to a plaintiffs' steering committee, and of those, selected Elizabeth Cabraser of Lieff Cabraser as chair of the committee. On the other side, Volkswagen hired Robert Giuffra of Sullivan & Cromwell as its lead defense counsel in the MDL. On 14 March 2016, Volkswagen AG was sued in Germany for allegedly failing to inform financial markets in a timely manner about defeat devices used in diesel engines. The suit, on behalf of 278 institutional investors, seeks ( at March 2016 exchange rate) in compensation. BlackRock Inc., the world's largest asset manager, joined other institutional investors in the lawsuit in September 2016. In November 2015, Moody's Investors Service downgraded Volkswagen's bond credit rating from A2 to A3, and Fitch Ratings downgraded Volkswagen's Long-term Issuer Default Rating by two notches to BBB+, with a negative outlook. In May 2016, The Children's Investment Fund Management, run by Chris Hohn and retaining a 2 percent stake in Volkswagen preference stock, launched a campaign aiming to overhaul the company's executive pay system, arguing that "for years management has been richly rewarded with massive compensation despite presiding over a productivity and profit collapse", thereby leading to "aggressive management behavior" and contributing to the diesel emission scandal.
Later the same month, German investor group DSW called for an independent audit of Volkswagen's emissions-cheating practices, arguing that the company's internal investigation might not necessarily make everything transparent to smaller shareholders. On 28 June 2016, Volkswagen agreed to pay $15.3 billion to settle the various public and private civil actions in the United States, the largest settlement ever of an automobile-related consumer class action in United States history. On 25 October 2016, a U.S. federal judge approved the settlement. Up to $10 billion will be paid to 475,000 Volkswagen or Audi owners whose cars are equipped with 2.0-litre diesel engines. Owners can also opt to have their car repaired free of charge, or can sell it back to the company, which will pay back its estimated value from before the scandal began. Leases can also be terminated without incurring penalty charges. Independent of which options are selected, owners will still receive compensation ranging from $5,000 to $10,000 per affected car. Additionally, should they choose to decline the offer, they are free to pursue independent legal action against the firm. The settlement also includes $2.7 billion for environmental mitigation, $2 billion to promote zero-emissions vehicles and $603 million for claims by 44 states, Washington, D.C., and Puerto Rico. Volkswagen agreed not to resell or export any vehicles it repurchases unless an approved emission repair has been completed. At the time, no practical engineering solutions that would bring the vehicles into compliance with emission standards had been publicly identified. The consumer settlement will resolve all claims by participating consumers against Volkswagen and all its associates, except for any potential claims against Robert Bosch GmbH, which supplied two exhaust treatment components and engine control software.
In the case of 3.0-litre V6 TDI engines, Volkswagen suggested it could provide an uncomplicated fix that would bring the vehicles into compliance without adversely affecting performance, a move that the company hoped would avoid an expensive buyback of these cars.

European Investment Bank's possible involvement

In January 2016, documents obtained by CEE Bankwatch Network provided more details for a European Investment Bank statement that its loans to Volkswagen may have been connected to the carmaker's use of cheating devices to rig emission tests. The 'Antrieb RDI' loan was supposedly for creating cleaner drive trains. However, during the bank's annual press conference on 14 January 2016, the bank's president, Werner Hoyer, admitted that the loan might have been used in the creation of an emissions defeat device. Many redacted documents obtained by Bankwatch, along with the EIB's refusal to disclose the details of the loan, suggest that the bank may already have known of discrepancies with the 'Antrieb RDI' loan. In 2017, the European Anti-Fraud Office (OLAF) found that Volkswagen had misled the bank about the car company's use of emissions cheating software, in a scandal that has become known as Dieselgate. Also in 2017, Hoyer said the bank did not find "any indication" that its loans had been misused. However, six months later the news website Politico reported that OLAF had concluded that Volkswagen acquired the EIB loan through "fraud" and "deception".

Models affected

By 22 September 2015, Volkswagen had admitted that 11 million vehicles sold worldwide were affected, in addition to the 480,000 vehicles with 2.0 L TDI engines sold in the US. According to Volkswagen, vehicles sold in other countries with the 1.6 L and 2.0 L 4-cylinder TDI engine known as Type EA189 are also affected. The software is also said to affect the EA188 and the 2015 EA288 generation of the four-cylinder.
Worldwide, around 1.2 million Skodas and 2.1 million Audis may contain the software, including TTs and Qs. VW states that Euro 6 models in Germany are not affected, while 2015 US models with the same EA288 engines are affected. This suggests that normal-operation measurements that place the EA288 emissions between the two standards' limits were readily available at Volkswagen headquarters in Germany. According to Müller, the 1.2 and 2.0-litre models may be updated by software, whereas the roughly 3 million 1.6-litre models require various hardware solutions, and some cars may even be replaced. The cars are so diverse that many different solutions are required. Over one quarter of Volkswagen's sales in the US are diesel-powered vehicles. The corporation has chosen a market strategy that emphasizes clean diesel over electric cars or hybrid electric vehicles. The vehicles affected by the recall in the US include the following model years:

2009–2015 Audi A3 2.0 L TDI
2009–2015 Volkswagen Beetle 2.0 L TDI
2009–2015 Volkswagen Beetle Convertible 2.0 L TDI
2009–2015 Volkswagen Golf 2.0 L TDI
2015 Volkswagen Golf Sportwagen 2.0 L TDI
2009–2015 Volkswagen Jetta 2.0 L TDI
2009–2014 Volkswagen Jetta Sportwagen 2.0 L TDI
2012–2015 Volkswagen Passat 2.0 L TDI

The EPA revealed on 2 November 2015 that Volkswagen had shipped additional diesel models with defeat devices, including the 2014 VW Touareg and the 2015 Porsche Cayenne. Several model year 2016 Audi Quattro diesels were also found to be affected, including the 2016 Audi A6, A7, A8, A8L, and Q5. Cynthia Giles, the EPA Assistant Administrator for the Office of Enforcement and Compliance Assurance, called out the company for further refusing to take responsibility for its failure to comply with the law. Under the US federal Clean Air Act, Volkswagen could be liable for up to $375 million in fines.
Resale value

As compiled by Black Book and Kelley Blue Book from used car auction prices, the resale value of affected models in the US was down by 5 to nearly 16 percent, depending on model, and auction volume was also down. On 15 March 2016, Volkswagen Financial Services took a writedown of to cover a potential decline in the residual value of its fleet of leased cars.

Effects on Volkswagen corporate

Stock value

On 21 September 2015, the first day of trading after the EPA's Notice of Violation to Volkswagen became public, share prices of Volkswagen AG fell 20 percent on the Frankfurt Stock Exchange. On 22 September, the stock fell another 12 percent. On 23 September, the stock quickly fell 10.5 percent, dropping to a record 4-year low before regaining some lost ground. Share prices of other German automakers were also affected, with BMW down 4.9 percent and Daimler down 5.8 percent. A year later, Volkswagen stock was down by 30 percent. Qatar, one of the biggest Volkswagen shareholders with a 17 percent stake in the company, lost nearly $5 billion as the company's stock value fell.

Sales

US sales of Volkswagens totalled 23,882 vehicles in November 2015, a 24.7 percent decline from November 2014. In South Korea, sales in November rose 66 percent to 4,517 units from a year earlier, due to Volkswagen's aggressive marketing efforts, such as a discount of up to ( at December 2015 exchange rates) for some models. In Great Britain, the scandal did not affect sales, which increased in 2016 to an all-time high, placing Volkswagen second in the league of best-selling cars. VW sales across Europe returned to growth in April 2016 for the first time since the scandal broke, with a group market share of 25.2 percent, compared to its previous level of 26.1 percent.
Transgressions by other manufacturers

The Volkswagen scandal more generally raised awareness of the high levels of pollution being emitted by diesel vehicles built by a wide range of carmakers, including Volvo, Renault, Mercedes, Jeep, Hyundai, Citroen, BMW, Mazda, Fiat, Ford and Peugeot. Independent tests carried out by ADAC showed that, under normal driving conditions, diesel vehicles including the Volvo S60, Renault's Espace Energy and the Jeep Renegade exceeded legal European emission limits for nitrogen oxide () by more than 10 times. Researchers have criticized the inadequacy of current regulations and called for the use of a UN-sanctioned test, the Worldwide harmonized Light vehicles Test Procedure, that better reflects real-life driving conditions, as well as on-road emissions testing via PEMS. The two types of new test started to come into force in 2017, with critics saying that car firms had lobbied fiercely to delay their implementation, due to the high cost of meeting stricter environmental controls. The Volkswagen scandal has increased scrutiny of combustion engines in general, and Volkswagen and several other car makers have been shown to pollute more than allowed. A French government report in 2016 investigated 86 different cars; only about one fifth of those were found to comply with emission laws, and one car was measured to emit 17 times more than allowed. An overview of tests showed that cars turned off the exhaust improvement device in many ordinary conditions, with 5 out of 38 cars complying with regulations in an English test. A German test showed 10 out of 53 cars compliant when exposed to temperatures below 10 degrees Celsius. A French test showed 4 out of 52 cars compliant when tested outside (not in a laboratory). A 2016 test showed Volkswagen diesel cars emitting at about twice the Euro 6 limit, and several other manufacturers emitting more, up to 14 times higher. 38 out of 40 tested diesel cars have failed a -test since 2016.
Industry consequences

Renault believes that diesel cars will become significantly more expensive when re-engineered to comply with the new emissions regulations resulting from the Volkswagen disclosures, to the point that diesel cars may no longer be competitive. Industry-wide, small diesel engines are being replaced by bigger ones, and electric car sales have risen. Suzuki, which had won its case to terminate its partnership with Volkswagen at the International Court of Arbitration of the International Chamber of Commerce and had dissolved the capital tie-up by September 2015, was not involved in the scandal. On 16 June 2016, Volkswagen announced plans to make major investments in the production of electric vehicles; Matthias Müller predicted that Volkswagen would introduce 30 all-electric models over the next 10 years, and that electric vehicles would account for around a quarter of its annual sales by 2025. Volkswagen plans to fund the initiative by streamlining its operations and engaging in cost-cutting. Müller stated that the changes would "require us – following the serious setback as a result of the diesel issue – to learn from mistakes made, rectify shortcomings and establish a corporate culture that is open, value-driven and rooted in integrity". Volkswagen plans a battery factory near Salzgitter to compensate for the reduced numbers of piston engines. In November 2016, Volkswagen and its labour unions agreed to reduce the workforce by 30,000 people by 2021 as a result of the costs of the violations; however, 9,000 new jobs would come from producing more electric cars. Volkswagen CEO Herbert Diess told the German financial publication Handelsblatt that the company planned to stop marketing diesel models in the U.S., citing "the legal framework".

Secondary market consequences

A study by researchers at Tel Aviv University explored the effect of the scandal on the secondary market in Israel.
According to this study, the Volkswagen emissions scandal had a statistically significant negative effect on the number of transactions in the secondary market involving the affected models (nearly -18 percent) and on their resale price (nearly -6 percent). The study also found that the reduction in the number of transactions was driven mostly by private sellers, and that non-private sellers barely shied away from the market. These findings suggest that the supply of used cars among private sellers is much more elastic than the supply of used cars among non-private sellers.

Monkeygate

In January 2018, it was revealed that Volkswagen had experimented on monkeys in May 2015 in an attempt to prove that diesel exhaust was not harmful to primates. The disclosure of the tests was named Monkeygate. However, the test car was a Volkswagen Beetle fitted with the defeat device, which produced far less emissions in the experiment than it would on the highway. Volkswagen's top lobbyist, Thomas Steg, was suspended on 23 January 2018.

Reactions

Political figures

German Chancellor Angela Merkel stated she hoped that all facts in the matter would be made known promptly, urging "complete transparency". She additionally noted that Germany's Transport Minister, Alexander Dobrindt, was in ongoing communication with Volkswagen. Michel Sapin, the French Finance Minister, called for an investigation of diesel-powered cars that would encompass the entire continent of Europe. Catherine Bearder, MEP for South East England, commented on 27 October 2015 in the European Parliament that "we now have the political momentum for a radical overhaul that will ensure carmakers cannot dodge the rules", defending an EU resolution meant specifically to "cut deadly pollution from diesel vehicles".
However, when the European Commission proceeded with passing legislation that allowed the car industry more time to comply with the newer regulation, while also permitting cars, even under the more "realistic" tests, to emit more than twice the legal limit of nitrogen oxides (NOx) from 2019, and up to 50 percent more from 2021, Bearder denounced the legislation as "a disgraceful stitch-up by national governments, who are once again putting the interests of carmakers ahead of public health". London Assembly member Stephen Knight suggested on 1 November 2015 that diesel vehicles should either be banned in the future, or face stringent tests before being allowed to enter London's low-emissions zone. The city's deputy mayor for the environment, Matthew Pencharz, responded that such measures could lead to serious economic problems.

Automotive industry and other commentators

Major car manufacturers, including Toyota, GM, PSA Peugeot Citroen, Renault, Mazda, Daimler (Mercedes-Benz), and Honda, issued press statements reaffirming their vehicles' compliance with all regulations and legislation for the markets in which they operate. The Society of Motor Manufacturers and Traders described the issue as affecting "just one company", with no evidence to suggest that the whole industry might be affected. Renault-Nissan CEO Carlos Ghosn said it would be difficult for an automaker to internally conceal an effort to falsify vehicle emissions data, such as happened at Volkswagen AG: "I don't think you can do something like this hiding in the bushes." Jim Holder, the editorial director of Haymarket Automotive, which publishes What Car? and Autocar, opined that there had never been a scandal in the automotive industry of this size. A commentary in Spiegel Online argued that the Volkswagen scandal would affect the entire German industry, and that German companies operating abroad would face a decrease in competitiveness.
Alan Brown, chairman of the Volkswagen National Dealer Advisory Council, commented on the scandal's negative impact on US dealers, who were already struggling with overpriced products and a deteriorating relationship between the company and the dealer body. Car and Driver similarly emphasized Volkswagen's inability to operate efficiently in the US market, while also suggesting that the company had grossly underestimated the EPA's power, and inexplicably failed to go public before the story broke, despite receiving ample warning. Tesla Motors CEO Elon Musk was asked whether the scandal would weaken consumers' view of green technologies; he responded that he expected the opposite to happen: "What Volkswagen is really showing is that we've reached the limit of what's possible with diesel and petrol. The time has come to move to a new generation of technology." Similarly, analysts at Fitch suggested the Volkswagen diesel emissions crisis was likely to affect the entire automotive industry, with petrol cars potentially enjoying a revival in Europe and greater investment being poured into electric vehicles. Other commentators argued that the diesel engine would nevertheless regain its footing in the market, due to its international indispensability, low emissions and strong presence in the US pickup- and commercial-truck segments. On 29 September 2015, S&P Dow Jones Indices and RobecoSAM stated that Volkswagen AG's stock would be de-listed from the Dow Jones Sustainability indexes after the close of trading on 5 October 2015. Among the reasons for the de-listing, the statement issued by RobecoSAM cited social and ethical concerns, and confirmed that Volkswagen would no longer be identified as an Industry Group Leader in the "Automobiles & Components" industry group.
In early October, Green Car Journal rescinded the Green Car of the Year awards, given for models that "best raise the bar in environmental performance", from the 2009 Volkswagen Jetta TDI and 2010 Audi A3 TDI models. In December 2015, a group of business and environmental leaders, including Tesla CEO Elon Musk, addressed an open letter to CARB, urging the agency to release Volkswagen from the obligation to recall the 85,000 diesel vehicles affected by the scandal in the US, and arguing that Volkswagen should instead be asked to allocate resources to an accelerated rollout of zero-emissions vehicles ("cure the air, not the cars"). The letter, which includes a 5-step legally enforceable plan, argues that this course of action could result in a "10 for 1 or greater reduction in pollutant emissions as compared to the pollution associated with the diesel fleet cheating", while suggesting that the affected vehicles on the road in California "represent an insignificant portion of total vehicles emissions in the State" and "do not, individually, present any emissions-related risk to their owners or occupants". Similar requests were put forward by the American Lung Association, which petitioned the EPA to direct Volkswagen to promote zero-emissions vehicles, build sustainable transport infrastructure and retrofit older diesel models with superior emissions controls. Volkswagen was awarded a 2016 Ig Nobel Prize in chemistry by the scientific humor magazine Annals of Improbable Research for "solving the problem of excessive automobile pollution emissions by automatically, electromechanically producing fewer emissions whenever the cars are being tested".

Media

The Volkswagen TDI emissions scandal has received widespread negative media exposure, with headlines fronting the websites of multiple news gathering and reporting organizations. Reuters said that the crisis at Volkswagen could be a bigger threat to the German economy than the consequences of the 2015 Greek sovereign debt default.
Deutsche Welle, one of Germany's state broadcasters, said that a "lawsuit tsunami" was headed for Volkswagen, and that the scandal had dealt a blow to the country's psyche and the "Made in Germany" brand. Popular Mechanics said that the scandal "is much worse than a recall", highlighting that Volkswagen had engaged in a pattern of "cynical deceit". The Volkswagen emissions cheating scandal has joined the ranks of other -gate suffix stories, with media coining both Dieselgate and Emissionsgate to describe it.

Public polling

Despite the scandal, one poll conducted for Bild suggested that the majority of Germans (55 percent) still had "great faith" in Volkswagen, with over three-quarters believing that other carmakers were equally guilty of manipulation. Similarly, a poll conducted by the management consultancy Prophet in October 2015 indicated that two-thirds of Germans believed the scandal to be exaggerated and continued to regard Volkswagen as a builder of "excellent cars". A survey by Northwestern University's Kellogg School of Management, Brand Imperatives and Survata found that nearly 50 percent of US consumers had either a positive or very positive impression of Volkswagen, while 7.5 percent had a "very negative" impression. Another US survey, by market researcher AutoPacific, found that 64 percent of vehicle owners did not trust Volkswagen, and that only 25 percent of them had a positive view of Volkswagen following the scandal.

See also

Exhaust gas recirculation
FTP-75
NOx adsorber
Vehicle regulation

Notes

Further reading

External links

EPA Notice of Violation
EPA Notices of Violation FAQ
State of California EPA In-Use Compliance Letter
VW diesel official FAQ
Written Testimony of Michael Horn, CEO of Volkswagen Group of America, Before the U.S. House Committee on Energy and Commerce, 8 October 2015
Analysis of the emission scandal from a procedural, organizational and technical level: The exhaust emissions scandal ("Dieselgate"), talk at 2015 Chaos Communication Congress
U.S. v. Volkswagen AG, Complaint, Filed 4 January 2016
Infographic – simple overview
Binary prefix
A binary prefix is a unit prefix for multiples of units in data processing, data transmission, and digital information, principally in association with the bit and the byte, to indicate multiplication by a power of 2. As shown in the table to the right, there are two sets of symbols for binary prefixes: one set, established by the International Electrotechnical Commission (IEC) and several other standards and trade organizations, uses two-letter symbols, e.g. Mi indicating 1,048,576; a second set, established by semiconductor industry convention, uses one-letter symbols, e.g. M, also indicating 1,048,576. In most contexts, industry uses the multipliers kilo (k), mega (M), giga (G), etc., in a manner consistent with their meaning in the International System of Units (SI), namely as powers of 1000. For example, a 500-gigabyte hard disk holds 500,000,000,000 bytes, and a 1 Gbit/s (gigabit per second) Ethernet connection transfers data at a nominal speed of 1,000,000,000 bit/s. In contrast with the binary prefix usage, this use is described as a decimal prefix, as 1000 is a power of 10 (10^3). The computer industry has historically used the units kilobyte, megabyte, and gigabyte, and the corresponding symbols KB, MB, and GB, in a binary sense when citing main memory (RAM) capacity: gigabyte customarily means 1,073,741,824 bytes. As this is a power of 1024, and 1024 is a power of two (2^10), this usage is referred to as a binary measurement. The use of the same unit prefixes with two different meanings has caused confusion. Starting around 1998, the IEC and several other standards and trade organizations attempted to address the ambiguity by publishing standards and recommendations for a set of binary prefixes that refer exclusively to powers of 1024.
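The two readings of the same prefix can be illustrated with a short arithmetic sketch (Python is used here only for the calculation; the variable names are illustrative):

```python
# The same "giga" prefix read two ways.
decimal_gb = 10**9   # SI (decimal) meaning: 1 GB = 1,000,000,000 bytes
binary_gb = 2**30    # customary binary meaning: 1 "GB" = 1,073,741,824 bytes

# A drive sold as "500 GB" in the SI sense:
drive_bytes = 500 * decimal_gb

# The same capacity expressed with the binary meaning of "GB":
print(round(drive_bytes / binary_gb, 1))  # 465.7
```

The roughly 7% gap between 500 and 465.7 is exactly the ambiguity the IEC prefixes were introduced to resolve.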
Accordingly, the US National Institute of Standards and Technology (NIST) requires that SI prefixes be used only in the decimal sense: kilobyte and megabyte denote one thousand bytes and one million bytes respectively (consistent with SI), while new terms such as kibibyte, mebibyte, and gibibyte, having the symbols KiB, MiB, and GiB, denote 1024 bytes, 1,048,576 bytes, and 1,073,741,824 bytes, respectively. In 2008, the IEC prefixes were incorporated into the International System of Quantities alongside the decimal prefixes of the international standard system of units (see ISO/IEC 80000).

History

Main memory

Early computers used one of two addressing methods to access the system memory: binary (base 2) or decimal (base 10). For example, the IBM 701 (1952) used binary and could address 2048 words of 36 bits each, while the IBM 702 (1953) used decimal and could address ten thousand 7-bit words. By the mid-1960s, binary addressing had become the standard architecture in most computer designs, and main memory sizes were most commonly powers of two. This is the most natural configuration for memory, as all combinations of the address lines map to a valid address, allowing easy aggregation into a larger block of memory with contiguous addresses. Early computer system documentation would specify the memory size with an exact number such as 4096, 8192, or 16384 words of storage. These are all powers of two, and furthermore are small multiples of 2^10, or 1024. As storage capacities increased, several different methods were developed to abbreviate these quantities. The method most commonly used today uses prefixes such as kilo, mega, giga, and corresponding symbols K, M, and G, which the computer industry originally adopted from the metric system. The prefixes kilo- and mega-, meaning 1000 and 1,000,000 respectively, were commonly used in the electronics industry before World War II.
Along with giga- or G-, meaning 1,000,000,000, they are now known as SI prefixes after the International System of Units (SI), introduced in 1960 to formalize aspects of the metric system. The International System of Units does not define units for digital information but notes that the SI prefixes may be applied outside the contexts where base units or derived units would be used. But as computer main memory in a binary-addressed system is manufactured in sizes that were easily expressed as multiples of 1024, kilobyte, when applied to computer memory, came to be used to mean 1024 bytes instead of 1000. This usage is not consistent with the SI. Compliance with the SI requires that the prefixes take their 1000-based meaning, and that they not be used as placeholders for other numbers, such as 1024. The use of K in the binary sense, as in a "32K core" meaning 32 × 1024 words, i.e., 32,768 words, can be found as early as 1959. Gene Amdahl's seminal 1964 article on the IBM System/360 used "1K" to mean 1024. This style was used by other computer vendors: the CDC 7600 System Description (1968) made extensive use of K as 1024. Thus the first binary prefix was born. Another style was to truncate the last three digits and append K, essentially using K as a decimal prefix similar to SI, but always truncating to the next lower whole number instead of rounding to the nearest. The exact values 32,768 words, 65,536 words and 131,072 words would then be described as "32K", "65K" and "131K". (If these values had been rounded to nearest they would have become 33K, 66K, and 131K, respectively.) This style was used from about 1965 to 1975. These two styles (K = 1024 and truncation) were used loosely around the same time, sometimes by the same company. In discussions of binary-addressed memories, the exact size was evident from context. (For memory sizes of "41K" and below, there is no difference between the two styles.)
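The two naming styles described above can be contrasted in a small sketch (the helper names are invented for illustration):

```python
def k_binary(words):
    # "K = 1024" style: 65,536 words -> "64K"
    return f"{words // 1024}K"

def k_truncated(words):
    # truncation style: drop the last three decimal digits, rounding down
    return f"{words // 1000}K"

for n in (32768, 65536, 131072):
    print(k_binary(n), k_truncated(n))
# 32K 32K
# 64K 65K
# 128K 131K
```

As the output shows, the two conventions agree at 32K but drift apart as sizes grow.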
The HP 21MX real-time computer (1974) denoted 196,608 (which is 192×1024) as "196K" and 1,048,576 as "1M", while the HP 3000 business computer (1973) could have "64K", "96K", or "128K" bytes of memory. The "truncation" method gradually waned. Capitalization of the letter K became the de facto standard for binary notation, although this could not be extended to higher powers, and use of the lowercase k did persist. Nevertheless, the practice of using the SI-inspired "kilo" to indicate 1024 was later extended to "megabyte" meaning 1024^2 (1,048,576) bytes, and later "gigabyte" for 1024^3 (1,073,741,824) bytes. For example, a "512 megabyte" RAM module is 512×1024^2 bytes (512 × 1,048,576, or 536,870,912), rather than 512,000,000. The symbols Kbit, Kbyte, Mbit and Mbyte started to be used as "binary units"—"bit" or "byte" with a multiplier that is a power of 1024—in the early 1970s. For a time, memory capacities were often expressed in K, even when M could have been used: the IBM System/370 Model 158 brochure (1972) had the following: "Real storage capacity is available in 512K increments ranging from 512K to 2,048K bytes." Megabyte was used to describe the 22-bit addressing of the DEC PDP-11/70 (1975) and gigabyte the 30-bit addressing of the DEC VAX-11/780 (1977). In 1998, the International Electrotechnical Commission (IEC) introduced the binary prefixes kibi, mebi, gibi, ... to mean 1024, 1024^2, 1024^3, etc., so that 1,048,576 bytes could be referred to unambiguously as 1 mebibyte. The IEC prefixes were defined for use alongside the International System of Quantities (ISQ) in 2009.

Disk drives

The disk drive industry has followed a different pattern. Disk drive capacity is generally specified with unit prefixes with decimal meaning, in accordance with SI practice. Unlike computer main memory, disk architecture or construction does not mandate or make it convenient to use binary multiples. Drives can have any practical number of platters or surfaces, and the count of tracks, as well as the count of sectors per track, may vary greatly between designs.
The first commercially sold disk drive, the IBM 350, had fifty physical disk platters containing a total of 50,000 sectors of 100 characters each, for a total quoted capacity of 5 million characters. It was introduced in September 1956. In the 1960s most disk drives used IBM's variable block length format, called Count Key Data (CKD). Any block size could be specified up to the maximum track length. Since the block headers occupied space, the usable capacity of the drive was dependent on the block size. Blocks ("records" in IBM's terminology) of 88, 96, 880 and 960 were often used because they related to the fixed block sizes of 80- and 96-character punched cards. The drive capacity was usually stated under conditions of full track record blocking. For example, the 100-megabyte 3336 disk pack only achieved that capacity with a full track block size of 13,030 bytes.

Floppy disks for the IBM PC and compatibles quickly standardized on 512-byte sectors, so two sectors were easily referred to as "1K". The 3.5-inch "360 KB" and "720 KB" formats had 720 (single-sided) and 1440 sectors (double-sided) respectively. When the high-density "1.44 MB" floppies came along, with 2880 of these 512-byte sectors, that terminology represented a hybrid binary-decimal definition of "1 MB" = 2^10 × 10^3 = 1,024,000 bytes.

In contrast, hard disk drive manufacturers used megabytes or MB, meaning 10^6 bytes, to characterize their products as early as 1974. By 1977, in its first edition, Disk/Trend, a leading hard disk drive industry marketing consultancy, segmented the industry according to MBs (in the decimal sense) of capacity. One of the earliest hard disk drives in personal computing history, the Seagate ST-412, was specified as "Formatted: 10.0 Megabytes". The drive has four heads and active surfaces (tracks per cylinder) and 306 cylinders. When formatted with a sector size of 256 bytes and 32 sectors/track, it has a capacity of 10,027,008 bytes.
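The ST-412 geometry above can be checked directly (a quick sketch; the variable names are illustrative):

```python
heads = 4            # active surfaces (tracks per cylinder)
cylinders = 306
sectors_per_track = 32
bytes_per_sector = 256

capacity = heads * cylinders * sectors_per_track * bytes_per_sector
print(capacity)           # 10027008 -> marketed as "10.0 MB" (decimal)
print(capacity / 2**20)   # 9.5625   -> the customary binary-prefix display
```

The same byte count thus reads as either "10.0 MB" or "9.5625 MB", depending on which meaning of "MB" the software uses.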
This drive was one of several types installed in the IBM PC/XT and was extensively advertised and reported as a "10 MB" (formatted) hard disk drive. The cylinder count of 306 is not conveniently close to any power of 1024; operating systems and programs using the customary binary prefixes show this as 9.5625 MB. Many later drives in the personal computer market used 17 sectors per track; still later, zone bit recording was introduced, causing the number of sectors per track to vary from the outer track to the inner. The hard drive industry continues to use decimal prefixes for drive capacity, as well as for transfer rate. For example, a "300 GB" hard drive offers slightly more than 300×10^9, or 300,000,000,000, bytes, not 300×2^30 (which would be about 322×10^9). Operating systems such as Microsoft Windows that display hard drive sizes using the customary binary prefix "GB" (as it is used for RAM) would display this as "279.4 GB" (meaning 279.4×2^30 bytes). On the other hand, macOS has since version 10.6 shown hard drive size using decimal prefixes (thus matching the drive makers' packaging). (Previous versions of Mac OS X used binary prefixes.) Disk drive manufacturers sometimes use both IEC and SI prefixes with their standardized meanings. Seagate has specified data transfer rates in select manuals of some hard drives with both units, with the conversion between units clearly shown and the numeric values adjusted accordingly. "Advanced Format" drives use the term "4K sectors", defined as having a size of "4096 (4K) bytes".

Information transfer and clock rates

Computer clock frequencies are always quoted using SI prefixes in their decimal sense. For example, the internal clock frequency of the original IBM PC was 4.77 MHz, that is, about 4.77 million cycles per second.
Similarly, digital information transfer rates are quoted using decimal prefixes:

The ATA-100 disk interface refers to 100,000,000 bytes per second
A "56K" modem refers to 56,000 bits per second
SATA-2 has a raw bit rate of 3 Gbit/s = 3,000,000,000 bits per second
PC2-6400 RAM transfers 6,400,000,000 bytes per second
FireWire 800 has a raw rate of 800,000,000 bits per second

In 2011, Seagate specified the sustained transfer rate of some hard disk drive models with both decimal and IEC binary prefixes.

Standardization of dual definitions

By the mid-1970s it was common to see K meaning 1024 and the occasional M meaning 1,048,576 for words or bytes of main memory (RAM), while K and M were commonly used with their decimal meaning for disk storage. In the 1980s, as capacities of both types of devices increased, the SI prefix G, with its SI meaning, was commonly applied to disk storage, while M in its binary meaning became common for computer memory. In the 1990s, the prefix G, in its binary meaning, became commonly used for computer memory capacity. The first terabyte (SI prefix, 10^12 bytes) hard disk drive was introduced in 2007. The dual usage of the kilo (K), mega (M), and giga (G) prefixes as both powers of 1000 and powers of 1024 has been recorded in standards and dictionaries. For example, the 1986 ANSI/IEEE Std 1084-1986 defined dual uses for kilo and mega. The binary units Kbyte and Mbyte were formally defined in ANSI/IEEE Std 1212-1991. Many dictionaries have noted the practice of using customary prefixes to indicate binary multiples. The Oxford online dictionary defines, for example, megabyte as: "Computing: a unit of information equal to one million or (strictly) 1,048,576 bytes." The units Kbyte, Mbyte, and Gbyte are found in the trade press and in IEEE journals. Gigabyte was formally defined in IEEE Std 610.10-1994 as either 10^9 or 2^30 bytes. Kilobyte, Kbyte, and KB are equivalent units and all are defined in the obsolete standard IEEE 100–2000.
The hardware industry measures system memory (RAM) using the binary meaning while magnetic disk storage uses the SI definition. However, many exceptions exist. Labeling of one type of diskette uses the megabyte to denote 1024×1000 bytes. In the optical disc market, compact discs use MB to mean 1024^2 bytes while DVDs use GB to mean 1000^3 bytes.

Inconsistent use of units

Deviation between powers of 1024 and powers of 1000

Computer storage has become cheaper per unit, and thereby larger, by many orders of magnitude since "K" was first used to mean 1024. Because both the SI and "binary" meanings of kilo, mega, etc., are based on powers of 1000 or 1024 rather than simple multiples, the difference between 1M "binary" and 1M "decimal" is proportionally larger than that between 1K "binary" and 1k "decimal", and so on up the scale. The relative difference between the values in the binary and decimal interpretations increases, when using the SI prefixes as the base, from 2.4% for kilo to nearly 21% for the yotta prefix.

Consumer confusion

In the early days of computers (roughly, prior to the advent of personal computers) there was little or no consumer confusion because of the technical sophistication of the buyers and their familiarity with the products. In addition, it was common for computer manufacturers to specify their products with capacities in full precision. In the personal computing era, one source of consumer confusion is the difference in the way many operating systems display hard drive sizes, compared to the way hard drive manufacturers describe them. Hard drives are specified and sold using "GB" and "TB" in their decimal meaning: one billion and one trillion bytes. Many operating systems and other software, however, display hard drive and file sizes using "MB", "GB" or other SI-looking prefixes in their binary sense, just as they do for displays of RAM capacity. For example, many such systems display a hard drive marketed as "1 TB" as "931 GB".
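Both the growing deviation and the "1 TB" example can be reproduced numerically (a short sketch; the prefix names are the standard SI ones):

```python
# Relative deviation of the binary interpretation from the decimal one.
prefixes = ["kilo", "mega", "giga", "tera", "peta", "exa", "zetta", "yotta"]
for n, name in enumerate(prefixes, start=1):
    deviation = (2**(10*n) - 10**(3*n)) / 10**(3*n) * 100
    print(f"{name:5s} {deviation:5.1f}%")   # from 2.4% up to about 20.9%

# A drive marketed as "1 TB" (10**12 bytes), displayed with binary "GB":
print(round(10**12 / 2**30))  # 931
```

The loop makes the compounding visible: each step up the prefix ladder multiplies the ratio by another 1024/1000.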
The earliest known presentation of hard disk drive capacity by an operating system using "KB" or "MB" in a binary sense dates from 1984; earlier operating systems generally presented the hard disk drive capacity as an exact number of bytes, with no prefix of any sort, for example, in the output of the MS-DOS or PC DOS CHKDSK command.

Legal disputes

The different interpretations of disk size prefixes have led to class action lawsuits against digital storage manufacturers. These cases involved both flash memory and hard disk drives.

Early cases

Early cases (2004–2007) were settled prior to any court ruling, with the manufacturers admitting no wrongdoing but agreeing to clarify the storage capacity of their products on the consumer packaging. Accordingly, many flash memory and hard disk manufacturers have disclosures on their packaging and web sites clarifying the formatted capacity of the devices or defining MB as 1 million bytes and 1 GB as 1 billion bytes.

Willem Vroegh v. Eastman Kodak Company

On 20 February 2004, Willem Vroegh filed a lawsuit against Lexar Media, Dane–Elec Memory, Fuji Photo Film USA, Eastman Kodak Company, Kingston Technology Company, Inc., Memorex Products, Inc., PNY Technologies Inc., SanDisk Corporation, Verbatim Corporation, and Viking Interworks, alleging that their descriptions of the capacity of their flash memory cards were false and misleading. Vroegh claimed that a 256 MB flash memory device had only 244 MB of accessible memory. "Plaintiffs allege that Defendants marketed the memory capacity of their products by assuming that one megabyte equals one million bytes and one gigabyte equals one billion bytes." The plaintiffs wanted the defendants to use the customary values of 1024^2 for megabyte and 1024^3 for gigabyte. The plaintiffs acknowledged that the IEC and IEEE standards define a MB as one million bytes but stated that the industry has largely ignored the IEC standards.
The parties agreed that manufacturers could continue to use the decimal definition so long as the definition was added to the packaging and web sites. The consumers could apply for "a discount of ten percent off a future online purchase from Defendants' Online Stores Flash Memory Device".

Orin Safier v. Western Digital Corporation

On 7 July 2005, an action entitled Orin Safier v. Western Digital Corporation, et al., was filed in the Superior Court for the City and County of San Francisco, Case No. CGC-05-442812. The case was subsequently moved to the Northern District of California, Case No. 05-03353 BZ. Although Western Digital maintained that their usage of units is consistent with "the indisputably correct industry standard for measuring and describing storage capacity", and that they "cannot be expected to reform the software industry", they agreed to settle in March 2006, with 14 June 2006 as the Final Approval hearing date. Western Digital offered to compensate customers with a free download of backup and recovery software valued at US$30. They also paid $500,000 in fees and expenses to San Francisco lawyers Adam Gutride and Seth Safier, who filed the suit. The settlement called for Western Digital to add a disclaimer to their later packaging and advertising.

Cho v. Seagate Technology (US) Holdings, Inc.

A lawsuit (Cho v. Seagate Technology (US) Holdings, Inc., San Francisco Superior Court, Case No. CGC-06-453195) was filed against Seagate Technology, alleging that Seagate overrepresented the amount of usable storage by 7% on hard drives sold between 22 March 2001 and 26 September 2007. The case was settled without Seagate admitting wrongdoing, but agreeing to supply those purchasers with free backup software or a 5% refund on the cost of the drives.

Dinan et al. v. SanDisk LLC

On 22 January 2020, the district court of the Northern District of California ruled in favor of the defendant, SanDisk, upholding its use of "GB" to mean one billion bytes.
Unique binary prefixes

Early suggestions

While early computer scientists typically used k to mean 1000, some recognized the convenience that would result from working with multiples of 1024 and the confusion that resulted from using the same prefixes for two different meanings. Several proposals for unique binary prefixes were made in 1968. Donald Morrison proposed to use the Greek letter kappa (κ) to denote 1024, κ^2 to denote 1024^2, and so on. (At the time, memory size was small, and only K was in widespread use.) Wallace Givens responded with a proposal to use bK as an abbreviation for 1024 and bK2 for 1024^2, though he noted that neither the Greek letter nor the lowercase letter b would be easy to reproduce on computer printers of the day. Bruce Alan Martin of Brookhaven National Laboratory further proposed that the prefixes be abandoned altogether and the letter B be used for base-2 exponents, similar to E in decimal scientific notation, to create shorthands like 3B20 for 3×2^20, a convention still used on some calculators to present binary floating-point numbers today. None of these gained much acceptance, and capitalization of the letter K became the de facto standard for indicating a factor of 1024 instead of 1000, although this could not be extended to higher powers. As the discrepancy between the two systems increased in the higher-order powers, more proposals for unique prefixes were made. In 1996, Markus Kuhn proposed a system with di prefixes, like the "dikilobyte" (K₂B or K2B). Donald Knuth, who uses decimal notation like 1 MB = 1000 kB, expressed "astonishment" that the IEC proposal was adopted, calling the prefixes "funny-sounding" and opining that proponents were assuming "that standards are automatically adopted just because they are there." Knuth proposed that the powers of 1024 be designated as "large kilobytes" and "large megabytes" (abbreviated KKB and MMB, as "doubling the letter connotes both binary-ness and large-ness").
Double prefixes had already been abolished from SI, however, having a multiplicative meaning ("MMB" would be equivalent to "TB"), and this proposed usage never gained any traction.

IEC prefixes

The set of binary prefixes that were eventually adopted, now referred to as the "IEC prefixes", were first proposed by the International Union of Pure and Applied Chemistry's (IUPAC) Interdivisional Committee on Nomenclature and Symbols (IDCNS) in 1995. At that time, it was proposed that the terms kilobyte and megabyte be used only for 10^3 bytes and 10^6 bytes, respectively. The new prefixes kibi (kilobinary), mebi (megabinary), gibi (gigabinary) and tebi (terabinary) were also proposed at the time, and the proposed symbols for the prefixes were kb, Mb, Gb and Tb respectively, rather than Ki, Mi, Gi and Ti. The proposal was not accepted at the time. The Institute of Electrical and Electronics Engineers (IEEE) began to collaborate with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) to find acceptable names for binary prefixes. The IEC proposed kibi, mebi, gibi and tebi, with the symbols Ki, Mi, Gi and Ti respectively, in 1996. The names for the new prefixes are derived from the original SI prefixes combined with the term binary, but contracted, by taking the first two letters of the SI prefix and "bi" from binary. The first letter of each such prefix is therefore identical to the corresponding SI prefix, except for "K", which is used interchangeably with "k", whereas in SI, only the lower-case k represents 1000. The IEEE decided that its standards would use the prefixes kilo, etc. with their metric definitions, but allowed the binary definitions to be used in an interim period as long as such usage was explicitly pointed out on a case-by-case basis.
Adoption by IEC, NIST and ISO

In January 1999, the IEC published the first international standard (IEC 60027-2 Amendment 2) with the new prefixes, extended up to pebi (Pi) and exbi (Ei). The IEC 60027-2 Amendment 2 also states that the IEC position is the same as that of the BIPM (the body that regulates the SI system): the SI prefixes retain their definitions in powers of 1000 and are never used to mean a power of 1024. In usage, products and concepts typically described using powers of 1024 would continue to be, but with the new IEC prefixes. For example, a memory module of 536,870,912 bytes (512 × 2^20) would be referred to as 512 MiB or 512 mebibytes instead of 512 MB or 512 megabytes. Conversely, since hard drives have historically been marketed using the SI convention that "giga" means 1,000,000,000, a "500 GB" hard drive would still be labeled as such. According to these recommendations, operating systems and other software would also use binary and SI prefixes in the same way, so the purchaser of a "500 GB" hard drive would find the operating system reporting either "500 GB" or "466 GiB", while 536,870,912 bytes of RAM would be displayed as "512 MiB". The second edition of the standard, published in 2000, defined them only up to exbi, but in 2005, the third edition added the prefixes zebi and yobi, thus matching all SI prefixes with binary counterparts. The harmonized ISO/IEC 80000-13:2008 standard cancels and replaces subclauses 3.8 and 3.9 of IEC 60027-2:2005 (those defining prefixes for binary multiples). The only significant change is the addition of explicit definitions for some quantities. In 2009, the prefixes kibi-, mebi-, etc. were defined by ISO 80000-1 in their own right, independently of the kibibyte, mebibyte, and so on. The BIPM standard JCGM 200:2012 "International vocabulary of metrology – Basic and general concepts and associated terms (VIM), 3rd edition" lists the IEC binary prefixes and states: "SI prefixes refer strictly to powers of 10, and should not be used for powers of 2.
For example, 1 kilobit should not be used to represent 1024 bits (2^10 bits), which is 1 kibibit."

Other standards bodies and organizations

The IEC standard binary prefixes are now supported by other standardization bodies and technical organizations. The United States National Institute of Standards and Technology (NIST) supports the ISO/IEC standards for "Prefixes for binary multiples" and has a web site documenting them, describing and justifying their use. NIST suggests that in English, the first syllable of the name of the binary-multiple prefix should be pronounced in the same way as the first syllable of the name of the corresponding SI prefix, and that the second syllable should be pronounced as "bee". NIST has stated that the SI prefixes "refer strictly to powers of 10" and that the binary definitions "should not be used" for them. The microelectronics industry standards body JEDEC describes the IEC prefixes in its online dictionary with a note: "The definitions of kilo, giga, and mega based on powers of two are included only to reflect common usage." The JEDEC standards for semiconductor memory use the customary prefix symbols K, M and G in the binary sense. On 19 March 2005, the IEEE standard IEEE 1541-2002 ("Prefixes for Binary Multiples") was elevated to a full-use standard by the IEEE Standards Association after a two-year trial period. However, the IEEE Publications division does not require the use of IEC prefixes in its major magazines such as Spectrum or Computer. The International Bureau of Weights and Measures (BIPM), which maintains the International System of Units (SI), expressly prohibits the use of SI prefixes to denote binary multiples, and recommends the use of the IEC prefixes as an alternative, since units of information are not included in the SI. The Society of Automotive Engineers (SAE) prohibits the use of SI prefixes with anything but a power-of-1000 meaning, but does not recommend or otherwise cite the IEC binary prefixes.
The European Committee for Electrotechnical Standardization (CENELEC) adopted the IEC-recommended binary prefixes via the harmonization document HD 60027-2:2003-03. The European Union (EU) has required the use of the IEC binary prefixes since 2007.

Current practice

Most computer hardware uses SI prefixes to state capacity and define other performance parameters such as data rate. Main and cache memories are notable exceptions: their capacities are usually expressed with customary binary prefixes. On the other hand, flash memory, like that found in solid state drives, mostly uses SI prefixes to state capacity. Some operating systems and other software continue to use the customary binary prefixes in displays of memory, disk storage capacity, and file size, but SI prefixes in other areas such as network communication speeds and processor speeds. In the following subsections, unless otherwise noted, examples are first given using the common prefixes used in each case, and then followed by interpretation using other notation where appropriate.

Operating systems

Prior to the release of Macintosh System Software (1984), file sizes were typically reported by the operating system without any prefixes. Today, most operating systems report file sizes with prefixes. The Linux kernel uses standards-compliant decimal and binary prefixes when booting up. However, many Unix-like system utilities, such as the ls command, use powers of 1024 indicated as K/M (customary binary prefixes) if called with the "-h" option; they give the exact value in bytes otherwise. The GNU versions will also use powers of 10 indicated with k/M if called with the "--si" option. The Ubuntu Linux distribution uses the IEC prefixes for base-2 numbers as of the 10.10 release. Microsoft Windows reports file sizes and disk device capacities using the customary binary prefixes or, in a "Properties" dialog, using the exact value in bytes.
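A minimal sketch of how display software can convert a raw byte count into an IEC-prefixed string (the helper name is invented; real utilities differ in rounding and precision details):

```python
IEC_UNITS = ["B", "KiB", "MiB", "GiB", "TiB", "PiB", "EiB"]

def to_iec(n):
    # Divide by 1024 until the value fits under the next unit boundary.
    value, unit = float(n), 0
    while value >= 1024 and unit < len(IEC_UNITS) - 1:
        value /= 1024
        unit += 1
    return f"{value:.0f} {IEC_UNITS[unit]}"

print(to_iec(536870912))    # 512 MiB
print(to_iec(500 * 10**9))  # 466 GiB
```

The two outputs match the recommendation described above: binary-sized RAM displays exactly ("512 MiB"), while a decimal-marketed "500 GB" drive displays as "466 GiB".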
iOS 10 and earlier, Mac OS X Leopard and earlier, and watchOS use the binary system (1 GB = 2^30 bytes). Apple product specifications, iOS, and macOS (including Mac OS X Snow Leopard: version 10.6) now report sizes using SI decimal prefixes (1 GB = 1,000,000,000 bytes).

Software

Most software does not distinguish symbols for binary and decimal prefixes. The IEC binary naming convention has been adopted by a few, but it is not used universally. One of the stated goals of the introduction of the IEC prefixes was "to preserve the SI prefixes as unambiguous decimal multipliers." Programs such as fdisk/cfdisk, parted, and apt-get use SI prefixes with their decimal meaning. Example of the use of IEC binary prefixes in the Linux operating system, displaying traffic volume on a network interface in kibibytes (KiB) and mebibytes (MiB), as obtained with the ifconfig utility:

eth0      Link encap:Ethernet  [...]
          RX packets:254804 errors:0 dropped:0 overruns:0 frame:0
          TX packets:756 errors:0 dropped:0 overruns:0 carrier:0
          [...]
          RX bytes:18613795 (17.7 MiB)  TX bytes:45708 (44.6 KiB)

Software that uses IEC binary prefixes for powers of 1024 and standard SI prefixes for powers of 1000 includes:

GNU Core Utilities
GParted
FreeDOS-32
ifconfig
GNOME Network
SLIB
Cygwin/X
HTTrack
Pidgin (IM client)
Deluge
yafc
tnftp
WinSCP
MediaInfo

Software that uses standard SI prefixes for powers of 1000, but not IEC binary prefixes for powers of 1024, includes:

Mac OS X v10.6 and later, for hard drive and file sizes

Software that supports decimal prefixes for powers of 1000 and binary prefixes for powers of 1024 (but does not follow SI or IEC nomenclature for this) includes:

4DOS (uses lowercase letters as decimal and uppercase letters as binary prefixes)

Computer hardware

Hardware types that use powers-of-1024 multipliers, such as memory, continue to be marketed with customary binary prefixes.
Computer memory

Measurements of most types of electronic memory, such as RAM and ROM, are given using customary binary prefixes (kilo, mega, and giga). This includes some flash memory, like EEPROMs. For example, a "512-megabyte" memory module is 536,870,912 bytes (512 × 1,048,576). The JEDEC Solid State Technology Association, the semiconductor engineering standardization body of the Electronic Industries Alliance (EIA), continues to include the customary binary definitions of kilo, mega and giga in its Terms, Definitions, and Letter Symbols document, and uses those definitions in later memory standards (see also JEDEC memory standards). Many computer programming tasks reference memory in terms of powers of two because of the inherent binary design of current hardware addressing systems. For example, a 16-bit processor register can reference at most 65,536 items (bytes, words, or other objects); this is conveniently expressed as "64K" items. An operating system might map memory as 4096-byte pages, in which case exactly 8192 pages could be allocated within 33,554,432 bytes of memory: "8K" (8192) pages of "4 kilobytes" (4096 bytes) each within "32 megabytes" (32 MiB) of memory.

Hard disk drives

All hard disk drive manufacturers state capacity using SI prefixes.

Flash drives

USB flash drives, flash-based memory cards like CompactFlash or Secure Digital, and flash-based solid-state drives (SSDs) use SI prefixes; for example, a "256 MB" flash card provides at least 256 million bytes (256,000,000), not 256×1024×1024 (268,435,456). The flash memory chips inside these devices contain considerably more than the quoted capacities, but much like a traditional hard drive, some space is reserved for internal functions of the flash drive. These include wear leveling, error correction, sparing, and metadata needed by the device's internal firmware.

Floppy drives

Floppy disks have existed in numerous physical and logical formats, and have been sized inconsistently.
In part, this is because the end user capacity of a particular disk is a function of the controller hardware, so that the same disk could be formatted to a variety of capacities. In many cases, the media are marketed without any indication of the end user capacity, as for example, DSDD, meaning double-sided double-density.

The last widely adopted diskette was the 3½-inch high density. This has a formatted capacity of bytes or 1440 KB (1440 × 1024, using "KB" in the customary binary sense). These are marketed as "HD", or "1.44 MB" or both. This usage creates a third definition of "megabyte" as 1000×1024 bytes. Most operating systems display the capacity using "MB" in the customary binary sense, resulting in a display of "1.4 MB" (). Some users have noticed the missing 0.04 MB, and both Apple and Microsoft have support bulletins referring to them as 1.4 MB.

The earlier "1200 KB" (1200×1024 bytes) 5¼-inch diskette sold with the IBM PC AT was marketed as "1.2 MB" (). The largest 8-inch diskette formats could contain more than a megabyte, and the capacities of those devices were often irregularly specified in megabytes, also without controversy. Older and smaller diskette formats were usually identified as an accurate number of (binary) KB; for example, the Apple Disk II described as "140KB" had a 140×1024-byte capacity, and the original "360KB" double-sided, double-density disk drive used on the IBM PC had a 360×1024-byte capacity. In many cases diskette hardware was marketed based on unformatted capacity, and the overhead required to format sectors on the media would reduce the nominal capacity as well (and this overhead typically varied based on the size of the formatted sectors), leading to more irregularities.

Optical discs
The capacities of most optical disc storage media like DVD, Blu-ray Disc, HD DVD and magneto-optical (MO) are given using SI decimal prefixes. A "4.7 GB" DVD has a nominal capacity of about 4.38 GiB.
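The conversions quoted above are easy to verify; the following sketch simply restates the arithmetic from the text (the constants are the definitions given there, nothing more):

```python
GB = 10**9    # SI gigabyte
GiB = 2**30   # IEC gibibyte

# A "4.7 GB" DVD expressed in binary units:
dvd_gib = 4.7 * GB / GiB
print(f"{dvd_gib:.2f} GiB")   # -> 4.38 GiB, as stated above

# The "1.44 MB" floppy: 1440 "KB" where each KB is 1024 bytes,
# so its "MB" is the hybrid 1000 x 1024-byte definition.
floppy = 1440 * 1024
print(floppy)                       # 1474560 bytes
print(f"{floppy / 2**20:.1f} MB")   # -> 1.4 MB, the figure most OSes display
```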
However, CD capacities are always given using customary binary prefixes. Thus a "700-MB" (or "80-minute") CD has a nominal capacity of about 700 MiB (approximately 730 MB).

Tape drives and media
Tape drive and media manufacturers use SI decimal prefixes to identify capacity.

Data transmission and clock rates
Certain units are always used with SI decimal prefixes even in computing contexts. Two examples are hertz (Hz), which is used to measure the clock rates of electronic components, and bit/s and B/s, which are used to measure data transmission speed.
A 1 GHz processor receives clock ticks per second.
A sound file sampled at has samples per second.
A MP3 stream consumes bits (16 kilobytes, ) per second.
A Internet connection can transfer bits per second ( bytes per second ≈ , assuming an 8-bit byte and no overhead).
A Ethernet connection can transfer at a nominal speed of bits per second ( bytes per second ≈ , assuming an 8-bit byte and no overhead).
A 56k modem transfers bits per second ≈ .

Bus clock speeds, and therefore bandwidths, are both quoted using SI decimal prefixes. PC3200 memory on a double data rate bus, transferring 8 bytes per cycle with a clock speed of ( cycles per second), has a bandwidth of = B/s = (about ). A PCI-X bus at ( cycles per second), 64 bits per transfer, has a bandwidth of transfers per second × 64 bits per transfer = bit/s, or B/s, usually quoted as (about ).

Use by industry
IEC prefixes are used by Toshiba, IBM, and HP to advertise or describe some of their products. According to one HP brochure, "[t]o reduce confusion, vendors are pursuing one of two remedies: they are changing SI prefixes to the new binary prefixes, or they are recalculating the numbers as powers of ten." The IBM Data Center also uses IEC prefixes to reduce confusion.
The IBM Style Guide reads:

To help avoid inaccuracy (especially with the larger prefixes) and potential ambiguity, the International Electrotechnical Commission (IEC) in 2000 adopted a set of prefixes specifically for binary multipliers (See IEC 60027-2). Their use is now supported by the United States National Institute of Standards and Technology (NIST) and incorporated into ISO 80000. They are also required by EU law and in certain contexts in the US. However, most documentation and products in the industry continue to use SI prefixes when referring to binary multipliers. In product documentation, follow the same standard that is used in the product itself (for example, in the interface or firmware). Whether you choose to use IEC prefixes for powers of 2 and SI prefixes for powers of 10, or use SI prefixes for a dual purpose ... be consistent in your usage and explain to the user your adopted system.

See also
Binary engineering notation
ISO/IEC 80000
Nibble
Octet

Definitions

References

Further reading
– An introduction to binary prefixes
—a 1996–1999 paper on bits, bytes, prefixes and symbols
—Another description of binary prefixes
—White-paper on the controversy over drive capacities

External links
A plea for sanity
A summary of the organizations, software, and so on that have implemented the new binary prefixes
KiloBytes vs. kilobits vs. Kibibytes (Binary prefixes)
SI/Binary Prefix Converter
Storage Capacity Measurement Standards

Measurement Naming conventions Units of information Numeral systems
2430155
https://en.wikipedia.org/wiki/Vortex%20Software
Vortex Software
Vortex Software was a video game developer founded by Costa Panayi and Paul Canter in the early 1980s to sell the game Cosmos, which Panayi had developed for the Sinclair ZX81. They converted the game to the ZX Spectrum, but due to the low sales of the ZX81 version they licensed the game to Abbex. Luke Andrews, Costa's brother-in-law, and Crete Panayi, Costa's brother, became involved to handle the business affairs and advertising, respectively. The company was based at 280 Brooklands Road, Manchester, M23 9HD.

In the summer of 1984, Mark Haigh-Hutchinson was offered a position and ported several of the games to the Amstrad CPC in addition to writing Alien Highway for the ZX Spectrum. Chris Wood and David Aubrey-Jones were also associated with Vortex as outside contractors. The company produced several notable games for the 8-bit home computers of the period. Deflektor was also ported to the Commodore Amiga and Atari ST.

After the production of Hostile All Terrain Encounter in 1988, Costa spent the next two years deciding where he wanted to go. In 1990 Vortex was reborn with Costa, Mark, and Luke, with the intention of developing a game for the Amiga and Atari ST. They developed a much-enhanced version of Highway Encounter in just three months, but failed to find a software publisher for the game, so it remained unpublished.

Several games achieved critical success; Tornado Low Level and Highway Encounter, for example, appeared in the "Your Sinclair official top 100".

List of games

ZX81
ZX81 Othello (1981)
Word Mastermind (1981)
Pontoon (1981)
Crash (1981)
Astral Convoy (1983)
Serpent's Tomb (1983)

ZX Spectrum
Cosmos (1982), Abbex Electronics
Gun Law (1983)
Android One: The Reactor Run (1983), Vortex Software
Android Two (1983), Vortex Software
Tornado Low Level (1984), Vortex Software
Cyclone (1985), Vortex Software
Highway Encounter (1985), Vortex Software
Alien Highway (1986), Vortex Software
Revolution (1986), U.S. Gold
Deflektor (1987), Gremlin Graphics
Hostile All Terrain Encounter (H.A.T.E.) (1989), Gremlin Graphics

Amstrad CPC
Android One: The Reactor Run (1985)
Android Two (1985)
Highway Encounter (1985)
Tornado Low Level (T.L.L.) (1986)
Alien Highway (1986)
Revolution (1986, published by U.S. Gold)
Deflektor (1987, published by Gremlin Graphics)
Hostile All Terrain Encounter (H.A.T.E.) (1988, published by Gremlin 1990, unfinished)

MSX
Highway Encounter (1986)

Commodore C64
Highway Encounter (Pedigree Software, 1986, published by Gremlin Graphics)
Deflektor (Gremlin Graphics, 1987, published by Gremlin Graphics)
Hostile All Terrain Encounter (H.A.T.E.) (Gremlin Graphics, 1987, published by Gremlin Graphics)

References

External links
History of Vortex Software by Mark Haigh-Hutchinson
Costa Panayi at World of Spectrum

Defunct video game companies of the United Kingdom Software companies of the United Kingdom
29593327
https://en.wikipedia.org/wiki/Pidoco
Pidoco
The Pidoco Usability Suite is cloud-based collaboration software by Pidoco GmbH for creating, sharing, and testing wireframes, mockups, and prototypes of websites, mobile apps, and enterprise software applications.

Pidoco Usability Suite
The Pidoco Usability Suite is cloud-based collaboration software for planning, designing, and testing websites, web applications, mobile apps, and enterprise software. The tool enables users to quickly create clickable wireframes, mockups, and interactive low-fidelity prototypes of GUIs without programming by using drag-and-drop placement of pre-fabricated elements. Pidoco allows users to share projects with team members and other project stakeholders for real-time online collaboration, reviewing, and user testing. The web application is used to visualize requirements, collaborate in the design phases of software development, involve end users, and generate optimized specifications for traditional or agile development processes. Pidoco is browser-based and thus compatible with Windows, Mac OS, and Linux.

Functionality
Pidoco is structured into several views that give access to various functions, including:
Editor – creating, sharing, and exporting wireframes, mockups, and prototypes
Simulator – reviewing and discussing projects and requirements
Tester – testing prototypes for usability with end users online

Versions
The Pidoco Usability Suite is marketed in three editions:
Basic – the entry-level version, which includes the editing functions.
Classic – includes all Basic functionality, additional sharing options, and real-time collaborative editing for teams.
Expert – the top-of-the-line version, which includes all Classic functions as well as testing functionality and additional collaboration options such as screen sharing, VoIP connection, chat, and session recording.
Compatibility
The Pidoco Usability Suite is a browser-based application that runs on various operating systems, including Windows, Mac OS, and Linux. Supported browsers include Internet Explorer 7+, Firefox 3.0+, Chrome 4+, and Safari 4+ for the simulation and testing functions that allow teams to collaborate on prototypes by gathering feedback. Creating and editing prototypes is supported in Firefox 3.0+, Chrome 4+, and Safari 4+.

Requirements
Pidoco is a JavaScript/SVG-based tool that uses open web standards. It is offered on the software-as-a-service (SaaS) model and does not require download or installation. The software is accessed online via the application homepage, so a broadband internet connection and a web browser are necessary to use it.

Software-as-a-Service
Pidoco operates on the Software-as-a-Service (SaaS) model and is available as a standard hosted SaaS option, a hosted dedicated-server option, or an in-house installation. Attributes of SaaS include flexibility and scalability, multi-tenant capability, recurring subscription fees rather than upfront capital cost, and no requirement for internal hardware.

See also
Software prototyping
Software visualization
Website wireframe
Collaboration
Usability
List of collaborative software

References

Further reading

External links
Official resources
Guided tour of Pidoco Usability Suite
Official Online Help for Pidoco Usability Suite

Cloud applications Collaborative software
5391414
https://en.wikipedia.org/wiki/Jv16%20powertools
Jv16 powertools
jv16 PowerTools, developed by Macecraft Software, is a utility software suite for the Microsoft Windows operating system that, according to its web site, can remove unneeded, erroneous, and left-over data; its functions include registry cleaning and optimisation, searching for files and registry entries, and finding the DNS server that gives the fastest Internet query resolution. It has been translated into many languages and has been reviewed in several blogs and articles. A competing tool for the same purpose is CCleaner.

Main features
Features offered vary between releases. The main functions include:
Clean and speed up my computer
Startup Optimizer
Software Uninstaller
Startup Timer
Windows AntiSpy
Check for Vulnerable Software

License
The software may be downloaded in fully functional form for a limited trial period. Various licences are available, typically covering several computers for one year or for a lifetime. Prices and the duration of the free trial vary, and special prices may be offered for a limited period.

References

Utilities for Windows Pascal (programming language) software
23172430
https://en.wikipedia.org/wiki/Voice-directed%20warehousing
Voice-directed warehousing
Voice-directed warehousing (VDW) refers to the use of voice direction and speech recognition software in warehouses and distribution centers. VDW has been in use since the late 1990s, and its use has increased dramatically since then. In a voice-directed warehouse, workers wear a headset connected to a small wearable computer, similar in size to a Sony Walkman, which tells the worker where to go and what to do using verbal prompts. Workers confirm their tasks by speaking pre-defined commands and reading confirmation codes printed on locations or products throughout the warehouse. The speech recognition software running on the wearable computer 'understands' the workers' responses. Voice-directed warehousing is typically used instead of paper- or mobile computer-based systems that require workers to read instructions and scan barcodes or key-enter information to confirm their tasks. By freeing a worker's hands and eyes, voice-directed systems typically improve efficiency, accuracy, and safety. While VDW was originally used for order picking, all warehouse functions, such as goods receiving, put-away, replenishment, shipping, and returns processing, can now be coordinated by voice systems.

Overview
The first incarnations of voice-directed warehousing were implemented in distribution centers in the early 1990s. Since then, voice technology has changed dramatically. Most notably, the technology was originally limited to picking, whereas now all warehouse functions (picking, receiving/put-away, replenishment, shipping) can be coordinated by voice systems. These processes have moved from being paper-centric to RF-centric (barcode scanning) and now voice-centric. For some, voice has become the starting point for re-engineering warehouse processes and systems, rather than an afterthought. VDW technology has also undergone an evolution as more competitors have entered the market.
The first solutions were based on dedicated and rugged voice appliances: mobile computing devices that ran the speech recognition software and communicated with a server over a wireless network. These special-purpose voice appliances used a speech recognition engine that was specially designed for the warehouse and provided by the appliance manufacturer. Since the early 2000s, more voice suppliers have entered the market, providing voice recognition systems for standard mobile computing devices that had previously been used for barcode scanning applications in the warehouse. These standard mobile computers from companies like Motorola, Intermec, and LXE also support non-proprietary recognition software. The uncoupling of the hardware and speech recognition software has resulted in lower-priced voice-directed warehousing solutions and an increase in the number of software providers. These two factors contributed to a rapid rise in the adoption of VDW that continues today.

Potential benefits
Implementing voice systems in the warehouse has among its benefits:
Increased picking accuracy
Increased inventory accuracy
Increased employee productivity
Improved safety
Reduced training time for new workers
Increased job satisfaction for warehouse associates
Elimination of the cost of printing and distributing picking documents
Improved customer satisfaction

One of the great productivity benefits of voice-based systems is that they allow operators to do two things at once, whereas other media used in warehouses, such as paper or radio-frequency terminals, tend to require that the operator surrender the use of at least one hand or stop and read something before proceeding. This productivity benefit tends to be inversely related to the fraction of an operator's day spent walking.

References

Freight transport Warehouses
1905859
https://en.wikipedia.org/wiki/List%20of%20University%20of%20Michigan%20alumni
List of University of Michigan alumni
There are more than 500,000 living alumni of the University of Michigan. Notable alumni include computer scientist and entrepreneur Larry Page, actor James Earl Jones, and President of the United States Gerald Ford. Alumni Nobel laureates Stanley Cohen (Ph.D. 1949), co-winner of the 1986 Nobel Prize in Physiology or Medicine for discovering growth factors (proteins regulating cell growth) in human and animal tissue Jerome Karle (Ph.D. 1944), Chief Scientist, Laboratory for the Structure of Matter, Naval Research Laboratory; Nobel Prize in Chemistry, 1985 Paul Milgrom (BA 1970) is an American economist and won the prize for economics in 2020 Marshall Nirenberg (Ph.D. 1957), Chief of Biomedical Genetics, National Heart, Lung and Blood Institute, NIH; Nobel Prize in Physiology or Medicine, 1968 H. David Politzer (BS 1969), physicist; professor at California Institute of Technology; Nobel Prize in Physics, 2004 Robert Shiller (BA 1967), economist, academic, and best-selling author Richard Smalley (COE: BS 1965), chemist, Nobel Prize in 1996 for the co-discovery of fullerenes Samuel C. C. Ting (BS 1959, Ph.D. 1962), physicist, awarded Nobel Prize in 1976 for discovering the J/ψ particle Thomas H. Weller (A.B. 1936, M.S. 1937), Nobel Prize in Physiology or Medicine, 1954 Activists Benjamin Aaron (LS&A 1937), scholar of labor law; director of the National War Labor Board during World War II; vice chairman of the National Wage Stabilization Board during the Truman administration Ricardo Ainslie (Ph.D.), native of Mexico City, Mexico; Guggenheim award winner Santos Primo Amadeo (BA), a.k.a. 
"Champion of Hábeas Corpus;" attorney and law professor at the University of Puerto Rico; Senator in the Puerto Rico legislature; counsel to the American Civil Liberties Union branch in Puerto Rico, established in 1937; winner of a Guggenheim award Huwaida Arraf (LS&A: 1998), Palestinian rights activist; co-founder of the International Solidarity Movement; chair of the Free Gaza Movement Octavia Williams Bates (BA 1877; LAW 1896), suffragist, clubwoman, author Jan BenDor (SOSW M.S.W.), women's rights activist, member of Michigan Women's Hall of Fame Bunyan Bryant, environmental justice advocate Mary Frances Berry (LAW: JD/Ph.D.), former chairwoman of United States Civil Rights Commission Cindy Cohn (LAW: JD 1988), attorney for Bernstein v. United States, legal director for the Electronic Frontier Foundation Richard Cordley (BA 1854), abolitionist minister who served Lawrence, Kansas during the 19th century George William Crockett (LAW: JD 1934), African American attorney; state court judge in Detroit, Michigan; US Representative; national vice-president of the National Lawyers Guild; participated in the founding convention of the racially integrated National Lawyers Guild in 1937, and later served as its national vice-president; first African American lawyer in the U.S. Department of Labor (1939–1943) Clarence Darrow (LAW 1878), Leopold and Loeb lawyer, defense attorney for John T. Scopes Terry Davis (BUS: MBA 1962), member of the UK Parliament for 28 years, now Secretary General of the Council of Europe and human rights activist Geoffrey Fieger (BA 1974; MA 1976), attorney; defense attorney for Jack Kevorkian Alan Haber, first President of the Students for a Democratic Society Tom Hayden, author of Port Huron Statement; member of Chicago Seven; co-founder of Students for a Democratic Society; member of each house of California's Legislature Alireza Jafarzadeh, Iranian activist and nuclear analyst Lyman T. 
Johnson (AM 1931), history graduate; the grandson of slaves; successfully sued to integrate the University of Kentucky, opening that state's colleges and universities to African-Americans five years before the landmark Brown v. Board of Education ruling Maureen Greenwood, human rights activist active in Russia Belford Vance Lawson, Jr., attorney who made at least eight appearances before the Supreme Court; attended Michigan and became the school's first African American varsity football player Michael Newdow (LAW: JD 1988), made headlines by challenging the constitutionality of the Pledge of Allegiance Carl Oglesby, writer, academic, and political activist; president of the radical student organization Students for a Democratic Society from 1965 to 1966 Laura Packard (BS 1997), health care activist, blocked by Donald Trump on Twitter, spoke at 2020 Democratic National Convention. Milo Radulovich, became a symbol of the excesses of anti-Communism when he challenged his removal from the Air Force Reserve (judged a security risk) and his story was chronicled by Edward Murrow in 1953 on the television newsmagazine program See It Now; in 2008 the Board of Regents approved a posthumous Bachelor of Science degree with a concentration in physics Ralph Rose, six-time Olympic medalist, began the tradition of refusing to dip the United States flag during opening ceremonies Eliza Read Sunderland (PH.B 1889; PH.D. 1892), writer, educator, lecturer, women's rights advocate Jack Hood Vaughn (BA, MA), second Director of the United States Peace Corps, succeeding Sargent Shriver Raoul Wallenberg (ARCH: B.Arch. 
1935), Swedish diplomat, rescued thousands of Jews during the Holocaust, primarily in Hungary Jerry White (BUS: MBA 2005), co-founder and executive director of the Landmine Survivors Network Hao Wu (BUS: MBA 2000), documentary filmmaker and blogger; controversially imprisoned by the Chinese government for 5 months in 2006 Susan Lederman (BA 1958), President of the League of US Women Voters

AAAI, ACM, IEEE Fellows and Awardees
As of 2021, more than 65 Michigan alumni have been named as Fellows. Of those alumni, 4 have been awarded the Eckert-Mauchly Award (out of the 42 total awards granted), the most prestigious award for contributions to computer architecture. Gul Agha (computer scientist), IEEE and ACM Fellow Frances Allen, ACM Fellow; an American computer scientist and pioneer in the field of optimizing compilers; a Turing Award winner Remzi Arpaci-Dusseau, ACM Fellow; winner of the SIGOPS Mark Weiser Award Farrokh Ayazi, named a Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 2013 for contributions to micro-electro-mechanical resonators and resonant gyroscopes. Andrew Barto, IEEE Fellow; IEEE Neural Networks Society Pioneer Award. Paul R. Berger (BS Engin. Physics 1985, MS EE 1987, Ph.D. EE 1990) was named an IEEE Fellow (2011), an Outstanding Engineering Educator for the State of Ohio (2014), and a Fulbright-Nokia Distinguished Chair in Information and Communications Technologies (2020). Randal Bryant, ACM Fellow; IEEE Fellow Robert Cailliau, ACM Software System Award for co-development of the World Wide Web Sunghyun Choi, named an IEEE Fellow in 2014 Edgar F.
Codd, a Turing Award winner; an English computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases and relational database management systems Stephen Cook, ACM Fellow; OC, OOnt (born December 14, 1939); an American-Canadian computer scientist and mathematician who has made major contributions to the fields of complexity theory and proof complexity; a Turing Award winner Edward S. Davidson, IEEE Fellow; 2000 IEEE/ACM Eckert-Mauchly Award "for his seminal contributions to the design, implementation, and performance evaluation of high performance pipelines and multiprocessor systems" Dorothy E. Denning, ACM Fellow; the 2001 Augusta Ada Lovelace Award from the Assoc. for Women in Computing acknowledged "her outstanding work in computer security and cryptography as well as her extraordinary contributions to national policy debates on cyber terrorism and information warfare" David DeWitt, ACM Fellow; received the ACM SIGMOD Innovations Award (now renamed the SIGMOD Edgar F. Codd Innovations Award) in 1995 for his contributions to the database systems field. Alexandra Duel-Hallen, a professor of electrical and computer engineering at North Carolina State University known for her research in wireless networks; named an IEEE Fellow in 2011. George V. Eleftheriades, a researcher in the field of metamaterials; a fellow of the IEEE and the Royal Society of Canada. Usama Fayyad holds over 30 patents and is a Fellow of both the AAAI (Association for the Advancement of Artificial Intelligence) and the ACM (Association for Computing Machinery). Michael J. Fischer, ACM Fellow; a computer scientist who works in the fields of distributed computing, parallel computing, cryptography, algorithms and data structures, and computational complexity; served as the editor-in-chief of the Journal of the ACM in 1982–1986. James D.
Foley is an ACM Fellow, an IEEE Fellow, and a member of the National Academy of Engineering Stephanie Forrest, ACM/AAAI Allen Newell Award (2011) Elmer G. Gilbert, IEEE Fellow; in control theory, he is well known for the “Gilbert realization,” still a standard topic in systems textbooks, and developed the foundational results for control over a moving horizon, which underlies model predictive control (MPC); a member of the National Academy of Engineering, and Fellow of the IEEE and the American Association for the Advancement of Science. Lee Giles, ACM Fellow; IEEE Fellow; most recently received the 2018 Institute of Electrical and Electronics Engineers (IEEE) Computational Intelligence Society (CIS) Neural Networks Pioneer Award and the 2018 National Federation of Advanced Information Services (NFAIS) Miles Conrad Award. Adele Goldberg (computer scientist) was president of the Association for Computing Machinery (ACM) from 1984 to 1986 Robert M. Graham (ACM Fellow) was a computer scientist and cybersecurity researcher Herb Grosch, ACM Fellow; received the Association for Computing Machinery Fellows Award in 1995; an early computer scientist, perhaps best known for Grosch's law Mark Guzdial, ACM Fellow; the original developer of the CoWeb (or Swiki), one of the earliest wiki engines, which was implemented in Squeak and has been in use at institutions of higher education since 1998. Rick Hayes-Roth, AAAI Fellow Mark D. Hill, named an Association for Computing Machinery Fellow in 2004 for "contributions to memory consistency models and memory system design", and awarded the ACM SIGARCH Alan D. Berenbaum Distinguished Service Award in 2009; in 2019, he received the ACM-IEEE CS Eckert-Mauchly Award for "seminal contributions to the fields of cache memories, memory consistency models, transactional memory, and simulation." Julia Hirschberg, IEEE Fellow, member of the National Academy of Engineering, ACM Fellow, AAAI Fellow. John M.
Hollerbach, named an IEEE Fellow in 1996 Tara Javidi, IEEE Fellow Bill Joy, co-founder of Sun Microsystems; in 1986 was awarded the Grace Murray Hopper Award by the ACM for his work on the Berkeley UNIX operating system. Nam Sung Kim, IEEE Fellow John D. Kraus, IEEE Fellow; winner of an IEEE Centennial Medal and the IEEE Heinrich Hertz Medal David Kuck, a fellow of the American Association for the Advancement of Science, the Association for Computing Machinery (ACM), and the Institute of Electrical and Electronics Engineers; also a member of the National Academy of Engineering; has won the Eckert-Mauchly Award from ACM/IEEE and the IEEE Computer Society Charles Babbage Award. Cliff Lampe, since 2018 Executive Vice President for ACM SIGCHI. John E. Laird, ACM Fellow; AAAI Fellow; AAAS member Carl Landwehr, IEEE Fellow; winner of ACM SIGSAC's Outstanding Contribution Award (2013) Peter Lee (computer scientist), ACM Fellow; a longtime researcher at Microsoft, Lee became head of Microsoft Research in 2013. In 2014, the organization had 1,100 advanced researchers "working in 55 areas of study in a dozen labs worldwide." Chih-Jen Lin, ACM Fellow; AAAI Fellow; IEEE Fellow; a leading researcher in machine learning, optimization, and data mining K. J. Ray Liu, IEEE Fellow; elected 2021 IEEE President-Elect, serving as 2022 IEEE President and CEO. Patrick Drew McDaniel, ACM Fellow; IEEE Fellow Olgica Milenkovic, named an IEEE Fellow "for contributions to genomic data compression". Edmund Miller, named an IEEE Fellow "for contributions to computational electromagnetics". David L. Mills invented the Network Time Protocol (1981), the DEC LSI-11 based fuzzball router that was used for the 56 kbit/s NSFNET (1985), and the Exterior Gateway Protocol (1984), inspired the author of ping for BSD (1983), and had the first FTP implementation.
IEEE Fellow; winner of the IEEE Internet Award in 2013 Yi Murphey, IEEE Fellow Shamkant Navathe, ACM Fellow; a noted researcher in the field of databases, with more than 150 publications on different topics in the area. Judith S. Olson, ACM Fellow, with over 110 published research articles Kunle Olukotun, ACM Fellow; known as the “father of the multi-core processor” Elliott Organick, founder of the ACM Special Interest Group on Computer Science Education; SIGCSE Award for Outstanding Contribution to Computer Science Education (1985) C. Raymond Perrault, named a founding member of AAAI in 1990 and an AAAS member in 2011 Raymond Reiter was a Fellow of the Association for Computing Machinery (ACM), an AAAI Fellow, and a Fellow of the Royal Society of Canada. Paul Resnick, ACM Fellow for his contributions to recommender systems, economics and computation, and online communities; winner of the 2010 ACM Software Systems Award Jennifer Rexford won the ACM Grace Murray Hopper Award (which goes to a computer professional who makes a single, significant technical or service contribution at or before age 35) in 2005, for her work on introducing network routing subject to the different business interests of the operators of different subnetworks into the Border Gateway Protocol. Wally Rhines was named overall CEO of the Year by the Portland Business Journal in 2012 and Oregon Technology Executive of the Year by the Technology Association of Oregon in 2003; he was named an IEEE Fellow in 2017. Keith W. Ross, ACM Fellow; Dean of Engineering and Computer Science at NYU Shanghai and a computer science professor at the New York University Tandon School of Engineering. Ronitt Rubinfeld, named an ACM Fellow in 2017 for contributions to delegated computation, sublinear-time algorithms, and property testing. Rob A. Rutenbar, ACM Fellow; IEEE Fellow Claude Shannon, IEEE Medal of Honor; National Medal of Science; Claude E.
Shannon Award Daniel Siewiorek, ACM, AAAS, and IEEE Fellow; winner of the IEEE/ACM Eckert-Mauchly Award David Slepian, IEEE Fellow; winner of an IEEE Centennial Medal Anna Stefanopoulou, IEEE Fellow Michael Stonebraker, a Turing Award winner; founder of many database companies, including Ingres Corporation, Illustra, Paradigm4, StreamBase Systems, Tamr, Vertica, and VoltDB; served as chief technical officer of Informix. James W. Thatcher, winner of the ACM SIGACCESS Award for Outstanding Contributions to Computing and Accessibility (2008) for his contributions to digital accessibility Eugene C. Whitney is an IEEE Fellow and a member of the IEEE Rotating Machinery, Synchronous, and Power Generation Hydraulic subcommittees. Louise Trevillyan, 2012 ACM SIGDA Pioneering Achievement Award W. Rae Young, named an IEEE Fellow in 1964 "for contributions to mobile radio and data communications systems". Bernard P. Zeigler, IEEE Fellow in recognition of his contributions to the theory of discrete event simulation Xi Zhang (professor) is an IEEE Fellow

Aerospace
Claudia Alexander (Ph.D. 1993), member of the technical staff at the Jet Propulsion Laboratory; the last project manager of NASA's Galileo mission to Jupiter; project manager of NASA's role in the European-led Rosetta mission to study comet 67P/Churyumov-Gerasimenko; once named UM's Woman of the Year Aisha Bowe (BS, MS 2009), NASA aerospace engineer; CEO of STEMBoard, a technology company Robert A.
Fuhrman (BS AE), pioneering Lockheed engineer who played a central role in the creation of the Polaris and Poseidon missiles; during more than three decades at Lockheed, he served as president of three of its companies: Lockheed-Georgia, Lockheed-California, and Lockheed Missiles & Space; became president and chief operating officer of the corporation in 1986 and vice chairman in 1988; retired in 1990 Edgar Nathaniel Gott (COE: 1909), early aviation industry executive; co-founder and first president of the Boeing Company; senior executive of several aircraft companies, including Fokker and Consolidated Aircraft Robert Hall (COE: BSE 1927), designer of the Granville Brothers Aircraft Gee Bee Z racer that won the 1931 Thompson Trophy race; Grumman test pilot; credited with major role in the design of the Grumman F4F Wildcat, F6F Hellcat and TBM Avenger Willis Hawkins (COE: BSE 1937), Lockheed engineer; contributed to the designs of historic Lockheed aircraft including the Constellation, P-80 Shooting Star, XF-90, F-94 Starfire, F-104 Starfighter and C-130 Hercules; later President of Lockheed Clarence "Kelly" Johnson (COE: 1932 BSE, 1933 MSE, 1964 PhD (Hon.)), founder of the Lockheed Skunk Works; designer of the Lockheed P-38 Lightning, P-80 Shooting Star, JetStar, F-104 Starfighter, U-2 and SR-71 Blackbird; winner of the National Medal of Science Vania Jordanova (Ph.D. 1995), physicist Edgar J. Lesher, aircraft designer; pilot; professor of aerospace engineering Elizabeth Muriel Gregory "Elsie" MacGill (COE: MSE) OC, known as the "Queen of the Hurricanes"; first female aircraft designer Joseph Francis Shea (BS 1946, MS 1950, Ph.D. 1955), manager of the Apollo Spacecraft Program office during Project Apollo Art, architecture, and design See List of University of Michigan arts alumni Arts and entertainment See List of University of Michigan arts alumni Astronauts Daniel T. 
Barry (medical internship), engineer, scientist, retired NASA astronaut Andre Douglas, earned a bachelor’s degree in mechanical engineering from the U.S. Coast Guard Academy, a master’s degree in mechanical engineering from the University of Michigan, a master’s degree in naval architecture and marine engineering from the University of Michigan, a master’s degree in electrical and computer engineering from Johns Hopkins University, and a doctorate in systems engineering from the George Washington. Named a NASA astronaut in 2021. Theodore Freeman (COE: MSAE 1960), one of the third group of astronauts selected by NASA; died in T-38 crash at Ellington Air Force Base Karl G. Henize (Ph.D. 1954), STS-51-F, 1985 James Irwin (COE: MSAE 1957), Apollo 15, 1971, one of twelve men to have walked on the moon; one of two men to ride the lunar rover on the moon; co-founded alumni club of the moon Jack Lousma (COE: BSAE 1959), Skylab 3 1973; STS-3, 1982 James McDivitt (COE: BSE AA 1959, ScD hon. 1965), graduated first in his class; Command Pilot Gemini 4 part of an all UM crew, 1965; Commander Apollo 9; Program Manager for Apollo 12–16; Brigadier general, U.S. Air Force; vice president (retired), Rockwell International Corporation Donald Ray McMonagle (MBA 2003), retired USAF Colonel, USAF; became manager of launch integration at the Kennedy Space Center in 1997 David Scott (MDNG: 1949–1950; ScD hon. 1971), Apollo 15, 1971; one of twelve men to have walked on the moon; first man to drive a lunar rover on the Moon; co-founded alumni club of the moon James M. Taylor (B.S. 1959), Air Force astronaut, test pilot Ed White (COE: MSAE 1959, Hon. PhD Astronautics 1965), first American to walk in space (Gemini 4) part of an all UM crew, 1965; died in Apollo 1 test accident, 1967 Alfred Worden (COE: MSAE 1964, Scd hon. 1971), Apollo 15, 1971; co-founded alumni club of the moon A campus plaza was named for McDivitt and White in 1965 to honor their accomplishments on the Gemini IV spacewalk. 
(At the time of its dedication, the plaza was near the engineering program's facilities, but the College of Engineering has since been moved. The campus plaza honoring them remains.) Two NASA space flights have been crewed entirely by University of Michigan degree-holders: Gemini IV by James McDivitt and Ed White in 1965, and Apollo 15 by Alfred Worden, David Scott (honorary degree) and James Irwin in 1971. The Apollo 15 astronauts left a 45-word plaque on the moon establishing its own chapter of the University of Michigan Alumni Association. The Apollo 15 crew also named a crater on the moon "Wolverine".

Belles lettres
See List of University of Michigan arts alumni

Business
See List of University of Michigan business alumni

Churchill Scholarship or Marshall Scholarship
Churchill Scholarships are annual scholarships offered to graduates of participating universities in the United States and Australia to pursue studies in engineering, mathematics, or other sciences for one year at Churchill College in the University of Cambridge.
- 2011–2012: David Montague, Pure Mathematics
- 2009–2010: Eszter Zavodszky, Medical Genetics
- 2007–2008: Lyric Chen, BA in Political Science and Economics from the University of Michigan; Marshall Scholar 2007
- 2006–2007: Charles Crissman, Pure Mathematics
- 2005–2006: Christopher Hayward, Applied Mathematics and Theoretical Physics
- 2005–2006: Jacob Bourjaily, graduated with honors, degree in Mathematics and Physics; Marshall Scholar 2005
- 1996–1997: Amy S. Faranski, Engineering
- 1993–1994: Ariel K. Smits Neis, Clinical Biochemistry
- 1990–1991: David J. Schwartz, Chemistry
- 1989–1990: Eric J. Hooper, Physics
- 1987–1988: Michael K. Rosen, Chemistry
- 1985–1986: Laird Bloom, Molecular Biology
- 1984–1985: Julia M. Carter, Chemistry
- 1979–1980: David W. Mead, Engineering, Chemical

Computers, engineering, and technology
- Benjamin Franklin Bailey, studied electrical engineering; chief engineer of the Fairbanks Morse Electrical Manufacturing Company and the Howell Electrical Motor Company; director of the Bailey Electrical Company; vice-president and director of the Fremont Motor Corporation; became professor of electrical engineering at UM in 1913
- Arden L. Bement Jr. (Ph.D. 1963), Director of the National Science Foundation (NSF); awarded ANSI's Chairman's Award in 2005
- James Blinn (BS Physics and Communications Science 1970, MS Information and Control Engineering 1972), 3D computer imaging pioneer; 1991 MacArthur Fellowship for his work in educational animation
- Katie Bouman (BS Electrical Engineering 2011), developer of the algorithm used in filtering the first images of a black hole taken by the Event Horizon Telescope
- Lee Boysel (BSE EE 1962, MSE EE 1963), did pioneering work on metal-oxide-semiconductor transistors and systems during his years at IBM, Fairchild Semiconductor and McDonnell (now McDonnell-Douglas) Aerospace Corporation; founded Four-Phase Systems Inc., which produced the first LSI semiconductor memory system and the first LSI CPU; president, CEO and chairman of Four-Phase, which was purchased by Motorola in 1981
- John Seely Brown (Ph.D. 1970), former Chief Scientist of Xerox; co-author of The Social Life of Information
- Jim Buckmaster (MED: MDNG), President and CEO of Craigslist since November 2000; formerly its CTO and lead programmer
- Alice Burks (M.A. 1957), author of children's books and books about the history of electronic computers
- Arthur W. Burks (Ph.D. 1941), member of the team that designed the ENIAC computer as well as the IAS machine; frequent collaborator of John von Neumann; pioneer in computing education
- Robert Cailliau (COE: MSc Computer, Information and Control Engineering 1971), co-developer of the World Wide Web; joined CERN in 1974 as a Fellow in the Proton Synchrotron division, working on the control system of the accelerator; became group leader of Office Computing Systems in the Data Handling division in 1987; in 1989, with Tim Berners-Lee, independently proposed a hypertext system for access to the CERN documentation, which led to a common proposal in 1990 and then to the World Wide Web; won the 1995 ACM Software System Award with Berners-Lee
- Dick Costolo (LS&A: BA), former COO and former CEO of Twitter; founder of Feedburner, the RSS service bought by Google in 2007
- Edward S. Davidson, professor emeritus in Electrical Engineering and Computer Science at the University of Michigan; IEEE award winner
- Paul Debevec (ENG: BA CSE), researcher in computer graphics at the University of Southern California's Institute for Creative Technologies; known for his pioneering work in high-dynamic-range imaging and image-based modelling and rendering; honored by the Academy of Motion Picture Arts and Sciences in 2010 with a Scientific and Engineering Academy Award
- Tony Fadell (COE: BSE CompE 1991), "father" of the Apple iPod; created all five generations of the iPod and the Apple iSight camera
- James D. Foley (Ph.D. 1969), professor at the Georgia Institute of Technology; co-author of several widely used textbooks in the field of computer graphics, of which over 300,000 copies are in print; ACM Fellow; IEEE Fellow; recipient of the 1997 Steven A. Coons Award
- Stephanie Forrest (Ph.D.), Professor of Computer Science at the University of New Mexico in Albuquerque; recipient of the 2011 ACM–AAAI Allen Newell Award
- Lee Giles (M.S.), co-creator of CiteSeer; David Reese Professor of Information Sciences and Technology, Pennsylvania State University; ACM Fellow; IEEE Fellow
- Greg Joswiak (B.S. CSE 1986), Senior Vice President, Worldwide Marketing, Apple Inc.; credited with marketing the original iPod, iPad and iPhone
- John Henry Holland, first UM Computer Science Ph.D.; originator of genetic algorithms
- Larry Paul Kelley, founder of the Shelby Gem Factory
- Thomas Knoll (COE: BS EP 1982, MSE CI CE 1984), co-creator of Adobe Photoshop
- Robert A. Kotick (MDNG), also known as Bobby Kotick; CEO, president, and a director of Activision Blizzard
- John R. Koza (Ph.D. 1972), computer scientist; consulting professor at Stanford University; known for pioneering the use of genetic programming for the optimization of complex problems
- David Kuck (BS), professor in the Computer Science Department at the University of Illinois at Urbana-Champaign, 1965–1993; IEEE award winner
- Chris Langton (Ph.D.), computer science; "father of artificial life"; founder of the Swarm Corporation; distinguished expellee of the Santa Fe Institute
- Eugene McAllaster (BS 1889), distinguished Seattle naval architect and marine engineer with his own firm, McAllaster & Bennett; designer of Seattle's historic fireboat Duwamish (1909); consulting engineer on Seattle's massive Denny Hill and Jackson Street regrades
- Sid Meier, considered by some to be the "father of computer gaming"; created the computer games Civilization, Pirates!, Railroad Tycoon and SimGolf
- Kevin O'Connor (BS EE 1983), founder of DoubleClick, initially sold for $1.2 billion and later acquired by Google for $3.1 billion
- Kunle Olukotun (Ph.D.), pioneer of multi-core processors; professor of electrical engineering and computer science at Stanford University; director of the Pervasive Parallelism Laboratory at Stanford; IEEE award winner
- Nneka Egbujiobi, lawyer and founder of Hello Africa
- Larry Page (COE: BSE 1995), co-founder of Google; named a World Economic Forum Global Leader for Tomorrow (2002); member of the National Advisory Committee of the University of Michigan College of Engineering; with co-founder Sergey Brin, winner of the 2004 Marconi Prize; trustee on the board of the X PRIZE; elected to the National Academy of Engineering in 2004
- Eugene B. Power (BUS: BA 1927, MBA 1930), founder of University Microfilms Inc. (now ProQuest); K.B.E., hon.; president of the Power Foundation; honorary fellow of Magdalene College
- Niels Provos (Ph.D.), researcher in secure systems and cryptography
- Avi Rubin (Ph.D.), a leading authority on computer security; led the research team that successfully cracked the security code of Texas Instruments' RFID chip; holds eight patents for computer security-related inventions
- Claude E. Shannon (COE: BS EE 1936, BA Math 1936), considered by some the "father of digital circuit design theory" and "father of information theory"; a paper drawn from his 1937 master's thesis, "A Symbolic Analysis of Relay and Switching Circuits", was published in the 1938 issue of the Transactions of the American Institute of Electrical Engineers and won the 1940 Alfred Noble Prize
- Joseph Francis Shea (BS 1946, MS 1950, Ph.D. 1955), manager of the Apollo Spacecraft Program office during Project Apollo
- Irma M. Wyman (COE: BS 1949), systems thinking tutor; first female CIO of Honeywell
- Niklas Zennström, founder of Skype; holds a dual degree in business and computer science from Uppsala University; spent his final year in the US at the University of Michigan
- Peter B. Lederman (BSE ChE 1953, MSE 1957, Ph.D. 1961), director of the American Institute of Chemical Engineers foundation

Turing and Grace Murray Hopper Award winners
- Frances E. Allen (M.Sc. 1957), first woman to win the Turing Award (2006); IBM computer science veteran; honored by the Association for Computing Machinery for her work on program optimization and PTRAN, program optimization work that led to modern methods for high-speed computing
- Edgar F. Codd (Ph.D. 1965), mathematician; computer scientist; laid the theoretical foundation for relational databases; received the Turing Award in 1981
- Stephen A. Cook (A.B. 1961), Turing Award 1982; formalised the notion of NP-completeness in a famous 1971 paper, "The Complexity of Theorem Proving Procedures"
- Bill Joy (COE: BSE CompE 1975, 2004 D.Eng. (Hon.)), co-founder of Sun Microsystems; given the 1986 Grace Murray Hopper Award by the ACM for his work on the UNIX operating system
- Jennifer Rexford (MSE 1993; Ph.D. 1996), winner of ACM's 2004 Grace Murray Hopper Award for outstanding young computer professional of the year
- Michael Stonebraker, computer scientist and Turing Award winner specializing in database research

Criminals, murderers, and infamous newsmakers
- Hawley Harvey Crippen (MED: 1882), infamous murderer; an American homeopath, ear and eye specialist and medicine dispenser; hanged in 1910 in Pentonville Prison in London, England, for the murder of his wife Cora Henrietta Crippen
- François Duvalier (Public Health, 1944–45), repressive dictator of Haiti; excommunicated from the Catholic Church; estimates of those killed by his regime are as high as 30,000
- Theodore Kaczynski (Ph.D. 1967), better known as the Unabomber; one of UM's most promising mathematicians, he earned his Ph.D. by solving, in less than a year, a math problem that his advisor had been unable to solve; abandoned his career to engage in a mail bombing campaign
- Jack Kevorkian (MED: MD Pathology 1952), found guilty of second-degree homicide after committing euthanasia by administering a lethal injection to Thomas Youk; spent eight years in prison
- John List, murderer and fugitive for eighteen years; caught after being featured on America's Most Wanted; died in prison
- Nathan F. Leopold, Jr., thrill killer of Leopold and Loeb; transferred from Michigan to the University of Chicago in 1922, before murdering 14-year-old Robert "Bobby" Franks
- Richard A. Loeb (BA 1923), thrill killer of Leopold and Loeb; youngest graduate in the University of Michigan's history; murdered 14-year-old Robert "Bobby" Franks
- Larry Nassar (1985), USA national team doctor who sexually assaulted approximately 250 people
- Herman Webster Mudgett, a.k.a. H. H. Holmes (MED: MD 1884), 19th-century serial killer; one of the first documented American serial killers; confessed to 27 murders, of which nine were confirmed; actual body count could be as high as 250; took an unknown number of his victims from the 1893 World's Columbian Exposition; his story was novelized by Erik Larson in his 2003 book The Devil in the White City

"Father of..."
- John Jacob Abel (PHARM: Ph.D. 1883), North American "father of pharmacology"
- Leon Jacob Cole (June 1, 1877 – February 17, 1948), American geneticist and ornithologist; regarded as the father of American bird banding
- George Dantzig (MA Math 1937), father of linear programming; studied at UM under T.H. Hildebrandt, R.L. Wilder, and G.Y. Rainich
- Tony Fadell (COE: BSE CompE 1991), "father" of the Apple iPod; created all five generations of the iPod and the Apple iSight camera
- Moses Gomberg (February 8, 1866 – February 12, 1947), chemistry professor at the University of Michigan; called the father of radical chemistry
- Saul Hertz, M.D. (April 20, 1905 – July 28, 1950), American physician who devised the medical uses of radioactive iodine; pioneered the first targeted cancer therapies; called the father of the field of theranostics, combining diagnostic imaging with therapy in a single chemical substance
- Ellis R. Kerley (September 1, 1924 – September 3, 1998), American anthropologist and pioneer in the field of forensic anthropology
- Samuel Kirk (1904–1996), American psychologist and educator recognized for his accomplishments in the field of special education; sometimes referred to as the "Father of Special Education"
- Chris Langton (Ph.D.), computer science; "father of artificial life"; founder of the Swarm Corporation; distinguished expellee of the Santa Fe Institute
- Theodore C. Lyster, M.D. (10 July 1875 – 5 August 1933), United States Army physician and aviation medicine pioneer; "father of aviation medicine"
- Li Shouheng (Chinese: 李寿恒; pinyin: Lǐ Shòuhéng; 1898–1995), also known as S. H. Li, Chinese educator, chemist and chemical engineer; founded the first chemical engineering department in China and is thus regarded as the Father of Modern Chinese Chemical Engineering
- Sid Meier, considered by some to be the "father of computer gaming"; created the computer games Civilization, Pirates!, Railroad Tycoon and SimGolf
- Daniel Okrent (BA 1969), public editor of The New York Times; editor-at-large of Time Inc.; Pulitzer Prize finalist in history (Great Fortune, 2004); founding father of Rotisserie League Baseball
- Oyekunle Ayinde "Kunle" Olukotun, Cadence Design Systems Professor in the Stanford School of Engineering, Professor of Electrical Engineering and Computer Science at Stanford University, and director of the Stanford Pervasive Parallelism Lab; known as the "father of the multi-core processor"
- Robert E. Park, whom Emory S. Bogardus acknowledges as the father of human ecology, proclaiming, "Not only did he coin the name but he laid out the patterns, offered the earliest exhibit of ecological concepts, defined the major ecological processes and stimulated more advanced students to cultivate the fields of research in ecology than most other sociologists combined."
- Raymond Pearl, American biologist; regarded as one of the founders of biogerontology
- John Clark Salyer II, received his MS from the University of Michigan in 1930; for his efforts as head of the Division of Wildlife Refuges, he has become known as the "Father of the National Wildlife Refuge System"
- Claude Shannon (April 30, 1916 – February 24, 2001), American mathematician, electrical engineer, and cryptographer known as "the father of information theory" and the "father of digital circuit design theory"
- Richard Errett Smalley (June 6, 1943 – October 28, 2005), Gene and Norman Hackerman Professor of Chemistry and Professor of Physics and Astronomy at Rice University; upon Smalley's death, the US Senate passed a resolution in his honor, crediting him as the "Father of Nanotechnology"
- William A. Starrett, Jr. (June 14, 1877 – March 25, 1932), American builder and architect of skyscrapers; best known as the builder of the Empire State Building in New York City; once nicknamed the "father of the skyscraper"
- Larry Teal (March 26, 1905 – July 11, 1984), considered by many to be the father of American orchestral saxophone
- Olke Uhlenbeck, biochemist known for his work in RNA biochemistry and RNA catalysis; completed his undergraduate degree at the University of Michigan at Ann Arbor in 1964; some have called him the "Father of RNA"
- Mark Weiser (July 23, 1952 – April 27, 1999), computer scientist and chief technology officer (CTO) at Xerox PARC; widely considered to be the father of ubiquitous computing
- Wu Ta-You (simplified Chinese: 吴大猷; traditional Chinese: 吳大猷; pinyin: Wú Dàyóu) (September 27, 1907 – March 4, 2000), Chinese physicist and writer who worked in the United States, Canada, mainland China and Taiwan; has been called the Father of Chinese Physics

Founders and co-founders
- The Admiral Group was founded by Henry Engelhardt (B.A.), Chief Executive of Admiral Group, a British motor insurance company, and English billionaire
- Adobe Photoshop was created by Thomas Knoll (COE)
- Apollo 15: the all-Michigan crew of Alfred Worden, David Scott (attended two years and later received an honorary degree) and James Irwin left a 45-word plaque on the moon in 1971, founding its own chapter of the University of Michigan Alumni Association on the moon
- Bain Capital was co-founded by founding partner Edward Conard (BSE 1978)
- The Boeing Company was co-founded by its first president, Edgar Nathaniel Gott (May 2, 1887 – July 17, 1947), an early American aviation industry executive who was also a senior executive of several aircraft companies, including Fokker and Consolidated Aircraft
- Borders was co-founded by Louis Borders (BA 1969) and his brother Tom Borders (MA 1966)
- The Buffalo Bills, a team in the National Football League (NFL), were founded by Ralph Wilson (LAW: attended)
- The Leo Burnett Company was founded by Leo Burnett (BA 1914), journalism and advertising pioneer
- Buttonwood Development and Town Residential were co-founded by Andrew Heiberger, owner and CEO
- DoubleClick Inc. was co-founded by Kevin O'Connor (COE: BSE EE 1983)
- EQ Office, a real estate development firm, was founded by Samuel Zell (LAW: AB 1963, JD 1966)
- General Motors was co-founded in 1908 by Frederic Latta Smith, who was also one of the founders of the Olds Motor Works in 1899
- Google was co-founded by Larry Page (COE)
- Groupon was co-founded by Brad Keywell (BUS: BBA 1991; LAW: JD 1993), principal of Groupon
- Haworth, Inc., a manufacturer of office environments that grew from a garage-shop venture in 1948 to a $1.4 billion global corporation, was co-founded by Gerrard Wendell "(G.W.)" Haworth
- H&R Block Inc. was co-founded by Henry W. Bloch (BS 1944)
- LexION Capital Management was co-founded by Elle Kaplan (BA), CEO
- Merrill Lynch was co-founded by Charles Edward Merrill (attended the Law School in 1906–07 but did not graduate)
- The National Baseball Seminar was founded by Bill Gamson. When he moved to the University of Michigan in 1962, he recruited about 25 people to his game, including Robert Sklar, a history professor. In 1968, Sklar mentioned it to Daniel Okrent, a student he was advising. A decade later, Okrent invented the more complex Rotisserie League Baseball, which lets its "owners" make in-season trades; it is considered the closest ancestor to today's billion-dollar fantasy sports industry
- Redbox was founded by Gregg Kaplan, who is also the founder of Modjule LLC and the former President and COO of Coinstar
- The Related Group was founded by Stephen M. Ross (BUS: BBA 1962), real estate developer
- Saba Capital Management was founded by hedge fund manager Boaz Weinstein (BA 1995), who specialized in credit derivatives trading
- Science Applications International Corporation was founded by J. Robert Beyster (COE: BSE, MS, Ph.D.), its chairman, president, and CEO
- Scientific Games Corporation was co-founded by John Koza (MA Mathematics 1966; BA 1964, MS 1966, Ph.D. 1972 Computer Science), venture capitalist who co-invented the scratch-off instant lottery ticket
- Lockheed's Skunk Works was founded by Kelly Johnson (COE)
- Syntel was founded by Bharat Desai (BUS: MBA 1981), president and CEO; Indian billionaire
- Stryker Corporation, a medical device company, was founded by Dr. Homer Stryker (MED: M.D. 1925; D. 1980)
- Twilio was co-founded by Jeff Lawson, who owns 5% of the $45 billion entity
- Uptake Technologies, an industrial AI software provider, was founded by its CEO, Brad Keywell (BUS: BBA 1991; LAW: JD 1993), serial entrepreneur
- Wasserstein Perella & Co. was co-founded by Bruce Wasserstein (AB 1967), CEO of Lazard Freres

Educators
- Frank Aarebrot, professor of comparative politics at the University of Bergen
- Ida Louise Altman (A.B.), author of Emigrants and Society
- Edgardo J. Angara (LAW: LLM 1964), Secretary of Agriculture (emeritus) and former Executive Secretary of the Philippines
- W. Brian Arthur (MA 1969), Lagrange Prize in Complexity Science 2008; Schumpeter Prize in Economics 1990; Guggenheim Fellow 1987–88; Fellow of the Econometric Society
- John "Jack" William Atkinson (Ph.D. 1950), psychologist who pioneered the scientific study of human motivation, achievement and behavior
- Henry Moore Bates (Ph.B. 1890), dean of the University of Michigan Law School (1910–1939); Fellow of the American Academy of Arts and Sciences
- William J. Beal (A.B. 1859, A.M. 1862), namesake of the W. J. Beal Botanical Garden
- Mitchell Berman, Professor of Law at the University of Pennsylvania Law School
- Mary Frances Berry, Geraldine R. Segal Professor of American Social Thought and Professor of History at the University of Pennsylvania; Civil Rights Commissioner, 1980–2004
- Lewis Binford (Ph.D.), archaeologist best known for his role in establishing the "New Archaeology" movement of the 1960s
- Frank Nelson Blanchard (Ph.D. 1919), herpetologist and professor of zoology at the University of Michigan
- Elise M. Boulding (Ph.D.), educator and author in the field of Peace & Conflict Studies
- George W. Breslauer (A.B., A.M., Ph.D.), political science professor and Russia specialist at the University of California, Berkeley; Berkeley's executive vice chancellor and provost
- Allen Britton (Ph.D. 1949), music educator; former president of the Music Educators National Conference
- Urie Bronfenbrenner (Ph.D. 1942), helped create the federal Head Start program; credited with creating the interdisciplinary field of human ecology
- Frederic G. Cassidy (Ph.D. 1938), Editor-in-Chief of the Dictionary of American Regional English from 1962 until his death in 2000
- June Rose Colby (Ph.D. 1886), professor of literature 1892–1931; first woman at the University of Michigan to receive a Ph.D. by examination
- Katharine Coman (AB 1880), social activist and economist; specialized in the development of the American West; professor of history 1883–1900; chaired the Economics Department; dean of Wellesley College, which named a professorship in her honor
- Charles Horton Cooley (BA 1887; Ph.D. 1894), sociologist best known for his concept of the "looking-glass self", which expanded William James's idea of self to include the capacity of reflection on one's own behavior
- Natalie Zemon Davis (Ph.D. 1959), Canadian and American historian of the early modern period; awarded the 2010 Holberg International Memorial Prize, worth 4.5 million Norwegian kroner (~$700,000 US), for her narrative approach to the field of history
- Bueno de Mesquita (Ph.D. 1971), political scientist and game theoretician
- Paul Dressel (Ph.D.), founding director of Michigan State University's Counseling Center
- James Stemble Duesenberry, economist; made a significant contribution to the Keynesian analysis of income and employment with his 1949 doctoral thesis "Income, Saving and the Theory of Consumer Behavior"
- Aaron Dworkin (A.B. 1997, M.M. 1998), 2005 MacArthur Fellow; founder and president of the Detroit-based Sphinx Organization, which strives to increase the number of African-Americans and Latinos having careers in classical music
- W. Ralph Eubanks (M.A.), author, journalist, professor, public speaker, business executive; Guggenheim award winner
- David Fasenfest (Ph.D. 1984), Associate Professor of Sociology at Wayne State University
- Heidi Li Feldman (J.D. 1990; Ph.D. 1993), law professor
- Sidney Fine, professor of history at Michigan
- Neil Foley (Ph.D.), historian; Guggenheim award winner
- Joseph S. Freedman (Master of Information and Library Science 1990), Professor of Education at Alabama State University
- Helen Beulah Thompson Gaige (1890–1976), herpetologist; curator of Reptiles and Amphibians for the Museum of Zoology at the University of Michigan; specialist in neotropical frogs
- Edwin Francis Gay (AB 1890), first Dean of Harvard Business School, 1908–1919
- C. Lee Giles (M.S.), David Reese Professor of Information Sciences and Technology, Professor of Computer Science and Engineering, and Professor of Supply Chain and Information Systems, Pennsylvania State University; Fellow of the ACM, IEEE and INNS
- Domenico Grasso (Ph.D. 1973), sixth Chancellor of the University of Michigan-Dearborn
- Roy Grow, Kellogg Professor of International Relations and director of the International Relations program at Carleton College; his specialty is the political economy of East Asia, specifically China and Southeast Asia
- Jack Guttenberg, professor of law at Capital University Law School
- Alice Hamilton (MED: MD 1893), toxicologist; scientist; first female faculty member at Harvard Medical School
- Ann Tukey Harrison (BA 1957, PhD 1962), professor of French language and literature at Michigan State University
- Elaine Catherine Hatfield (BA), Professor of Psychology at the University of Hawaii; earned her Ph.D. at Stanford; pioneered the scientific study of passionate love and sexual desire
- Shelley Haley, Professor of Classics and Africana Studies at Hamilton College
- Jessica Hellmann (B.S.), Professor of Ecology at the University of Minnesota
- Clark Leonard Hull (M.A.), psychologist
- Lyman T. Johnson (AM 1931), grandson of slaves; successfully sued to integrate the University of Kentucky, opening that state's colleges and universities to African-Americans five years before the landmark Brown v. Board of Education ruling
- Michael P. Johnson (Ph.D. 1974), emeritus professor of sociology, Pennsylvania State University
- Rosabeth Moss Kanter (MA 1965, Ph.D. 1967), first tenured female professor at Harvard Business School
- Roberta Karmel (born 1937), Centennial Professor of Law at Brooklyn Law School; first female Commissioner of the U.S. Securities and Exchange Commission
- Nafe Katter (BA, MA, PhD), Professor of Theatre at the University of Connecticut; frequent stage actor and director
- Mark Kilstofte (D.M.A. 1992), composer; professor at Furman University, Greenville, South Carolina; Guggenheim award winner
- George Kish (Ph.D.), cartographer
- John E. Laird (B.S. 1975), computer scientist
- Thomas A. LaVeist (MA 1985, PhD 1988, PDF 1990), Dean and Weatherhead Presidential Chair in Health Equity at the Tulane University School of Public Health & Tropical Medicine
- Stanley Lebergott (BA, MA), former government economist; Wesleyan University professor
- Rensis Likert (B.A. 1926 in Sociology and Economics), founder of the University of Michigan Institute for Social Research and its director from its inception in 1946 until 1970
- Lynda Lisabeth, professor in the School of Public Health at the University of Michigan
- Howard Markel (A.B., English Literature, 1982; M.D., 1986), George E. Wantz Distinguished Professor of the History of Medicine at the University of Michigan; Guggenheim Fellow; Member of the National Academy of Medicine; author, pediatrician, medical journalist
- William McAndrew (B.A., Literature, 1886), superintendent of Chicago Public Schools
- Nina McClelland (Ph.D. 1968), Dean Emeritus and former professor of chemistry at the University of Toledo; Fellow of the American Chemical Society
- Paul Robert Milgrom (A.B. 1970), economist
- Martha Minow (LS&A: A.B. 1975), named Dean of Harvard Law School in 2009
- James Moeser (Ph.D. 1967), ninth chancellor of the University of North Carolina at Chapel Hill, 2000–present
- Mayo Moran (LAW: LLM 1992), named Dean of the University of Toronto Faculty of Law in 2005
- Marjorie Hope Nicolson (A.B. 1914), first female President of Phi Beta Kappa; Guggenheim award winner
- Eugene A. Nida (Ph.D.), linguist; developer of the dynamic-equivalence Bible-translation theory
- Nicholas Nixon (B.A. 1969), photographer known for portraiture and documentary photography, and for championing the use of the 8x10 inch view camera; Guggenheim award winner
- Mary Beth Norton (B.A. 1964), American historian; Mary Donlon Alger Professor of American History, Department of History, Cornell University; Guggenheim award winner
- Norman Ornstein (MA Political Science, PhD 1974 Political Science), Scholar, Center for Advanced Study in the Behavioral Sciences, Stanford University
- Scott E. Page (B.A. 1985), social scientist
- Clara Claiborne Park (1923–2010), instructor at Williams College; author; raised awareness of autism
- Michael Posner (PhD), psychologist; winner of the National Medal of Science
- John Oren Reed (1856–1916), Ph.D. at Jena (1897); professor of physics
- Shai Reshef (M.A.), Israeli businessman; educational entrepreneur; founder and president of University of the People, a non-profit, tuition-free, online academic institution dedicated to the democratization of higher education
- John Ruhl (BS Physics 1987), Professor of Physics at UCSB and Case Western Reserve University; primary investigator of the ACBAR, Boomerang, South Pole Telescope, and Spider Telescope projects; author of Princeton Problems in Physics
- Lucy Maynard Salmon (B.A. 1876, M.A. 1883), American historian; Professor of History, Vassar College, 1889–1927; member of the American Historical Association's Committee of Seven
- Floyd VanNest Schultz (Ph.D. EE 1950), educator and electrical engineering scientist
- Robert Scott (LAW: SJD 1973), Dean of the University of Virginia School of Law, 1991–2000
- Wilfrid Sellars (B.A. 1933), philosopher and Rhodes Scholar
- Al Siebert (M.A., Ph.D. 1965), Menninger Fellow; Resiliency Center Director; author of The Resiliency Advantage: Master Change, Bounce Back from Setbacks, awarded the 2006 Independent Publishers' award for Best Self-Help Book
- Holly Martin Smith, Distinguished Professor of Philosophy at Rutgers University
- Claude Steiner (Ph.D. 1965), founding member and teaching fellow of the International Transactional Analysis Association
- Clarence Stephens (Ph.D.), the teaching techniques he introduced at Potsdam, and earlier at Morgan State, have been adopted by many mathematics departments across the country
- George Sugihara (B.S. 1973), theoretical biologist; has worked across a wide variety of fields, including landscape ecology, algebraic topology, algal physiology and paleoecology, neurobiology, atmospheric science, fisheries science, and quantitative finance
- Leonard Suransky, winner of the Des Lee Visiting Lectureship in Global Awareness at Webster University
- G. David Tilman (Ph.D. 1976), ecologist; Guggenheim award winner
- Amos Tversky (Ph.D. 1965), long-time collaborator with Daniel Kahneman; co-founder of prospect theory in economics; died of cancer before Kahneman received the Nobel Prize and was featured prominently and fondly in his Nobel speech
- Zalman Usiskin (Ph.D.), educator; Director of the University of Chicago School Mathematics Project
- Robert W. Vishny (AB, highest distinction, 1981), economist and the Eric J. Gleacher Distinguished Service Professor of Finance at the University of Chicago Graduate School of Business; prominent representative of the school of behavioural finance; his research papers (many written jointly with Andrei Shleifer, Rafael La Porta and Josef Lakonishok) are among the most often cited recent research works in the field of economic sciences
- Robert M. Warner (MA 1953, Ph.D.), Dean Emeritus of the University of Michigan's School of Information (the former School of Library Science), 1985–92; professor emeritus of the School of Information; appointed sixth Archivist of the United States in July 1980 by President Jimmy Carter; continued to serve under President Ronald Reagan through April 15, 1985
- Albert H. Wheeler (SPH: Ph.D.), life-sciences professor and politician in Ann Arbor; the city's first African-American mayor, 1975–1978; became assistant professor of microbiology and immunology at Michigan in 1952; eventually became the university's first tenured African-American professor
- David E. Weinstein (LS&A: MA 1988, Ph.D. 1991), Carl Sumner Shoup Professor of the Japanese Economy at Columbia University; contributed to new understanding of variety gains from international trade; expert on the Japanese economy; Research Director of the Japan Project at the National Bureau of Economic Research; Member of the Council on Foreign Relations; Member of the Federal Economic Statistics Advisory Committee
- Phyllis Wise (M.S. 1969, Ph.D.
1972), University of Washington provost or Chief Academic officer; manages $3 billion annual budget Frank Wu (LAW: JD 1991), named Dean of Hastings Law School in 2009 Bret Weinstein (MA, Ph.D. 2009), professor at Evergreen State College until 2017 University presidents Theophilus C. Abbot (LL.D. 1890), third President of Michigan State University Charles Kendall Adams (1861, 1862), historian; second President of Cornell University (1885–1892); President of the University of Wisconsin (1892–1902) James Rowland Angell (BA 1890), tenth President of Yale University Dr. Khaled S. Al-Sultan (MS, applied mathematics; COE: Ph.D. in IOE), third rector of King Fahd University for Petroleum and Minerals, a public university in Dhahran, Saudi Arabia Charles E. Bayless (MBA), president of West Virginia University Institute of Technology Warren E. Bow (M.A.), president of Wayne State University Detlev Bronk (Ph.D. 1926), scientist, educator, and administrator; credited with establishing biophysics as a recognized discipline; President of Johns Hopkins; president of The Rockefeller University from 1953 to 1968. Stratton D. Brooks (BA 1896), president of the University of Oklahoma and the University of Missouri Gaylen J. Byker (LAW: JD), President of Calvin College; Offshore Energy Development Corporation Partners William Wallace Campbell (COE: BSE 1886), astronomer; tenth President of the University of California (1923–30); elected president of the National Academy of Sciences in 1931er, Head of Dev Benjamin Cluff (B.A.), first president of Brigham Young University; the school's third principal Joanne V. Creighton (Ph.D. 
in English literature), 17th president of Mount Holyoke College in South Hadley, Massachusetts; provost and professor of English 1990–1994 at Wesleyan University; Wesleyan's interim president 1994–1995 James Danko (MBA), appointed 21st president of Butler University in 2011 John DiBiaggio (MA), president, University of Connecticut 1979–1985, Michigan State University 1985–1992, Tufts University 1992–2001 Saul Fenster (Ph.D., 1959), 6th President of New Jersey Institute of Technology 1978–2002 Lewis Ransom Fiske (A.B. 1850; A.M.; LL.D. 1879), second president of the Agricultural College of the State of Michigan (now Michigan State University) 1859–1862; president of Albion College 1877–1898 Deborah Freund (MPH, MA, Ph.D.), president of Claremont Graduate University David Friday, president of the U.S. state of Michigan's Michigan Agricultural College (now Michigan State University), 1922–1923; graduate of the University of Michigan Allan Gilmour (academic) (MBA), inaugurated as 11th president of Wayne State University in 2011 Thomas J. Haas, president, Grand Valley State University Eugene Habecker (Ph.D.), 30th president of Taylor University William W. Hagerty (COE:M.S. 1943, Ph.D. 1947), former president of Drexel University Cindy Hill, Wyoming Superintendent of Public Instruction since 2011, received master's degree from the University of Michigan Harry Burns Hutchins; fourth president of the University of Michigan (1909–1920); organized and led the law department at Cornell University from 1887 to 1894 Mark Kennedy is an American businessman, politician, and administrator currently serving as the president of the University of Colorado (CU) system. Previously he served as 12th president of the University of North Dakota, Raynard S. 
Kington (MED), former deputy director of the National Institutes of Health; 13th president of Grinnell College; earned medical degree from the University of Michigan at age 21 Bradford Knapp was the President of the Alabama Polytechnic Institute, now known as Auburn University from 1928 to 1933. Kathy Krendl, president, Otterbein College (Ohio) James Raymond Lawson (Ph.D.), president, Fisk University (1967–1975) Jeffrey S. Lehman (LAW: JD 1977), 11th President of Cornell University (2003–2005) Wallace D. Loh (Ph.D.), president University of Maryland Maud Mandel, Professor of History and Judaic Studies and Dean of the College at Brown University; president of Williams College Carroll Vincent Newsom (1904–1990) was an American educator who served as the eleventh NYU President Moses Ochonu, professor of African History at Vanderbilt University Alice Elvira Freeman Palmer (A.B. 1876, Ph.D. Hon 1882), appointed head of the history department at Wellesley College in 1879; named the acting president of Wellesley in 1881; became its president in 1882 Constantine Papadakis (Ph.D.), Drexel University President 1995–2009 William H. Payne (1836–1907), Chancellor of the University of Nashville and President of Peabody College (both of which later merged with Vanderbilt University), 1887–1901 Scott Ransom, president, University of North Texas Health Science Center William Craig Rice, president, Shimer College Henry Wade Rogers (BA, MA) was President of Northwestern University from 1890 to 1900. Jonathan Rosenbaum, President of Gratz College Alexander Grant Ruthven (Ph.D. 1906); president of the University of Michigan Austin Scott, tenth President of Rutgers College (now Rutgers University), 1891–1906 William Spoelhof (MA 1937), President of Calvin College 1951–76; namesake of Asteroid 129099 Spoelhof Rudolf Steinberg from 2000 to 2008 was president of the Johann Wolfgang Goethe University in Frankfurt. 
Carl Strikwerda (Ph.D.), William & Mary's Dean of the Faculty of Arts & Sciences; named 14th president of Elizabethtown College in Pennsylvania in 2011
Beverly Daniel Tatum, president, Spelman College
Charles M. Vest, president, National Academy of Engineering; former president, MIT
B. Joseph White (BUS: Ph.D. 1975), 16th president of the University of Illinois
Jerome Wiesner (COE: BS 1937, MS 1938, Ph.D. 1950), MIT Provost 1968–1971; President of MIT 1971–1980
Edwin Willits (A.B. 1855), first Assistant U.S. Secretary of Agriculture under Norman Jay Coleman during Grover Cleveland's first administration; 4th President of Michigan Agricultural College
Richard F. Wilson (ED 1978), president, Illinois Wesleyan University

Fiction, nonfiction

See List of University of Michigan arts alumni.

Fictional Wolverines

In 24, Nadia Yassir has a B.A. in Languages from the university.
In 321 Days in Michigan, Antonio Chico García (played by Chico García), a young and successful executive, is condemned to prison for white-collar crimes; to hide this fact, he pretends he is spending time at the University of Michigan working on a master's degree.
In Ally McBeal, the character Billy, played by Gil Bellows, is a Michigan student.
In A Tree Grows in Brooklyn by Betty Smith, Francie prepares to take classes at the University of Michigan.
In Air Force One, U.S. President James Marshall, played by Harrison Ford, attended the University of Michigan.
In The Americans, the character "Kimmy" is a junior at Michigan.
In American Pie and other films in the series, Kevin Myers, played by Thomas Ian Nicholas, attends Michigan.
In Answer This!, Christopher Gorham plays UM student Paul Tarson.
In Bad Company, Laurence Fishburne plays a corrupt intelligence analyst who is a Michigan graduate.
In Blindspot, Rob Brown plays an FBI agent and former Michigan student-athlete.
In The Big Chill, Michael Gold, played by Jeff Goldblum, worked at The Michigan Daily.
In The Chamber (1996), Chris O'Donnell plays a Michigan-educated attorney defending a death penalty case.
In The Company You Keep, Brit Marling plays a University of Michigan Law student.
In Continental Divide, Allen Garfield plays Max Bernbaum, an All-American football player from Michigan.
In Crisis, Gary Oldman plays a reputationally resurrected biochemist who is hired by Michigan.
In Entourage, Ari Gold earned his J.D./M.B.A. at the Ross School of Business.
In The Five-Year Engagement, Emily Blunt plays Violet Barnes, a post-doctoral fellow in psychology at UM.
In Freaks and Geeks, Lindsay leaves for the academic summit at the University of Michigan.
In Ghostbusters (2016), Melissa McCarthy plays Abby Yates, a University of Michigan graduate.
In The Good Place, one of the Eleanor Shellstrops is a graduate of Michigan Law.
In The Green Lantern, Guy Gardner is a superhero alum who double-majored in education and psychology; he played football with another superhero, Steel.
In House, Dr. Gregory House earned his M.D. from Michigan's medical school; Lisa Cuddy, played by Lisa Edelstein, was in the pre-med program at Michigan.
In Justified, Neal McDonough plays former UM MBA Robert Quarles, a violent sociopath.
In Last Man Standing, Tim Allen plays Mike Baxter, a University of Michigan graduate and the highly opinionated marketing director for a chain of sporting goods stores.
In Lost, the Dharma Initiative was founded in 1970 by two doctoral candidates, Gerald and Karen DeGroot, while studying at the university.
In Love and Honor, Teresa Palmer plays an undergraduate caught up in the movement to end the war in Vietnam.
In Mad Men, "Smitty" Smith (Patrick Cavanaugh) tells one of his fellow characters that he is a graduate of the University of Michigan.
In MacGyver, Levy Tran plays the latest addition to the Phoenix team.
In The Millionaire, Barbara "Babs" Alden (Evalyn Knapp) and William "Bill" Merrick (David Manners) meet at a University of Michigan dance and later in the film become involved.
In No Strings Attached, Adam Franklin, played by Ashton Kutcher, and Emma Kurtzman, played by Natalie Portman, both attended the university.
In Parks & Recreation, Chris Traeger announces he and his partner Ann are moving to Ann Arbor because he has "a job lined up at the University of Michigan".
In Perception, Kelly Rowan plays Dr. Caroline Newsome/Natalie Vincent, a University of Michigan Medical School graduate.
In Sister, Sister, Tia Landry, played by Tia Mowry, attends the University of Michigan.
In Sleeping with Other People, Alison Brie plays a woman who becomes an aspiring medical student at the University of Michigan.
In The Sopranos, Ronald Zellman, played by Peter Riegert, is a Michigan graduate.
In The West Wing, Leo McGarry, played by John Spencer, attended the University of Michigan.
In Shameless, Jimmy/Steve, played by Justin Chatwin, attended the University of Michigan.
In True Believer, Roger Baron, played by Robert Downey Jr., is a Michigan Law graduate.
In The Upside of Anger, Keri Russell plays Emily Wolfmeyer, an aspiring dancer.
In Why Him?, Megan Mullally plays the Michigan-educated mother of the lead character.

Finance

Peter Borish, investor and trader

Foodies

Rick Bayless (doctoral student, linguistics), chef who specializes in traditional Mexican cuisine with modern interpretations; known for his PBS series Mexico: One Plate at a Time
David Burtka, American actor and professional chef
Gael Greene, food critic said to have coined the term "foodie"
Gabrielle Hamilton, chef, author, and James Beard Award winner
Stephanie Izard, American chef residing in Chicago, Illinois; best known as the first female chef to win Bravo's Top Chef
Sara Moulton (AB 1974), executive chef of Gourmet magazine; former host of the Food Network shows Sara's Secrets and Cooking Live
Joan Nathan, executive producer and host of Jewish Cooking in America with Joan Nathan
Ruth Reichl, food writer, chef, critic, and winner of four James Beard Awards

Fulbright Scholars

Since the inaugural class in 1949, Harvard, Yale, Berkeley, Columbia, and the University of Michigan have been the top producers of U.S. Student Program scholars. As of 2021, Michigan had been the leading producer since 2005.

Guggenheim fellows

As of 2021, Michigan alumni include over 145 Guggenheim Fellows.
Richard Newbold Adams (August 4, 1924 – September 11, 2018) was an American anthropologist.
Thomas R. Adams (May 22, 1921 – December 1, 2008) was librarian of the John Carter Brown Library and John Hay Professor of Bibliography and University Bibliographer at Brown University.
Ricardo Ainslie is a Mexican-American documentary filmmaker.
John Richard Alden (23 January 1908, Grand Rapids, Michigan – 14 August 1991, Clearwater, Florida) was an American historian and author of a number of books on the era of the American Revolutionary War.
W. Brian Arthur (born 31 July 1945) is an economist credited with developing the modern approach to increasing returns.
John William Atkinson (December 31, 1923 – October 27, 2003), also known as Jack Atkinson, was an American psychologist who pioneered the scientific study of human motivation, achievement and behavior.
Dean Bakopoulos is an American writer, born in Dearborn Heights, Michigan in 1975. He is a two-time National Endowment for the Arts fellow, a Guggenheim Fellow, and writer-in-residence at Grinnell College.
John Bargh (/ˈbɑːrdʒ/; born 1955) is a social psychologist currently working at Yale University.
Leslie Bassett was an American composer of classical music.
Richard Bauman is a folklorist and anthropologist, now retired from Indiana University Bloomington. He is Distinguished Professor emeritus of Folklore, of Anthropology, and of Communication and Culture.
Warren Benson (January 26, 1924 – October 6, 2005) was an American composer. His compositions consist mostly of music for wind instruments and percussion.
Theodore H. Berlin (8 May 1917, New York City – 16 November 1962, Baltimore) was an American theoretical physicist.
Derek Bermel (born 1967, in New York City) is an American composer, clarinetist and conductor whose music blends various facets of world music, funk and jazz with largely classical performing forces and musical vocabulary.
Robert Berner (November 25, 1935 – January 10, 2015) was an American scientist known for his contributions to the modeling of the carbon cycle.
Sara Berry (born 1940) is an American scholar of contemporary African political economies, professor of history at Johns Hopkins University and co-founder of the Center for Africana Studies at Johns Hopkins.
Lawrence D. Bobo is the W. E. B. Du Bois Professor of the Social Sciences and the Dean of Social Science at Harvard University.
Kevin Boyle (born 7 October 1960) is an American author and the William Smith Mason Professor of American History at Northwestern University.
Bertrand Harris Bronson (June 22, 1902 – March 14, 1986) was an American academic and professor in the English department at the University of California, Berkeley.
Clair Alan Brown (August 16, 1903 – 1982) was an American botanist.
Roger Brown (April 14, 1925 – December 11, 1997) was an American psychologist. He was known for his work in social psychology and in children's language development.
Eugene Burnstein is an American social psychologist and professor emeritus of psychology at the University of Michigan College of Literature, Science, and the Arts. John W. Cahn was an American scientist and recipient of the 1998 National Medal of Science. David George Campbell (born January 28, 1949 in Decatur, Illinois, United States) is an American educator, ecologist, environmentalist, and award-winning author of non-fiction. Victoria Chang is an American poet and children's writer. Her fifth book of poems, OBIT, was published by Copper Canyon Press in 2020. Patricia Cheng (born 1952) is a Chinese American psychologist. Laura Clayton (born December 8, 1943) is an American pianist and composer. She was born in Lexington, Kentucky, and studied at the Peabody Conservatory in Baltimore and at Columbia University, New York, with Mario Davidovsky. Allan M. Collins is an American cognitive scientist, Professor Emeritus of Learning Sciences at Northwestern University's School of Education and Social Policy. Philip Converse (November 17, 1928 – December 30, 2014) was an American political scientist. Richard M. Cook is an American academic who specializes in American literature. Harold Courlander (September 18, 1908 – March 15, 1996) was an American novelist, folklorist, and anthropologist and an expert in the study of Haitian life. Olena Kalytiak Davis (born September 16, 1963) is an American poet. Philip James DeVries (born March 7, 1952) is a tropical biologist whose research focuses on insect ecology and evolution, especially butterflies. Charles L. Dolph (August 27, 1918 – June 1, 1994) was a professor of mathematics, known for his research in applied mathematics and engineering. William Doppmann (Springfield, Massachusetts, October 10, 1934 — Honokaa, Hawaii, January 27, 2013) was an American concert pianist and composer. William H. 
Durham a biological anthropologist and evolutionary biologist,[1][2] is the Bing Professor Emeritus in Human Biology at Stanford University W. Ralph Eubanks (born June 25, 1957) is an American author, journalist, professor, public speaker, and business executive. Avard Fairbanks (March 2, 1897 – January 1, 1987) was a 20th-century American sculptor. Ada Ferrer is a Cuban-American historian. She is Julius Silver Professor of History and Latin American Studies at New York University. Sidney Fine (historian) (October 11, 1920 – March 31, 2009) was a professor of history at the University of Michigan. Neil Foley is an American historian. Gabriela Lena Frank (born Berkeley, California, United States, September 1972) is an American pianist and composer of contemporary classical music. Steven Frank (biologist) (born 1957) is a professor of biology at the University of California, Irvine. William Frankena (June 21, 1908 – October 22, 1994) was an American moral philosopher. Ronald Freedman was an international demographer and founder of the Population Studies Center at the University of Michigan. Douglas J. Futuyma (born 24 April 1942) is an American evolutionary biologist. Neal Gabler (born 1950) is an American journalist, writer and film critic. Mary Gaitskill (born November 11, 1954) is an American novelist, essayist, and short story writer. David Gale was an American mathematician and economist. William A. Gamson was a professor of Sociology at Boston College, where he was also the co-director of the Media Research and Action Project (MRAP). Seymour Ginsburg (December 12, 1927 – December 5, 2004) was an American pioneer of automata theory, formal language theory, and database theory, in particular; and computer science Charles R. Goldman (born 9 November 1930 in Urbana, Illinois) is an American limnologist and ecologist. Francisco Goldman (born 1954) is an American novelist, journalist, and Allen K. Smith Professor of Literature and Creative Writing, Trinity College. 
Leslie D. Gottlieb (1936–2012) was a US biologist described by the Botanical Society of America as "one of the most influential plant evolutionary biologists over the past several decades.". Josh Greenfeld was an author and screenwriter mostly known for his screenplay for the 1974 film Harry and Tonto along with Paul Mazursky Gwendolyn Midlo Hall (born June 27, 1929) is an American historian who focuses on the history of slavery in the Caribbean, Latin America, Louisiana (United States), Africa, and the African Diaspora in the Americas. Amy Harmon is an American journalist. Joel F. Harrington (born August 25, 1959) is an American historian of pre-modern Germany. He is currently Centennial Professor of History at Vanderbilt University. Donald Harris (composer) (April 7, 1931, in St. Paul, Minnesota – March 29, 2016, in Columbus, Ohio) was an American composer who taught music at The Ohio State University for 22 years. He was Dean of the College of the Arts from 1988 to 1997. Garrett Hongo (born May 30, 1951, Volcano, Hawai'i) is a Yonsei, fourth-generation Japanese American academic and poet. Joseph Hickey (16 April 1907 - 31 August 1993) was an American ornithologist who wrote the landmark Guide to Bird Watching Isabel V. Hull (born 1949) is John Stambaugh Professor Emerita of History and the former chair of the history department at Cornell University. Philip Strong Humphrey (26 February 1926, Hibbing, Minnesota – 13 November 2009, Lawrence, Kansas) was an ornithologist, museum curator, and professor of zoology. M. Kent Jennings (born 1934) is an American political scientist best known for his path-breaking work on the patterns and development of political preferences and behaviors among young Americans. Lawrence Joseph (born 1948 in Detroit, Michigan) is an American poet, writer, essayist, critic, lawyer, and professor of law. James B. Kaler (born December 29, 1938 in Albany, New York) is an American astronomer and science writer. 
Rosabeth Moss Kanter (born March 15, 1943) is the Ernest L. Arbuckle professor of business at Harvard Business School. Laura Kasischke (born 1961) is an American fiction writer and poet. She is best known for writing the novels Suspicious River, The Life Before Her Eyes and White Bird in a Blizzard Mike Kelley (artist), (October 27, 1954 – c. January 31, 2012) was an American artist. Aviva Kempner (born December 23, 1946) is an American filmmaker. James Stark Koehler (10 November 1914 in Oshkosh, Wisconsin – 19 June 2006 in Urbana, Illinois) was an American physicist, specializing in metal defects and their interactions. He is known for the eponymous Peach-Koehler stress formula. Timothy Kramer (born 1959) is an American composer whose music has earned him a Fulbright Scholarship, an NEA grant, and a Guggenheim Fellowship. Edward Kravitz (born December 19, 1932) is the George Packer Berry Professor of Neurobiology at Harvard Medical School. Armin Landeck (1905-1984) was an American printmaker and educator. Chihchun Chi-sun Lee (Chinese: 李志純; Pe̍h-ōe-jī: Lí Chì-sûn; Pinyin: Li Zhìchún, born 1970) is a composer of contemporary classical music. Otis Hamilton Lee (28 September 1902, Montevideo, Minnesota – 17 September 1948, Vermont) was an American philosopher, noteworthy as a Guggenheim Fellow. Normand Lockwood (March 19, 1906 – March 9, 2002) was an American composer born in New York, New York. Alvin D. Loving Jr. (September 19, 1935 – June 21, 2005), better known as Al Loving, was an African-American abstract expressionist painter. Mary Lum (artist) (born 1951) is an American visual artist Suzanne McClelland is a New York-based artist best known for abstract work based in language, speech, and sound. Jay Meek (1937 – November 3, 2007 St. Paul) was an American poet, and director of the Creative Writing program at the University of North Dakota. Jonathan Metzl (born December 12, 1964) is an American psychiatrist and author. 
Nancy Milford (born March 26, 1938) is an American biographer. Harvey Alfred Miller (October 19, 1928, Sturgis, Michigan – January 7, 2020, Palm Bay, Florida) was an American botanist, specializing in Pacific Islands bryophytes. Susan Montgomery (born 2 April 1943 in Lansing, MI) is a distinguished American mathematician whose current research interests concern noncommutative algebras Howard Markel (born April 23, 1960) is an American physician and medical historian. George H. Miley (born 1933) is a professor emeritus of physics from the University of Illinois at Urbana–Champaign. Christine Montross (born 1973) is an American medical doctor and writer. Paul M. Naghdi (March 29, 1924 – July 9, 1994) was a professor of mechanical engineering at University of California, Berkeley. Homer Neal (June 13, 1942 – May 23, 2018) was an American particle physicist and a distinguished professor at the University of Michigan. Marjorie Hope Nicolson was an American literary scholar. Harald Herborg Nielsen (January 25, 1903 – January 8, 1973) was an American physicist. Nicholas Nixon (born October 27, 1947) is a photographer, known for his work in portraiture and documentary photography Richard Nonas (January 3, 1936 – May 11, 2021) was an American anthropologist and post-minimalist sculptor. Mary Beth Norton (born 1943) is an American historian, specializing in American colonial history and well known for her work on women's history and the Salem witch trials. Pat Oleszko is an American visual and performing artist. Susan Orlean (born October 31, 1955) is a journalist and bestselling author of The Orchid Thief and The Library Book. Peter Orner is an American writer. He is the author of two novels, two story collections and a book of essays. Scott E. 
Page is an American social scientist and John Seely Brown Distinguished University Professor of Complexity, Social Science, and Management at the University of Michigan Douglass Parker (May 27, 1927 – February 8, 2011) was an American classicist, academic, and translator. Doug Peacock is an American naturalist, outdoorsman, and author. Vivian Perlis (April 26, 1928 – July 4, 2019) was an American musicologist and the founder and former director of Yale University's Oral History of American Music. Elizabeth J. Perry, is an American scholar of Chinese politics and history at Harvard University, where she is the Henry Rosovsky Professor of Government and Director of the Harvard-Yenching Institute. Alvin Plantinga (born November 15, 1932) is an American analytic philosopher who works primarily in the fields of philosophy of religion, epistemology (particularly on issues involving epistemic justification), and logic. Michael Posner (psychologist) is an American psychologist who is a researcher in the field of attention, and the editor of numerous cognitive and neuroscience compilations. Richard Prum (born 1961) is William Robertson Coe Professor of ornithology, and head curator of vertebrate zoology at the Peabody Museum of Natural History at Yale University. Rayna Rapp (pen name Rayna R. Reiter) is a professor and associate chair of anthropology at New York University, specializing in gender and health Bertram Raven (September 26, 1926 – February 26, 2020) was an American academic. He was a member of the faculty of the psychology department at UCLA from 1956 until his death. Roger Reynolds (born July 18, 1934) is a Pulitzer prize-winning American composer. Roxana Barry Robinson (born 30 November 1946) is an American novelist and biographer whose fiction explores the complexity of familial bonds and fault lines. David Rosenberg (poet) (born August 1, 1943 in Detroit, Michigan) is an American poet, biblical translator, editor, and educator. 
Norman Rosten (January 1, 1913 – March 7, 1995) was an American poet, playwright, and novelist.
Elizabeth S. Russell (May 1, 1913 – May 28, 2001), also known as "Tibby" Russell, was an American biologist in the field of mammalian developmental genetics.
Stanley Schachter (April 15, 1922 – June 7, 1997) was an American social psychologist.
Betsy Schneider is an American photographer who lives and works in the Boston area.
Edwin William Schultz (1888, Wisconsin – 1971) was an American pathologist.
Paul Schupp (born March 12, 1937) is a Professor Emeritus of Mathematics at the University of Illinois at Urbana–Champaign.
Kathryn Kish Sklar (born December 1939) is an American historian, author, and professor.
Paul Slud (31 March 1918, New York City – 20 February 2006, Catlett, Virginia) was an American ornithologist and tropical ecologist, known for his 1960 monograph The Birds of Finca "La Selva," Costa Rica and his 1964 book The Birds of Costa Rica: Distribution and Ecology.
Joel Sobel (born 24 March 1954) is an American economist and currently professor of economics at the University of California, San Diego.
Frank Spedding (22 October 1902 – 15 December 1984) was a Canadian American chemist. He was a renowned expert on rare earth elements, and on extraction of metals from minerals.
Edward A. Spiegel (1931 – January 2, 2020) was an American professor of astronomy at Columbia University.
Duncan G. Steel (born 1951) is an American experimental physicist, researcher and professor in quantum optics in condensed matter physics.
Alexander Stephan (August 16, 1946 – May 29, 2009) was a specialist in German literature and area studies.
James W. Stigler is an American psychologist, researcher, entrepreneur and author.
Joan E. Strassmann is a North American evolutionary biologist and the Charles Rebstock Professor of Biology at Washington University in St. Louis.
Larissa Szporluk is an American poet and professor. Her most recent book is Embryos & Idiots (Tupelo Press, 2007).
G. David Tilman (born 22 July 1949), ForMemRS, is an American ecologist.
Richard Toensing (March 11, 1940 – July 2, 2014) was an American composer and music educator.
David Treuer (born 1970) (Ojibwe) is an American writer, critic and academic. As of 2019, he had published seven books.
Susan M. Ervin-Tripp (1927–2018) was an American linguist whose specialties were psycholinguistic and sociolinguistic research.
Karen Uhlenbeck (born August 24, 1942) is an American mathematician and a founder of modern geometric analysis.
Sim Van der Ryn is an American architect. He is also a researcher and educator.
Henry Van Dyke, Jr. (1928 – December 22, 2011), was an American novelist, editor, teacher and musician.
Andrew G. Walder (born 1953) is an American political sociologist specializing in the study of Chinese society.
William Shi-Yuan Wang (Chinese: 王士元; born 1933) is a linguist, with expertise in phonology, the history of Chinese language and culture, historical linguistics, and the evolution of language in humans.
Michael Watts (born 1951 in England) is Emeritus "Class of 1963" Professor of Geography and Development Studies at the University of California, Berkeley, USA.
Grady Webster (1927–2005) was a plant systematist and taxonomist. He was the recipient of a number of awards and appointed to fellowships of botanical institutions in the United States of America.
Joan Weiner is an American philosopher and professor emerita of philosophy at Indiana University Bloomington, known for her books on Gottlob Frege.
Morris Weitz (July 24, 1916 – February 1, 1981) was an American philosopher of aesthetics who focused primarily on ontology, interpretation, and literary criticism.
Edmund White (born January 13, 1940) is an American novelist, memoirist, and an essayist on literary and social topics.
Michael Stewart Witherell (born 22 September 1949) is an American physicist and laboratory director. He is currently the director of the Lawrence Berkeley National Laboratory.
Jorge Eduardo Wright (20 April 1922 – 2005) was an Argentinian mycologist.
X. J. Kennedy (born Joseph Charles Kennedy on August 21, 1929, in Dover, New Jersey) is an American poet, translator, anthologist, editor, and author of children's literature and textbooks on English literature and poetry.
Al Young (May 31, 1939 – April 17, 2021) was an American poet, novelist, essayist, screenwriter, and professor.
Journalism, publishing, and broadcasting
Roz Abrams, MA, news co-anchor for CBS; reporter and anchor for almost 30 years, including 18 years with WABC in New York
Sam Apple, publisher and editor-in-chief of The Faster Times
Dean Baker (Ph.D., Economics), blogger for The American Prospect
Ray Stannard Baker (MDNG LAW: 1891), biographer of Woodrow Wilson
Margaret Bourke-White (MDNG: 1922–1924), photographer and journalist
Rodney W. Brown, MA-education, MA-American culture, MA-English language and literature; producer of local and national television
Jon Chait (BA 1994), senior editor for The New Republic
Jeff Cohen, founder of Fairness and Accuracy in Reporting; left the group to produce Donahue on MSNBC
Sarah Costello, co-host/editor of the asexual and aromantic podcast Sounds Fake But Okay
Ann Coulter (LAW: JD 1988), conservative author and attorney
Rich Eisen (BA 1990), host of sports talk TV/radio show The Rich Eisen Show, and journalist for NFL Network and CBS Sports; former ESPN anchor
Larry Elder (LAW: JD 1977), talk radio show host, author, and TV show host
Win Elliot, sports announcer and journalist
John Fahey (BUS: MBA 1975), President and CEO of the National Geographic Society; former chairman, president and CEO of Time Life, Inc.; one of Advertising Age's top 100 marketers
Bill Flemming (BA), television sports journalist
Martin Ford (BSE 1985), author of Rise of the Robots: Technology and the Threat of a Jobless Future, winner of the 2015 Financial Times and McKinsey Business Book of the Year Award
James Russell Gaines (1973), former managing editor of Time magazine
Arnold Gingrich (1925), founder and publisher of Esquire
Todd Gitlin (MA 1966, Political Science), professor of journalism; social critic
Wendell Goler, Fox News White House correspondent
George Zhibin Gu, journalist and consultant
Sanjay Gupta (MD: 1993), CNN anchor, reporter and senior medical correspondent; Emmy winner
Raelynn Hillhouse (HHRS: MA, Ph.D. 1993), national security expert and blogger (The Spy Who Billed Me); novelist; political scientist
Dana Jacobson (BA 1993), ESPN anchorwoman
Alireza Jafarzadeh, senior foreign affairs analyst for Fox News Television and other major TV networks; author of The Iran Threat: President Ahmadinejad and the Coming Nuclear Crisis
Leon Jaroff (COE: BSE EE, BS EM 1950), a mainstay of the Time Inc. family of publications since he joined as an editorial trainee for LIFE magazine in 1951; moved to Time in 1954, and became its chief science writer in 1966; named a senior editor in 1970, a post he kept until he semi-retired in 2000
Paul Kangas, stockbroker for twelve years; host of Nightly Business Report since it was a local Florida program in 1979
Kayla Kaszyca, co-host/marketing manager of the asexual and aromantic podcast Sounds Fake But Okay
Ken Kelley, founder of the Ann Arbor Argus and Sundance, and Playboy interviewer
William F. Kerby (AB 1920s), chairman of Dow Jones and Company
Laurence Kirshbaum (AB 1966), founder of LJK Literary Management; chairman of Time Warner Book Group
Melvin J. Lasky (MA History), combat historian in France and Germany during WWII; assistant to the U.S. Military Governor of Berlin in the early postwar years; founder and editor of the anti-Communist journal Encounter, which was shown in 1966 to have been secretly financed by the CIA
Daniel Levin, writer
Ann Marie Lipinski, former editor of the Chicago Tribune; 1987 Pulitzer Prize winner
Richard Lui (MBA), journalist; MSNBC news anchor; former news anchor for five years at CNN Worldwide
Wednesday Martin, journalist, memoirist, anthropologist
Robert McHenry, encyclopedist and author; editor-in-chief (emeritus) of the Encyclopædia Britannica
Ari Melber (AB 2002), MSNBC news anchor; NBC News legal analyst
John J. Miller, national political reporter for National Review
Paul Scott Mowrer, journalist and Pulitzer Prize winner
Davi Napoleon (AB 1966; AM 1968), writes a monthly feature for Live Design; former columnist for TheaterWeek and InTheater
Daniel Okrent (BA 1969), public editor of The New York Times; editor-at-large of Time Inc.; Pulitzer Prize finalist in history (Great Fortune, 2004); founding father of Rotisserie League Baseball
Marvin Olasky (Ph.D. 1976), conservative pundit
Susan Orlean (AB), staff writer for The New Yorker
Norman Ornstein, American Enterprise Institute senior fellow
Phil Ponce (LAW: JD 1974), Chicago television journalist, host of Chicago Tonight on PBS station WTTW
David Portnoy (1999, Education), founder of Barstool Sports
William E. Quinby (AB 1858, MA 1861), owner of the Detroit Free Press and United States Ambassador to the Netherlands
Evan Rosen (BA), journalist, strategist, author of The Culture of Collaboration
Adam Schefter, former Denver Post and Denver Broncos correspondent for 15 years; ESPN and NFL Network contributor
John Schubeck, television reporter and anchor, one of the few to anchor newscasts on all three network owned-and-operated stations in one major market
Samuel Spencer Scott, president of Harcourt, Brace & Company from 1948 until his retirement in 1954
David Shuster, television journalist with Current TV; talk radio host; former anchor for MSNBC; has also worked for Fox News and CNN
Rob Siegel (1993), editor-in-chief of The Onion; screenplay writer for The Wrestler
Carole Simpson (BA 1962), former ABC News correspondent; Emerson College professor
Bert Randolph Sugar (LAW: JD 1961), former editor at The Ring, Boxing Illustrated, and Fight Game magazines; wrote more than 80 books on boxing, baseball, horse racing, and sports trivia
Amy Sullivan, contributing editor for Time magazine, covers religion and politics; also writes for the magazine's political blog, Swampland
Jerald F. ter Horst (also known as Jerald Franklin ter Horst) (BA 1947), Gerald Ford's short-term press secretary
Peter Turnley, photojournalist known for documenting the human condition and current events
John Voelker (LAW: 1928), author of Anatomy of a Murder
Mike Wallace (A.B. 1939), TV journalist, longtime host of 60 Minutes; winner of 20 Emmys and three Peabodys
David Weir, editor and journalist, editor-in-chief at Keep Media as of 2007
Margaret Wente (BA), writer for The Globe and Mail, 2006 winner of the National Newspaper Award for column-writing; has edited leading business magazines Canadian Business and ROB
David Westin (BA, with honors and distinction; LAW: JD summa cum laude 1977), president of ABC News
Roger Wilkins (AB 1953, LAW: LLB 1956, HLHD 1993), journalist of the Washington Post; shared the Pulitzer Prize for his Watergate editorials
Tracy Wolfson, reporter for CBS Sports
Bob Woodruff (LAW: JD), ABC World News Tonight anchor, replaced Peter Jennings
Robin Wright, author, Washington Post
Eric Zorn, columnist and blogger for the Chicago Tribune
Daniel Zwerdling, investigative radio journalist for NPR News
Svida Alisjahbana (BA 1988), CEO of Femina Indonesia, Indonesia's leading women's magazine
Law, government, and public policy
MacArthur Foundation award winners
As of 2020, 29 Michigan alumni (17 undergraduate students and 12 graduate students) have been awarded a MacArthur Fellowship.
James Blinn (BS Physics 1970; MSE 1972; Communications Science 1970; MS Information and Control Engineering 1972)
Caroline Walker Bynum (BA 1962), medieval scholar; MacArthur Fellow
Eric Charnov (BS 1969), evolutionary ecologist
William A. Christian (Ph.D. 1971), religious studies scholar
Shannon Lee Dawdy (M.A. 2000, Ph.D. 2003), 2010 fellowship winner; assistant professor of anthropology at the University of Chicago
Philip DeVries (B.S. 1975), biologist
William H. Durham (Ph.D. 1973), anthropologist
Andrea Dutton (MA, Ph.D.), associate professor of geology at the University of Florida
Aaron Dworkin (BA 1997, M.A. 1998), Fellow; founder and president of Detroit-based Sphinx Organization, which strives to increase the number of African-Americans and Latinos having careers in classical music
Steven Goodman (BS 1984), adjunct research investigator in the U-M Museum of Zoology's bird division; conservation biologist in the Department of Zoology at Chicago's Field Museum of Natural History
David Green (B.A. 1978; MPH 1982), executive director of Project Impact
Ann Ellis Hanson (BA 1957; MA 1963), visiting associate professor of Greek and Latin
John Henry Holland (MA 1954; Ph.D. 1959), professor of electrical engineering and computer science, College of Engineering; professor of psychology, College of Literature, Science, and the Arts
Vonnie McLoyd (MA 1973, Ph.D. 1975), developmental psychologist
Natalia Molina, professor; received her Ph.D. and M.A. from the University of Michigan
Denny Moore (BA), linguist, anthropologist
Nancy A. Moran (Ph.D. 1982), evolutionary biologist; Yale professor; co-founder of the Yale Microbial Diversity Institute
Dominique Morisseau (BFA 2000), American playwright and actor from Detroit, Michigan
Cecilia Muñoz (BA 2000), senior vice president for the Office of Research, Advocacy and Legislation at the National Council of La Raza (NCLR); White House Director of Intergovernmental Affairs
Dimitri Nakassis (BA 1997), 2015 MacArthur Fellow; joined the faculty of the University of Toronto in 2008; currently an associate professor in the Department of Classics
Richard Prum (Ph.D. 1989), William Robertson Coe Professor of Ornithology; head curator of vertebrate zoology at the Peabody Museum of Natural History at Yale University
Mary Tinetti (BA 1973; MD 1978), physician; Gladys Phillips Crofoot Professor of Medicine and Epidemiology and Public Health at Yale University; director of the Yale Program on Aging
Amos Tversky (Ph.D. 1965), psychologist
Karen K. Uhlenbeck (BA 1964), mathematician
Jesmyn Ward (MFA 2005), writer of fiction
Julia Wolfe (BA 1980), classical composer
Henry Tutwiler Wright (BA 1964), Albert Clanton Spaulding Distinguished University Professor of Anthropology in the Department of Anthropology; curator of Near Eastern archaeology in the Museum of Anthropology at the University of Michigan; 1993 MacArthur Fellow
Tara Zahra (MA 2002; Ph.D. 2005), fellow with the Harvard Society of Fellows (2005–2007) prior to joining the faculty of the University of Chicago; 2014 MacArthur Fellow
George Zweig (BA 1959), physicist who conceptualized quarks ("aces" in his nomenclature)
Mathematics
Ralph H. Abraham, mathematician
Kenneth Ira Appel (Ph.D.), mathematician; in 1976, with colleague Wolfgang Haken at the University of Illinois at Urbana–Champaign, solved one of the most famous problems in mathematics, the four-color theorem
Edward G. Begle (MA 1936), mathematician known for his role as the director of the School Mathematics Study Group, the primary group credited for developing what came to be known as New Math
Harry C. Carver (BS 1915), mathematician and academic; a major influence in the development of mathematical statistics as an academic discipline
Brian Conrey (Ph.D. 1980), mathematician; executive director of the American Institute of Mathematics
George Dantzig (MA Math 1937), father of linear programming; studied at UM under T.H. Hildebrandt, R.L. Wilder, and G.Y. Rainich
Carl de Boor (Ph.D. Mathematics 1966), known for pioneering work on splines; National Medal of Science, 2003; John von Neumann Prize from the Society for Industrial and Applied Mathematics in 1996
Dorothy Elizabeth Denning, information security researcher; author of four books and 140 articles; at Georgetown University, she was the Patricia and Patrick Callahan Family Professor of computer science and director of the Georgetown Institute of Information Assurance; professor in the Department of Defense Analysis at the Naval Postgraduate School
Sister Mary Celine Fasenmyer (Ph.D. 1946), mathematician noted for her work on hypergeometric functions and linear algebra; published two papers which expanded on her doctoral work and would be further elaborated by Doron Zeilberger and Herbert Wilf into "WZ theory", which allowed computerized proof of many combinatorial identities
Wade Ellis (Ph.D. 1948), mathematician and educator; associate dean of the Rackham School of Graduate Studies; Dean Emeritus and Professor Emeritus
Walter Feit (Ph.D. 1955), winner of the 7th Cole Prize in 1965; known for proving the Feit–Thompson theorem
David Gale (MA 1947), mathematician and economist
Frederick Gehring (AB 1946), T. H. Hildebrandt Distinguished University Professor Emeritus of Mathematics; recipient of the 2006 AMS Leroy P. Steele Prize for Lifetime Achievement; taught at Michigan from 1955 until his retirement in 1996; invited three times to address the International Congress of Mathematicians; elected in 1989 to the National Academy of Sciences; in 1997, the Frederick and Lois Gehring Chair in Mathematics was endowed
Seymour Ginsburg (Ph.D. 1952), pioneer of automata theory, formal language theory, database theory, and computer science; his work was influential in distinguishing theoretical computer science from the disciplines of mathematics and electrical engineering
Thomas N.E. Greville (Ph.D. 1933), mathematician; specialized in statistical analysis as it concerned the experimental investigation of psi
Earle Raymond Hedrick (A.B. 1896), mathematician; vice-president of the University of California
Theophil Henry Hildebrandt, UM instructor of mathematics starting in 1909, where he spent most of his career; chairman of the department from 1934 until his retirement in 1957; received the second Chauvenet Prize of the Mathematical Association of America in 1929
Meyer Jerison (Ph.D. 1950), mathematician known for his work in functional analysis and rings, especially for collaborating with Leonard Gillman on one of the standard texts in the field, Rings of Continuous Functions
D.J. Lewis (Ph.D. 1950), mathematician specializing in number theory; chaired the Department of Mathematics at the University of Michigan (1984–1994); director of the Division of Mathematical Sciences at the National Science Foundation
James Raymond Munkres, Professor Emeritus of mathematics at MIT; author of the classic textbook Topology
Ralph S. Phillips (Ph.D.), mathematician; academic; known for his contributions to functional analysis, scattering theory, and servomechanisms
Leonard Jimmie Savage (BS 1938, Ph.D. 1941), author of The Foundations of Statistics (1954); rediscovered Bachelier and introduced his theories to Paul Samuelson, who corrected Bachelier and used his thesis on randomness to advance derivative pricing theory
Joel Shapiro (Ph.D.), mathematician; leading expert in the field of composition operators
Isadore M. Singer (BA 1944), winner of the Abel Prize, the "Nobel of mathematics", and the Bôcher Memorial Prize
Stephen Smale (BS 1952, MS 1953, Ph.D. 1957), Fields Medal winner; winner of the 2007 Wolf Prize in mathematics; 1965 Veblen Prize for Geometry, awarded every five years by the American Mathematical Society; 1988 Chauvenet Prize from the Mathematical Association of America; 1989 Von Neumann Award from the Society for Industrial and Applied Mathematics
George W. Snedecor (MA 1913), mathematician and statistician
Edwin Henry Spanier (Ph.D. 1947), mathematician at the University of California at Berkeley, working in algebraic topology
Frank Spitzer (BA, Ph.D.), mathematician who made fundamental contributions to probability theory, including the theory of random walks, fluctuation theory, percolation theory, and especially the theory of interacting particle systems; his first academic appointments were at the California Institute of Technology (1953–1958); most of his academic career was spent at Cornell University, with leaves at the Institute for Advanced Study in Princeton and the Mittag-Leffler Institute in Sweden
Norman Steenrod (A.B. 1932), algebraic topologist; author of The Topology of Fibre Bundles; believed to have coined the phrase "abstract nonsense," used in category theory
Clarence F. Stephens (Ph.D.), ninth African American to receive a Ph.D. in mathematics; credited with inspiring students and faculty at SUNY Potsdam to form one of the most successful United States undergraduate mathematics degree programs of the past century
Robert Simpson Woodward (A.B. 1872), professor of mechanics and mathematical physics at Columbia (1899–1904); president of the American Mathematical Society (1899–1900); in 1904 became president of the newly formed Carnegie Institution
Cornelia Strong (M.A. 1931), professor of mathematics and astronomy at the Woman's College of the University of North Carolina
Ted Kaczynski (Ph.D.), youngest professor at the University of California, Berkeley; later arrested for domestic terrorism; also known as the Unabomber
Fellows of the American Mathematical Society
As of 2021, UM numbers amongst its alumni 29 Fellows of the American Mathematical Society.
Kenneth Appel (October 8, 1932 – April 19, 2013) was an American mathematician who in 1976, with colleague Wolfgang Haken at the University of Illinois at Urbana–Champaign, solved one of the most famous problems in mathematics, the four-color theorem.
Susanne Brenner is an American mathematician whose research concerns the finite element method and related techniques for the numerical solution of differential equations.
Ralph Louis Cohen (born 1952) is an American mathematician, specializing in algebraic topology and differential topology.
Robert Connelly (born July 15, 1942) is a mathematician specializing in discrete geometry and rigidity theory.
Brian Conrey (born 23 June 1955) is an American mathematician and the executive director of the American Institute of Mathematics.
Ronald Getoor (9 February 1929, Royal Oak, Michigan – 28 October 2017, La Jolla, San Diego, California) was an American mathematician.
Tai-Ping Liu (Chinese: 劉太平; pinyin: Liú Tàipíng; born 18 November 1945) is a Taiwanese mathematician, specializing in partial differential equations.
Russell Lyons (born 6 September 1957) is an American mathematician, specializing in probability theory on graphs, combinatorics, statistical mechanics, ergodic theory and harmonic analysis.
Gaven Martin FRSNZ FASL FAMS (born 8 October 1958) is a New Zealand mathematician.
Susan Montgomery (born 2 April 1943, in Lansing, MI) is a distinguished American mathematician whose current research interests concern noncommutative algebras.
Paul Muhly (born September 7, 1944) is an American mathematician.
James Munkres (born August 18, 1930) is a Professor Emeritus of mathematics at MIT.
Zuhair Nashed (born May 14, 1936, in Aleppo, Syria) is an American mathematician, working on integral and operator equations, inverse and ill-posed problems, numerical and nonlinear functional analysis, optimization and approximation theory, operator theory, optimal control theory, signal analysis, and signal processing.
Peter Orlik (born 12 November 1938, in Budapest) is an American mathematician, known for his research on topology, algebra, and combinatorics.
Mihnea Popa (born 11 August 1973) is a Romanian-American mathematician at Harvard University, specializing in algebraic geometry. He is known for his work on complex birational geometry, Hodge theory, abelian varieties, and vector bundles.
Jane Cronin Scanlon (July 17, 1922 – June 19, 2018) was an American mathematician and an emeritus professor of mathematics at Rutgers University.
Maria E. Schonbek is an Argentine-American mathematician at the University of California, Santa Cruz. Her research concerns fluid dynamics and associated partial differential equations such as the Navier–Stokes equations.
Paul Schupp (born March 12, 1937) is a Professor Emeritus of Mathematics at the University of Illinois at Urbana–Champaign.
George Roger Sell (February 7, 1937 – May 29, 2015) was an American mathematician, specializing in differential equations, dynamical systems, and applications to fluid dynamics, climate modeling, control systems, and other subjects.
Charles Sims (April 14, 1937 – October 23, 2017) was an American mathematician best known for his work in group theory.
Isadore Singer (May 3, 1924 – February 11, 2021) was an American mathematician.
Christopher Skinner (born June 4, 1972) is an American mathematician working in number theory and arithmetic aspects of the Langlands program.
Karen E. Smith (born 1965, in Red Bank, New Jersey) is an American mathematician, specializing in commutative algebra and algebraic geometry.
Kannan Soundararajan (born December 27, 1973) is an India-born American mathematician and a professor of mathematics at Stanford University.
Irena Swanson is an American mathematician specializing in commutative algebra.
Karen Uhlenbeck (born August 24, 1942) is an American mathematician and a founder of modern geometric analysis.
Judy L. Walker is an American mathematician. She is the Aaron Douglas Professor of Mathematics at the University of Nebraska–Lincoln, where she chaired the mathematics department from 2012 through 2016.
John H. Walter (born 14 December 1927, Los Angeles) is an American mathematician known for proving the Walter theorem in the theory of finite groups.
Charles Weibel (born October 28, 1950, in Terre Haute, Indiana) is an American mathematician working on algebraic K-theory, algebraic geometry and homological algebra.
Mathematicians: African American
African American pioneers in the field of mathematics:
Joseph Battle, Year of Ph.D. 1963
Marjorie Lee Browne, Year of Ph.D. 1950; arguably the first African-American woman to earn a doctorate in mathematics
Wade Ellis, Year of Ph.D. 1944
Dorothy McFadden Hoover, ABD; featured in Margot Lee Shetterly's bestselling book, Hidden Figures
Rogers Joseph Newman, Year of Ph.D. 1961
Joseph Alphonso Pierce, Year of Ph.D. 1938
Clarence F. Stephens, Year of Ph.D. 1944
Beauregard Stubblefield, Year of Ph.D. 1960
Irvin Elmer Vance, Year of Ph.D. 1967
Chelsea Walton, Year of Ph.D. 2011
Suzanne Weekes, Year of Ph.D. 1995
Manhattan Project
A number of Michigan graduates or fellows were involved with the Manhattan Project, chiefly with regard to the physical chemistry of the device.
Robert F. Bacher, Ph.D., member of the Manhattan Project; professor of physics at Caltech; president of the Universities Research Association
Lawrence Bartell, invited by Glenn Seaborg to interview for a position on the Manhattan Project before he had finished his studies; he accepted the job and worked on methods for extracting plutonium from uranium
Lyman James Briggs was an American engineer, physicist and administrator.
Donald L. Campbell was an American chemical engineer.
Allen F. Donovan worked for the Manhattan Project on the design of the shape of the Fat Man atomic bomb and its release mechanism.
Taylor Drysdale earned master's degrees in nuclear physics and mathematics from the University of Michigan, joined the U.S. military, worked on the Manhattan Project, and retired from the U.S. Air Force as a colonel.
Arnold B. Grobman began his post-secondary education at the University of Michigan in Ann Arbor, earning his bachelor's degree in 1939. From 1944 to 1946, he was a research associate on the Manhattan Project, later publishing "Our Atomic Heritage" about his experiences.
Herb Grosch received his B.S. and Ph.D. in astronomy from the University of Michigan in 1942. In 1945, he was hired by IBM to do backup calculations for the Manhattan Project, working at the Watson Scientific Computing Laboratory at Columbia University.
Ross Gunn was an American physicist who worked on the Manhattan Project during World War II.
Isabella L. Karle was an X-ray crystallographer.
Jerome Karle was an American physical chemist.
James Stark Koehler was an American physicist, specializing in metal defects and their interactions. He is known for the eponymous Peach–Koehler stress formula.
Emil John Konopinski (1933, MA 1934, Ph.D. 1936), with Edward Teller, patented a design used in the first hydrogen bomb; member of the Manhattan Project
John Henry Manley was an American physicist who worked with J. Robert Oppenheimer at the University of California, Berkeley before becoming a group leader during the Manhattan Project.
Elliott Organick, chemist; Manhattan Project, 1944–1945
Carolyn Parker was a physicist who worked from 1943 to 1947 on the Dayton Project, the plutonium research and development arm of the Manhattan Project.
Franklin E. Roach was involved in high-explosives physics research connected with the Manhattan Project.
Nathan Rosen was an American-Israeli physicist noted for his study on the structure of the hydrogen atom and his work with Albert Einstein and Boris Podolsky on entangled wave functions and the EPR paradox.
Frank Spedding (1925), chemist; developed an ion-exchange procedure for separating rare earth elements, purifying uranium, and separating isotopes; Guggenheim award winner
Arthur Widmer, assigned in 1943 to a three-year stint as one of the Kodak researchers attached to the Manhattan Project in Berkeley, California and Oak Ridge, Tennessee; as an analytical chemist he developed methods of uranium analysis that contributed to the development of the atomic bomb
Medicine and dentistry
John Jacob Abel (PHARM: Ph.D. 1883), North American "father of pharmacology"; discovered epinephrine; first crystallized insulin; founded the department of pharmacology at Michigan; in 1893 established the department of pharmacology at the newly founded Johns Hopkins University School of Medicine; first full-time professor of pharmacology in the United States
Susan Anderson (1897), one of the first female physicians in Colorado
Robert C. Atkins (BA 1951), developed the Atkins Diet
John Auer (BS 1898), credited with the discovery of Auer rods
William Henry Beierwaltes (BS 1938, MED: MD 1941), champion of the use of radioiodine together with surgery in thyroid diagnosis and care; lead author of the first book on nuclear medicine, 1957's Clinical Use of Radioisotopes
Elissa P. Benedek (MD 1960), child and adolescent psychiatrist, forensic psychiatrist, adjunct clinical professor of psychiatry at the University of Michigan
David Botstein (Ph.D. 1967), leader in the Human Genome Project; director of Princeton's Lewis-Sigler Institute for Integrative Genomics
Alexa Canady (AB 1971, MED: MD 1975), became the first African-American female neurosurgeon in the country when she was 30; chief of neurosurgery at Children's Hospital of Michigan in Detroit for almost 15 years
Benjamin S. Carson (MED: MD 1977), former director of pediatric neurosurgery at Johns Hopkins Hospital
Arul Chinnaiyan (MED: MD 1999), cancer researcher; recipient of the 28th annual American Association for Cancer Research Award for Outstanding Achievement
Thomas Benton Cooley (MED: 1895), pediatrician; hematologist; professor of hygiene and medicine at the University of Michigan; son of Thomas McIntyre Cooley, first chairman of the Interstate Commerce Commission
Ronald M. Davis (AB 1978), 162nd president of the American Medical Association; director of the Center for Health Promotion and Disease Prevention at the Henry Ford Health System in Detroit
Mary Gage Day (MED: MD 1888), physician, medical writer
Paul de Kruif (Ph.D. 1916), author of Microbe Hunters
Julio Frenk (SPH: M.P.H. 1981, MA 1982, Ph.D. 1983), Minister of Health for Mexico
Seraph Frissell (MED: MD 1875), physician, medical writer
Raymond Gist, president of the American Dental Association
Sanjay Gupta (MD: 1993), CNN anchor, reporter and senior medical correspondent; neurosurgeon
Lucy M. Hall (MED: MD 1878), first woman ever received at St Thomas' Hospital's bedside clinics
Alice Hamilton (MED: MD 1893), specialist in lead poisoning and industrial diseases; known as the "Mother of Industrial Health"; in 1919 became the first woman on the faculty at Harvard Medical School and the first woman to receive tenure there; honored with her picture on the 55-cent postage stamp; winner of the Lasker Award
Nancy M. Hill (MED: MD 1874), Civil War nurse and one of the first female doctors in the US
Jerome P. Horwitz (Ph.D. 1950), synthesized AZT in 1964, a drug now used to treat AIDS
Joel Lamstein (BS 1965), co-founder and president of John Snow, Inc. (JSI) and JSI Research & Training Institute, Inc., international public health research and consulting firms
Josiah K. Lilly Jr. (1914, College of Pharmacy), chairman and president of Eli Lilly
Howard Markel (MED: MD 1986), physician, medical historian, best-selling author, medical journalist, and member of the National Academy of Medicine; George E. Wantz Distinguished Professor of the History of Medicine at the University of Michigan; Guggenheim Fellow
William James Mayo (MED: MD 1883), co-founder of the Mayo Clinic
Jessica Rickert, first female American Indian dentist in America, which she became upon graduating from the University of Michigan School of Dentistry in 1975; a member of the Prairie Band Potawatomi Nation, and a direct descendant of the Indian chief Wahbememe (Whitepigeon)
Ida Rollins, first African-American woman to earn a dental degree in the United States, which she earned from the University of Michigan in 1890
Leonard Andrew Scheele (BA 1931), US Surgeon General, 1948–1956
Eric B. Schoomaker (BS 1970, MED: MD 1975), Major General; commander of the North Atlantic Regional Medical Command and Walter Reed Army Medical Center; former commanding general of the U.S. Army Medical Research and Materiel Command at Fort Detrick
Thomas L. Schwenk (MED: MD 1975), dean of the University of Nevada School of Medicine
John Clark Sheehan (MS 1938, Ph.D. 1941), chemist who pioneered the first synthetic penicillin breakthrough in 1957
Norman Shumway (MDNG), heart transplantation pioneer; entered the University of Michigan as a pre-law student, but was drafted into the Army in 1943
Parvinder Singh (PHARM: Ph.D. 1967), chairman of Ranbaxy from 1993 until his death in 1999; during this period the market capitalization of the company went up from Rs. 3.5 to over Rs. 7,300 crores
Siti Hasmah Mohamad Ali (SPH 1966), one of the first female doctors in Malaysia, and later the wife of Malaysia's fourth Prime Minister, Mahathir Mohamad
Homer Stryker (MED: MD 1925), founder of Stryker Corporation
William Erastus Upjohn (MED: MD 1875), inventor of the first pill that dissolved easily in the human body
Christine Iverson Bennett (MED: MD 1907), medical missionary who worked in Arabia during WWI
Larry Nassar (BS 1985), convicted serial child molester and a former USA Gymnastics national team doctor and osteopathic physician at Michigan State University
Richard C. Vinci, retired United States Navy admiral and former commander of the United States Navy Dental Corps
Military
Samuel C. Phillips (MS 1950), director of the Apollo program from 1964 to 1969; director of the National Security Agency from 1972 to 1973; commander of Air Force Systems Command from 1973 to 1975
George H. Cannon (BS 1938), United States Marine Corps officer and World War II Medal of Honor recipient, killed during the First Bombardment of Midway
John P. Coursey (BS 1937), United States Marine Corps aviator during World War II; later a brigadier general
Francis C. Flaherty (BS 1940), United States Navy officer and World War II Medal of Honor recipient, killed during the attack on Pearl Harbor
Richard C. Vinci, retired United States Navy admiral and former commander of the United States Navy Dental Corps
NASA
Claudia Alexander (Ph.D.) moved to NASA's Jet Propulsion Laboratory in 1986. She was the last project manager of NASA's Galileo mission to Jupiter.
Spence M. Armstrong (M.S.) is a retired United States Air Force general officer, combat veteran, and test pilot. Armstrong spent eleven years as a senior executive at the National Aeronautics and Space Administration (NASA).
James F. Blinn is an American computer scientist who first became widely known for his work as a computer graphics expert at NASA's Jet Propulsion Laboratory (JPL).
Scott J. Bolton (B.S.E.) has been a principal investigator with NASA on various research programs since 1988. Bolton became the principal investigator of Juno, a New Frontiers program mission to Jupiter which began primary science in 2016.
Aisha Bowe (B.S.E. & M.S.E.), worked as an intern at the Ames Research Center in 2008 before joining as an engineer in the Flight Trajectory Dynamics and Controls Branch of the Aviation Systems Division
Beth A. Brown (Ph.D. 1998), NASA astrophysicist whose research focused on X-ray observations of elliptical galaxies and black holes; earned a Ph.D. in Astronomy from the University of Michigan in 1998, becoming the first African-American woman to do so
Steve Chappell, aerospace engineer; Technical Lead & Research Specialist for Wyle Integrated Science & Engineering at NASA's Johnson Space Center (JSC) in Houston, Texas
Bob Dempsey (B.S.), NASA flight director for the International Space Station, selected in 2005; as an astronomer, he worked at the Space Telescope Science Institute (STScI) before joining the ISS program
Allen F. Donovan (M.S.), worked with NASA to solve the combustion instability problem that affected Project Mercury, and later the pogo oscillation problems that affected Project Gemini and Project Apollo
Jeff Dozier (Ph.D.), senior member of the technical staff and Project Scientist for a potential spectroscopy space mission at NASA's Jet Propulsion Laboratory; from 1990 to 1992, Senior Project Scientist at the NASA Goddard Space Flight Center at the start of NASA's Earth Observing System
Julian Earls, made Director of the Glenn Research Center in 2003, responsible for technology, research and development, and systems development; managed a budget of over a billion dollars and a workforce of 4,500
Dorothy McFadden Hoover (A.B.D.), physicist and mathematician; a pioneer in the early days of NASA, hired at the National Advisory Committee for Aeronautics (NACA, later NASA) at Langley in 1943 as a professional (P-1) mathematician
Usama Fayyad (Ph.D.), from 1989 to 1996 held a leadership role at NASA's Jet Propulsion Laboratory (JPL), where his work in the analysis and exploration of Big Data in scientific applications (gathered from observatories, remote-sensing platforms and spacecraft) garnered him the Lew Allen Award for Excellence in Research, the top research excellence award Caltech gives to JPL scientists, as well as a U.S. Government medal from NASA
Mei-Ching Hannah Fok (Ph.D.), planetary scientist at the Goddard Space Flight Center; awarded the NASA Exceptional Scientific Achievement Medal in 2011 and elected a Fellow of the American Geophysical Union in 2019; has worked on the IMAGE, Van Allen Probes and TWINS missions
Jack Garman (B.S.), computer engineer, former senior NASA executive, and a key figure in the Apollo 11 lunar landing
William W. Hagerty (Ph.D.), advisor to NASA from 1964 to 1970 and board member of the National Science Foundation
Martin Harwit (M.S.), designed, built and launched the first rocket-powered liquid-helium-cooled telescopes in the late 1960s and carried out astronomical observations from high-altitude NASA aircraft
Richard C. Henry (M.S.), lieutenant general in the United States Air Force who served as commander of the Space Division, Air Force Systems Command, Los Angeles Air Force Station, California
John T. Howe (B.S.E.), during his 35 years with NASA served as Senior Staff Scientist, Head of Aerothermodynamics, Assistant Chief for the Physics Branch, and Branch Chief for Fluid Dynamics
Hyuck Kwon (Ph.D.), from 1989 to 1993 a principal engineer with the Lockheed Engineering and Sciences Company, Houston, Texas, working on NASA Space Shuttle and Space Station satellite communication systems
Joel S. Levine (Ph.D.), recruited in 1970 by Langley Research Center associate director John Edward Duberg to work on the Viking program
Levine joined the Center in July 1970, was assigned to the Aeronomy Section of the Planetary Physics Branch, and continued working for NASA until his retirement in 2011.
Bernard Lippmann (M.S.), senior research associate at NASA's Goddard Institute for Space Studies in 1968–1969; much of his research during this time is classified
James Fu Bin Lu, Internet entrepreneur; received master's degrees in electrical engineering and computer science from the University of Michigan (graduating summa cum laude) and worked as an engineer at NASA's Jet Propulsion Laboratory, developing software for the Mars rover
Harriet H. Malitson (M.S.), astronomer and solar researcher, employed at the Goddard Space Flight Center and at the National Oceanic and Atmospheric Administration
Stephen P. Maran (Ph.D.), astrophysicist at NASA's Goddard Space Flight Center for 35 years (1969–2004); served as staff scientist, Project Scientist, and Principal Investigator, and was involved in research on a number of missions, including the Hubble Space Telescope
Hu Peiquan (Ph.D.), became a researcher at the Langley Research Center of the National Advisory Committee for Aeronautics (NACA, the predecessor of NASA) in 1944
Samuel C. Phillips (M.S.) (February 19, 1921 – January 31, 1990), United States Air Force general who served as Director of NASA's Apollo program from 1964 to 1969, the seventh Director of the National Security Agency from 1972 to 1973, and as Commander, Air Force Systems Command, from 1973 to 1975
Phil Plait, worked with the COBE satellite during the 1990s and later was part of the Hubble Space Telescope team at the NASA Goddard Space Flight Center, working largely on the Space Telescope Imaging Spectrograph
Margaret Hamilton, software engineer; led a team credited with developing the software for Apollo and Skylab
James Kasting (Ph.D.), active in NASA's search for habitable extrasolar planets
James Slattin Martin Jr. (B.S.), joined NASA's Langley Research Center in September 1964 as assistant project manager for Lunar Orbiter; the five successful Lunar Orbiter missions provided significant new information about the Moon's surface and a wealth of photographic detail that stood as the definitive source of lunar surface information for years; awarded the NASA Exceptional Service Medal in 1967 in recognition of his contribution
Rob Meyerson (B.S.), began his career as an aerospace engineer at NASA Johnson Space Center (JSC) from 1985 to 1997, working on human spaceflight systems, including the aerodynamic design of the Space Shuttle orbiter drag parachute; former President of Blue Origin
Elisa Quintana (Ph.D.), member of the NASA Kepler Mission Team at NASA Ames Research Center from 2006 to 2017; worked as a scientific programmer developing the Kepler pipeline, for which she was awarded NASA Software of the Year in 2010
Judith Racusin (B.S.), astrophysicist; works at the Goddard Space Flight Center as a research aerospace technologist in fields and particles
Louis W. Roberts (M.S.), microwave physicist; chief of the Microwave Laboratory at NASA's Electronics Research Center in the 1960s
William H. Robbins (B.S.), engineer who, during his long career at NASA, worked on the NERVA nuclear rocket engine, NASA wind turbines, communication satellites, and the Shuttle-Centaur program
James Russell III (Ph.D.), atmospheric scientist who has developed instrumentation for several NASA probes
Kamal Sarabandi (Ph.D.), member of the Science Team for NASA's Soil Moisture Active Passive (SMAP) mission
Joseph Francis Shea, aerospace engineer and NASA manager
Roy Spencer, meteorologist; principal research scientist at the University of Alabama in Huntsville and U.S. Science Team leader for the Advanced Microwave Scanning Radiometer (AMSR-E) on NASA's Aqua satellite
Vaino Jack Vehko (B.S.), became Director of Engineering on the Saturn S-I and S-IB booster rocket program in 1960; the Saturn IB boosters successfully launched four uncrewed and five crewed Apollo missions and were the forerunners of the Saturn V that launched the NASA Apollo Moon missions
Kevin J. Zahnle (Ph.D.), planetary scientist at the NASA Ames Research Center and Fellow of the American Geophysical Union; studies impact processes, atmospheric escape processes, geochemical modelling of atmophiles, and photochemical modelling
Noel Zamot (M.S.), selected as a member of NASA Astronaut Training Group 16 and became a semifinalist NASA astronaut candidate

National Academy Members

As of 2021, dozens of Michigan graduates have been inducted into the various National Academies (inter alia, the National Academy of Engineering and the National Academy of Sciences).

John Jacob Abel, biochemist and pharmacologist; established the pharmacology department at the Johns Hopkins University School of Medicine in 1893
Edward Charles Bassett (1921–1999), architect based in San Francisco; elected an Associate member of the National Academy of Design in 1970 and a full member in 1990
Michael Bellavia, COO of Animax Entertainment, an animation, game, and interactive content production company; while at Animax, in 2006, won one of the first broadband Emmy Awards for a series of animated shorts produced for ESPN
John Robert Beyster, often styled J. Robert Beyster, founder of Science Applications International Corporation
Lyman James Briggs, nominated in 1932 by US President Herbert Hoover to succeed Burgess as director of the National Bureau of Standards
James Brown, ecologist, biologist and academic
John W. Cahn (January 9, 1928 – March 14, 2016), scientist and recipient of the 1998 National Medal of Science
Robert L. Carneiro, anthropologist and curator at the American Museum of Natural History
Rufus Cole, medical doctor and first director of the Rockefeller University Hospital
George Comstock, astronomer; helped organize the American Astronomical Society in 1897, serving first as secretary and later as vice president; elected to the National Academy of Sciences in 1899
Heber Doust Curtis, worked at Lick Observatory from 1902 to 1920, continuing the survey of nebulae initiated by Keeler
David DeWitt, elected a member of the National Academy of Engineering (1998) for the theory and construction of database systems; also a Fellow of the Association for Computing Machinery
Allen F. Donovan, aerospace and systems engineer involved in the development of the Atlas and Titan rocket families
James R. Downing, pediatric oncologist; president and chief executive officer of St. Jude Children's Research Hospital
Harry George Drickamer (November 19, 1918 – May 6, 2002), born Harold George Weidenthal, pioneer experimentalist in high-pressure studies of condensed matter
John M. Eargle, Oscar- and Grammy-winning audio engineer and musician (piano and church and theater organ)
Kent Flannery, archaeologist who has conducted and published extensive research on the pre-Columbian cultures and civilizations of Mesoamerica, in particular those of central and southern Mexico
Mars Guy Fontana, his contributions at Ohio State University were such that a building there, the Fontana Laboratories, is named after him.
He also has a professorship named after him.
Donald S. Fredrickson, medical researcher, principally of lipid and cholesterol metabolism; director of the National Institutes of Health and subsequently the Howard Hughes Medical Institute
Robert A. Fuhrman, engineer responsible for the development of the Polaris and Poseidon missiles; President and Chief Operating Officer of Lockheed Corporation; elected to the National Academy of Engineering in 1976 "for contributions to the design and development of the Polaris and Poseidon underwater launch ballistic missile systems"
Stanley Marion Garn, human biologist and educator; Professor of Anthropology at the College of Literature, Science, and the Arts at the University of Michigan
Sam Granick, biochemist known for his studies of ferritin and, more broadly, iron metabolism, of chloroplast structure, and of the biosynthesis of heme and related molecules
Sonia Guillén, one of Peru's leading experts in mummies
George Edward Holbrook, noted chemical engineer and founding member of the National Academy of Engineering
George W. Housner, professor of earthquake engineering at the California Institute of Technology and National Medal of Science laureate
George Huebner, made chief engineer when Chrysler reorganized its research department in 1946
Bill Ivey, folklorist and author; seventh chairman of the National Endowment for the Arts and past Chairman of the National Academy of Recording Arts and Sciences
Kelly Johnson, engineer recognized for his contributions to a series of important aircraft designs, most notably the Lockheed U-2 and SR-71 Blackbird
Lewis Ralph Jones, botanist and agricultural biologist
Paul Kangas, Miami-based co-anchor of the PBS television program Nightly Business Report, a role he held from 1979, when the show was a local PBS program in Miami, through December 31, 2009
Paul J. Kern, served as Commanding General of the United States Army Materiel Command from October 2001 to November 2004
Pete King, composer; elected president of the National Academy of Recording Arts and Sciences in 1967
Conrad Phillip Kottak, anthropologist; did extensive research in Brazil and Madagascar, visiting societies there and writing books about them
Thomas A. LaVeist (MA 1985, PhD 1988, PDF 1990), Dean and Weatherhead Presidential Chair in Health Equity at the Tulane University School of Public Health & Tropical Medicine
Alexander Leaf, physician and research scientist best known for his work linking diet and exercise to the prevention of heart disease
Samuel C. Lind, radiation chemist, referred to as "the father of modern radiation chemistry"
Joyce Marcus, Latin American archaeologist and professor in the Department of Anthropology, College of Literature, Science, and the Arts at the University of Michigan, Ann Arbor; also Curator of Latin American Archaeology at the University of Michigan Museum of Anthropological Archaeology
Bill Joy (William Nelson Joy, born November 8, 1954), co-founded Sun Microsystems in 1982 and served as Chief Scientist and CTO at the company until 2003
Isabella Karle, chemist who was instrumental in developing techniques to extract plutonium chloride from a mixture containing plutonium oxide
James Nobel Landis, founding member of the National Academy of Engineering and president of the American Society of Mechanical Engineers in 1958–59
Warren Harmon Lewis, served as president of the American Association of Anatomists and the International Society for Experimental Cytology; held honorary memberships in the Royal Microscopical Society in London and the Accademia Nazionale dei Lincei in Rome
Anne Harris, musician; served an elected term on the Board of Governors of the Chicago Chapter of the National Academy of Recording Arts and Sciences
Herbert Spencer Jennings, zoologist, geneticist, and eugenicist
Digby McLaren, head of the palaeontology section of the Geological Survey of Canada (GSC)
Marshall Warren Nirenberg, biochemist and geneticist; shared the Nobel Prize in Physiology or Medicine in 1968
Kenneth Olden, director of the National Institute of Environmental Health Sciences (NIEHS) and the National Toxicology Program from 1991 to 2005; the first African-American to head an NIH institute
Raymond Pearl, biologist, regarded as one of the founders of biogerontology
Samuel C. Phillips, United States Air Force general who served as Director of NASA's Apollo program from 1964 to 1969, the seventh Director of the NSA from 1972 to 1973, and as Commander, Air Force Systems Command, from 1973 to 1975
John Porter, Illinois politician; led efforts that resulted in a doubling of funding for the NIH
Bonnie Rideout, member of the National Academy of Recording Arts and Sciences (NARAS), having served on the Board of Governors of its Washington, D.C. branch
Eugene Roberts, neuroscientist; in 1950, the first to report the discovery of gamma-aminobutyric acid (GABA) in the brain; his work was key in demonstrating GABA as the main inhibitory neurotransmitter in the mammalian central nervous system
Elizabeth S. Russell, biologist in the field of mammalian developmental genetics
Shirley E. Schwartz, inducted into the Michigan Women's Hall of Fame in 1996 for her accomplishments in the field of chemistry
Frank Spitzer, Austrian-born American mathematician who made fundamental contributions to probability theory, including the theory of random walks, fluctuation theory, percolation theory, the Wiener sausage, and especially the theory of interacting particle systems
Michael Stryker, neuroscientist specializing in studies of how spontaneous neural activity organizes connections in the developing mammalian brain
Kapila Vatsyayan, leading scholar of Indian classical dance, art, architecture, and art history; served as a member of parliament and as a bureaucrat in India, and was the founding director of the Indira Gandhi National Centre for the Arts
Mary Jane West-Eberhard, theoretical biologist noted for arguing that phenotypic and developmental plasticity played a key role in shaping animal evolution and speciation
Eugene C. Whitney, celebrated power engineer who designed hydroelectric turbines and generators at the Westinghouse Electric Company; the pinnacle of his career was the machinery for the expansion of the Grand Coulee Dam to add the #3 Powerhouse in 1966–74
Henry T. Wright, Albert Clanton Spaulding Distinguished University Professor of Anthropology in the Department of Anthropology and Curator of Near Eastern Archaeology in the Museum of Anthropology at the University of Michigan
Robert Wurtz, neuroscientist; NIH Distinguished Scientist and Chief of the Section on Visuomotor Integration at the National Eye Institute
James Wyngaarden, director of the National Institutes of Health from 1982 to 1989
Melinda A. Zeder, archaeologist and Curator Emeritus in the Department of Anthropology of the National Museum of Natural History, Smithsonian Institution
George Zweig, Russian-American physicist
He was trained as a particle physicist under Richard Feynman. He introduced, independently of Murray Gell-Mann, the quark model (although he named the constituent components "aces"). Newsmakers Bill Ayers (BA 1968), co-founder of the radical Weathermen Benjamin Bolger (BA 1994), holds what is said to be the largest number of graduate degrees held by a living person Mamah Borthwick (BA 1892), mistress of architect Frank Lloyd Wright who was murdered at his studio, Taliesin Napoleon Chagnon (Ph.D.), anthropologist, professor of anthropology Rima Fakih (BA), 2010 Miss USA Geoffrey Fieger (BA, MA), attorney based in Southfield, Michigan Robert Groves (Ph.D. 1975), 2009 Presidential nominee to head the national census; nomination stalled by Republican opposition to use of "sampling" methodology, which Groves had already stated would not be used Janet Guthrie (COE: BSc Physics 1960), inducted into the International Motorsports Hall of Fame in 2006; first woman to race in the Indianapolis 500; still is the only woman to ever lead a Nextel Cup race; top rookie in five different races in 1977 including the Daytona 500 and at Talladega; author of autobiography Janet Guthrie: A Life at Full Throttle Alireza Jafarzadeh, whistle-blower of Iran's alleged nuclear weapons program when he exposed in August 2002 the nuclear sites in Natanz and Arak, and triggered the inspection of the Iranian nuclear sites by the UN for the first time; author of The Iran Threat: President Ahmadinejad and the Coming Nuclear Crisis Carol Jantsch (BFA 2006), the sole female tuba player on staff with a major U.S. 
orchestra, believed to be the first in history; at 21, the youngest member of the Philadelphia Orchestra Morris Ketchum Jessup (MS Astronomy), author of ufological writings; played role in "uncovering" the so-called "Philadelphia Experiment" Adolph Mongo (BGS 1976), political consultant Jerry Newport (BA Mathematics), author with Asperger syndrome whose life was the basis for the 2005 feature-length movie Mozart and the Whale; named "Most Versatile Calculator" in the 2010 World Calculation Cup Jane Scott, rock critic for The Plain Dealer in Cleveland, Ohio; covered every major local rock concert; until her retirement in 2002 she was known as "The World's Oldest Rock Critic;" influential in bringing the Rock and Roll Hall of Fame to Cleveland Michael Sekora (BS 1977), founder and director of Project Socrates, the intelligence community's classified program that was tasked with determining the cause of America's economic decline Robert Shiller (BA 1967), economist; author of Irrational Exuberance Jerome Singleton (COE: IEOR), Paralympic athlete, competing mainly in category T44 (single below knee amputation) sprint events Jerald F. 
ter Horst (BA 1947), briefly President Ford's press secretary Not-for-profit Larry Brilliant (SPH: MPH 1977, Economic Development and Health Planning), head of Google Foundation (holds assets of $1Bn); co-founder of the Well; in 1979 he founded the Seva Foundation, which has given away more than $100 million; CEO of SoftNet Systems Inc., a global broadband Internet services company in San Francisco that at its peak had more than 500 employees and $600 million capitalization Mark Malloch Brown (MA), Chef de Cabinet, no.2 rank in the United Nations system; Deputy Secretary-General John Melville Burgess (BA 1930, MA 1931, Hon DHum 1963), diocesan bishop of Massachusetts and the first African American to head an Episcopal diocese Stephen Goldsmith (LAW: JD), Marion County district attorney for 12 years; two-term mayor of Indianapolis (1992–1999); appointed senior fellow at the Milken Institute (economic think tank) in 2006; his work in Indianapolis has been cited as a national model Lisa Hamilton (LAW: JD), named in 2007 president of the UPS Foundation; previously its program director Bill Ivey (BA 1966), chairman of the National Endowment for the Arts 1998–2001, credited with restoring the agency's credibility with Congress; appointed by President Clinton Bob King (BA 1968), President of the UAW Michael D. 
Knox (MSW 1971, MA 1973, PhD psychology 1974), Chair and CEO of the US Peace Memorial Foundation and Distinguished Professor, University of South Florida Rajiv Shah (AB), former director of agricultural development for the Bill & Melinda Gates Foundation, nominated in 2009 as chief scientist at the United States Department of Agriculture and undersecretary of agriculture for research, education and economics; Administrator for the United States Agency for International Development Jack Vaughn, United States Peace Corps Director John George Vlazny (MA 1967), Roman Catholic prelate; Metropolitan Archbishop of the Roman Catholic Archdiocese of Portland Mark Weisbrot (Ph.D., economics), economist and co-director of the Center for Economic and Policy Research in Washington, D.C.; co-author, with Dean Baker, of Social Security: The Phony Crisis Pulitzer Prize winners As of 2020, 34 of Michigan's matriculants have been awarded a Pulitzer Prize. By alumni count, Michigan ranks fifth (as of 2018) among all schools whose alumni have won Pulitzers. Natalie Angier (MDNG), studied for two years at Michigan; nonfiction writer; science journalist for The New York Times; won 1991 Pulitzer Prize for Beat Reporting Ray Stannard Baker (LAW: attended 1891); published 15 volumes about Wilson and internationalism, including an 8-volume biography, the last two volumes of which won the 1940 Pulitzer Prize for Biography Leslie Bassett, winner of 1966 Pulitzer Prize in Music for Variations for Orchestra, premiered in Rome in 1963 by the RAI Symphony Orchestra under Feruccio Scaglia Howard W. Blakeslee, journalist; the Associated Press's first full-time science reporter; won the Pulitzer Prize for Reporting in 1937 Edwin G. 
Burrows (BA 1964), won the 1999 Pulitzer Prize for history for the Gotham: A History of New York City to 1898 John Ciardi (MA 1939), Pulitzer Prize–winning poet, "Blue Skies" George Crumb (MUSIC: PhD 1959), composer and 1968 Pulitzer Prize winner Sheri Fink (BS 1990), 2010 Pulitzer Prize for Investigative Reporting for The Deadly Choices at Memorial Robin Givhan (MA journalism), 2006 Pulitzer Prize for Criticism Amy Harmon (BA), 2008 Pulitzer Prize for Explanatory Reporting for a series titled "The DNA Age" Stephen Henderson (1992), former editorial page editor for The Michigan Daily, won Pulitzer Prize for Commentary in 2014; as Editorial Page Editor of the Detroit Free Press, he was honored for his reports on the bankruptcy of Detroit Charlie LeDuff (BA), one of several reporters who worked on the New York Times series "How Race is Lived in America," which won a Pulitzer Prize in 2001 David Levering Lewis (MDNG), historian; two-time winner of the Pulitzer Prize for Biography or Autobiography Leonard Levy, Levy obtained his undergraduate degree from the University of Michigan and his Ph.D. from Columbia University. He won a Pulitzer Prize in 1969 for his Origins of the Fifth Amendment in 1969. Stanford Lipsey (AB 1948), publisher of The Sun Newspaper Group and The Buffalo News; Pulitzer Prize for Investigative Journalism 1972 Ann Marie Lipinski (1994), former editor of the Chicago Tribune; 1987 Pulitzer Prize winner Andrew C. 
McLaughlin (BA JD); author of A Constitutional History of the United States, winner of 1936 Pulitzer Prize for History William McPherson (MDNG 1951–1955), Pulitzer Prize for Distinguished Criticism in 1977 Arthur Miller (AB 1938), playwright; Pulitzer Prize and Tony Award-winning author Howard Moss, won the Pulitzer Prize for Poetry for Selected Poems in 1971 Edgar Ansel Mowrer (AB 1913), Pulitzer Prize-winning (1933) journalist and author known for his writings on international events Paul Scott Mowrer (1913), journalist; Pulitzer Prize winner in 1929 Richard O. Prum, 2018 winner for "The Evolution of Beauty" Roger Reynolds (COE: BSE), composer; his 25-minute-long piece for string orchestra, Whispers out of Time, won the 1989 Pulitzer Prize for music Eugene Robinson, Michigan Daily Co-Editor-in-Chief in 1973–74; awarded a Pulitzer in April 2009 for his Washington Post commentaries on the 2008 presidential campaign Theodore Roethke (AB 1929, MA), poet; winner of the 1954 Pulitzer Prize for his collection The Waking Heather Ann Thompson (BA, MA), historian, author, activist, and speaker David C. Turnley (BA 1977), photographer Claude H. Van Tyne (BA 1896), 1930 Pulitzer Prize for History for his book The War of Independence Michael Vitez, journalism fellow; Pulitzer Prize-winning journalist; author Josh White, journalist; worked with a team covering the Virginia Tech shooting massacre, which won the 2008 Pulitzer Prize Roger Wilkins (AB 1953, LAW: LLB 1956, HLHD 1993), journalist of the Washington Post; shared the Pulitzer Prize for his Watergate editorials Julia Wolfe (BA), composer; winner of a 2015 Pulitzer Taro Yamasaki (MDNG), Pulitzer Prize-winning photographer Rhodes Scholars As of 2021, Michigan has matriculated 30 Rhodes Scholars. Some notable winners are linked below. James K. Watkins, 1911 Brand Blanshard, 1913 Albert C. Jacobs, 1921 Bertrand Harris Bronson, 1922 Allan Seager, 1930 Samuel Beer, 1932 Wilfred Sellars, 1934 R. V. 
Roosa, 1939 Abdul El-Sayed, 2009 Science Isabella Abbott (MS 1942), ethnobotanist, specialized in algae; more than 200 algae owe their discovery and scientific names to her Werner Emmanuel Bachmann (Ph.D. 1926), chemist; pioneer in steroid synthesis; carried out the first total synthesis of a steroidal hormone, equilenin; winner of a Guggenheim award Frank Benford (1910), an electrical engineer and physicist known for Benford’s Law, also devised in 1937 an instrument for measuring the refraction index of glass John Joseph Bittner (Ph.D. 1930), geneticist and cancer biologist, made contributions on the genetics of breast cancer John M. Carpenter (M.S. 1958, Ph.D. 1963), nuclear engineer, Fellow of the American Association for the Advancement of Science Rajeshwari Chatterjee (Ph.D. 1953), pioneer in Indian microwave engineering Carol McDonald Connor (Ph.D. 2002), Educational Psychologist with contributions to early literacy and reading comprehension research Bernhard Dawson (B.S., Ph.D. 1933), U.S.-born Argentinian astronomer; namesake of Dawson crater David Mathias Dennison, physicist who made contributions to quantum mechanics, spectroscopy, and the physics of molecular structure; Guggenheim award winner Gerald R. Dickens (Ph.D. 1996), Professor of Earth Science at Rice University Charles Fremont Dight (MED 1879), medical professor; promoter of the human eugenics movement in Minnesota William Gould Dow (COE: MSE 1929), pioneer in electrical engineering, space research, and nuclear engineering; former chairman of EECS Department Douglas J. 
Futuyma (Ph.D.), author of the widely used textbook Evolutionary Biology, and Science on Trial: The Case for Evolution, an introduction to the creation–evolution controversy; President of the Society for the Study of Evolution; President of the American Society of Naturalists; editor of Evolution and the Annual Review of Ecology and Systematics; received Sewall Wright Award from the American Society of Naturalists; Guggenheim Fellow; Fulbright Fellow; member of National Academy of Sciences Frank Gill (BS, PhD 1969), ornithologist; author of the standard textbook Ornithology; editor of the encyclopedic series Birds of North America; former president of the American Ornithologists' Union Moses Gomberg (PhD 1894), U-M professor of chemistry; discovered organic free radicals in 1900 Billi Gordon, PhD (BGS 1997), works in functional neuroimaging and brain research at the David Geffen School of Medicine at UCLA; investigates the pathophysiology of stress as antecedent to obesity-related diseases at the UCLA Gail and Gerald Oppenheimer Family Center for the Neurobiology of Stress; included on list of "30 Most Influential Neuroscientists Alive Today" Arnold B. Grobman (B.A.), zoologist Martin Harwit (MS), studied under Fred Hoyle; designed the first liquid-helium-cooled rockets for boosting telescopes into the atmosphere; investigated airborne infrared astronomy and infrared spectroscopy for NASA; Bruce Medal 2007; National Air and Space Museum Director 1987–95 Clara H. Hasse (Ph B 1903), botanist Duff Holbrook (M.S.), wildlife biologist and forester, reintroduced wild turkeys to much of South Carolina Jerome Horwitz (Ph.D.), developed AZT, an antiviral compound used in the treatment of AIDS Edward Israel (AB 1881), astronomer and Arctic explorer Diane Larsen-Freeman (Ph.D), linguist Zachary J. Lemnios (COE: BSEE), Director of Defense Research and Engineering; former Chief Technology Officer at MIT Lincoln Laboratory Armin O. 
Leuschner (BS Math 1888), astronomer at Berkeley, first graduate student at Lick Observatory; devised a simplification of differential corrections; improved the methodology for determining the courses of planetoids and comets; oversaw a survey of all the known minor planets; founded the Astronomy Department at Berkeley and served as director of its student observatory for 40 years, which was renamed in his honor days after his death; James Craig Watson Medal 1916; Bruce Medal 1936; American Astronomical Society; namesake of Asteroid 1361 Leuschneria Yuei-An Liou, Distinguished Professor and Director at the Center for Space and Remote Sensing Research, National Central University, Taiwan Jane Claire Marks – conservation ecologist and educator, Professor of Aquatic Ecology at Northern Arizona University, lead scientist in the PBS documentary, A River Reborn: The Restoration of Fossil Creek J. Ward Moody (Ph.D. 1986), professor of astronomy at Brigham Young University (BYU), principal author of the textbook Physical Science Foundations, work on the large-scale structure of the universe. Homer A. Neal (PhD 1966), Director of the ATLAS Project; board member, Ford Motors (1997–); Smithsonian Institution Board of Regents Harald Herborg Nielsen (Ph.D.), physicist who performed pioneering research in molecular infrared spectra Antonia Novello (MED: 1974), first female US Surgeon General James Arthur Oliver (MSC 1937; Ph.D. 1942), herpetologist; director of the Bronx Zoo, the American Museum of Natural History and the New York Aquarium; the only person ever to have held the directorship at all three institutions Donald Othmer (MSC 1925; Ph.D. 
1927), co-founded and co-edited the 27-volume Kirk-Othmer Encyclopedia of Chemical Technology in 1947; chairman of Polytechnic University Chemical Engineering Department (1937–1961); invented the Othmer still, which concentrated the acetic acid needed to produce cellulose acetate for motion picture film; awarded 40 patents at Kodak Raymond Pearl (Ph.D. 1902), one of the founders of biogerontology Henry Pollack (Ph.D. 1963), emeritus professor of geophysics at the University of Michigan Albert Benjamin Prescott (MED: 1864), chemist; dean of the school of pharmacy in 1876; director of the chemical laboratory in 1884; president of the American Chemical Society in 1886; president of the American Association for the Advancement of Science in 1891; president of the American Pharmaceutical Association in 1900 Edwin William Schultz (A.B. 1914), pathologist; Guggenheim award winner Shirley E. Schwartz (BS 1957), chemist and research scientist at General Motors Homi Sethna (M.A. 1946), former Chairman of Atomic Energy Commission of India; in 1976 became the first chairman of Maharashtra Academy of Sciences in Pune, Maharashtra Joseph Beal Steere (A.B. 1868), ornithologist Marie Tharp (MS Geology), oceanographic cartographer whose work paved the way for the theories of plate tectonics and continental drift Catherine Troisi (Ph.D. 1980), epidemiologist Juris Upatnieks (MSE EE 1965), with Emmett Leith created the first working hologram in 1962 Steven G. Vandenberg (Ph.D. 1955), behavior geneticist James McDonald Vicary, market researcher; pioneered the notion of subliminal advertising in 1957 James Craig Watson (BA 1857, MA 1859), astronomer, established the James Craig Watson Medal John V. Wehausen (BS 1934, MS 1935, Ph.D. 1938), researcher in hydrodynamics Nancy Wexler (Ph.D. 1974), geneticist, Higgins Professor of Neuropsychology at Columbia University Terry Jean Wilson (BS), Antarctic researcher Ta You Wu (Ph.D. 
1933), "the father of Chinese physics" Zhu Guangya (Chinese: 朱光亚) (Ph.D. 1950), nuclear physicist; academician of Chinese Academy of Sciences; vice chairman of 8th and 9th Chinese People's Political Consultative Conference; led the development of China's atomic and hydrogen bomb programs George Zweig (BS 1959), a graduate student when he published "the definitive compilation of elementary particles and their properties" in 1963, the work that led up to his theory about the existence of quarks in 1964; considered to have developed the theory of quarks independently of Murray Gell-Mann Kathleen Weston (BS Biology 1929), world-renowned toxicologist, worked on the Salk polio vaccine, taught from the Sunday school level to the medical school level for over 50 years National Medal of Science Laureates/National Medal of Technology and Innovation Fay Ajzenberg-Selove, German-American nuclear physicist; winner of the National Medal of Science Detlev Wulf Bronk, credited with establishing biophysics as a recognized discipline; winner of the National Medal of Science John W. Cahn, scientist, winner of the 1998 National Medal of Science Stanley Cohen (Ph.D.), biochemist; 1986 Nobel Prize Laureate in Physiology and Medicine; winner of the National Medal of Science Carl R. de Boor, German-American mathematician; winner of the National Medal of Science George Dantzig (M.A.), mathematician called by some the "father of linear programming"; winner of the National Medal of Science Harry George Drickamer, born Harold George Weidenthal, pioneer experimentalist in high-pressure studies of condensed matter; winner of the National Medal of Science Roger L. Easton (MDNG) was an American physicist who was the principal inventor and designer of the Global Positioning System, along with Ivan A. Getting and Bradford Parkinson. Awarded the National Medal of Technology and Innovation in 2006. Donald N. Frey (Ph.D.), Ford Motor Company product manager; National Medal of Technology winner Willis M. 
Hawkins, aeronautical engineer for Lockheed for over fifty years; winner of the National Medal of Science George W. Housner, authority on earthquake engineering; National Medal of Science laureate Clarence L. Johnson, system engineer, aeronautical innovator; winner of the National Medal of Science Isabella L. Karle, X-ray crystallographer; winner of the National Medal of Science Dr. Donald L. Katz (Ph.D.), chemist, chemical engineer; winner of the National Medal of Science Marshall Warren Nirenberg (Ph.D.), biochemist and geneticist; winner of the National Medal of Science Michael Posner, psychologist; winner of the National Medal of Science Claude E. Shannon, mathematician, electronic engineer, cryptographer; "the father of information theory"; winner of the National Medal of Science Isadore Singer, Institute Professor in the Department of Mathematics at MIT; winner of the National Medal of Science Stephen Smale, mathematician; winner of the National Medal of Science Karen K. Uhlenbeck, professor of mathematics; winner of the National Medal of Science Donald Dexter Van Slyke (Ph.D.), Dutch American biochemist; winner of the National Medal of Science Charles M. Vest (Ph.D.) was an American educator and engineer. He served as President of the Massachusetts Institute of Technology from October 1990 until December 2004. Winner, in 2006, of the National Medal of Technology and Innovation Sloan Research Fellows James Andreoni (born 1959 in Beloit, Wisconsin) is a Professor in the Economics Department of the University of California, San Diego where he directs the EconLab. John Avise (born 1948) is an American evolutionary geneticist, conservationist, ecologist and natural historian. Robert Berner (November 25, 1935 – January 10, 2015) was an American scientist known for his contributions to the modeling of the carbon cycle. Allan M. 
Collins is an American cognitive scientist, Professor Emeritus of Learning Sciences at Northwestern University's School of Education and Social Policy. Ralph Louis Cohen (born 1952) is an American mathematician, specializing in algebraic topology and differential topology. Michael D. Fried is an American mathematician working in the geometry and arithmetic of families of nonsingular projective curve covers. William L. Jungers (born November 17, 1948) is an American anthropologist, Distinguished Teaching Professor and the Chair of the Department of Anatomical Sciences at State University of New York at Stony Brook on Long Island, New York. Jeffrey MacKie-Mason is an American economist specializing in information, incentive-centered design and public policy. Gaven Martin FRSNZ FASL FAMS (born 8 October 1958) is a New Zealand mathematician. George J. Minty Jr. (September 16, 1929, Detroit – August 6, 1986, Bloomington, Indiana) was an American mathematician, specializing in mathematical analysis and discrete mathematics. He is known for the Klee-Minty cube and the Browder-Minty theorem. Alison R. H. Narayan (born 1984) is an American chemist and the William R. Roush assistant professor in the Department of Chemistry at the University of Michigan College of Literature, Science, and the Arts. Homer Neal (June 13, 1942 – May 23, 2018) was an American particle physicist and a distinguished professor at the University of Michigan. Hugh David Politzer (/ˈpɑːlɪtsər/; born August 31, 1949) is an American theoretical physicist and the Richard Chace Tolman Professor of Theoretical Physics at the California Institute of Technology. Jessica Purcell is an American mathematician specializing in low-dimensional topology whose research topics have included hyperbolic Dehn surgery and the Jones polynomial. Donald Sarason (January 26, 1933 – April 8, 2017) was an American mathematician who made fundamental advances in the areas of Hardy space theory and VMO. 
Stephen Smale (born July 15, 1930) is an American mathematician, known for his research in topology, dynamical systems and mathematical economics. Richard Smalley (June 6, 1943 – October 28, 2005) was the Gene and Norman Hackerman Professor of Chemistry and a Professor of Physics and Astronomy at Rice University. Karen E. Smith (born 1965 in Red Bank, New Jersey) is an American mathematician, specializing in commutative algebra and algebraic geometry. James Stasheff (born January 15, 1936, New York City) is an American mathematician. Chelsea Walton is a mathematician whose research interests include noncommutative algebra, noncommutative algebraic geometry, symmetry in quantum mechanics, Hopf algebras, and quantum groups. Zhouping Xin (Chinese: 辛周平; born 13 July 1959) is a Chinese mathematician and the William M.W. Mong Professor of Mathematics at the Chinese University of Hong Kong. He specializes in partial differential equations. Sports See List of University of Michigan sporting alumni References NOTE: The University of Michigan Alumni Directory is no longer printed, as of 2004. To find more recent information on an alumnus, you must log into the Alumni Association website to search their online directory. External links University of Michigan Alumni Famous U-M Alumni Alumni association of the University of Michigan UM Alumni Information University of Michigan alumni
51332086
https://en.wikipedia.org/wiki/Magic%20Software%20Enterprises
Magic Software Enterprises
Magic Software Enterprises Ltd is a global enterprise software company headquartered in Or Yehuda, Israel. It is listed on the NASDAQ Global Select (NASDAQ: MGIC) and is also listed on the Tel Aviv Stock Exchange TA-100 Index. History Magic Software Enterprises was founded in 1983 by David Assia and Yaki Dunietz as a spin-off from "Mashov Computers", a publicly traded Israeli company that provided business solutions on microcomputers. The new company was originally named "Mashov Software Export (MSE)", and developed software for the global market, specifically an application generator named Magic. Mashov’s major innovation was a metadata-driven approach to programming that required no compiling or linking, and also allowed instantaneous debugging. The Magic platform was originally designed and developed by Jonathan (Yoni) Hashkes, along with Miko Hasson who was responsible for programme management. During the 1980s, the company grew on the strength of sales of its DOS and UNIX products. The product was used by many large organizations, including the Israel Defense Forces. In 1991, the company changed its name to "Magic Software Enterprises" (retaining the acronym MSE) and became the first Israeli software company to go public on the NASDAQ. During this period, the company developed a close relationship with IBM, focusing on AS/400 systems. In mid-1995, the first version of Magic for Windows was released. In 1998, Magic was acquired by the Formula Group, headed by Dan Goldstein. In February 2000, it raised over $100 million and traded at a company valuation of $1 billion. In 2001, Magic released "eDeveloper" (Rohan), a graphical, rules-based, and event-driven framework that offered a pre-compiled engine for database business tasks and a wide variety of generic runtime services and functions. In February 2001, Menachem Hasfari replaced Jack Dunietz as CEO after a series of failures that led the company to post two successive profit warnings. 
In 2003, Magic released the "iBOLT" integration platform. In 2007, Guy Bernstein was appointed chairman of the board at Magic Software Enterprises, replacing David Assia. Prior to that, Bernstein had served as Chief Financial and Operations Officer of Magic Software since 1999. In July 2008, the company released the first version of the "uniPaaS" application platform, replacing eDeveloper. Guy Bernstein was appointed CEO of Magic Software Enterprises in April 2010. In 2011, Magic released a .NET version of uniPaaS, and launched a new offering for enterprise mobility. In May 2012, Magic launched a company-wide rebranding, including new product names and a new logo and tagline. uniPaaS was renamed "Magic xpa Application Platform" and iBOLT was renamed "Magic xpi Integration Platform". The Magic xpi Integration Platform was enhanced to include In-Memory Data Grid (IMDG) technology from GigaSpaces. In 2016, Magic entered into an agreement to acquire a 60% equity interest in Roshtov Software Industries for $21 million with an option to acquire 100% of the equity in Roshtov, developer of the Clicks platform used in patient-file oriented software solutions for managed care and healthcare providers. See also Formula Systems List of Israeli companies quoted on the Nasdaq References Software companies of Israel
60647995
https://en.wikipedia.org/wiki/ISBAT%20University
ISBAT University
ISBAT University (ISBAT), whose complete name is International Business, Science And Technology University, is a chartered university in Uganda. Location The university's campus is located in a modern high-rise building at 11A Rotary Avenue, off Lugogo Bypass Road, on Kololo Hill, in the Central Division of Kampala, Uganda's capital and largest city. The geographical coordinates of ISBAT University are 00°20'04.0"N, 32°36'05.0"E (Latitude:0.334444; Longitude:32.601389). History The university was established in 2005 as a post-secondary (tertiary) educational institution affiliated with Sikkim Manipal University of India. At that time, all courses and academic awards belonged to and were awarded by the Indian university. In 2013 ISBAT began awarding its own degrees and changed its classification to "Other Degree Awarding Institution" (ODAI). In 2015, the university moved to its present campus off Lugogo Bypass Road. In 2016, ISBAT was elevated to a full university, having met the requirements of the Uganda National Council for Higher Education (UNCHE). On 18 November 2019, the university acquired charter status from the UNCHE. Overview As of May 2019, ISBAT University is associated with UCAM University Spain and with Manipal University Dubai. It is a member of the Association of Commonwealth Universities (ACU), the Association to Advance Collegiate Schools of Business (AACSB), and the International Assembly for Collegiate Business Education (IACBE). 
Academics As of January 2018, the following courses were on offer at ISBAT University: Undergraduate Bachelor of Science in Applied Information Technology Bachelor of Science in Multimedia and Animations Bachelor of Business Administration Bachelor of Commerce in Information Systems Bachelor of Science in Applied Accounting Bachelor of Hotel Management Bachelor of Human Resource Management Bachelor of Science in Applied Economics Bachelor of Science in Networking and Cyber Security Bachelor of Tourism and Hospitality Management Diploma in Business Administration Diploma in Hardware and Networking Diploma in Information Technology Postgraduate Postgraduate Diploma in Information Technology Master of Science in Information Technology Master of Business Administration Community social responsibility In April and May 2019, ISBAT University hosted a neurosurgery, pediatric cardiology and oncology medical camp run by physicians and surgeons from Apollo Hospitals Enterprise Limited from India. The visiting doctors offered free consultations to over 200 patients. See also Education in Uganda List of universities in Uganda Ugandan university leaders References External links Website of ISBAT University Kampala Central Division Kampala District Educational institutions established in 2005 Universities and colleges in Uganda 2005 establishments in Uganda
65681031
https://en.wikipedia.org/wiki/Extended%20detection%20and%20response
Extended detection and response
Extended detection and response (XDR) is a cybersecurity technology that monitors and mitigates cyber security threats. Concept The term was coined by Nir Zuk of Palo Alto Networks in 2018. The 'X' in 'XDR' stands for "extended". Gartner defines XDR as “a SaaS-based, vendor-specific, security threat detection and incident response tool that natively integrates multiple security products into a cohesive security operations system that unifies all licensed components.” Improved protection, detection capabilities, productivity, and lower ownership costs are the primary advantages of XDR. The system works by collecting and correlating data across various network points such as servers, email, cloud workloads, and endpoints. The system analyzes the correlated data, giving it visibility and context and revealing advanced threats. Thereafter, the threats are prioritized, analyzed, and sorted to prevent security breaches and data loss. The XDR system helps organizations achieve a higher level of cyber awareness, enabling cyber security teams to identify and eliminate security vulnerabilities. XDR improves on the malware detection and antivirus capabilities of the endpoint detection and response (EDR) system. It builds on EDR capabilities to deploy high-grade security solutions by utilizing current technologies that proactively identify and collect security threats, and by employing strategies to detect future cyber security threats. It is an alternative to reactive endpoint protection solutions, such as EDR and network traffic analysis (NTA). See also Endpoint security Data loss prevention software Endpoint detection and response References Security technology
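The collect, correlate, and prioritize workflow described above can be illustrated with a minimal sketch. This is not any vendor's implementation: the event fields (source, entity, signal) and the scoring rule (rank each affected entity by how many distinct telemetry sources flagged it) are illustrative assumptions only.

```python
from collections import defaultdict

# Hypothetical alerts gathered from several control points; the field
# names below are illustrative, not a real XDR product's schema.
events = [
    {"source": "email",    "entity": "host-7", "signal": "phishing attachment"},
    {"source": "endpoint", "entity": "host-7", "signal": "suspicious process"},
    {"source": "cloud",    "entity": "host-7", "signal": "unusual API call"},
    {"source": "endpoint", "entity": "host-2", "signal": "suspicious process"},
]

def correlate(events):
    """Group alerts by affected entity and rank by breadth of evidence."""
    by_entity = defaultdict(list)
    for ev in events:
        by_entity[ev["entity"]].append(ev)
    # An entity flagged by more distinct sources is treated as a
    # higher-confidence threat and is ranked first.
    return sorted(
        by_entity.items(),
        key=lambda kv: len({e["source"] for e in kv[1]}),
        reverse=True,
    )

for entity, evs in correlate(events):
    print(entity, "flagged by", len({e["source"] for e in evs}), "source(s)")
```

Here host-7, seen by email, endpoint, and cloud telemetry, outranks host-2, which only one source flagged; the point is that correlating across layers surfaces threats no single product sees in full.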
14672753
https://en.wikipedia.org/wiki/Taxation%20in%20China
Taxation in China
Taxes provide the most important revenue source for the Government of the People's Republic of China. Tax is a key component of macro-economic policy, and greatly affects China's economic and social development. With the changes made since the 1994 tax reform, China has sought to set up a streamlined tax system geared to a socialist market economy. China's tax revenue came to 11.05 trillion yuan (1.8 trillion U.S. dollars) in 2013, up 9.8 per cent over 2012. Tax revenue in 2015 was 12,488.9 billion yuan. In 2016, tax revenue was 13,035.4 billion yuan. Tax revenue in 2017 was 14,436 billion yuan. In 2018, tax revenue was 15,640.1 billion yuan, an increase of 1204.1 billion yuan over the previous year. The 2017 World Bank "Doing Business" rankings estimated that China's total tax rate for corporations was 68% as a percentage of profits through direct and indirect tax. As a percentage of GDP, according to the State Administration of Taxation, overall tax revenues were 30% in China. The government agency in charge of tax policy is the Ministry of Finance. For tax collection, it is the State Administration of Taxation. As part of a US$586 billion economic stimulus package in November 2008, the government planned to reform VAT, stating that the plan could cut corporate taxes by 120 billion yuan. Types of taxes Under the current tax system in China, there are 26 types of taxes, which, according to their nature and function, can be divided into the following 8 categories: Turnover taxes. This includes three kinds of taxes, namely, Value-Added Tax, Consumption Tax and Business Tax. The levy of these taxes is normally based on the volume of turnover or sales of the taxpayers in the manufacturing, circulation or service sectors. Income taxes. 
This includes Enterprise Income Tax (effective prior to 2008, applicable to such domestic enterprises as state-owned enterprises, collectively owned enterprises, private enterprises, joint operation enterprises and joint equity enterprises) and Individual Income Tax. These taxes are levied on the basis of the profits gained by producers or dealers, or the income earned by individuals. Please note that the new Enterprise Income Tax Law of the People's Republic of China has replaced the above two enterprise taxes as of 1 January 2008. Resource taxes. This consists of Resource Tax and Urban and Township Land Use Tax. These taxes are applicable to the exploiters engaged in natural resource exploitation or to the users of urban and township land. These taxes reflect the chargeable use of state-owned natural resources, and aim to adjust the different profits derived by taxpayers who have access to different availability of natural resources. Taxes for special purposes. These taxes are City Maintenance and Construction Tax, Farmland Occupation Tax, Fixed Asset Investment Orientation Regulation Tax, Land Appreciation Tax, and Vehicle Acquisition Tax. These taxes are levied on specific items for special regulative purposes. Property taxes. This encompasses House Property Tax, Urban Real Estate Tax, and Inheritance Tax (not yet levied). Behavioural taxes. This includes Vehicle and Vessel Usage Tax, Vehicle and Vessel Usage License Plate Tax, Stamp Tax, Deed Tax, Securities Exchange Tax (not yet levied), Slaughter Tax and Banquet Tax. These taxes are levied on specified behaviour. Agricultural taxes. Taxes belonging to this category are Agriculture Tax (including Agricultural Specialty Tax) and Animal Husbandry Tax which are levied on the enterprises, units and/or individuals receiving income from agriculture and animal husbandry activities. Customs duties. 
Customs duties are imposed on the goods and articles imported into and exported out of the territory of the People's Republic of China, including Excise Tax. Tax characteristics Compared with other forms of distribution, taxation has the characteristics of being compulsory, gratuitous and fixed, which are customarily called the "three characteristics" of taxation. Compulsory The compulsory nature of taxation means that taxation is imposed by the state as a social administrator, by virtue of its power and political authority, through the enactment of laws or decrees. The social groups and members who are obliged to pay taxes must comply with the compulsory tax decree of the state. Within the limits stipulated by the national tax law, taxpayers must pay taxes according to the law, otherwise they will be sanctioned by the law, which is the embodiment of the legal status of taxes. The compulsory character is reflected in two aspects: on the one hand, the establishment of tax distribution relations is compulsory, i.e. tax collection is entirely by virtue of the political power of the state; on the other hand, the process of tax collection is compulsory, i.e. if there is a tax violation, the state can impose punishment according to the law. Gratuitousness The gratuitous nature of taxation means that through taxation, a part of the income of social groups and members of society is transferred to the state, and the state does not pay any remuneration or consideration to the taxpayer. This gratuitous nature of taxation is connected with the essence of income distribution by the state by virtue of its political power. The gratuitousness is reflected in two aspects: on the one hand, it means that the government does not have to pay any remuneration directly to the taxpayers after receiving the tax revenues; on the other hand, it means that the tax revenues collected by the government are no longer returned directly to the taxpayers. 
The gratuitous nature of taxation is the essence of taxation, which reflects a unilateral transfer of ownership and dominance of social products, rather than an exchange relationship of equal value. The gratuitous nature of taxation is an important characteristic that distinguishes tax revenue from other forms of fiscal revenue. Fixedness The fixed nature of taxation refers to the fact that taxation is levied in accordance with the standards stipulated by the state law, i.e. the taxpayers, tax objects, tax items, tax rates, valuation methods and time periods are pre-defined by the taxation law and have a relatively stable trial period, making taxation a fixed and continuous source of revenue. For the pre-defined standard of taxation, both the taxing and tax-paying parties must jointly abide by it, and neither the taxing nor the tax-paying parties can violate or change this fixed rate or amount or other institutional provisions unless it is revised or adjusted by the state law. Summing up The three basic features of taxation are a unified whole. Among them, compulsoriness is the strong guarantee that taxes can be collected gratuitously, gratuitousness is the embodiment of the essence of taxation, and fixedness is the inevitable requirement of the other two. Taxation Functions Taxation is divided into national tax and local tax. Local taxes are further divided into: resource tax, personal income tax, individual incidental income tax, land value-added tax, urban maintenance and construction tax, vehicle and vessel use tax, property tax, slaughter tax, urban land use tax, fixed asset investment direction adjustment tax, enterprise income tax, stamp duty, etc. Taxes are mainly used for national defense and military construction, national civil servants' salary payment, road traffic and urban infrastructure construction, scientific research, medical and health epidemic prevention, culture and education, disaster relief, environmental protection and other fields. 
The functions and roles of taxation are the concrete manifestation of the essence of taxation functions. Generally speaking, taxation has several important basic functions as follows: Organizing finance Taxation is a form of distribution in which the government participates in social distribution by virtue of state coercive power and concentrates part of the surplus products (whether in monetary form or in physical form). The organization of state revenue is the most basic function of taxation. Regulating the economy The participation of the government in social distribution by means of state coercive power necessarily changes the share of social groups and their members in the distribution of national income, reducing their disposable income, but this reduction is not equal, and this gain or loss of interest will affect the economic activity capacity and behavior of taxpayers, which in turn has an impact on the social and economic structure. The government uses this influence to purposefully guide the socio-economic activities and thus rationalize the socio-economic structure. Monitoring the economy In the process of collecting and obtaining revenues, the state must build on the basis of intensive daily tax administration, specifically grasp the sources of taxes, understand the situation, find out problems, supervise taxpayers paying taxes in accordance with the law, and fight against violations of tax laws and regulations, thus supervising the direction of social and economic activities and maintaining the order of social life. The role of taxation is the effect of the tax function under certain economic conditions. 
The role of taxation is to reflect the fair tax burden and promote equal competition; to regulate the total economic volume and maintain economic stability; to reflect the industrial policy and promote structural adjustment; to reasonably adjust the distribution and promote common prosperity; to safeguard the rights and interests of the country and promote the opening up to the outside world, etc. Tax legislation Tax law, that is, the legal system of taxation, is the general name of the legal norms that adjust tax relations and is an important part of the national law. It is a legal code based on the Constitution, which adjusts the relationship between the state and members of the society in terms of rights and obligations in taxation, maintains the social and economic order and taxation order, protects the national interests and the legitimate rights and interests of taxpayers, and is a rule of conduct for the state taxation authorities and all tax units and individuals to collect taxes according to the law. Tax laws can be classified in different ways according to their legislative purposes, taxation objects, rights and interests, scope of application and functional roles. Generally, tax laws are divided into two categories: substantive tax laws and procedural tax laws according to the different functions of tax laws. Tax legal relationship is reflected as the relationship between state tax collection and taxpayers' benefit distribution. In general, tax legal relations, like other legal relations, are composed of three aspects: subject, object and content. The elements of tax law are the basic elements of the taxation system, which are reflected in the various basic laws enacted by the state. They mainly include taxpayers, tax objects, tax items, tax rates, tax deductions and exemptions, tax links, tax deadlines and violations. Among them, taxpayers, tax objects and tax rates are the basic factors of a taxation system or a basic composition of taxation. 
The Law of the People's Republic of China on Tax Collection and Administration stipulates that taxpayers must apply for tax declaration at taxation authorities within the prescribed declaration period. By virtue of the power granted by the state power, the tax authorities collect taxes from taxpayers in the name of the state power. If a taxpayer steals tax, owes tax, cheats tax or resists tax, the tax authorities shall recover the tax and late-payment charges and impose fines according to the law, and those who violate the criminal law shall also be criminally punished by the judicial authorities. Tax evasion is a taxpayer's intentional violation of the tax law by deception, concealment and other means (such as forgery, alteration, concealment, unauthorized destruction of books and bookkeeping vouchers, false tax declaration, etc.) to not pay or underpay the tax due. Tax arrears is the failure of a taxpayer to pay tax on time, defaulting on the payment beyond the tax payment period approved by the taxation authority. Tax fraud is the act of taxpayers or other personnel obtaining state export tax refunds by false export declarations or other deceptive means. In China, export tax refund is to refund or exempt the VAT and consumption tax paid or payable by the taxpayer for the exported goods in the domestic production and circulation links. Export tax refund is an international practice. Tax resistance is the refusal of a taxpayer to pay tax by violence or threat. Those who gather a crowd, threaten or besiege taxation authorities and beat taxation cadres, and refuse to pay taxes, are tax resisters. State organs that have the authority to formulate tax laws or tax policy include the National People's Congress and its Standing Committee, the State Council, the Ministry of Finance, the State Administration of Taxation, the Tariff and Classification Committee of the State Council, and the General Administration of Customs. 
Tax laws are enacted by the National People's Congress, e.g., the Individual Income Tax Law of the People's Republic of China, or by its Standing Committee, e.g., the Tax Collection and Administration Law of the People's Republic of China. Administrative regulations and rules concerning taxation are formulated by the State Council, e.g., the Detailed Rules for the Implementation of the Tax Collection and Administration Law of the People's Republic of China, the Detailed Regulations for the Implementation of the Individual Income Tax Law of the People's Republic of China, and the Provisional Regulations of the People's Republic of China on Value Added Tax. Departmental rules concerning taxation are formulated by the Ministry of Finance, the State Administration of Taxation, the Tariff and Classification Committee of the State Council, and the General Administration of Customs, e.g., the Detailed Rules for the Implementation of the Provisional Regulations of the People's Republic of China on Value Added Tax and the Provisional Measures for Voluntary Reporting of the Individual Income Tax.

The formulation of tax laws follows four steps: drafting, examination, voting and promulgation. The formulation of tax administrative regulations and rules likewise follows four steps: planning, drafting, verification and promulgation. All of these steps take place in accordance with laws, regulations and rules. In addition, Chinese law stipulates that, within the framework of the national tax laws and regulations, some local tax regulations and rules may be formulated by the People's Congress at the provincial level and its Standing Committee, the People's Congresses of minority nationality autonomous prefectures, and the People's Governments at the provincial level. The following table summarises the current tax laws, regulations, rules and relevant legislation in China.
Current Tax Legislation Table

Note: The provisions on criminal responsibility in the Supplementary Rules of the Standing Committee of the NPC of the People's Republic of China on Penalizing Tax Evasions and Refusal to Pay Taxes and the Resolutions of the Standing Committee of the NPC of the People's Republic of China on Penalizing Any False Issuance, Forgery and/or Illegal Sales of VAT Invoices have been integrated into the Criminal Law of the People's Republic of China, revised and promulgated on 14 March 1997.

Foreign investment taxation

There are 14 kinds of taxes currently applicable to enterprises with foreign investment, foreign enterprises and/or foreigners, namely: Value Added Tax, Consumption Tax, Business Tax, Income Tax on Enterprises with Foreign Investment and Foreign Enterprises, Individual Income Tax, Resource Tax, Land Appreciation Tax, Urban Real Estate Tax, Vehicle and Vessel Usage License Plate Tax, Stamp Tax, Deed Tax, Slaughter Tax, Agriculture Tax, and Customs Duties. Compatriots from Hong Kong, Macau and Taiwan, overseas Chinese, and the enterprises they invest in are taxed by reference to the taxation of foreigners, enterprises with foreign investment and/or foreign enterprises.
In an effort to encourage the inward flow of funds, technology and information, China provides numerous preferential treatments in foreign taxation, and by July 1999 had concluded tax treaties with 60 countries: Japan, the US, France, the UK, Belgium, Germany, Malaysia, Norway, Denmark, Singapore, Finland, Canada, Sweden, New Zealand, Thailand, Italy, the Netherlands, Poland, Australia, Bulgaria, Pakistan, Kuwait, Switzerland, Cyprus, Spain, Romania, Austria, Brazil, Mongolia, Hungary, Malta, the UAE, Luxembourg, South Korea, Russia, Papua New Guinea, India, Mauritius, Croatia, Belarus, Slovenia, Israel, Vietnam, Turkey, Ukraine, Armenia, Jamaica, Iceland, Lithuania, Latvia, Uzbekistan, Bangladesh, Yugoslavia, Sudan, Macedonia, Egypt, Portugal, Estonia, and Laos, 51 of which had come into force.

Urban and Township Land Use Tax

(1) Taxpayers
The taxpayers of Urban and Township Land Use Tax include all enterprises, units, individual household businesses and other individuals (excluding enterprises with foreign investment, foreign enterprises and foreigners).

(2) Tax payable per unit
The tax payable per unit is differentiated by region, i.e., the annual amount of tax payable per square meter is: 0.5-10 yuan for large cities, 0.4-8 yuan for medium-sized cities, 0.3-6 yuan for small cities, and 0.2-4 yuan for mining districts. Upon approval, the tax payable per unit may be lowered to some extent for poor areas or raised for developed areas.

(3) Computation
The amount of tax payable is computed on the basis of the actual size of the land occupied by the taxpayer, multiplied by the specified applicable tax payable per unit.
The formula is:

Tax payable = Size of land occupied × Tax payable per unit

(4) Major exemptions
Tax exemptions may be given for: land occupied by governmental organs, people's organizations and military units for their own use; land occupied for their own use by units financed by institutional allocations from the financial departments of the State; land occupied by religious temples, parks and historic scenic spots for their own use; land for public use occupied by municipal administrations, squares and green land; land directly used for production in agriculture, forestry, animal husbandry and fishery; land used for water reservation and protection; and land occupied for energy and transportation development upon approval of the State.

City Maintenance and Construction Tax

(1) Taxpayers
Enterprises of any nature, units, individual household businesses and other individuals (excluding enterprises with foreign investment, foreign enterprises and foreigners) who are obliged to pay Value Added Tax, Consumption Tax and/or Business Tax are the taxpayers of City Maintenance and Construction Tax.

(2) Tax rates and computation of tax payable
Differential rates are adopted: 7% for city areas, 5% for county and township areas, and 1% for other areas. The tax is based on the actual amount of VAT, Consumption Tax and/or Business Tax paid by the taxpayer, and is paid together with those three taxes. The formula for calculating the amount of tax payable is:

Tax payable = Tax base × Applicable tax rate

Fixed Assets Investment Orientation Regulation Tax

(1) Taxpayers
This tax is imposed on enterprises, units, individual household businesses and other individuals who invest in fixed assets within the territory of the People's Republic of China (excluding enterprises with foreign investment, foreign enterprises and foreigners).
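The City Maintenance and Construction Tax computation described above (a location-dependent rate applied to the VAT, Consumption Tax and Business Tax actually paid) can be sketched as follows. This is an illustrative sketch; the function and dictionary names are invented for the example, not taken from any official source.

```python
# Sketch of the City Maintenance and Construction Tax computation.
# Rates from the text: 7% in city areas, 5% in county/township areas,
# 1% elsewhere. The tax base is the sum of VAT, Consumption Tax and
# Business Tax actually paid.
RATES = {"city": 0.07, "county_township": 0.05, "other": 0.01}

def city_maintenance_tax(vat_paid: float, consumption_tax_paid: float,
                         business_tax_paid: float, area: str) -> float:
    """Tax payable = Tax base x Applicable tax rate."""
    base = vat_paid + consumption_tax_paid + business_tax_paid
    return base * RATES[area]

# A firm in a city area that paid 100,000 yuan of VAT and 20,000 yuan
# of Consumption Tax owes 120,000 x 7% = 8,400 yuan.
print(city_maintenance_tax(100_000, 20_000, 0, "city"))  # 8400.0
```

Because the tax rides on the three turnover taxes, it is remitted together with them rather than filed separately, which is why the sketch takes the amounts actually paid as inputs.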
(2) Taxable items and tax rates
Table of Taxable Items and Tax Rates. (* For some residential building investment projects, the rate is 5%.)

(3) Computation of tax payable
This tax is based on the total investment actually put into fixed assets. For renewal and transformation projects, the tax is imposed on the investment in the completed part of the construction project. The formula for calculating the tax payable is:

Tax payable = Amount of investment completed or amount of investment in construction project × Applicable rate

Land Appreciation Tax

(3) Computation of tax payable
To calculate the amount of Land Appreciation Tax payable, the first step is to arrive at the appreciation amount derived by the taxpayer from the transfer of real estate, which equals the balance of the proceeds received by the taxpayer on the transfer after deducting the sum of the relevant deductible items. The amount of tax payable is then calculated separately for each part of the appreciation, applying the rate corresponding to the ratio of the appreciation amount to the sum of the deductible items. The sum of the amounts of tax payable for the different parts of the appreciation is the full amount of tax payable. The formula is:

Tax payable = Σ (Part of appreciation × Applicable rate)

(4) Major exemptions
Land Appreciation Tax is exempt where the appreciation amount on the sale of ordinary standard residential buildings constructed by taxpayers for sale does not exceed 20% of the sum of the deductible items, and where real estate is taken over or repossessed in accordance with the law because of the construction requirements of the State.

Urban Real Estate Tax

(1) Taxpayers
At present, this tax is applied only to enterprises with foreign investment, foreign enterprises and foreigners, and is levied on house property only. Taxpayers are the owners, mortgagees, custodians and/or users of house property.
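The progressive Land Appreciation Tax formula above sums a separate tax over each slice of the appreciation, with brackets defined as multiples of the deductible items. The four-bracket schedule used below (30% up to 50% of deductible items, 40% up to 100%, 50% up to 200%, 60% above that) is the commonly published schedule and is not stated in this section, so treat it as an assumption; the code is an illustrative sketch of the Σ formula, not an official calculator.

```python
# Sketch of: Tax payable = sum(Part of appreciation x Applicable rate).
# Bracket thresholds are multiples of the sum of deductible items.
# These rates are an assumption, not taken from this section.
BRACKETS = [(0.5, 0.30), (1.0, 0.40), (2.0, 0.50), (float("inf"), 0.60)]

def land_appreciation_tax(proceeds: float, deductibles: float) -> float:
    appreciation = proceeds - deductibles
    tax, lower = 0.0, 0.0
    for multiple, rate in BRACKETS:
        upper = min(appreciation, multiple * deductibles)
        if upper > lower:
            tax += (upper - lower) * rate
            lower = upper
    return tax

# Appreciation of 120% of deductibles spans three brackets:
# 50% @ 30% + 50% @ 40% + 20% @ 50% (all as shares of deductibles).
print(land_appreciation_tax(2_200_000, 1_000_000))  # 450000.0
```

The slice-by-slice loop mirrors the text's instruction to tax "different parts of the appreciation" at their own rates and then sum the parts.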
(2) Tax base, tax rates and computation of tax payable
Two different rates are applied to two different bases: a rate of 1.2% is applied to the value of house property, and a rate of 18% is applied to rental income from the property. The formula for calculating House Property Tax payable is:

Tax payable = Tax base × Applicable rate

(3) Major exemptions and reductions
Newly constructed buildings are exempt from the tax for three years commencing from the month in which the construction is completed. Renovated buildings for which the renovation expenses exceed one half of the cost of new construction of such buildings are exempt from the tax for two years commencing from the month in which the renovation is completed. Other house property may be granted tax exemption or reduction for special reasons by the People's Government at provincial level or above.

Vehicle and Vessel Usage License Plate Tax

(1) Taxpayers
At present, this tax is applied only to enterprises with foreign investment, foreign enterprises, and foreigners. The users of the taxable vehicles and vessels are the taxpayers of this tax.

(2) Tax amount per unit
The tax amount per unit differs for vehicles and vessels:
a. Tax amount per unit for vehicles: 15-80 yuan per passenger vehicle per quarter; 4-15 yuan per net tonnage per quarter for cargo vehicles; 5-20 yuan per motorcycle per quarter; 0.3-8 yuan per non-motored vehicle per quarter.
b. Tax amount per unit for vessels: 0.3-1.1 yuan per net tonnage per quarter for motorized vessels; 0.15-0.35 yuan per non-motorized vessel.

(3) Computation
The tax base for vehicles is the quantity or the net tonnage of the taxable vehicles. The tax base for vessels is the net tonnage or the deadweight tonnage of the taxable vessels. The formula for computing the tax payable is:
a. Tax payable = Quantity (or net tonnage) of taxable vehicles × Applicable tax amount per unit
b.
Tax payable = Net tonnage (or deadweight tonnage) of taxable vessels × Applicable tax amount per unit

(4) Exemptions
a. Tax exemptions may be given for the vehicles used by embassies and consulates in China, and the vehicles used by diplomatic representatives, consuls, administrative and technical staff, and their spouses and minor children living together with them.
b. Tax exemptions may be given, as stipulated in some provinces and municipalities, for fire engines, ambulances, water-sprinkling vehicles and similar vehicles of enterprises with foreign investment and foreign enterprises.

Individual income tax

From October 1, 2018, the monthly tax exemption is 5,000 RMB for both residents and non-residents.

Taxable income = Income - Tax exemption

The monthly tax formula is:

Tax = (Taxable income × Tax rate) - Quick deduction

Example: ((10,000 - 5,000) × 10%) - 210 = 290 RMB in taxes

Note that both the tax rate and the quick deduction are determined by the income after the tax exemption, i.e., by the taxable income.

Tax governance

As of 2007, a paper reported that about two-thirds of tax revenue was spent at the local level and that "the ratio of central revenue to total tax revenues reached a low of 22 per cent in 1993, before rising to the 50 per cent level following the 1994 tax reform".

Malware

Companies operating in China are required to use tax software from either Baiwang or Aisino (a subsidiary of China Aerospace Science and Industry Corporation), and highly sophisticated malware has been found in products from both vendors. Both sets of malware allowed the theft of corporate secrets and other industrial espionage.

GoldenSpy

GoldenSpy was discovered in 2020 inside Aisino's Intelligent Tax software. It allows system-level access, giving an attacker nearly full control over an infected system. It was also discovered that the Intelligent Tax software's uninstall feature would leave the malware in place if used.
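Returning to the individual income tax formula above: the worked example (a 10,000 RMB monthly income yielding 290 RMB of tax) can be reproduced with a short sketch. The bracket table is not given in this section, so the 10% rate and 210 quick deduction from the example are passed in as parameters rather than looked up; the function name is illustrative.

```python
def monthly_iit(income: float, rate: float, quick_deduction: float,
                exemption: float = 5000.0) -> float:
    """Tax = (taxable income * rate) - quick deduction,
    where taxable income = income - exemption (5,000 RMB since
    October 1, 2018, per the text)."""
    taxable = income - exemption
    return taxable * rate - quick_deduction

# Reproduces the worked example from the text:
# ((10,000 - 5,000) * 10%) - 210 = 290 RMB.
print(monthly_iit(10_000, 0.10, 210))  # 290.0
```

The quick deduction is what makes a single multiplication equivalent to summing the tax over each progressive bracket, which is why rate and deduction must come from the same bracket row for the taxable income.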
After GoldenSpy was discovered, its creators attempted to scrub it from infected systems to cover their tracks. The uninstaller was delivered directly through the tax software, and a second, more sophisticated version of the uninstaller was later deployed as well.

GoldenHelper

GoldenHelper was discovered after GoldenSpy. It is an equally sophisticated malware program that was part of the Golden Tax Invoicing software from Baiwang, which is used by all companies in China to pay VAT. Although it was discovered after GoldenSpy, GoldenHelper had been operating for longer, indicating that Chinese tax software had been harboring malware for much longer than suspected.

See also

State Administration of Taxation
General Administration of Customs
Ministry of Finance
List of Chinese administrative divisions by tax revenues
Tax-Sharing Reform of China in 1994

References

An Overview of China's Tax System. State Administration of Taxation, 27 October 2007.
Tax System of the People's Republic of China. Beijing Local Taxation Bureau.

Further reading

Denis V. Kadochnikov (2019) Fiscal decentralization and regional budgets' changing roles: a comparative case study of Russia and China, Area Development and Policy, DOI: 10.1080/23792949.2019.1705171

History

Huang, R. Taxation and Governmental Finance in Sixteenth Century Ming China (Cambridge U. Press, 1974)

External links

The Economist. China's tax system. April 12, 2007.
51793
https://en.wikipedia.org/wiki/Dave%20Cutler
Dave Cutler
David Neil Cutler Sr. (born March 13, 1942) is an American software engineer. He developed several computer operating systems, namely Microsoft's Windows NT, and Digital Equipment Corporation's RSX-11M, VAXELN, and VMS.

Personal history

Cutler was born in Lansing, Michigan and grew up in DeWitt, Michigan. After graduating from Olivet College, Michigan, in 1965, he went to work for DuPont. Cutler holds at least 20 patents, and is affiliate faculty in the Computer Science Department at the University of Washington. Cutler is an avid auto racing driver. He competed in the Atlantic Championship from 1996 to 2002, scoring a career-best 8th place on the Milwaukee Mile in 2000. Cutler was elected a member of the National Academy of Engineering in 1994 for the design and engineering of commercially successful operating systems. He is a member of the Adelphic Alpha Pi fraternity at Olivet College, Michigan.

DuPont (1965 to 1971)

Cutler's first exposure to computers came when he was tasked with building a computer simulation model for one of DuPont's customers using IBM's GPSS-3 language on an IBM model 7044. This work led to an interest in how computers and their operating systems worked.

Digital Equipment Corporation (1971 to 1988)

Cutler left DuPont to pursue his interest in computer systems, beginning with Digital Equipment Corporation in 1971. He worked at the famous "Mill" facility in Maynard, Massachusetts.

RSX-11M

See RSX-11.

VMS

In April 1975, Digital began a hardware project, code-named Star, to design a 32-bit virtual address extension to its PDP-11. In June 1975, Cutler, together with Dick Hustvedt and Peter Lipman, was appointed technical project leader of the software project, code-named Starlet, to develop a totally new operating system for the Star family of processors. The two projects were tightly integrated from the beginning.
The three technical leaders of the Starlet project, together with the three technical leaders of the Star project, formed the "Blue Ribbon Committee" at Digital, which produced the fifth design evolution for the programs. The design featured simplifications to the memory management and process scheduling schemes of the earlier proposals, and the architecture was accepted. The Star and Starlet projects culminated in the development of the VAX-11/780 superminicomputer and the VAX/VMS operating system, respectively.

PRISM and MICA projects

Digital began working on a new CPU using reduced instruction set computer (RISC) design principles in 1986. Cutler, who was working at DEC's DECwest facility in Bellevue, Washington, was selected to head PRISM, a project to develop the company's RISC machine. Its operating system, code-named MICA, was to embody the next generation of design principles and have a compatibility layer for Unix and VMS. The RISC machine was to be based on emitter-coupled logic (ECL) technology, and was one of three ECL projects Digital was undertaking at the time. Funding the research and development (R&D) costs of multiple ECL projects yielding products that would ultimately compete against each other was a strain, and of the three ECL projects, the VAX 9000 was the only one directly commercialized. Primarily because of the early successes of the PMAX advanced development project and the need for differing business models, PRISM was canceled in 1988 in favor of PMAX. PRISM later surfaced as the basis of Digital's Alpha family of computer systems.

Attitude towards Unix

Cutler is known for his disdain for Unix.

Microsoft (1988 to present)

Microsoft Windows NT

Cutler left Digital for Microsoft in October 1988 and led the development of Windows NT. Later, he worked on targeting Windows NT to Digital's 64-bit Alpha architecture, then on Windows 2000.
After the demise of Windows on Alpha (and of Digital itself), he was instrumental in porting Windows to AMD's new 64-bit AMD64 architecture. He was officially involved with the Windows XP Pro x64 and Windows Server 2003 SP1 x64 releases. He moved to working on Microsoft's Live Platform in August 2006. Cutler was awarded the prestigious status of Senior Technical Fellow at Microsoft.

Microsoft Windows Azure

At the 2008 Professional Developers Conference, Microsoft announced the Azure Services Platform, a cloud-based operating system that Microsoft was developing. During the conference keynote, Cutler was mentioned as a lead developer on the project, along with Amitabh Srivastava.

Microsoft Xbox

A spokesperson for Microsoft later confirmed that Cutler was no longer working on Windows Azure and had joined the Xbox team. No further information was provided as to Cutler's role, nor what he was working on within the team. In May 2013, Microsoft announced the Xbox One console, and Cutler was mentioned as having worked on developing the host OS part of the system running inside the new gaming device. His work was apparently focused on creating an optimized version of Microsoft's Hyper-V host OS specifically designed for the Xbox One.

Awards

Recognized as a 2007 National Medal of Technology and Innovation Laureate, awarded on 29 September 2008 at a White House ceremony in Washington, DC.
Honored as a Computer History Museum Fellow on 16 April 2016 at the Computer History Museum in Mountain View, California.
References Bibliography External links Dave Cutler video on his career as part of his Computer History Museum Fellow award on YouTube Dave Cutler race driving career statistics 1942 births Living people American computer programmers American computer scientists Microsoft technical fellows Microsoft Windows people Digital Equipment Corporation people Kernel programmers Atlantic Championship drivers People from Lansing, Michigan Racing drivers from Michigan Operating system people Olivet Comets football players
675311
https://en.wikipedia.org/wiki/Donald%20Davies
Donald Davies
Donald Watts Davies (7 June 1924 – 28 May 2000) was a Welsh computer scientist who was employed at the UK National Physical Laboratory (NPL). In 1965 he conceived of packet switching, which is today the dominant basis for data communications in computer networks worldwide. Davies proposed a commercial national network in the United Kingdom and designed and built the local-area NPL network to demonstrate the technology. Many of the wide-area packet-switched networks built in the 1970s were similar "in nearly all respects" to his original 1965 design. The ARPANET project credited Davies for his influence, which was key to the development of the Internet. Davies' work was independent of that of Paul Baran in the United States, who had a similar idea in the early 1960s and who also worked on the ARPANET.

Early life

Davies was born in Treorchy in the Rhondda Valley, Wales. His father, a clerk at a coalmine, died a few months later, and his mother took Donald and his twin sister back to her home town of Portsmouth, where he went to school. He attended the Southern Grammar School for Boys. He received a BSc degree in physics (1943) at Imperial College London, and then joined the war effort, working as an assistant to Klaus Fuchs on the Tube Alloys nuclear weapons project at Birmingham University. He then returned to Imperial, taking a first-class degree in mathematics (1947); he was also awarded the Lubbock Memorial Prize as the outstanding mathematician of his year. In 1955, he married Diane Burton; they had a daughter and two sons.

Career history

National Physical Laboratory

From 1947, he worked at the National Physical Laboratory (NPL), where Alan Turing was designing the Automatic Computing Engine (ACE) computer. It is said that Davies spotted mistakes in Turing's seminal 1936 paper On Computable Numbers, much to Turing's annoyance.
These were perhaps some of the first "programming" bugs in existence, even if they were for a theoretical computer, the universal Turing machine. The ACE project was overambitious and foundered, leading to Turing's departure. Davies took over the project and concentrated on delivering the less ambitious Pilot ACE computer, which first worked in May 1950. A commercial spin-off, the DEUCE, was manufactured by English Electric Computers and became one of the best-selling machines of the 1950s. Davies also worked on applications of traffic simulation and machine translation. In the early 1960s, he worked on government technology initiatives designed to stimulate the British computer industry.

Packet switching

In 1965, Davies developed the idea of packet switching: dividing computer messages into packets that are routed independently across a network, possibly via differing routes, and reassembled at the destination. Davies chose the word "packets" after consulting with a linguist because it could be translated into languages other than English without compromise. Davies' key insight was the realisation that computer network traffic is inherently "bursty", with periods of silence, compared with relatively constant telephone traffic. He designed and proposed a commercial national data network based on packet switching in his 1966 Proposal for the Development of a National Communications Service for On-line Data Processing. In 1966 he returned to the NPL at Teddington, just outside London, where he headed and transformed its computing activity. He had become interested in data communications following a visit to the Massachusetts Institute of Technology, where he saw that a significant problem with the new time-sharing computer systems was the cost of keeping a phone connection open for each user. Davies was the first to describe the concept of an "interface computer", in 1966, today known as a router.
He and his team were among the first to use the term 'protocol' in a data-communication context, in 1967. The NPL team also carried out simulation work on packet networks, including datagram networks.

His work on packet switching, presented by his colleague Roger Scantlebury, initially caught the attention of the developers of ARPANET, a US defence network, at the Symposium on Operating Systems Principles in October 1967. In his report following the conference, Scantlebury noted: "It would appear that the ideas in the NPL paper at the moment are more advanced than any proposed in the USA". Larry Roberts of the Advanced Research Projects Agency in the United States applied Davies' concepts of packet switching in the late 1960s for the ARPANET, which went on to become a predecessor to the Internet. These early years of computer resource sharing were documented in the 1972 film Computer Networks: The Heralds of Resource Sharing. Davies first presented his own ideas on packet switching at a conference in Edinburgh on 5 August 1968.

At NPL, Davies helped build a packet-switched network (the Mark I NPL network). It was replaced with the Mark II in 1973, and remained in operation until 1986, influencing other research in the UK and Europe, including Louis Pouzin's CYCLADES project in France. Unbeknown to him, Paul Baran of the RAND Corporation in the United States had been working on a similar concept; when Baran became aware of Davies's work, he was happy to acknowledge that Davies had independently come up with the same idea, and said as much in an e-mail to Davies. Leonard Kleinrock, a contemporary working on analysing message flow using queueing theory, developed a theoretical basis for the operation of message switching networks in his PhD thesis during 1961-62, published as a book in 1964.
However, Kleinrock's later claim to have developed the theoretical basis of packet switching networks is disputed, including by Robert Taylor, Baran and Davies. Davies and Baran are recognized by historians and the U.S. National Inventors Hall of Fame for independently inventing the concept of digital packet switching used in modern computer networking, including the Internet.

Internetworking

Davies, along with his deputy Derek Barber and Roger Scantlebury, conducted research into protocols for internetworking. They participated in the International Networking Working Group from 1972, chaired initially by Vint Cerf and later by Derek Barber. Davies and Scantlebury were acknowledged by Bob Kahn and Vint Cerf in their 1974 paper on internetworking, "A Protocol for Packet Network Intercommunication". Davies and Barber published "Communication networks for computers" in 1973. They spoke at the Data Communications Symposium in 1975 about the "battle for access standards" between datagrams and virtual circuits, with Barber saying that the "lack of standard access interfaces for emerging public packet-switched communication networks is creating 'some kind of monster' for users". For a long period, the network engineering community was polarized over the implementation of competing protocol suites, a debate commonly called the Protocol Wars; it was unclear which type of protocol would result in the best and most robust computer networks. Internetworking experiments at NPL under Davies included connecting with the European Informatics Network by translating between two different host protocols, and connecting with the Post Office Experimental Packet Switched Service using a common host protocol in both networks. Their research confirmed that establishing a common host protocol would be more reliable and efficient than translating between different host protocols using a gateway. Davies and Barber published "Computer networks and their protocols" in 1979.
Computer network security

Davies relinquished his management responsibilities in 1979 to return to research. He became particularly interested in computer network security. Together with David O. Clayden, he designed the Message Authenticator Algorithm (MAA) in 1983, one of the first message authentication code algorithms to gain widespread acceptance. It was adopted as international standard ISO 8731-2 in 1987. He retired from NPL in 1984, becoming a leading consultant on data security to the banking industry and publishing a book on the topic that year.

Later career

In 1987, Davies became a visiting professor at Royal Holloway and Bedford New College.

Awards and honours

Davies was appointed a Distinguished Fellow of the British Computer Society (BCS) in 1975, was made a CBE in 1983, and was elected a Fellow of the Royal Society in 1987. He received the John Player Award from the BCS in 1974, and was awarded a medal by the John von Neumann Computer Society in Hungary in 1985. In 2000, Davies shared the inaugural IEEE Internet Award. In 2007, he was inducted into the National Inventors Hall of Fame, and in 2012 he was inducted into the Internet Hall of Fame by the Internet Society. NPL sponsors a gallery, opened in 2009, about the development of packet switching and the "Technology of the Internet" at The National Museum of Computing. A blue plaque commemorating Davies was unveiled in Treorchy in July 2013.

Family

Davies was survived by his wife Diane, a daughter and two sons.

See also

History of the Internet
Internet in the United Kingdom § History
Internet pioneers

Books

with W. Price, D. Barber, C. Solomonides

References

External links

Oral history interview with Donald W. Davies, Charles Babbage Institute, University of Minnesota. Davies describes computer projects at the UK National Physical Laboratory, from the 1947 design work of Alan Turing to the development of the two ACE computers.
Davies discusses a much larger, second ACE, and the decision to contract with English Electric Company to build the DEUCE—possibly the first commercially produced computer in Great Britain. Biography from the History of Computing Project Donald Davies profile page at NPL A Tribute to Donald Davies (1924–2000) UK National Physical Laboratory (NPL) & Donald Davies from Living Internet Computer Networks: The Heralds of Resource Sharing, documentary ca. 1972 about the ARPANET. Includes footage of Donald W. Davies (at 19m20s). 1924 births 2000 deaths Alumni of Imperial College London Commanders of the Order of the British Empire Fellows of the British Computer Society Fellows of the Royal Society History of computing in the United Kingdom Internet pioneers Packets (information technology) People from Treorchy Recreational cryptographers Scientists of the National Physical Laboratory (United Kingdom) Welsh computer scientists Welsh inventors
30863144
https://en.wikipedia.org/wiki/Indian%20Institute%20of%20Information%20Technology%2C%20Allahabad
Indian Institute of Information Technology, Allahabad
Indian Institute of Information Technology, Allahabad (IIIT-Allahabad) is a public university located in Jhalwa, Allahabad (Prayagraj), in Uttar Pradesh. It is one of the nineteen Indian Institutes of Information Technology listed by the Ministry of Education (India).

History

The institute was established in 1999 and designated a "Deemed University" in 2000. In 2014 the IIIT Act was passed, under which IIITA and four other Institutes of Information Technology funded by the Ministry of Human Resource Development were classed as Institutes of National Importance. The first director of the institute was M.D. Tiwari, from 1999 to 2013. G.C. Nandi served as director in-charge for the first four months of 2014, until Somenath Biswas took over the directorship and served from May 2014 to July 2016. After another stint by Nandi as officiating director that lasted until May 2017, P. Nagabhushan was appointed director.

Rankings

IIIT-Allahabad was ranked 119 among BRICS nations by the QS World University Rankings of 2019 and 351-400 in Asia in 2020. It was ranked 103 among engineering colleges in India by the National Institutional Ranking Framework (NIRF) in 2020. It was also ranked 10 by India Today in 2020 and 18 by Outlook India in 2019.

Campus life

The institute has numerous student societies dedicated to music, dance, drama, literature, fine arts, photography, sports, technology and philanthropy. These societies organize co-curricular activities throughout the year, especially during college festivals. Effervescence is the annual cultural festival of the institute; it lasts for three days and includes music, dance, drama, art, debate, and other activities. Aparoksha is the annual technical festival of the institute, consisting of events such as Hack In The North, the biggest student-held hackathon in North India, and various coding, design, and robotics contests and workshops.
Asmita is the annual sports festival of the institute, in which tournaments for cricket, volleyball, swimming, athletics, etc. are held. References External links IIIT Allahabad website Allahabad Engineering colleges in Allahabad Educational institutions established in 1999 1999 establishments in Uttar Pradesh
42309877
https://en.wikipedia.org/wiki/Bwtech%40UMBC%20Research%20and%20Technology%20Park
Bwtech@UMBC Research and Technology Park
Bwtech@UMBC Research and Technology Park is the university research park for the University of Maryland, Baltimore County in Baltimore, Maryland. The research park has two campuses: bwtech@UMBC North, located just south of the main campus and adjacent to the campus gateway, and bwtech@UMBC South, located in UMBC's South Campus complex off of Rolling Road. History Established in 1991, bwtech@UMBC was the first university research park in the state of Maryland. The research park has grown considerably since then by expanding the North campus to include several new buildings. Programs Bwtech@UMBC focuses on four main areas: clean energy, cyber security, life sciences, and training programs. For clean energy, the research park houses the Maryland Clean Energy Technology Incubator (CETI). For cyber security, the research park includes a variety of programs, such as the Cyber Incubator, which utilizes the park's HUBZone status; the Cync program with Northrop Grumman; CyberHive, for government, military, and commercial interactions between cyber security companies; and CyberMap. References University of Maryland, Baltimore County Science parks in the United States
28072082
https://en.wikipedia.org/wiki/Pakistani%20missile%20research%20and%20development%20program
Pakistani missile research and development program
The missile research and development program was a secretive program of the Pakistan Ministry of Defence for the comprehensive research and development of guided missiles. Initiatives began in 1988 under the Benazir Bhutto government, in direct response to the equivalent program in India, and the effort was managed under the scrutiny of the Ministry of Defence in close coordination with other related institutions. The program focused on developing short- to medium-range missiles with a computer guidance system. The project started in 1988 and has since spawned several strategic missile systems capable of carrying both conventional and nonconventional payloads. In its early stage, the Hatf missiles were made feasible, as was the development of the Ghauri missile program. Further development led to the introduction of ballistic and cruise missiles by different scientific organizations in the 2000s. Program overview Planning and initiatives for the program began in 1987, based on intelligence estimates of the existence of the missile program of India, which was taking place under the Indian DRDO. According to memoirs written by former chief of army staff General Mirza Beg, the planning of the program began in 1987, involving many of the organizations associated with the Ministry of Defence. The program was delegated to the Space Research Commission, DESTO, KRL, and PAEC, all individually working on the program under the MoD and the MoDP. President Zia-ul-Haq held several national security meetings with MoD and MoST officials to give crucial authorization for the launch of the program in 1987. The major motivation for this program, according to a military official, was to counter India's indigenously developed Prithvi system, first successfully tested in 1988. Only the Hatf project was made operational in 1987–88. Restrictions and strict monitoring of technology transfer under the MTCR by a number of Western countries and the United States slowed the efforts for the program. 
In a direct technological race with India, the program was focused more towards indigenous development. The program was aggressively pursued by Prime Minister Benazir Bhutto's government, which, faced with a missile gap with India after the apparent testing of the Prithvi-I missile in 1990, strongly advocated and lobbied for the program's feasibility in the 1990s. From 1993 to 1995, the program focused on developing comprehensive short- to medium-range missile systems to deter the missile threat from India. The program picked up speed under the control of Prime Minister Benazir Bhutto, and its existence was kept under extreme secrecy. Crucial decisions taken by the Benazir government, and the technologies developed under her administration, ultimately resulted in the successful development of both short- and medium-range systems. Prime Minister Benazir Bhutto is described as the political "architect of Pakistan's missile technology" by Emily MacFarquhar of the Alicia Patterson Foundation. At a leftist convention held in 2014, former Prime Minister Yousaf Raza Gillani said, "Benazir Bhutto gave this country the much-needed missile technology". The program eventually expanded and diversified with the successful development of cruise missiles and other strategic-level arsenals in the early 2000s. Codenames Secretive codenames for projects concerning national security had been issued since the 1970s, and the military continued to do so, to promote the secrecy around the missile programme. The missile systems in this program were all given codenames by their respective organizations. However, all missiles were issued a single codename series, Hatf, for surface-to-surface guided ballistic missiles. This codename was selected by the research and development committee at the GHQ of the Pakistan Army. In Arabic, hatf, meaning "Target", refers to the sword of the Prophet Muhammad, which was used in many of his military conquests and had the unique distinction of never missing its target. 
The other unofficial names, such as Ghauri and Abdali, are taken from historical figures in the Islamic conquest of South Asia. Pakistan's missile systems are named after the powerful Turk warlords who invaded India from the historical region of Greater Khorasan (present-day Afghanistan and western Pakistan) between the 11th and 18th centuries in an attempt to expand their empires. Battle-range system The Hatf system (English tr.: "Vengeance") was the first project developed under this program, in the 1980s, and the project went to the Pakistan Army. Designed by the SRC and developed by the KRL, the program was seen as a counterpart to India's Prithvi, with three variants developed for use by the Pakistan Army. Classified under the BRBM class, the missile has been in service with the Pakistan Army since 1992. The Hatf–I is an unguided ballistic missile mounted on a TEL vehicle with a range of ; it is capable of carrying both conventional and nuclear payloads of . The programme was initially led by the SRC, which developed the Hatf–IA, an improved version with the same payload. Its final evolution led to the development of the Hatf–IB, which includes a proper computer inertial guidance system with an extended range. The program evolved into the final introduction of the Hatf–IV designation, with a maximum range of and a payload of 1,000 kg (2,200 lb), equipped with a computer inertial navigation system. In 2011, the NDC developed the latest battlefield-range system, seen as comparable to the Hatf–IV and widely believed to be a delivery system for small tactical nuclear weapons, codenamed Hatf–IX Nasr. Short–medium range development With the development of BRBM-type missiles, the program extended towards the creation of both short- and intermediate-range systems. 
Originally, Prime Minister Benazir Bhutto was interested in procuring Chinese M–11 missiles but cancelled the talks due to international pressure in the 1990s; China nevertheless secretly transferred the equipment and technology to Pakistan. After convincing arguments, the project went to the SRC in 1995 and development soon began. Codenamed Ghaznavi, it is the first solid-fuel short-range system, with a range of 600 km and a payload of 500 kg. The Ghaznavi system was tested in 1997 and is stated to have been a major breakthrough. The Ghaznavi is a two-stage solid-fuel missile with an advanced terminal guidance system and an onboard computer. DESTO designed five different types of warheads for the Ghaznavi, which can be delivered with a CEP of 0.1% at 600 km. It is believed that the design of the Ghaznavi was influenced by the Chinese M-11 missile, but military officials have claimed that the Ghaznavi was developed entirely in Pakistan. Under the same category of SRBM, the second project, codenamed Shaheen, was widely pursued and developed by the National Defence Complex (NDC), a spinoff of PAEC. The Shaheen is a series of solid-fuelled missiles, and the project extended to the SRC, NESCOM, DESTO, and Margalla Electronics. Despite many technological setbacks, and learning from India's developmental experience with the Agni-II, the project continued to evolve and produced the Shaheen-I, which entered service in 1999. The Shaheen project produced three variants that are considered to be in the MRBM range. The Shaheen-II has a range of 2,500 km with a capacity for a payload of 1,050 kg. Its third variant, the Shaheen-III, is rumored to be under development and is in the IRBM range. There is no official confirmation of the project, only media reports. In January 2017, the Ababeel, a development of the Shaheen-III with multiple independently targetable reentry vehicles (MIRV), was tested. 
This is a Shaheen-III airframe with an enlarged payload fairing and a slightly shorter range of 2,200 km. The intention of the MIRV system is to counteract Indian BMD. While the Shaheen was being developed in 1995, another parallel project was being run under the KRL. Codenamed Ghauri, the project aimed to develop a liquid-fuel ballistic missile. The Ghauri was based entirely on North Korea's Rodong-1, whose technology it heavily reflected. The project was supported by Benazir Bhutto, who consulted with North Korea on the project and facilitated the technology transfer to the KRL in 1993. According to military officials, the original design was flawed and the missile burned up on re-entry during its first test flight. The KRL was forced to perform heavy reverse engineering and had to redesign the entire missile. With scientific assistance from DESTO, the NDC, and NESCOM, the first missile, the Ghauri–I, was developed. It was successfully tested on 8 April 1998 and entered service. Like the Shaheen, the Ghauri evolved and produced the Ghauri-II, also in the 1990s. The Ghauri-II has a maximum range of 2,000 km (1,200 mi) with a payload of 1,200 kg. Under the Ghauri, a third variant, codenamed Ghauri III, was under development by the KRL. The Ghauri III was cancelled in 2000 despite the project having completed 50% of its work. Cruise systems In 2005 the Hatf VII Babur ground-launched cruise missile was revealed in a public test-firing. Early versions had a range of 500 km but later a 700 km variant was tested. In September 2012 a new launch vehicle was tested, as well as a new command and control system named the Strategic Command and Control Support System. It was stated that the SCCSS would give "decision-makers at the National Command Centre robust command and control capability of all strategic assets with round the clock situational awareness in a digitised network-centric environment." 
It was also stated that the Babur's guidance system uses terrain contour matching (TERCOM) and digital scene matching and area co-relation (DSMAC) techniques to achieve accuracy described as "pin-point". The new launch vehicle, a MAZ transporter erector launcher, is armed with three missile rounds launched vertically. In 2007 the Hatf VIII Ra'ad, an air-launched cruise missile (ALCM) was revealed in a test by the Pakistan Air Force. It has a stated range of 350 km. A flight test on 31 May 2012 was stated to have validated integration with the new Strategic Command and Control Support System (SCCSS), stated to be capable of remotely monitoring the missile's flight path in real time. It can avoid radar detection due to its low altitude trajectory. See also List of missiles of Pakistan References 1987 in Pakistan Missile defense Military projects of Pakistan Secret military programs Nuclear weapons programme of Pakistan History of science and technology in Pakistan Research and development in Pakistan Pakistani military exercises Nuclear history of Pakistan Research projects
41961631
https://en.wikipedia.org/wiki/Forfiles
Forfiles
forfiles is a computer software utility for Microsoft Windows, which selects files and runs a command on them. File selection criteria include name and last-modified date. The command specifier supports some special syntax options. It can be used directly on the command line, or in batch files or other scripts. The forfiles command was originally provided as an add-on in the Windows 98, Windows NT and Windows 2000 Resource Kits. It became a standard utility with Windows Vista, as part of the new management features. Usage The forfiles command has several command-line switches. If no switches or parameters are given, it outputs the name of every file in the current directory. Switches Command syntax The command string is executed as given, except as noted below. Sequences of the form 0xFF, where "0x" is literal and "FF" represents any two-digit hexadecimal number, are replaced with the corresponding single-byte value. This can be used to embed non-printing ASCII characters, or extended ASCII characters. The sequence \" is replaced with a literal quotation mark ("). Using the 0x sequence form described previously, 0x22 can also be used, which additionally hides the quotation mark from the command interpreter. Several variables are provided, to be used in the command as placeholders for the values from each file. Variables are technically not required, but must be used if the command is to vary for each file. Date syntax The date switch (/D) selects files based on their last-modified date, given a date argument. The date argument can be given as a literal date, in MM/DD/YYYY format (other date formats are not accepted). Alternatively, the date argument can be given as a number, in which case it is taken to mean an age in days (i.e., the day that many days before the present date). If the date argument begins with a minus (-), only files modified on or before the given date are selected (older files / modified earlier). 
Otherwise, only files modified on or after the given date are selected (younger files / modified later). An explicit plus (+) may be given, but is the default. Note that both modes select files on the given date. There is no way to select files only on a given date (without also either before or after). Examples The following command selects all log files (*.LOG) in the Windows directory 30 days or older, and lists them with their date. C:\>FORFILES /P C:\Windows /M *.LOG /D -30 /C "CMD /C ECHO @FDATE @FILE" 6/12/2015 "iis7.log" 5/28/2015 "msxml4-KB954430-enu.LOG" 5/28/2015 "msxml4-KB973688-enu.LOG" 5/26/2015 "setuperr.log" The following command would delete the same files. C:\>FORFILES /P C:\Windows /M *.LOG /D -30 /C "CMD /C DEL @PATH" The use of CMD /C is required in the above examples, as both ECHO and DEL are internal to the command processor, rather than external utility programs. See also cmd.exe – The program implementing the Windows command-line interpreter Foreach loop – The FOR and FORFILES commands both implement a for-each loop find (Unix) – Unix command that finds files by attribute, similar to forfiles find (Windows) – DOS and Windows command that finds text matching a pattern grep – Unix command that finds text matching a pattern, similar to Windows find References External links forfiles | Microsoft Docs Windows Vista Utility software Command-line software Windows administration
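The /D selection rules above (a literal MM/DD/YYYY date or an age in days, with a leading minus selecting files on or before the cutoff and a plus or no sign selecting files on or after it) can be sketched in Python. This is a simplified illustrative model of the selection logic, not the actual forfiles implementation; the function names are hypothetical:

```python
from datetime import date, datetime, timedelta

def parse_date_arg(arg, today=None):
    """Resolve a forfiles-style /D argument to (cutoff_date, select_older).

    A leading '-' selects files modified on or before the cutoff;
    '+' (or no sign) selects files modified on or after it. The value
    is either a literal MM/DD/YYYY date or an age in days.
    """
    today = today or date.today()
    select_older = arg.startswith("-")
    value = arg.lstrip("+-")
    if "/" in value:                      # literal date, MM/DD/YYYY only
        cutoff = datetime.strptime(value, "%m/%d/%Y").date()
    else:                                 # age in days before today
        cutoff = today - timedelta(days=int(value))
    return cutoff, select_older

def matches(modified, arg, today=None):
    """Would a file with this last-modified date be selected by /D arg?"""
    cutoff, select_older = parse_date_arg(arg, today)
    return modified <= cutoff if select_older else modified >= cutoff

# '/D -30': files modified 30 or more days before the given "today"
today = date(2015, 6, 30)
print(matches(date(2015, 5, 28), "-30", today))  # True  (old enough)
print(matches(date(2015, 6, 25), "-30", today))  # False (too recent)
```

Note how both signs select files modified exactly on the cutoff date, matching the observation above that there is no way to select files only on a given date.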
44370146
https://en.wikipedia.org/wiki/List%20of%20applications%20using%20PKCS%2011
List of applications using PKCS 11
This article lists applications and other software implementations using the PKCS #11 standard. Applications FreeOTFE – disk encryption system (PKCS #11 can either be used to encrypt critical data block, or as keyfile storage) Mozilla Firefox – a web browser Mozilla Thunderbird – an email client OpenDNSSEC – a DNSSEC signer OpenSSL – TLS/SSL library (with engine_pkcs11) GnuTLS – TLS/SSL library Network Security Services library developed by Mozilla OpenVPN – VPN system StrongSwan – VPN system TrueCrypt – disk encryption system (PKCS #11 only used as trivial keyfile storage) TrouSerS – an open-source TCG Software Stack OpenSC – smartcard library OpenSSH – a Secure Shell implementation (since OpenSSH version 5.4) OpenDS – an open source directory server. Oracle Database – uses PKCS#11 for transparent data encryption IBM DB2 Database – uses PKCS#11 for transparent data encryption PowerDNS – open source, authoritative DNS server (since version 3.4.0) GNOME Keyring – a password and cryptographic key manager. Solaris Cryptographic Framework – pluggable cryptographic system in operating system Safelayer – KeyOne and TrustedX product suites. Pkcs11Admin – GUI tool for administration of PKCS#11 enabled devices SoftHSM – implementation of a cryptographic store accessible through a PKCS#11 interface XCA – X Certificate and key management SecureCRT – SSH client wolfSSL – an SSL/TLS library with PKCS #11 support XShell - SSH Client from NetSarang Computer, Inc (versions > 6.0 support PKCS#11) EJBCA – Certification Authority software (uses PKCS#11 for digital signatures) SignServer – Server side software for digitally signing and time stamping documents, files and code (uses PKCS#11 for digital signatures and key wrapping/unwrapping) PuTTY-CAC - A fork of PuTTY that supports smartcard authentication PKCS #11 wrappers Since PKCS #11 is a complex C API many wrappers exist that let the developer use the API from various languages. 
For Perl: Crypt::PKCS11 Crypt::NSS::PKCS11 Crypt::PKCS11::Easy Crypt::Cryptoki php-pkcs11 - PHP PKCS11 Extension including support of the OASIS PKCS11 standard NCryptoki - .NET (C# and VB.NET), Silverlight 5 and Visual Basic 6 wrapper for PKCS #11 API Pkcs11Interop - Open source .NET wrapper for unmanaged PKCS#11 libraries python-pkcs11 - The most complete and documented PKCS#11 wrapper for Python PyKCS11 - Another wrapper for Python pkcs11 - Another wrapper for Python Java includes a wrapper for PKCS #11 API since version 1.5 IAIK PKCS#11 Wrapper on GitHub - A library for the Java™ platform which makes PKCS#11 modules accessible from within Java. pkcs11-helper - A simple open source C interface to handle PKCS #11 tokens. SDeanComponents - Delphi wrapper for PKCS #11 API jacknji11 - Java wrapper using Java Native Access (JNA) rust-pkcs11 - Crate for Rust ruby-pkcs11 - Ruby binding for PKCS #11 API tclPKCS11 - Tcl binding for PKCS#11 API pkcs11.net - .NET wrapper for PKCS #11 API Oracle Solaris Cryptographic Framework pkcs11 - Go wrapper for PKCS #11 API node.js graphene - high level OOP wrapper for pkcs#11 pkcs11js - low level wrapper for pkcs#11 References Lists of software Cryptography lists and comparisons
24247277
https://en.wikipedia.org/wiki/Thacker%2C%20West%20Virginia
Thacker, West Virginia
Thacker is an unincorporated community in Mingo County, West Virginia, United States. Thacker is located along the Tug Fork across from the state of Kentucky. The community takes its name from nearby Thacker Creek. References Unincorporated communities in Mingo County, West Virginia Unincorporated communities in West Virginia Coal towns in West Virginia
1674557
https://en.wikipedia.org/wiki/Meta-process%20modeling
Meta-process modeling
Meta-process modeling is a type of metamodeling used in software engineering and systems engineering for the analysis and construction of models applicable and useful to some predefined problems. Meta-process modeling supports the effort of creating flexible process models. The purpose of process models is to document and communicate processes and to enhance the reuse of processes. Thus, processes can be better taught and executed. Results of using meta-process models are an increased productivity of process engineers and an improved quality of the models they produce. Overview Meta-process modeling focuses on and supports the process of constructing process models. Its main concern is to improve process models and to make them evolve, which, in turn, will support the development of systems. This is important because "processes change with time and so do the process models underlying them. Thus, new processes and models may have to be built and existing ones improved". "The focus has been to increase the level of formality of process models in order to make possible their enactment in process-centred software environments". A process meta-model is a meta model, "a description at the type level of a process model. A process model is, thus, an instantiation of a process meta-model. [..] A meta-model can be instantiated several times in order to define various process models. A process meta-model is at the meta-type level with respect to a process." Standards exist for several domains; in software engineering, the Software Process Engineering Metamodel (SPEM) is defined as a UML profile by the Object Management Group. Topics in metadata modeling There are different techniques for constructing process models. "Construction techniques used in the information systems area have developed independently of those in software engineering. 
In information systems, construction techniques exploit the notion of a meta-model and the two principal techniques used are those of instantiation and assembly. In software engineering the main construction technique used today is language-based. However, early techniques in both, information systems and software engineering were based on the experience of process engineers and were, therefore, ad hoc in nature." Ad hoc "Traditional process models are expressions of the experiences of their developers. Since this experience is not formalised and is, consequently, not available as a fund of knowledge, it can be said that these process models are the result of an ad hoc construction technique. This has two major consequences: it is not possible to know how these process models were generated, and they become dependent on the domain of experience. If process models are to be domain independent and if they are to be rapidly generable and modifiable, then we need to go away from experience based process model construction. Clearly, generation and modifiability relate to the process management policy adopted (see Usage World). Instantiation and assembly, by promoting modularization, facilitate the capitalisation of good practice and the improvement of given process models." Assembly The assembly technique is based on the idea of a process repository from which process components can be selected. Rolland (1998) lists two selection strategies: Promoting a global analysis of the project on hand based on contingency criteria (Example Van Slooten 1996) Using the notion of descriptors as a means to describe process chunks. This eases the retrieval of components meeting the requirements of the user / matching with the situation at hand. (Example Plihon 1995 in NATURE and repository of scenario based approaches accessible on Internet in the CREWS project) For the assembly technique to be successful, it is necessary that process models are modular. 
If the assembly technique is combined with the instantiation technique then the meta-model must itself be modular. Instantiation For reusing processes a meta-process model identifies "the common, generic features of process models and represents them in a system of concepts. Such a representation has the potential to 'generate' all process models that share these features. This potential is realised when a generation technique is defined whose application results in the desired process model." Process models are then derived from the process meta-models through instantiation. Rolland associates a number of advantages with the instantiation approach: The exploitation of the meta-model helps to define a wide range of process models. It makes the activity of defining process models systematic and versatile. It forces to look for and introduce, in the process meta-model, generic solutions to problems and this makes the derived process models inherit the solution characteristics. "The instantiation technique has been used, for example, in NATURE, Rolland 1993, Rolland 1994, and Rolland 1996. The process engineer must define the instances of contexts and relationships that comprise the process model of interest." Language Rolland (1998) lists numerous languages for expressing process models used by the software engineering community: E3 Various Prolog dialects for EPOS, Oikos, and PEACE PS-Algol for PWI as well as further computational paradigms: Petri nets in EPOS and SPADE Rule based paradigm in MERLIN ALF Marvel EPOS Triggers in ADELE and MVP-L. Languages are typically related to process programs whereas instantiation techniques have been used to construct process scripts. Tool support The meta-modeling process is often supported through software tools, called CAME tools (Computer Aided Method Engineering) or MetaCASE tools (Meta-level Computer Assisted Software Engineering tools). 
Often the instantiation technique "has been utilised to build the repository of Computer Aided Method Engineering environments". Example tools for meta-process modeling are: Maestro II MetaEdit+ Mentor Example: "Multi-model view" Colette Rolland (1999) provides an example of a meta-process model which utilizes the instantiation and assembly technique. In the paper the approach is called "Multi-model view" and was applied on the CREWS-L'Ecritoire method. The CREWS-L'Ecritoire method represents a methodical approach for Requirements Engineering, "the part of the IS development that involves investigating problems and requirements of the users community and developing a specification of the future system, the so-called conceptual schema.". Besides the CREWS-L'Ecritoire approach, the multi-model view has served as a basis for representing: (a) the three other requirements engineering approaches developed within the CREWS project, Real World Scenes approach, SAVRE approach for scenario exceptions discovery, and the scenario animation approach (b) for integrating approaches one with the other and with the OOSE approach Furthermore, the CREWS-L'Ecritoire utilizes process models and meta-process models in order to achieve flexibility for the situation at hand. The approach is based on the notion of a labelled graph of intentions and strategies called a map as well as its associated guidelines. Together, map (process model) and the guidelines form the method. The main source of this explanation is the elaboration of Rolland. Process model / map The map is "a navigational structure which supports the dynamic selection of the intention to be achieved next and the appropriate strategy to achieve it"; it is "a process model in which a nondeterministic ordering of intentions and strategies has been included. It is a labelled directed graph with intentions as nodes and strategies as edges between intentions. 
The directed nature of the graph shows which intentions can follow which one." The map of the CREWS-L'Ecritoire method looks as follows: The map consists of goals / intentions (marked with ovals) which are connected by strategies (symbolized through arrows). An intention is a goal, an objective that the application engineer has in mind at a given point of time. A strategy is an approach, a manner to achieve an intention. The connection of two goals with a strategy is also called a section. A map "allows the application engineer to determine a path from Start intention to Stop intention. The map contains a finite number of paths, each of them prescribing a way to develop the product, i.e. each of them is a process model. Therefore the map is a multi-model. It embodies several process models, providing a multi-model view for modeling a class of processes. None of the finite set of models included in the map is recommended 'a priori'. Instead the approach suggests a dynamic construction of the actual path by navigating in the map. In this sense the approach is sensitive to the specific situations as they arise in the process. The next intention and strategy to achieve it are selected dynamically by the application engineer among the several possible ones offered by the map. Furthermore, the approach is meant to allow the dynamic adjunction of a path in the map, i.e. adding a new strategy or a new section in the actual course of the process. In such a case guidelines that make available all choices open to handle a given situation are of great convenience. The map is associated to such guidelines". Guidelines A guideline "helps in the operationalisation of the selected intention"; it is "a set of indications on how to proceed to achieve an objective or perform an activity." The description of the guidelines is based on the NATURE project's contextual approach and its corresponding enactment mechanism. 
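The map structure just described (intentions as nodes, strategies as labelled directed edges, and each process model as one path from Start to Stop) can be sketched as a small data structure in Python. The sections listed below are an illustrative subset assembled from the CREWS-L'Ecritoire strategy names, not the complete method map:

```python
# A map: intentions are nodes, strategies are labelled directed edges.
# Each (source intention, strategy, target intention) triple is a "section".
sections = [
    ("Start", "template-driven strategy", "Elicit a goal"),
    ("Elicit a goal", "free prose", "Write a scenario"),
    ("Write a scenario", "manually", "Conceptualize a scenario"),
    ("Conceptualize a scenario", "refinement strategy", "Elicit a goal"),
    ("Conceptualize a scenario", "completeness strategy", "Stop"),
]

def next_choices(current_intention):
    """All (strategy, next intention) pairs open to the engineer now.

    This mirrors the dynamic navigation of the map: the next intention,
    and the strategy to achieve it, are chosen at run time, not fixed.
    """
    return [(s, tgt) for src, s, tgt in sections if src == current_intention]

def is_process_model(path):
    """Check that a sequence of sections forms a path from Start to Stop."""
    if not path or path[0][0] != "Start" or path[-1][2] != "Stop":
        return False
    return all(path[i][2] == path[i + 1][0] for i in range(len(path) - 1))

# After conceptualizing a scenario, two choices are open:
# refine into a new goal, or stop via the completeness strategy.
print(next_choices("Conceptualize a scenario"))
```

Because the map contains cycles (the refinement strategy leads back to goal elicitation), it embodies many such paths, which is exactly why it is called a multi-model.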
Three types of guidelines can be distinguished: Intention Selection Guidelines (ISG) identify the set of intentions that can be achieved in the next step and selects the corresponding set of either IAGs (only one choice for an intention) or SSGs (several possible intentions). Strategy Selection Guidelines (SSG) guide the selection of a strategy, thereby leading to the selection of the corresponding IAG. Intention Achievement Guidelines (IAG) aim at supporting the application engineer in the achievement of an intention according to a strategy, are concerned with the tactics to implement these strategies, might offer several tactics, and thus may contain alternative operational ways to fulfil the intention. In our case, the following guidelines – which correspond with the map displayed above – need to be defined: Intention Selection Guidelines (ISG) ISG-1 Progress from Elicit a goal ISG-2 Progress from Conceptualize a Scenario ISG-3 Progress from Write a scenario ISG-4 Progress from Start Strategy Selection Guidelines (SSG) SSG-1 Progress to Elicit a goal SSG-2 Progress to Conceptualize a Scenario SSG-3 Progress to Write a scenario SSG-4 Progress to Elicit a goal SSG-5 Progress to Stop Intention Achievement Guidelines (IAG) IAG-1 Elicit a goal with case-based strategy IAG-2 Elicit a goal with composition strategy IAG-3 Elicit a goal with alternative strategy IAG-4 Elicit a goal with refinement strategy IAG-5 Elicit a goal with linguistic strategy IAG-6 Elicit a goal with template-driven strategy IAG-7 Write a scenario with template-driven strategy IAG-8 Write a scenario in free prose IAG-9 Conceptualize a Scenario with computer support strategy IAG-10 Conceptualize a Scenario manually IAG-11 Stop with completeness strategy The following graph displays the details for the Intention Achievement Guideline 8 (IAG-8). Meta-process map In the multi-model view as presented in the paper of C. 
Rolland, the meta-process (the instance of the meta-process model) is "a process for the generation of a path from the map and its instantaneous enactment for the application at hand." While the meta-process model can be represented in many different ways, a map was chosen again as a means to do so. It is not to be mixed up with the map for the process model as presented above. Colette Rolland describes the meta-model as follows: (Meta-intentions are in bold, meta-strategies in italic – in green in the map.) "The Start meta-intention starts the construction of a process by selecting a section in the method map which has map intention Start as source. The Choose Section meta-intention results in the selection of a method map section. The Enact Section meta-intention causes the execution of the method map section resulting from Choose Section. Finally, the Stop meta-intention stops the construction of the application process. This happens when the Enact Section meta-intention leads to the enactment of the method map section having Stop as the target. As already explained in the previous sections, there are two ways in which a section of a method map can be selected, namely by selecting an intention or by selecting a strategy. Therefore, the meta-intention Choose Section has two meta-strategies associated with it, select intention and select strategy respectively. Once a method map section has been selected by Choose Section, the IAG to support its enactment must be retrieved; this is represented in [the graph] by associating the meta-strategy automated support with the meta-intention, Enact Section." Sample process The sample process "Eliciting requirements of a Recycling Machine" is about a method for designing the requirements of recycling facilities. The recycling facilities are meant for customers of a supermarket. The adequate method is obtained through instantiation of the meta-process model on the process model. 
The following table displays the stepwise trace of the process to elicit requirements for the recycling machine: See also Automatic programming Class-Responsibility-Collaboration card (CRC) Data mapping Data transformation Domain Specific Language (DSL) Domain-specific modeling (DSM) Eclipse (software) Generative programming (GP) Glossary of Unified Modeling Language terms Intentional Programming (IP) KM3 Language oriented programming (LOP) List of UML tools Metadata Meta-modeling technique Meta-Object Facility Method engineering Model Driven Engineering (MDE) Model Transformation Language (MTL) Model-based testing (MBT) Model-driven architecture (MDA) Modeling language Modeling perspectives Object Constraint Language (OCL) Object-oriented analysis and design (OOAD) MOF Queries/Views/Transformations (QVT) Semantic spectrum Semantic translation Software factory Transformation language (TL) UML tool Unified Modeling Language Vocabulary-based transformation XMI XML transformation language (XTL) References Systems engineering Unified Modeling Language Software development process Data modeling
1574038
https://en.wikipedia.org/wiki/Reciprocal%20Public%20License
Reciprocal Public License
The Reciprocal Public License (RPL) is a copyleft software license released in 2001. Version 1.5 of the license was published on July 15, 2007, and was approved by the Open Source Initiative as an open-source license. Description The RPL was authored in 2001 by Scott Shattuck, a software architect for Technical Pursuit Inc., for use with that company's TIBET product line. The RPL was inspired by the GNU General Public License (GPL) and authored to explicitly remove what the RPL's authors have referred to as the GPL's "privacy loophole". This privacy loophole allows recipients of GPL'd code to: make changes to source code which are never released to the open source community (by virtue of not deploying "to a third party"), and derive financial or other business benefit from that action, violating what some might consider a simple concept of "fairness". Because of its "viral" nature, the RPL is often found in dual-licensing models in which it is paired with more traditional closed-source licenses. This strategy allows software companies who use this model to present customers with a "pay with cash or pay with code" option, ensuring either the growth of the software directly through code contributions or indirectly through cash which can be used to fund further development. Reception The RPL was written to conform to the requirements of the Open Source Initiative to ensure that it met the goals for an Open Source license, and it is an approved open-source license. However, because of its requirements for reciprocation without exceptions, it is considered to be non-free by the Free Software Foundation. The license is used by Active Agenda, a risk-management web application, and NServiceBus, an asynchronous messaging library for the .NET/Mono platform. The RPL and GPL are used by the OPC Foundation under a dual-license scheme, where the former is used for members and the latter for non-members.
References External links The Reciprocal Public License 1.5 hosted by the Open Source Initiative The Reciprocal Public License 1.3 hosted by Technical Pursuit Inc. (archived link) The Reciprocal Public License 1.1 hosted by the Open Source Initiative The Reciprocal Public License 1.0 hosted by Technical Pursuit Inc. (archived link) Free and open-source software licenses Copyleft software licenses
8426293
https://en.wikipedia.org/wiki/Gatehouse%20School
Gatehouse School
Gatehouse School is a co-educational independent school based in Sewardstone Road in Bethnal Green in East London, educating pupils from the ages of three to eleven years. The youngest classes follow a Montessori-style education, but the influence of the national curriculum has brought the older classes more into line with mainstream schools. The school admits children from the full ability range, with an emphasis on the Arts, including visits to museums and theatres, as well as sports and outward bound activities. The school was founded in Smithfield, London by Phyllis Wallbank in 1948. It was housed in the Gatehouse of St Bartholomew-the-Great church in Smithfield but moved to Bethnal Green in the 1970s. The school has been run along the principles of the Montessori method developed by the educationalist Maria Montessori; it originally served children from 2 to 16 years of age and, at the time of its founding, was untraditional in its educational philosophy. The school's 60th anniversary in 2008 was marked by a service in the school's original home, St Bartholomew's in Smithfield. The school integrates children with a wide range of disabilities with able-bodied children. It follows the idea that true learning results from children exploring the world for themselves through play. It allows children to choose when to take their lessons during the week: a child is required to complete a certain number of lessons in Mathematics, English, Art, Geography, etc. per week, but may decide when to do them. Students also have free lessons where they can choose any subject they like. The balance of subjects is often weighted towards a child's aptitude or current interests. Children of different ages and abilities are taught in the same session, and their teachers 'sign pupils off' for the lessons they have completed.
Some older children (14/15-year-olds) can then take the amount of each subject they wish to do over the course of each week, resulting in some pupils spending the week doing 'what they want', e.g., Art/Monday, Geography/Tuesday, English/Wednesday, Biology/Thursday and then back to Art/Friday. After an hour for lunch, pupils have an hour to read followed by 'afternoon activities'. These include football, swimming, and visits to museums. The school has also kept two ponies, as well as a duck, for the children. It also has an old farm cottage just outside Clochan in Scotland. The Gatehouse School featured in several documentary programmes during the 1970s. The saxophonist and bon viveur Brian Hardy taught art and dance at the school. The actors Sophie Ward and Linus Roache were pupils. Bibliography Wallbank, Phyllis. "The Vocation of Teaching." The Sower: A Quarterly Magazine on Christian Formation. Wallbank, Phyllis. "Moral Teaching through Shakespeare's Tragedies." The Sower: A Quarterly Magazine on Christian Formation. Wallbank, Phyllis. "The Way we Learn." The Sower: A Quarterly Magazine on Christian Formation. Wallbank, Phyllis. "The Philosophy of International Education." Divyadaan: Journal of Philosophy and Education 12/2 (2001) 193–209. Wallbank, Phyllis. "Periods of Sensitivity within Human Lives." Divyadaan: Journal of Philosophy and Education 12/3 (2001) 337–384. Wallbank, Phyllis. "Savants." Divyadaan: Journal of Philosophy and Education 13/1 (2002) 137–140. Wallbank, Phyllis. "Time." Divyadaan: Journal of Philosophy and Education 14/1 (2003) 1–12. Wallbank, Phyllis. "Montessori and the New Century." Divyadaan: Journal of Philosophy and Education 14/2 (2003) 135–144. Wallbank, Phyllis. "A Universal Way of Education." Divyadaan: Journal of Philosophy and Education 15/3 (2004) 521–532. Wallbank, Phyllis. "Adolescence." Divyadaan: Journal of Philosophy and Education 18/1 (2007) 77–90. Wallbank, Phyllis. "Dr Maria Montessori: The Past, the Present and the Future."
Divyadaan: Journal of Philosophy and Education 18/2 (2007) 149–158. Wallbank, Phyllis. "A Montessori Journey: Phyllis Wallbank celebrates the life and work of Dr Montessori." Montessori International Magazine 83 (2007) 32–33. Wallbank, Phyllis. "Imagination." Divyadaan: Journal of Philosophy and Education 20/1 (2009) 107–108. Wallbank, Phyllis. "War and Time." Divyadaan: Journal of Philosophy and Education 20/2 (2009) 255–258. Coelho, Ivo. Review of Phyllis Wallbank and David Fleischacker, Worldwide Natural Education: Three Important Discussion Lectures by Phyllis Wallbank MBE and Dr David Fleischacker (set of 3 DVDs). Divyadaan: Journal of Philosophy and Education 18/2 (2007) 231–233. Curran, Eugene. "A Method and a Model: Maria Montessori and Bernard Lonergan on Adult Education." Divyadaan: Journal of Philosophy and Education 18/2 (2007) 165–204. Fleischacker, David. "Understanding the Four General Sensitive Phases of Human Development from Age 0–24: Maria Montessori, Phyllis Wallbank, and Bernard Lonergan." Divyadaan: Journal of Philosophy and Education 18/2 (2007) 205–222. Price, Patty Hamilton. "Phyllis Wallbank and Maria Montessori." Divyadaan: Journal of Philosophy and Education 18/2 (2007) 159–164. References 1948 establishments in England Bethnal Green Educational institutions established in 1948 Independent co-educational schools in London Independent schools in the London Borough of Tower Hamlets
36809506
https://en.wikipedia.org/wiki/World%20Computer%20Exchange
World Computer Exchange
World Computer Exchange (WCE) is a United States and Canada based charity organization whose mission is "to reduce the digital divide for youth in developing countries, to use our global network of partnerships to enhance communities in these countries, and to promote the reuse of electronic equipment and its ultimate disposal in an environmentally responsible manner." According to UNESCO, it is North America's largest non-profit supplier of tested used computers to schools and community organizations in developing countries. History WCE was founded in 1999 by Timothy Anderson. It is a non-profit organization. Its headquarters are in Hull, Massachusetts, and there are 15 chapters in the US and five in Canada. In 2015, WCE opened a chapter in Puerto Rico. By November 2002, the organization had shipped 4,000 computers to 585 schools in many developing countries. By October 2011, WCE and its partner organizations had shipped 30,000 computers and established 2,675 computer labs. In February 2012, the Boston Chapter sent out their 68th shipment, bringing their total to 13,503 computers. Activities WCE provides computers and technology, and the support to make them useful in developing communities. WCE delivers educational content and curriculum on agriculture, health, entrepreneurship, water, and energy. The program also ensures that teachers will know how to use the technology and content by providing staff and teacher training, as well as ongoing tech support. Each chapter of WCE collects donated computers and refurbishes and prepares them for shipment. They also raise funds to ship the computers. Volunteers inspect and repair each computer, then install the operating system and educational material onto each computer. WCE calls recipients of its computers "partners." Requests for computer donations originate from the partners. Once the refurbished computers are ready and the funds to ship them have been raised, WCE initiates shipment.
When possible, WCE coordinates shipments with other organizations, such as University of the People, Peace Corps, Computers4Africa.org, ADEA (Assoc. for the Development of Education in Africa) and others. In June 2013, the WCE Chicago chapter sent 400 computers to Mexico, and 300 to the Dominican Republic, with the help of 85 volunteers. In November 2015, WCE sent two Spanish speakers to Honduras for two weeks to pilot tech skills training for youth under a contract with World Vision. The WCE Computers for Girls (C4G) initiative is field-testing eight tools that provide technological training and STEM education for interested teachers helping their girl students in four African countries (Ghana, Liberia, Mali, and Zambia) and Pakistan. In September 2016, World Computer Exchange-Puerto Rico and 4GCommunity.org, two not-for-profit corporations, announced an alliance to improve public school and family access to technology where needed throughout Puerto Rico. eCorps To install computers at partner sites without access to experts, WCE recruits and supports volunteers from the USA under its eCorps initiative. To be eligible, volunteers must be 21 years of age, have the necessary tech skills, and be prepared to self-fund their travel and accommodation expenses. Eighteen training teams have worked in the Dominican Republic, Ethiopia, Georgia, Ghana, Honduras, Kenya, Liberia, Mali, Nepal, Nicaragua, Nigeria, the Philippines, Puerto Rico, Tanzania, and Zimbabwe. The "Travelers" program is geared towards those already planning to go to one of the countries in the WCE network, to provide tech support during their trip.
79 "Travelers" have visited 41 developing countries, including: Armenia, Bolivia, Cambodia, Cameroon, Democratic Republic of Congo, Dominican Republic, Ecuador, Ethiopia, Haiti, Honduras, India, Indonesia, Jordan, Kenya, Liberia, Malawi, Mexico, Namibia, Nepal, Pakistan, Palestine, Panama, Peru, Puerto Rico, Qatar, Senegal, Sierra Leone, South Africa, Swaziland, Tanzania, Togo, and Uganda. In 2015, "Travelers" visited Cambodia, Haiti, Honduras, Mexico, Puerto Rico, and South Africa. Computers WCE uses the Ubuntu operating system on its computers, citing the absence of license costs and the system's lower susceptibility to malware, while still providing a standard computing environment, including a word processor and printer drivers. Unlike One Laptop per Child, the computers do not contain specialized software. Each computer is loaded with educational materials to allow users to learn without an internet connection. See also Computer recycling Electronic waste in the United States Empower Up Free Geek Geekcorps Geeks Without Bounds Global digital divide ICVolunteers Inveneo NetCorps NetDay Nonprofit Technology Resources United Nations Information Technology Service (UNITeS) External links http://www.worldcomputerexchange.org References Computer recycling Charities Charities based in Massachusetts
34241014
https://en.wikipedia.org/wiki/New%20Mexico%20Department%20of%20Public%20Safety
New Mexico Department of Public Safety
The New Mexico Department of Public Safety (NMDPS) is a department within the New Mexico Governor's Cabinet. NMDPS is responsible for statewide law enforcement services, training, and disaster and emergency response. NMDPS also provides technical communications and forensics support to the public and other law enforcement agencies. NMDPS has the duty to provide for the protection and security of the governor and lieutenant governor. The department is led by the Secretary of Public Safety. The cabinet secretary is appointed by the governor, with the approval of the New Mexico Senate, to serve at his or her pleasure. History NMDPS was created by the enactment of the Department of Public Safety Act in 1986. The Department brought together the formerly independent New Mexico State Police, the Governor's Organized Crime Commission, the Motor Transportation Division of the Taxation and Revenue Department, the enforcement division of the Department of Alcoholic Beverage Control, and the New Mexico Law Enforcement Academy into a single unified entity. Overview The Department of Public Safety has two main mission areas: Program Support Services and Law Enforcement Programs. Program support Program Support Services consists of the Technical Support Division, the Office of the Secretary, the Office of Legal Affairs, the Information Technology Division, and the Administrative Services Division. These divisions support the operations of the Department and other law enforcement agencies across the State. Technical Support provides forensic science services through northern and southern laboratories. The Division also provides criminal history records information, a missing persons clearinghouse, sex offender registration tracking, and uniform crime reporting. The Information Technology Division operates the New Mexico Law Enforcement Telecommunications System and provides connectivity to 180 criminal justice agencies across the State.
Law enforcement Law Enforcement Program provides statewide law enforcement services. The Program area consists of the State Police Division which includes the Investigations Bureau (SIU), the Uniform Bureau and the Motor Transportation Bureau, and the Training and Recruitment Bureau. The New Mexico State Police enforces the criminal, civil and administrative laws of the State, in particular in smaller communities, rural areas, and the highways of the State. The Training and Recruitment Division operates the state Law Enforcement Academy which provides training to all law enforcement personnel in the State. Organization The head of NMDPS is the Cabinet Secretary of Public Safety. The cabinet secretary is appointed by the governor of New Mexico, with the approval of the New Mexico Senate, and serves as a member of the Governor's Cabinet. The cabinet secretary is assisted by a deputy secretary and seven division directors. Each of the division directors is appointed by the cabinet secretary with the approval of the governor. The sole exception is the Chief of the State Police, who is appointed by the cabinet secretary with the approval of the State Senate. The state police chief also serves as the DPS Deputy Secretary for Operations, the department's third highest-ranking official behind the cabinet secretary and deputy secretary. NMDPS is composed of eight divisions: State Police Division Motor Transportation Police Division (merged on July 1, 2015 with State Police as the Motor Transportation Bureau.) Special Investigations Division (merged on July 1, 2015 with State Police as the Special Investigations Unit within the Investigations Bureau.) 
Training and Recruitment Division Communications Division Technical Support Division Administrative Services Information Technology Division Overseen by NMDPS New Mexico Mounted Patrol New Mexico Search and Rescue Civilian volunteers Budget and Personnel See also Department of Public Safety External links Department of Public Safety official website Department of Public Safety Government agencies established in 1986 1986 establishments in New Mexico
13618905
https://en.wikipedia.org/wiki/2007%20Stanford%20vs.%20USC%20football%20game
2007 Stanford vs. USC football game
The 2007 Stanford vs. USC football game was an NCAA college football game held on October 6, 2007, at the Los Angeles Memorial Coliseum in Los Angeles, California. In a remarkable upset, the visiting Stanford Cardinal won 24–23 despite USC having been favored by 41 points entering the game. This result was the biggest point-spread upset of all time in college football (since surpassed by the Howard University Bison in 2017, who were 45-point underdogs heading into a road game against the UNLV Rebels). USC entered the game with a 35-game home winning streak (its previous home loss had also come against Stanford, in 2001), which included a 24-game home winning streak in Pac-10 play. By contrast, Stanford had compiled a Pac-10-worst 1–11 season in 2006, which included a 42–0 loss to USC. To compound the situation, Stanford's starting quarterback T. C. Ostrander had suffered a seizure the week before, and his replacement, backup quarterback Tavita Pritchard, had never started a game and had thrown just three passes in official play. Game summary The weather was sunny with a slight west wind. The game began at 4:09pm Pacific Daylight Time and ended at 7:36pm. Scoring

First quarter
06:25 USC – David Buehler 34 yd field goal (USC 3–0)

Second quarter
07:15 USC – Chauncey Washington 1 yd run (PAT blocked) (USC 9–0)

Third quarter
11:58 Stanford – Austin Yancy 31 yd interception return (Derek Belch kick) (USC 9–7)
02:54 USC – Fred Davis 63 yd pass from John David Booty (David Buehler kick) (USC 16–7)

Fourth quarter
14:54 Stanford – Anthony Kimble 1 yd run (Derek Belch kick) (USC 16–14)
11:04 USC – Ronald Johnson 47 yd pass from John David Booty (David Buehler kick) (USC 23–14)
05:43 Stanford – Derek Belch 26 yd field goal (USC 23–17)
00:48 Stanford – Mark Bradford 10 yd pass from Tavita Pritchard on fourth down and goal (Derek Belch kick) (Stanford 24–23)
The game-winning drive featured a 20-yard pass from Tavita Pritchard to future NFL star Richard Sherman on fourth-and-20 from the USC 29. Aftermath The final score was announced at the Rose Bowl, where USC's two arch-rivals, UCLA and Notre Dame, were playing each other. Irish and Bruins fans cheered in unison and celebrated together briefly. At the same time, at Tiger Stadium, where the #1 LSU Tigers were playing the #9 Florida Gators, the fans celebrated when the USC score was announced there, too. The Tigers would later come from behind to beat the Gators 28–24, making them #1 in both polls, with USC dropping from #1 in the coaches poll due to the loss. Stanford's victory was, for once, cheered by perennial rival California, which was ranked No. 3 in the nation at the time of USC's loss. USC's loss elevated California to its highest ranking in nearly six decades, and the Bears were primed to reach No. 1 for the first time since 1951 when top-ranked LSU lost in overtime to Kentucky on the same day California played Oregon State. California lost that game, however; after starting the season 5–0 and ranked No. 12, it finished 7–6 and unranked. The 2007 Big Game was also the only one in an eight-year stretch that California lost to Stanford. At the end of the regular season, Sports Illustrated chose the Stanford upset of USC as the second "Biggest Upset of 2007", after Division I FCS Appalachian State's 34–32 upset of #5 Michigan. In 1979, Stanford had pulled off a similar feat by coming back in the last four minutes to tie USC 21–21 on October 13. That game, considered one of the greatest of the 20th century, effectively cost USC a national championship. In the 2009 season, Stanford again defied the point spread by handing USC its worst defeat ever: Stanford won 55–21 despite USC being an 11-point favorite. The next year, in 2010, tenth-ranked Stanford defeated USC with a last-second field goal to win, 37–35.
In 2011, Stanford defeated USC once more, extending to three games its streak of wins at USC's home stadium; in a much closer game, Stanford won 56–48 in triple overtime. The following year, the Cardinal again faced a second-ranked USC team and defeated them 21–14, earning a fourth consecutive win over the Trojans, a first in school history. See also 2007 NCAA Division I FBS football season 2007 Appalachian State vs. Michigan football game References External links Stanford vs. USC Final Stats. October 6, 2007 USC Trojans Athletics official site Stanford vs. USC College football games Stanford Cardinal football games USC Trojans football games October 2007 sports events in the United States Stanford vs. USC 2007 in Los Angeles
2371482
https://en.wikipedia.org/wiki/Business%20analysis
Business analysis
Business analysis is a professional discipline of identifying business needs and determining solutions to business problems. Solutions often include a software-systems development component, but may also consist of process improvements, organizational change or strategic planning and policy development. The person who carries out this task is called a business analyst or BA. Business analysts do not work solely on developing software systems; they work across the organisation, solving business problems in consultation with business stakeholders. While most of the work that business analysts do today relates to software development and solutions, this derives from the ongoing massive changes businesses all over the world are experiencing in their attempts to digitise. Although there are different role definitions, depending upon the organization, there does seem to be an area of common ground where most business analysts work. The responsibilities appear to be: To investigate business systems, taking a holistic view of the situation. This may include examining elements of the organisation structures and staff development issues as well as current processes and IT systems. To evaluate actions to improve the operation of a business system. Again, this may require an examination of organisational structure and staff development needs, to ensure that they are in line with any proposed process redesign and IT system development. To document the business requirements for the IT system support using appropriate documentation standards. In line with this, the core business analyst role could be defined as an internal consultancy role that has the responsibility for investigating business situations, identifying and evaluating options for improving business systems, defining requirements and ensuring the effective use of information systems in meeting the needs of the business.
Sub-disciplines Business analysis as a discipline includes requirements analysis, sometimes also called requirements engineering. It focuses on ensuring the changes made to an organisation are aligned with its strategic goals. These changes include changes to strategies, structures, policies, business rules, processes, and information systems. Examples of business analysis include: Enterprise analysis or company analysis Focuses on understanding the needs of the business as a whole, its strategic direction, and identifying initiatives that will allow a business to meet those strategic goals. It also includes: Creating and maintaining the business architecture Conducting feasibility studies Identifying new business opportunities Scoping and defining new business opportunities Preparing the business case Conducting the initial risk assessment Requirements planning and management Involves planning the requirements development process, determining which requirements are the highest priority for implementation, and managing change. Requirements elicitation Describes techniques for collecting requirements from stakeholders in a project. Techniques for requirements elicitation include: Brainstorming Document analysis Focus group Interface analysis Interviews/Questionnaire Workshops Reverse engineering Surveys User task analysis Process mapping Observation/job shadowing Design thinking Prototyping Requirements analysis and documentation Describes how to develop and specify requirements in enough detail to allow them to be successfully implemented by a project team. 
Analysis The major forms of analysis are: Architecture analysis Business process analysis Object-oriented analysis Structured analysis Data warehouse analysis, storage and databases analysis Documentation Requirements documentation can take several forms: Textual – for example, stories that summarize specific information Matrix – for example, a table of requirements with priorities Diagrams – for example, how data flows from one structure to the other Wireframe – for example, how elements are required in a website Models – for example, 3-D models that describe a character in a computer game Requirements communication Describes techniques for ensuring that stakeholders have a shared understanding of the requirements and how they will be implemented. Solution assessment and validation Describes how the business analyst can assess the correctness of a proposed solution, how to support the implementation of a solution, and how to assess possible shortcomings in the implementation. Techniques There are a number of generic business techniques that a business analyst will use when facilitating business change. Some of these techniques include: PESTLE This is used to perform an external environmental analysis by examining the many different external factors affecting an organization. The six attributes of PESTLE: Political (current and potential influences from political pressures) Economic (the local, national and world economy impact) Sociological (the ways in which a society can affect an organization) Technological (the effect of new and emerging technology) Legal (the effect of national and world legislation) Environmental (the local, national and world environmental issues) Heptalysis This is used to perform an in-depth analysis of early-stage businesses/ventures on seven important categories: Market opportunity Product/solution Execution plan Financial engine Human capital Potential return Margin of safety STEER This is essentially another take on PESTLE.
It factors in the same elements as PESTLE and should not be considered a tool in its own right, except where an author or user prefers the acronym STEER to PESTLE. STEER takes into consideration the following factors: Socio-cultural, Technological, Economic, Ecological, and Regulatory. MOST This is used to perform an internal environmental analysis by defining the attributes of MOST to ensure that the project being worked on is aligned to each of the four attributes. The four attributes of MOST are: Mission (where the business intends to go) Objectives (the key goals which will help achieve the mission) Strategies (options for moving forward) Tactics (how strategies are put into action) SWOT SWOT is used to help focus activities into areas of strength and where the greatest opportunities lie, and to identify the dangers that take the form of weaknesses and both internal and external threats. The four attributes of SWOT analysis are: Strengths – What are the advantages? What is currently done well? (e.g. key area of best-performing activities of your company) Weaknesses – What should be improved? What is there to overcome? (e.g. key area where you are performing unsatisfactorily) Opportunities – What good opportunities face the organization? (e.g. key area where your competitors are performing poorly) Threats – What obstacles does the organization face? (e.g. key area where your competitor will perform well) CATWOE This is used to prompt thinking about what the business is trying to achieve. Business perspectives help the business analyst to consider the impact of any proposed solution on the people involved. There are six elements of CATWOE: Customers – Who are the beneficiaries of the highest level business process and how does the issue affect them? Actors – Who is involved in the situation, who will be involved in implementing solutions and what will impact their success?
Transformation Process – What processes or systems are affected by the issue? World View – What is the big picture and what are the wider impacts of the issue? Owner – Who owns the process or situation being investigated and what role will they play in the solution? Environmental Constraints – What are the constraints and limitations that will impact the solution and its success? de Bono's Six Thinking Hats This is often used in a brainstorming session to generate and analyse ideas and options. It is useful for encouraging specific types of thinking and can be a convenient and symbolic way to request someone to "switch gears". It involves restricting the group to thinking only in specific ways – giving ideas and analysis in the "mood" of the time. White: Pure facts, logical. Green: Creative. Yellow: Bright, optimistic, positive. Black: Negative, devil's advocate. Red: Emotional. Blue: Cold, control. Not all colors/moods have to be used. Five Whys Five Whys is used to get to the root of what is really happening in a single instance. For each answer given, a further 'why' is asked. MoSCoW This is used to prioritize requirements by allocating an appropriate priority, gauging it against the validity of the requirement itself and its priority against other requirements. MoSCoW comprises: Must have – or else delivery will be a failure Should have – otherwise a workaround will have to be adopted Could have – to increase delivery satisfaction Won't have this time – useful to exclude requirements from this delivery timeframe VPEC-T This technique is used when analyzing the expectations of multiple parties having different views of a system in which they all have an interest in common, but have different priorities and different responsibilities. Values – constitute the objectives, beliefs and concerns of all parties participating.
They may be financial, social, tangible and intangible Policies – constraints that govern what may be done and the manner in which it may be done Events – real-world proceedings that stimulate activity Content – the meaningful portion of the documents, conversations, messages, etc. that are produced and used by all aspects of business activity Trust – between users of the system and their right to access and change information within it SCRS The SCRS approach in business analysis holds that the analysis should flow from the high-level business strategy to the solution, through the current state and the requirements. SCRS stands for: Strategy Current State Requirements Solution Business Analysis Canvas The Business Analysis Canvas is a tool that enables the business analyst to quickly present a high-level view of the activities that will be completed as part of the business analysis work allocation. The Business Analysis Canvas is broken into several sections: Project Objective Stakeholder Deliverable Impact to Target Operating Model Communication Approach Responsibilities Scheduling Key Dates The Canvas has activities and questions the business analyst can ask the organization to help build out the content. Business Process Analysis Processes are modeled visually to understand the current state, and the models appear in levels so as to understand the enablers that are influencing a particular business process. At the highest level of the models are end-to-end business processes that would be common to many businesses. Below that business process level would be a level of activities, sub-activities and finally tasks. The task level is the most granular and, when modeled, depicts a particular workflow. As business processes get documented at the workflow level, they become more heavily influenced or "enabled" by characteristics that impact that particular business. 
These "workflow enablers" are considered to be Workflow Design, Information Systems/IT, Motivation and Measurement, Human Resources & Organization, Policies and Rules, and Facilities/Physical Environment. This technique of process leveling and analysis assists business analysts in understanding what is really required for a particular business and where there are possibilities to re-engineer a process for greater efficiency in the future state. Roles of business analysts As the scope of business analysis is very wide, there has been a tendency for business analysts to specialize in one of the three sets of activities which constitute the scope of business analysis. The primary role of business analysts is to identify business needs, define requirements, and provide solutions to business problems; this is done as part of the following sets of activities. Strategist Organizations need to focus on strategic matters on a more or less continuous basis in the modern business world. Business analysts, serving this need, are well-versed in analyzing the strategic profile of the organization and its environment, advising senior management on suitable policies, and the effects of policy decisions. Architect Organizations may need to introduce change to solve business problems which may have been identified by the strategic analysis referred to above. Business analysts contribute by analyzing objectives, processes and resources, and suggesting ways in which re-design (BPR) or improvements (BPI) could be made. Particular skills of this type of analyst are "soft skills", such as knowledge of the business, requirements engineering, stakeholder analysis, and some "hard skills", such as business process modeling. Although the role requires an awareness of technology and its uses, it is not an IT-focused role. 
Three elements are essential to this aspect of the business analysis effort: the redesign of core business processes; the application of enabling technologies to support the new core processes; and the management of organizational change. This aspect of business analysis is also called "business process improvement" (BPI), or "reengineering". IT-systems analyst There is a need to align IT development with the business system as a whole. A long-standing problem in business is how to get the best return from IT investments, which are generally very expensive and of critical, often strategic, importance. IT departments, aware of the problem, often create a business analyst role to better understand and define the requirements for their IT systems. Although there may be some overlap with the developer and testing roles, the focus is always on the IT part of the change process, and generally this type of business analyst gets involved only when a case for change has already been made and decided upon. In any case, the term "analyst" is lately considered somewhat misleading, insofar as analysts (i.e. problem investigators) also do design work (solution definition). The key responsibility areas of a business analyst are to collate the client's software requirements, understand them, and analyze them further from a business perspective. A business analyst is required to collaborate with and assist the business. Function within the organizational structure The role of business analysis can exist in a variety of structures within an organizational framework. Because business analysts typically act as a liaison between the business and technology functions of a company, the role can often be successful when aligned either to a line of business, to IT, or sometimes to both. Business alignment When business analysts work on the business side, they are often subject matter experts for a specific line of business. 
These business analysts typically work solely on project work for a particular business, pulling in business analysts from other areas for cross-functional projects. In this case, there are usually business systems analysts on the IT side to focus on more technical requirements. IT alignment In many cases, business analysts work solely within IT and they focus on both business and systems requirements for a project, consulting with various subject matter experts (SMEs) to ensure thorough understanding. Depending on the organizational structure, business analysts may be aligned to a specific development lab or they might be grouped together in a resource pool and allocated to various projects based on availability and expertise. The former builds specific subject matter expertise while the latter provides the ability to acquire cross-functional knowledge. Practice management In large organizations, there are centers of excellence or practice management groups who define frameworks and monitor the standards throughout the process of implementing change, in order to maintain the quality of change and reduce the risk of changes to the organization. Some organizations may have independent centers of excellence for individual streams such as project management, business analysis or quality assurance. A practice management team provides a framework by which all business analysts in an organization conduct their work, usually consisting of processes, procedures, templates and best practices. In addition to providing guidelines and deliverables, it also provides a forum to focus on continuous improvement of the business analysis function. Goals Ultimately, business analysis aims to achieve the following outcomes: Create solutions Give enough tools for robust project management Improve efficiency and reduce waste Provide essential documentation, such as project initiation documents One way to assess these goals is to measure the return on investment (ROI) for all projects. 
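As an illustrative sketch only (not taken from the source), the conventional ROI formula referred to here is net benefit divided by cost; the function name and the dollar figures below are hypothetical:

```python
def roi(total_benefit: float, total_cost: float) -> float:
    """Return on investment as a fraction: (benefit - cost) / cost.

    A conventional formula, shown purely as an illustration of how
    project outcomes might be compared; figures are hypothetical.
    """
    if total_cost <= 0:
        raise ValueError("total_cost must be positive")
    return (total_benefit - total_cost) / total_cost

# Hypothetical example: a project costing $200k that yields $260k
# in measurable benefits has an ROI of 0.30, i.e. 30%.
print(f"{roi(260_000, 200_000):.0%}")
```

As the following paragraph notes, a figure like this is only as trustworthy as the data behind it: without sufficient data on where value is created or destroyed, the projection is inaccurate.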
According to Forrester Research, more than $100 billion is spent annually in the U.S. on custom and internally developed software projects. For all of these software development projects, keeping accurate data is important, and business leaders are constantly asking for the return, or ROI, on a proposed project or at the conclusion of an active project. However, asking for the ROI without sufficient data about where value is created or destroyed may result in inaccurate projections. Reduce waste and complete projects on time Project delays are costly in several ways: Project costs – For every month of delay, the project team's costs and expenses continue to accumulate. When a large part of the development team has been outsourced, the costs will start to add up quickly and are very visible if contracted on a time and materials basis (T&M). Fixed-price contracts with external parties limit this risk. For internal resources, the costs of delays are not as readily apparent, unless time spent by resources is being tracked against the project, as labor costs are essentially 'fixed' costs. Opportunity costs – Opportunity costs come in two types – lost revenue and unrealized expense reductions. Some projects are specifically undertaken with the purpose of driving new or additional revenues to the bottom line. For every month of delay, a company foregoes a month of this new revenue stream. The purpose of other projects is to improve efficiencies and reduce costs. Again, each month of delay postpones the realization of these expense reductions by another month. In the vast majority of cases, these opportunities are never captured or analyzed, resulting in misleading ROI calculations. Of the two opportunity costs, lost revenue is the more egregious – and the effects are greater and longer lasting. On many projects (particularly larger ones), the project manager is the one responsible for ensuring that a project is completed on time. 
The BA's job is more to ensure that, if a project is not completed on time, then at least the highest-priority requirements are met. Document the right requirements Business analysts want to make sure that they define the requirements in a way that meets the business needs; for example, in IT applications the requirements need to meet end-users' needs. Essentially, they want to define the right application. This means that they must document the right requirements by listening carefully to 'customer' feedback, and by delivering a complete set of clear requirements to the technical architects and coders who will write the program. If a business analyst has limited tools or skills to help elicit the right requirements, then the chances are fairly high that they will end up documenting requirements that will not be used or that will need to be re-written – resulting in rework as discussed below. The time wasted documenting unnecessary requirements not only impacts the business analyst, it also impacts the rest of the development cycle. Coders need to generate application code to implement these unnecessary requirements and testers need to make sure that the unwanted features actually work as documented and coded. Experts estimate that 10% to 40% of the features in new software applications are unnecessary or go unused. Being able to reduce the amount of these extra features by even one-third can result in significant savings. An approach of minimalism or "keep it simple", with minimum technology, supports reduced costs for the end result and the ongoing maintenance of the implemented solution. Improve project efficiency Efficiency can be achieved in two ways: by reducing rework and by shortening project length. Rework is a common industry headache, and it has become so common at many organizations that it is often built into project budgets and timelines. 
It generally refers to extra work needed in a project to fix errors due to incomplete or missing requirements, and it can impact the entire software development process from definition to coding and testing. The need for rework can be reduced by ensuring that the requirements gathering and definition processes are thorough and by ensuring that the business and technical members of a project are involved in these processes from an early stage. Shortening project length presents two potential benefits. For every month that a project can be shortened, project resource costs can be diverted to other projects. This can lead to savings on the current project and to earlier start times of future projects (thus increasing revenue potential). Business analysis qualifications An aspiring business analyst can opt for academic or professional education. Several leading universities in the US, the Netherlands and the UK offer master's degrees with a major in either Business Analysis, Process Management or Business Transformation. 
Many universities offer bachelor's or master's degrees in Business Analysis, including: The University of Manchester Master of Science (MSc) in Business Analysis Victoria University of Wellington Master of Professional Business Analysis City University of Hong Kong BBA in Business Analysis Radboud University Nijmegen Master of Science (MSc) in Business Administration - specialisation Business Analysis and Modelling The three most widely recognised Business Analysis qualifications are: International Institute of Business Analysis (IIBA) Certified Business Analysis Professional Level 1 – Entry-level Certificate in Business Analysis (ECBA) Level 2 – Certification of Capability in Business Analysis (CCBA) Level 3 – Certified Business Analysis Professional (CBAP) Level 4 (not yet available) – Certified Business Analysis Thought Leader (CBATL) Project Management Institute - Professional in Business Analysis (PMI-PBA) The British Computer Society (BCS) offers a range of certifications and BA qualifications: Foundation Certificate in Business Analysis Foundation Certificate in Business Change Foundation Certificate in Commercial Awareness Practitioner Certificate in Benefits Management and Business Acceptance Practitioner Certificate in Business Analysis Practice Practitioner Certificate in Data Management Essentials Practitioner Certificate in Modelling Business Processes Practitioner Certificate in Requirements Engineering International Diploma in Business Analysis See also Cost overrun Data Presentation Architecture Enterprise Life Cycle International Institute of Business Analysis (IIBA) Operations research Strategic management Real options valuation Requirements analysis Revenue shortfall Spreadmart Viability study References Project management
31969460
https://en.wikipedia.org/wiki/Ingenia%20Technology
Ingenia Technology
Ingenia Technology, formed in 2003, is an international security technology company and the inventor of Laser Surface Authentication, a technique used for brand protection, track and trace, and document authentication. History Ingenia Technology was founded following years of research at Durham University and Imperial College London. Under the leadership of Professor Russell Cowburn, the Laser Surface Authentication technology that forms the basis of the security solution was developed. Ingenia Technology has its headquarters in London with satellite offices in Vienna and Zurich. Technology Laser Surface Authentication analyses the naturally occurring random structure of a surface and, from this, generates a signature or code unique to that surface. This code can then be used to authenticate and identify the item in the same way as a fingerprint. The technology can be used on paper, cardboard, plastics, metals and ceramics, and has found applications across a diverse range of markets. Awards The company has won many technology and company awards in recent years, including: Global Security Challenge Winner 2006 – Best New Security Technology in the World Hermes Award 2007 – Best Technology, together with Bayer Technology Services Red Herring Europe 100 and Red Herring Global 100 Winners 2007 – Emerging Technology Companies References External links International Authentication Association Cartondruck IDT Systems BBC: Laser spots paper 'fingerprints' The Inquirer: Laser authentication system may foil ID thieves POPSCI: Best of what's new '08 Business Computing World: The Olympics are coming, but how secure are our documents? Packaging Europe: Ingenia Technology reflect on a successful 2010 as key markets are identified for 2011 Technology companies of the United Kingdom Technology companies established in 2003
295744
https://en.wikipedia.org/wiki/Adaptive%20Server%20Enterprise
Adaptive Server Enterprise
SAP ASE (Adaptive Server Enterprise), originally known as Sybase SQL Server, and also commonly known as Sybase DB or Sybase ASE, is a relational model database server developed by Sybase Corporation, which later became part of SAP AG. ASE was developed for the Unix operating system, and is also available for Microsoft Windows. In 1988, Sybase, Microsoft and Ashton-Tate began development of a version of SQL Server for OS/2, but Ashton-Tate later left the group and Microsoft went on to port the system to Windows NT. When the agreement expired in 1993, Microsoft purchased a license for the source code and began to sell this product as Microsoft SQL Server. MS SQL Server and Sybase SQL Server share many features and syntax peculiarities. History Originally developed for Unix operating system platforms in 1987, Sybase Corporation's primary relational database management system product was initially marketed under the name Sybase SQL Server. In 1988, SQL Server for OS/2 was co-developed for the PC by Sybase, Microsoft, and Ashton-Tate. Ashton-Tate divested its interest and Microsoft became the lead partner after porting SQL Server to Windows NT. Microsoft and Sybase sold and supported the product through version 4.2.1. Sybase released SQL Server 4.2 in 1992. This release included internationalization and localization and support for symmetric multiprocessing systems. In 1993, the co-development licensing agreement between Microsoft and Sybase ended, and the companies parted ways while continuing to develop their respective versions of SQL Server. Sybase released Sybase SQL Server 10.0, which was part of the System 10 product family, which also included Back-up Server, Open Client/Server APIs, SQL Monitor, SA Companion and OmniSQL Gateway. Microsoft continued on with Microsoft SQL Server. Sybase provides native low-level programming interfaces to its database server which uses a protocol called Tabular Data Stream. Prior to version 10, DBLIB (DataBase LIBrary) was used. 
Version 10 and onwards uses CTLIB (ClienT LIBrary). In 1995, Sybase released SQL Server 11.0. Starting with version 11.5, released in 1996, Sybase moved to differentiate its product from Microsoft SQL Server by renaming it to Adaptive Server Enterprise. Sybase 11.5 added asynchronous prefetch and the CASE expression in SQL, and the optimizer could now use a descending index to avoid the need for a worktable and a sort. The Logical Process Manager was added to allow prioritization by assigning execution attributes and engine affinity. In 1998, ASE 11.9.2 was rolled out with support for data-page locking, data-row (row-level) locking, distributed joins and improved SMP performance. Indexes could now be created in descending order on a column, and a readpast concurrency option and repeatable-read transaction isolation were added. A lock timeout option and task-to-engine affinity were added, and query optimization was now delayed until a cursor was opened and the values of the variables were known. In 1999, ASE 12.0 was released, providing support for Java, high availability and distributed transaction management. Merge joins were added; previously, all joins were nested-loop joins. In addition, cache partitions were added to improve performance. In 2001, ASE 12.5 was released, providing features such as dynamic memory allocation, an EJB container, support for XML, Secure Sockets Layer (SSL) and LDAP. Also added were compressed backups, unichar UTF-16 support and multiple logical page sizes (2K, 4K, 8K, or 16K). In 2005, Sybase released ASE 15.0. It included support for partitioning table rows in a database across individual disk devices, and "virtual columns", which are computed only when required. In ASE 15.0, many parameters that had been static (requiring a server reboot for changes to take place) were made dynamic (changes take effect immediately). This improved performance and reduced downtime. 
For example, one parameter that was made dynamic was the "tape retention in days" (the number of days that the backup is kept on the tape media without overwriting the existing contents in the production environment). On January 27, 2010 Sybase released ASE 15.5. It included support for in-memory and relaxed-durability databases, distributed transaction management in the shared-disk cluster, faster compression for backups as well as Backup Server Support for the IBM Tivoli Storage Manager. Deferred name resolution for user-defined stored procedures, FIPS 140-2 login password encryption, incremental data transfer, bigdatetime and bigtime datatypes and tempdb groups were also added. In July 2010, Sybase became a wholly owned subsidiary of SAP America. On September 13, 2011 Sybase released ASE 15.7 at Techwave. It included support for: New Security features - Application Functionality Configuration Groups, a new threaded kernel, compression for large object (LOB) and regular data, End-to-End CIS Kerberos Authentication, Dual Control of Encryption Keys and Unattended Startup and extension for securing logins, roles, and password management, Login Profiles, ALTER... modify owner, External Passwords and Hidden Text, Abstract Plans in Cached Statements, Shrink Log Space, In-Row Off-Row LOB, using Large Object text, unitext, and image Datatypes in Stored Procedures, Using LOB Locators in Transact-SQL Statements, select for update to exclusively lock rows for subsequent updates within the same transaction, and for update-able cursors, Non-materialized, Non-null Columns with a default value, Fully Recoverable DDL (select into, alter table commands that require data movement, reorg rebuild), merge command, Expanded Variable-Length Rows, Allowing Unicode Noncharacters. In April 2014, SAP released ASE 16. 
It included support for partition locking, CIS Support for HANA, Relaxed Query Limits, Query Plan Optimization with Star Joins, Dynamic Thread Assignment, Sort and Hash Join Operator improvements, Full-Text Auditing, Auditing for Authorization Checks Inside Stored Procedures, create or replace functionality, Query Plan and Execution Statistics in HTML, Index Compression, Full Database Encryption, Locking, Run-time locking, Metadata and Latch enhancements, Multiple Trigger support, Residual Data Removal, Configuration History Tracking, CRC checks for dump database and the ability to calculate the transaction log growth rate for a specified time period. Structure A single standalone installation of ASE typically comprises one "dataserver" and one corresponding "backup server". In a multi-server installation, many dataservers can share one backup server. A dataserver consists of system databases and user databases. The minimum system databases that are mandatory for the normal working of a dataserver are 'master', 'tempdb', 'model', 'sybsystemdb' and 'sybsystemprocs'. The 'master' database holds critical system-related information, including logins, passwords, and dataserver configuration parameters. 'tempdb' is used for the storage of data required for the intermediate processing of queries, and for temporary data. 'model' is used as a template for creating new databases. 'sybsystemprocs' consists of system-supplied stored procedures that query system tables and manipulate data in them. ASE is a single-process, multithreaded dataserver application. Editions There are several editions, including an express edition that is free for productive use but limited to four server engines and 50 GB of disk space per server. 
See also SQL Anywhere Sybase List of relational database management systems Comparison of relational database management systems References External links SAP Sybase ASE official website SAP Sybase ASE online documentation SAP ASE Community What's New from 15.7 to 16.0.3.7 Proprietary database management systems Relational database management systems SAP SE Computer-related introductions in 1987 RDBMS software for Linux
47552633
https://en.wikipedia.org/wiki/List%20of%20people%20associated%20with%20PARC
List of people associated with PARC
Many notable computer scientists and others have been associated with the Palo Alto Research Center Incorporated (PARC), formerly Xerox PARC. They include: Nina Amenta (at PARC 1996–1997), researcher in computational geometry and computer graphics Anne Balsamo (at PARC 1999–2002), media studies scholar of connections between art, culture, gender, and technology Patrick Baudisch (at PARC 2000–2001), in human–computer interaction Daniel G. Bobrow (at PARC 1972–2017), artificial intelligence researcher Susanne Bødker (at PARC 1982–1983), researcher in human–computer interaction David Boggs (at PARC 1972–1982), computer network pioneer, coinventor of Ethernet Anita Borg (at PARC 1997–2003), computer systems researcher, advocate for women in computing John Seely Brown (at PARC 1978–2000), researcher in organizational studies, chief scientist of Xerox Bill Buxton (at PARC 1989–1994), pioneer in human–computer interaction Stuart Card (at PARC 1974-2010), applied human factors in human–computer interaction Robert Carr (at PARC in late 1970s), CAD and office software designer Ed Chi (at PARC 1997–2011), researcher in information visualization and the usability of web sites Elizabeth F. Churchill (at PARC 2004–2006), specialist in human-computer interaction and social computing Lynn Conway (at PARC 1973–1982), VLSI design pioneer and transgender activist Franklin C. 
Crow (at PARC circa 1982–1990), computer graphics expert who did early research in antialiasing Pavel Curtis (at PARC 1983–1996), pioneer in text-based online virtual reality systems Doug Cutting (at PARC 1990-1994), creator of Nutch, Lucene, and Hadoop Steve Deering (at PARC circa 1990–1996), internet engineer, lead designer of IPv6 L Peter Deutsch (at PARC 1971–1986), implementor of LISP 1.5, Smalltalk, and Ghostscript David DiFrancesco (at PARC 1972–1974), worked with Richard Shoup on PAINT, cofounded Pixar Paul Dourish (at PARC mid-1990s), researcher at the intersection of computer science and social science W. Keith Edwards (at PARC 1996–2004), researcher in human-computer interaction and ubiquitous computing Jerome I. Elkind (at PARC 1971–1978), head of the Computer Science Laboratory at PARC Clarence Ellis (at PARC 1976–1984), first African American CS PhD, pioneered computer-supported cooperative work David Em (at PARC 1975), computer artist, first fine artist to create a computer model of a 3d character Bill English (at PARC 1971–1989), co-invented computer mouse David Eppstein (at PARC 1989–1990), researcher in computational geometry and graph algorithms John Ellenby (at PARC 1975–1978), Led AltoII development, 1979 founded GRID Systems Matthew K. Franklin (at PARC 1998–2000), developed pairing-based elliptic-curve cryptography Gaetano Borriello (at PARC 1980–1987), developed Open Data Kit Sean R. 
Garner (at PARC circa 2009– ), researcher in photovoltaics and sustainable engineering Charles Geschke (at PARC 1972–1980), invented page description languages, cofounded Adobe Adele Goldberg (at PARC 1973–1986), codesigned Smalltalk, president of ACM Jack Goldman (at PARC 1970–), Xerox chief scientist 1968–1982, founded PARC in 1970 Bill Gosper (at PARC 1977–1981), founded the hacker community, pioneered symbolic computation Rich Gossweiler (at PARC 1997–2000), software engineer, expert in interaction design Rebecca Grinter (at PARC 2000–2004), researcher in human-computer interaction and computer-supported cooperative work Neil Gunther (at PARC 1982–1990), developed open-source performance modeling software Jürg Gutknecht (at PARC 1984–1985), programming language researcher, designer, with Niklaus Wirth Marti Hearst (at PARC 1994–1997), expert in computational linguistics and search engine user interfaces Jeffrey Heer (at PARC 2001-2005), expert in information visualization and interactive data analysis Bruce Horn (at PARC 1973–1981), member of original Apple Macintosh design team Bernardo Huberman (at PARC circa 1982–2000), applied chaos theory to web dynamics Dan Ingalls (at PARC circa 1972–1984), implemented Smalltalk virtual machine, invented bit blit Van Jacobson (at PARC 2006– ), developed internet congestion control protocols and diagnostics Natalie Jeremijenko (at PARC 1995), installation artist Ted Kaehler (at PARC 1972–1985), developed key systems for original Smalltalk, later Apple HyperCard, Squeak Ronald Kaplan (at PARC 1974–2006), expert in natural language processing, helped develop Interlisp Jussi Karlgren (at PARC 1991-1992), known for work on stylistics, evaluation of search technology, and statistical semantics Lauri Karttunen (at PARC 1987–2011), developed finite state morphology in computational linguistics Alan Kay (at PARC 1971–1981), pioneered object-oriented programming and graphical user interfaces Martin Kay (at PARC 1974– ), expert on 
machine translation and computational linguistics
Gregor Kiczales (at PARC 1984–2002), invented aspect-oriented programming
Ralph Kimball (at PARC 1972–1982), designed first commercial workstation with mice, icons, and windows
Butler Lampson (at PARC 1971–1983), won Turing Award for his development of networked personal computers
David M. Levy (at PARC 1984–1999), researcher on information overload
Cristina Lopes (at PARC 1995–2002), researcher in aspect-oriented programming and ubiquitous computing
Richard Francis Lyon (at PARC 1977–1981), built the first optical mouse
Jock D. Mackinlay (at PARC 1986–2004), researcher in information visualization
Cathy Marshall (at PARC circa 1989–2000), researcher on hypertext and personal archiving
Edward M. McCreight (at PARC 1971–1989), co-invented B-trees
Scott A. McGregor (at PARC 1978–1983), worked on Xerox Star, Viewers for Cedar, and then Windows 1.0 at Microsoft
Sheila McIlraith (at PARC 1997–1998), researcher in artificial intelligence and the semantic web
Ralph Merkle (at PARC 1988–1999), invented public key cryptography and cryptographic hashing
Diana Merry (at PARC circa 1971–1986), helped develop Smalltalk, co-invented bit blit
Robert Metcalfe (at PARC 1972–1979), co-invented Ethernet, formulated Metcalfe's law
James G. Mitchell (at PARC 1971–1984), developed the WATFOR compiler, Mesa (programming language), Spring (operating system), ARM RISC chip
Louis Monier (at PARC 1983–1989), founded the AltaVista search engine
Thomas P. Moran (at PARC 1974–2001), founded the journal Human-Computer Interaction
James H. Morris (at PARC 1974–1982), co-invented the KMP string matching algorithm and lazy evaluation
Elizabeth Mynatt (at PARC 1995–1998), studied digital family portraits and ubiquitous computing
Greg Nelson (at PARC 1980–1981), satisfiability modulo theories, extended static checking, program verification, Modula-3, theorem proving
Martin Newell (at PARC 1979–1981), graphics expert who created the Utah teapot
William Newman (at PARC 1973–1979), graphics and HCI researcher, developed drawing and page description software
Geoffrey Nunberg (at PARC 1987–2001), linguist known for his work on lexical semantics
Severo Ornstein (at PARC 1976–1983), founding head of Computer Professionals for Social Responsibility
Valeria de Paiva (at PARC 2000–2008), uses logic and category theory to model natural language
George Pake (at PARC 1970–1986), pioneer in nuclear magnetic resonance, founding director of PARC
Jan O. Pedersen (at PARC circa 1990–1996), researcher in search system technology and algorithms
Peter Pirolli (at PARC 1991– ), developed information foraging theory
Calvin Quate (at PARC 1983–1994), invented the atomic force microscope
Ashwin Ram (at PARC circa 2011– ), researcher on artificial intelligence for health applications
Prasad Ram (at PARC circa 1998–2000), expert on digital rights management and web search
Trygve Reenskaug (at PARC 1978–1979), formulated the model–view–controller user interface design
George G. Robertson (at PARC circa 1988–1995), information visualization expert
Daniel M. Russell (at PARC 1982–1993), AI and UI research; later at Apple, then at Google, where he calls himself a search anthropologist
Eric Schmidt (at PARC 1982–1983), CEO of Google and chairman of Alphabet
Ronald V. Schmidt (at PARC 1980–1985), computer network engineer who founded SynOptics
Michael Schroeder (at PARC circa 1977–1985), co-invented the Needham–Schroeder protocol for encrypted networking
Bertrand Serlet (at PARC 1985–1989), led the Mac OS X team
Scott Shenker (at PARC 1984–1998), leader in software-defined networking
John Shoch (at PARC 1971–1980), developed an important predecessor of TCP/IP networking
Richard Shoup (at PARC 1971–1978), invented SuperPaint and the first 8-bit frame buffer (picture memory); cofounded Aurora in 1979
Charles Simonyi (at PARC 1972–1981), led the creation of Microsoft Office
Alvy Ray Smith (at PARC 1974), cofounded Pixar
Brian Cantwell Smith (at PARC 1982–1996), invented introspective programming and researches computational metaphors
David Canfield Smith (at PARC 1975), invented computer icons
Robert Spinrad (at PARC 1978–1982), designed vacuum tube computers, directed PARC
Bob Sproull (at PARC 1973–1977), designed an early head-mounted display, wrote a widely used computer graphics textbook
Jessica Staddon (at PARC 2001–2010), information privacy researcher
Gary Starkweather (at PARC 1970–1988), invented laser printers and color management
Maureen C. Stone (at PARC circa 1980–1998), expert in color modeling
Lucy Suchman (at PARC 1980–2000), researcher on human factors, cybercultural anthropology, and feminist theory
Bert Sutherland (at PARC 1975–1981), brought social scientists to PARC
Robert Taylor (at PARC 1970–1983), managed early ARPAnet development, founded DEC Systems Research Center
Warren Teitelman (at PARC 1972–1984), designed Interlisp
Shang-Hua Teng (at PARC 1991–1992), invented smoothed analysis of algorithms and near-linear-time Laplacian solvers
Larry Tesler (at PARC 1973–1980), developed Object Pascal and the Apple Newton
Chuck Thacker (at PARC 1971–1983), chief designer of the Alto, co-invented Ethernet
David Thornburg (at PARC 1971–1981), invented a graphics touch tablet, cofounded Koala Technologies
John Warnock (at PARC 1978–1982), cofounded Adobe
Mark Weiser (at PARC 1987–1999), invented ubiquitous computing
Niklaus Wirth (at PARC 1976–1977 and 1984–1985), designed Pascal and other programming languages
Frances Yao (at PARC 1979–1999), researcher in computational geometry and combinatorial algorithms
Annie Zaenen (at PARC 2001–2011), researcher on linguistic encoding of temporal and spatial information
Lixia Zhang (at PARC 1989–1996), computer networking pioneer
References PARC
19965152
https://en.wikipedia.org/wiki/Net%20Nanny
Net Nanny
Net Nanny is a content-control software suite marketed primarily towards parents as a way to monitor and control their child's computer and phone activity. Features The original version of Net Nanny released in 1996 was a web browser that could filter web and IRC content, block images, and mask profanity. Modern versions allow complete remote administration of child devices through a web portal or parent applications. Some of the features offered are:
Allow or block usage of child devices using ad hoc controls or through a schedule
Monitor and block Internet content in various categories
Create custom blacklists and whitelists for websites
Track search engine usage, enforce safe search, and receive warnings for flagged words
Place daily time limits on device use
Monitor and allow/block applications installed on devices
Track the location of mobile devices
Apply different rules for individual children
Web pages (including dynamic pages) are blocked by content rather than URL, even over HTTPS. This prevents children from accessing blocked websites through proxies. History Net Nanny was designed, created and founded by Gordon Ross in 1993 in Vancouver and moved to Bellevue, Washington in 2000. He became inspired to create an internet protection service for children after viewing a sting operation on a pedophile soliciting a child online. In 1998, the company expanded its offerings beyond family protection when it launched BioPassword, a biometric security access system based on technology it acquired from Stanford University. On November 14, 2002, Net Nanny filed for bankruptcy and was sold to BioNet Systems, LLC, a maker of biometric security software in Issaquah, Washington. LookSmart Ltd, a commercial web search company based in San Francisco, acquired Net Nanny for $5.3 million in stock and cash in April 2004. In January 2007, Net Nanny was purchased by ContentWatch Inc and moved to Salt Lake City.
The product line was expanded to include security and business-oriented solutions. Mobile browsers for iOS and Android were released in June 2012 at the Consumer Electronics Show. These also allowed parents to monitor and manage the applications on the phone. In 2013, Net Nanny Social was launched to allow parents to monitor their children's social media activity and to protect against cyberbullying, cyberstalking, grooming by sexual predators, and the spread of sensitive images and videos. Features were added to the desktop applications to help adults who wanted their internet content filtered. In May 2014, the Brooklyn Public Library chose Net Nanny to filter content and applications on its Android tablets to ensure compliance with the Children's Internet Protection Act. Zift, a digital parenting company, acquired Net Nanny from ContentWatch in 2016 and moved most operations to Philadelphia. In May 2019, Zift's applications were rebranded and launched as Net Nanny 10 for all supported platforms. Reception Net Nanny was rated first by TopTenReviews.com in "Internet Filter Software" and fourth in "Parental Control Software" in 2017. PCMag also posted an online review stating that "Net Nanny is fully at home in the modern, multi-device world of parental control, and it still has the best content filtering around." See also List of content-control software References External links NetNanny - Official Website Content-control software Internet safety 1995 software
52313606
https://en.wikipedia.org/wiki/Secure%20signature%20creation%20device
Secure signature creation device
A secure signature creation device (SSCD) is a specific type of computer hardware or software that is used in creating an electronic signature. To be put into service as a secure signature creation device, the device must meet the rigorous requirements laid out under Annex II of Regulation (EU) No 910/2014 (eIDAS), where it is referred to as a qualified (electronic) signature creation device (QSCD). Using secure signature creation devices helps in facilitating online business processes that save time and money with transactions made within the public and private sectors. Description The minimum requirements that must be met to elevate an electronic signature creation device to the level of a secure signature creation device are provided in Annex II of eIDAS. Through appropriate procedural and technical means, the device must reasonably assure the confidentiality of the data used to create an electronic signature. It further must ensure that the data used to create an electronic signature is unique and only used once. Lastly, it shall only allow a qualified trust service provider or certificate authority to create or manage a signatory's electronic signature data. To ensure security, signature creation data used by the SSCD to create an electronic signature must provide reasonable protection through current technology to prevent forgery or duplication of the signature. The creation data must remain under the sole control of its signatory to prevent unauthorized use. The SSCD itself is prohibited from altering the signature's accompanying data. When a trust service provider or certificate authority places an SSCD into service, they must securely prepare the device according to Annex II of eIDAS in full compliance with the following three conditions:
While in use or in storage, the SSCD must remain secure.
Activation and deactivation of the SSCD must occur under secure conditions.
Any user activation data, including PIN codes, must be delivered separately from the SSCD after being prepared securely.
International security assurance requirements for SSCDs The secure signature creation device must also meet the international standard for computer security certification, referred to as the Common Criteria for Information Technology Security Evaluation (ISO/IEC 15408). This standard gives computer system users the ability to specify security requirements via Protection Profiles (PPs) for security functional requirements (SFRs) and security assurance requirements (SARs). The trust service provider or certificate authority is then required to implement the specified requirements and attest to their product's security attributes. A third-party testing laboratory then evaluates the device to ensure that the level of security is as claimed by the provider. Central authentication service When a secure signature creation device is used as part of a central authentication service (CAS), it may act as a CAS server in multi-tier authentication scenarios. The CAS software protocol allows users to be authenticated when signing into a web application. The common scheme for a CAS protocol includes the client's web browser, an application requesting authentication, and the CAS server. When authentication is needed, the application will send a request to the CAS server. The server will then compare the user's credentials against its database. If the information matches, the CAS will respond that the user has been authenticated. Legal implications regarding secure signature creation devices eIDAS has provided a tiered approach to determining the legal implications of electronic signatures. A signature that has been created with a secure signature creation device is considered to have the strongest probative value.
A document or message that has been signed with such a device is non-repudiable, meaning the signatory cannot deny they are responsible for the creation of the signature. Regulation (EU) No 910/2014 (eIDAS) evolved from Directive 1999/93/EC, the Electronic Signatures Directive. The intent of the directive was to make EU Member States responsible for creating legislation that would allow for the creation of the European Union's electronic signing system. The eIDAS Regulation required all Member States to follow its specifications for electronic signatures by its effective date of 1 July 2016. References External links www.iso.org/iso 22715:2006 www.eur-lex.europa.eu/eIDAS Regulation Authentication methods Signature Computer law Cryptography standards
330282
https://en.wikipedia.org/wiki/Camcorder
Camcorder
A camcorder is a self-contained portable electronic device with video capture and recording as its primary function. It is typically equipped with an articulating screen mounted on the left side, a belt to facilitate holding on the right side, a hot-swappable battery facing towards the user, hot-swappable recording media, and an internally contained quiet optical zoom lens. The earliest camcorders were tape-based, recording analog signals onto videotape cassettes. In 2006, digital recording became the norm, with tape replaced by storage media such as mini-HD, microDVD, internal flash memory and SD cards. More recent devices capable of recording video are camera phones and digital cameras primarily intended for still pictures, whereas dedicated camcorders are often equipped with more functions and interfaces than more common cameras, such as an internal optical zoom lens that is able to operate silently with no throttled speed, while cameras with protracting zoom lenses commonly throttle operation speed during video recording to minimize acoustic disturbance. Additionally, dedicated units are able to operate solely on external power with no battery inserted. History Video cameras originally designed for television broadcast were large and heavy, mounted on special pedestals and wired to remote recorders in separate rooms. As technology improved, out-of-studio video recording was possible with compact video cameras and portable video recorders; a detachable recording unit could be carried to a shooting location. Although the camera itself was compact, the need for a separate recorder made on-location shooting a two-person job. Specialized videocassette recorders were introduced by JVC (VHS) and Sony (U-matic, with Betamax), releasing a model for mobile work. Portable recorders meant that recorded video footage could be aired on the early-evening news, since it was no longer necessary to develop film.
In 1983, Sony released the first camcorder, the Betacam system, for professional use. A key component was a single camera-recorder unit, eliminating a cable between the camera and recorder and increasing the camera operator's freedom. The Betacam used the same cassette format ( tape) as the Betamax, but with a different, incompatible recording format. It became standard equipment for broadcast news. Sony released the first consumer camcorder in 1983, the Betamovie BMC-100P. It used a Betamax cassette and rested on the operator's shoulder, due to a design not permitting a single-handed grip. That year, JVC released the first VHS-C camcorder. Kodak announced a new camcorder format in 1984, the 8 mm video format. Sony introduced its compact 8 mm Video8 format in 1985. That year, Panasonic, RCA and Hitachi began producing camcorders using a full-size VHS cassette with a three-hour capacity. These shoulder-mount camcorders were used by videophiles, industrial videographers and college TV studios. Full-size Super-VHS (S-VHS) camcorders were released in 1987, providing an inexpensive way to collect news segments or other videographies. Sony upgraded Video8, releasing the Hi8 in competition with S-VHS. Digital technology emerged with the Sony D1, a device which recorded uncompressed data and required a large amount of bandwidth for its time. In 1992 Ampex introduced DCT, the first digital video format with data compression using the discrete cosine transform algorithm present in most commercial digital video formats. In 1995 Sony, JVC, Panasonic and other video-camera manufacturers launched DV, which became a de facto standard for home video production, independent filmmaking and citizen journalism. That year, Ikegami introduced Editcam (the first tapeless video recording system). 
Camcorders using DVD media were popular at the turn of the 21st century due to the convenience of being able to drop a disc into the family DVD player; however, DVD capability, due to the limitations of the format, is largely limited to consumer-level equipment targeted at people who are not likely to spend any great amount of effort video editing their video footage. High definition (HD) Panasonic launched DVCPRO HD in 2000, expanding the DV codec to support high definition (HD). The format was intended for professional camcorders, and used full-size DVCPRO cassettes. In 2003 Sony, JVC, Canon and Sharp introduced HDV as the first affordable HD video format, due to its use of inexpensive MiniDV cassettes. Tapeless Sony introduced the XDCAM tapeless video format in 2003, introducing the Professional Disc (PFD). Panasonic followed in 2004 with its P2 solid state memory cards as a recording medium for DVCPRO-HD video. In 2006 Panasonic and Sony introduced AVCHD as an inexpensive, tapeless, high-definition video format. AVCHD camcorders are produced by Sony, Panasonic, Canon, JVC and Hitachi. About this time, some consumer grade camcorders with hard disk and/or memory card recording used MOD and TOD file formats, accessible by USB from a PC. 3D In 2010, after the success of James Cameron's 2009 3D film Avatar, full 1080p HD 3D camcorders entered the market. With the proliferation of file-based digital formats, the relationship between recording media and recording format has declined; video can be recorded onto different media. With tapeless formats, recording media are storage for digital files. In 2011 Panasonic, Sony, and JVC released consumer-grade camcorders capable of filming in 3D. Panasonic released the HDC-SDT750. It is a 2D camcorder which can shoot in HD; 3D is achieved by a detachable conversion lens. Sony released a 3D camcorder, the HDR-TD10, with two lenses built in for 3D filming, and can optionally shoot 2D video. 
Panasonic has also released 2D camcorders with an optional 3D conversion lens. The HDC-SD90, HDC-SD900, HDC-TM900 and HDC-HS900 are sold as "3D-ready": 2D camcorders, with optional 3D capability at a later date. JVC also released a twin-lens camcorder in 2011, the JVC Everio GS-TD1. 4K Ultra HD At CES in January 2014, Sony announced the first consumer/low-end professional ("prosumer") camcorder, the Sony FDR-AX100, with a 1" 20.9MP sensor able to shoot 4K video at 3840x2160 pixels, 30fps or 24fps, in the XAVC-S format; in standard HD the camcorder can also deliver 60fps. When using the traditional AVCHD format, the camcorder supports 5.1 surround sound from its built-in microphone; this is, however, not supported in the XAVC-S format. The camera also has a 3-step ND filter switch allowing greater control of how much light can enter the camera for maintaining a shallow depth of field or giving a softer appearance to motion. For one hour of video shooting in 4K, the camera needs about 32 GB to accommodate a data transfer rate of 50 Mbit/s. The camera's MSRP in the US is US$2,000. In 2015, consumer UHD (3840x2160) camcorders below US$1,000 became available. Sony released the FDR-AX33, and Panasonic released the HC-WX970K and the HC-VX870. In September 2014, Panasonic announced the 4K Ultra HD camcorder HC-X1000E, claiming it as the first conventional camcorder design that can capture up to 60fps at 150 Mbit/s, or alternatively standard HD recording at up to 200 Mbit/s in ALL-I mode, with MP4, MOV and AVCHD formats all offered depending on the resolution and frame rate. Using a small 1/2.3" sensor, as is common in bridge cameras, the camcorder has 20x optical zoom in a compact body with dual XLR audio inputs, internal ND filters and separate control rings for focus, iris and zoom. In HD capture, the camcorder enables in-camera downscaling of the 4K image to HD to reduce noise inherent in the smaller sensor.
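The storage figure above follows directly from the bit rate: at 50 Mbit/s, an hour of 4K video comes to roughly 22.5 GB, so a 32 GB card holds about an hour with some headroom. A minimal sketch of the arithmetic (the helper name is illustrative, not from any camcorder SDK):

```python
def storage_gb(bitrate_mbit_s: float, minutes: float) -> float:
    """Storage in decimal gigabytes (1 GB = 1e9 bytes) needed for
    a given video bit rate (Mbit/s) and recording duration (minutes)."""
    total_bits = bitrate_mbit_s * 1e6 * minutes * 60
    return total_bits / 8 / 1e9

# One hour of 4K XAVC-S at 50 Mbit/s:
print(storage_gb(50, 60))  # 22.5 (GB), so a 32 GB card covers ~1 hour
```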
As of January 2017, the only major manufacturer to announce new consumer camcorders at CES (Consumer Electronics Show) in Las Vegas was Canon, with its entry-level HD models. Panasonic only announced details regarding its Mirrorless Micro Four Thirds Digital Camera, the LUMIX GH5, capable of shooting 4K in 60p. This is the first time in decades that Panasonic and Sony have not announced new traditional camcorders at CES, instead carrying over 2016's models, such as Sony's FDR-AX53. This is due to there being far less demand in the market for traditional camcorders as more and more consumers prefer to record video with their 4K-capable smartphones, DSLRs, and action cameras from GoPro, Xiaomi, Sony, Nikon, and many others. Components Camcorders have three major components: lens, imager and recorder. The lens gathers light, focusing it on the imager. The imager (usually a CCD or CMOS sensor; earlier models used vidicon tubes) converts incident light into an electrical signal. The recorder converts the electrical signal to video, encoding it in a storable form. The lens and imager comprise the "camera" section. Lens The lens is the first component of the light path. Camcorder optics generally have one or more of the following controls:
Aperture (or iris): regulates exposure and controls depth of field
Zoom: controls focal length and angle of view
Shutter speed: regulates exposure to maintain desired motion portrayal
Gain: amplifies signal strength in low-light conditions
Neutral density filter: regulates exposure intensity
In consumer units these adjustments are often automatically controlled by the camcorder, but can be adjusted manually if desired. Professional-grade units offer user control of all major optical functions. Imager The imager, often a CCD or a photodiode array which may be an Active Pixel Sensor, converts light into an electrical signal. The camera lens projects an image onto the imager surface, exposing the photosensitive array to light.
This light exposure is converted into an electrical charge. At the end of the timed exposure, the imager converts the accumulated charge into a continuous analog voltage at the imager's output terminals. After the conversion is complete, the photosites reset to start the exposure of the next video frame. In many cases the photosites (per pixel) are actually reset globally by charging to a fixed voltage, and discharged towards zero individually, proportionally to the accumulated light, because it is simpler to manufacture the sensor that way. Most camcorders use a single imaging sensor with integrated colour filters, per pixel, to enable red, green and blue to be sensed, each on their own set of pixels. The individual pixel filters present a significant manufacturing challenge. However, some camcorders, even consumer grade devices such as the JVC GZ-HD3, introduced around 2007, are triple sensor cameras, usually CCD but could be CMOS. In this case the exact alignment of the three sensors, so that the red, green and blue components of the video output are correctly aligned, is the manufacturing challenge. Recorder The recorder writes the video signal onto a recording medium, such as magnetic videotape. Since the record function involves many signal-processing steps, some distortion and noise historically appeared on the stored video; playback of the stored signal did not have the exact characteristics and detail as a live video feed. All camcorders have a recorder-controlling section, allowing the user to switch the recorder into playback mode for reviewing recorded footage, and an image-control section controlling exposure, focus and color balance. The image recorded need not be limited to what appeared in the viewfinder. For documenting events (as in law enforcement), the field of view overlays the time and date of the recording along the top and bottom of the image.
The police car or constable badge number to which the recorder was assigned, the car's speed at the time of recording, compass direction and geographical coordinates may also be seen. Functionality Dedicated camcorders are usually equipped with optical image stabilization, optical zoom, a stereo microphone, and a touch screen. Additional possible features include a viewfinder (usually digital), an LED lamp for illuminating in darkness – possibly with an option to adjust automatically, night vision which may be assisted by an infrared lamp, still photography, the ability to capture still photos while filming – usually at a higher resolution than the video, the ability to lock the OIS on far subjects while zoomed in (named "OIS Lock" by Panasonic), the ability to buffer footage before pressing the "record" button to avoid missing moments without having to be constantly recording (named "PRE-REC" by Panasonic), the ability to keep the lens cover open for a few minutes into stand-by mode for rapid restarting, internal storage for recording when the inserted memory card's space is exhausted, autofocus able to track objects, and optional visual effects during video recording and playback. Metadata such as date/time and technical parameters may be stored in a separate subtitle track. The former allows measuring the exact and undistorted recording time of scenes even if intermittently paused, and the latter may encompass aperture, frames' exposure duration, exposure value, and photosensitivity. On digital camcorders, the video resolution, frame rate, and/or bit rate may be adjustable between higher quality but larger file sizes and lower quality but extended recording time on remaining storage.
The image sensor may have a higher resolution than the recorded video, allowing for lossless digital zoom by cropping the area read out from the image sensor. The video player may allow for navigation between individual frames and extraction of still frames from footage to standalone pictures. Types Analog and digital Camcorders are often classified by their storage device; VHS, VHS-C, Betamax and Video8 are examples of late 20th century videotape-based camcorders which record video in analog form. Digital video camcorder formats include Digital8, MiniDV, DVD, hard disk drive, direct to disk recording and solid-state, semiconductor flash memory. While all these formats record video in digital form, Digital8, MiniDV, DVD and hard-disk drives are no longer used in consumer camcorders manufactured since 2006. In the earliest analog camcorders the imaging device is vacuum-tube technology, in which the charge of a light-sensitive target was directly proportional to the amount of light striking it; the Vidicon is an example of such an imaging tube. Newer analog and digital camcorders use a solid-state charge-coupled imaging device (CCD) or a CMOS imager. Both are analog detectors, using photodiodes to pass a current proportional to the light striking them. The current is then digitised before being electronically scanned and fed to the imager's output. The main difference between the two devices is how the scanning is done. In the CCD the diodes are sampled simultaneously, and the scan passes the digitised data from one register to the next. In CMOS devices, the diodes are sampled directly by the scanning logic. Digital video storage retains higher-quality video than analog storage, especially on the prosumer and strictly consumer levels. MiniDV storage allows full-resolution video (720x576 for PAL, 720x480 for NTSC), unlike analog consumer-video standards. Digital video does not experience colour bleeding, jitter, or fade.
Unlike analog formats, digital formats do not experience generation loss during dubbing; however, they are more prone to complete loss. Although digital information can theoretically be stored indefinitely without deterioration, some digital formats (like MiniDV) place tracks only about 10 micrometers apart (compared with 19–58 μm for VHS). A digital recording is more vulnerable to wrinkles or stretches in the tape which could erase data, but tracking and error-correction code on the tape compensates for most defects. On analog media, similar damage registers as "noise" in the video, leaving a deteriorated (but watchable) video. DVDs may develop DVD rot, losing large chunks of data. An analog recording may be "usable" after its storage media deteriorates severely, but slight media degradation in digital recordings may trigger an "all or nothing" failure; the digital recording will be unplayable without extensive restoration. Recording media Older digital camcorders record video digitally onto tape, microdrives, hard drives, and small DVD-RAM or DVD-Rs. Newer machines since 2006 record video onto flash memory devices and internal solid-state drives in MPEG-1, MPEG-2 or MPEG-4 format. Because these codecs use inter-frame compression, frame-specific editing requires frame regeneration, additional processing and may lose picture information. Codecs storing each frame individually, easing frame-specific scene editing, are common in professional use. Other digital consumer camcorders record in DV or HDV format on tape, transferring content over FireWire or USB 2.0 to a computer, where large files (for DV, 1GB for 4 to 4.6 minutes in PAL/NTSC resolutions) can be edited, converted and recorded back to tape. The transfer is done in real time, so the transfer of a 60-minute tape requires one hour and about 13GB of disk space for the raw footage (plus space for rendered files and other media).
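The DV figures quoted above are consistent with the format's stream rate: DV carries about 25 Mbit/s of video plus audio and subcode data, roughly 28.8 Mbit/s (about 3.6 MB/s) in total. A quick sketch of the arithmetic (constant and helper names are illustrative, not part of any DV specification):

```python
DV_STREAM_MBIT_S = 28.8  # ~25 Mbit/s video plus audio/subcode overhead

def minutes_per_gigabyte(bitrate_mbit_s: float) -> float:
    """Recording minutes that fit in one decimal gigabyte (1e9 bytes)."""
    bytes_per_minute = bitrate_mbit_s * 1e6 / 8 * 60
    return 1e9 / bytes_per_minute

def gigabytes_per_hour(bitrate_mbit_s: float) -> float:
    """Disk space in decimal gigabytes consumed by one hour of footage."""
    return bitrate_mbit_s * 1e6 / 8 * 3600 / 1e9

print(round(minutes_per_gigabyte(DV_STREAM_MBIT_S), 1))  # ~4.6 minutes per GB
print(round(gigabytes_per_hour(DV_STREAM_MBIT_S), 1))    # ~13.0 GB for a 60-minute tape
```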
Tapeless A tapeless camcorder is a camcorder that does not use video tape for the digital recording of video productions as 20th century ones did. Tapeless camcorders record video as digital computer files onto data storage devices such as optical discs, hard disk drives and solid-state flash memory cards. Inexpensive pocket video cameras use flash memory cards, while some more expensive camcorders use solid-state drives or SSD; similar flash technology is used on semi-pro and high-end professional video cameras for ultrafast transfer of high-definition television (HDTV) content. Most consumer-level tapeless camcorders use MPEG-2, MPEG-4 or its derivatives as video coding formats. They are normally capable of still-image capture to JPEG format additionally. Consumer-grade tapeless camcorders include a USB port to transfer video onto a computer. Professional models include other options like Serial digital interface (SDI) or HDMI. Some tapeless camcorders are equipped with a FireWire (IEEE-1394) port to ensure compatibility with magnetic tape-based DV and HDV formats. Consumer market Since the consumer market favors ease of use, portability and price, most consumer-grade camcorders emphasize handling and automation over audio and video performance. Most devices with camcorder capability are camera phones or compact digital cameras, in which video is a secondary capability. Some pocket cameras, mobile phones and camcorders are shock-, dust- and waterproof. The consumer camcorder was generally still very expensive throughout the early to mid 1990s but prices compared to the 1980s had halved for an entry level model and fell even further at the turn of the millennium placing them in easier reach of basic income consumers with the addition of available and more easy to obtain credit to spread payments. This market has followed an evolutionary path driven by miniaturization and cost reduction enabled by progress in design and manufacture. 
Miniaturization reduces the imager's ability to gather light; designers have balanced improvements in sensor sensitivity with size reduction, shrinking the camera imager and optics while maintaining relatively noise-free video in daylight. Indoor or dim-light shooting is generally noisy, and in such conditions artificial lighting is recommended. Mechanical controls cannot shrink below a certain size, and manual camera operation has given way to camera-controlled automation for every shooting parameter (including focus, aperture, shutter speed and color balance). The few models with manual override are menu-driven. Outputs include USB 2.0, Composite and S-Video and IEEE 1394/FireWire (for MiniDV models). The high end of the consumer market emphasizes user control and advanced shooting modes. More-expensive consumer camcorders offer manual exposure control, HDMI output and external audio input, progressive-scan frame rates (24fps, 25fps, 30fps) and higher-quality lenses than basic models. To maximize low-light capability, color reproduction and frame resolution, multi-CCD/CMOS camcorders mimic the 3-element imager design of professional equipment. Field tests have shown that most consumer camcorders (regardless of price) produce noisy video in low light. Before the 21st century, video editing required two recorders and a desktop video workstation to control them. A typical home personal computer can hold several hours of standard-definition video, and is fast enough to edit footage without additional upgrades. Most consumer camcorders are sold with basic video editing software, so users can create their own DVDs or share edited footage online. Since 2006, nearly all camcorders sold are digital. Tape-based (MiniDV/HDV) camcorders are no longer popular, since tapeless models (with an SD card or internal SSD) cost almost the same but offer greater convenience; video captured on an SD card can be transferred to a computer faster than digital tape. 
None of the consumer-class camcorders announced at the 2006 International Consumer Electronics Show recorded on tape. Other devices Video-capture capability is not confined to camcorders. Cellphones, digital single-lens reflex and compact digicams, laptops and personal media players offer video-capture capability, but most multipurpose devices offer less video-capture functionality than an equivalent camcorder. Most lack manual adjustments, audio input, autofocus and zoom. Few capture in standard TV-video formats (480p60, 720p60, 1080i30), recording in either non-TV resolutions (320x240, 640x480) or slower frame rates (15 or 30 fps). A multipurpose device used as a camcorder offers inferior handling, audio and video performance, which limits its utility for extended or adverse shooting situations. The camera phone developed video capability during the early 21st century, reducing sales of low-end camcorders. DSLR cameras with high-definition video were also introduced early in the 21st century. Although they still have the handling and usability deficiencies of other multipurpose devices, HDSLR video offers the shallow depth-of-field and interchangeable lenses lacking in consumer camcorders. Professional video cameras with these capabilities are more expensive than the most expensive video-capable DSLR. In video applications where the DSLR's operational deficiencies can be mitigated, DSLRs such as the Canon 5D Mark II provide depth-of-field and optical-perspective control. Combo-cameras combine full-feature still cameras and camcorders in a single unit. The Sanyo Xacti HD1 was the first such unit, combining the features of a 5.1 megapixel still camera with a 720p video recorder with improved handling and utility. Canon and Sony have introduced camcorders with still-photo performance approaching that of a digicam, and Panasonic has introduced a DSLR body with video features approaching that of a camcorder. 
Hitachi has introduced the DZHV 584E/EW, with 1080p resolution and a touch screen. Flip Video The Flip Video was a series of tapeless camcorders introduced by Pure Digital Technologies in 2006. Slightly larger than a smartphone, the Flip Video was a basic camcorder with record, zoom, playback and browse buttons and a USB jack for uploading video. The original models recorded at a 640x480-pixel resolution; later models featured HD recording at 1280x720 pixels. The Mino was a smaller Flip Video, with the same features as the standard model. The Mino was the smallest of all camcorders, slightly wider than a MiniDV cassette and smaller than most smartphones on the market. In fact, the Mino was small enough to fit inside the shell of a VHS cassette. Later HD models featured larger screens. In 2011, the Flip Video (more recently manufactured by Cisco) was discontinued. Interchangeable lenses Interchangeable-lens camcorders can capture HD video with DSLR lenses and an adapter. Built-in projector In 2011, Sony launched its HDR-PJ range of HD camcorders: the HDR-PJ10, 30 and 50. Known as Handycams, they were the first camcorders to incorporate a small image projector on the side of the unit. This feature allows a group of viewers to watch video without a television, a full-size projector or a computer. These camcorders were a huge success, and Sony subsequently released further models in this range. Sony's 2014 lineup comprises the HDR-PJ240, HDR-PJ330 (entry-level models), HDR-PJ530 (mid-range model) and the HDR-PJ810 (top of the range). Specifications vary by model. Uses Media Camcorders are used by nearly all electronic media, from electronic-news organizations to current-affairs TV productions. In remote locations, camcorders are useful for initial video acquisition; the video is subsequently transmitted electronically to a studio or production center for broadcast. 
Scheduled events (such as press conferences), where a video infrastructure is readily available or can be deployed in advance, are still covered by studio-type video cameras "tethered" to production trucks. Home movies Camcorders often cover weddings, birthdays, graduations, children's growth and other personal events. The rise of the consumer camcorder during the mid- to late 1980s led to the creation of TV shows such as America's Funniest Home Videos, which showcases homemade video footage. Entertainment Camcorders are used in the production of low-budget TV shows if the production crew does not have access to more expensive equipment. Movies have been shot entirely on consumer camcorder equipment (such as The Blair Witch Project, 28 Days Later and Paranormal Activity). Academic filmmaking programs also switched from 16mm film to digital video in the early 2010s, due to the reduced expense and ease of editing of digital media and the increasing scarcity of film stock and equipment. Some camcorder manufacturers cater to this market; Canon and Panasonic support 24p (24 fps, progressive scan, the same frame rate as cinema film) video in some high-end models for easy film conversion. Education Schools in the developed world increasingly use digital media and digital education. Students use camcorders to record video diaries, make short films and develop multi-media projects across subject boundaries. Teacher evaluation involves a teacher's classroom lessons being recorded for review by officials, especially for questions of teacher tenure. Camcorder material created by students and other digital technology are used in new-teacher preparation courses. The University of Oxford Department of Education PGCE programme and NYU's Steinhardt School's Department of Teaching and Learning MAT programme are examples. 
The USC Rossier School of Education goes further, insisting that all students purchase their own camcorder (or similar) as a prerequisite to their MAT education programs (many of which are delivered online). These programs employ a modified version of Adobe Connect to deliver the courses. Recordings of MAT student work are posted on USC's web portal for evaluation by faculty as if they were present in class. Camcorders have allowed USC to decentralize its teacher preparation from Southern California to most American states and abroad; this has increased the number of teachers it can train. Formats The following list covers consumer equipment only (for other formats, see videotape): Analog Lo-Band: Approximately 3 MHz bandwidth (250 lines EIA resolution, or ~333x480 edge-to-edge) BCE (1954): First tape storage for video, manufactured by Bing Crosby Entertainment from Ampex equipment BCE Color (1955): First color tape storage for video, manufactured by Bing Crosby Entertainment from Ampex equipment Simplex (1955): Developed commercially by RCA and used to record live broadcasts by NBC Quadruplex videotape (1955): Developed formally by Ampex, this was the recording standard for 20 years. Vision electronic recording apparatus (Vera) (1955): An experimental recording standard developed by the BBC, it was never used or sold commercially. U-matic (1971): Tape originally used by Sony to record video U-matic S (1974): A smaller version of U-matic, used for portable recorders Betamax (1975): Used on old Sony and Sanyo camcorders and portables; obsolete by the late 1980s in the consumer market VHS (1976): Compatible with VHS VCRs; no longer manufactured VHS-C (1982): Originally designed for portable VCRs, this standard was later adapted for compact consumer camcorders; identical in quality to VHS; cassettes play in VHS VCRs with an adapter. Still available in the low-end consumer market. Relatively short running time compared to other formats. 
Video8 (1985): Small-format tape developed by Sony to compete with VHS-C's palm-sized design; equivalent to VHS or Betamax in picture quality Hi-Band: Approximately 5 MHz bandwidth (420 lines EIA resolution, or ~ 550x480 edge-to-edge) U-matic BVU (1982): Largely used in high-end consumer and professional equipment U-matic BVU-SP (1985): Largely used in high-end consumer and professional equipment S-VHS (1987): Largely used in mid-range consumer and prosumer equipment S-VHS-C (1987): Limited to low-end consumer market Hi8 (1988): Used in low to mid-range consumer equipment but also was available as prosumer/industrial equipment Digital DV (1995): Initially developed by Sony, the DV standard became the most widespread standard-definition digital camcorder technology for the next decade. The DV format was the first to make capturing footage for video editing possible without special hardware, using the 4- or 6-pin FireWire sockets common on computers at the time. DVCPRO (1995): Panasonic released its own variant of the DV format for broadcast news-gathering. DVCAM (1996): Sony's answer to the DVCPRO DVD recordable (1996): A variety of recordable optical disc standards were released by multiple manufacturers during the 1990s and 2000s, of which DVD-RAM was the first. The most common in camcorders was MiniDVD-R, which used recordable 8 cm discs holding 30 minutes of MPEG video. D-VHS (1998): JVC's VHS tape supporting 720p/1080i HD; many units also supported IEEE 1394 recording. Digital8 (1999): Uses Hi8 tapes; most can read older Video8 and Hi8 analog tapes. MICROMV (2001): Matchbox-sized cassette. Sony was the only electronics manufacturer for this format, and editing software was proprietary to Sony and only available on Microsoft Windows; however, open source programmers did manage to create capture software for Linux. 
Blu-ray Disc (2003): Manufactured by Hitachi HDV (2004): Records up to an hour of HDTV MPEG-2 signal on a MiniDV cassette MPEG-2 codec-based format: Records MPEG-2 program stream or MPEG-2 transport stream to various kinds of tapeless standard and HD media (hard disks, solid-state memory, etc.). H.264: Compressed video using the H.264 codec in an MPEG-4 file; usually stored in tapeless media AVCHD: Puts H.264 video into a transport-stream file format; compressed in H.264 format (not MPEG-4) Multiview Video Coding: Amendment to H.264/MPEG-4 AVC video compression for sequences captured from multiple cameras using a single video stream; backwards-compatible with H.264 Operating systems Since most manufacturers focus their support on Windows and Mac users, users of other operating systems have difficulty finding support for their devices. However, open-source products such as Kdenlive, Cinelerra and Kino (written for the Linux operating system) allow editing of most popular digital formats on alternative operating systems and can be used in conjunction with OBS for online broadcast solutions; software to edit DV streams is available on most platforms. Digital forensics The issue of digital-camcorder forensics to recover data (e.g. video files with timestamps) has been addressed. Recalls In 1998, Sony recalled an estimated 700,000 Handycams after the NightShot feature was found to be able to see under people's clothes, creating a risk of accidental pornographic recording. See also 3CCD AVCHD Charge-coupled device CMOS Dew warning FireWire Flip Video PictBridge Pocket video camera Professional video camera PXL-2000—A toy camcorder that used compact audio cassette to store video SteadyShot USB streaming and USB port. VTR List of Sony camcorders List of Panasonic camcorders References External links History of Camcorders by Mark Shapiro How Camcorders Work from HowStuffWorks Audiovisual introductions in 1983 Consumer electronics Japanese inventions
22379852
https://en.wikipedia.org/wiki/Under%20the%20Sun%20%28Yosui%20Inoue%20album%29
Under the Sun (Yosui Inoue album)
Under the Sun is the 16th studio album by Japanese singer-songwriter Yōsui Inoue, released in September 1993. Two songs, "Gogatsu no Wakare" and "Make-Up Shadow", were released as singles prior to the album, and the latter became a massive hit. "Make-Up Shadow" was featured as the theme song for Subarashikikana Jinsei, a television drama aired on Fuji TV. The music was composed by Jun Sato (who used the pseudonym Utsuru Ayame), and Inoue wrote the lyrics. The song became the highest-charting single for Inoue, reaching number two on the Japanese Oricon weekly singles chart and selling in excess of 800,000 copies. Sato also arranged the song, and his arrangement won him a prize at the 35th Japan Record Awards. The album debuted at number one on the Japanese Oricon chart, and became his fifth chart-topping non-compilation album since 9.5 Carats, released in 1984. Track listing All songs written and composed by Yosui Inoue (except where indicated) "Be-Pop Juggler" - 3:10 "Eleven" - 4:43 "" - 4:37 "Power Down" - 4:42 "Make-up Shadow" (Utsuru Ayame/Inoue) - 4:07 "" (Kyouhei Tsutsumi/Inoue) - 5:34 "" (Inoue/Natsumi Hirai) - 4:28 "" (Inoue/Jun Satō/Yasushi Akimoto) - 5:50 "" - 5:06 "Under the Sun" - 5:41 "" (Inoue/Banana-UG-Kawashima) - 4:06 Personnel Yosui Inoue - Lead and background vocals, acoustic guitar Jun Sato - Piano, electric piano, keyboards, synthesizer, acoustic guitar, percussion, background vocals Banana-UG-Kawashima - Synthesizer, piano Yoshinobu Kojima - Organ, piano Yasuharu Nakanishi - Piano Yasuhiro Kobayashi - Accordion Tsuyoshi Kon - Acoustic guitar, electric guitar, bass guitar Susumu Osada - Electric guitar Haruo Kubota - Electric guitar Koki Ito - Bass guitar Chiharu Mikuzuki - Bass guitar Hiroshi Igarashi - Auto harp, banjo, steel drums Motoya Hamaguchi - Percussion, kalimba Nobu Saito - Percussion Hideo Yamaki - Drums Jun Aoyama - Drums Shin Kazuhara - Trumpet Jake H. 
Conception - Saxophone Taro Kiyooka - Trombone Hidefumi Toki - Clarinet Aska Strings (conducted by Aska Kaneko) - Strings Ma*To - Computer programming Keishi Urata - Computer programming Hideki Matsutake - Computer programming Naoki "Taro" Suzuki - Drums programming Seri - Background vocals Nisa - Background vocals Anna - Background vocals Eve - Background vocals Production Arranger: Yosui Inoue (#1,4), Jun Sato (#5,6,8,9), Tsuyoshi Kon (#2), Banana-UG-Kawashima (#3,11), Yasuharu Nakanishi (#7), Haruo Kubota (#10) Composer: Yosui Inoue (All tracks except #5,6), Utsuru Ayame (#5), Kyohei Tsutsumi (#6), Natsumi Hirai (#7), Jun Sato (#8), Banana-UG-Kawashima (#11) Lyricist: Yosui Inoue (All tracks except #9), Yasushi Akimoto (#9) Mixing Engineer: Tamotsu Yoshida, Jun Tendo, Takayoshi Yamanouchi, Yuta Kagema Recording Engineer: Jun Tendo, Junichi Yamazaki, Takayoshi Yamanouchi, Yuji Kuraishi, Kazuya Yoshida, Kazuya Miyazaki Session Support Engineer: Kenji Igarashi, Yutaka Uematsu, Motoyoshi Komine, Junichi Hohrin, Yuko Suzuki, Hajime Nagai, Kenji Matsunaga, Kaoru Matsuyama Mastering Engineer: Toru Kotetsu, Masayoshi Nakajo Art/Styling: Sachico Ito Photographer: Kenji Miura Artwork producer: Noriko Shimoyama Artwork designer: Shuzo Hayashi Artwork supervisor: Tomio Watanabe Artwork: Hiroshi Shoji Promotion Staff: Mitsuo Sakauchi, Yasuhide Sasa Production Coordinator: Sei Sato, Takashi Yokoo, Yasuko Makino, Chiharu Senoma Production Manager: Nao Funatsu Production Assistant: Hidenori Muto, Rie Nishioka, Satoko Ishizaki Chart positions Album Singles Release history References 1993 albums Yōsui Inoue albums
36718505
https://en.wikipedia.org/wiki/CloudCompare
CloudCompare
CloudCompare is 3D point cloud processing software; it handles point clouds such as those obtained with a laser scanner, and can also handle triangular meshes and calibrated images. Originally created during a collaboration between Telecom ParisTech and the R&D division of EDF, the CloudCompare project began in 2003 with Daniel Girardeau-Montaut's PhD thesis on change detection of 3D geometric data. At that time, its main purpose was to quickly detect changes in high-density 3D point clouds acquired with laser scanners in industrial facilities (such as power plants) or on building sites. It has since evolved into a more general and advanced 3D data processing package, and is now an independent open-source project and free software. CloudCompare provides a set of basic tools for manually editing and rendering 3D point clouds and triangular meshes. It also offers various advanced processing algorithms, among which are methods for performing: projections (axis-based, cylinder or cone unrolling, ...) registration (ICP, ...) distance computation (cloud-to-cloud or cloud-to-mesh nearest neighbor distance, ...) statistics computation (spatial chi-squared test, ...) segmentation (connected components labeling, front propagation based, ...) geometric feature estimation (density, curvature, roughness, geological plane orientation, ...) CloudCompare can handle an unlimited number of scalar fields per point cloud, on which various dedicated algorithms can be applied (smoothing, gradient evaluation, statistics, etc.). A dynamic color rendering system helps the user visualize per-point scalar fields in an efficient way; CloudCompare can therefore also be used to visualize N-D data. The user can interactively segment 3D entities (with a 2D polyline drawn on screen), interactively rotate/translate one or more entities relative to the others, and interactively pick single points, pairs of points (to get the corresponding segment length) or triplets of points (to get the corresponding angle and plane normal). 
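As a conceptual illustration of the cloud-to-cloud nearest-neighbor distance mentioned above, here is a minimal NumPy sketch (not CloudCompare's actual implementation; the function and variable names are invented for the example):

```python
import numpy as np

def cloud_to_cloud_distances(reference, compared):
    """For each point of `compared`, the distance to its nearest neighbor in `reference`.

    Brute force (O(n*m)); fine for toy clouds. Real tools such as CloudCompare
    rely on spatial acceleration structures (octrees) instead.
    """
    diff = compared[:, None, :] - reference[None, :, :]
    return np.sqrt((diff ** 2).sum(axis=2)).min(axis=1)

# Toy data: a flat 10x10 grid, and the same grid lifted by 0.1 along z.
xs, ys = np.meshgrid(np.linspace(0.0, 1.0, 10), np.linspace(0.0, 1.0, 10))
reference = np.column_stack([xs.ravel(), ys.ravel(), np.zeros(xs.size)])
compared = reference + np.array([0.0, 0.0, 0.1])

d = cloud_to_cloud_distances(reference, compared)
print(d.mean())  # every point is exactly 0.1 away from the reference cloud
```

The per-point distances computed this way are exactly the kind of scalar field that CloudCompare attaches to a cloud and color-maps for visual inspection.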
The latest version also supports the creation of 2D labels attached to points, and rectangular area annotations. CloudCompare is available on Windows, Linux and Mac OS X platforms, for both 32- and 64-bit architectures. It is developed in C++ with Qt. Input/Output CloudCompare supports input/output in the following formats: BIN (CloudCompare own binary format) ASCII cloud (one point per line "X Y Z ...") [wizard] PLY cloud or mesh [wizard] OBJ mesh(es) VTK cloud or mesh STL mesh E57 (ASTM E2807 standard) clouds & calibrated images LAS and LAZ clouds Point Cloud Library PCD files FBX mesh SHP files OFF mesh (Geomview) PTX cloud (Leica) FLS/FWS cloud(s) (Faro) DP cloud(s) (DotProduct) RDB / RDBX / RDS cloud(s) (Riegl) PSZ projects (Photoscan) Various other polyline formats Moreover, thanks to a collaboration with Prof. Irwin Scollar (creator of AirPhoto SE, a program for the geometric rectification of aerial images & orthophotos from multiple images), CloudCompare can also import Snavely's Bundler SfM software output file (.out) to generate orthorectified images (directly as image files or as 2D point clouds) and an approximated DTM (based on Bundler key-points) colored with image data. CloudCompare can also import various other formats: Aveva PDMS '.mac' scripts (supported primitives: cylinder, plane, cone, torus, dish, box, snout and profile extrusion), SOI (from old Mensi Soisic scanners), PN, PV, POV, ICM, etc. Finally, CloudCompare can also export Maya ASCII files (MA). Plugins A plugin mechanism enables further extension of CloudCompare capabilities. Two kinds of plugins are available: standard plugins for algorithms coming either from the academic world (ShadeVis, HPR, Poisson reconstruction, boolean operations on meshes, etc.) or from external libraries (PCL) or others (e.g. generation of animations with qAnimation) OpenGL plugins for advanced shaders (EyeDome Lighting, SSAO, etc.) 
See also 3D scanner References External links Airphoto SE on the Bonn Archaeological Software Package project page Bundler project page OpenKinect project page libLAS project page libE57 project page Free 3D graphics software 3D graphics software Computer-aided design software Free computer-aided design software Free graphics software Computer-aided design software for Linux Free software programmed in C++
37131078
https://en.wikipedia.org/wiki/ProjectLibre
ProjectLibre
ProjectLibre is a project management software company with both a free open-source desktop application and an upcoming cloud version. ProjectLibre desktop is a free and open-source project management software system intended ultimately as a standalone replacement for Microsoft Project. ProjectLibre has been downloaded 5,500,000 times in 200 countries on all 7 continents and has been translated into 29 languages. The latest release of ProjectLibre shipped with extensive updates for global users: the 1.9.3 release allows project managers to select the language from a drop-down list, and in addition to the language, the country can be chosen, which also sets the project currency and date format. Based on these download and translation figures, ProjectLibre delivers project management software in the native language and currency of over 5,500,000 people in 197 countries. ProjectLibre is written in the Java programming language, and will thus theoretically run on any machine for which a fully functioning Java virtual machine (JVM) exists. Currently, ProjectLibre is certified to run on Linux, macOS, and Microsoft Windows. It is released under the Common Public Attribution License (CPAL) and qualifies as free software according to the Free Software Foundation. ProjectLibre's initial release was in August 2012. SourceForge staff selected ProjectLibre as the January 2016 "Staff Pick" Project of the Month. ProjectLibre Cloud will be a web-based, multi-user, multi-project version running in the browser; its relationship to the desktop edition will be similar to that of Google Docs to Microsoft Word. The beta test timing has not been announced. History The initial release of ProjectLibre occurred in August 2012. The team is looking to release a Cloud/SaaS version in Q1 2022, which will extend the desktop features with team and enterprise features. 
Features The current version includes: Microsoft Project 2010 compatibility OpenOffice and LibreOffice compatibility Ribbon user interface Earned value costing Gantt chart PERT graph only (not PERT technique) Resource breakdown structure (RBS) chart Task usage reports Work breakdown structure (WBS) chart Comparison to Microsoft Project Compared to Microsoft Project, which it closely emulates, ProjectLibre has a similar user interface (UI) including a ribbon-style menu, and a similar approach to construction of a project plan: create an indented task list or work breakdown structure (WBS), set durations, create links (either by (a) mouse drag, (b) selection and then button-down, or (c) manually typing in the "predecessor" column), and assign resources. The columns (fields) look the same as for Microsoft Project. Costing features are comparable: labour, hourly rate, material usage, and fixed costs are all provided. ProjectLibre improvements Full compatibility with Microsoft Project 2010, import/export capability Printing PDF exporting (without any restrictions) Ribbon user interface Many bug fixes and corrections of issues found in OpenProj See also Comparison of project management software Microsoft Project OpenProj References External links ProjectLibre on SourceForge Free software programmed in Java (programming language) Free project management software Java platform software
680345
https://en.wikipedia.org/wiki/X68000
X68000
The X68000 is a home computer created by Sharp Corporation. It was first released in 1987 and sold only in Japan. Gaming was a major use of the X68000, with custom sprite hardware and an 8-channel sound chip enabling ports of contemporaneous arcade video games. The initial model has a 10 MHz Motorola 68000 CPU, 1 MB of RAM, and lacks a hard drive. The final model was released in 1993 with a 25 MHz Motorola 68030 CPU, 4 MB of RAM, and optional 80 MB SCSI hard drive. RAM in these systems is expandable to 12 MB, though most games and applications do not require more than 2 MB. Operating system The X68k runs an operating system called Human68k, which was developed for Sharp by Hudson Soft. An MS-DOS-workalike, Human68k features English-based commands very similar to those in MS-DOS; executable files have the extension .X. Versions of the OS prior to 2.0 have command line output only for common utilities like "format" and "switch", while later versions included forms-based versions of these utilities. At least three major versions of the OS were released, with several updates in between. Other operating systems available include NetBSD for X68030 and OS-9. Early models have a GUI called "VS" or "Visual Shell"; later ones were originally packaged with SX-WINDOW. A third GUI called Ko-Window exists with an interface similar to Motif. These GUI shells can be booted from floppy disk or the system's hard drive. Most games also boot and run from floppy disk; some are hard disk installable and others require hard disk installation. Since the system's release, software such as Human68k, console, SX-Window C compiler suites, and BIOS ROMs have been released as public domain software and are freely available for download. Case design The X68000 features two soft-eject 5.25-inch floppy drives, or in some of the compact models, two 3.5-inch floppy drives, and a very distinctive case design of two connected towers, divided by a retractable carrying handle. 
This system was also one of the first to feature a software-controlled power switch; pressing the switch would signal the system's software to save and shutdown, similar to the ATX design of modern PCs. The screen would fade to black and sound would fade to silence before the system turned off. The system's keyboard has a mouse port built into either side. The front of the computer has a headphone jack, volume control, joystick, keyboard and mouse ports. The top has a retractable carrying handle only on non-Compact models, a reset button, and a non-maskable interrupt (NMI) button. The rear has a variety of ports, including stereoscopic output for 3D goggles, FDD and HDD expansion ports, and I/O board expansion slots. Display The monitor supports horizontal scanning rates of 15, 24, and 31 kHz and functions as a cable-ready television (NTSC-J standard) with composite video input. It was a high quality monitor for playing JAMMA-compatible arcade boards due to its analog RGB input and support for all three horizontal scanning rates used with arcade games. Disk I/O Early machines use the rare Shugart Associates System Interface (SASI) for the hard disk interface; later versions adopted the industry-standard Small Computer System Interface (SCSI). Per the hardware's capability, formatted SASI drives can be 10, 20 or 30 MB in size and can be logically partitioned as well. Human68K does not support the VFAT long filenames standard of modern Windows systems, but it supports 18.3 character filenames instead of the 8.3 character filenames allowed in the FAT filesystem. Human68K is case sensitive and allows lower case and Shift JIS encoded Kanji characters in filenames, both of which cause serious problems when a DOS system tries to read such a directory. 
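The DOS-compatibility constraint described here can be expressed as a simple mechanical check. A minimal sketch (the function name is invented, and the permitted character set is a simplifying assumption; real DOS 8.3 names allow a few more punctuation characters):

```python
import re

# Simplified DOS 8.3 rule: up to 8 upper-case Latin letters, digits or
# underscores, optionally followed by a dot and up to 3 more such characters.
DOS_83 = re.compile(r"[A-Z0-9_]{1,8}(?:\.[A-Z0-9_]{1,3})?\Z")

def safe_for_dos(filename):
    """True if a Human68k filename is also readable on a DOS system."""
    return DOS_83.match(filename) is not None

print(safe_for_dos("GAME.X"))          # True
print(safe_for_dos("MyLongSave.dat"))  # False: lower case, 10-character stem
```

Names rejected by such a check (lower case, Shift JIS kanji, or stems longer than 8 characters) are exactly the ones that cause trouble when a DOS system reads the disk.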
If an X68000 user restricts themselves to filenames following the DOS 8.3 scheme, using only upper-case Latin characters, then a disk written on the X68000 is fully compatible with other standard Japanese platforms such as the NEC PC-9800, the Fujitsu FMR and FM Towns computers. The Japanese standard disk format used by the X68000 is: 77 tracks, 2 heads, 8 sectors, 1024 bytes per sector, 360 rpm (1232 KiB). Expansion Many add-on cards were released for the system, including networking (Neptune-X), SCSI, memory upgrades, CPU enhancements (JUPITER-X 68040/060 accelerator), and MIDI I/O boards. The system has two joystick ports, both 9-pin male and supporting Atari standard joysticks and MSX controllers. Capcom produced a converter, originally sold packaged with the X68000 version of Street Fighter II, that allowed users to plug a Super Famicom or Mega Drive controller into the system. The adapter was made specifically so that users could plug the Capcom Power Stick Fighter controller into the system. Home arcade In terms of hardware, the X68K was very similar to arcade machines of the time, and served as the development machine for the Capcom CP System (CPS). It supports separate text RAM, graphic RAM and hardware sprites. Sound is produced internally via Yamaha's then top-of-the-line YM2151 FM synthesizer and a single-channel OKI MSM6258V for PCM. Due to this and other similarities, it played host to many arcade game ports in its day. Games made for this system include Parodius Da! -Shinwa kara Owarai e-, Ghouls 'n Ghosts, Strider, Final Fight, Alien Syndrome, Street Fighter II: Champion Edition, Akumajo Dracula (Castlevania in other regions; the X68000 version was ported to the PlayStation as Castlevania Chronicles), Cho Ren Sha 68k (which has a Windows port) and many others. Many games also supported the Roland SC-55 and MT-32 MIDI modules for sound, as well as mixed-mode internal/external output. 
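The standard disk geometry quoted above (77 tracks, 2 heads, 8 sectors of 1024 bytes) multiplies out to the stated 1232 KiB; a quick check:

```python
# Japanese standard floppy geometry used by the X68000 (and PC-9800 etc.).
tracks, heads, sectors_per_track, bytes_per_sector = 77, 2, 8, 1024

capacity_bytes = tracks * heads * sectors_per_track * bytes_per_sector
print(capacity_bytes)          # 1261568 bytes
print(capacity_bytes // 1024)  # 1232 KiB, matching the format description
```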
List of X68000 series List of X68000 games Technical specifications Processors Main CPU (central processing unit) X68000 (1987) to SUPER (1991) models - Hitachi HD68HC000 (16/32-bit) @ 10 MHz XVI (1991) to Compact (1992) models - Motorola 68000 (16/32-bit) @ 16 MHz X68030 (1993) models - Motorola MC68EC030 (32-bit) @ 25 MHz Sub-CPU: Oki MSM80C51 MCU GPU (graphics processing unit) chipset: Sharp-Hudson Custom Chipset X68000 (1987) model - CYNTHIA Jr Sprite Controller, VINAS CRT Controller, VSOP Video Controller, RESERVE Video Data Selector ACE (1988) to X68030 (1993) models - CYNTHIA Sprite Controller, VICON CRT Controller, VIPS Video Controller, CATHY Video Data Selector Sound chips: Yamaha YM2151: Eight FM synthesis channels Yamaha YM3012: Floating point DAC with 2-channel stereo output Oki MSM6258: One 4-bit ADPCM mono channel @ 22 kHz sampling rate Memory ROM: 1 MB (128 kB BIOS, 768 kB Character Generator) Main RAM: 1-4 MB (expandable up to 12 MB) VRAM: 1056 kB 512 kB graphics 512 kB text 32 kB sprites SRAM: 16 kB static RAM Graphics Color palette: 65,536 (16-bit RGB high color depth) Maximum colors on screen: 65,536 (in 512×512 resolution) Screen resolutions (all out of 65,536 color palette) 256×240 pixels @ 16 to 65,536 colors 256×256 pixels @ 16 to 65,536 colors 512×240 pixels @ 16 to 65,536 colors 512×256 pixels @ 16 to 65,536 colors 512×512 pixels @ 16 to 65,536 colors 640×480 pixels @ 16 to 64 colors 768×512 pixels @ 16 to 64 colors 1024×1024 pixels @ 16 to 64 colors Graphics hardware: Hardware scrolling, priority control, super-impose, dual tilemap background layers, sprite flipping Graphical planes: 1-4 bitmap planes, 1-2 tilemap planes, 1 sprite plane Bitmap planes 1 layer: 512×512 resolution @ 65,536 colors on screen, or 1024×1024 resolution @ 64 colors on screen (out of 65,536 color palette) 2 layers: 512×512 resolution @ 256 colors on screen per layer (512 colors combined) (out of 65,536 color palette) 4 layers: 512×512 resolution @ 16 colors on 
screen per layer (64 colors combined) (out of 65,536 color palette) BG tilemap planes BG plane resolutions: 256×256 (2 layers) or 512×512 (1 layer) BG chip/tile size: 8×8 or 16×16 Colors per BG layer: 256 (out of 65,536 color palette) BG colors on screen: 256 (1 layer) or 512 (2 layers), out of 65,536 color palette BG tiles on screen: 512 (16×16 tiles in 256×256 layers) to 4096 (8×8 tiles in 512×512 layer) Sprite plane Sprite count: 128 sprites on screen, 32 sprites per scanline, 256 sprite patterns in VRAM (can be multiplied up to 512 sprites on screen with scanline raster interrupt method) Sprite size: 16×16 Colors per sprite: 16 colors per palette, selectable from 16 palettes (out of 65,536 color palette) Sprite colors on screen: 256 (out of 65,536 color palette) Sprite tile size: 8×8 or 16×16 Sprite tile count: 128 (16×16) to 512 (8×8) on screen, 256 (16×16) to 1024 (8×8) in VRAM Other specifications Expansion: 2 card slots (4 on Pro models) I/O Ports: 2 MSX compatible joystick ports Audio IN / OUT Stereo scope/3D goggles port TV/monitor Control RGB/NTSC Video Image I/O Expansion (2 slots) External FDD (up to 2) SASI/SCSI (depending on model) RS232 serial port Parallel port Headphone and microphone ports Floppy Drives: Two soft-eject 5.25″ floppy drives, 1.2 MB each Two 3.5″ floppy drives, 1.44 MB each (compact models) Hard Disk: 20-80 MB SASI/SCSI (depending on model) Operating Systems: Human68k (MS DOS-alike developed by Hudson), SX-Windows GUI Power Input: AC 100 V, 50/60 Hz Weight: ~8 kg (~10 kg Pro) Optional upgrades Upgradable CPU: HARP: Motorola 68000 @ 20 MHz REDZONE: Motorola 68000 @ 24 MHz X68030 D'ash: Motorola 68030 @ 33 MHz Xellent30: Motorola 68030 @ 40 MHz HARP-FX: Motorola 68030 @ 50 MHz Xellent40: Motorola 68040 @ 33 MHz 060Turbo: Motorola 68060 @ 50 MHz Jupiter-EX: Motorola 68060 @ 66 MHz Venus-X/060: Motorola 68060 @ 75 MHz Additional CPU: CONCERTO-X68K: NEC V30 @ 8 MHz, with 512 kB RAM VDTK-X68K: NEC V70 @ 20 MHz, with 2 MB DRAM and 128 kB 
SRAM FPU (floating point unit) coprocessor: Sharp CZ-6BP1 Sharp CZ-6BP2: Motorola 68881 @ 16 MHz Sharp CZ-5MP1: Motorola 68882 @ 25 MHz Xellent30: Motorola 68882 @ 33 MHz Tsukumo TS-6BE6DE: Motorola MC68882, with 6 MB RAM Sound card: Sharp CZ-6BM1: MIDI card System Sacom SX-68M: MIDI card System Sacom SX-68M-2: MIDI card Marcury-Unit: 16-bit stereo PCM @ 48 kHz sampling rate, 2× Yamaha YMF288 FM synthesis sound chips Graphics accelerator & sound card: Tsukumo TS-6BGA Graphics chip: Cirrus Logic CL-GD5434 (1994) VRAM: 2 MB (2048 kB) 64-bit DRAM Color palette: 16,777,216 (24-bit RGB true color depth) and alpha channel (RGBA) Maximum colors on screen: 16,777,216 Maximum resolution: 2048×1024 pixels Screen resolutions (all out of 16,777,216 color palette) 768×512 pixels @ 32,768 to 16,777,216 colors 800×600 pixels @ 32,768 to 16,777,216 colors 1024×512 pixels @ 32,768 to 16,777,216 colors 1024×768 pixels @ 32,768 to 16,777,216 colors 1024×1024 pixels @ 32,768 colors 1280×1024 pixels @ 256 colors 2048×1024 pixels @ 256 colors Graphical capabilities: 64-bit GUI acceleration, blitter, bit blit Audio capabilities: 16-bit stereo PCM @ 48 kHz sampling rate Hard disk drive storage: Sharp CZ-5H08: 80 MB Sharp CZ-68H: 81 MB Sharp CZ-5H16: 160 MB See also X68000's MDX X1, the predecessor of the X68000 References External links Japanese Computer Emulation Centre Japanese site for official public domain software and ROMs English site with X68000 Hardware information and emulators X68000 review at old-computers.com Sharp X68000 68000-based home computers Home video game consoles Home computers Products introduced in 1987
8688139
https://en.wikipedia.org/wiki/Electronic%20circuit%20design
Electronic circuit design
Electronic circuit design comprises the analysis and synthesis of electronic circuits.

Methods
To design any electrical circuit, either analog or digital, electrical engineers need to be able to predict the voltages and currents at all places within the circuit. Linear circuits, that is, circuits wherein the outputs are linearly dependent on the inputs, can be analyzed by hand using complex analysis. Simple nonlinear circuits can also be analyzed in this way. Specialized software has been created to analyze circuits that are either too complicated or too nonlinear to analyze by hand. Circuit simulation software allows engineers to design circuits more efficiently, reducing the time, cost, and risk of error involved in building circuit prototypes. Some of these tools make use of hardware description languages such as VHDL or Verilog.

Network simulation software
More complex circuits are analyzed with circuit simulation software such as SPICE and EMTP.

Linearization around operating point
When faced with a new circuit, the software first tries to find a steady-state solution in which all the nodes conform to Kirchhoff's current law and the voltages across and currents through each element of the circuit conform to the voltage/current equations governing that element. Once the steady-state solution is found, the software can analyze the response to perturbations using piecewise approximation, harmonic balance or other methods.

Piecewise-linear approximation
Software such as the PLECS interface to Simulink uses piecewise-linear approximation of the equations governing the elements of a circuit. The circuit is treated as a completely linear network of ideal diodes. Every time a diode switches from on to off or vice versa, the configuration of the linear network changes. Adding more detail to the approximation of equations increases the accuracy of the simulation, but also increases its running time.
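As a toy illustration of the steady-state solve described above, the following sketch writes Kirchhoff's current law at the two internal nodes of a purely resistive circuit and solves the resulting 2×2 linear system. All component values are invented for the example; this is a minimal sketch of nodal analysis, not any particular tool's implementation.

```python
# Nodal analysis (KCL) for a series chain: 9 V -> R1 -> node A -> R2 -> node B -> R3 -> GND
R1, R2, R3 = 1e3, 2e3, 3e3   # arbitrary example values, in ohms
V = 9.0                      # source voltage

# Conductance matrix G and source vector I for G @ [vA, vB] = I
# (the voltage source is folded in as its Norton equivalent feeding node A)
g11, g12 = 1/R1 + 1/R2, -1/R2
g21, g22 = -1/R2, 1/R2 + 1/R3
i1, i2 = V / R1, 0.0

# Solve the 2x2 system with Cramer's rule
det = g11 * g22 - g12 * g21
vA = (i1 * g22 - g12 * i2) / det
vB = (g11 * i2 - i1 * g21) / det
print(vA, vB)  # 7.5 4.5 — agrees with the series divider 9 V * R/(R1+R2+R3)
```

The same pattern (conductance matrix plus source vector) scales to any linear network; SPICE-class simulators build and factor exactly this kind of matrix, then repeat the solve inside an iteration loop when nonlinear elements are present.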
Synthesis
Simple circuits may be designed by connecting a number of elements or functional blocks such as integrated circuits. More complex digital circuits are typically designed with the aid of computer software. Logic circuits (and sometimes mixed-mode circuits) are often described in hardware description languages such as VHDL or Verilog, then synthesized using a logic synthesis engine.

References

See also
Integrated circuit design
Electronic design
37661419
https://en.wikipedia.org/wiki/Headquarters%20Emergency%20Relocation%20Team
Headquarters Emergency Relocation Team
Headquarters Emergency Relocation Team (HERT) was a subordinate unit to the United States' Strategic Air Command during the Cold War, poised to provide command and control (C2) of SAC forces in the event of a national emergency (i.e., nuclear war). The personnel and equipment were stationed at Offutt AFB, Nebraska, with a temporary deployment location at the Cornhusker Army Ammunition Plant, Grand Island, Nebraska.

History
The term HERT was superseded by Enduring Battle Management Support Center (EBMSC) circa 1982. The unit was redesignated the 55th Mobile Command and Control Squadron after SAC was inactivated in 1992.

Chronology
1970: OPLAN 109 for HERT is first developed. Mostly a paper exercise, OPLAN 109 was revised in 1974, 1975, and 1977.
June 1979: participation in exercise GLOBAL SHIELD
April 1980: participation in exercise PRIZE GAUNTLET
June 1980: participation in exercise GLOBAL SHIELD
January 1981: participation in exercise GLOBAL SHIELD
June 1982: participation in exercise GLOBAL SHIELD
June 1983: participation in exercise GLOBAL SHIELD
January 1984: participation in local Offutt AFB exercises
April 1984: participation in exercise NIGHT TRAIN 84, alongside REX 84

Weather support
The 3d Weather Wing (3 WW) was tasked to support HERT. 3 WW developed nuclear fallout procedures for the Post Attack Command and Control System (PACCS) and demonstrated Defense Meteorological Satellite Program (DMSP) Mark IV capabilities. Weather support centered on DMSP satellite data, local observations, climatological data and aircraft-forwarded "air reports" (AIREPs). Future capabilities were to rely on Marine HF broadcast data.

Interaction with the NPO
As the National Program Office (NPO) mirrored HERT's command and control abilities for the civilian side of the government, coordination between the two was exercised during exercise NIGHT TRAIN 84, which was held concurrently with READINESS EXERCISE 84 (REX 84).
See also Post Attack Command and Control System Continuity of government 55th Mobile Command and Control Squadron References Military installations in Nebraska Military units and formations of the United States in the Cold War United States nuclear command and control Continuity of government in the United States
34744096
https://en.wikipedia.org/wiki/Brantley%20Coile
Brantley Coile
Brantley Coile is an inventor and founder of network technology companies whose products include the PIX Firewall, the first stateful-inspection firewall, and Cisco Systems' first load balancer, LocalDirector. Coile's patents include the fundamental patents on Network Address Translation (NAT). Coile earned a degree in computer science at the University of Georgia.

In 1994, he co-founded Network Translation, where he created the PIX Firewall appliance, a new class of data-communication firewall utilizing stateful packet inspection. After leaving Cisco Systems in 2000, he founded Coraid, Inc. to design and develop network storage devices using ATA-over-Ethernet (AoE), an open and lightweight network storage protocol. Coile founded SouthSuite, Inc. in 2013 and continued to develop AoE technology. In 2015 he purchased Coraid's EtherDrive intellectual property and founded The Brantley Coile Company, a subsidiary of SouthSuite.

References

American businesspeople
American computer programmers
American computer scientists
21st-century American engineers
21st-century American inventors
Living people
Year of birth missing (living people)
6767
https://en.wikipedia.org/wiki/Commodore%201541
Commodore 1541
The Commodore 1541 (also known as the CBM 1541 and VIC-1541) is a floppy disk drive which was made by Commodore International for the Commodore 64 (C64), Commodore's most popular home computer. The best-known floppy disk drive for the C64, the 1541 is a single-sided 170-kilobyte drive for 5¼" disks. The 1541 directly followed the Commodore 1540 (meant for the VIC-20).

The disk drive uses group coded recording (GCR) and contains a MOS Technology 6502 microprocessor, doubling as a disk controller and on-board disk operating system (DOS) processor. The number of sectors per track varies from 17 to 21 (an early implementation of zone bit recording). The drive's built-in disk operating system is CBM DOS 2.6.

History

Introduction
The 1541 was priced at under at its introduction. A C64 plus a 1541 cost about $900, while an Apple II with no disk drive cost $1,295. The first 1541 drives produced in 1982 have a label on the front reading VIC-1541 and have an off-white case to match the VIC-20. In 1983, the 1541 was switched to having the familiar beige case and a front label reading simply "1541" along with rainbow stripes to match the Commodore 64. By 1983 a 1541 sold for $300 or less. After a brutal home-computer price war that Commodore began, the C64 and 1541 together cost under $500.

The drive became very popular, and became difficult to find. The company claimed that the shortage occurred because 90% of C64 owners bought the 1541, compared to its 30% expectation, but the press discussed what Creative Computing described as "an absolutely alarming return rate" because of defects. The magazine reported in March 1984 that it received three defective drives in two weeks, and Compute!'s Gazette reported in December 1983 that four of the magazine's seven drives had failed; "COMPUTE! Publications sorely needs additional 1541s for in-house use, yet we can't find any to buy. 
After numerous phone calls over several days, we were able to locate only two units in the entire continental United States", reportedly because of Commodore's attempt to resolve a manufacturing issue that caused the high failures.

The early (1982 to 1983) 1541s have a spring-eject mechanism (Alps drive), and the disks often fail to release. This style of drive has the popular nickname "Toaster Drive", because it requires the use of a knife or other hard thin object to pry out the stuck media, just like a piece of toast stuck in an actual toaster (though this is inadvisable with actual toasters). This was fixed later when Commodore changed the vendor of the drive mechanism (Mitsumi) and adopted the flip-lever Newtronics mechanism, greatly improving reliability. In addition, Commodore made the drive's controller board smaller and reduced its chip count compared to the early 1541s (which had a large PCB running the length of the case, with dozens of TTL chips). The beige-case Newtronics 1541 was produced from 1984 to 1986.

Versions and third-party clones
All but the very earliest non-II model 1541s can use either the Alps or Newtronics mechanism. Visually, the first models, of the VIC-1541 denomination, have an off-white color like the VIC-20 and VIC-1540. Then, to match the look of the C64, CBM changed the drive's color to brown-beige and the name to Commodore 1541.

The 1541's numerous shortcomings opened a market for a number of third-party clones of the disk drive, a situation that continued for the lifetime of the C64. Well-known clones are the Oceanic OC-118 a.k.a. Excelerator+, the MSD Super Disk single and dual drives, the Enhancer 2000, the Indus GT, and CMD's FD-2000 and FD-4000. Nevertheless, the 1541 became the first disk drive to see widespread use in the home, and Commodore sold millions of the units. 
In 1986, Commodore released the 1541C, a revised version that offered quieter and slightly more reliable operation and a light beige case matching the color scheme of the Commodore 64C. It was replaced in 1988 by the 1541-II, which uses an external power supply to provide cooler operation and allows the drive to have a smaller desktop footprint (the power supply "brick" being placed elsewhere, typically on the floor). Later ROM revisions fixed assorted problems, including a software bug that caused the save-and-replace command to corrupt data.

Successors
The Commodore 1570 is an upgrade from the 1541 for use with the Commodore 128, available in Europe. It offers MFM capability for accessing CP/M disks, improved speed, and somewhat quieter operation, but was only manufactured until Commodore got its production lines going with the 1571, the double-sided drive. Finally, the small, external-power-supply-based, MFM-based Commodore 1581 3½-inch drive was made, giving 800 KB access to the C128 and C64.

Design

Hardware
The 1541 does not have DIP switches to change the device number. If a user added more than one drive to a system, the user had to open the case and cut a trace in the circuit board to permanently change the drive's device number, or hand-wire an external switch to allow it to be changed externally. It was also possible to change the drive number via a software command, which was temporary and would be erased as soon as the drive was powered off.

1541 drives at power-up always default to device #8. If multiple drives in a chain are used, then the startup procedure is to power on the first drive in the chain, alter its device number via a software command to the highest number in the chain (if three drives were used, then the first drive in the chain would be set to device #10), then power on the next drive, alter its device number to the next lowest, and repeat the procedure until the final drive at the end of the chain was powered on and left as device #8. 
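The temporary software renumbering described above was done by writing the new address into the drive's own RAM with the CBM DOS "M-W" (memory-write) command over the command channel. The zero-page locations ($77/$78) and the +$20/+$40 offsets below are quoted from common period practice, not from this article, so treat them as an assumption; this sketch only builds the command bytes:

```python
def mw_command(new_device: int) -> bytes:
    # CBM DOS memory-write: "M-W" <addr lo> <addr hi> <count> <data...>
    # Drive RAM $77/$78 hold the listen/talk addresses, stored as
    # device+$20 and device+$40 respectively (assumed, per period practice).
    addr = 0x77
    return (b"M-W"
            + bytes([addr & 0xFF, addr >> 8, 2,
                     new_device + 0x20, new_device + 0x40]))

print(mw_command(9).hex())  # command bytes to renumber a drive to device #9
```

On real hardware these bytes would be sent to the drive's command channel (secondary address 15), the same channel BASIC reaches with OPEN 15,8,15.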
Unlike the Apple II, where support for two drives was normal, it was relatively uncommon for Commodore software to support this setup, and the CBM DOS copy file command was not able to copy files between drives – a third-party copy utility needed to be used instead.

The pre-II 1541s also have an internal power source, which generated a lot of heat. The heat generation was a frequent source of humour. For example, Compute! stated in 1988 that "Commodore 64s used to be a favorite with amateur and professional chefs since they could compute and cook on top of their 1500-series disk drives at the same time". A series of humorous tips in MikroBitti in 1989 said "When programming late, coffee and kebab keep nicely warm on top of the 1541." The MikroBitti review of the 1541-II said that its external power source "should end the jokes about toasters".

The drive-head mechanism installed in the early production years is notoriously easy to misalign. The most common cause of the 1541's drive head knocking and subsequent misalignment is copy-protection schemes on commercial software. The main cause of the problem is that the disk drive itself does not feature any means of detecting when the read/write head reaches track zero. Accordingly, when a disk is not formatted or a disk error occurs, the unit tries to move the head 40 times in the direction of track zero (although the 1541 DOS only uses 35 tracks, the drive mechanism itself is a 40-track unit, so this ensured track zero would be reached no matter where the head was before). Once track zero is reached, every further attempt to move the head in that direction would cause it to be rammed against a solid stop: for example, if the head happened to be on track 18 (where the directory is located) before this procedure, the head would actually be moved 18 times, and then rammed against the stop 22 times. This ramming gives the characteristic "machine gun" noise and sooner or later throws the head out of alignment. 
A defective head-alignment part likely caused many of the reliability issues in early 1541 drives; one dealer told Compute!'s Gazette in 1983 that the part had caused all but three of several hundred drive failures that he had repaired. The drives were so unreliable that Info magazine joked, "Sometimes it seems as if one of the original design specs ... must have said 'Mean time between failure: 10 accesses.'"

Users can realign the drive themselves with a software program and a calibration disk: the user removes the drive from its case, loosens the screws holding the stepper motor that moves the head, and then, with the calibration disk in the drive, gently turns the stepper motor back and forth until the program shows a good alignment. The screws are then tightened and the drive is put back into its case.

A third-party fix for the 1541 appeared in which the solid head stop was replaced by a sprung stop, giving the head a much easier life. The later 1571 drive (which is 1541-compatible) incorporates track-zero detection by photo-interrupter and is thus immune to the problem. Also, a software solution, which resides in the drive controller's ROM, prevents the rereads from occurring, though this could cause problems when genuine errors did occur. Due to the alignment issues on the Alps drive mechanisms, Commodore switched suppliers to Newtronics in 1984. The Newtronics mechanism drives have a lever rather than a pull-down tab to close the drive door. Although the alignment issues were resolved after the switch, the Newtronics drives added a new reliability problem in that many of the read/write heads were improperly sealed, causing moisture to penetrate the head and short it out.

The 1541's PCB consists mainly of a 6502 CPU, two 6522 VIA chips, and 2k of work RAM. 
Up to 48k of RAM can be added; this was mainly useful for defeating copy-protection schemes, since an entire disk track could be loaded into drive RAM, while the standard 2k only accommodated a few sectors (theoretically eight, but some of the RAM was used by CBM DOS as work space). Some Commodore users used 1541s as an impromptu math coprocessor by uploading math-intensive code to the drive for background processing.

Interface
The 1541 uses a proprietary serialized derivative of the IEEE-488 parallel interface, which Commodore used on their previous disk drives for the PET/CBM range of personal and business computers, but when the VIC-20 was in development, a cheaper alternative to the expensive IEEE-488 cables was sought. To ensure a ready supply of inexpensive cabling for its home computer peripherals, Commodore chose standard DIN connectors for the serial interface. Disk drives and other peripherals such as printers connected to the computer via a daisy-chain setup, necessitating only a single connector on the computer itself.

Control

Throughput and software
IEEE Spectrum in 1985 stated that:

The C-64's designers blamed the 1541's slow speed on the marketing department's insistence that the computer be compatible with the 1540, which was slow because of a flaw in the 6522 VIA interface controller. Initially, Commodore intended to use a hardware shift register (one component of the 6522) to maintain fast drive speeds with the new serial interface. However, a hardware bug with this chip prevented the initial design from working as anticipated, and the ROM code was hastily rewritten to handle the entire operation in software. According to Jim Butterfield, this causes a speed reduction by a factor of five; had 1540 compatibility not been a requirement, the disk interface would have been much faster. 
In any case, the C64 normally could not work with a 1540 unless the VIC-II video output was disabled via a register write, which stopped the chip from halting the CPU during certain video lines and thereby ensured correct serial timing.

As implemented on the VIC-20 and C64, Commodore DOS transfers 300 bytes per second, compared to the Atari 810's 2,400 bytes per second, the Apple Disk II's 15,000 bytes per second, and the 300-baud data rate of the Commodore Datasette storage system. About 20 minutes are needed to copy one disk—10 minutes of reading time, and 10 minutes of writing time. However, since both the computer and the drive can easily be reprogrammed, third parties quickly wrote more efficient firmware that would speed up drive operations drastically. Without hardware modifications, some "fast loader" utilities (which bypassed routines in the 1541's onboard ROM) managed to achieve speeds of up to 4 KB/s. The most common of these products are the Epyx Fast Load, the Final Cartridge, and the Action Replay plug-in ROM cartridges, which all have machine code monitor and disk editor software on board as well. The popular Commodore computer magazines of the era also entered the arena with type-in fast-load utilities, with Compute!'s Gazette publishing TurboDisk in 1985 and RUN publishing Sizzle in 1987.

Even though each 1541 has its own on-board disk controller and disk operating system, it is not possible for a user to command two 1541 drives to copy a disk (one drive reading and the other writing) as with older dual drives like the 4040 that was often found with the PET computer, and which the 1541 is backward-compatible with (it can read 4040 disks but not write to them, as a minor difference in the number of header bytes makes the 4040 and 1541 only read-compatible). Originally, to copy from drive to drive, software running on the C64 was needed; it would first read from one drive into computer memory, then write out to the other. 
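The copy times quoted above follow directly from the stock serial rate; a quick back-of-the-envelope check using only figures from the text:

```python
# Time to read a full 170 KB disk at the stock serial rate of 300 bytes/s
disk_bytes = 170 * 1024      # formatted capacity, approximate
serial_rate = 300            # bytes per second over the stock serial bus

read_minutes = disk_bytes / serial_rate / 60
print(round(read_minutes, 1))  # ~9.7 minutes to read; writing takes about as long,
                               # matching the "about 20 minutes per disk" figure
```

The same arithmetic shows why the ~4 KB/s fast loaders were transformative: at that rate a full disk reads in well under a minute.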
Only when Fast Hack'em and, later, other disk backup programs were released was true drive-to-drive copying possible for a pair of 1541s. The user could, if they wished, unplug the C64 from the drives (i.e., from the first drive in the daisy chain) and do something else with the computer as the drives proceeded to copy the entire disk. This is not a recommended practice, as disconnecting the serial lead from a powered drive and/or computer can result in the destruction of one or both of the port chips in the disk drive.

Media
The 1541 drive uses standard 5¼-inch double-density floppy media; high-density media will not work due to its different magnetic coating, which requires a higher magnetic coercivity. As the GCR encoding scheme does not use the index hole, the drive is also compatible with hard-sectored disks. The standard CBM DOS format is 170 KB with 35 tracks and 256-byte sectors. It is similar to the format used on the PET 2031, 2040 and 4040 drives, but a minor difference in the number of header bytes makes these drives and the 1541 only read-compatible; disks formatted with one drive cannot be written to by the other. The drives will allow writes to occur, but the inconsistent header size will damage the data in the data portions of each track.

The 4040 drives used Shugart SA-400 mechanisms, which were 35-track units, so the format there was due to physical limitations of the drive mechanism. The 1541 used 40-track mechanisms, but Commodore intentionally limited the CBM DOS format to 35 tracks because of reliability issues with the early units. It was possible via low-level programming to move the drive head to tracks 36–40 and write on them; this was sometimes done by commercial software for copy protection purposes and/or to get additional data on the disk. However, one track is reserved by DOS for directory and file allocation information (the BAM, block availability map). 
And since, for normal files, two bytes of each physical sector are used by DOS as a pointer to the next physical track and sector of the file, only 254 of the 256 bytes of a block are used for file contents. If the disk side was not otherwise prepared with a custom format (e.g. for data disks), 664 blocks would be free after formatting, giving 664 × 254 = 168,656 bytes (or almost 165 KB) for user data.

By using custom formatting and load/save routines (sometimes included in third-party DOSes, see below), all of the mechanically possible 40 tracks can be used. Owing to the drive's non-use of the index hole, it was also possible to make "flippy floppies" by inserting the diskette upside-down and formatting the other side, and it was commonplace and normal for commercial software to be distributed on such disks.

(Tracks 36–42 are non-standard. The quoted bit rate is the raw rate between the read/write head and the signal circuitry, so the actual useful data rate is a factor of 5/4 lower due to GCR encoding.)

The 1541 disk typically has 35 tracks. Track 18 is reserved; the remaining tracks are available for data storage. The header is on 18/0 (track 18, sector 0) along with the BAM, and the directory starts on 18/1 (track 18, sector 1). The file interleave is 10 blocks, while the directory interleave is 3 blocks.

Header contents: the header is similar to other Commodore disk headers, the structural differences being the BAM offset ($04) and size, and the label+ID+type offset ($90).
$00–$01  T/S reference to first directory sector (18/1)
$02      DOS version ('A')
$04–$8F  BAM entries (4 bytes per track: free-sector count + 24 bits for sectors)
$90–$9F  Disk label, $A0 padded
$A2–$A3  Disk ID
$A5–$A6  DOS type ('2A')

Uses
Early copy protection schemes deliberately introduced read errors on the disk, the software refusing to load unless the correct error message was returned. The general idea was that simple disk-copy programs are incapable of copying the errors. 
When one of these errors is encountered, the disk drive (like many floppy disk drives) will make one or more reread attempts after first resetting the head to track zero. Few of these schemes had much deterrent effect, as various software companies soon released "nibbler" utilities that enabled protected disks to be copied and, in some cases, the protection removed. Commodore copy protection sometimes would fail on specific hardware configurations. Gunship, for example, does not load if a second disk drive or printer is connected to the computer.

See also
Commodore 64
Commodore 64 peripherals
1541 Ultimate

References

Further reading
CBM (1982). VIC-1541 Single Drive Floppy Disk User's Manual. 2nd ed. Commodore Business Machines, Inc. P/N 1540031-02.
Neufeld, Gerald G. (1985). 1541 User's Guide. The Complete Guide to Commodore's 1541 Disk Drive. Second Printing, June 1985. 413 pp. Copyright © 1984 by DATAMOST, Inc. (Brady).
Immers, Richard; Neufeld, Gerald G. (1984). Inside Commodore DOS. The Complete Guide to the 1541 Disk Operating System. DATAMOST, Inc & Reston Publishing Company, Inc. (Prentice-Hall).
Englisch, Lothar; Szczepanowski, Norbert (1984). The Anatomy of the 1541 Disk Drive. Grand Rapids, MI: Abacus Software (translated from the original 1983 German edition, Düsseldorf: Data Becker GmbH).

External links
Disk Preservation Project: internal drive mechanics and copy protection
Undocumented 1541 drive functions from the Project 64 website
RUN Magazine Issue 64
devili.iki.fi: Beyond the 1541, Mass Storage For The 64 And 128, COMPUTE!'s Gazette, issue 32, February 1986 (market overview)
1541 Maintenance Guide from Bitsavers
Freespin, the Commodore 1541 graphical demo running on the floppy drive
The Ultimate Commodore 1541 Disk Drive Talk (video), a recording of a talk in August 2021 at the VCF West 2021

CBM floppy disk drives
Commodore 64
CBM hardware
47477
https://en.wikipedia.org/wiki/Ames%20Research%20Center
Ames Research Center
The Ames Research Center (ARC), also known as NASA Ames, is a major NASA research center at Moffett Federal Airfield in California's Silicon Valley. It was founded in 1939 as the second National Advisory Committee for Aeronautics (NACA) laboratory. That agency was dissolved and its assets and personnel transferred to the newly created National Aeronautics and Space Administration (NASA) on October 1, 1958. NASA Ames is named in honor of Joseph Sweetman Ames, a physicist and one of the founding members of NACA. At last estimate NASA Ames has over US$3 billion in capital equipment, 2,300 research personnel and a US$860 million annual budget.

Ames was founded to conduct wind-tunnel research on the aerodynamics of propeller-driven aircraft; however, its role has expanded to encompass spaceflight and information technology. Ames plays a role in many NASA missions. It provides leadership in astrobiology; small satellites; robotic lunar exploration; the search for habitable planets; supercomputing; intelligent/adaptive systems; advanced thermal protection; and airborne astronomy. Ames also develops tools for a safer, more efficient national airspace. The center's current director is Eugene Tu. The site is mission center for several key missions (Kepler, the Stratospheric Observatory for Infrared Astronomy (SOFIA), Interface Region Imaging Spectrograph) and a major contributor to the "new exploration focus" as a participant in the Orion crew exploration vehicle.

Missions
Although Ames is a NASA Research Center, and not a flight center, it has nevertheless been closely involved in a number of astronomy and space missions. The Pioneer program's eight successful space missions from 1965 to 1978 were managed by Charles Hall at Ames, initially aimed at the inner Solar System. By 1972, it supported the bold flyby missions to Jupiter and Saturn with Pioneer 10 and Pioneer 11. 
Those two missions were trail blazers (radiation environment, new moons, gravity-assist flybys) for the planners of the more complex Voyager 1 and Voyager 2 missions, launched five years later. In 1978, the end of the program brought about a return to the inner solar system, with the Pioneer Venus Orbiter and Multiprobe, this time using orbital insertion rather than flyby missions. Lunar Prospector was the third mission selected by NASA for full development and construction as part of the Discovery Program. At a cost of $62.8 million, the 19-month mission was put into a low polar orbit of the Moon, accomplishing mapping of surface composition and possible polar ice deposits, measurements of magnetic and gravity fields, and study of lunar outgassing events. Based on Lunar Prospector Neutron Spectrometer (NS) data, mission scientists have determined that there is indeed water ice in the polar craters of the Moon. The mission ended July 31, 1999, when the orbiter was guided to an impact into a crater near the lunar south pole in an (unsuccessful) attempt to analyze lunar polar water by vaporizing it to allow spectroscopic characterization from Earth telescopes. The 11-pound (5 kg) GeneSat-1, carrying bacteria inside a miniature laboratory, was launched on December 16, 2006. The very small NASA satellite has proven that scientists can quickly design and launch a new class of inexpensive spacecraft—and conduct significant science. The Lunar Crater Observation and Sensing Satellite (LCROSS) mission to look for water on the Moon was a 'secondary payload spacecraft.' LCROSS began its trip to the Moon on the same rocket as the Lunar Reconnaissance Orbiter (LRO), which continues to conduct a different lunar task. It launched in April 2009 on an Atlas V rocket from Kennedy Space Center, Florida. The Kepler mission was NASA's first mission capable of finding Earth-size and smaller planets. 
The Kepler mission monitored the brightness of stars to find planets that pass in front of them during the planets' orbits. During such passes, or 'transits', the planets slightly decrease the star's brightness.

The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a joint venture of the U.S. and German aerospace agencies, NASA and the German Aerospace Center (DLR), to make an infrared telescope platform that can fly at altitudes high enough to be in the infrared-transparent regime above the water vapor in the Earth's atmosphere. The aircraft is supplied by the U.S., and the infrared telescope by Germany. Modifications of the Boeing 747SP airframe to accommodate the telescope, mission-unique equipment and large external door were made by L-3 Communications Integrated Systems of Waco, Texas.

The Interface Region Imaging Spectrograph mission is a partnership with the Lockheed Martin Solar and Astrophysics Laboratory to understand the processes at the boundary between the Sun's chromosphere and corona. This mission is sponsored by the NASA Small Explorer program.

The Lunar Atmosphere and Dust Environment Explorer (LADEE) mission was developed by NASA Ames. It successfully launched to the Moon on September 6, 2013.

In addition, Ames has played a support role in a number of missions, most notably the Mars Pathfinder and Mars Exploration Rover missions, where the Ames Intelligent Robotics Laboratory played a key role. NASA Ames was a partner on Mars Phoenix, a Mars Scout Program mission that sent a high-latitude lander to Mars, which deployed a robotic arm to dig trenches up to 1.6 feet (one half meter) into the layers of water ice and analyze the soil composition. Ames is also a partner on the Mars Science Laboratory and its Curiosity rover, a next-generation Mars rover exploring for signs of organics and complex molecules. 
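The brightness dip that Kepler's transit method relies on is tiny. The fractional transit depth is (Rp/Rs)², a standard photometry relation assumed here rather than taken from the article; as an illustration for an Earth-size planet crossing a Sun-like star:

```python
# Fractional transit depth = (planet radius / star radius)^2
r_earth_km = 6371.0       # mean Earth radius
r_sun_km = 695_700.0      # nominal solar radius

depth = (r_earth_km / r_sun_km) ** 2
print(f"{depth * 1e6:.0f} ppm")   # roughly an 84 ppm dip in the star's brightness
```

Detecting a signal this small against stellar noise is what drove Kepler's requirement for ultra-precise, continuous photometry of the same field of stars.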
Air traffic control automation research
The Aviation Systems Division conducts research and development in two primary areas: air traffic management and high-fidelity flight simulation. For air traffic management, researchers are creating and testing concepts to allow for up to three times today's level of aircraft in the national airspace. Automation and its attendant safety consequences are key foundations of the concept development. Historically, the division has developed products that have been implemented for the flying public, such as the Traffic Management Adviser, which is being deployed nationwide.

For high-fidelity flight simulation, the division operates the world's largest flight simulator (the Vertical Motion Simulator), a Level-D 747-400 simulator, and a panoramic air traffic control tower simulator. These simulators have been used for a variety of purposes, including continued training for Space Shuttle pilots, development of future spacecraft handling qualities, helicopter control system testing, Joint Strike Fighter evaluations, and accident investigations. Personnel in the division have a variety of technical backgrounds, including guidance and control, flight mechanics, flight simulation, and computer science. Customers outside NASA have included the FAA, DOD, DHS, DOT, NTSB, Lockheed Martin, and Boeing. The center's flight simulation and guidance laboratory was listed on the National Register of Historic Places in 2017.

Information technology
Ames is the home of NASA's large research and development divisions in Advanced Supercomputing, Human Factors, and Artificial Intelligence (Intelligent Systems). These R&D organizations support NASA's exploration efforts, as well as the continued operations of the International Space Station, and the space science and aeronautics work across NASA. The center also runs and maintains the E root name server of the Domain Name System (DNS). 
The Intelligent Systems Division is NASA's leading R&D division developing advanced intelligent software and systems for all of NASA's Mission Directorates. It provides software expertise for aeronautics, space science missions, the International Space Station, and the Crewed Exploration Vehicle (CEV). The first AI in space (Deep Space 1) was developed by Code TI, as was the MAPGEN software that plans the daily activities for the Mars Exploration Rovers; the same core reasoner is used in Ensemble to operate the Phoenix lander, and in the planning system for the International Space Station's solar arrays. Integrated System Health Management for the International Space Station's control moment gyroscopes, collaborative systems with semantic search tools, and robust software engineering round out the scope of Code TI's work. The Human Systems Integration Division "advances human-centered design and operations of complex aerospace systems through analysis, experimentation, and modeling of human performance and human-automation interaction to make dramatic improvements in safety, efficiency and mission success". For decades, the Human Systems Integration Division has been on the leading edge of human-centered aerospace research. The Division is home to over 100 researchers, contractors and administrative staff. The NASA Advanced Supercomputing Division at Ames operates several of the agency's most powerful supercomputers, including the petaflop-scale Pleiades, Aitken, and Electra systems. Originally called the Numerical Aerodynamic Simulation Division, the facility has housed more than 40 production and test supercomputers since its construction in 1987, and has served as a leader in high-performance computing, developing technology used across the industry, including the NAS Parallel Benchmarks and the Portable Batch System (PBS) job scheduling software. 
In September 2009, Ames launched NEBULA, a fast and powerful cloud computing platform built to handle NASA's massive data sets while complying with security requirements. This innovative pilot uses open-source components, complies with FISMA, and can scale to government-sized demands while being extremely energy efficient. In July 2010, NASA CTO Chris C. Kemp open-sourced Nova, the technology behind the NEBULA project, in collaboration with Rackspace, launching OpenStack. OpenStack has subsequently become one of the largest and fastest-growing open source projects in the history of computing, and has been included in most major distributions of Linux, including those from Red Hat, Oracle, HP, SUSE, and Canonical. Image processing NASA Ames was one of the first locations to conduct research on image processing of satellite-platform aerial photography. Some of the pioneering techniques of contrast enhancement using Fourier analysis were developed at Ames in conjunction with researchers at ESL Inc. Wind tunnels The NASA Ames Research Center wind tunnels are known not only for their immense size, but also for their diverse characteristics that enable various kinds of scientific and engineering research. ARC Unitary Plan Wind Tunnel The Unitary Plan Wind Tunnel (UPWT) was completed in 1956 at a cost of $27 million under the Unitary Plan Act of 1949. Since its completion, the UPWT facility has been the most heavily used NASA wind tunnel. Every major commercial transport and almost every military jet built in the United States over the last 40 years has been tested in this facility. Mercury, Gemini, and Apollo spacecraft, as well as Space Shuttle models, were also tested in this tunnel complex. National Full-Scale Aerodynamics Complex (NFAC) Ames Research Center also houses the world's largest wind tunnel, part of the National Full-Scale Aerodynamics Complex (NFAC): it is large enough to test full-sized planes, rather than scale models. 
The complex of wind tunnels was listed on the National Register in 2017. The 40 by 80 foot wind tunnel circuit was originally constructed in the 1940s and is now capable of providing test velocities up to . It is used to support an active research program in aerodynamics, dynamics, model noise, and full-scale aircraft and their components. The aerodynamic characteristics of new configurations are investigated with an emphasis on estimating the accuracy of computational methods. The tunnel is also used to investigate the aeromechanical stability boundaries of advanced rotorcraft and rotor-fuselage interactions. Stability and control derivatives are also determined, including the static and dynamic characteristics of new aircraft configurations. The acoustic characteristics of most of the full-scale vehicles are also determined, as well as acoustic research aimed at discovering and reducing aerodynamic sources of noise. In addition to the normal data gathering methods (e.g., balance system, pressure measuring transducers, and temperature sensing thermocouples), state-of-the-art, non-intrusive instrumentation (e.g., laser velocimeters and shadowgraphs) are available to help determine flow direction and velocity in and around the lifting surfaces of aircraft. The 40 by 80 Foot Wind Tunnel is primarily used for determining the low- and medium-speed aerodynamic characteristics of high-performance aircraft, rotorcraft, and fixed wing, powered-lift V/STOL aircraft. The 80 by 120 Foot Wind Tunnel is the world's largest wind tunnel test section. This open circuit leg was added and a new fan drive system was installed in the 1980s. It is currently capable of air speeds up to . This section is used in similar ways to the 40 by 80 foot section, but it is capable of testing larger aircraft, albeit at slower speeds. 
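The balance-system force measurements mentioned above are conventionally reduced to dimensionless lift and drag coefficients so results transfer between models and flight. The sketch below shows that standard nondimensionalization (C = F / (q·S), with dynamic pressure q = ½ρV²); all numbers are illustrative assumptions, not Ames data.

```python
# Reducing balance-system force measurements to dimensionless
# coefficients -- the standard wind-tunnel reduction, with assumed
# illustrative values (not NASA-specific software or data).

def dynamic_pressure(rho: float, v: float) -> float:
    """q = 1/2 * rho * V^2 in Pa, for rho in kg/m^3 and V in m/s."""
    return 0.5 * rho * v ** 2

def force_coefficient(force_n: float, q_pa: float, ref_area_m2: float) -> float:
    """C = F / (q * S): lift or drag coefficient from a measured force."""
    return force_n / (q_pa * ref_area_m2)

rho = 1.225          # sea-level air density, kg/m^3
v = 50.0             # test-section speed, m/s (assumed)
q = dynamic_pressure(rho, v)              # 1531.25 Pa
lift = 12_000.0      # balance-measured lift, N (assumed)
area = 10.0          # model reference area, m^2 (assumed)
print(f"C_L = {force_coefficient(lift, q, area):.3f}")  # 0.784
```

Because the coefficient is independent of tunnel speed and model scale, the same value can be compared directly against computational predictions, which is how the "accuracy of computational methods" noted above is assessed.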
Some of the test programs that have come through the 80 by 120 Foot include: F-18 High Angle of Attack Vehicle, DARPA/Lockheed Common Affordable Lightweight Fighter, XV-15 Tilt Rotor, and Advanced Recovery System parafoil. The 80 by 120 foot test section is capable of testing a full-size Boeing 737. Although decommissioned by NASA in 2003, the NFAC is now being operated by the United States Air Force as a satellite facility of the Arnold Engineering Development Complex (AEDC). Arc Jet Complex The Ames Arc Jet Complex is an advanced thermophysics facility where sustained hypersonic and hyperthermal testing of vehicle thermal protection systems takes place under a variety of simulated flight and re-entry conditions. Of its seven available test bays, four currently contain arc jet units of differing configurations, serviced by common facility support equipment. These are the Aerodynamic Heating Facility (AHF), the Turbulent Flow Duct (TFD), the Panel Test Facility (PTF), and the Interaction Heating Facility (IHF). The support equipment includes two D.C. power supplies, a steam-ejector-driven vacuum system, a water-cooling system, high-pressure gas systems, a data acquisition system, and other auxiliary systems. The magnitude and capacity of these systems make the Ames Arc Jet Complex unique. The largest power supply can deliver 75 megawatts (MW) for a 30-minute duration or 150 MW for a 15-second duration. This power capacity, in combination with a high-volume five-stage steam ejector vacuum-pumping system, enables facility operations to match high-altitude atmospheric flight conditions with samples of relatively large size. The Thermo-Physics Facilities Branch operates four arc jet facilities. The Interaction Heating Facility (IHF), with an available power of over 60 MW, is one of the highest-power arc jets available. 
It is a very flexible facility, capable of long run times of up to one hour, and able to test large samples in both a stagnation and flat plate configuration. The Panel Test Facility (PTF) uses a unique semielliptic nozzle for testing panel sections. Powered by a 20-MW arc heater, the PTF can perform tests on samples for up to 20 minutes. The Turbulent Flow Duct provides supersonic, turbulent high temperature air flows over flat surfaces. The TFD is powered by a 20-MW Hüls arc heater and can test samples in size. The Aerodynamic Heating Facility (AHF) has similar characteristics to the IHF arc heater, offering a wide range of operating conditions, sample sizes and extended test times. A cold-air-mixing plenum allows for simulations of ascent or high-speed flight conditions. Catalycity studies using air or nitrogen can be performed in this flexible rig. A 5-arm model support system allows the user to maximize testing efficiency. The AHF can be configured with either a Hüls or segmented arc heater, up to 20-MW. 1 MW is enough power to supply 750 homes. The Arc Jet Complex was listed on the National Register in 2017. Range complex Ames Vertical Gun Range The Ames Vertical Gun Range (AVGR) was designed to conduct scientific studies of lunar impact processes in support of the Apollo missions. In 1979, it was established as a National Facility, funded through the Planetary Geology and Geophysics Program. In 1995, increased scientific needs across various disciplines resulted in joint core funding by three different science programs at NASA Headquarters (Planetary Geology and Geophysics, Exobiology, and Solar System Origins). In addition, the AVGR provides programmatic support for various proposed and ongoing planetary missions (e.g. Stardust, Deep Impact). Using its 0.30 cal light-gas gun and powder gun, the AVGR can launch projectiles to velocities ranging from . 
By varying the gun's angle of elevation with respect to the target vacuum chamber, impact angles from 0° to 90° relative to the gravitational vector are possible. This unique feature is extremely important in the study of crater formation processes. The target chamber is approximately in diameter and height and can accommodate a wide variety of targets and mounting fixtures. It can maintain vacuum levels below , or can be back filled with various gases to simulate different planetary atmospheres. Impact events are typically recorded with high-speed video/film, or Particle Image Velocimetry (PIV). Hypervelocity Free-Flight Range The Hypervelocity Free-Flight (HFF) Range currently comprises two active facilities: the Aerodynamic Facility (HFFAF) and the Gun Development Facility (HFFGDF). The HFFAF is a combined Ballistic Range and Shock-tube Driven Wind Tunnel. Its primary purpose is to examine the aerodynamic characteristics and flow-field structural details of free-flying aeroballistic models. The HFFAF has a test section equipped with 16 shadowgraph-imaging stations. Each station can be used to capture an orthogonal pair of images of a hypervelocity model in flight. These images, combined with the recorded flight time history, can be used to obtain critical aerodynamic parameters such as lift, drag, static and dynamic stability, flow characteristics, and pitching moment coefficients. For very high Mach number (M > 25) simulations, models can be launched into a counter-flowing gas stream generated by the shock tube. The facility can also be configured for hypervelocity impact testing and has an aerothermodynamic capability as well. The HFFAF is currently configured to operate the light-gas gun in support of continuing thermal imaging and transition research for NASA's hypersonics program. The HFFGDF is used for gun performance enhancement studies, and occasional impact testing. 
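The claim above that station images plus the flight time history yield drag can be made concrete. For a model flying at constant drag coefficient, velocity decays exponentially with distance, v(x) = v₀·exp(−x/L) with L = 2m/(ρ·C_D·A), so a linear fit of ln(v) against station position recovers C_D. The sketch below uses synthetic assumed data, not an Ames reduction tool.

```python
# Ballistic-range data reduction sketch: with constant drag coefficient,
#   m * dv/dt = -0.5 * rho * v^2 * Cd * A  =>  v(x) = v0 * exp(-x / L),
#   L = 2*m / (rho * Cd * A),
# so the slope of ln(v) vs x at the imaging stations gives -1/L, hence Cd.
# All values below are synthetic assumptions for illustration.
import math

m, rho, area = 0.05, 1.2, 1e-4        # model mass kg, air kg/m^3, frontal area m^2
cd_true = 0.9
L = 2 * m / (rho * cd_true * area)    # velocity decay length, m

xs = [i * 2.0 for i in range(16)]     # 16 stations, 2 m apart (assumed spacing)
vs = [5000.0 * math.exp(-x / L) for x in xs]   # "measured" velocities, m/s

# Least-squares slope of ln(v) vs x.
n = len(xs)
mx = sum(xs) / n
my = sum(math.log(v) for v in vs) / n
slope = (sum((x - mx) * (math.log(v) - my) for x, v in zip(xs, vs))
         / sum((x - mx) ** 2 for x in xs))
cd_fit = -slope * 2 * m / (rho * area)
print(f"recovered Cd = {cd_fit:.3f}")  # matches the assumed 0.9
```

Real range reductions fit position-time data for all six degrees of freedom to extract lift, stability and pitching-moment coefficients as well; the drag-only fit here is the simplest instance of the idea.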
The Facility uses the same arsenal of light-gas and powder guns as the HFFAF to accelerate particles that range in size from diameter to velocities ranging from . Most of the research effort to date has centered on Earth atmosphere entry configurations (Mercury, Gemini, Apollo, and Shuttle), planetary entry designs (Viking, Pioneer Venus, Galileo and MSL), and aerobraking (AFE) configurations. The facility has also been used for scramjet propulsion studies (National Aerospace Plane (NASP)) and meteoroid/orbital debris impact studies (Space Station and RLV). In 2004, the facility was utilized for foam-debris dynamics testing in support of the Return To Flight effort. As of March 2007, the GDF has been reconfigured to operate a cold gas gun for subsonic CEV capsule aerodynamics. Electric Arc Shock Tube The Electric Arc Shock Tube (EAST) Facility is used to investigate the effects of radiation and ionization that occur during very high velocity atmospheric entries. In addition, the EAST can also provide air-blast simulations requiring the strongest possible shock generation in air at an initial pressure loading of or greater. The facility has three separate driver configurations, to meet a range of test requirements: the driver can be connected to a diaphragm station of either a or a shock tube, and the high-pressure shock tube can also drive a shock tunnel. Energy for the drivers is supplied by a 1.25-MJ-capacitor storage system. United States Geological Survey (USGS) In September 2016, the United States Geological Survey (USGS) announced plans to relocate its West Coast science center from nearby Menlo Park to the Ames Research Center at Moffett Field. The relocation is expected to take five years and will begin in 2017 with 175 of the USGS employees moving to Moffett. The relocation is designed to save money on the $7.5 million annual rent the USGS pays for its Menlo Park campus. 
The land in Menlo Park is owned by the General Services Administration, which is required by federal law to charge market-rate rent. Education NASA Ames Exploration Center The NASA Ames Exploration Center is a science museum and education center for NASA. There are displays and interactive exhibits about NASA technology, missions and space exploration. A Moon rock, meteorite, and other geologic samples are on display. The theater shows movies with footage from NASA's explorations of Mars and the planets, and about the contributions of the scientists at Ames. Robotics Alliance Project In 1999, Mark León developed NASA's Robotics Education Project — now called the Robotics Alliance Project — under his mentor Dave Lavery, which has reached over 100,000 students nationwide using FIRST robotics and BOTBALL robotics competitions. The Project's FIRST branch originally comprised FRC Team 254: "The Cheesy Poofs", an all-male team from Bellarmine High School in San Jose, California. In 2006, Team 1868: "The Space Cookies", an all-female team, was founded in collaboration with the Girl Scouts. In 2012, Team 971: "Spartan Robotics" of Mountain View High School joined the Project, though the team continues to operate at their school. All three teams are highly decorated. All three have won Regional competitions, two have won the FIRST Championship, two have won the Regional Chairman's Award, and one is a Hall of Fame team. The three teams are collectively referred to as "House teams". The mission of the project is "To create a human, technical, and programmatic resource of robotics capabilities to enable the implementation of future robotic space exploration missions." Recent events Although the Bush administration slightly increased funding for NASA overall, the substantial realignment in research priorities that followed the announcement of the Vision for Space Exploration in 2004 led to a significant number of layoffs at Ames. 
On October 22, 2006, NASA opened the Carl Sagan Center for the Study of Life in the Cosmos. The center continued work that Sagan undertook, including the Search for Extraterrestrial Intelligence. In 2008, the Lunar Orbiter Image Recovery Project (LOIRP) was given space in the old McDonald's (the building was renamed McMoons) to digitize data tapes from the five 1966 and 1967 Lunar Orbiter spacecraft that were sent to the Moon. Also in 2008, it was announced that former Ames director Henry McDonald was a 60th Anniversary Class inductee of the Ames Hall of Fame for providing, "...exceptional leadership and keen technical insight to NASA Ames as the Center re-invented itself in the late 1990s." In 2010, scientists at the Fluid Mechanics Laboratory at Ames studied the aerodynamics of the Jabulani World Cup soccer ball, concluding that it tends to "knuckle under" at speeds of . Aerospace engineer Rabi Mehta attributed this effect to asymmetric flow due to the ball's seam construction. In March 2015, scientists at Ames announced that they had synthesized "...uracil, cytosine, and thymine, all three components of RNA and DNA, non-biologically in a laboratory under conditions found in space." Public-private partnerships The federal government has re-tasked portions of the facility and human resources to support private sector industry, research, and education. HP became the first corporate affiliate of a new Bio-Info-Nano Research and Development Institute (BIN-RDI); a collaborative venture established by the University of California Santa Cruz and NASA, based at Ames. The Bio|Info|Nano R&D Institute is dedicated to creating scientific breakthroughs by the convergence of biotechnology, information technology, and nanotechnology. Singularity University hosts its leadership and educational program at the facility. 
The Organ Preservation Alliance is also headquartered there; the Alliance is a nonprofit organization that works in partnership with the Methuselah Foundation's New Organ Prize "to catalyze breakthroughs on the remaining obstacles towards the long-term storage of organs" to overcome the drastic unmet medical need for viable organs for transplantation. Kleenspeed Technologies is headquartered there. Google On September 28, 2005, Google and Ames Research Center disclosed details of a long-term research partnership. In addition to pooling engineering talent, Google planned to build a facility on the ARC campus. One of the projects between Ames, Google, and Carnegie Mellon University is the Gigapan Project, a robotic platform for creating, sharing, and annotating terrestrial gigapixel images. The Planetary Content Project seeks to integrate and improve the data that Google uses for its Google Moon and Google Mars projects. On 4 June 2008, Google announced it had leased from NASA, at Moffett Field, for use as office space and employee housing. Construction of the new Google project, which is near Google's Googleplex headquarters, began in 2013 with a target opening date in 2015. It is called "Bay View" as it overlooks San Francisco Bay. In May 2013, Google Inc. announced that it was launching the Quantum Artificial Intelligence Lab, to be hosted by NASA's Ames Research Center. The lab will house a 512-qubit quantum computer from D-Wave Systems, and the USRA (Universities Space Research Association) will invite researchers from around the world to share time on it, with the goal of studying how quantum computing might advance machine learning. Announced on November 10, 2014, Planetary Ventures LLC (a Google subsidiary) will lease the Moffett Federal Airfield from NASA Ames, a site of about 1,000 acres that formerly cost the agency $6.3 million annually in maintenance and operations. 
The lease includes the restoration of the site's historic landmark Hangar One, as well as hangars Two and Three. The lease went into effect in March 2015, and spans 60 years. Living and working at Ames An official NASA ID is required to enter Ames. There are myriad activities in and around the research center for full-time workers and interns alike. There was a fitness trail inside the base, also called a Parcourse trail, but sections of it are now inaccessible due to changes in base layout since it was installed. See also NASA Research Park Pleiades (supercomputer) National Register of Historic Places listings in Santa Clara County, California References Complete books online External links Ames Research Center NASA Ames Exploration Center The Astrophysics and Astrochemistry Laboratory The Orion Door Collection at NASA Ames Research Center NASA Research Park Academic Partners University Affiliated Research Center Bio | Info | Nano R&D Institute NASA GeneLab GeneLab Wikipedia Buildings and structures in Santa Clara County, California Buildings and structures in Mountain View, California Research institutes in the San Francisco Bay Area Research institutes in the United States Space technology research institutes University of California, Santa Cruz Aerospace research institutes Aviation research institutes Government buildings completed in 1939 1939 establishments in California National Register of Historic Places in Santa Clara County, California Wind tunnels NASA research centers
Samsung M900 Moment
The Samsung Moment, known as SPH-M900, is a smartphone manufactured by Samsung that uses the open source Android operating system. Features The phone features a 3.2-inch 16M-color AMOLED capacitive touchscreen and a 3.2-megapixel autofocus camera. Compared to Sprint's version of the HTC Hero, the device offers a left-sliding QWERTY keyboard with a Search key, four-way navigation with arrow keys, a faster processor, and more available user-accessible memory; however, the Moment has a lower-capacity battery and its touchscreen hardware does not offer multi-touch support. An exclusive custom version with an internal MobileTV antenna and external antenna jack was released in 2010 in the Washington, D.C./Delaware metro area for a public field test of the Mobile ATSC standard. The base of the Moment's Android 1.5 interface is identical to the unmodified Android install in T-Mobile's G1 phone; built-in software includes mobile Google services such as Google Search, Gmail, YouTube, Google Calendar, and Google Talk. Building from that, Samsung added Moxier Mail (POP/IMAP support, Microsoft Exchange access) and Nuance VoiceControl, while Sprint installed NFL Mobile, NASCAR Sprint Cup, Sprint Navigation, and Sprint TV. In May 2010, Sprint made an update to Android 2.1 (Eclair) available on its website, then announced in June via Twitter that the Moment and HTC Hero would not be upgraded to Android 2.2 (Froyo). A third-party upgrade to Android 2.2.2 was later released in February 2011 by enthusiasts at The Haxung Development Group. Issues As of the latest Android 2.1 build DJ07, the Samsung Moment, the Samsung Intercept, and the Samsung Transform (all based on the same SoC) do not include support for OpenGL ES 1.1 or 2.0 (in Android 2.2) despite hardware support for it. A community-led complete rewrite of the g3d drivers is in development. 
Samsung Moment and Intercept users have also reported data and airplane-mode lock-ups over the CDMA network while using various browsers, streaming software such as YouTube and Pandora, and even at random, for no apparent reason. The data/airplane-mode lock-up also prevents making voice calls and forces the user to restart the phone to have connections restored. Enabling the Wi-Fi radio while on the CDMA network makes the issue more frequent. GPS has also been a "hit or miss" feature on the Samsung Moment: some devices have perfectly working GPS, others have semi-working GPS, and yet others have completely dead GPS. Controversy Despite various software updates from Sprint, the previously mentioned issues have remained unfixed on most handsets, leading some customers to believe that the issues are due to defective hardware rather than software. Many customers have pushed to get a replacement phone of equal value from Sprint, but the only phones officially offered as replacements (and even then, only if at least three exchanges have occurred within six months) are the Samsung Intercept and HTC Hero, both of which are considered downgrades from the Moment. Some customers also claim that they were often lied to when confronting Sprint on the issue, often being told that the data lock-up and GPS bugs are not known issues, even though the official Samsung Moment update log on Sprint's website lists three different updates meant to address them. Customers have also accused Sprint and Samsung of violating FCC regulations, as the data lock-up prevents both outgoing and incoming calls, including 911, unless the phone is restarted. It has also been speculated that this is the reason the Samsung Moment was silently discontinued, despite Sprint's official statement that the release of the Samsung Intercept and Samsung Transform was the reason for the Moment's discontinuation. 
As support for the Samsung Moment has officially ended, it is unlikely that the issues will be fixed except by a third party. Specifications Detailed technical specifications of the Samsung Moment SPH-M900:
Processor: Samsung S3C6410 at 800 MHz; the SetCPU app can change the speed to 66/133/266/400/800 MHz
Memory: 256 MB of RAM and 512 MB of ROM (150 MB /system, 223 MB /data, 116 MB /cache); supports microSDHC, with official capacity up to 16 GB and a 2 GB Class 2 card included from the factory
Connectivity: IEEE 802.11b/g, Bluetooth 2.1 (HFP and A2DP), and micro-USB 2.0 high-speed
Display: capacitive AMOLED touchscreen, 3.2 inches, 320×480
Text input: a left-slide-out keyboard as well as landscape and portrait on-screen keyboards
Camera: 3.2-megapixel with LED flash; video recording (H.263 at 352×288)
Audio: microphone, speakerphone, 3.5 mm headphone jack (compatible with standard stereo headphones, but also containing a fourth pin with microphone input)
Operating system: ships with Android OS 1.5; an update to Android 2.1 was released on May 14, 2010
Availability The phone is available in the United States. See also Android OS Galaxy Nexus References Sph-M900 Android (operating system) devices Mobile phones introduced in 2009 Smartphones Sprint Corporation
MOSFET applications
The metal–oxide–semiconductor field-effect transistor (MOSFET, MOS-FET, or MOS FET), also known as the metal–oxide–silicon transistor (MOS transistor, or MOS), is a type of insulated-gate field-effect transistor (IGFET) that is fabricated by the controlled oxidation of a semiconductor, typically silicon. The voltage of the covered gate determines the electrical conductivity of the device; this ability to change conductivity with the amount of applied voltage can be used for amplifying or switching electronic signals. The MOSFET was invented by Egyptian engineer Mohamed M. Atalla and Korean engineer Dawon Kahng at Bell Labs in 1959. It is the basic building block of modern electronics, and the most frequently manufactured device in history, with an estimated total of 13 sextillion (1.3 × 10²²) MOSFETs manufactured between 1960 and 2018. The MOSFET is the most common semiconductor device in digital and analog circuits, and the most common power device. It was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses, revolutionizing the electronics industry and the world economy, having been central to the computer revolution, digital revolution, information revolution, silicon age and information age. MOSFET scaling and miniaturization have driven the rapid exponential growth of electronic semiconductor technology since the 1960s, enabling high-density integrated circuits (ICs) such as memory chips and microprocessors. The MOSFET is considered to be possibly the most important invention in electronics, as the "workhorse" of the electronics industry and the "base technology" of the late 20th to early 21st centuries, having revolutionized modern culture, economy, society and daily life. The MOSFET is by far the most widely used transistor in both digital circuits and analog circuits, and it is the backbone of modern electronics. 
It is the basis for numerous modern technologies, and is commonly used for a wide range of applications. According to Jean-Pierre Colinge, numerous modern technologies would not exist without the MOSFET, such as the modern computer industry, digital telecommunication systems, video games, pocket calculators, and digital wristwatches. MOSFETs in integrated circuits are the primary elements of computer processors, semiconductor memory, image sensors, and most other types of integrated circuits. Discrete MOSFET devices are widely used in applications such as switch-mode power supplies, variable-frequency drives and other power electronics applications where each device may be switching thousands of watts. Radio-frequency amplifiers up to the UHF spectrum use MOSFET transistors as analog signal and power amplifiers. Radio systems also use MOSFETs as oscillators, or mixers to convert frequencies. MOSFET devices are also applied in audio-frequency power amplifiers for public address systems, sound reinforcement, and home and automobile sound systems. History The MOSFET was invented by Egyptian engineer Mohamed M. Atalla and Korean engineer Dawon Kahng at Bell Telephone Laboratories in 1959. They fabricated the device in November 1959, and presented it as the "silicon–silicon dioxide field induced surface device" in early 1960, at the Solid-State Device Conference held at Carnegie Mellon University. In the early 1960s, research programs on MOS technology were established by Fairchild Semiconductor, RCA Laboratories, General Microelectronics (led by former Fairchild engineer Frank Wanlass) and IBM. In 1963, the first formal public announcement of the MOSFET's existence as a potential technology was made. It was then first commercialized by General Microelectronics (GMe) in May 1964, followed by Fairchild in October 1964. 
GMe's first MOS contract was with NASA, which used MOSFETs for spacecraft and satellites in the Interplanetary Monitoring Platform (IMP) program and Explorers Program. The early MOSFETs commercialized by GMe and Fairchild were p-channel (PMOS) devices for logic and switching applications. By the mid-1960s, RCA was using MOSFETs in its consumer products, including FM radios, televisions and amplifiers. MOS revolution The development of the MOSFET led to a revolution in electronics technology, called the MOS revolution or MOSFET revolution. The MOSFET was the first truly compact transistor that could be miniaturised and mass-produced for a wide range of uses. With its rapidly scaling miniaturisation, MOS technology became the focus of RCA, Fairchild, Intel and other semiconductor companies in the 1960s, fuelling the technological and economic growth of the early semiconductor industry based around California (including what later became known as Silicon Valley) as well as Japan. The impact of the MOSFET became commercially significant from the late 1960s onwards. This led to a revolution in the electronics industry, which has since impacted daily life in almost every way, with MOS technology leading to revolutionary changes in technology, economy, culture and thinking. The invention of the MOSFET has been cited as the birth of modern electronics. The MOSFET was central to the electronics revolution, microelectronics revolution, silicon revolution, and microcomputer revolution. Importance The MOSFET forms the basis of modern electronics, and is the basic element in most modern electronic equipment. It is the most common transistor in electronics, and the most widely used semiconductor device in the world. It has been described as the "workhorse of the electronics industry" and "the base technology" of the late 20th to early 21st centuries. 
MOSFET scaling and miniaturization (see List of semiconductor scale examples) have been the primary factors behind the rapid exponential growth of electronic semiconductor technology since the 1960s, as the rapid miniaturization of MOSFETs has been largely responsible for the increasing transistor density, increasing performance and decreasing power consumption of integrated circuit chips and electronic devices since the 1960s. MOSFETs are capable of high scalability (Moore's law and Dennard scaling), with increasing miniaturization, and can be easily scaled down to smaller dimensions. They consume significantly less power, and allow much higher density, than bipolar transistors. MOSFETs are thus much smaller than BJTs, about 20 times smaller by the early 1990s. MOSFETs also have faster switching speed, with rapid on–off electronic switching that makes them ideal for generating pulse trains, the basis for digital signals, in contrast to BJTs, which more slowly generate analog signals resembling sine waves. MOSFETs are also cheaper and have relatively simple processing steps, resulting in high manufacturing yield. MOSFETs thus enable large-scale integration (LSI), and are ideal for digital circuits, as well as linear analog circuits. The MOSFET has been called the most important transistor, the most important device in the electronics industry, the most important device in the computing industry, one of the most important developments in semiconductor technology, and possibly the most important invention in electronics. The MOSFET has been the fundamental building block of modern digital electronics, during the digital revolution, information revolution, information age, and silicon age. MOSFETs have been the driving force behind the computer revolution, and the technologies enabled by it. 
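The Dennard scaling mentioned above follows simple textbook rules: shrinking every dimension and the supply voltage by a factor 1/k makes circuits faster and denser while keeping power density constant. The sketch below tabulates the standard constant-field relations for k = 2; the numbers are textbook figures, not taken from this article.

```python
# Classic constant-field (Dennard) scaling: shrink every dimension and
# the supply voltage by 1/k. V and I both scale as 1/k, so power per
# gate P = V*I scales as 1/k^2, while density rises as k^2 --
# so power per unit area stays constant. Shown for k = 2.

def dennard(k: float) -> dict:
    return {
        "dimension":      1 / k,          # L, W, t_ox all shrink
        "voltage":        1 / k,
        "gate_delay":     1 / k,          # circuits get faster
        "power_per_gate": 1 / k ** 2,
        "density":        k ** 2,         # transistors per unit area
        "power_density":  (1 / k ** 2) * k ** 2,  # constant (= 1)
    }

for name, factor in dennard(2.0).items():
    print(f"{name:>14}: x{factor:g}")
```

The constant power density is what allowed each process generation to pack in more, faster transistors without melting the chip, until voltage scaling stalled in the mid-2000s.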
The rapid progress of the electronics industry during the late 20th to early 21st centuries was achieved by rapid MOSFET scaling (Dennard scaling and Moore's law), down to the level of nanoelectronics in the early 21st century. The MOSFET revolutionized the world during the information age, with its high density enabling a computer to exist on a few small IC chips rather than filling a room, and later making possible digital communications technology such as smartphones. The MOSFET is the most widely manufactured device in history. The MOSFET generates annual sales of as of 2015. Between 1960 and 2018, an estimated total of 13 sextillion MOS transistors have been manufactured, accounting for at least 99.9% of all transistors. Digital integrated circuits such as microprocessors and memory devices contain thousands to billions of integrated MOSFETs on each device, providing the basic switching functions required to implement logic gates and data storage. There are also memory devices which contain at least a trillion MOS transistors, such as a 256 GB microSD memory card, larger than the number of stars in the Milky Way galaxy. As of 2010, the operating principles of modern MOSFETs have remained largely the same as the original MOSFET first demonstrated by Mohamed Atalla and Dawon Kahng in 1960. The US Patent and Trademark Office calls the MOSFET a "groundbreaking invention that transformed life and culture around the world" and the Computer History Museum credits it with "irrevocably changing the human experience." The MOSFET was also the basis for Nobel Prize winning breakthroughs such as the quantum Hall effect and the charge-coupled device (CCD), yet there was never any Nobel Prize given for the MOSFET itself. 
In 2018, the Royal Swedish Academy of Sciences, which awards the science Nobel Prizes, acknowledged that the invention of the MOSFET by Atalla and Kahng was one of the most important inventions in microelectronics and in information and communications technology (ICT). The MOSFET is also included on the list of IEEE milestones in electronics, and its inventors Mohamed Atalla and Dawon Kahng entered the National Inventors Hall of Fame in 2009. MOS integrated circuit (MOS IC) The MOSFET is the most widely used type of transistor and the most critical device component in integrated circuit (IC) chips. The monolithic integrated circuit chip was enabled by the surface passivation process, which electrically stabilized silicon surfaces via thermal oxidation, making it possible to fabricate monolithic integrated circuit chips using silicon. The surface passivation process was developed by Mohamed M. Atalla at Bell Labs in 1957. This was the basis for the planar process, developed by Jean Hoerni at Fairchild Semiconductor in early 1959, which was critical to the invention of the monolithic integrated circuit chip by Robert Noyce later in 1959. The same year, Atalla used his surface passivation process to invent the MOSFET with Dawon Kahng at Bell Labs. This was followed by the development of clean rooms to reduce contamination to levels never before thought necessary, and coincided with the development of photolithography which, along with surface passivation and the planar process, allowed circuits to be made in a few steps. Mohamed Atalla realised that the main advantage of a MOS transistor was its ease of fabrication, particularly suiting it for use in the recently invented integrated circuits. In contrast to bipolar transistors, which required a number of steps for the p–n junction isolation of transistors on a chip, MOSFETs required no such steps but could be easily isolated from each other. Its advantage for integrated circuits was reiterated by Dawon Kahng in 1961. 
The Si–SiO2 system possessed the technical attractions of low cost of production (on a per circuit basis) and ease of integration. These two factors, along with its rapidly scaling miniaturization and low energy consumption, led to the MOSFET becoming the most widely used type of transistor in IC chips. The earliest experimental MOS IC to be demonstrated was a 16-transistor chip built by Fred Heiman and Steven Hofstein at RCA in 1962. General Microelectronics later introduced the first commercial MOS integrated circuits in 1964, consisting of 120 p-channel transistors. It was a 20-bit shift register, developed by Robert Norman and Frank Wanlass. In 1967, Bell Labs researchers Robert Kerwin, Donald Klein and John Sarace developed the self-aligned gate (silicon-gate) MOS transistor, which Fairchild Semiconductor researchers Federico Faggin and Tom Klein used to develop the first silicon-gate MOS IC. MOS IC chips There are various different types of MOS IC chips, which include the following. MOS large-scale integration (MOS LSI) With its high scalability, and much lower power consumption and higher density than bipolar junction transistors, the MOSFET made it possible to build high-density IC chips. By 1964, MOS chips had reached higher transistor density and lower manufacturing costs than bipolar chips. MOS chips further increased in complexity at a rate predicted by Moore's law, leading to large-scale integration (LSI) with hundreds of MOSFETs on a chip by the late 1960s. MOS technology enabled the integration of more than 10,000 transistors on a single LSI chip by the early 1970s, before later enabling very large-scale integration (VLSI). Microprocessors The MOSFET is the basis of every microprocessor, and was responsible for the invention of the microprocessor. The origins of both the microprocessor and the microcontroller can be traced back to the invention and development of MOS technology. 
The application of MOS LSI chips to computing was the basis for the first microprocessors, as engineers began recognizing that a complete computer processor could be contained on a single MOS LSI chip. The earliest microprocessors were all MOS chips, built with MOS LSI circuits. The first multi-chip microprocessors, the Four-Phase Systems AL1 in 1969 and the Garrett AiResearch MP944 in 1970, were developed with multiple MOS LSI chips. The first commercial single-chip microprocessor, the Intel 4004, was developed by Federico Faggin, using his silicon-gate MOS IC technology, with Intel engineers Marcian Hoff and Stan Mazor, and Busicom engineer Masatoshi Shima. With the arrival of CMOS microprocessors in 1975, the term "MOS microprocessors" began to refer to chips fabricated entirely from PMOS logic or entirely from NMOS logic, contrasted with "CMOS microprocessors" and "bipolar bit-slice processors". CMOS circuits Complementary metal–oxide–semiconductor (CMOS) logic was developed by Chih-Tang Sah and Frank Wanlass at Fairchild Semiconductor in 1963. CMOS had lower power consumption, but was initially slower than NMOS, which was more widely used for computers in the 1970s. In 1978, Hitachi introduced the twin-well CMOS process, which allowed CMOS to match the performance of NMOS with less power consumption. The twin-well CMOS process eventually overtook NMOS as the most common semiconductor manufacturing process for computers in the 1980s. By the 1970s–1980s, CMOS logic consumed many times less power than NMOS logic, and about 100,000 times less power than bipolar transistor-transistor logic (TTL). Digital The growth of digital technologies like the microprocessor has provided the motivation to advance MOSFET technology faster than any other type of silicon-based transistor. 
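The low static power of CMOS described above follows from its complementary structure: the p-channel and n-channel devices share a gate input, so exactly one conducts in each static state and there is no direct supply-to-ground path. A minimal switch-level sketch (an abstraction, not an electrical simulation):

```python
def cmos_inverter(vin_high):
    """Switch-level model of a CMOS inverter.

    Both transistors are driven by the same input, so in each static
    state exactly one of them conducts: no VDD-to-ground path exists,
    which is why CMOS static power consumption is near zero.
    """
    pmos_on = not vin_high   # PMOS conducts when its gate is low
    nmos_on = vin_high       # NMOS conducts when its gate is high
    vout_high = pmos_on      # output pulled to VDD iff the PMOS conducts
    static_path = pmos_on and nmos_on  # simultaneous conduction would draw static current
    return vout_high, static_path

for vin in (False, True):
    vout, shoot_through = cmos_inverter(vin)
    assert shoot_through is False  # no static current in either state
    assert vout is (not vin)       # logical inversion
```

In NMOS logic, by contrast, a pull-up path conducts continuously whenever the output is low, which is the structural reason for the power gap the text describes.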
A big advantage of MOSFETs for digital switching is that the oxide layer between the gate and the channel prevents DC current from flowing through the gate, further reducing power consumption and giving a very large input impedance. The insulating oxide between the gate and channel effectively isolates a MOSFET in one logic stage from earlier and later stages, which allows a single MOSFET output to drive a considerable number of MOSFET inputs. Bipolar transistor-based logic (such as TTL) does not have such a high fanout capacity. This isolation also allows designers to treat logic stages independently, ignoring loading effects to some extent. That extent is defined by the operating frequency: as frequencies increase, the input impedance of the MOSFETs decreases. Analog The MOSFET's advantages in digital circuits do not translate into supremacy in all analog circuits. The two types of circuit draw upon different features of transistor behavior. Digital circuits switch, spending most of their time either fully on or fully off. The transition from one to the other is only of concern with regard to speed and charge required. Analog circuits depend on operation in the transition region, where small changes to the gate–source voltage can modulate the output (drain) current. The JFET and bipolar junction transistor (BJT) are preferred for accurate matching (of adjacent devices in integrated circuits), higher transconductance and certain temperature characteristics which simplify keeping performance predictable as circuit temperature varies. Nevertheless, MOSFETs are widely used in many types of analog circuits because of their own advantages (zero gate current, high and adjustable output impedance and improved robustness vs. BJTs, which can be permanently degraded by even lightly breaking down the emitter-base). The characteristics and performance of many analog circuits can be scaled up or down by changing the sizes (length and width) of the MOSFETs used. 
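The frequency dependence of the gate input impedance noted above comes from the gate behaving as a capacitor: the impedance magnitude is |Z| = 1/(2πfC), near-infinite at DC and falling as frequency rises. The 1 pF gate capacitance below is an illustrative assumption, not a value from the text.

```python
import math

def gate_impedance_ohms(freq_hz, gate_capacitance_f):
    """Magnitude of the capacitive input impedance of a MOSFET gate.

    The insulated gate looks like a capacitor to the driving stage, so
    |Z| = 1 / (2*pi*f*C). This is why the "ignore loading between
    stages" approximation breaks down at high operating frequencies.
    """
    return 1.0 / (2 * math.pi * freq_hz * gate_capacitance_f)

c_gate = 1e-12  # 1 pF, assumed for illustration
print(f"{gate_impedance_ohms(1e3, c_gate):,.0f} ohms at 1 kHz")
print(f"{gate_impedance_ohms(1e9, c_gate):,.1f} ohms at 1 GHz")
```

Across six decades of frequency the input impedance drops by the same factor, from hundreds of megaohms down to hundreds of ohms.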
By comparison, in bipolar transistors the size of the device does not significantly affect its performance. MOSFETs' ideal characteristics regarding gate current (zero) and drain-source offset voltage (zero) also make them nearly ideal switch elements, and also make switched capacitor analog circuits practical. In their linear region, MOSFETs can be used as precision resistors, which can have a much higher controlled resistance than BJTs. In high power circuits, MOSFETs sometimes have the advantage of not suffering from thermal runaway as BJTs do. Also, MOSFETs can be configured to perform as capacitors and gyrator circuits which allow op-amps made from them to appear as inductors, thereby allowing all of the normal analog devices on a chip (except for diodes, which can be made smaller than a MOSFET anyway) to be built entirely out of MOSFETs. This means that complete analog circuits can be made on a silicon chip in a much smaller space and with simpler fabrication techniques. MOSFETs are ideally suited to switch inductive loads because of tolerance to inductive kickback. Some ICs combine analog and digital MOSFET circuitry on a single mixed-signal integrated circuit, making the needed board space even smaller. This creates a need to isolate the analog circuits from the digital circuits on a chip level, leading to the use of isolation rings and silicon on insulator (SOI). Since MOSFETs require more space to handle a given amount of power than a BJT, fabrication processes can incorporate BJTs and MOSFETs into a single device. Mixed-transistor devices are called bi-FETs (bipolar FETs) if they contain just one BJT-FET and BiCMOS (bipolar-CMOS) if they contain complementary BJT-FETs. Such devices have the advantages of both insulated gates and higher current density. RF CMOS In the late 1980s, Asad Abidi pioneered RF CMOS technology, which uses MOS VLSI circuits, while working at UCLA. 
This changed the way in which RF circuits were designed, away from discrete bipolar transistors and towards CMOS integrated circuits. As of 2008, the radio transceivers in all wireless networking devices and modern mobile phones are mass-produced as RF CMOS devices. RF CMOS is also used in nearly all modern Bluetooth and wireless LAN (WLAN) devices. Analog switches MOSFET analog switches use the MOSFET to pass analog signals when on, and as a high impedance when off. Signals flow in both directions across a MOSFET switch. In this application, the drain and source of a MOSFET exchange places depending on the relative voltages of the source/drain electrodes. The source is the more negative side for an N-MOS or the more positive side for a P-MOS. All of these switches are limited in what signals they can pass or stop by their gate–source, gate–drain, and source–drain voltages; exceeding the voltage, current, or power limits will potentially damage the switch. Single-type This analog switch uses a four-terminal simple MOSFET of either P or N type. In the case of an n-type switch, the body is connected to the most negative supply (usually GND) and the gate is used as the switch control. Whenever the gate voltage exceeds the source voltage by at least a threshold voltage, the MOSFET conducts. The higher the voltage, the more the MOSFET can conduct. An N-MOS switch passes all voltages less than Vgate − Vtn. When the switch is conducting, it typically operates in the linear (or ohmic) mode of operation, since the source and drain voltages will typically be nearly equal. In the case of a P-MOS, the body is connected to the most positive voltage, and the gate is brought to a lower potential to turn the switch on. The P-MOS switch passes all voltages higher than Vgate − Vtp (the threshold voltage Vtp is negative in the case of an enhancement-mode P-MOS). 
Dual-type (CMOS) This "complementary" or CMOS type of switch uses one P-MOS and one N-MOS FET to counteract the limitations of the single-type switch. The FETs have their drains and sources connected in parallel, the body of the P-MOS is connected to the high potential (VDD) and the body of the N-MOS is connected to the low potential (gnd). To turn the switch on, the gate of the P-MOS is driven to the low potential and the gate of the N-MOS is driven to the high potential. For voltages between VDD − Vtn and gnd − Vtp, both FETs conduct the signal; for voltages less than gnd − Vtp, the N-MOS conducts alone; and for voltages greater than VDD − Vtn, the P-MOS conducts alone. The voltage limits for this switch are the gate–source, gate–drain and source–drain voltage limits for both FETs. Also, the P-MOS is typically two to three times wider than the N-MOS, so the switch will be balanced for speed in the two directions. Tri-state circuitry sometimes incorporates a CMOS MOSFET switch on its output to provide for a low-ohmic, full-range output when on, and a high-ohmic, mid-level signal when off. MOS memory The advent of the MOSFET enabled the practical use of MOS transistors as memory cell storage elements, a function previously served by magnetic cores in computer memory. The first modern computer memory was introduced in 1965, when John Schmidt at Fairchild Semiconductor designed the first MOS semiconductor memory, a 64-bit MOS SRAM (static random-access memory). SRAM became an alternative to magnetic-core memory, but required six MOS transistors for each bit of data. MOS technology is the basis for DRAM (dynamic random-access memory). In 1966, Dr. Robert H. Dennard at the IBM Thomas J. Watson Research Center was working on MOS memory. 
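The conduction regions of the CMOS transmission gate described above can be sketched at the switch level. The supply and threshold voltages below are illustrative assumptions; real devices transition gradually near the thresholds rather than switching abruptly.

```python
def transmission_gate_conduction(v_signal, vdd, vtn, vtp):
    """Which FETs of an enabled CMOS transmission gate conduct a signal.

    Follows the regions described in the text: vtn is the (positive)
    N-MOS threshold and vtp the (negative) P-MOS threshold. Returns
    (nmos_on, pmos_on) for the given signal voltage.
    """
    nmos_on = v_signal < vdd - vtn   # N-MOS passes voltages below VDD - Vtn
    pmos_on = v_signal > 0.0 - vtp   # P-MOS passes voltages above gnd - Vtp
    return nmos_on, pmos_on

# Illustrative 5 V supply with +/-0.7 V thresholds (assumed values)
vdd, vtn, vtp = 5.0, 0.7, -0.7
assert transmission_gate_conduction(2.5, vdd, vtn, vtp) == (True, True)   # mid-rail: both conduct
assert transmission_gate_conduction(0.2, vdd, vtn, vtp) == (True, False)  # near gnd: N-MOS alone
assert transmission_gate_conduction(4.8, vdd, vtn, vtp) == (False, True)  # near VDD: P-MOS alone
```

Because at least one FET conducts at every signal voltage between the rails, the complementary pair passes the full signal range, which is exactly the limitation of the single-type switch it counteracts.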
While examining the characteristics of MOS technology, he found it was capable of building capacitors, and that storing a charge or no charge on the MOS capacitor could represent the 1 and 0 of a bit, while the MOS transistor could control writing the charge to the capacitor. This led to his development of a single-transistor DRAM memory cell. In 1967, Dennard filed a patent under IBM for a single-transistor DRAM (dynamic random-access memory) memory cell, based on MOS technology. MOS memory enabled higher performance, was cheaper, and consumed less power than magnetic-core memory, leading to MOS memory overtaking magnetic-core memory as the dominant computer memory technology by the early 1970s. Frank Wanlass, while studying MOSFET structures in 1963, noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM (erasable programmable read-only memory) technology. In 1967, Dawon Kahng and Simon Sze proposed that floating-gate memory cells, consisting of floating-gate MOSFETs (FGMOS), could be used to produce reprogrammable ROM (read-only memory). Floating-gate memory cells later became the basis for non-volatile memory (NVM) technologies including EPROM, EEPROM (electrically erasable programmable ROM) and flash memory. Types of MOS memory There are various types of MOS memory, which include the following. MOS sensors A number of MOSFET sensors have been developed, for measuring physical, chemical, biological and environmental parameters. The earliest MOSFET sensors include the open-gate FET (OGFET) introduced by Johannessen in 1970, the ion-sensitive field-effect transistor (ISFET) invented by Piet Bergveld in 1970, the adsorption FET (ADFET) patented by P.F. Cox in 1974, and a hydrogen-sensitive MOSFET demonstrated by I. Lundstrom, M.S. Shivaraman, C.S. Svenson and L. Lundkvist in 1975. 
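The one-transistor DRAM cell described earlier — a bit stored as charge on a MOS capacitor, with the access transistor gating reads and writes — can be sketched as a toy model. This is a behavioral sketch, not a circuit simulation; leakage and refresh timing are omitted.

```python
class OneTransistorDramCell:
    """Toy model of Dennard's one-transistor DRAM cell.

    A stored bit is charge (1) or no charge (0) on the cell capacitor;
    the access transistor connects the capacitor to the bit line only
    while the word line is asserted.
    """

    def __init__(self):
        self.charge = 0  # charge on the MOS capacitor: 1 = charged, 0 = empty

    def write(self, bit, word_line=True):
        if word_line:        # access transistor on: bit line drives the capacitor
            self.charge = 1 if bit else 0

    def read(self, word_line=True):
        if not word_line:
            return None      # transistor off: capacitor is isolated from the bit line
        value = self.charge
        self.charge = 0      # reading shares the charge onto the bit line (destructive read)...
        self.write(value)    # ...so the sensed value must be written back afterwards
        return value

cell = OneTransistorDramCell()
cell.write(1)
assert cell.read() == 1
assert cell.read() == 1  # write-back preserved the bit across the destructive read
```

The write-back step mirrors what a real DRAM sense amplifier does after every read, and is one reason DRAM needs more supporting circuitry per access than SRAM despite using far fewer transistors per bit.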
The ISFET is a special type of MOSFET with a gate at a certain distance, and where the metal gate is replaced by an ion-sensitive membrane, electrolyte solution and reference electrode. By the mid-1980s, numerous other MOSFET sensors had been developed, including the gas sensor FET (GASFET), surface accessible FET (SAFET), charge flow transistor (CFT), pressure sensor FET (PRESSFET), chemical field-effect transistor (ChemFET), reference ISFET (REFET), biosensor FET (BioFET), enzyme-modified FET (ENFET) and immunologically modified FET (IMFET). By the early 2000s, BioFET types such as the DNA field-effect transistor (DNAFET), gene-modified FET (GenFET) and cell-potential BioFET (CPFET) had been developed. The two main types of image sensors used in digital imaging technology are the charge-coupled device (CCD) and the active-pixel sensor (CMOS sensor). Both CCD and CMOS sensors are based on MOS technology, with the CCD based on MOS capacitors and the CMOS sensor based on MOS transistors. Image sensors MOS technology is the basis for modern image sensors, including the charge-coupled device (CCD) and the CMOS active-pixel sensor (CMOS sensor), used in digital imaging and digital cameras. Willard Boyle and George E. Smith developed the CCD in 1969. While researching the MOS process, they realized that an electric charge was the analogy of the magnetic bubble and that it could be stored on a tiny MOS capacitor. As it was fairly straightforward to fabricate a series of MOS capacitors in a row, they connected a suitable voltage to them so that the charge could be stepped along from one to the next. The CCD is a semiconductor circuit that was later used in the first digital video cameras for television broadcasting. The MOS active-pixel sensor (APS) was developed by Tsutomu Nakamura at Olympus in 1985. The CMOS active-pixel sensor was later developed by Eric Fossum and his team at NASA's Jet Propulsion Laboratory in the early 1990s. 
MOS image sensors are widely used in optical mouse technology. The first optical mouse, invented by Richard F. Lyon at Xerox in 1980, used a 5 µm NMOS sensor chip. Since the first commercial optical mouse, the IntelliMouse introduced in 1999, most optical mouse devices use CMOS sensors. Other sensors MOS sensors, also known as MOSFET sensors, are widely used to measure physical, chemical, biological and environmental parameters. The ion-sensitive field-effect transistor (ISFET), for example, is widely used in biomedical applications. MOSFETs are also widely used in microelectromechanical systems (MEMS), as silicon MOSFETs could interact and communicate with the surroundings and process things such as chemicals, motions and light. An early example of a MEMS device is the resonant-gate transistor, an adaptation of the MOSFET, developed by Harvey C. Nathanson in 1965. Common applications of other MOS sensors include the following. Power MOSFET The power MOSFET, which is commonly used in power electronics, was developed in the early 1970s. The power MOSFET enables low gate drive power, fast switching speed, and advanced paralleling capability. The power MOSFET is the most widely used power device in the world. Advantages over bipolar junction transistors in power electronics include MOSFETs not requiring a continuous flow of drive current to remain in the ON state, offering higher switching speeds, lower switching power losses, lower on-resistances, and reduced susceptibility to thermal runaway. The power MOSFET had an impact on power supplies, enabling higher operating frequencies, size and weight reduction, and increased volume production. Switching power supplies are the most common applications for power MOSFETs. They are also widely used for MOS RF power amplifiers, which enabled the transition of mobile networks from analog to digital in the 1990s. This led to the wide proliferation of wireless mobile networks, which revolutionised telecommunication systems. 
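The "lower on-resistance" advantage mentioned above translates directly into lower conduction loss when the device is switched fully on: P = I²·R_DS(on). The current and resistance values below are illustrative assumptions, not figures from a specific datasheet.

```python
def mosfet_conduction_loss(i_rms_a, r_ds_on_ohms):
    """Conduction loss of a power MOSFET in the on state.

    In the fully-on (ohmic) state the device behaves as a small
    resistance, so dissipation is simply P = I_rms^2 * R_DS(on).
    Switching losses are a separate contribution, not modeled here.
    """
    return i_rms_a ** 2 * r_ds_on_ohms

# Illustrative: 10 A through a 10-milliohm device dissipates about 1 W
print(f"{mosfet_conduction_loss(10.0, 0.010):.2f} W")
```

Because the loss scales with the square of the current, halving R_DS(on) halves the heat at a given load, which is why on-resistance is a headline specification for switching power supplies.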
The LDMOS in particular is the most widely used power amplifier in mobile networks, such as 2G, 3G, 4G, and 5G. Over 50 billion discrete power MOSFETs are shipped annually, as of 2018. They are widely used for automotive, industrial and communications systems in particular. Power MOSFETs are commonly used in automotive electronics, particularly as switching devices in electronic control units, and as power converters in modern electric vehicles. The insulated-gate bipolar transistor (IGBT), a hybrid MOS-bipolar transistor, is also used for a wide variety of applications. LDMOS, a power MOSFET with lateral structure, is commonly used in high-end audio amplifiers and high-power PA systems. Its advantage is a better behaviour in the saturated region (corresponding to the linear region of a bipolar transistor) than vertical MOSFETs. Vertical MOSFETs are designed for switching applications. DMOS and VMOS Power MOSFETs, including DMOS, LDMOS and VMOS devices, are commonly used for a wide range of other applications, which include the following. RF DMOS RF DMOS, also known as RF power MOSFET, is a type of DMOS power transistor designed for radio-frequency (RF) applications. It is used in various radio and RF applications, which include the following. Consumer electronics MOSFETs are fundamental to the consumer electronics industry. According to Colinge, numerous consumer electronics would not exist without the MOSFET, such as digital wristwatches, pocket calculators, and video games, for example. MOSFETs are commonly used for a wide range of consumer electronics, which include the following devices listed. Computers or telecommunication devices (such as phones) are not included here, but are listed separately in the Information and communications technology (ICT) section below. 
Pocket calculators One of the earliest influential consumer electronic products enabled by MOS LSI circuits was the electronic pocket calculator, as MOS LSI technology enabled large amounts of computational capability in small packages. In 1965, the Victor 3900 desktop calculator was the first MOS LSI calculator, with 29 MOS LSI chips. In 1967, the Texas Instruments Cal-Tech was the first prototype electronic handheld calculator, with three MOS LSI chips, and it was later released as the Canon Pocketronic in 1970. The Sharp QT-8D desktop calculator was the first mass-produced LSI MOS calculator in 1969, and the Sharp EL-8, which used four MOS LSI chips, was the first commercial electronic handheld calculator in 1970. The first true electronic pocket calculator was the Busicom LE-120A HANDY LE, which used a single MOS LSI calculator-on-a-chip from Mostek, and was released in 1971. By 1972, MOS LSI circuits were commercialized for numerous other applications. Audio-visual (AV) media MOSFETs are commonly used for a wide range of audio-visual (AV) media technologies, which include the following list of applications. Power MOSFET applications Power MOSFETs are commonly used for a wide range of consumer electronics. Power MOSFETs are widely used in the following consumer applications. Information and communications technology (ICT) MOSFETs are fundamental to information and communications technology (ICT), including modern computers, modern computing, telecommunications, the communications infrastructure, the Internet, digital telephony, wireless telecommunications, and mobile networks. According to Colinge, the modern computer industry and digital telecommunication systems would not exist without the MOSFET. Advances in MOS technology have been the most important contributing factor in the rapid rise of network bandwidth in telecommunication networks, with bandwidth doubling every 18 months, from bits per second to terabits per second (Edholm's law). 
Computers MOSFETs are commonly used in a wide range of computers and computing applications, which include the following. Telecommunications MOSFETs are commonly used in a wide range of telecommunications, which include the following applications. Power MOSFET applications Insulated-gate bipolar transistor (IGBT) The insulated-gate bipolar transistor (IGBT) is a power transistor with characteristics of both a MOSFET and bipolar junction transistor (BJT). The IGBT is the second most widely used power transistor, after the power MOSFET. The IGBT accounts for 27% of the power transistor market, second only to the power MOSFET (53%), and ahead of the RF amplifier (11%) and bipolar junction transistor (9%). The IGBT is widely used in consumer electronics, industrial technology, the energy sector, aerospace electronic devices, and transportation. The IGBT is widely used in the following applications. Quantum physics 2D electron gas and quantum Hall effect In quantum physics and quantum mechanics, the MOSFET is the basis for two-dimensional electron gas (2DEG) and the quantum Hall effect. The MOSFET enables physicists to study electron behavior in a two-dimensional gas, called a two-dimensional electron gas. In a MOSFET, conduction electrons travel in a thin surface layer, and a "gate" voltage controls the number of charge carriers in this layer. This allows researchers to explore quantum effects by operating high-purity MOSFETs at liquid helium temperatures. In 1978, the Gakushuin University researchers Jun-ichi Wakabayashi and Shinji Kawaji observed the Hall effect in experiments carried out on the inversion layer of MOSFETs. In 1980, Klaus von Klitzing, working at the high magnetic field laboratory in Grenoble with silicon-based MOSFET samples developed by Michael Pepper and Gerhard Dorda, made the unexpected discovery of the quantum Hall effect. Quantum technology The MOSFET is used in quantum technology. 
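The quantum Hall effect von Klitzing discovered in MOSFET inversion layers produces Hall resistance plateaus quantized at exact values R_xy = h/(ν·e²), where ν is the Landau-level filling factor. The constants below are the exact 2019 SI values, so the ν = 1 plateau (the von Klitzing constant) follows by simple arithmetic:

```python
PLANCK_H = 6.62607015e-34            # Planck constant, J*s (exact, 2019 SI)
ELEMENTARY_CHARGE = 1.602176634e-19  # elementary charge, C (exact, 2019 SI)

def hall_resistance(filling_factor):
    """Quantized Hall resistance R_xy = h / (nu * e^2).

    Observed as exact plateaus in the two-dimensional electron gas of a
    silicon MOSFET inversion layer; nu is the Landau-level filling factor.
    """
    return PLANCK_H / (filling_factor * ELEMENTARY_CHARGE ** 2)

r_k = hall_resistance(1)  # the von Klitzing constant
print(f"R_K = {r_k:.1f} ohms")  # about 25812.8 ohms
```

The exactness of these plateaus, independent of the particular sample, is why the quantized Hall resistance is used as a resistance standard in metrology.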
A quantum field-effect transistor (QFET) or quantum well field-effect transistor (QWFET) is a type of MOSFET that takes advantage of quantum tunneling to greatly increase the speed of transistor operation. Transportation MOSFETs are widely used in transportation. For example, they are commonly used for automotive electronics in the automotive industry. MOS technology is commonly used for a wide range of vehicles and transportation, which include the following applications. Automotive industry MOSFETs are widely used in the automotive industry, particularly for automotive electronics in motor vehicles. Automotive applications include the following. Power MOSFET applications Power MOSFETs are widely used in transportation technology, which includes the following vehicles. In the automotive industry, power MOSFETs are widely used in automotive electronics, which include the following. IGBT applications The insulated-gate bipolar transistor (IGBT) is a power transistor with characteristics of both a MOSFET and bipolar junction transistor (BJT). IGBTs are widely used in the following transportation applications. Space industry In the space industry, MOSFET devices were adopted by NASA for space research in 1964, for its Interplanetary Monitoring Platform (IMP) program and Explorers space exploration program. The use of MOSFETs was a major step forward in the electronics design of spacecraft and satellites. The IMP D (Explorer 33), launched in 1966, was the first spacecraft to use the MOSFET. Data gathered by IMP spacecraft and satellites were used to support the Apollo program, enabling the first manned Moon landing with the Apollo 11 mission in 1969. The Cassini–Huygens mission to Saturn, launched in 1997, had spacecraft power distribution accomplished by 192 solid-state power switch (SSPS) devices, which also functioned as circuit breakers in the event of an overload condition. 
The switches were developed from a combination of two semiconductor devices with switching capabilities: the MOSFET and the ASIC (application-specific integrated circuit). This combination resulted in advanced power switches that had better performance characteristics than traditional mechanical switches. Other applications MOSFETs are commonly used for a wide range of other applications, which include the following.
https://en.wikipedia.org/wiki/De%20La%20Salle%20Lipa
De La Salle Lipa
De La Salle Lipa, also known by its acronym DLSL, is a private Catholic Lasallian basic and higher educational institution run by the De La Salle Brothers of the Philippine District of the Christian Brothers in Lipa City, Batangas, Philippines. It was founded in 1962. It is one of the third generation of La Salle schools founded by the Catholic religious congregation De La Salle Brothers in the Philippines: La Salle Academy-Iligan (Iligan City, Lanao del Norte) in 1958, La Salle Green Hills (Mandaluyong) in 1959, Saint Joseph School-La Salle (Villamonte, Bacolod) in 1960, and De La Salle Lipa in 1962. History In school year 1985–1986, the College Department was formally opened, with Elsie Rabago as officer-in-charge. Norma Blanco was appointed the school's first lay high school principal in 1989. Because of the burgeoning school population, Br. Narciso Erquiza FSC was appointed as resident president. On May 15, 1995, Brother Rafael Donato FSC, former president of De La Salle University, assumed the presidency of De La Salle Lipa. Under Brother Donato, the school constructed the SENTRUM, the Sen. Jose W. Diokno Building, the Chez Avenir Hotel (now called Chez Rafael), the St. La Salle Building, the Noli Me Tangere and El Filibusterismo structure clusters of the Jose Rizal Building, and the Centennial Sports Plaza. In 1996, the school opened a graduate school, initially offering a Master in Management Technology degree. In 1997, the school became the first educational institution in Batangas to go online, with its web site launched in the same year. A year later, De La Salle Lipa became one of the first schools around the world to launch an alumni registry web site. Called Umpokan, the web site has become fully interactive and is an online meeting place for graduates of the school. In 2002, Juan Lozano was appointed the school's first vice-president and chief operating officer. 
In 2003, Donato retired from active service and was named president emeritus at the auditorium of the Sen. Jose Diokno building. Brother Manuel Pajarillo, FSC was then appointed president. The school changed its organizational structure in 2005. With Br. Pajarillo still the school's president, Lozano was elevated to the position of Executive Vice-President. Rex Torrecampo was, meanwhile, appointed as the first Vice-President for Administration. The following year, Corazon Abansi became the school's first Vice-President for Academics and Research. In 2006, the school's incorporation papers were amended to officially make it part of an umbrella entity, De La Salle Philippines, which was formed to synchronize the operations of the De La Salle schools with the mission of the De La Salle Brothers in the Philippines. In May 2007, in keeping with the standards set by De La Salle Philippines, the executive vice-president became known as the chancellor, while the two vice-presidents became known as vice-chancellors. In school year 2006–2007, Pajarillo was president of three De La Salle schools (Lipa, Dasmariñas, and the Medical and Health Sciences Institute also in Dasmariñas, Cavite); in 2007 he was made president solely of De La Salle Lipa. During his term, information technology and new facilities were established. Wireless internet connectivity was likewise introduced. The Book Mobile Reading Program (BMRP), a bus turned into a mobile library, was also launched. BMRP reached out to several communities to cater to the youth through storytelling sessions and other literacy training programs. Campus The De La Salle Lipa campus sits on a 10-hectare lot next to the J.P. Laurel National highway (Japan-Philippine Friendship highway), just on the outskirts of Lipa City. It is a 5-minute drive from the Southern Tagalog Arterial Road (STAR), which links the city to the Southern Luzon Express Way (SLEX). Batangas City, the provincial capital, is 5 minutes away via STAR tollway. 
Entering the main access gate at the front of campus, visitors drive into well-paved concrete roads with parking facilities that can accommodate more than 200 vehicles. The SENTRUM is the first major structure seen, a multi-purpose building which has been the venue of pop concerts, professional basketball games, corporate assemblies and religious gatherings. In front of the SENTRUM is a well-kept garden that has a stone sculpture of the founder of the De La Salle Brothers, St. John Baptist de la Salle. Nearby are the Chez Rafael (formerly Chez Avenir and renamed in honor of the school's former president Br. Rafael S. Donato FSC), a laboratory hotel for BS Hotel & Restaurant Management majors, and the Sen. Jose Diokno Building, which holds the college's Learning Resource Center and the offices of executive administration. The Student Center, near the Apolinario Mabini Building and the CBEAM (College of Business, Economics, Accountancy and Management) Building, holds the offices of the college Student Government (SG) and the Council of Student Organization (CSO). The campus may be divided into two areas: the Integrated School side and the College side. Students are not prohibited from crossing to either side. On the Integrated School side, the most recognizable structure is the St. La Salle Building, which is made up of several clusters just in front of the highway. The main cluster that offers the main access gate for Integrated School (IS) students is called the Hall of Lasallian Saints. The hall leads to the building's classrooms as well as the historic Br. Henry Virgil Memorial Gymnasium. The other main structures for IS students are the St. Benilde, St. Mutien Marie, and Br. Gregory Refuerzo Buildings. The Learning Resource Center is located inside the Br. Vernon Mabile Building. On the Senior High School side, the buildings that are used are the Claro M. Recto, and Jose Rizal (composed of the Noli Me Tangere and El Filibusterismo structure clusters). 
College students hold classes on the western half of the campus, using the Apolinario Mabini Building, and CBEAM (College of Business, Economics, Accountancy and Management) Building. The Gregorio Zara building is also on the college side of the campus. Also known as the I.T. Domain Building, it holds the school's Network Operations Center as well as three computer laboratories. Beside the building is a gate and an access road that leads to the De La Salle Brothers’ Novitiate. Academic programs College Degree Programs College of Business, Economics, Accountancy and Management BS Accountancy BS Accounting Technology BS Business Administration major in Business Economics BS Business Administration major in Financial Management BS Business Administration major in Marketing Management BS Entrepreneurship BS Legal Management BS Management Technology Certificate in Entrepreneurship College of Education, Arts and Sciences AB Communication AB Multimedia Arts BS Biology BS CBT Bachelor of Elementary Education Bachelor of Elementary Education major in Special Education BS Mathematics BS Psychology Bachelor of Secondary Education major in English Bachelor of Secondary Education major in Filipino Bachelor of Secondary Education major in Mathematics Bachelor of Secondary Education major in Social Studies College of International Hospitality and Tourism Management BS Hotel and Restaurant Management BS Tourism Certificate in Hotel and Restaurant Management Certificate in Culinary Arts College of Information Technology and Engineering BS Computer Engineering BS Computer Science BS Electronics and Communications Engineering BS Electrical Engineering BS Industrial Engineering BS Information Systems BS Information Technology Certificate in Information Technology College of Law Juris Doctor College of Nursing BS Nursing Graduate Degree Programs Master in Management Technology Short Intensive Management Seminars College Of Law Organizational divisions Academic division Integrated 
School Primary Learning Community Junior Learning Community Senior Learning Community College College of Business, Economics, Accountancy & Management (CBEAM) College of Education, Arts & Sciences (CEAS) College of International Hospitality and Tourism Management (CIHTM) College of Information Technology & Engineering (CITE) College of Law (COL) College of Nursing (CON) Offices Under the OP and the OEVP Office of the President Presidential Management Office Office of the Executive Vice-President Lasallian Ministries Sports & Culture Publications Institutional The President's Report (Annual) The Ala (Salle) Eh! (Quarterly) SADYÂ (Twice Monthly) Integrated school Bulik (Integrated School Broadsheet) Bulik Literary Folio (Student Magazine) Bakas (Grade School Student Newsletter) Kamalig (Student Newsletter) CRESCIT (Senior High School Newspaper) PÁNANAW (Senior High School Literary Folio) typo. (Senior High School Student Magazine) College LAVOXA (Student Broadsheet & Tabloid) Umalohokan (Student Newsletter) L Magazine (Student Magazine) Utak Berde (Student Literary Magazine) Talas (Faculty Journal) MMT Link (Graduate School Journal) Etudes (Research Office Newsletter) Infobits (Guidance Newsletter) Parents’ Bulletin (Guidance Newsletter) Educator's Link (Guidance Newsletter) References De La Salle Lipa: Academic Programs De La Salle Lipa: History De La Salle Philippines Universities and colleges in Batangas Education in Lipa, Batangas High schools in Batangas Batangas Nursing schools in the Philippines
4707646
https://en.wikipedia.org/wiki/Orao%20%28computer%29
Orao (computer)
Orao (English: Eagle) was an 8-bit computer developed by PEL Varaždin in 1984. Its marketing and distribution were handled by Velebit Informatika. It was used as a standard primary and secondary school computer in Croatia and Vojvodina from 1985 to 1991. Orao (code named YU102) was designed by Miroslav Kocijan to supersede Galeb (code named YU101). The goal was to make a better computer with fewer components, easier to produce and less expensive. The initial version, dubbed Orao MR102, was succeeded by Orao 64 and Orao+. History The chief designer of Orao was Miroslav Kocijan, who previously constructed the basic motherboard for Galeb (working name YU101). Galeb was inspired by the Compukit UK101, Ohio Scientific Superboard and Ohio Scientific Superboard II, computers which appeared in the United Kingdom and the United States in 1979 and were cheaper than the Apple II, Commodore PET and TRS-80. Driven by the challenge of Anthony Madidi, Kocijan began to develop a computer that would be more advanced than the Galeb, with fewer components, easier production, better graphics and performance, and a more affordable price. The working title of the new project was YU102. Kocijan gathered a group of people who helped in the development of the electronic components and software. He had the idea to commercialize Orao, and was able to convince Rajko Ivanusic, director of PEL, to support it. In the market of the former Yugoslavia, where high tariffs and the low purchasing power of citizens and schools put home computers out of reach, the idea of a mass-produced home computer made sense. Serial production and price The price of Orao was originally set at around 55,000 Yugoslav dinars; however, it rose to 80,000 dinars. Production began in the summer of 1984. 
The only imported components were integrated circuits, which were hard to acquire in Yugoslavia because of strict monetary policy; PEL Varaždin itself financed the import of these components, which enabled a cheaper final product. Occasional problems in serial production were related to the construction of certain external parts and to overheating. Lack of supported software Since the Orao was not compatible with any home computer of the time, its software offering was scarce due to the lack of software companies whose products supported the platform. Lack of capabilities A perceived lack of capabilities was one of the most common complaints about the 8-bit school computer, and it contributed to the scarce software support described above. Architecture The graphics were controlled by a special circuit, not by the main processor as was the case in many other home computers, because Kocijan's intention was to create a graphical computer similar to the Xerox Alto or Macintosh, and as such he had it use bitmap graphics. The resolution was 256×256 dots, requiring up to 196,608 bits of VRAM, as the graphics needed no more than three bits per pixel. This resolution was chosen to give square dots, which made writing graphical programs easy. The text resolution was 32×32 characters, with every character rendered in an 8×8 field. The designers of Orao went a step further to create a computer which could be easily expanded, connect to a printer and establish a network connection through RS-232. 
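The video memory figures above follow from simple arithmetic. A quick check in Python (illustrative only; the machine itself was programmed in BASIC and 6502 machine code):

```python
# Orao bitmap graphics: 256 x 256 square dots,
# at most three bits per pixel (up to 8 shades of gray).
width = height = 256
bits_per_pixel = 3

vram_bits = width * height * bits_per_pixel
vram_kb = vram_bits // 8 // 1024

print(vram_bits)  # 196608 bits, matching the figure above
print(vram_kb)    # 24 (KB), matching "VRAM up to 24 KB"

# Text mode: characters rendered in 8x8 fields give 32 x 32 characters.
print(width // 8, height // 8)  # 32 32
```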
Specifications 
CPU: MOS Technology 6502 at 1 MHz 
Read-Only Memory: 16 KB (with BASIC interpreter and Machine code monitor) 
RAM: 16 KB (expandable to 32 KB) 
VRAM: up to 24 KB 
Graphics: monochrome 256×256 pixels, in up to 8 shades of gray 
Text mode: 32 lines with 32 characters each; 72 chars in one BASIC line 
Sound: single-channel, 5 octaves through built-in loudspeaker 
Computer keyboard: 61-key QWERTZ 
I/O ports: video and RF TV out, cassette tape interface (DIN-5), RS-232 (D-25), Edge expansion connector 
Peripherals: 5.25" floppy drive, Printer 
Price: 55,000 dinars planned but increased to 80,000 during production 
BASIC example 
Math 
10 REM PLOTS ONE PERIOD OF SINUS GRAPH 
20 for x=0 to 128 
30 y=64*sin(3.14159*x/64) 
40 plot x,y+96 
50 next 
60 END 
Physics 
5 REM CONVERTS KM/H TO M/S 
10 PRINT"KM/H M/S" 
20 FOR SP=0 TO 60 
30 PRINT SP,SP*1000/(60*60) 
40 NEXT 
Output 
RUN 
KM/H M/S 
0 0 
1 .277777778 
2 .555555556 
3 .833333333 
4 1.11111111 
5 1.38888889 
6 1.66666667 
7 1.94444445 
8 2.22222222 
9 2.5 
10 2.77777778 
11 3.05555556 
12 3.33333333 
13 3.61111111 
14 3.88888889 
15 4.16666667 
16 4.44444445 
17 4.72222222 
18 5 
19 5.27777778 
20 5.55555556 
21 5.83333334 
22 6.11111111 
23 6.38888889 
24 6.66666667 
25 6.94444445 
26 7.22222223 
27 7.5 
28 7.77777778 
29 8.05555556 
30 8.33333333 
31 8.61111112 
32 8.88888889 
33 9.16666667 
34 9.44444445 
35 9.72222223 
36 10 
37 10.2777778 
38 10.5555556 
39 10.8333333 
40 11.1111111 
41 11.3888889 
42 11.6666667 
43 11.9444444 
44 12.2222222 
45 12.5 
46 12.7777778 
47 13.0555556 
48 13.3333333 
49 13.6111111 
50 13.8888889 
51 14.1666667 
52 14.4444444 
53 14.7222222 
54 15 
55 15.2777778 
56 15.5555556 
57 15.8333333 
58 16.1111111 
59 16.3888889 
60 16.6666667 
Machine code/Assembly example 
1000 A9 7F LDA #7F 
1002 85 E2 STA E2 ; x center 
1004 85 E3 STA E3 ; y center 
1006 A9 6F LDA #6F 
1008 85 F8 STA F8 ; radius 
100A 20 06 FF JSR FF06 ; draw circle 
100D C6 E2 DEC E2 ; decrement x center 
100F C6 E3 DEC E3 ; decrement y center 
1011 A5 F8 LDA F8 
1013 38 SEC 
1014 E9 04 SBC #04 ; reduce radius for four points 
1016 85 F8 STA F8 ; store it 
1018 C9 21 CMP #21 ; compare with 0x21 
101A B0 EE BCS 100A ; bigger or equal ? yes, draw again 
101C 60 RTS ; no, return 
Design team Miroslav Kocijan Branko Zebec Ivan Pongračić Anđelko Kršić Damir Šafarić Davorin Krizman Zdravko Melnjak Vjekoslav Prstec Dražen Zlatarek References External links Orao page at old-computers.com Orao implementation in FPGA Another Orao implementation in FPGA MESS, Multi-System Emulator which supports Orao Orao emulator with source code and some software, as well as Orao 2007 recreation of original computer Orao Emulator written in C# Orao Emulator for Android ORAO BASKET Orao Emulator in web browser Orao Emulator in Python Browser Orao Emulator as standard web site, using Blazor/C# Browser Orao Emulator as Web Assembly app, can work offline in modern browsers Computer-related introductions in 1984 Personal computers
57282698
https://en.wikipedia.org/wiki/ZFS
ZFS
ZFS (previously: Zettabyte file system) combines a file system with a volume manager. It began as part of the Sun Microsystems Solaris operating system in 2001. Large parts of Solaris – including ZFS – were published under an open source license as OpenSolaris for around 5 years from 2005, before being placed under a closed source license when Oracle Corporation acquired Sun in 2009/2010. During 2005 to 2010, the open source version of ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD. In 2010, the illumos project forked a recent version of OpenSolaris, to continue its development as an open source project, including ZFS. In 2013, OpenZFS was founded to coordinate the development of open source ZFS. OpenZFS maintains and manages the core ZFS code, while organizations using ZFS maintain the specific code and validation processes required for ZFS to integrate within their systems. OpenZFS is widely used in Unix-like systems. Overview The management of stored data generally involves two aspects: the physical volume management of one or more block storage devices such as hard drives and SD cards and their organization into logical block devices as seen by the operating system (often involving a volume manager, RAID controller, array manager, or suitable device driver), and the management of data and files that are stored on these logical block devices (a file system or other data storage). Example: A RAID array of 2 hard drives and an SSD caching disk is controlled by Intel's RST system, part of the chipset and firmware built into a desktop computer. The Windows user sees this as a single volume, containing an NTFS-formatted drive of their data, and NTFS is not necessarily aware of the manipulations that may be required (such as reading from/writing to the cache drive or rebuilding the RAID array if a disk fails). 
The management of the individual devices and their presentation as a single device is distinct from the management of the files held on that apparent device. ZFS is unusual because, unlike most other storage systems, it unifies both of these roles and acts as both the volume manager and the file system. Therefore, it has complete knowledge of both the physical disks and volumes (including their condition and status, their logical arrangement into volumes), and also of all the files stored on them. ZFS is designed to ensure (subject to suitable hardware) that data stored on disks cannot be lost due to physical errors or misprocessing by the hardware or operating system, or bit rot events and data corruption which may happen over time, and its complete control of the storage system is used to ensure that every step, whether related to file management or disk management, is verified, confirmed, corrected if needed, and optimized, in a way that storage controller cards and separate volume and file managers cannot achieve. ZFS also includes a mechanism for dataset and pool-level snapshots and replication, including snapshot cloning which is described by the FreeBSD documentation as one of its "most powerful features", having features that "even other file systems with snapshot functionality lack". Very large numbers of snapshots can be taken, without degrading performance, allowing snapshots to be used prior to risky system operations and software changes, or an entire production ("live") file system to be fully snapshotted several times an hour, in order to mitigate data loss due to user error or malicious activity. Snapshots can be rolled back "live" or previous file system states can be viewed, even on very large file systems, leading to savings in comparison to formal backup and restore processes. Snapshots can also be cloned to form new independent file systems. 
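The snapshot behaviour described above relies on copy-on-write: a snapshot initially shares every block with the live file system, and blocks only diverge when they are next written, which is why taking even many snapshots is cheap. A toy Python sketch of the idea (the class and method names are invented for illustration and are not ZFS interfaces):

```python
class CowStore:
    """Toy copy-on-write store: snapshots share blocks until a write diverges them."""

    def __init__(self):
        self.live = {}        # block id -> data (the "live" file system view)
        self.snapshots = {}   # snapshot name -> {block id -> data}

    def write(self, block, data):
        self.live[block] = data   # only the live view changes; snapshots keep old blocks

    def snapshot(self, name):
        # A real snapshot just records block pointers; copying the dict stands in for that.
        self.snapshots[name] = dict(self.live)

    def rollback(self, name):
        self.live = dict(self.snapshots[name])

s = CowStore()
s.write(0, "original")
s.snapshot("before-upgrade")
s.write(0, "corrupted by a bad upgrade")
s.rollback("before-upgrade")
print(s.live[0])   # original
```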
A pool level snapshot (known as a "checkpoint") is available which allows rollback of operations that may affect the entire pool's structure, or which add or remove entire datasets. History Sun Microsystems (to 2010) In 1987, AT&T Corporation and Sun announced that they were collaborating on a project to merge the most popular Unix variants on the market at that time: Berkeley Software Distribution, UNIX System V, and Xenix. This became Unix System V Release 4 (SVR4). The project was released under the name Solaris, which became the successor to SunOS 4 (although SunOS 4.1.x micro releases were retroactively named Solaris 1). ZFS was designed and implemented by a team at Sun led by Jeff Bonwick, Bill Moore and Matthew Ahrens. It was announced on September 14, 2004, but development started in 2001. Source code for ZFS was integrated into the main trunk of Solaris development on October 31, 2005, and released for developers as part of build 27 of OpenSolaris on November 16, 2005. In June 2006, Sun announced that ZFS was included in the mainstream 6/06 update to Solaris 10. Solaris was originally developed as proprietary software, but Sun Microsystems was an early commercial proponent of open source software and in June 2005 released most of the Solaris codebase under the CDDL license and founded the OpenSolaris open-source project. In Solaris 10 6/06 ("U2"), Sun added the ZFS file system and during the next 5 years frequently updated ZFS with new features. ZFS was ported to Linux, Mac OS X (continued as MacZFS) and FreeBSD, under this open source license. The name at one point was said to stand for "Zettabyte File System", but by 2006, the name was no longer considered to be an abbreviation. A ZFS file system can store up to 256 quadrillion zebibytes (ZiB). In September 2007, NetApp sued Sun claiming that ZFS infringed some of NetApp's patents on Write Anywhere File Layout. Sun counter-sued in October the same year claiming the opposite. 
The lawsuits were ended in 2010 with an undisclosed settlement. Later development Ported versions of ZFS began to appear in 2005. After the Sun acquisition by Oracle in 2010, Oracle's version of ZFS became closed source and development of open-source versions proceeded independently, coordinated by OpenZFS from 2013. Features Summary Examples of features specific to ZFS include: Designed for long-term storage of data, and indefinitely scaled datastore sizes with zero data loss, and high configurability. Hierarchical checksumming of all data and metadata, ensuring that the entire storage system can be verified on use, and confirmed to be correctly stored, or remedied if corrupt. Checksums are stored with a block's parent block, rather than with the block itself. This contrasts with many file systems where checksums (if held) are stored with the data so that if the data is lost or corrupt, the checksum is also likely to be lost or incorrect. Can store a user-specified number of copies of data or metadata, or selected types of data, to improve the ability to recover from data corruption of important files and structures. Automatic rollback of recent changes to the file system and data, in some circumstances, in the event of an error or inconsistency. Automated and (usually) silent self-healing of data inconsistencies and write failure when detected, for all errors where the data is capable of reconstruction. Data can be reconstructed using all of the following: error detection and correction checksums stored in each block's parent block; multiple copies of data (including checksums) held on the disk; write intentions logged on the SLOG (ZIL) for writes that should have occurred but did not occur (after a power failure); parity data from RAID/RAID-Z disks and volumes; copies of data from mirrored disks and volumes. Native handling of standard RAID levels and additional ZFS RAID layouts ("RAID-Z"). 
The RAID-Z levels stripe data across only the disks required, for efficiency (many RAID systems stripe indiscriminately across all devices), and checksumming allows rebuilding of inconsistent or corrupted data to be minimized to those blocks with defects; Native handling of tiered storage and caching devices, which is usually a volume related task. Because ZFS also understands the file system, it can use file-related knowledge to inform, integrate and optimize its tiered storage handling which a separate device cannot; Native handling of snapshots and backup/replication which can be made efficient by integrating the volume and file handling. Relevant tools are provided at a low level and require external scripts and software for utilization. Native data compression and deduplication, although the latter is largely handled in RAM and is memory hungry. Efficient rebuilding of RAID arrays—a RAID controller often has to rebuild an entire disk, but ZFS can combine disk and file knowledge to limit any rebuilding to data which is actually missing or corrupt, greatly speeding up rebuilding; Unaffected by RAID hardware changes which affect many other systems. On many systems, if self-contained RAID hardware such as a RAID card fails, or the data is moved to another RAID system, the file system will lack information that was on the original RAID hardware, which is needed to manage data on the RAID array. This can lead to a total loss of data unless near-identical hardware can be acquired and used as a "stepping stone". Since ZFS manages RAID itself, a ZFS pool can be migrated to other hardware, or the operating system can be reinstalled, and the RAID-Z structures and data will be recognized and immediately accessible by ZFS again. 
Ability to identify data that would have been found in a cache but has been discarded recently instead; this allows ZFS to reassess its caching decisions in light of later use and facilitates very high cache-hit levels (ZFS cache hit rates are typically over 80%); Alternative caching strategies can be used for data that would otherwise cause delays in data handling. For example, synchronous writes which are capable of slowing down the storage system can be converted to asynchronous writes by being written to a fast separate caching device, known as the SLOG (sometimes called the ZIL – ZFS Intent Log). Highly tunable—many internal parameters can be configured for optimal functionality. Can be used for high availability clusters and computing, although not fully designed for this use. Data integrity One major feature that distinguishes ZFS from other file systems is that it is designed with a focus on data integrity by protecting the user's data on disk against silent data corruption caused by data degradation, power surges (voltage spikes), bugs in disk firmware, phantom writes (the previous write did not make it to disk), misdirected reads/writes (the disk accesses the wrong block), DMA parity errors between the array and server memory or from the driver (since the checksum validates data inside the array), driver errors (data winds up in the wrong buffer inside the kernel), accidental overwrites (such as swapping to a live file system), etc. A 1999 study showed that neither any of the then-major and widespread filesystems (such as UFS, Ext, XFS, JFS, or NTFS), nor hardware RAID (which has some issues with data integrity) provided sufficient protection against data corruption problems. Initial research indicates that ZFS protects data better than earlier efforts. It is also faster than UFS and can be seen as its replacement. Within ZFS, data integrity is achieved by using a Fletcher-based checksum or a SHA-256 hash throughout the file system tree. 
Each block of data is checksummed and the checksum value is then saved in the pointer to that block—rather than at the actual block itself. Next, the block pointer is checksummed, with the value being saved at its pointer. This checksumming continues all the way up the file system's data hierarchy to the root node, which is also checksummed, thus creating a Merkle tree. In-flight data corruption or phantom reads/writes (the data written/read checksums correctly but is actually wrong) are undetectable by most filesystems as they store the checksum with the data. ZFS stores the checksum of each block in its parent block pointer so the entire pool self-validates. When a block is accessed, regardless of whether it is data or meta-data, its checksum is calculated and compared with the stored checksum value of what it "should" be. If the checksums match, the data are passed up the programming stack to the process that asked for it; if the values do not match, then ZFS can heal the data if the storage pool provides data redundancy (such as with internal mirroring), assuming that the copy of data is undamaged and with matching checksums. It is optionally possible to provide additional in-pool redundancy by specifying copies=2 (or copies=3 or more), which means that data will be stored twice (or three times) on the disk, effectively halving (or, for copies=3, reducing to one third) the storage capacity of the disk. Additionally some kinds of data used by ZFS to manage the pool are stored multiple times by default for safety, even with the default copies=1 setting. If other copies of the damaged data exist or can be reconstructed from checksums and parity data, ZFS will use a copy of the data (or recreate it via a RAID recovery mechanism), and recalculate the checksum—ideally resulting in the reproduction of the originally expected value. If the data passes this integrity check, the system can then update all faulty copies with known-good data and redundancy will be restored. 
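The parent-stored checksums and self-healing read described above can be sketched in Python. SHA-256 from the standard library stands in for ZFS's block checksums, and a two-element list stands in for redundant copies of a block; this illustrates the idea only, not ZFS's actual on-disk layout:

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The "block pointer" stores the child's checksum, not the child itself,
# so a corrupt block cannot vouch for its own integrity.
copies = [bytearray(b"important data"), bytearray(b"important data")]
parent_pointer = {"checksum": checksum(bytes(copies[0]))}

copies[0][0] ^= 0xFF   # silent corruption of the first copy

def read_with_self_heal(copies, pointer):
    """Return the first copy whose checksum matches the parent pointer,
    and overwrite every copy with that known-good data (self-healing)."""
    for block in copies:
        if checksum(bytes(block)) == pointer["checksum"]:
            for j in range(len(copies)):
                copies[j] = bytearray(block)
            return bytes(block)
    raise IOError("all copies corrupt: unrecoverable")

print(read_with_self_heal(copies, parent_pointer))  # b'important data'
```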
Consistency of data held in memory, such as cached data in the ARC, is not checked by default, as ZFS is expected to run on enterprise-quality hardware with error correcting RAM, but the capability to check in-memory data exists and can be enabled using "debug flags". RAID ("RAID-Z") For ZFS to be able to guarantee data integrity, it needs multiple copies of the data, usually spread across multiple disks. Typically this is achieved by using either a RAID controller or so-called "soft" RAID (built into a file system). Avoidance of hardware RAID controllers While ZFS can work with hardware RAID devices, ZFS will usually work more efficiently and with greater data protection if it has raw access to all storage devices. ZFS relies on the disk for an honest view to determine the moment data is confirmed as safely written and it has numerous algorithms designed to optimize its use of caching, cache flushing, and disk handling. Disks connected to the system using a hardware, firmware, other "soft" RAID, or any other controller that modifies the ZFS-to-disk I/O path will affect ZFS performance and data integrity. If a third-party device performs caching or presents drives to ZFS as a single system without the low level view ZFS relies upon, there is a much greater chance that the system will perform less optimally and that ZFS will be less likely to prevent failures, recover from failures more slowly, or lose data due to a write failure. For example, if a hardware RAID card is used, ZFS may not be able to: determine the condition of disks; determine if the RAID array is degraded or rebuilding; detect all data corruption; place data optimally across the disks; make selective repairs; control how repairs are balanced with ongoing use; or make repairs that ZFS could usually undertake. The hardware RAID card will interfere with ZFS' algorithms. RAID controllers also usually add controller-dependent data to the drives which prevents software RAID from accessing the user data. 
In the case of a hardware RAID controller failure, it may be possible to read the data with another compatible controller, but this isn't always possible and a replacement may not be available. Alternate hardware RAID controllers may not understand the original manufacturer's custom data required to manage and restore an array. Unlike most other systems where RAID cards or similar hardware can offload resources and processing to enhance performance and reliability, with ZFS it is strongly recommended that these methods not be used as they typically reduce the system's performance and reliability. If disks must be attached through a RAID or other controller, it is recommended to minimize the amount of processing done in the controller by using a plain HBA (host adapter), a simple fanout card, or configure the card in JBOD mode (i.e. turn off RAID and caching functions), to allow devices to be attached with minimal changes in the ZFS-to-disk I/O pathway. A RAID card in JBOD mode may still interfere if it has a cache or, depending upon its design, may detach drives that do not respond in time (as has been seen with many energy-efficient consumer-grade hard drives), and as such, may require Time-Limited Error Recovery (TLER)/CCTL/ERC-enabled drives to prevent drive dropouts, so not all cards are suitable even with RAID functions disabled. ZFS's approach: RAID-Z and mirroring Instead of hardware RAID, ZFS employs "soft" RAID, offering RAID-Z (parity based like RAID 5 and similar) and disk mirroring (similar to RAID 1). The schemes are highly flexible. RAID-Z is a data/parity distribution scheme like RAID-5, but uses dynamic stripe width: every block is its own RAID stripe, regardless of blocksize, resulting in every RAID-Z write being a full-stripe write. This, when combined with the copy-on-write transactional semantics of ZFS, eliminates the write hole error. 
RAID-Z is also faster than traditional RAID 5 because it does not need to perform the usual read-modify-write sequence. As all stripes are of different sizes, RAID-Z reconstruction has to traverse the filesystem metadata to determine the actual RAID-Z geometry. This would be impossible if the filesystem and the RAID array were separate products, whereas it becomes feasible when there is an integrated view of the logical and physical structure of the data. Going through the metadata means that ZFS can validate every block against its 256-bit checksum as it goes, whereas traditional RAID products usually cannot do this. In addition to handling whole-disk failures, RAID-Z can also detect and correct silent data corruption, offering "self-healing data": when reading a RAID-Z block, ZFS compares it against its checksum, and if the data disks did not return the right answer, ZFS reads the parity and then figures out which disk returned bad data. Then, it repairs the damaged data and returns good data to the requestor. RAID-Z and mirroring do not require any special hardware: they do not need NVRAM for reliability, and they do not need write buffering for good performance or data protection. With RAID-Z, ZFS provides fast, reliable storage using cheap, commodity disks. There are five different RAID-Z modes: striping (similar to RAID 0, offers no redundancy), RAID-Z1 (similar to RAID 5, allows one disk to fail), RAID-Z2 (similar to RAID 6, allows two disks to fail), RAID-Z3 (a RAID 7 configuration, allows three disks to fail), and mirroring (similar to RAID 1, allows all but one disk to fail). The need for RAID-Z3 arose in the early 2000s as multi-terabyte capacity drives became more common. This increase in capacity—without a corresponding increase in throughput speeds—meant that rebuilding an array due to a failed drive could "easily take weeks or months" to complete. 
During this time, the older disks in the array will be stressed by the additional workload, which could result in data corruption or drive failure. By increasing parity, RAID-Z3 reduces the chance of data loss by simply increasing redundancy. Resilvering and scrub (array syncing and integrity checking) ZFS has no tool equivalent to fsck (the standard Unix and Linux data checking and repair tool for file systems). Instead, ZFS has a built-in scrub function which regularly examines all data and repairs silent corruption and other problems. Some differences are: fsck must be run on an offline filesystem, which means the filesystem must be unmounted and is not usable while being repaired, while scrub is designed to be used on a mounted, live filesystem, and does not need the ZFS filesystem to be taken offline. fsck usually only checks metadata (such as the journal log) but never checks the data itself. This means, after an fsck, the data might still not match the original data as stored. fsck cannot always validate and repair data when checksums are stored with data (often the case in many file systems), because the checksums may also be corrupted or unreadable. ZFS always stores checksums separately from the data they verify, improving reliability and the ability of scrub to repair the volume. ZFS also stores multiple copies of data—metadata, in particular, may have upwards of 4 or 6 copies (multiple copies per disk and multiple disk mirrors per volume), greatly improving the ability of scrub to detect and repair extensive damage to the volume, compared to fsck. scrub checks everything, including metadata and the data. The effect can be observed by comparing fsck to scrub times—sometimes a fsck on a large RAID completes in a few minutes, which means only the metadata was checked. Traversing all metadata and data on a large RAID takes many hours, which is exactly what scrub does. 
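The parity-based repair that both RAID-Z reads and scrub rely on reduces, in the single-parity case, to XOR arithmetic: the parity block is the XOR of the data blocks in a stripe, so any one bad or missing block equals the XOR of all the others. A simplified Python sketch (real RAID-Z adds dynamic stripe widths and uses block checksums to identify which disk returned bad data):

```python
from functools import reduce

def parity(blocks):
    """XOR all blocks together, byte by byte (blocks must be equal length)."""
    return bytes(reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # one stripe across three data disks
p = parity(data)                      # stored on the parity disk

# Suppose disk 1 returns bad data; its checksum (not modelled here) flags it,
# and its contents are rebuilt from the surviving blocks plus parity.
rebuilt = parity([data[0], data[2], p])
print(rebuilt == data[1])  # True
```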
The official recommendation from Sun/Oracle is to scrub enterprise-level disks once a month, and cheaper commodity disks once a week.

Capacity

ZFS is a 128-bit file system, so it can address 1.84 × 10^19 times more data than 64-bit systems such as Btrfs. The maximum limits of ZFS are designed to be so large that they should never be encountered in practice. For instance, fully populating a single zpool with 2^128 bits of data would require 3×10^24 TB hard disk drives.

Some theoretical limits in ZFS are:
- 16 exbibytes (2^64 bytes): maximum size of a single file
- 2^48: number of entries in any individual directory
- 16 exbibytes: maximum size of any attribute
- 2^56: number of attributes of a file (actually constrained to 2^48 for the number of files in a directory)
- 256 quadrillion zebibytes (2^128 bytes): maximum size of any zpool
- 2^64: number of devices in any zpool
- 2^64: number of file systems in a zpool
- 2^64: number of zpools in a system

Encryption

With Oracle Solaris, the encryption capability in ZFS is embedded into the I/O pipeline. During writes, a block may be compressed, encrypted, checksummed and then deduplicated, in that order. The policy for encryption is set at the dataset level when datasets (file systems or ZVOLs) are created. The wrapping keys provided by the user/administrator can be changed at any time without taking the file system offline. The default behaviour is for the wrapping key to be inherited by any child data sets. The data encryption keys are randomly generated at dataset creation time. Only descendant datasets (snapshots and clones) share data encryption keys. A command to switch to a new data encryption key for the clone or at any time is provided; this does not re-encrypt already existing data, instead utilising an encrypted master-key mechanism. The encryption feature is also fully integrated into OpenZFS 0.8.0, available for Debian and Ubuntu Linux distributions.
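The capacity figures quoted above can be sanity-checked with quick arithmetic (an illustration only, not part of ZFS itself):

```python
# A 128-bit address space holds 2^64 times as much as a 64-bit one,
# matching the 1.84 x 10^19 figure quoted above.
ratio = 2**128 // 2**64
print(f"{ratio:.2e}")          # 1.84e+19

# The single-file limit of 2^64 bytes is 16 exbibytes (1 EiB = 2^60 bytes).
print(2**64 // 2**60)          # 16
```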
Read/write efficiency

ZFS will automatically allocate data storage across all vdevs in a pool (and all devices in each vdev) in a way that generally maximises the performance of the pool. ZFS will also update its write strategy to take account of new disks added to a pool, when they are added. As a general rule, ZFS allocates writes across vdevs based on the free space in each vdev. This ensures that vdevs which already hold proportionately less data are given more writes when new data is to be stored. This helps to ensure that as the pool becomes more used, the situation does not develop that some vdevs become full, forcing writes to occur on a limited number of devices. It also means that when data is read (and reads are much more frequent than writes in most uses), different parts of the data can be read from as many disks as possible at the same time, giving much higher read performance. Therefore, as a general rule, pools and vdevs should be managed and new storage added, so that the situation does not arise that some vdevs in a pool are almost full and others almost empty, as this will make the pool less efficient.

Other features

Storage devices, spares, and quotas

Pools can have hot spares to compensate for failing disks. When mirroring, block devices can be grouped according to physical chassis, so that the filesystem can continue in the case of the failure of an entire chassis. Storage pool composition is not limited to similar devices, but can consist of ad-hoc, heterogeneous collections of devices, which ZFS seamlessly pools together, subsequently doling out space as needed. Arbitrary storage device types can be added to existing pools to expand their size. The storage capacity of all vdevs is available to all of the file system instances in the zpool. A quota can be set to limit the amount of space a file system instance can occupy, and a reservation can be set to guarantee that space will be available to a file system instance.
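The free-space-proportional write allocation described under read/write efficiency can be sketched as follows. This is an illustrative model, not ZFS's actual allocator:

```python
import random

# Illustrative sketch: choose a vdev for each new block with probability
# proportional to its free space, so emptier vdevs receive more writes.
def pick_vdev(free_bytes: list[int]) -> int:
    total = sum(free_bytes)
    r = random.uniform(0, total)
    for i, free in enumerate(free_bytes):
        r -= free
        if r <= 0:
            return i
    return len(free_bytes) - 1

counts = [0, 0]
for _ in range(10_000):
    counts[pick_vdev([900, 100])] += 1   # vdev 0 has 9x the free space
print(counts)                            # roughly 9:1 in favour of vdev 0
```

Over time this evens out the fill level of the vdevs, which is exactly the behaviour the paragraph above describes.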
Caching mechanisms: ARC, L2ARC, Transaction groups, ZIL, SLOG, Special VDEV

ZFS uses different layers of disk cache to speed up read and write operations. Ideally, all data should be stored in RAM, but that is usually too expensive. Therefore, data is automatically cached in a hierarchy to optimize performance versus cost; these are often called "hybrid storage pools". Frequently accessed data will be stored in RAM, and less frequently accessed data can be stored on slower media, such as solid state drives (SSDs). Data that is not often accessed is not cached and left on the slow hard drives. If old data is suddenly read a lot, ZFS will automatically move it to SSDs or to RAM.

ZFS caching mechanisms include one each for reads and writes, and in each case, two levels of caching can exist, one in computer memory (RAM) and one on fast storage (usually solid state drives (SSDs)), for a total of four caches. A number of other caches, cache divisions, and queues also exist within ZFS. For example, each VDEV has its own data cache, and the ARC cache is divided between data stored by the user and metadata used by ZFS, with control over the balance between these.

Special VDEV Class

In OpenZFS 0.8 and later, it is possible to configure a Special VDEV class to preferentially store filesystem metadata, and optionally the Data Deduplication Table (DDT), and small filesystem blocks. This allows, for example, creating a Special VDEV on fast solid-state storage to store the metadata, while the regular file data is stored on spinning disks. This speeds up metadata-intensive operations such as filesystem traversal, scrub, and resilver, without the expense of storing the entire filesystem on solid-state storage.

Copy-on-write transactional model

ZFS uses a copy-on-write transactional object model.
All block pointers within the filesystem contain a 256-bit checksum or 256-bit hash (currently a choice between Fletcher-2, Fletcher-4, or SHA-256) of the target block, which is verified when the block is read. Blocks containing active data are never overwritten in place; instead, a new block is allocated, modified data is written to it, then any metadata blocks referencing it are similarly read, reallocated, and written. To reduce the overhead of this process, multiple updates are grouped into transaction groups, and a ZIL (intent log) write cache is used when synchronous write semantics are required. The blocks are arranged in a tree, as are their checksums (see Merkle tree).

Snapshots and clones

An advantage of copy-on-write is that, when ZFS writes new data, the blocks containing the old data can be retained, allowing a snapshot version of the file system to be maintained. ZFS snapshots are consistent (they reflect the entire data as it existed at a single point in time), and can be created extremely quickly, since all the data composing the snapshot is already stored, with the entire storage pool often snapshotted several times per hour. They are also space efficient, since any unchanged data is shared among the file system and its snapshots. Snapshots are inherently read-only, ensuring they will not be modified after creation, although they should not be relied on as a sole means of backup. Entire snapshots can be restored, as can individual files and directories within them. Writeable snapshots ("clones") can also be created, resulting in two independent file systems that share a set of blocks. As changes are made to any of the clone file systems, new data blocks are created to reflect those changes, but any unchanged blocks continue to be shared, no matter how many clones exist. This is an implementation of the copy-on-write principle.
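The checksum tree and copy-on-write update described above can be sketched in a few lines. This is a toy illustration only; real ZFS block pointers carry far more metadata than a bare hash:

```python
import hashlib

# Minimal Merkle-tree sketch: each pointer stores the hash of the block it
# references, so corruption is detected when the block is read back.
def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

blocks = [b"data-A", b"data-B"]
pointers = [h(b) for b in blocks]          # leaf checksums
root = h(pointers[0] + pointers[1])        # parent checksums its children

# Verify on read: recompute each checksum and compare.
assert all(h(b) == p for b, p in zip(blocks, pointers))

# Copy-on-write update: never overwrite in place. Write a new data block,
# then new pointer blocks all the way up to a new root.
blocks2 = [b"data-A", b"data-B-v2"]
pointers2 = [h(b) for b in blocks2]
root2 = h(pointers2[0] + pointers2[1])
assert root2 != root    # the old tree survives intact, usable as a snapshot
```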
Sending and receiving snapshots

ZFS file systems can be moved to other pools, including on remote hosts over the network, as the send command creates a stream representation of the file system's state. This stream can either describe the complete contents of the file system at a given snapshot, or it can be a delta between snapshots. Computing the delta stream is very efficient, and its size depends on the number of blocks changed between the snapshots. This provides an efficient strategy, e.g., for synchronizing offsite backups or high availability mirrors of a pool.

Dynamic striping

Dynamic striping across all devices to maximize throughput means that as additional devices are added to the zpool, the stripe width automatically expands to include them; thus, all disks in a pool are used, which balances the write load across them.

Variable block sizes

ZFS uses variable-sized blocks, with 128 KB as the default size. Available features allow the administrator to tune the maximum block size which is used, as certain workloads do not perform well with large blocks. If data compression is enabled, variable block sizes are used. If a block can be compressed to fit into a smaller block size, the smaller size is used on the disk to use less storage and improve IO throughput (though at the cost of increased CPU use for the compression and decompression operations).

Lightweight filesystem creation

In ZFS, filesystem manipulation within a storage pool is easier than volume manipulation within a traditional filesystem; the time and effort required to create or expand a ZFS filesystem is closer to that of making a new directory than it is to volume manipulation in some other systems.

Adaptive endianness

Pools and their associated ZFS file systems can be moved between different platform architectures, including systems implementing different byte orders.
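This endian adaptivity can be sketched with a toy 64-bit metadata field. The flag-plus-field layout here is an invented illustration, not the real block-pointer format:

```python
import struct

def write_meta(value: int, big_endian: bool) -> bytes:
    """Store a 64-bit field in the writer's byte order plus an order flag."""
    fmt = ">Q" if big_endian else "<Q"
    return bytes([int(big_endian)]) + struct.pack(fmt, value)

def read_meta(raw: bytes) -> int:
    """Interpret the field according to the stored order flag, swapping
    bytes in memory when the writer's order differs from the reader's."""
    fmt = ">Q" if raw[0] else "<Q"
    return struct.unpack(fmt, raw[1:])[0]

# The same value round-trips regardless of which "machine" wrote it.
for order in (True, False):
    assert read_meta(write_meta(0xDEADBEEF, order)) == 0xDEADBEEF
```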
The ZFS block pointer format stores filesystem metadata in an endian-adaptive way; individual metadata blocks are written with the native byte order of the system writing the block. When reading, if the stored endianness does not match the endianness of the system, the metadata is byte-swapped in memory. This does not affect the stored data; as is usual in POSIX systems, files appear to applications as simple arrays of bytes, so applications creating and reading data remain responsible for doing so in a way independent of the underlying system's endianness.

Deduplication

Data deduplication capabilities were added to the ZFS source repository at the end of October 2009, and relevant OpenSolaris ZFS development packages have been available since December 3, 2009 (build 128). Effective use of deduplication may require large RAM capacity; recommendations range between 1 and 5 GB of RAM for every TB of storage. An accurate assessment of the memory required for deduplication is made by referring to the number of unique blocks in the pool, and the number of bytes on disk and in RAM ("core") required to store each record; these figures are reported by inbuilt commands such as zpool and zdb. Insufficient physical memory or lack of ZFS cache can result in virtual memory thrashing when using deduplication, which can cause performance to plummet, or result in complete memory starvation. Because deduplication occurs at write-time, it is also very CPU-intensive and this can also significantly slow down a system. Other storage vendors use modified versions of ZFS to achieve very high data compression ratios. Two examples in 2012 were GreenBytes and Tegile. In May 2014, Oracle bought GreenBytes for its ZFS deduplication and replication technology.
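The RAM sizing rule of thumb above can be turned into a back-of-envelope estimate. The 320 bytes per dedup-table entry used here is a commonly cited planning figure, an assumption rather than a fixed ZFS constant; real numbers come from the zpool and zdb commands mentioned above:

```python
def ddt_ram_gb(pool_tb: float, recordsize_kb: int = 128,
               bytes_per_entry: int = 320) -> float:
    """Rough core memory needed to hold the dedup table for a full pool,
    assuming every block is unique (the worst case)."""
    unique_blocks = pool_tb * 1024**3 / recordsize_kb   # pool KB / KB per block
    return unique_blocks * bytes_per_entry / 1024**3    # bytes -> GiB

# About 2.5 GiB per TB of unique 128 KB blocks, consistent with the
# 1-5 GB per TB recommendation quoted above.
print(round(ddt_ram_gb(10), 1))    # 25.0
```

Smaller record sizes multiply the block count, which is why dedup on small-block workloads is far more memory-hungry.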
As described above, deduplication is usually not recommended due to its heavy resource requirements (especially RAM) and impact on performance (especially when writing), other than in specific circumstances where the system and data are well-suited to this space-saving technique.

Additional capabilities

- Explicit I/O priority with deadline scheduling.
- Claimed globally optimal I/O sorting and aggregation.
- Multiple independent prefetch streams with automatic length and stride detection.
- Parallel, constant-time directory operations.
- End-to-end checksumming, using a kind of "Data Integrity Field", allowing data corruption detection (and recovery if there is redundancy in the pool). A choice of three hashes can be used, optimized for speed (Fletcher), standardization and security (SHA-256), and salted hashes (Skein).
- Transparent filesystem compression. Supports LZJB, gzip, LZ4 and Zstd.
- Intelligent scrubbing and resilvering (resyncing).
- Load and space usage sharing among disks in the pool.
- Ditto blocks: configurable data replication per filesystem, with zero, one or two extra copies requested per write for user data, and with that same base number of copies plus one or two for metadata (according to metadata importance). If the pool has several devices, ZFS tries to replicate over different devices. Ditto blocks are primarily an additional protection against corrupted sectors, not against total disk failure.
- The ZFS design (copy-on-write + superblocks) is safe when using disks with write cache enabled, if they honor the write barriers. This feature provides safety and a performance boost compared with some other filesystems. On Solaris, when entire disks are added to a ZFS pool, ZFS automatically enables their write cache. This is not done when ZFS only manages discrete slices of the disk, since it does not know if other slices are managed by non-write-cache-safe filesystems, like UFS.
- The FreeBSD implementation can handle disk flushes for partitions thanks to its GEOM framework, and therefore does not suffer from this limitation.
- Per-user, per-group, per-project, and per-dataset quota limits.
- Filesystem encryption since Solaris 11 Express, and OpenZFS (ZoL) 0.8. (On some other systems, ZFS can utilize encrypted disks for a similar effect; GELI on FreeBSD can be used this way to create fully encrypted ZFS storage.)
- Pools can be imported in read-only mode.
- It is possible to recover data by rolling back entire transactions at the time of importing the zpool.
- ZFS is not a clustered filesystem; however, clustered ZFS is available from third parties.
- Snapshots can be taken manually or automatically. The older versions of the stored data that they contain can be exposed as full read-only file systems. They can also be exposed as historic versions of files and folders when used with CIFS (also known as SMB, Samba or file shares); this is known as "Previous versions", "VSS shadow copies", or "File history" on Windows, or AFP and "Apple Time Machine" on Apple devices.
- Disks can be marked as 'spare'. A data pool can be set to automatically and transparently handle disk faults by activating a spare disk and beginning to resilver the data that was on the suspect disk onto it, when needed.

Limitations

There are several limitations of the ZFS filesystem.

Limitations in preventing data corruption

The authors of a 2010 study that examined the ability of file systems to detect and prevent data corruption, with particular focus on ZFS, observed that ZFS itself is effective in detecting and correcting data errors on storage devices, but that it assumes data in RAM is "safe", and not prone to error.
The study comments that "a single bit flip in memory causes a small but non-negligible percentage of runs to experience a failure, with the probability of committing bad data to disk varying from 0% to 3.6% (according to the workload)", and that when ZFS caches pages or stores copies of metadata in RAM, or holds data in its "dirty" cache for writing to disk, no test is made whether the checksums still match the data at the point of use. Much of this risk can be mitigated in one of two ways: according to the authors, by using ECC RAM (though they considered that adding error detection related to the page cache and heap would allow ZFS to handle certain classes of error more robustly); or, as one of the main architects of ZFS, Matt Ahrens, explains, by enabling checksumming of data in memory with the ZFS_DEBUG_MODIFY flag (zfs_flags=0x10), which addresses these concerns.

Other limitations specific to ZFS

- Capacity expansion is normally achieved by adding groups of disks as a top-level vdev: simple device, RAID-Z, RAID-Z2, RAID-Z3, or mirrored. Newly written data will dynamically start to use all available vdevs. It is also possible to expand the array by iteratively swapping each drive in the array with a bigger drive and waiting for ZFS to self-heal; the heal time will depend on the amount of stored information, not the disk size.
- As of Solaris 10 Update 11 and Solaris 11.2, it was neither possible to reduce the number of top-level vdevs in a pool (except hot spares, cache, and log devices), nor to otherwise reduce pool capacity. This functionality was said to be in development in 2007. Enhancements to allow reduction of vdevs are under development in OpenZFS. Online shrinking by removing non-redundant top-level vdevs is supported since Solaris 11.4, released in August 2018, and OpenZFS (ZoL) 0.8, released May 2019.
- It was not possible to add a disk as a column to a RAID-Z, RAID-Z2 or RAID-Z3 vdev.
However, a new RAID-Z vdev can be created instead and added to the zpool.
- Some traditional nested RAID configurations, such as RAID 51 (a mirror of RAID 5 groups), are not configurable in ZFS without some third-party tools. Vdevs can only be composed of raw disks or files, not other vdevs, using the default ZFS management commands. However, a ZFS pool effectively creates a stripe (RAID 0) across its vdevs, so the equivalent of a RAID 50 or RAID 60 is common.
- Reconfiguring the number of devices in a top-level vdev requires copying data offline, destroying the pool, and recreating the pool with the new top-level vdev configuration, except for adding extra redundancy to an existing mirror, which can be done at any time; or, if all top-level vdevs are mirrors with sufficient redundancy, the zpool split command can be used to remove a vdev from each top-level vdev in the pool, creating a second pool with identical data.
- IOPS performance of a ZFS storage pool can suffer if the ZFS RAID is not appropriately configured. This applies to all types of RAID, in one way or another. If the zpool consists of only one group of disks configured as, say, eight disks in RAID-Z2, then the IOPS performance will be that of a single disk (write speed will be equivalent to 6 disks, but random read speed will be similar to a single disk). However, there are ways to mitigate this IOPS performance problem, for instance adding SSDs as L2ARC cache, which can boost IOPS into the 100,000s. In short, a zpool should consist of several groups of vdevs, each vdev consisting of 8–12 disks, if using RAID-Z. It is not recommended to create a zpool with a single large vdev, say 20 disks, because IOPS performance will be that of a single disk, which also means that resilver time will be very long (possibly weeks with future large drives).
- Resilver (repair) of a failed disk in a ZFS RAID can take a long time, which is not unique to ZFS; it applies to all types of RAID, in one way or another.
This means that very large volumes can take several days to repair or to return to full redundancy after severe data corruption or failure, and during this time a second disk failure may occur, especially as the repair puts additional stress on the system as a whole. In turn, this means that configurations that only allow for recovery of a single disk failure, such as RAID-Z1 (similar to RAID 5), should be avoided. Therefore, with large disks, one should use RAID-Z2 (allows two disks to fail) or RAID-Z3 (allows three disks to fail). ZFS RAID differs from conventional RAID by only reconstructing live data and metadata when replacing a disk, not the entirety of the disk including blank and garbage blocks, which means that replacing a member disk on a ZFS pool that is only partially full will take proportionally less time compared to conventional RAID.

Data recovery

Historically, ZFS has not shipped with tools such as fsck to repair damaged file systems, because the file system itself was designed to self-repair, so long as it had been built with sufficient attention to the design of storage and redundancy of data. If the pool was compromised because of poor hardware, inadequate design or redundancy, or unfortunate mishap, to the point that ZFS was unable to mount the pool, traditionally there were no tools which allowed an end-user to attempt partial salvage of the stored data. This led to threads in online forums where ZFS developers sometimes tried to provide ad-hoc help to home and other small-scale users facing loss of data due to their inadequate design or poor system management. Modern ZFS has improved considerably on this situation over time, and continues to do so:
- Removal or abrupt failure of caching devices no longer causes pool loss. (At worst, loss of the ZIL may lose very recent transactions, but the ZIL does not usually store more than a few seconds' worth of recent transactions. Loss of the L2ARC cache does not affect data.)
- If the pool is unmountable, modern versions of ZFS will attempt to identify the most recent consistent point at which the pool can be recovered, at the cost of losing some of the most recent changes to the contents. Copy-on-write means that older versions of data, including top-level records and metadata, may still exist even though they are superseded, and if so, the pool can be wound back to a consistent state based on them. The older the data, the more likely it is that at least some blocks have been overwritten and that some data will be irrecoverable, so there is a limit at some point on the ability of the pool to be wound back.
- Informally, tools exist to probe the reason why ZFS is unable to mount a pool, and guide the user or a developer as to manual changes required to force the pool to mount. These include using zdb (ZFS debug) to find a valid importable point in the pool, using dtrace or similar to identify the issue causing mount failure, or manually bypassing health checks that cause the mount process to abort, and allowing mounting of the damaged pool.

A range of significantly enhanced methods is gradually being rolled out within OpenZFS. These include:
- Code refactoring, and more detailed diagnostic and debug information on mount failures, to simplify diagnosis and fixing of corrupt pool issues;
- The ability to trust or distrust the stored pool configuration. This is particularly powerful, as it allows a pool to be mounted even when top-level vdevs are missing or faulty, when top-level data is suspect, and also to rewind beyond a pool configuration change if that change was connected to the problem. Once the corrupt pool is mounted, readable files can be copied for safety, and it may turn out that data can be rebuilt even for missing vdevs, by using copies stored elsewhere in the pool.
- The ability to fix the situation where a disk needed in one pool was accidentally removed and added to a different pool, causing it to lose metadata related to the first pool, which becomes unreadable.

OpenZFS and ZFS

Oracle Corporation ceased the public development of both ZFS and OpenSolaris after the acquisition of Sun in 2010. Some developers forked the last public release of OpenSolaris as the Illumos project. Because of the significant advantages present in ZFS, it has been ported to several different platforms with different features and commands. For coordinating the development efforts and to avoid fragmentation, OpenZFS was founded in 2013. According to Matt Ahrens, one of the main architects of ZFS, over 50% of the original OpenSolaris ZFS code has been replaced in OpenZFS with community contributions as of 2019, making "Oracle ZFS" and "OpenZFS" politically and technologically incompatible.

Commercial and open source products

- 2008: Sun shipped a line of ZFS-based 7000-series storage appliances.
- 2013: Oracle shipped the ZS3 series of ZFS-based filers and seized first place in the SPC-2 benchmark with one of them.
- 2013: iXsystems ships ZFS-based NAS devices called FreeNAS (now named TrueNAS CORE) for SOHO, and TrueNAS for the enterprise.
- 2014: Netgear ships a line of ZFS-based NAS devices called ReadyDATA, designed to be used in the enterprise.
- 2015: rsync.net announces a cloud storage platform that allows customers to provision their own zpool and import and export data using zfs send and zfs receive.
- 2020: iXsystems begins development of a ZFS-based hyperconverged software called TrueNAS SCALE for SOHO, and TrueNAS for the enterprise.

Oracle Corporation, closed source, and forking (from 2010)

In January 2010, Oracle Corporation acquired Sun Microsystems, and quickly discontinued the OpenSolaris distribution and the open source development model.
In August 2010, Oracle discontinued providing public updates to the source code of the Solaris OS/Networking repository, effectively turning Solaris 11 back into a closed source proprietary operating system. In response to the changing landscape of Solaris and OpenSolaris, the illumos project was launched via webinar on Thursday, August 3, 2010, as a community effort of some core Solaris engineers to continue developing the open source version of Solaris, and complete the open sourcing of those parts not already open sourced by Sun. illumos was founded as a Foundation, the illumos Foundation, incorporated in the State of California as a 501(c)(6) trade association. The original plan explicitly stated that illumos would not be a distribution or a fork. However, after Oracle announced discontinuing OpenSolaris, plans were made to fork the final version of the Solaris ON, allowing illumos to evolve into an operating system of its own. As part of OpenSolaris, an open source version of ZFS was therefore integral within illumos. ZFS was widely used within numerous platforms, as well as Solaris. Therefore, in 2013, the co-ordination of development work on the open source version of ZFS was passed to an umbrella project, OpenZFS. The OpenZFS framework allows any interested parties to collaboratively develop the core ZFS codebase in common, while individually maintaining any specific extra code which ZFS requires to function and integrate within their own systems.

Version history

Note: The Solaris version under development by Sun since the release of Solaris 10 in 2005 was codenamed 'Nevada', and was derived from what was the OpenSolaris codebase. 'Solaris Nevada' is the codename for the next-generation Solaris OS to eventually succeed Solaris 10, and this new code was then pulled successively into new OpenSolaris 'Nevada' snapshot builds. OpenSolaris is now discontinued and OpenIndiana forked from it.
A final build (b134) of OpenSolaris was published by Oracle (2010-Nov-12) as an upgrade path to Solaris 11 Express.

List of operating systems supporting ZFS

List of operating systems, distributions and add-ons that support ZFS, the zpool version they support, and the Solaris build they are based on (if any):

See also
- Comparison of file systems
- List of file systems
- Versioning file system – List of versioning file systems

Notes

References

Bibliography

External links
- Fork Yeah! The Rise and Development of illumos - slide show covering much of the history of Solaris, the decision to open source by Sun, the creation of ZFS, and the events causing it to be closed sourced and forked after Oracle's acquisition.
- The best cloud File System was created before the cloud existed (archived on Dec. 15, 2018)
- Comparison of SVM mirroring and ZFS mirroring
- EON ZFS Storage (NAS) distribution
- End-to-end Data Integrity for File Systems: A ZFS Case Study
- ZFS – The Zettabyte File System (archived on Feb. 28, 2013)
- ZFS and RAID-Z: The Über-FS?
- ZFS: The Last Word In File Systems, by Jeff Bonwick and Bill Moore (archived on Aug. 29, 2017)
- Visualizing the ZFS intent log (ZIL), April 2013, by Aaron Toponce
- Features of illumos including OpenZFS
- Previous wiki page with more links: Getting Started with ZFS, Sep. 15, 2014 (archived on Dec. 30, 2018), part of the illumos documentation

Categories: 2005 software, Compression file systems, Disk file systems, RAID, Sun Microsystems software, Volume manager
2555836
https://en.wikipedia.org/wiki/Encrypted%20filesystem
Encrypted filesystem
Encrypted filesystem may refer to:
- Filesystem-level encryption, a form of disk encryption where individual files or directories are encrypted by the file system itself
- Encrypting File System, the Microsoft Windows encryption subsystem of NTFS

See also
- Disk encryption, which encrypts every bit of data that goes on a disk or disk volume
- Disk encryption hardware
- Disk encryption software
- Hardware-based full disk encryption
511837
https://en.wikipedia.org/wiki/List%20of%20college%20mascots%20in%20the%20United%20States
List of college mascots in the United States
This is an incomplete list of U.S. college mascots' names, consisting of named incarnations of live, costumed, or inflatable mascots. For team names, see List of college sports team nicknames. Mascot index 0–9 #1 Fan – child-like costumed mascot of Saginaw Valley State University A Ace Purple – official mascot of the Purple Aces of the University of Evansville Ace the Skyhawk – official mascot of Stonehill College Ace the Warhawk – official mascot of the Louisiana-Monroe Warhawks of the University of Louisiana at Monroe. Action C – official mascot of the Chippewas of Central Michigan University Air Dunker – inflatable mascot of the Murray State University Racers. Cousin of Dunker. Albert and Alberta Gator – the male and female alligator mascots of the Florida Gators of the University of Florida. Alphie – the costumed wolf mascot of the Nevada Wolf Pack of the University of Nevada, Reno. Archibald "Archie" Eagle – official mascot of the University of Southern Indiana. Archibald "Archie" McGrowl – a cougar costume. The mascot of Misericordia University Adelaide, "Addie" – English Bulldog. Mascot of the University of Redlands. Aristocat – the costumed mascot of the Tennessee State University Tigers and Lady Tigers Army Mules – three mules that act as the mascots for the Army Black Knights of the United States Military Academy (West Point). Argie the Argonaut – "Jason and the Argonauts" mascot of the West Florida Argonauts of the University of West Florida Arnie – "Corsair" mascot of University of Massachusetts Dartmouth Artie the Fighting Artichoke – artichoke mascot of Scottsdale Community College since the early 1970s. Arvee the Golden Eagle – Golden eagle mascot of Rock Valley College. Inaugurated in late 2013. Athena – the female mascot of Claremont McKenna College, Harvey Mudd College, and Scripps College. Attila – the costumed duck mascot of the ducks of Stevens Institute of Technology. Aubie – the tiger mascot of the Auburn Tigers of Auburn University. 
Avalanche the Golden Bear – The costumed bear mascot of the Golden Bears of Kutztown University of Pennsylvania. Awesome Eagle – costumed mascot of the Golden Eagles of Tennessee Technological University (Tennessee Tech). Azul the Eagle – costumed mascot of Florida Gulf Coast University. B Baby Blue – University of Delaware; second mascot of the University of Delaware; more of a child-friendly mascot Baby Jay – mascot of University of Kansas, accompanied by Big Jay Baldwin the Eagle – the American bald eagle mascot of Boston College Baldwin Jr – the inflatable version of Baldwin the Eagle at Boston College Bananas T. Bear – The black bear mascot of the University of Maine The Battling Bishop – mascot of Ohio Wesleyan Baxter – the bearcat mascot of Binghamton University The Bearcat – name of University of Cincinnati mascot. Beaker – name of the mascot of Eagles of Morehead State University. Beaver – name of the mascot of California Institute of Technology/Caltech Beaver – The beaver mascot of the University of Maine at Farmington Beaver – The beaver mascot of the Polytechnic University of Puerto Rico Bella – The female costumed mascot of the Angelo State University Rams Ben – The Bulldog mascot of McPherson College Benny the Bengal – The costumed Bengal tiger mascot of Idaho State University Benny – the beaver mascot of the Oregon State Beavers of Oregon State University Bernie – the costumed St. Bernard dog of Siena College. Bevo – a live Texas longhorn steer, the official mascot of the Texas Longhorns of the University of Texas at Austin Big Al – the costumed elephant mascot of the Alabama Crimson Tide of the University of Alabama. Big Blue – a lion mascot wearing a blue king's crown representing the Old Dominion Monarchs. Big Blue – a bull mascot representing the Utah State Aggies. Big Jay – the costumed mascot of the Jayhawks of the University of Kansas. Big Red – The main Fighting Razorback mascot of the University of Arkansas. 
A Cardinal, the mascot of Lamar University. Also the blob-like costumed mascot of the Western Kentucky Hilltoppers and the Daniel Boone-like costumed mascot of the Sacred Heart University Pioneers. Big Stuff – the eagle mascot of Winthrop University. Bill the Goat – United States Naval Academy live goat and costumed mascot Billy Bluejay – the bluejay mascot of Creighton University Billy Bronco – Official costumed mascot of the Cal Poly Pomona Broncos; the athletic teams representing California State Polytechnic University, Pomona (Cal Poly Pomona). Billy the Panther – Official costumed mascot of the Eastern Illinois Panthers; the athletic teams representing Eastern Illinois University. Bison – name of the mascot of Howard University and Gallaudet University. The Bird – costumed mascot of the Air Force Falcons of the United States Air Force Academy Black Jack – since 2000, the costumed mascot of the Army Black Knights of the United States Military Academy (West Point) (Army) Blaster the Burro – one of two mascots of Colorado School of Mines along with Marvin the Miner. Blaze – the official mascot of the Alverno College Inferno. Blaze – the official Vulcan mascot of California University of Pennsylvania Blaze – the official dragon mascot of the University of Alabama at Birmingham. Blaze – the official red dragon mascot of the State University of New York College at Cortland. Blaze – the official maverick (stallion) mascot of the University of Texas at Arlington. Blitz – the official bearcat mascot of Willamette University (OR). Blizzard – the official husky mascot of St. Cloud State University. Blizzard T. Husky – the husky of Michigan Tech. The T stands for "The". Blizzard can usually be seen ice skating before home hockey games. Blockie – the unofficial mascot of the University of Houston–Clear Lake that has been used on many university publications. It is an anthropomorphized block displaying the UHCL logo.
Blossom and Weezy – the costumed mascots of the University of Arkansas at Monticello Blue – a live English Bulldog mascot for Butler University; Blue (deceased), Blue II (deceased), Blue III (Trip) & Blue IV Blue – a live bobcat, one of three official mascots of the University of Kentucky. Unlike the other two, he never attends home games because of his species' shy nature. He lives at the Salato Wildlife Education Center, a state-run facility in Frankfort. The Blue Blob – the blue fuzzy costumed mascot is one of two official mascots for Xavier University (Cincinnati) and is especially popular with younger fans. Blue Jay – Creighton University, Johns Hopkins University, Polytechnic Institute of New York University. The Blue Devil – a costumed student who serves as mascot of the Duke Blue Devils of Duke University. Bobby – the Bearcat of Northwest Missouri State University Bobcat – The Bobcat mascot of University of California, Merced Bobcat – a costumed Bobcat mascot of New York University Bobcat – the Bobcat mascot of Bates College in Lewiston, Maine Boilermaker Special – a railroad locomotive replica mascot (officially) of Purdue. Boko – The Bobcat mascot of Texas State University. Bodi – the male Bison mascot of Manhattan Area Technical College. Bogey – the bearcat mascot of McKendree University. Boll Weevil – mascot of the University of Arkansas – Monticello, named by school president Frank Horsfall in 1925 Boomer – the bear of Missouri State University. Boomer – the bear mascot of Lake Forest College. Boomer – The bobcat mascot of Quinnipiac University. Boomer – one of the two white ponies of the University of Oklahoma that pulls the Sooner Schooner (the other pony's name is Sooner). There is both a live pony and costumed mascot. Boss – The Boston Terrier mascot of Wofford College. Boss Hogg – The inflatable Razorback mascot of the University of Arkansas.
Brewer – The alcoholic beverage mascot of Vassar College Brandi – the female Bison mascot of Manhattan Area Technical College. Brit – Knighted Briton at Albion College. Brody the Bruin – the bruin of Bob Jones University. Bruce D. Bear – the mascot of The University of Central Arkansas Bruiser – costumed mascot of the Belmont University Bruins Bruiser – the Bulldog at Adrian College, Adrian, Michigan Bruiser and Marigold – the costumed bear for the Baylor Bears Bruno – the brown bear of Brown University. Brutus Buckeye – the anthropomorphic buckeye mascot of The Ohio State University. Brutus the Bruin Bear – Salt Lake Community College. Brutus Bulldog – Bulldog mascot at Ferris State University. Bryan – the lion mascot of Bryan College in Dayton, Tennessee. Bucky the Beaver – The stoic, determined, furry symbol of American River College in Sacramento. Bucky the Beaver – The buck-toothed mascot of Bemidji State University in Bemidji, Minnesota Bucky the Bronco – The official mascot of Santa Clara University in Santa Clara, California Bucky Beaver – Nature's engineer, the mascot of the California Institute of Technology (Caltech) in Pasadena, California Bucky The Parrot – the costumed parrot mascot of Barry University in Florida Bucky – the costumed bronc was the mascot of the University of Texas–Pan American before the school was merged into the current University of Texas Rio Grande Valley. Bucky (or Bucky Bison) – the costumed bison mascot of Bucknell University Bucky Badger – the lovable but mischievous badger mascot of the Wisconsin Badgers of the University of Wisconsin–Madison. Buddy Broncho – the official mascot of the University of Central Oklahoma Bronchos. The Buffalo – a costumed student and mascot of Milligan College Buford T. Beaver – the official mascot of Buena Vista University Bullet – a live black American Quarter Horse mascot of the Oklahoma State Cowboys and Cowgirls of Oklahoma State University–Stillwater. 
Bully – both the live Bulldog and the costumed Bulldog mascot of Mississippi State University. Burghy – the costumed cardinal mascot of SUNY Plattsburgh. Burrowing Owl – the mascot of the Florida Atlantic Owls of Florida Atlantic University. Buster Rameses XXIX – a live ram that serves as the mascot for the Fordham University Rams; Fordham was the first school to use a ram as mascot, in 1893. The current live ram is called Buster. There is also a costumed mascot. Buster Bronco – The official mascot of Boise State University Buster Bronco – The official mascot of Western Michigan University Butch T. Cougar – the cougar of Washington State. The "T" stands for "The". Butler Blue – the living Bulldog mascot of Butler University. The current mascot is Butler Blue III, also called "Trip". Buzz – the costumed yellow jacket mascot of Georgia Tech. C CAM the Ram – both the live male bighorn sheep and the costumed ram representing the Colorado State Rams of Colorado State University. Captain Skyhawk – the costumed mascot of the University of Tennessee at Martin Skyhawks. Captain Chris – a costumed likeness of Christopher Newport, the mascot of Christopher Newport University. Captain Cane – since 1994, the official mascot of the University of Tulsa, an anthropomorphized golden hurricane with human attributes such as biceps, clothes, and a perpetual smirk. From 1978 to 1994, the mascot was Huffy. Cardinal Bird – the costumed mascot of the Wesleyan University Cardinals, University of the Incarnate Word Cardinals, as well as the Cardinals from the University of Louisville (sometimes called Red Bird). Cardinal Bird – the red bird mascot of Lamar University Captain Buc – the costumed mascot of Massachusetts Maritime Academy Captain George – former costumed pirate mascot of Armstrong State University. CavMan, The Cavalier – the official mascot of the University of Virginia. Cayenne – a costumed chili pepper for the Ragin' Cajuns of Louisiana–Lafayette.
Cecil the Sagehen – the greater sage-grouse mascot of the Pomona-Pitzer Sagehens of Pomona College and Pitzer College Cecil the Crusader – North Greenville University Centennial Jay – University of Kansas Champ – The Bobcat mascot of Montana State University Champ – the costumed Bulldog mascot of Louisiana Tech University and the costumed Bobcat of Montana State University and the live Bulldog mascot of the University of Minnesota Duluth Champ the Husky – the Husky mascot of University of Southern Maine Chaparral – College of DuPage Charlie Cardinal – the cardinal mascot of Ball State University. Charlie the Charger – the costumed medieval war horse of University of New Haven. Charlie the Coyote – University of South Dakota Charlie T. Cougar – Concordia University Chicago. The "T" stands for "the." Charlie Oredigger – Montana Tech Chauncey – costumed mascot of the Coastal Carolina Chanticleers. Chauncey – beaver mascot of Champlain College. Chester – a costumed male lion, a mascot for Widener University (PA). Chief Illiniwek – the former official symbol of the University of Illinois Urbana-Champaign Fighting Illini, retired on February 21, 2007 Chief Osceola and his horse Renegade – the official symbols of the Florida State Seminoles. Chip – the costumed buffalo mascot of the University of Colorado Chompers – the costumed alligator mascot of Allegheny College Clash the Titan – the costumed Titan mascot of the University of Wisconsin-Oshkosh Clawed Z. Eagle – the costumed eagle of American University Clutch – the costumed mountain hawk of Lehigh University Clyde – the male costumed mascot of Olivet College Clyde The Cougar – the costumed cougar mascot of the College of Charleston.
Cocky – the costumed mascot of the Jacksonville State University Gamecocks Cocky – the costumed mascot of the South Carolina Gamecocks Cody – the cougar mascot of Columbus State University The Colonel – the costumed mascot of the Eastern Kentucky University Colonels and Lady Colonels Colonel Ebirt – the former mascot of the College of William & Mary Tribe. The name "Ebirt" is "Tribe" spelled backwards; the mascot was a green blob dressed in colonial garb. Colonel Rock – a live bulldog mascot, one of two official mascots for Western Illinois University. Colonel Tillou – the official costumed mascot of Nicholls State University. Cool E. Cougar – cougar mascot for the College of Alameda; commonly goes by Coolie. The "E" stands for education. Coop – the male mascot of Saginaw Valley State University Cooper the Cougar – the official mascot of Caldwell University Corky – the cardinal costumed mascot of Concordia University Ann Arbor Corky – the hornet costumed mascot of Emporia State University Cosmo – the costumed cougar mascot of Brigham Young University. Cowboy Joe – the live Shetland pony mascot of the University of Wyoming. Crash the Cougar – the costumed cougar mascot of California State University San Marcos. The Crusader – the official mascot of the Valparaiso Crusaders of Valparaiso University. The Crusader – the official mascot of the Evangel Crusaders of Evangel University. Cubby – the second mascot of Brown University. It is a young, hat-wearing bear appealing to young children. Cy the Cardinal – the costumed cardinal that serves as the mascot of the Iowa State Cyclones Curtiss the Warhawk – the costumed bird of prey that serves as the mascot of Auburn University at Montgomery Cutlass T. Crusader – or "Cuttie", the costumed and cartooned lion mascot of Clarke University in Dubuque, Iowa. Cyrus – the costumed kilt-clad mascot of the Presbyterian College Blue Hose D D'Artagnan – the captain of the Musketeers of the Guard is the mascot of Xavier University of Cincinnati.
Damien, The Great Dane – The mascot of the University at Albany Deacon – the mascot of Bloomfield College Delphy – the mascot of the Universidad del Sagrado Corazón Demon Deacon – the mascot of Wake Forest University Diego – the mascot of University of San Diego The Oregon Duck – the mascot of the Oregon Ducks for the University of Oregon Duke Dog – the costumed mascot of the James Madison Dukes Doc – the costumed mascot of the Towson Tigers Dominic – the Rambouillet ram mascot of Angelo State University Don – the mastodon of Indiana University-Purdue University Fort Wayne since 1970 Dooley – the skeleton of Emory University The Don – the mascot of the University of San Francisco The Doyle Owl – unofficial mascot of Reed College, a stone or cement owl sculpture subject to theft, showings, battles royale, capture, and related pranks Dubs II – the 14th live husky mascot of the University of Washington Duncan – the dolphin mascot of Jacksonville University Dunker – the horse mascot of Murray State University Durango – the bull mascot of University of Nebraska at Omaha Dusty – the Dustdevil mascot of Texas A&M International University Dutch – the costumed mascot of the Hope College Flying Dutchmen E Eddie the Cougar, #57 – mascot of Southern Illinois University Edwardsville Eddie the Eagle – mascot of North Carolina Central University Eddie the Golden Eagle – mascot of California State University, Los Angeles Eli the Eagle – mascot of Oral Roberts University Ellsworth the Golden Eagle – the mascot of SUNY Brockport Elwood – horse mascot of Longwood University Scrappy the Eagle – mascot of University of North Texas Ernie the Eagle – mascot of Bridgewater College Ernie the Eagle – mascot of Embry–Riddle Aeronautical University Eutectic – the mascot of St. Louis College of Pharmacy The Explorer – the mascot of the La Salle Explorers of La Salle University F The Falcon – United States Air Force Academy, Bowling Green (OH) State University Fandango – a costumed falcon mascot of Messiah College Fear, The Gold Knight – mascot of the College of Saint Rose The Fighting Okra – name of the unofficial mascot of Delta State University since the late 1980s. It has been featured in David Letterman's "Top Ten Worst Mascots List" Fighting Pickle – mascot of University of North Carolina School of the Arts since 1975 Finn – Mascot of Landmark College Flex the Falcon – falcon mascot of Bentley University Freddie – the costumed falcon mascot of Fairmont State University Freddie and Frieda Falcon – costumed mascots of the Bowling Green State University Falcons Freddy Falcon – mascot of the University of Wisconsin–River Falls Falcons and of Friends University Falcons Freedom – live bald eagle mascot of Georgia Southern University The Friar – mascot of Providence College; a costumed Dominican friar, re-introduced in 2001 after a six-season lapse. Friar Boy – a Dalmatian, animal mascot of Providence College; the last Friar Boy (V) died in 2001; also was a costumed Dalmatian from 1995–2001. Flash – Kent State University Golden Flashes Flying Fleet – mascot of Erskine College G Gael Force 1 – the knight mascot of Saint Mary's College of California. Gaucho – the mascot of The University of California, Santa Barbara. Gaylord and Gladys – the male and female camel mascots of Campbell University General – the grizzly bear of Georgia Gwinnett College. General The Jaguar – The mascot of Texas A&M University-San Antonio. General Scott – one of three live mule mascots of the United States Military Academy (Army) George – The mascot of The George Washington University Colonials.
Gladys – the squirrel mascot of Mary Baldwin College (from the squirrel on their coat of arms) Golden Eagle – Marquette Golden Eagles Glycerin – female knight mascot of the University of Central Florida; companion to Knightro Golden Griffin – griffin; mascot for Canisius College Golden Lion – lion; mascot for Raritan Valley Community College Goldy the Gopher – University of Minnesota Gompei – the bronzed head of a now-deceased goat, the mascot of Worcester Polytechnic Institute Gnarls – the narwhal mascot of The New School since 2013 Gnarlz – Introduced in 2008 as the more aggressive partner of Wild E. Cat at the University of New Hampshire The Governor – mascot of Austin Peay State University Greyhound – mascot of Loyola University Maryland Griff – live bulldog mascot of Drake University Griffin – official or quasi-official mascot of Reed College, taken from the coat of arms of Simeon Reed whose widow's bequest funded the establishment of the college; see also The Doyle Owl q.v. The Griffin – the costumed mascot of the William & Mary Tribe Grizz – mascot of Oakland University Grizz the Logger – mascot of University of Puget Sound Grubby Grubstake – the miner mascot of South Dakota School of Mines and Technology Gunrock the Mustang – University of California, Davis Gunston – a green creature with a colonial tri-cornered hat named after Gunston Hall, the colonial-period home of the namesake of George Mason University Gus the Eagle – Costumed mascot of Georgia Southern University Gus the Goose – Costumed mascot of the Washington College Shoremen Gus the Gorilla – Pittsburg State University Gussie – the live northern goshawk mascot of State University of New York at New Paltz (SUNY New Paltz) H Hairy Dawg – a person costumed as a Bulldog, University of Georgia (See also Uga below) Haley – the female costumed mascot of Olivet College Halo – a live St. Bernard dog, the official mascot of Carroll College, Helena, Montana Handsome Dan – a live Bulldog, the official mascot of the Yale Bulldogs and the first mascot adopted by a university in the USA Harry the Hawk – the 9 ft tall 'walk-around' mascot of the University of Maryland Eastern Shore Harry the Husky – the husky mascot of the University of Washington Havoc the Wolf – the costumed wolf mascot of Loyola University New Orleans The Hawk – a costumed student who serves as mascot of the Saint Joseph's Hawks; the mascot flaps its "wings" without interruption (even during halftime) throughout SJU basketball games Hendrix the Husky – the costumed mascot of University of Washington Tacoma Hera – the live owl mascot of Florida Atlantic University Herbie Husker – the costumed mascot of the University of Nebraska Herky the Hawk – the costumed hawk-like bird of indeterminate species, mascot of the University of Iowa Herky the Hornet – the costumed hornet mascot of Sacramento State Herm – the costumed lion mascot of Eastern Mennonite University Hey Reb – the former costumed mascot of the University of Nevada, Las Vegas. Retired in 2021 The Highlander – the costumed mascot of Radford University Hink – the costumed mascot of Butler University Hillcat – the costumed mascot of Rogers State University Hokie Bird – the costumed mascot of Virginia Tech Holly the Husky – the costumed mascot of University of Washington Bothell Hoot – the mascot of the Rowan University Profs Hootie the Owl – the costumed red owl mascot of Keene State College Hootie the Owl – the costumed mascot of Oregon Institute of Technology Hooter – the costumed owl mascot of Temple University Hook 'em – the costumed longhorn mascot of The University of Texas Howie the Hawk – the costumed mascot of the University of Hartford Howl – the official mascot of the Arkansas State Red Wolves. I Ichabod – the mascot of Washburn University representing its namesake's first name. Iggy the Greyhound – mascot of Loyola University Maryland Iggy the Golden Eagle – mascot of Marquette University.
Iggy the Lion – mascot of Loyola Marymount University. Ike the Eagle – mascot of Oklahoma Christian University. Indy – the Greyhound mascot of the University of Indianapolis. Indians – the mascot of Catawba College J Javelina – the mascot of Texas A&M University–Kingsville J.C. – the live ram mascot of Shepherd University J. Denny and Jenny Beaver – the beaver mascots of Bluffton University Jack the Jackrabbit – the rabbit mascot of South Dakota State University Jack the Bulldog – a live Bulldog of the Georgetown Hoyas. There is a costumed mascot with the same name. Jack and Jill – the sailfish of Palm Beach Atlantic University Jay and Baby Jay – costumed "Jayhawk" mascots of the University of Kansas that are a mythical cross between a blue jay and a sparrowhawk. Jay – the costumed bluejay mascot of Johns Hopkins University Jawz, Jinx and Jazzy – the costumed jaguar mascots of Indiana University-Purdue University Indianapolis Jerry the Bulldog – the live English Bulldog mascot of Arkansas Tech University. After a 76-year absence, Jerry was adopted as the Arkansas Tech mascot on October 23, 2013. Joe Bruin and Josephine Bruin – the bruins of the University of California, Los Angeles (UCLA) Joe Miner – the pickaxe, pistol, and slide rule-toting mascot of Missouri University of Science and Technology (Missouri S&T) Joe Vandal – the costumed Vandal mascot of the University of Idaho. John Poet – mascot of Whittier College (both named after poet John Greenleaf Whittier) Johnny Thunderbird – mascot of St. John's University (New York) Jonas – the cougar mascot of Clark University Jonathan – the husky mascot of the University of Connecticut Judge – the live black bear mascot of Baylor University. The name is also given to the inflatable costumed mascot.
Jumbo – the elephant mascot of Tufts University Junior Smokey – the second costumed blue tick hound mascot of the University of Tennessee that looks younger than the original costumed Smokey (see below) but considers himself his brother. K Kaboom – costumed mascot of the Bradley University Braves Kate and Willy – Hofstra University's costumed lion mascots. Kasey the Kangaroo – University of Missouri-Kansas City's costumed mascot. Keggy the Keg – unofficial mascot of Dartmouth College (proposed by the Jack-o-Lantern humor magazine). Katy the Kangaroo – Austin College's costumed mascot. Kid – one of the two costumed tiger mascots of Texas Southern University. Killian the Gael – The mascot of Iona College. Knightro – the costumed knight of the University of Central Florida. King Husky – the live Husky mascot of Northeastern University. King Triton – the mascot of University of California, San Diego (UCSD). Klawz Da Bear – the costumed bear of the University of Northern Colorado. Klondike – the costumed polar bear of Ohio Northern University. Kody – the costumed kodiak bear of Cascadia College. L Landshark – The recently adopted mascot of the University of Mississippi (Ole Miss). LaCumba – The former live jaguar mascot of Southern University. Lafitte – the costumed alligator in pirate garb that is the mascot of the University of New Orleans Privateers. Laker Louie – the costumed mascot of the Lake Land College Lakers. LeeRoy the Tiger – the costumed mascot of Trinity University (Texas). Leroy the Lynx – the costumed mascot of Rhodes College. The new name (formerly known as Maximus) and costume were introduced in 2013 after a vote from the student body.
Leo and Lea Leopard – two costumed mascots of the University of La Verne Leo and Una – two live lion mascots of the University of North Alabama Lions The Leprechaun – the mascot of the Notre Dame Fighting Irish Lightning – the costumed blue pegasus-like creature that is the mascot for the Middle Tennessee Blue Raiders Lil' Joe Mountie – the mascot of Mt. San Antonio College Mounties, in Walnut, California. Lil' Red – the inflatable-costumed boy mascot of the University of Nebraska–Lincoln Lizzie the Screaming Eagle – the costumed eagle mascot of the oldest women's college in New Jersey, the College of Saint Elizabeth Lobo – the mascot of John Carroll University. Lobo comes from the coat of arms of the family of St. Ignatius of Loyola (founder of the Jesuits), "Lobo y Olla", translated "Wolf and Pot". Lord Jeff – formerly the unofficial mascot of Amherst College; it has now been disassociated from the school by the Board of Trustees. Originally depicted Lord Jeffery Amherst, the namesake of the town where the College is located. Louie the Cardinal – the official name of the mascot of the University of Louisville LU – the female costumed cardinal mascot of Lamar University LU Bison – the costumed bison mascot of Lipscomb University LU Wolf – the official wolf mascot of the Loyola-Chicago Ramblers; received a major facelift in 2001 Louie the Laker – mascot of Grand Valley State University Louie the Loper – mascot of University of Nebraska at Kearney Louie the Triton – the mascot of University of Missouri–St. Louis Lobo Louie – the costumed wolf mascot of the University of New Mexico Louie the Lumberjack – mascot of Northern Arizona University Lucky the Lion – the lion mascot of Texas A&M University–Commerce, formerly East Texas State University. Lucy Lobo – the costumed female wolf companion to Louie Lobo of the University of New Mexico Lucy – a live binturong who is a mascot of the University of Cincinnati.
Lucy lives at the Cincinnati Zoo and frequently attends university events on a leash. Lulu – female Bulldog mascot of Gardner–Webb University M Mac T. Bulldog – male Bulldog mascot of Gardner–Webb University Mac the Scot – official mascot of Macalester College Mad Jack – name of the mountaineer mascot of Western State Colorado University Magnus – the current mascot of Cleveland State University Mammoth – official mascot of Amherst College Mandrake – short-lived secondary mascot of the University of Oregon Marauder – official mascot of Millersville University (a marauder is a land pirate) Marco – the costumed American bison mascot of the Marshall Thundering Herd Mario the Magnificent Dragon – the official mascot of the Drexel University Dragons Marty the Saint – the official mascot of Saint Martin's University Marvin the Miner – Colorado School of Mines The Masked Rider – one of the official mascots of the Texas Tech Red Raiders Matty the Matador – the mascot of Cal State Northridge Max C Bear – the costumed mascot of SUNY Potsdam Melrose – a costumed female lion, a mascot for Widener University (PA). Miami Maniac – the baseball mascot of the University of Miami Mike the Tiger – live tiger (usually a Bengal tiger, but currently a Bengal-Siberian mix) mascot of the LSU Tigers, as well as the costumed tiger mascot. Minerva the Spartan – the official mascot of The University of North Carolina at Greensboro Mingus – the costumed jazz cat, official mascot for Berklee College of Music. Mingo – the costumed Husky mascot of Houston Baptist University. Miss Pawla – a costumed jaguaress of the University of South Alabama. She is the friend of Southpaw (see below). Mr. C (full name: Mr. Commodore) – the costumed commodore mascot of Vanderbilt University Mr. Wuf and Ms. Wuf – the costumed wolf mascots of the North Carolina State Wolfpack; they were married during halftime of the NC State-Wake Forest game on February 28, 1981 (Wake Forest University's Demon Deacon mascot presiding). Mocsie – Snake-like mascot of the Florida Southern College "Moccasins." Moe – kangaroo mascot of the VMI Keydets MoHarv – the golden eagle mascot of the University of Charleston in Charleston, West Virginia. Mo the Mule – the official mascot of University of Central Missouri Monte – the costumed mustang mascot of Morningside College in Sioux City, Iowa. Monte – the grizzly bear mascot of the University of Montana Monty the Eagle – Niagara University Monty the Mountaineer – Schreiner University Monty Montezuma the Aztec Warrior – San Diego State University Mortimer the Gopher – Goucher College Mountaineer – a West Virginia University student who dresses in pioneer costume as the school's mascot Mulerider and Molly the Mule – the mascots of Southern Arkansas University Musty the Mustang – the mascot of California Polytechnic State University MUcaw – the mascot of Mount Union in Alliance, Ohio. N Nathan the Quaker – the official mascot of the Guilford College Quakers Nitro the Knight – the official mascot of the Fairleigh Dickinson University Knights The Nittany Lion – mascot of the Penn State Nittany Lions Norm the Niner – the official mascot of the University of North Carolina at Charlotte Norm the Crimson Hawk – (referring to the school's origins as a normal school) the relatively new official mascot of Indiana University of Pennsylvania NYIT Bear – mascot of the New York Institute of Technology Nestor – mascot of Westfield State University O Oakie – costumed acorn mascot of SUNY-ESF Oakley the Barn Owl – the mascot of Texas Woman's University. Objee – costumed bear mascot of the United States Coast Guard Academy and until 1984 a live bear kept on campus.
Ody Owl – the costumed owl mascot of the Mississippi University for Women Old Sarge – costumed soldier mascot of Norwich University. Ole the Lion – mascot of St. Olaf College. Olé Gaucho – mascot of the University of California, Santa Barbara. Olé the Viking – mascot of Long Beach City College Vikings, in Long Beach, California. Ollie the Owl – mascot of Brandeis University, carrying a gavel (as the nickname is the Judges). Oski the Bear – costumed mascot of the California Golden Bears Otto the Orange – mascot of Syracuse University Owls – mascot of Temple University, Rice University, and Bryn Mawr College Owlsley – costumed burrowing owl mascot of Florida Atlantic University. Ozzie – costumed osprey mascot of the University of North Florida. Ozzy – costumed mascot of the Ozarks Technical Community College. P The Panther – the athletic mascot of Middlebury College, whose 31 varsity teams are known as the Middlebury Panthers. Patrick the Patriot – the costumed mascot of the Dallas Baptist University Patriots. Paws – official costumed mascot of the Western Carolina University Catamounts. Paws – mascot of Northeastern University Huskies. Paydirt Pete – a costumed student who serves as a mascot for the University of Texas at El Paso Miners.
Pedey – The mascot of the Mesalands Community College Stampede PeeDee the Pirate – a costumed student who serves as mascot for East Carolina University Pegasus and the UCF Knight – a live white Andalusian stallion that charges on the field at the beginning of games with the Knight as a rider for the University of Central Florida Peruna – A Shetland pony who represents the Southern Methodist University Mustangs Pete & Penny – two emperor penguins dressed in scarves and stocking caps for Youngstown State University Peter the Anteater – the mascot based on the "ZOT!"-emitting animal from the comic strip B.C., at the University of California Irvine since 1965 Pete the Panther – a costumed student who serves as a mascot for the Florida Tech Panthers. Petey Penmen – the costumed mascot of Southern New Hampshire University. Petey the Stormy Petrel – the costumed mascot of Oglethorpe University Philip D. Tiger – The greyish yellow Bengal tiger in a blue and white basketball jersey of St. Philip's College. He is mainly on the walls and in promotional pictures of St. Philip's College. The D. stands for "Da" as in the word "the". Phoenix – Florida Polytechnic University, Olin College, the costumed mascot of Elon University (formerly the Fightin' Christians), The University of Chicago (nickname the Maroons), University of Wisconsin–Green Bay and Swarthmore College Pioneers – the mascot of Grinnell College, Iowa Pioneer Pete – a costumed student who serves as mascot of California State University, East Bay (Hayward, California) Pirate – the mascot of the Seton Hall Pirates of Seton Hall University; also the nickname and mascot of Hampton University's athletic department. Pistol Pete – a costumed student who serves as mascot of the Oklahoma State Cowboys as well as the costumed mascots of the University of Wyoming and New Mexico State University. Play – one of the two costumed tiger mascots of Texas Southern University.
Polar Bear – the mascot of Bowdoin College, Ohio Northern University and the University of Alaska Fairbanks (where it is also known as Nanook). Pork Chop – the kid-sized junior Razorback mascot of the University of Arkansas Porky – The javelina mascot of Texas A&M University–Kingsville Pouncer – the costumed tiger mascot of University of Memphis. Pounce the Cougar – the cougar mascot of University of Minnesota Morris. Pounce the Panther – the mascot of Georgia State University. It is also the official Panther mascot of Purdue University North Central since 2003. Pounce Panther – the mascot of the Milwaukee Panthers. Predator – owl mascot of Bryn Mawr College, and the symbol of Athena. Power Cat – the tiger mascot of the University of the Pacific Privateer Pete – the Privateer mascot of the State University of New York Maritime College Prospector Pete – the inflated balloon mascot of Long Beach State Prowler – the costumed panther mascot of High Point University Purple Cow – the gold-spotted mascot of Williams College Puddles – the name of the University of Oregon's former live duck mascot; also the unofficial name of The Duck. Purple Knight – The staff-carrying Purple Knight is the mascot of Saint Michael's College Purple Knight – The Purple Knight is the mascot of University of Bridgeport Puckman – the animated hard-hat-wearing walking hockey puck mascot of Rensselaer Polytechnic Institute Purdue Pete – costumed mascot of Purdue University R Racer 1 – live horse mascot during football games for Murray State University Raider – one of three live mule mascots for the United States Military Academy (Army) and the mascot of Colgate University. Raider Red – one of the official mascots of the Texas Tech Red Raiders Rally – the mascot of the University of Vermont Catamounts. Rally the Red Hawk – the mascot of the Ripon College (Wisconsin) Red Hawks.
Ralphie – a live American bison the official mascot of the Colorado Buffaloes Ralphie – a costumed male Greyhound mascot for Eastern New Mexico University Ramblin' Wreck – the 1930 Ford Model A Sports Coupe mascot of Georgia Tech Rameses – a live ram that serves as the mascot for the North Carolina Tar Heels. It is also the name of the costumed mascot. Rammy – the mascot for West Chester University of Pennsylvania. RAMbo – the mascot for Shepherd University. Ranger D. Bear – the mascot for University of Wisconsin–Parkside Ranger II – one of three live mule mascots for the United States Military Academy (Army) Razor the Shark – Nova Southeastern University Rawhide – the mascot for Western New Mexico University Red The Cardinal – the mascot of MCPHS University Red Dragons – the mascot of The State University of New York College at Oneonta (SUNY Oneonta) Reddie Spirit – Henderson State University Reggie Redbird – Illinois State University Reveille – a live collie that serves as the mascot for the Texas A&M Aggies and is taken care of by the Corps of Cadets Rex the Lion – Queens University of Charlotte Rett and Ave - the costumed mascots of Averett University Rhett the Boston Terrier – the Boston Terrier representing Boston University Rhody the Ram – University of Rhode Island Ribby – The Razorback mascot for the University of Arkansas baseball team. RITchie the Tiger – the mascot for the Rochester Institute of Technology. Riptide – the costumed pelican mascot of Tulane University. Riverbats aka R.B. – the official mascot of Austin Community College debuted November 2010. Roar-ee the Lion – the official mascot of Columbia University. Created in October 2005. Roary the Lion – the current official mascot of Missouri Southern State University Roary the Panther – the official mascot of the Florida International University Panthers Roc the Panther – the costumed mascot of the Pittsburgh Panthers (University of Pittsburgh). 
RoCCy – a costumed tiger, the official mascot of Colorado College. Changed from Prowler in 2020. Rocky – a costumed yellowjacket, the official mascot of the University of Rochester Rocky – a costumed bulldog, one of two official mascots for Western Illinois University. Rocky Raider – the mascot for Three Rivers Community College (Poplar Bluff, Missouri). Rocky the Bull – the mascot of University of South Florida. Rocky II the Lion – mascot for the Slippery Rock University of Pennsylvania Pride. Rocky the Red Hawk – mascot of Montclair State University. Rocky the Rocket – male mascot of the University of Toledo. Rocky – a costumed mascot of University of North Carolina at Asheville. Rocky I – a live Old English Bulldog, live mascot of University of North Carolina at Asheville. Rocksy the Rockette – female mascot of the University of Toledo. Rodney the Ram – the mascot of Virginia Commonwealth University. Rodney the Raven – the mascot of Anderson University in Indiana. Roscoe - The male costumed mascot of the Angelo State University Rams Roscoe The Lion- The mascot of The College Of New Jersey in New Jersey. Rosie – the costumed elephant mascot of the Rose-Hulman Institute of Technology "Fightin' Engineers." Roomie the Lion – mascot of the Southeastern Louisiana University. Roongo the Husky – is the mascot for Bloomsburg University of Pennsylvania. Rooney – is the official mascot for the Roanoke College Maroons. Rowdy – an unofficial mascot of Purdue University (for official mascot – see Boilermaker Special) Rowdy – a roadrunner that is the costumed mascot of California State University, Bakersfield (CSUB). Rowdy – a roadrunner that is the mascot of Metropolitan State University of Denver Rowdy – a roadrunner that is the mascot for the University of Texas at San Antonio. Rowdy Raider – is the mascot for Wright State University. It used to be a Viking, but was changed to a wolf in 1997. Rowdy the Panther – is the mascot for Birmingham–Southern College. 
Rowdy the Riverhawk – is the mascot for the University of Massachusetts Lowell. Rowdy the Red Hawk – is the mascot for Southeast Missouri State University Rowdy the Cowboy – is the mascot for McNeese State University Rowdy the Maverick – is the mascot for Colorado Mesa University Roxie – a costumed female Greyhound mascot for Eastern New Mexico University Rudy Flyer – is the mascot for the University of Dayton Rudy the Redhawk – is the mascot for Seattle University Ruckus – the red-tailed hawk mascot of the University of Denver Rufus – the bobcat mascot for Ohio University. Rufus – the red wolf mascot for Indiana University East. S The Saluki Dog – live animal mascot of the Southern Illinois Salukis Sam the Minuteman – University of Massachusetts Amherst Minutemen and Minutewomen, Amherst, Massachusetts Sam the Ram – of Framingham State University, Framingham, Massachusetts Sammy D. Eagle – official costumed mascot of University of Mary Washington Sammie Seminole – The athletic logo for The Florida State University Sammy Seahawk – of Broward College, South Florida Sammy Spartan – of San Jose State University, San Jose, California Sammy the Seagull – of Salisbury University Sammy the Owl – of Rice University Sammy the Slug – a banana slug is the mascot of UC Santa Cruz. Sammy was named Reader's Digest best college mascot for 2004. Sammy and Samantha Bearkat – mascots of Sam Houston State University. Saints – Saint Bernard dog Emmanuel College (Massachusetts), Siena College (New York). The Scarlet Knight – of Rutgers, The State University of New Jersey Scarlet Hawk – of Illinois Institute of Technology, Chicago. Scarlet – of Arkansas State University, Jonesboro, Red Wolves accompanies the official mascot "Howl". Scarlet – the female mascot of Saginaw Valley State University Scorch – The official mascot of Minnesota State University, Moorhead. Scorch – The official mascot of Southeastern University (Florida). 
Scrappy – The owl mascot of Kennesaw State University Scrappy – the eagle mascot of the North Texas Mean Green Scrappy – The costumed mockingbird mascot of the Chattanooga Mocs of the University of Tennessee at Chattanooga Scratch – a student in wildcat costume who is one of three official mascots of the University of Kentucky, two of which attend games. Scratch is a more child-friendly version of "The Wildcat", the other mascot that attends games. Screech A. Eagle – official mascot of Northwestern College (Minnesota) Screech the Owl – mascot of William Woods University Scottie – the scottie dog mascot of Agnes Scott College. Scotty – the costumed kilt-clad mascot of Alma College. Scotty the Scottie Dog – The Scottish terrier official mascot of Carnegie Mellon University. Scotty Highlander – A tartan clad highlander bear representing UC Riverside. Scrotie – of the Rhode Island School of Design Sebastian the Ibis – mascot of the Miami Hurricanes. The Florida ibis according to folklore is the last bird to leave the area before a hurricane and the first bird to come back after the storm. "Sebastian" once carried a corn-cob pipe in its beak. Seahawks – the Osprey mascot of Salve Regina University Seymour D'Campus – costumed mascot of Southern Miss Golden Eagles Shadow – costumed mascot of Monmouth University (NJ). The Shark – mascot of UNLV. While the school's teams are named the Rebels (Runnin' Rebels for men's basketball only) the mascot is a shark in honor of former men's basketball coach Jerry Tarkanian nicknamed "The Shark". Shasta and Sasha – the mascot of the University of Houston's Houston Cougars – a male and a female costumed cougar, as well as a live, sponsored cougar enclosure at the Houston Zoo. Shooter – The second generation red fox mascot for Marist College. 
Sir Big Spur – a live rooster at the University of South Carolina since 2006 Sir Lance-a-lute – mascot of Pacific Lutheran University Sir Paladin – the costumed mascot of Furman University Skitch – Sasquatch (Bigfoot) mascot of Community Colleges of Spokane Skully – giant red parrot sidekick of the official "Marauder" mascot of Millersville University Skully – Unofficial mascot of the East Carolina Pirates Smokey – a live bluetick coonhound, the official mascot for the Tennessee Volunteers. The name is also applied to the costumed mascot. Sooner – one of the two white ponies of the University of Oklahoma that pulls the Sooner Schooner (the other pony's name is Boomer). There is both a live pony and costumed mascot. Sooner Schooner – a scale replica of a Conestoga wagon pulled by two ponies and driven by the RUF/NEKS of the University of Oklahoma. Southpaw – a costumed jaguar of the University of South Alabama. Sparky the Sun Devil – the maroon and gold devil mascot of the Arizona State Sun Devils Sparky – the eagle mascot of the Liberty University Flames. The name also applies to the dragon mascot of the Illinois-Chicago Flames. Sparty – the mascot of Michigan State University, a comical (and extremely buff) representation of a Spartan hoplite soldier clad in green with an elongated head. Speedy the Geoduck – mascot of Evergreen State College (Washington) Spike – the name given to several costume Bulldog mascots including Gonzaga , Samford , The Citadel, and Drake, as well as the inflatable-costumed mascot of the University of Georgia Spike and Simone – The two Bulldog mascots of Truman State University. The Stag – the costumed mascot of Fairfield University. The Stanford Tree – a dancing conifer of indeterminate species official mascot of Stanford Band unofficial mascot of Stanford University. Stanley the Stag – the costumed mascot for the men's teams of the combined athletic program of Harvey Mudd, Scripps, and Claremont McKenna colleges. 
The Statesman – the official mascot of Delta State University. Stella – The live owl mascot of Temple University Stertorous "Tor" Thunder – The costumed mascot of Wheaton College (Illinois) Stevie Pointer – The mascot of the University of Wisconsin Stevens Point. Sting – the costumed mascot of Black Hills State University Stomper – the costumed mascot of the Minnesota State University – Mankato Mavericks. Storm – tiger mascot for Trine University Thunder, first introduced in 2010. Stormy – the cyclonic mascot of Lake Erie College, introduced in 1994. Stormy the Shark – the mascot of Simmons University Stormin' Normin' – the mascot of the University of New England Nor'easters. Sturgis - The live owl mascot of Kennesaw State University Sunny the Sunbird – mascot of FPU Superfrog – the costumed horned frog mascot of TCU Sue E. Pig – The female Razorback mascot of the University of Arkansas. Sugar Bear - The female mascot of the University of Central Arkansas Swoop – the costumed mascot of the University of Utah Utes. "Swoop" is a red-tailed hawk, which is a bird native to the state of Utah. Swoop – the costumed mascot of Eastern Washington University and Emory University- both are eagles. Swoop – the costumed mascot of the Eastern Michigan University Eagles. "Swoop" is a bald eagle whose distinguishing characteristic is his fighting stance where he tends to poke his chest out in pure confidence. Swoop the RedHawk – the costumed mascot of Miami University. Sycamore Sam – the happy forest animal costume of no particular species but looks like a blue fox or dog; mascot of the Indiana State Sycamores. T Tarzán – live Bulldog of University of Puerto Rico at Mayagüez TC —costumed male mascot of the University of Northern Iowa Panthers. Tech – a live Bulldog mascot of Louisiana Tech University Terrible Swede – a costumed mascot of Bethany College (Kansas). Temoc – a costumed comet of the University of Texas at Dallas. 
Texan Rider – a costumed cowboy of the Tarleton State University. Testudo – a costumed Diamondback Terrapin of the University of Maryland College Park. Teton – a costumed buffalo of Williston State College. Thor – the thunderbird mascot of Cloud County Community College. Thor – the thunderbird mascot of Mesa Community College. Thor – the thunderbird mascot of Southern Utah University. Thresher – the threshing stone mascot of Bethel College. Thundar – the costumed bison mascot of North Dakota State University. Thunder – a live American bison the official mascot of West Texas A&M University Thunder - costumed bobcat mascot of Georgia College & State University. Thunder the Wolf – the official mascot of Northern State University Thundercat – the physical embodiment of the Southern Nazarene Crimson Storm. The Tiger – the mascot of Princeton University; the first collegiate mascot and subject of the first organized, recorded cheerleading cheer in 1884. Name reaffirmed as The Tiger in a 2007 referendum with widespread opposition to giving the mascot a name The Tiger – the mascot of Clemson University The Tiger Cub – the youthful partner of The Tiger at Clemson University Tim the Beaver – the mascot of the Massachusetts Institute of Technology Timeout – the costumed Bulldog mascot of Fresno State TK —costumed female mascot of the University of Northern Iowa Panthers. Toby – the costumed bear mascot of Mercer University. Toby the Tiger – the costumed mascot of East Texas Baptist University TOM III – a live Bengal tiger mascot of the University of Memphis Tommy Mo – the costumed Saints mascot of Thomas More College. Tommy Titan – the mascot of the University of Detroit Mercy. Tommy Trojan – thought by many to be the mascot, Tommy Trojan is the shrine of the University of Southern California. Traveler is their mascot. Topper the Hilltopper – the mascot of St. 
Edward's University Tory – a live female former racing Greyhound adopted by Eastern New Mexico University and used as community ambassador with male counterpart Vic Touchdown – also known as the Cornell Big Red Bear, the official mascot of Cornell University was a live bear from 1915 to 1939 when it was replaced with a costume. Tough Louie – the lumberjack mascot of Northern Arizona University. Traveler – a live white horse is the mascot of the University of Southern California who appears during all home football games. True Grit – the mascot of the University of Maryland, Baltimore County. Truman the Tiger – the mascot of the University of Missouri, named for former U.S. president Harry S Truman (the only Missouri native to hold the office). Tupper the Bulldog – the mascot of Bryant University and named for Earl Tupper, the creator of Tupperware Tuffy – the costumed eagle of the Ashland University Eagles. Tuffy – the costumed elephant of the Cal State Fullerton Titans. Tuffy – the live mascot of the NC State Wolfpack. Tuffy is a dog, either a Tamaskan or a German Shepherd – Husky mix. T-Roy – The mascot of the Troy University Trojans. Tusk – the live Russian boar mascot of the University of Arkansas. Tyler the Tiger – the costumed tiger mascot of DePauw University. U Uga – a live Bulldog, the official mascot of the Georgia Bulldogs of the University of Georgia. V Val the Valkyrie – the mascot of Converse College Venom – the mascot of Florida A&M University Vic – a live male former racing Greyhound adopted by Eastern New Mexico University and used as community ambassador with female counterpart Tory Vic the Demon – the mascot of Northwestern State University Victor E. Bluejay – the mascot of Elmhurst University Victor E. Bull – the bull of the University at Buffalo Victor E. Huskie – the mascot of Northern Illinois University Victor E. Panther – the former (graduated/retired) mascot of the Milwaukee Panthers Victor E. 
Tiger – the mascot of Fort Hays State University Hays, Kansas Tigers Victor E. Lion – the mascot of Molloy College in Long Island, New York Victor E. Viking – the mascot of Northern Kentucky University Victor E. Viking – the mascot of Western Washington University Victor E. Viking – the mascot of Portland State University Viktor the Viking - the mascot of Grand View University Victor E. Hawk – the mascot of Viterbo University Victor E. Warrior – the mascot of Wisconsin Lutheran College Vili – Villiami Fehoko, the unofficial mascot of the University of Hawaii at Mānoa since 2000 Vike – the former mascot of Cleveland State University Vinny the Dolphin – the mascot of College of Mount Saint Vincent Vixen – the mascot of Sweet Briar College W W – the warrior mascot of Wayne State University. Wakiza (Kiza) – the live husky mascot of Houston Baptist University. The name means Warrior Princess in native Alaskan language. Waldo – the wildcat mascot of Weber State University. War Eagle VII – the golden eagle mascot of Auburn University. Warriors – Sterling College teams are known as the Warriors. The "Warriors" nickname is mostly often depicted as armed Scottish Highlanders. Sterling College officially adopted the Scottish heritage as a tribute to its Presbyterian roots in 1984. Wally Pilot – the new mascot of the University of Portland. Wally Wabash – the mascot of Wabash College. Wassee – the tiger mascot of Hiwassee College. WebstUR – the spider mascot of the Richmond Spiders Wellington – mascot of Central Washington University. Wesley the Wildcat – mascot of Indiana Wesleyan University–Marion Whoo RU – mascot of Rowan University. Wilbur and Wilma Wildcat – the married costumed mascots of the University of Arizona. Wild E. Cat – the official costumed Wildcat mascot of the University of New Hampshire. The Wildcat – a costumed student who is one of three official mascots of the University of Kentucky, two of which attend games. 
It is also the mascot of Davidson College, but is also named Mr. Cat. Wildcat Willy – the mascot of Northern Michigan University. Will D. Cat – the costumed wildcat of Villanova University. Willie Warhawk – the costumed hawk mascot of the University of Wisconsin–Whitewater. Willie the Wave – the costumed mascot of Pepperdine University Willie T. Wildcat – The official mascot of Johnson & Wales University for both the logo and costumed mascot. Willie the Wildcat – the costumed wildcat mascot of California State University, Chico, Northwestern University and Kansas State University. Despite having the same name, the three have very different appearances. Wily the Bobcat – the costumed bobcat of Lees-McRae College. Wiley D. Wildcat – the costumed wildcat mascot of Wilmington University. Wolfie – the mascot of Stony Brook University and Western Oregon University. Wolfie – the blue mascot of the University of West Georgia. Wolfie (Jr.) – the second costumed wolf mascot of the University of Nevada, Reno Wolf Pack. Was the original mascot until 1999, but was reintroduced in 2007 with a younger, less menacing face. Rumored to be the original with a facelift. Woody – the costumed timber wolf mascot of Northwood University. Woody Wood Duck – the costumed wood duck mascot of Century College. WuShock – an anthropomorphic shock of wheat; the mascot of Wichita State University. Y Yank – the costumed tiger mascot of Hampden–Sydney College. YoUDee – the blue hen a costumed mascot of the University of Delaware. Yosef – the Mountaineer costumed mascot of Appalachian State University. Z Zac – wildcat mascot of Cazenovia College Zippy the Kangaroo – female kangaroo mascot of University of Akron See also Mascot#Sports mascots List of college sports team nicknames Religious symbolism in U.S. 
sports team names and mascots College athletics References External links Complete List of American Colleges and Universities; showing mascot, conference, affiliation, location, and year established. Mascot.net College mascot resource USAToday lists various mascot facts College football's 12 coolest mascots – 1. Ralphie the Buffalo (Colorado), 2. Uga (Georgia), 3. Chief Osceola (Florida State), 4. Mike the Tiger (LSU), 5. War Eagle (Auburn), 6. Stanford Tree, 7. Bevo (Texas), 8. The Mountaineer (West Virginia), 9. The Masked Rider (Texas Tech), 10. Sparty (Michigan State), 11. The Leprechaun (Notre Dame), 12. The Fighting Duck (Oregon). FoxSports.com. Retrieved 2010-09-01.
43800179
https://en.wikipedia.org/wiki/Norton%20Security
Norton Security
Norton Security is a cross-platform security suite that provides subscription-based real-time malware prevention and removal, in addition to identity theft protection and performance tuning tools. Other features include a personal firewall, email spam filtering, and phishing protection. Released on September 23, 2014 as part of Symantec's streamlined Norton line, it replaced the long-running Norton Internet Security as the company's flagship antivirus product.

Version history

In 2014, in an effort to streamline its Norton product line, Symantec combined nine standalone Norton products into one all-purpose suite. Norton Security superseded Norton Internet Security (and the pre-2019 versions of Norton 360), with an overlapping release cycle that saw version 22 as the initial release of the former and the final release of the latter. However, version 22 of Norton 360 and Norton Internet Security were updates rather than full releases. Compared with its predecessors, Norton Security retained all components of Norton Internet Security (including the antivirus, firewall and identity theft components) and added the optimization tools from Norton 360.

Norton Security is available in three editions: Norton Security Standard with one license (valid for a single device), Norton Security Deluxe with five licenses, and Norton Security Premium, which offers ten licenses, 25 GB of hosted online backup, and a premium subscription to Symantec's parental control system. All editions include protection for Windows, OS X, Android and iOS devices; however, features may vary based on the operating system.

In April 2019, the Norton 360 brand was reinstated, maintaining a similar plan structure but with the addition of a VPN and, on the premium tiers, LifeLock (which Symantec acquired in 2017).
System requirements

Windows (Norton Security 22.16.0.247):
Microsoft Windows 7 (all versions) with Service Pack 1 (SP 1) or later
Microsoft Windows 8/8.1 (all versions); some protection features are not available in Windows 8 Start screen browsers
Microsoft Windows 10 (all versions)

Mac (Norton Security 8.3 build 45):
macOS 10.10 or later

Android (Norton Mobile Security 4.5.0):
Android 4.1 or later

iOS (Norton Security 1.1.5):
iOS 10 or later

See also

Norton 360
SONAR
Comparison of antivirus software

References

External links

Official site
7214571
https://en.wikipedia.org/wiki/Common%20Criteria%20Testing%20Laboratory
Common Criteria Testing Laboratory
The Common Criteria model provides for the separation of the roles of evaluator and certifier. Product certificates are awarded by national schemes on the basis of evaluations carried out by independent testing laboratories. A Common Criteria testing laboratory is a third-party commercial security testing facility that is accredited to conduct security evaluations for conformance to the Common Criteria international standard. Such a facility must be accredited according to ISO/IEC 17025 with its national certification body.

Examples

Laboratory designations by country:

In the US they are called Common Criteria Testing Laboratories (CCTL)
In Canada they are called Common Criteria Evaluation Facilities (CCEF)
In the UK they are called Commercial Evaluation Facilities (CLEF)
In France they are called Centres d’Evaluation de la Sécurité des Technologies de l’Information (CESTI)
In Germany they are called IT Security Evaluation Facilities (ITSEF)

Common Criteria Recognition Arrangement

The Common Criteria Recognition Arrangement (CCRA), or Common Criteria Mutual Recognition Arrangement (MRA), is an international agreement that recognizes evaluations against the Common Criteria standard performed in all participating countries. There are some limitations to this agreement; in the past, only evaluations up to EAL4+ were recognized. With the ongoing transition away from EAL levels and the introduction of the NDPP, evaluations that “map” to assurance components up to EAL4 continue to be recognized.

United States

In the United States, the National Institute of Standards and Technology (NIST) National Voluntary Laboratory Accreditation Program (NVLAP) accredits CCTLs to meet National Information Assurance Partnership (NIAP) Common Criteria Evaluation and Validation Scheme requirements and conduct IT security evaluations for conformance to the Common Criteria.
CCTL requirements

These laboratories must meet the following requirements:

NIST Handbook 150, NVLAP Procedures and General Requirements
NIST Handbook 150-20, NVLAP Information Technology Security Testing — Common Criteria
NIAP-specific criteria for IT security evaluations and other NIAP-defined requirements

CCTLs enter into contractual agreements with sponsors to conduct security evaluations of IT products and Protection Profiles which use the CCEVS and other NIAP-approved test methods derived from the Common Criteria, Common Methodology and other technology-based sources. CCTLs must observe the highest standards of impartiality, integrity and commercial confidentiality. CCTLs must operate within the guidelines established by the CCEVS.

To become a CCTL, a testing laboratory must go through a series of steps that involve both the NIAP Validation Body and NVLAP. NVLAP accreditation is the primary requirement for achieving CCTL status. Some scheme requirements that cannot be satisfied by NVLAP accreditation are addressed by the NIAP Validation Body. At present, there are only three scheme-specific requirements imposed by the Validation Body. NIAP-approved CCTLs must agree to the following:

Be located in the U.S. and be a legal entity, duly organized and incorporated, validly existing and in good standing under the laws of the state where the laboratory intends to do business
Accept U.S. Government technical oversight and validation of evaluation-related activities in accordance with the policies and procedures established by the CCEVS
Accept U.S. Government participants in selected Common Criteria evaluations

CCTL accreditation

A testing laboratory becomes a CCTL when the laboratory is approved by the NIAP Validation Body and is listed on the Approved Laboratories List.
To avoid unnecessary expense and delay in becoming a NIAP-approved testing laboratory, it is strongly recommended that prospective CCTLs ensure that they are able to satisfy the scheme-specific requirements prior to seeking accreditation from NVLAP. This can be accomplished by sending a letter of intent to the NIAP prior to entering the NVLAP process. Additional laboratory-related information can be found in the CCEVS publications:

Scheme Publication #1, Common Criteria Evaluation and Validation Scheme for Information Technology Security — Organization, Management, and Concept of Operations
Scheme Publication #4, Common Criteria Evaluation and Validation Scheme for Information Technology Security — Guidance to Common Criteria Testing Laboratories

Canada

In Canada, the Communications Security Establishment Canada (CSEC) Canadian Common Criteria Scheme (CCCS) oversees Common Criteria Evaluation Facilities (CCEF). Accreditation is performed by the Standards Council of Canada (SCC) under its Program for the Accreditation of Laboratories – Canada (PALCAN) according to CAN-P-1591, the SCC’s adaptation of ISO/IEC 17025:2005 for ITSET laboratories. Approval is performed by the CCS Certification Body, a body within the CSEC, and is the verification of the applicant's ability to perform competent Common Criteria evaluations.

Notes

External links

US: Common Criteria Evaluation and Validation Scheme
US: Common Criteria Testing Laboratories
Canada: Common Criteria Scheme
Canada: Common Criteria Evaluation Facilities
Common Criteria Recognition Agreement
List of Common Criteria evaluated products
ISO/IEC 15408 — available free as a public standard
24998792
https://en.wikipedia.org/wiki/Debugging
Debugging
In computer programming and software development, debugging is the process of finding and resolving bugs (defects or problems that prevent correct operation) within computer programs, software, or systems. Debugging tactics can involve interactive debugging, control flow analysis, unit testing, integration testing, log file analysis, monitoring at the application or system level, memory dumps, and profiling. Many programming languages and software development tools also offer programs to aid in debugging, known as debuggers.

Etymology

The terms "bug" and "debugging" are popularly attributed to Admiral Grace Hopper in the 1940s. While she was working on a Mark II computer at Harvard University, her associates discovered a moth stuck in a relay, impeding its operation, whereupon she remarked that they were "debugging" the system. However, the term "bug", in the sense of "technical error", dates back at least to 1878 and Thomas Edison (see software bug for a full discussion). Similarly, the term "debugging" seems to have been used in aeronautics before entering the world of computers. Indeed, in an interview Grace Hopper remarked that she was not coining the term; the moth fit the already existing terminology, so it was saved. J. Robert Oppenheimer (director of the WWII atomic bomb "Manhattan" project at Los Alamos, NM) used the term in a letter to Dr. Ernest Lawrence at UC Berkeley, dated October 27, 1944, regarding the recruitment of additional technical staff. The Oxford English Dictionary entry for "debug" quotes the term "debugging" used in reference to airplane engine testing in a 1945 article in the Journal of the Royal Aeronautical Society. An article in "Airforce" (June 1945, p. 50) also refers to debugging, this time of aircraft cameras. Hopper's bug was found on September 9, 1947. Computer programmers did not adopt the term until the early 1950s.
The seminal article by Gill in 1951 is the earliest in-depth discussion of programming errors, but it does not use the term "bug" or "debugging". In the ACM's digital library, the term "debugging" is first used in three papers from the 1952 ACM National Meetings. Two of the three use the term in quotation marks. By 1963 "debugging" was a common-enough term to be mentioned in passing without explanation on page 1 of the CTSS manual. Peggy A. Kidwell's article Stalking the Elusive Computer Bug discusses the etymology of "bug" and "debug" in greater detail.

Scope

As software and electronic systems have become generally more complex, the various common debugging techniques have expanded with more methods to detect anomalies, assess impact, and schedule software patches or full updates to a system. The words "anomaly" and "discrepancy" can be used as more neutral terms, to avoid the words "error", "defect" or "bug" where there might be an implication that all so-called errors, defects or bugs must be fixed (at all costs). Instead, an impact assessment can be made to determine whether changes to remove an anomaly (or discrepancy) would be cost-effective for the system, or whether a scheduled new release might render the change unnecessary. Not all issues are safety-critical or mission-critical in a system. Also, it is important to avoid the situation where a change might be more upsetting to users, long-term, than living with the known problem (where the "cure would be worse than the disease"). Basing decisions on the acceptability of some anomalies can avoid a culture of a "zero-defects" mandate, where people might be tempted to deny the existence of problems so that the result would appear as zero defects. Considering collateral issues, such as the cost-versus-benefit impact assessment, broader debugging techniques will expand to determine the frequency of anomalies (how often the same "bugs" occur) to help assess their impact on the overall system.
Tools

Debugging ranges in complexity from fixing simple errors to performing lengthy and tiresome tasks of data collection, analysis, and scheduling updates. The debugging skill of the programmer can be a major factor in the ability to debug a problem, but the difficulty of software debugging varies greatly with the complexity of the system, and also depends, to some extent, on the programming language(s) used and the available tools, such as debuggers. Debuggers are software tools which enable the programmer to monitor the execution of a program, stop it, restart it, set breakpoints, and change values in memory. The term debugger can also refer to the person who is doing the debugging. Generally, high-level programming languages, such as Java, make debugging easier, because they have features such as exception handling and type checking that make real sources of erratic behaviour easier to spot. In programming languages such as C or assembly, bugs may cause silent problems such as memory corruption, and it is often difficult to see where the initial problem happened. In those cases, memory debugger tools may be needed. In certain situations, general-purpose software tools that are language-specific in nature can be very useful. These take the form of static code analysis tools. These tools look for a very specific set of known problems, some common and some rare, within the source code, concentrating more on the semantics (e.g. data flow) than on the syntax, as compilers and interpreters do. Both commercial and free tools exist for various languages; some claim to be able to detect hundreds of different problems. These tools can be extremely useful when checking very large source trees, where it is impractical to do code walk-throughs. A typical example of a problem detected would be a variable dereference that occurs before the variable is assigned a value. As another example, some such tools perform strong type checking when the language does not require it.
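The use-before-assignment check mentioned above is a classic static-analysis target, and its core idea fits in a few lines. The following is a deliberately naive illustrative sketch, not a real lint implementation: all names (`UseBeforeAssign`, `check`) are invented for this example, and it handles only straight-line code, with no modeling of branches, closures, globals, or imports.

```python
import ast
import builtins

class UseBeforeAssign(ast.NodeVisitor):
    """Flag names read before they are written inside one function body.
    Naive on purpose: straight-line code only, no control-flow analysis."""

    def __init__(self, params):
        self.assigned = set(params)   # parameters count as already assigned
        self.problems = []

    def visit_Assign(self, node):
        self.visit(node.value)        # the right-hand side is evaluated first
        for target in node.targets:   # only then are the target names bound
            self.visit(target)

    def visit_Name(self, node):
        if isinstance(node.ctx, ast.Store):
            self.assigned.add(node.id)
        elif (isinstance(node.ctx, ast.Load)
              and node.id not in self.assigned
              and not hasattr(builtins, node.id)):
            self.problems.append((node.id, node.lineno))

def check(source):
    """Return (function, name, line) triples for suspected
    use-before-assignment in each function of the given source."""
    findings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.FunctionDef):
            visitor = UseBeforeAssign(a.arg for a in node.args.args)
            for stmt in node.body:
                visitor.visit(stmt)
            findings += [(node.name, n, ln) for n, ln in visitor.problems]
    return findings

buggy = """
def total_price(quantity):
    price = subtotal * 1.08   # 'subtotal' is read before it is assigned
    subtotal = quantity * 9.99
    return price
"""
print(check(buggy))   # → [('total_price', 'subtotal', 3)]
```

As the article notes, real tools of this kind trade precision for coverage: this sketch shows why false positives arise, since any code path the analysis does not model (a conditional assignment, an imported name) would be flagged as dubious.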
Thus, they are better at locating likely errors in code that is syntactically correct. But these tools have a reputation for false positives, where correct code is flagged as dubious. The old Unix lint program is an early example.

For debugging electronic hardware (e.g., computer hardware) as well as low-level software (e.g., BIOSes, device drivers) and firmware, instruments such as oscilloscopes, logic analyzers, or in-circuit emulators (ICEs) are often used, alone or in combination. An ICE may perform many of the typical software debugger's tasks on low-level software and firmware.

Debugging process

Normally the first step in debugging is to attempt to reproduce the problem. This can be a non-trivial task, for example with parallel processes and some Heisenbugs. A specific user environment and usage history can also make it difficult to reproduce the problem.

After the bug is reproduced, the input of the program may need to be simplified to make it easier to debug. For example, a bug in a compiler can make it crash when parsing some large source file, but after simplification of the test case, only a few lines from the original source file may be sufficient to reproduce the same crash. Such simplification can be done manually, using a divide-and-conquer approach: the programmer tries to remove some parts of the original test case and checks whether the problem still exists. When debugging a problem in a GUI, the programmer can try to skip some user interaction from the original problem description and check whether the remaining actions are sufficient for the bug to appear.

After the test case is sufficiently simplified, a programmer can use a debugger tool to examine program states (values of variables, plus the call stack) and track down the origin of the problem. Alternatively, tracing can be used. In simple cases, tracing is just a few print statements, which output the values of variables at certain points of program execution.
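The divide-and-conquer simplification described above can also be automated. The following Python sketch repeatedly tries to delete chunks of a failing input, keeping any deletion after which the failure still reproduces; the `still_fails` predicate and the example "crash" condition are made up for illustration, and the real ddmin-style delta debugging algorithm is more systematic:

```python
def simplify(test_input, still_fails):
    # Greatly simplified sketch of automated test case reduction:
    # repeatedly try to delete chunks of the failing input, keeping any
    # deletion after which the test still fails. A real ddmin algorithm
    # also increases granularity and tests complements more systematically.
    chunk = len(test_input) // 2
    while chunk >= 1:
        i = 0
        while i < len(test_input):
            candidate = test_input[:i] + test_input[i + chunk:]
            if candidate and still_fails(candidate):
                test_input = candidate   # deletion kept the failure: accept it
            else:
                i += chunk               # failure disappeared: keep this chunk
        chunk //= 2
    return test_input

# Hypothetical failure: a parser that crashes whenever '<' and '>' both appear.
crash = lambda s: "<" in s and ">" in s
print(simplify("int a; <b> c = 2;", crash))  # prints <>
```

Here the sixteen-character input shrinks to the two characters that actually trigger the hypothetical crash, which is exactly the kind of minimal reproduction a programmer wants before reaching for a debugger.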
Techniques

Interactive debugging uses a debugger to monitor and control the execution of the program under examination, as described above.

Print debugging (or tracing) is the act of watching (live or recorded) trace statements, or print statements, that indicate the flow of execution of a process. This is sometimes called printf debugging, due to the use of the printf function in C. This kind of debugging was turned on by the command TRON in the original versions of the novice-oriented BASIC programming language. TRON, short for "Trace On", caused the line numbers of each BASIC command line to print as the program ran.

Remote debugging is the process of debugging a program running on a system different from the debugger. To start remote debugging, a debugger connects to a remote system over a communications link such as a local area network. The debugger can then control the execution of the program on the remote system and retrieve information about its state.

Post-mortem debugging is debugging of the program after it has already crashed. Related techniques often include various tracing techniques such as examining log files, outputting a call stack on crash, and analysis of the memory dump (or core dump) of the crashed process. The dump of the process could be obtained automatically by the system (for example, when the process has terminated due to an unhandled exception), by a programmer-inserted instruction, or manually by the interactive user.

"Wolf fence" algorithm: Edward Gauss described this simple but very useful and now famous algorithm in a 1982 article for Communications of the ACM as follows: "There's one wolf in Alaska; how do you find it? First build a fence down the middle of the state, wait for the wolf to howl, determine which side of the fence it is on. Repeat process on that side only, until you get to the point where you can see the wolf." This is implemented, e.g., in the Git version control system as the command git bisect, which uses the above algorithm to determine which commit introduced a particular bug.
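The wolf-fence idea maps directly onto a binary search over a version history, which is how git bisect applies it. A minimal Python sketch, assuming a linear history whose first version is good, whose last version is bad, and in which the defect persists once introduced (the commit names and the is_bad predicate are hypothetical):

```python
def find_first_bad(versions, is_bad):
    # Wolf-fence/bisect search: repeatedly "build a fence" at the midpoint
    # of the range and keep only the half where the bug "howls".
    # Assumes versions[0] is good, versions[-1] is bad, and the defect,
    # once introduced, is present in every later version.
    lo, hi = 0, len(versions) - 1    # lo: known good, hi: known bad
    while hi - lo > 1:
        mid = (lo + hi) // 2
        if is_bad(versions[mid]):
            hi = mid                 # bug is at mid or earlier
        else:
            lo = mid                 # bug was introduced after mid
    return versions[hi]              # first bad version

# Hypothetical ten-commit history in which the bug appeared in commit c6.
history = [f"c{i}" for i in range(10)]
print(find_first_bad(history, lambda c: int(c[1:]) >= 6))  # prints c6
```

Each test halves the remaining range, so even a history of thousands of commits needs only a handful of "does this version howl?" checks.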
Record and replay debugging is the technique of creating a program execution recording (e.g. using Mozilla's free rr debugging tool, enabling reversible debugging/execution), which can then be replayed and interactively debugged. It is useful for remote debugging and for debugging intermittent, non-deterministic, and other hard-to-reproduce defects.

Delta debugging is a technique of automating test case simplification.

Saff Squeeze is a technique of isolating the failure within the test, using progressive inlining of parts of the failing test.

Causality tracking: there are techniques to track the cause-effect chains in the computation. Those techniques can be tailored for specific bugs, such as null pointer dereferences.

Debugging for embedded systems

In contrast to the general-purpose computer software design environment, a primary characteristic of embedded environments is the sheer number of different platforms available to the developers (CPU architectures, vendors, operating systems, and their variants). Embedded systems are, by definition, not general-purpose designs: they are typically developed for a single task (or a small range of tasks), and the platform is chosen specifically to optimize that application. Not only does this fact make life tough for embedded system developers, it also makes debugging and testing of these systems harder, since different debugging tools are needed for different platforms.

Despite the challenge of heterogeneity mentioned above, some debuggers have been developed commercially, as well as research prototypes. Examples of commercial solutions come from Green Hills Software, Lauterbach GmbH and Microchip's MPLAB-ICD (for in-circuit debugger). Two examples of research prototype tools are Aveksha and Flocklab. They all leverage a functionality available on low-cost embedded processors, an On-Chip Debug Module (OCDM), whose signals are exposed through a standard JTAG interface.
They are benchmarked based on how much change to the application is needed and the rate of events that they can keep up with.

In addition to the typical task of identifying bugs in the system, embedded system debugging also seeks to collect information about the operating states of the system, which may then be used to analyze the system: to find ways to boost its performance or to optimize other important characteristics (e.g. energy consumption, reliability, real-time response, etc.).

Anti-debugging

Anti-debugging is "the implementation of one or more techniques within computer code that hinders attempts at reverse engineering or debugging a target process". It is actively used by recognized publishers in copy-protection schemas, but is also used by malware to complicate its detection and elimination. Techniques used in anti-debugging include:

API-based: check for the existence of a debugger using system information
Exception-based: check to see if exceptions are interfered with
Process and thread blocks: check whether process and thread blocks have been manipulated
Modified code: check for code modifications made by a debugger handling software breakpoints
Hardware- and register-based: check for hardware breakpoints and CPU registers
Timing and latency: check the time taken for the execution of instructions

Detecting and penalizing the debugger

An early example of anti-debugging existed in early versions of Microsoft Word which, if a debugger was detected, produced a message that said, "The tree of evil bears bitter fruit. Now trashing program disk.", after which it caused the floppy disk drive to emit alarming noises with the intent of scaring the user away from attempting it again.
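Of the techniques listed above, the timing-and-latency check is the easiest to illustrate. The toy Python sketch below times a stretch of trivial work and treats a wildly inflated elapsed time as a hint that someone is single-stepping through it; the 0.5-second threshold is an arbitrary assumption, and real anti-debugging code operates at the level of CPU instructions (e.g. reading the time-stamp counter) with careful calibration, not in a scripting language:

```python
import time

def looks_single_stepped(threshold_seconds=0.5):
    # Time a stretch of code that normally completes in well under a
    # millisecond; a human stepping through it line by line in a debugger
    # inflates the elapsed time by many orders of magnitude.
    start = time.perf_counter()
    checksum = 0
    for i in range(1000):
        checksum = (checksum + i) % 251
    elapsed = time.perf_counter() - start
    return elapsed > threshold_seconds

print(looks_single_stepped())  # normally False when run without a debugger
```

A program using such a check would typically alter its behaviour, rather than report the result, when the function returns True.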
See also Assertion (software development) Automatic bug fixing Debugging pattern Magic debug values Shotgun debugging Software bug Software testing Time travel debugging Trace table Troubleshooting References Further reading External links Crash dump analysis patterns in-depth articles on analyzing and finding bugs in crash dumps Learn the essentials of debugging how to improve your debugging skills, a good article at IBM developerWorks (archived from the original on February 18, 2007) Plug-in Based Debugging For Embedded Systems Embedded Systems test and debug – about digital input generation results of a survey about embedded system test and debug, Byte Paradigm (archived from the original on January 12, 2012)
60817707
https://en.wikipedia.org/wiki/HarmonyOS
HarmonyOS
HarmonyOS is a distributed operating system developed by Huawei to run on multiple devices. In a multi-kernel design, the operating system selects suitable kernels from the abstraction layer for devices with diverse resources. For IoT devices, the system is known to be based on LiteOS, while for smartphones and tablets it is based on a Linux kernel and has used the open-source Android code to support running Android apps, in addition to HarmonyOS apps. The system includes a communication base, DSoftBus, for integrating physically separate devices into a virtual Super Device, allowing one device to control others and share data among devices with distributed communication capabilities. It supports several forms of apps, including apps that can be installed from AppGallery on smartphones and tablets, installation-free Quick apps, and lightweight Atomic Services accessible to users. HarmonyOS was first used in Honor smart TVs in August 2019 and later in Huawei smartphones, tablets and smartwatches in June 2021.

History

Origins

Reports surrounding an in-house operating system being developed by Huawei date back as far as 2012. These reports intensified during the Sino-American trade war, after the United States Department of Commerce added Huawei to its Entity List in May 2019 under an indictment that it knowingly exported goods, technology and services of U.S. origin to Iran in violation of sanctions. This prohibited U.S.-based companies from doing business with Huawei without first obtaining a license from the government. Huawei executive Richard Yu described an in-house platform as a "plan B" in case the company was prevented from using Android on future smartphone products due to the sanctions. Prior to its unveiling, it was originally speculated to be a mobile operating system that could replace Android on future Huawei devices.
In June 2019, a Huawei executive told Reuters that the OS was under testing in China and could be ready "in months", but by July 2019, some Huawei executives described the OS as an embedded operating system designed for IoT hardware, walking back the earlier statements that it would be a mobile operating system. Some media outlets reported that this OS, referred to as "Hongmeng", could be released in China in either August or September 2019, with a worldwide release in the second quarter of 2020.

On 24 May 2019, Huawei registered "Hongmeng" as a trademark in China. The name "Hongmeng" comes from Chinese mythology, symbolizing primordial chaos or the world before creation. The same day, Huawei registered trademarks surrounding "Ark OS" and variants with the European Union Intellectual Property Office. In July 2019, it was reported that Huawei had also registered trademarks surrounding the word "Harmony" for desktop and mobile operating system software, indicating either a different name or a component of the OS.

Release

On 9 August 2019, Huawei officially unveiled HarmonyOS at its inaugural developers' conference in Dongguan. Huawei described HarmonyOS as a free, microkernel-based distributed operating system for various types of hardware. The company focused primarily on IoT devices, including smart TVs, wearable devices, and in-car entertainment systems, and did not explicitly position HarmonyOS as a mobile OS.

HarmonyOS 2.0 launched at the Huawei Developer Conference on 10 September 2020, and Huawei announced it intended to ship the operating system on its smartphones in 2021. The first developer beta of HarmonyOS 2.0 was launched on 16 December 2020. Huawei also released the DevEco Studio IDE, which is based on IntelliJ IDEA, and a cloud emulator for developers in early access.
Huawei officially released HarmonyOS 2.0 and launched new devices shipping with the OS in June 2021, and began gradually rolling out system upgrades to users of Huawei's older phones.

HarmonyOS apps

In contrast to Android apps being packaged in the APK file format, HarmonyOS apps are released as an App Pack suffixed with .app for distribution through Huawei's AppGallery. Each App Pack contains one or more HarmonyOS Ability Package (HAP) files and a pack.info file. The AppGallery allows users to download and install Android apps that are compatible with HarmonyOS, as well as apps that are specifically designed for HarmonyOS in an App Pack. For general differentiation, some HarmonyOS apps are marked with an "HMOS" subscript on the app icon, and an underline beneath the app icon signifies that HarmonyOS service cards are available. Apps that are developed using specific HarmonyOS features are not supported on devices running Android. Both HarmonyOS apps and Android apps are allowed to utilize Huawei Mobile Services as an option. However, the distributed communication technology provided in the HarmonyOS system is made available to HarmonyOS apps, but not Android apps, based on the design of the operating system. As of June 2021, around 500,000 developers had reportedly participated in developing HarmonyOS apps.

Devices

Huawei stated that HarmonyOS would initially be used on devices targeting the Chinese market. The company's former subsidiary brand, Honor, unveiled the Honor Vision line of smart TVs as the first consumer electronics devices to run HarmonyOS. The HarmonyOS 2.0 beta launched on 16 December 2020 supports the P30 series, P40 series, Mate 30 series, Mate 40 series, P50 series and MatePad Pro. HarmonyOS 2.0 was released as an update for the P40 and Mate X2 in June 2021. New Huawei Watch, MatePad Pro and PixLab X1 desktop printer models shipping with HarmonyOS were also unveiled. As of October 2021, HarmonyOS 2.0 had over 150 million users.
Relationship with OpenHarmony, Android and LiteOS

OpenHarmony is an open-source version of HarmonyOS donated by Huawei to the OpenAtom Foundation. It supports devices running a mini system with memory as small as 128 KB, or running a standard system with memory greater than 128 MB. The open-source operating system contains the basic capabilities of HarmonyOS and does not depend on the Android Open Source Project (AOSP). Conversely, HarmonyOS runs on Huawei's proprietary architecture and has used the AOSP code and a Linux kernel in smartphones to enable the operating system to run Android apps, in addition to HarmonyOS apps, on devices shipping with Huawei Mobile Services.

Legal issues

In May 2019, Huawei applied for registration of the trademark "Hongmeng" through the Chinese patent office CNIPA, but the application was rejected pursuant to Article 30 of the PRC Trade Mark Law, on the grounds that the trademark was similar to "CRM Hongmeng" in graphic design and to "Hongmeng" as a Chinese word. Less than a week before Huawei launched HarmonyOS 2.0 and new devices, the Beijing Intellectual Property Court announced its first-instance judgement in May 2021, upholding the decision by CNIPA because the trademark was not sufficiently distinctive in terms of its designated services. However, it was reported that the trademark had officially been transferred from Huizhou Qibei Technology to Huawei by the end of May 2021.

Criticism

In an in-depth analysis of Huawei's developer tools, Ars Technica criticised HarmonyOS running on smartphones as a rebranded version of Android and EMUI with nearly identical code bases. Following the release of the HarmonyOS 2.0 beta, Ars Technica and XDA Developers speculated that the smartphone version of the OS had been forked from Android 10. Ars Technica found that it resembled the existing EMUI software used on Huawei devices, but with all references to "Android" replaced by "HarmonyOS".
It was also noted that the DevEco Studio software shared components and toolchains with Android Studio. When testing the new MatePad Pro in June 2021, Android Authority and The Verge observed similar behavior, including that it was possible to install apps from Android APK files on the HarmonyOS-based tablet, and that it included the Android 10 easter egg, affirming the earlier reports.

Initially, Huawei stated that HarmonyOS was a microkernel-based, distributed OS that was completely different from Android and iOS. A Huawei spokesperson subsequently stated that HarmonyOS supports multiple kernels and uses a Linux kernel if a device has a large amount of RAM, and that the company had taken advantage of a large number of third-party open-source resources, including Linux, to accelerate the development of a comprehensive architecture.

See also

EulerOS
Flyme OS
AliOS

References
1034453
https://en.wikipedia.org/wiki/Podcast
Podcast
A podcast is an episodic series of digital audio files that a user can download to a personal device to listen to at a time of their choosing. Streaming applications and podcasting services provide a convenient and integrated way to manage a personal consumption queue across many podcast sources and playback devices. There also exist podcast search engines, which help users find and share podcast episodes.

A podcast series usually features one or more recurring hosts engaged in a discussion about a particular topic or current event. Discussion and content within a podcast can range from carefully scripted to completely improvised. Podcasts combine elaborate and artistic sound production with thematic concerns ranging from scientific research to slice-of-life journalism. Many podcast series provide an associated website with links and show notes, guest biographies, transcripts, additional resources, commentary, and even a community forum dedicated to discussing the show's content.

The cost to the consumer is low, with many podcasts free to download. Some are underwritten by corporations or sponsored, with the inclusion of commercial advertisements. In other cases, a podcast could be a business venture supported by some combination of a paid subscription model, advertising or product delivered after sale. Because podcast content is often free, podcasting is often classified as a disruptive medium, adverse to the maintenance of traditional revenue models.

Production

A podcast generator maintains a central list of the files on a server as a web feed that one can access through the Internet. The listener or viewer uses special client application software on a computer or media player, known as a podcast client, which accesses this web feed, checks it for updates, and downloads any new files in the series. This process can be automated so that new files are downloaded without user intervention, which may make it seem to listeners as though podcasters broadcast or "push" new episodes to them.
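The update check described above, in which a client reads the web feed and fetches files it has not seen, can be sketched with the Python standard library. The feed, show title, and URLs below are invented for the example, and a real podcast client would fetch the feed over HTTP and handle errors, redirects, and episode GUIDs:

```python
import xml.etree.ElementTree as ET

# A tiny RSS 2.0 podcast feed of the kind a podcast generator publishes.
FEED = """<rss version="2.0"><channel>
  <title>Example Show</title>
  <item><title>Episode 2</title>
    <enclosure url="https://example.org/ep2.mp3" type="audio/mpeg" length="123"/>
  </item>
  <item><title>Episode 1</title>
    <enclosure url="https://example.org/ep1.mp3" type="audio/mpeg" length="456"/>
  </item>
</channel></rss>"""

def new_episodes(feed_xml, already_downloaded):
    # Core of a podcast client's update check: parse the feed and return
    # the enclosure URLs that have not been downloaded yet.
    root = ET.fromstring(feed_xml)
    urls = [item.find("enclosure").get("url") for item in root.iter("item")]
    return [u for u in urls if u not in already_downloaded]

print(new_episodes(FEED, {"https://example.org/ep1.mp3"}))
# prints ['https://example.org/ep2.mp3']
```

The enclosure element, with its url, length, and type attributes, is the standard RSS 2.0 mechanism that carries the audio file reference; everything else about the client (scheduling the check, downloading, queueing) builds on this simple loop.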
Podcast files can be stored locally on the user's device, or streamed directly. There are several different mobile applications that allow people to follow and listen to podcasts; many of these allow users to download podcasts or stream them on demand, and most podcast players or applications allow listeners to skip around the podcast and to control the playback speed.

Podcasting has been considered a converged medium (a medium that brings together audio, the web and portable media players), as well as a disruptive technology that has caused some individuals in radio broadcasting to reconsider established practices and preconceptions about audiences, consumption, production and distribution. Podcasts can be produced at little to no cost and are usually disseminated free of charge, which sets this medium apart from the traditional 20th-century model of "gate-kept" media and their production tools. Podcasters can, however, still monetize their podcasts by allowing companies to purchase ad time. They can also garner support from listeners through crowdfunding websites like Patreon, which provide special extras and content to listeners for a fee.

Etymology

"Podcast" is a portmanteau of "iPod" and "broadcast". The term "podcasting" was coined by The Guardian columnist and BBC journalist Ben Hammersley in early February 2004, while he was writing an article for The Guardian. The term was first used in the audioblogging community in September 2004, when Danny Gregoire introduced it in a message to the iPodder-dev mailing list, from where it was adopted by podcaster Adam Curry. Despite the etymology, the content can be accessed using any computer or similar device that can play media files. Use of the term "podcast" predated Apple's addition of support for podcasting to the iPod or its iTunes software. Some sources have suggested the backronym "portable on demand" for POD, to avoid the loose reference to the iPod.
History

In October 2000, the concept of attaching sound and video files in RSS feeds was proposed in a draft by Tristan Louis. The idea was implemented by Dave Winer, a software developer and an author of the RSS format.

Podcasting, once an obscure method of spreading audio information, has become a recognized medium for distributing audio content, whether for corporate or personal use. Podcasts are similar to radio programs in form, but they exist as audio files that can be played at a listener's convenience, anytime and anywhere. The first application to make this process feasible was iPodderX, developed by August Trometer and Ray Slakinski. By 2007, audio podcasts were doing what was historically accomplished via radio broadcasts, which had been the source of radio talk shows and news programs since the 1930s. This shift occurred as a result of the evolution of internet capabilities along with increased consumer access to cheaper hardware and software for audio recording and editing.

In August 2004, Adam Curry launched his show Daily Source Code. It was a show focused on chronicling his everyday life, delivering news, and discussions about the development of podcasting, as well as promoting new and emerging podcasts. Curry published it in an attempt to gain traction in the development of what would come to be known as podcasting and as a means of testing the software outside of a lab setting. The name Daily Source Code was chosen in the hope that it would attract an audience with an interest in technology.

Daily Source Code started at a grassroots level of production and was initially directed at podcast developers. As its audience became interested in the format, these developers were inspired to create and produce their own projects and, as a result, they improved the code used to create podcasts. As more people learned how easy it was to produce podcasts, a community of pioneer podcasters quickly appeared.
In June 2005, Apple released iTunes 4.9, which added formal support for podcasts, thus negating the need to use a separate program to download and transfer them to a mobile device. Although this made access to podcasts more convenient and widespread, it also effectively ended advancement of podcatchers by independent developers. Additionally, Apple issued cease and desist orders to many podcast application developers and service providers for using the term "iPod" or "Pod" in their products' names.

Within a year, public radio networks like the BBC, CBC Radio One, NPR, and Public Radio International placed many of their radio shows on the iTunes platform. In addition, major local radio stations like WNYC in New York City, WHYY-FM radio in Philadelphia, and KCRW in Los Angeles placed their programs on their websites and later on the iTunes platform. Concurrently, CNET, This Week in Tech, and later Bloomberg Radio, the Financial Times, and other for-profit companies provided podcast content, some using podcasting as their only distribution system.

As of early 2019, the podcasting industry still generated little overall revenue, although the number of people who listen to podcasts continues to grow steadily. Edison Research, which issues the Podcast Consumer quarterly tracking report, estimated that in 2019, 90 million people in the U.S. had listened to a podcast in the last month. In 2020, 58% of the population of South Korea and 40% of the Spanish population had listened to a podcast in the last month, and 12.5% of the UK population had listened to a podcast in the last week.

The form is also acclaimed for its low overhead for a creator to start and maintain a show, merely requiring a good-quality microphone, a computer or mobile device with software to edit and upload the final product, and some form of acoustic quieting. Podcast creators tend to build loyal listener bases because of their direct relationships with the listeners.
IP issues in trademark and patent law

Trademark applications

Between 10 February and 25 March 2005, Shae Spencer Management, LLC of Fairport, New York filed a trademark application to register the term "podcast" for an "online prerecorded radio program over the internet". On September 9, 2005, the United States Patent and Trademark Office (USPTO) rejected the application, citing Wikipedia's podcast entry as describing the history of the term. The company amended its application in March 2006, but the USPTO rejected the amended application as not sufficiently differentiated from the original. In November 2006, the application was marked as abandoned.

Apple trademark protections

On September 26, 2006, it was reported that Apple Inc. had started to crack down on businesses using the string "POD" in product and company names. Apple sent a cease and desist letter that week to Podcast Ready, Inc., which markets an application known as "myPodder". Lawyers for Apple contended that the term "pod" has been used by the public to refer to Apple's music player so extensively that it falls under Apple's trademark cover. Such activity was speculated to be part of a bigger campaign for Apple to expand the scope of its existing iPod trademark, which included trademarking "IPOD", "IPODCAST", and "POD". On November 16, 2006, the Apple Trademark Department stated that "Apple does not object to third-party usage of the generic term 'podcast' to accurately refer to podcasting services" and that "Apple does not license the term". However, no statement was made as to whether or not Apple believed it held rights to the term.

Personal Audio lawsuits

Personal Audio, a company referred to as a "patent troll" by the Electronic Frontier Foundation, filed a patent on podcasting in 2009 for a claimed invention dating to 1996. In February 2013, Personal Audio started suing high-profile podcasters for royalties, including The Adam Carolla Show and the HowStuffWorks podcast.
In October 2013, the EFF filed a petition with the US Patent and Trademark Office to invalidate the Personal Audio patent. On August 18, 2014, the Electronic Frontier Foundation announced that Adam Carolla had settled with Personal Audio. On April 10, 2015, the U.S. Patent and Trademark Office invalidated five claims of Personal Audio's podcasting patent.

Types of podcasts

Enhanced podcasts

An enhanced podcast, also known as a slidecast, is a type of podcast that combines audio with a slide show presentation. It is similar to a video podcast in that it combines dynamically generated imagery with audio synchronization, but it differs in that it uses presentation software to create the imagery and the sequence of display separately from the time of the original audio podcast recording. The Free Dictionary, YourDictionary, and PC Magazine define an enhanced podcast as "an electronic slide show delivered as a podcast". Enhanced podcasts are podcasts that incorporate graphics and chapters. Apple developed an enhanced podcast feature for iTunes called "Audio Hyperlinking", which it patented in 2012. Enhanced podcasts can be used by businesses or in education, and can be created using QuickTime AAC or Windows Media files. Enhanced podcasts were first used in 2006.

Fiction podcast

A fiction podcast (also referred to as a "scripted podcast" or "narrative podcast") is similar to a radio drama, but in podcast form. Fiction podcasts deliver a fictional story, usually told over multiple episodes and seasons, using multiple voice actors, dialogue, sound effects, and music to enrich the story. Fiction podcasts have attracted a number of well-known actors as voice talents, including Demi Moore and Matthew McConaughey, as well as interest from content producers like Netflix, Spotify, Marvel, and DC Comics. While science fiction and horror are quite popular, fiction podcasts cover a full range of literary genres, from romance, comedy, and drama to fantasy, sci-fi, and detective fiction.
Examples of fiction podcasts include The Bright Sessions, Homecoming, Wooden Overcoats and Wolverine: The Long Night.

Podcast novels

A podcast novel (also known as a "serialized audiobook" or "podcast audiobook") is a literary form that combines the concepts of a podcast and an audiobook. Like a traditional novel, a podcast novel is a work of literary fiction; however, it is recorded into episodes that are delivered online over a period of time. The episodes may be delivered automatically via RSS or through a website, blog, or other syndication method. Episodes can be released on a regular schedule, e.g., once a week, or irregularly as each episode is completed. In the same manner as audiobooks, some podcast novels are elaborately narrated with sound effects and separate voice actors for each character, similar to a radio play or scripted podcast, but many have a single narrator and few or no sound effects.

Some podcast novelists give away a free podcast version of their book as a form of promotion. On occasion such novelists have secured publishing contracts to have their novels printed. Podcast novelists have commented that podcasting their novels lets them build audiences even if they cannot get a publisher to buy their books. These audiences then make it easier to secure a printing deal with a publisher at a later date. These podcast novelists also claim the exposure that releasing a free podcast gains them makes up for the fact that they are giving away their work for free.

Video podcasts

A video podcast is a podcast that contains video content. Web television series are often distributed as video podcasts. Dead End Days, a serialized dark comedy about zombies released from 31 October 2003 through 2004, is commonly believed to be the first video podcast.

Live podcasts

A number of podcasts are recorded either in total or for specific episodes in front of a live audience. Ticket sales allow the podcasters an additional way of monetising.
Some podcasts create specific live shows to tour which are not necessarily included in the podcast feed. Events including the London Podcast Festival, SF Sketchfest and others regularly give podcasters a platform to perform live to audiences.

Equipment

The most basic equipment for a podcast is a computer and a microphone; a sound-proofed room and headphones are also helpful. The computer should have a recording or streaming application installed. Typical microphones for podcasting are connected using USB. If the podcast involves two or more people, each person requires a microphone, and a USB audio interface is needed to mix them together. If the podcast includes video (livestreaming), a separate webcam and additional lighting might also be needed.

See also

List of podcast clients
List of podcasting companies
MP3 blog
User-generated content
Uses of podcasting
Webcast

References

Further reading

Geoghegan, Michael W.; Klass, Dan (August 16, 2005). Podcast Solutions: The Complete Guide to Podcasting. Apress.
Meinzer, Kristen (August 6, 2019). So You Want to Start a Podcast: Finding Your Voice, Telling Your Story, and Building a Community That Will Listen. William Morrow.
Morris, Tee; Tomasi, Chuck (September 15, 2017). Podcasting For Dummies. Wiley.

External links

Podcasting Legal Guide: Rules for the Revolution, information by Creative Commons
37504895
https://en.wikipedia.org/wiki/Tianhe-2
Tianhe-2
Tianhe-2 or TH-2 (, i.e. 'Milky Way 2') is a 33.86-petaflops supercomputer located in the National Supercomputer Center in Guangzhou, China. It was developed by a team of 1,300 scientists and engineers. It was the world's fastest supercomputer according to the TOP500 lists for June 2013, November 2013, June 2014, November 2014, June 2015, and November 2015. The record was surpassed in June 2016 by the Sunway TaihuLight. In 2015, plans by Sun Yat-sen University, in collaboration with the Guangzhou district and city administrations, to double the system's computing capacity were halted when the U.S. government rejected Intel's application for an export license for the CPUs and coprocessor boards. In response to the U.S. sanction, China introduced the Sunway TaihuLight supercomputer in 2016, which substantially outperforms Tianhe-2 (the sanction also prompted the upgrade of Tianhe-2 to Tianhe-2A, replacing the U.S. technology); the TaihuLight, built entirely from domestic technology including the Sunway manycore microprocessor, now ranks fourth in the TOP500 list. History The development of Tianhe-2 was sponsored by the 863 High Technology Program, initiated by the Chinese government, the government of Guangdong province, and the government of Guangzhou city. It was built by China's National University of Defense Technology (NUDT) in collaboration with the Chinese IT firm Inspur. Inspur manufactured the printed circuit boards and helped with the installation and testing of the system software. The project was originally scheduled for completion in 2015, but was instead declared operational in June 2013. As of June 2013, the supercomputer had yet to become fully operational; it was expected to reach its full computing capabilities by the end of 2013. In June 2013, Tianhe-2 topped the TOP500 list of fastest supercomputers in the world and was still listed as the fastest machine in the November 2015 list. The computer beat out second-place finisher Titan by nearly a 2-to-1 margin.
Titan, which is housed at the U.S. Department of Energy's Oak Ridge National Laboratory, achieved 17.59 petaflops, while Tianhe-2 achieved 33.86 petaflops. Tianhe-2's performance returned the title of the world's fastest supercomputer to China after Tianhe-I's début in November 2010. The Institute of Electrical and Electronics Engineers said Tianhe-2's win "symbolizes China's unflinching commitment to the supercomputing arms race". In June 2013, China housed 66 of the top 500 supercomputers, second only to the United States' 252 systems. The Chinese total increased to 168 of the top 500 systems by June 2016, overtaking the United States, which fell to 165 of the top 500 supercomputers. Graph500 is an alternative ranking of top supercomputers based on a graph-analysis benchmark. On that benchmark, the system tested at 2,061 gigaTEPS (traversed edges per second); the top system, IBM Sequoia, tested at 15,363 gigaTEPS. Tianhe-2 also took first place in the HPCG benchmark proposed by Jack Dongarra, with 0.580 HPCG petaflops in June 2014. Tianhe-2 has been housed at the National University of Defense Technology. Specifications According to NUDT, Tianhe-2 would have been used for simulation, analysis, and government security applications. With 16,000 computer nodes, each comprising two Intel Ivy Bridge Xeon processors and three Xeon Phi coprocessor chips, it represented the world's largest installation of Ivy Bridge and Xeon Phi chips, counting a total of 3,120,000 cores. (Because of the US sanctions, the Tianhe-2A upgrade switched out the Xeon Phi accelerators for the Matrix-2000; the upgraded, faster system has 4,981,760 cores in total, but still dropped from 2nd to 4th place as new, faster systems were added to the list.) Each of the 16,000 nodes possessed 88 gigabytes of memory (64 used by the Ivy Bridge processors, and 8 gigabytes for each of the Xeon Phi processors). The total CPU plus coprocessor memory was 1,375 TiB (approximately 1.34 PiB).
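The core and memory totals above are easy to sanity-check with a back-of-the-envelope calculation. The per-chip core counts (12 per Ivy Bridge Xeon, 57 per Xeon Phi) are assumptions taken from public TOP500 reporting, not stated in the text:

```python
# Sanity-check of the Tianhe-2 figures quoted above.
# Assumed per-chip core counts: 12 cores per Ivy Bridge Xeon,
# 57 cores per Xeon Phi coprocessor (from public TOP500 reporting).
NODES = 16_000
XEONS_PER_NODE, XEON_CORES = 2, 12
PHIS_PER_NODE, PHI_CORES = 3, 57

total_cores = NODES * (XEONS_PER_NODE * XEON_CORES + PHIS_PER_NODE * PHI_CORES)
assert total_cores == 3_120_000  # matches the 3,120,000 cores cited above

# Memory: 64 GiB per node for the Xeons plus 8 GiB on each of the 3 Phis.
per_node_gib = 64 + PHIS_PER_NODE * 8   # 88 GiB per node
total_tib = NODES * per_node_gib / 1024  # GiB -> TiB
print(total_tib)                         # 1375.0, i.e. the 1,375 TiB above
```

The 1,375 TiB figure follows directly once the quoted "gigabytes" are read as GiB.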
The system has a 12.4 PiB H2FS file system consisting of IO forwarding nodes providing a 1 TiB/s burst rate, backed by a Lustre file system with 100 GiB/s sustained throughput. During the testing phase, Tianhe-2 was laid out in a non-optimal confined space. When assembled at its final location, the system was expected to have a theoretical peak performance of 54.9 petaflops. At peak power consumption, the system itself drew 17.6 megawatts of power; including external cooling, it drew an aggregate of 24 megawatts. The completed computer complex was to occupy 720 square meters of space. The front-end system consisted of 4096 Galaxy FT-1500 CPUs, a SPARC derivative designed and built by NUDT. Each FT-1500 has 16 cores and a 1.8 GHz clock frequency; the chip has a performance of 144 gigaflops and runs on 65 watts. The interconnect, called TH Express-2 and designed by NUDT, utilized a fat-tree topology with 13 switches, each of 576 ports. Tianhe-2 ran on Kylin Linux, a version of the operating system developed by NUDT. Resource management is based on the Slurm Workload Manager. Criticisms Researchers have criticized Tianhe-2 for being difficult to use. "It is at the world's frontier in terms of calculation capacity, but the function of the supercomputer is still way behind the ones in the US and Japan", says Chi Xuebin, deputy director of the Computer Network and Information Centre. "Some users would need years or even a decade to write the necessary code", he added. Tianhe-2 is located in southern China, where the warmer climate could increase electricity consumption by about 10% compared with a location in northern China. See also Tianhe-1 Supercomputing in China References Further reading MilkyWay-2 supercomputer: system and application. Xiangke LIAO, Liquan XIAO, Canqun YANG, Yutong LU. Front. Comput.
Sci., 2014, 8(3): 345–356 DOI:10.1007/s11704-014-3501-3 (6 September 2013) High performance interconnect network for Tianhe system. Xiang-Ke Liao, Zheng-Bin Pang, Ke-Fei Wang, Yu-Tong Lu, Min Xie, Jun Xia, De-Zun Dong, Guang Suo. JOURNAL OF COMPUTER SCIENCE AND TECHNOLOGY 30(2): 259–272 Mar. 2015. DOI:10.1007/s11390-015-1520-7 (30 November 2014) Petascale computers Supercomputing in China X86 supercomputers 64-bit computers
549892
https://en.wikipedia.org/wiki/GRASS%20GIS
GRASS GIS
Geographic Resources Analysis Support System (commonly termed GRASS GIS) is a geographic information system (GIS) software suite used for geospatial data management and analysis, image processing, producing graphics and maps, spatial and temporal modeling, and visualization. It can handle raster, topological vector, image processing, and graphic data. GRASS GIS contains over 350 modules to render maps and images on monitor and paper; manipulate raster and vector data including vector networks; process multispectral image data; and create, manage, and store spatial data. It is licensed and released as free and open-source software under the GNU General Public License (GPL). It runs on multiple operating systems, including macOS, Windows, and Linux. Users can interface with the software features through a graphical user interface (GUI) or by plugging into GRASS via other software such as QGIS. They can also interface with the modules directly through a bespoke shell that the application launches, or by calling individual modules directly from a standard shell. The latest stable release version (LTS) is GRASS GIS 7, which has been available since 2015. The GRASS development team is a multinational group consisting of developers at many locations. GRASS is one of the eight initial software projects of the Open Source Geospatial Foundation. Architecture GRASS supports raster and vector data in two and three dimensions. The vector data model is topological, meaning that areas are defined by boundaries and centroids, and boundaries cannot overlap within one layer. In contrast, OpenGIS Simple Features defines vectors more freely, much as a non-georeferenced vector illustration program does. GRASS is designed as an environment in which tools that perform specific GIS computations are executed. Unlike GUI-based application software, the GRASS user is presented with a Unix shell containing a modified environment that supports execution of GRASS commands, termed modules.
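This module model, in which many small, single-purpose tools run within a shared environment, can be mimicked with a toy Python sketch. It is an analogy only: the names r_mapcalc and r_stats echo real GRASS modules, but the code below is not the actual GRASS API.

```python
# Toy analogy of the GRASS execution model: small single-purpose "modules"
# that all read a shared environment state, chained together by a script.
environment = {"region": {"rows": 2, "cols": 3}, "projection": "EPSG:4326"}

def r_mapcalc(value):
    """Module: create a constant raster covering the current region."""
    region = environment["region"]  # every module reads the shared state
    return [[value] * region["cols"] for _ in range(region["rows"])]

def r_stats(raster):
    """Module: report simple statistics for a raster."""
    cells = [cell for row in raster for cell in row]
    return {"n": len(cells), "sum": sum(cells)}

# A "script" combines modules, Unix-philosophy style.
stats = r_stats(r_mapcalc(5))
print(stats)  # {'n': 6, 'sum': 30}
```

In real GRASS the shared state is the region and projection of the current location, and the glue script would typically be Python or shell calling the compiled modules.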
The environment has a state that includes parameters such as the geographic region covered and the map projection in use. All GRASS modules read this state and additionally are given specific parameters (such as input and output maps, or values to use in a computation) when executed. Most GRASS modules and abilities can be operated via a graphical user interface (provided by a GRASS module), as an alternative to manipulating geographic data in a shell. The GRASS distribution includes over 350 core modules. Over 100 add-on modules created by users are offered on its website. The libraries and core modules are written in C. Other modules are written in C, C++, Python, Unix shell, Tcl, or other scripting languages. The modules are designed under the Unix philosophy and hence can be combined using Python or shell scripting to build more complex or specialized modules, by users, without knowledge of C programming. There is cooperation between the GRASS and Quantum GIS (QGIS) projects. Recent versions of QGIS can be executed within the GRASS environment, allowing QGIS to be used as a user-friendly graphical interface to GRASS that more closely resembles other graphical GIS software than does the shell-based GRASS interface. Another project exists to re-implement GRASS in Java as JGRASS. History GRASS has been under continuous development since 1982 and has involved a large number of federal US agencies, universities, and private companies. The core components of GRASS and the management of integration of efforts into its releases was originally directed by the U.S. Army - Construction Engineering Research Laboratory (USA-CERL), a branch of the U.S. Army Corps of Engineers, in Champaign, Illinois. USA-CERL completed its last release of GRASS as version 4.1 in 1992, and provided five updates and patches to this release through 1995. USA-CERL also wrote the core components of the GRASS 5.0 floating point version. 
The development of GRASS was started by the USA-CERL to meet the need of the United States military for software for land management and environmental planning. A key motive was the National Environmental Policy Act. The development platform was Unix running on VAX hardware. From 1982 through 1995, USA-CERL led the development of GRASS, with the involvement of many others, including universities and other federal agencies. USA-CERL officially ceased its involvement in GRASS after release 4.1 (1995), though development had been limited to minor patches since 1993. A group formed at Baylor University to take over the software, releasing GRASS 4.2. Around this time, a port of the software to Linux was made. In 1998, Markus Neteler, the current project leader, announced the release of GRASS 4.2.1, which offered major improvements including a new graphical user interface. In October 1999, the license of the originally public-domain GRASS software was changed to the GNU GPL with version 5.0. Since then, GRASS has evolved into a powerful software suite with a wide range of applications in many different areas of scientific research and engineering. For example, it is used to estimate potential solar photovoltaic yield with r.sun. As of 2015, GRASS is used in academic and commercial settings around the world, and in many government agencies including NASA, NOAA, USDA, DLR, CSIRO, the National Park Service, the U.S. Census Bureau, USGS, and many environmental consulting companies. The latest stable release version (LTS) is GRASS GIS 7. It was released in 2015, replacing the old stable branch (6.4), which was released in 2011. Version 7 added many new features, including large data support, a fast topological 2D/3D vector engine, powerful vector network analysis, a full temporal framework, and many other features and improvements. GRASS development is split into two branches: stable and developmental.
The stable branch is recommended for most users, while the development branch operates as a testbed for new features. See also Object-based spatial database References Further reading A. P. Pradeepkumar (2003) "Absolute Beginners Guide to Linux/GRASS installation", online publication at the GRASS Development Project website; also available in Chinese translation as GRASS 5.00 安装新手指南 (Beginner's Guide to Installing GRASS 5.00). External links at OSGeo foundation GRASS GIS mirror web site, Italy GRASS GIS mirror at ibiblio, USA GRDSS, Geographic Resources Decision Support System (GRASS GUI) PyWPS (Python Web Processing Service with native support for GRASS) A (not so) short overview of the Geographic Information System GRASS The GRASS story (1987), narrated by William Shatner; provided by the AV-Portal of the German National Library of Science and Technology Cross-platform software Free GIS software Free software programmed in C Free educational software Software that uses wxPython Remote sensing software
852322
https://en.wikipedia.org/wiki/Interactive%20Disassembler
Interactive Disassembler
The Interactive Disassembler (IDA) is a disassembler for computer software which generates assembly language source code from machine-executable code. It supports a variety of executable formats for different processors and operating systems. It also can be used as a debugger for Windows PE, Mac OS X Mach-O, and Linux ELF executables. A decompiler plug-in for programs compiled with a C/C++ compiler is available at extra cost. The latest full version of IDA Pro is commercial, while an earlier and less capable version is available for download free of charge (version 7.6). IDA performs automatic code analysis, using cross-references between code sections, knowledge of parameters of API calls, and other information. However, the nature of disassembly precludes total accuracy, and a great deal of human intervention is necessarily required; IDA has interactive functionality to aid in improving the disassembly. A typical IDA user will begin with an automatically generated disassembly listing and then convert sections from code to data and vice versa, rename, annotate, and otherwise add information to the listing, until it becomes clear what it does. Created as a shareware application by Ilfak Guilfanov, IDA was later sold as a commercial product by DataRescue, a Belgian company, who improved it and sold it under the name IDA Pro. In 2005, Guilfanov founded Hex-Rays to pursue the development of the Hex-Rays Decompiler IDA extension. In January 2008, Hex-Rays assumed the development and support of DataRescue's IDA Pro. Scripting "IDC scripts" make it possible to extend the operation of the disassembler. Some helpful scripts are provided, which can serve as the basis for user-written scripts. Most frequently, scripts are used for additional processing of the generated code. For example, external symbol tables can be loaded, thereby applying the function names of the original source code.
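At its core, the automatic analysis starts from instruction decoding, which for illustration can be reduced to a table-driven lookup. The sketch below is a drastically simplified toy, not IDA's engine; the one-byte x86 opcodes shown are real:

```python
# Minimal table-driven decoder for a handful of one-byte x86 opcodes.
# Real disassemblers handle variable-length instructions, operands,
# prefixes, and cross-references; this sketch only maps single bytes.
OPCODES = {0x90: "nop", 0xC3: "ret", 0xCC: "int3", 0xF4: "hlt"}

def disassemble(code: bytes):
    listing = []
    for offset, byte in enumerate(code):
        # Unrecognized bytes are emitted as raw data, much like an
        # interactive user marking a region as data rather than code.
        mnemonic = OPCODES.get(byte, f"db 0x{byte:02x}")
        listing.append(f"{offset:04x}: {mnemonic}")
    return listing

for line in disassemble(bytes([0x90, 0x90, 0xC3])):
    print(line)
# 0000: nop
# 0001: nop
# 0002: ret
```

The interactive part of IDA is essentially about letting a human correct and annotate the output of such a decoding pass at scale.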
Users have created plugins that allow other common scripting languages to be used instead of, or in addition to, IDC. IdaRUB supports Ruby and IDAPython adds support for Python. As of version 5.4, IDAPython (dependent on Python 2.5) comes preinstalled with IDA Pro. Supported systems/processors/compilers System hosts Windows x86 and ARM Linux x86 x86 Recognized executable file formats COFF and derivatives, including Win32/64/generic PE ELF and derivatives (generic) Mach-O (Mach) NLM (NetWare) LC/LE/LX (OS/2 3.x and various DOS extenders) NE (OS/2 2.x, Win16, and various DOS extenders) MZ (MS-DOS) OMF and derivatives (generic) AIM (generic) raw binary, such as a ROM image or a COM file Instruction sets Intel 80x86 family ARM architecture Motorola 68k and H8 Zilog Z80 MOS 6502 Intel i860 DEC Alpha Analog Devices ADSP218x Angstrem KR1878 Atmel AVR series DEC series PDP11 Fujitsu F2MC16L/F2MC16LX Fujitsu FR 32-bit Family Hitachi SH3/SH3B/SH4/SH4B Hitachi H8: h8300/h8300a/h8s300/h8500 Intel 196 series: 80196/80196NP Intel 51 series: 8051/80251b/80251s/80930b/80930s Intel i960 series Intel Itanium (ia64) series Java virtual machine MIPS: mipsb/mipsl/mipsr/mipsrl/r5900b/r5900l Microchip PIC: PIC12Cxx/PIC16Cxx/PIC18Cxx MSIL Mitsubishi 7700 Family: m7700/m7750 Mitsubishi m32/m32rx Mitsubishi m740 Mitsubishi m7900 Motorola DSP 5600x Family: dsp561xx/dsp5663xx/dsp566xx/dsp56k Motorola ColdFire Motorola HCS12 NEC 78K0/78K0S PA-RISC PowerPC Xenon PowerPC Family SGS-Thomson ST20/ST20c4/ST7 SPARC Family Samsung SAM8 Siemens C166 series TMS320Cxxx series Compiler/libraries (for automatic library function recognition) Borland C++ 5.x for DOS/Windows Borland C++ 3.1 Borland C Builder v4 for DOS/Windows GNU C++ for Cygwin Microsoft C Microsoft QuickC Microsoft Visual C++ Watcom C++ (16/32 bit) for DOS/OS2 ARM C v1.2 GNU C++ for Unix/common Debugging IDA Pro supports a number of debuggers, including: Remote Windows, Linux, and Mac applications (provided by Hex-Rays) allow running an 
executable in its native environment (presumably using a virtual machine for malware) GNU Debugger (gdb) is supported on Linux and OS X, as well as the native Windows debugger A Bochs plugin is provided for debugging simple applications (i.e., damaged UPX or mpress compacted executables) An Intel PIN-based debugger A trace replayer See also Ghidra JEB Radare2 Binary Ninja Cheat engine References Further reading External links Disassemblers Debuggers Software for modeling software
64966407
https://en.wikipedia.org/wiki/Mahmoud%20Samir%20Fayed
Mahmoud Samir Fayed
Mahmoud Samir Fayed (born December 29, 1986) is a computer programmer, known as the creator of the PWCT programming language. PWCT is a free open source visual programming language for software development. He also created Ring, a dynamically typed programming language. He is a researcher at King Saud University. Prior to that, he worked at the Riyadh Techno Valley in the Information and Communication Technology Incubator. Background Fayed started to learn computer programming at 10 years old under the supervision of his father, who works as a computer programmer. He started using the Clipper programming language under MS-DOS. In 2006 he wrote free Arabic programming books. He studied computer science at the Faculty of Electronic Engineering, Menoufia University, Egypt, graduating in 2008. Fayed received a Master's degree in 2017 from the College of Computer and Information Sciences, King Saud University, Saudi Arabia. Career PWCT language In 2005 Fayed began work on a new visual programming language called PWCT and distributed it as a free open source project in 2008. Supernova language In 2009 Fayed began work on a new programming language called Supernova and distributed it as a free open source project in 2010. The language supports writing source code with Arabic and English keywords at the same time, and it is a domain-specific language for GUI development using natural code. Supernova is developed using PWCT. JVLC Journal In 2013 Fayed worked with other researchers as a reviewer for the Journal of Visual Languages and Computing. The journal is published by Elsevier. LASCNN algorithm In 2013-2014 Fayed worked with other researchers on designing the LASCNN algorithm. In graph theory, LASCNN is a Localized Algorithm for Segregation of Critical/Non-critical Nodes. The LASCNN algorithm establishes a k-hop neighbor list and a duplicate-free pairwise connection list based on k-hop information.
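The underlying criterion can be sketched in a few lines of Python. This is a simplified 1-hop illustration of the idea, not the published LASCNN pseudocode:

```python
from collections import deque

def neighbors_stay_connected(graph, node):
    """Sketch of the LASCNN idea (1-hop case): `node` is non-critical iff
    its neighbors can still reach each other without going through it."""
    nbrs = set(graph[node])
    if len(nbrs) <= 1:
        return True  # a leaf or isolated node cannot segregate anything
    start = next(iter(nbrs))
    seen, queue = {start}, deque([start])
    while queue:  # BFS over the graph with `node` removed
        current = queue.popleft()
        for nxt in graph[current]:
            if nxt != node and nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return nbrs <= seen  # all neighbors reachable -> non-critical

path = {"a": ["b"], "b": ["a", "c"], "c": ["b"]}
triangle = {"a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b"]}
print(neighbors_stay_connected(path, "b"))      # False -> b is critical
print(neighbors_stay_connected(triangle, "a"))  # True  -> a is non-critical
```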
If the neighbors stay connected without the node, then the node is non-critical; otherwise, it is critical. Ring language In 2013 Fayed began work on a new programming language called Ring and distributed it as a free open source project in 2016. Ring aims to offer a language focused on helping developers build natural interfaces and declarative DSLs. Ring is influenced by many programming languages, including Lua, Python, C and Ruby. The Ring programming language includes libcurl, Allegro, LibSDL, OpenGL and Qt in the standard library. Papers Fayed et al., PWCT: a novel general-purpose visual programming language in support of pervasive application development, CCF Transactions on Pervasive Computing and Interaction, 2020 Imran, MA Alnuem, MS Fayed, A Alamri, Localized algorithm for segregation of critical/non-critical nodes in mobile ad hoc and sensor networks, Procedia Computer Science, 2013 References Further reading Ayouni (2020) Beginning Ring Programming, Apress (part of Springer Nature) Hassouna (2019) Ring Basics (Arabic Book), Hassouna Academy Fayed (2016) Ring Programming Language, Code Project Fayed (2010) Supernova Programming Language, Code Project External links PWCT and other stuff Ring programming language Supernova programming language Fayed home page at the King Saud University 1986 births Living people Free software programmers Programming language designers Open source people Egyptian computer scientists King Saud University alumni
30628437
https://en.wikipedia.org/wiki/Ideal%20lattice
Ideal lattice
In discrete mathematics, ideal lattices are a special class of lattices and a generalization of cyclic lattices. Ideal lattices naturally occur in many parts of number theory, but also in other areas. In particular, they have a significant place in cryptography. Micciancio defined a generalization of cyclic lattices as ideal lattices. They can be used in cryptosystems to decrease by a square root the number of parameters necessary to describe a lattice, making them more efficient. Ideal lattices are a new concept, but similar lattice classes have been used for a long time. For example, cyclic lattices, a special case of ideal lattices, are used in NTRUEncrypt and NTRUSign. Ideal lattices also form the basis for quantum computer attack resistant cryptography based on the Ring Learning with Errors. These cryptosystems are provably secure under the assumption that the shortest vector problem (SVP) is hard in these ideal lattices. Introduction In general terms, ideal lattices are lattices corresponding to ideals in rings of the form for some irreducible polynomial of degree . All of the definitions of ideal lattices from prior work are instances of the following general notion: let be a ring whose additive group is isomorphic to (i.e., it is a free -module of rank ), and let be an additive isomorphism mapping to some lattice in an -dimensional real vector space (e.g., ). The family of ideal lattices for the ring under the embedding is the set of all lattices , where is an ideal in Definition Notation Let be a monic polynomial of degree , and consider the quotient ring . Using the standard set of representatives , and identification of polynomials with vectors, the quotient ring is isomorphic (as an additive group) to the integer lattice , and any ideal defines a corresponding integer sublattice . An ideal lattice is an integer lattice such that for some monic polynomial of degree and ideal . 
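For the cyclic case f = x^n − 1, the correspondence between ideals and lattices is easy to see concretely: multiplying a polynomial by x cyclically rotates its coefficient vector, so the sublattice corresponding to the ideal generated by g is spanned by g and its rotations. A small Python illustration with toy parameters:

```python
# Toy illustration (n = 4, f = x^n - 1): in Z[x]/(x^n - 1), multiplying a
# polynomial by x cyclically rotates its coefficient vector, so the ideal
# generated by g maps to the cyclic lattice spanned by g's rotations.
n = 4

def mul_by_x(vec):
    """Coefficient vector of x * g(x) mod x^n - 1: a cyclic shift."""
    return [vec[-1]] + vec[:-1]

g = [1, 2, 0, 3]  # g(x) = 1 + 2x + 3x^3
basis = [g]
for _ in range(n - 1):
    basis.append(mul_by_x(basis[-1]))  # rows: g, x*g, x^2*g, x^3*g

for row in basis:
    print(row)
# [1, 2, 0, 3]
# [3, 1, 2, 0]
# [0, 3, 1, 2]
# [2, 0, 3, 1]
```

For f = x^n + 1 the same shift applies with a sign flip on the wrapped coefficient, giving the anti-cyclic (negacyclic) lattices used in many of the constructions below.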
Related properties It turns out that the relevant properties of for the resulting function to be collision resistant are: should be irreducible. the ring norm is not much bigger than for any polynomial , in a quantitative sense. The first property implies that every ideal of the ring defines a full-rank lattice in and plays a fundamental role in proofs. Lemma: Every ideal of , where is a monic, irreducible integer polynomial of degree , is isomorphic to a full-rank lattice in . Ding and Lindner gave evidence that distinguishing ideal lattices from general ones can be done in polynomial time and showed that in practice randomly chosen lattices are never ideal. They only considered the case where the lattice has full rank, i.e. the basis consists of linearly independent vectors. This is not a fundamental restriction because Lyubashevsky and Micciancio have shown that if a lattice is ideal with respect to an irreducible monic polynomial, then it has full rank, as given in the above lemma. Algorithm: Identifying ideal lattices with full-rank bases Data: A full-rank basis Result: true and , if spans an ideal lattice with respect to , otherwise false. Transform into HNF Calculate , , and Calculate the product if only the last column of P is non-zero then set to equal this column else return false if for then use CRT to find and else return false if then return true, else return false where the matrix M is Using this algorithm, it can be seen that many lattices are not ideal lattices. For example, let and , then is ideal, but is not. with is an example given by Lyubashevsky and Micciancio. Performing the algorithm on it and referring to the basis as B, matrix B is already in Hermite Normal Form, so the first step is not needed. The determinant is , the adjugate matrix and finally, the product is At this point the algorithm stops, because all but the last column of have to be zero if would span an ideal lattice.
Use in cryptography Micciancio introduced the class of structured cyclic lattices, which correspond to ideals in polynomial rings , and presented the first provably secure one-way function based on the worst-case hardness of the restriction of Poly(n)-SVP to cyclic lattices. (The problem γ-SVP consists in computing a non-zero vector of a given lattice, whose norm is no more than γ times larger than the norm of a shortest non-zero lattice vector.) At the same time, thanks to its algebraic structure, this one-way function enjoys high efficiency comparable to the NTRU scheme (evaluation time and storage cost). Subsequently, Lyubashevsky and Micciancio and, independently, Peikert and Rosen showed how to modify Micciancio's function to construct an efficient and provably secure collision resistant hash function. For this, they introduced the more general class of ideal lattices, which correspond to ideals in polynomial rings . The collision resistance relies on the hardness of the restriction of Poly(n)-SVP to ideal lattices (called Poly(n)-Ideal-SVP). The average-case collision-finding problem is a natural computational problem called Ideal-SIS, which has been shown to be as hard as the worst-case instances of Ideal-SVP. Provably secure efficient signature schemes from ideal lattices have also been proposed, but constructing efficient provably secure public key encryption from ideal lattices was an interesting open problem. The fundamental idea of using LWE and Ring LWE for key exchange was proposed and filed at the University of Cincinnati in 2011 by Jintai Ding; it provided a state-of-the-art description of a quantum-resistant key exchange using Ring LWE. The paper appeared in 2012, after a provisional patent application had been filed in 2012. In 2014, Peikert presented a key transport scheme following the same basic idea of Ding's, in which the new idea of sending an additional signal for rounding, from Ding's construction, is also utilized.
A digital signature scheme using the same concepts had been devised several years earlier by Vadim Lyubashevsky in "Lattice Signatures Without Trapdoors." Together, the work of Peikert and Lyubashevsky provides a suite of Ring-LWE based, quantum-attack-resistant algorithms with the same security reductions. Efficient collision resistant hash functions The main usefulness of ideal lattices in cryptography stems from the fact that very efficient and practical collision resistant hash functions can be built based on the hardness of finding an approximate shortest vector in such lattices. Peikert and Rosen, as well as Lyubashevsky and Micciancio, independently constructed collision resistant hash functions based on ideal lattices (a generalization of cyclic lattices) and provided fast and practical implementations. These results paved the way for other efficient cryptographic constructions including identification schemes and signatures. Lyubashevsky and Micciancio gave constructions of efficient collision resistant hash functions that can be proven secure based on worst-case hardness of the shortest vector problem for ideal lattices. They defined hash function families as: Given a ring , where is a monic, irreducible polynomial of degree and is an integer of order roughly , generate random elements , where is a constant. The ordered -tuple determines the hash function. It will map elements in , where is a strategically chosen subset of , to . For an element , the hash is . Here the size of the key (the hash function) is , and the operation can be done in time by using the Fast Fourier Transform (FFT), for appropriate choice of the polynomial . Since is a constant, hashing requires time .
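A toy version of this hash family can be written out directly, fixing the ring to Z_p[x]/(x^n + 1) and using schoolbook polynomial multiplication in place of the FFT. The tiny parameters below are illustrative only and offer no security:

```python
import random

# Toy ideal-lattice hash: the key is m random ring elements a_1..a_m in
# Z_p[x]/(x^n + 1); the input is m "short" ring elements d_1..d_m (here
# with 0/1 coefficients); the hash is sum(a_i * d_i) in the ring.
# Schoolbook multiplication stands in for the FFT; insecure toy sizes.
n, p, m = 4, 257, 3

def mul_mod(a, b):
    """Multiply in Z_p[x]/(x^n + 1): x^n wraps around with a sign flip."""
    res = [0] * n
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            k = i + j
            if k < n:
                res[k] = (res[k] + ai * bj) % p
            else:
                res[k - n] = (res[k - n] - ai * bj) % p
    return res

def hash_value(key, data):
    acc = [0] * n
    for a_i, d_i in zip(key, data):
        prod = mul_mod(a_i, d_i)
        acc = [(x + y) % p for x, y in zip(acc, prod)]
    return acc

random.seed(0)
key = [[random.randrange(p) for _ in range(n)] for _ in range(m)]
data = [[random.randrange(2) for _ in range(n)] for _ in range(m)]
print(hash_value(key, data))  # a ring element: n coefficients mod p
```

The collision-resistance argument sketched in the text says that producing two distinct short inputs with the same hash would yield a short vector in an ideal lattice.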
They proved that the hash function family is collision resistant by showing that if there is a polynomial-time algorithm that succeeds with non-negligible probability in finding such that , for a randomly chosen hash function , then a certain problem called the “shortest vector problem” is solvable in polynomial time for every ideal of the ring . Based on the work of Lyubashevsky and Micciancio in 2006, Micciancio and Regev defined the following family of hash functions based on ideal lattices: Parameters: Integers with , and vector f . Key: vectors chosen independently and uniformly at random in . Hash function: given by . Here are parameters, f is a vector in and is a block-matrix with structured blocks . Finding short vectors in on the average (even with just inverse polynomial probability) is as hard as solving various lattice problems (such as approximate SVP and SIVP) in the worst case over ideal lattices, provided the vector f satisfies the following two properties: For any two unit vectors u, v, the vector [F∗u]v has small (say, polynomial in , typically norm. The polynomial is irreducible over the integers, i.e., it does not factor into the product of integer polynomials of smaller degree. The first property is satisfied by the vector corresponding to circulant matrices, because all the coordinates of [F∗u]v are bounded by 1, and hence . However, the polynomial corresponding to is not irreducible because it factors into , and this is why collisions can be efficiently found. So, is not a good choice to get collision resistant hash functions, but many other choices are possible. For example, some choices of f for which both properties are satisfied (and therefore, result in collision resistant hash functions with worst-case security guarantees) are where is prime, and for equal to a power of 2. Digital signatures Digital signature schemes are among the most important cryptographic primitives.
They can be obtained by using the one-way functions based on the worst-case hardness of lattice problems. However, they are impractical. A number of new digital signature schemes based on learning with errors, ring learning with errors and trapdoor lattices have been developed since the learning with errors problem was applied in a cryptographic context. Lyubashevsky and Micciancio gave a direct construction of digital signatures based on the complexity of approximating the shortest vector in ideal (e.g., cyclic) lattices. Their scheme has worst-case security guarantees based on ideal lattices and is the most asymptotically efficient construction known to date, yielding signature generation and verification algorithms that run in almost linear time. One of the main open problems raised by their work is constructing a one-time signature with similar efficiency, but based on a weaker hardness assumption. For instance, it would be desirable to provide a one-time signature with security based on the hardness of approximating the Shortest Vector Problem (SVP) (in ideal lattices) to within a factor of . Their construction is based on a standard transformation from one-time signatures (i.e. signatures that allow one to securely sign a single message) to general signature schemes, together with a novel construction of a lattice-based one-time signature whose security is ultimately based on the worst-case hardness of approximating the shortest vector in all lattices corresponding to ideals in the ring for any irreducible polynomial . Key-Generation Algorithm: Input: , irreducible polynomial of degree . Set , , For all positive , let the sets and be defined as: such that such that Choose uniformly random Pick a uniformly random string If then Set else Set to the position of the first 1 in the string end if Pick independently and uniformly at random from and respectively Signing Key: .
Verification Key: ….
Signing Algorithm:
Input: Message m such that …; signing key ….
Output: ….
Verification Algorithm:
Input: Message m; signature …; verification key ….
Output: "ACCEPT" if …, and "REJECT" otherwise.

The SWIFFT hash function

The hash function above is quite efficient and can be computed in quasi-linear time using the Fast Fourier Transform (FFT) over the complex numbers. However, in practice, this carries a substantial overhead. The SWIFFT family of hash functions defined by Micciancio and Regev is essentially a highly optimized variant of the hash function above using the FFT in Z_q. The vector f is set to (1, 0, …, 0) for n equal to a power of 2, so that the corresponding polynomial x^n + 1 is irreducible. Let q be a prime number such that 2n divides q − 1, and let W be an invertible matrix over Z_q to be chosen later. The SWIFFT hash function maps a key consisting of m/n vectors chosen uniformly from Z_q^n and an input y to W · h_A(y), where h is as before. Multiplication by the invertible matrix W maps a uniformly chosen vector to a uniformly chosen vector. Moreover, W · h_A(y) = W · h_A(z) if and only if h_A(y) = h_A(z). Together, these two facts establish that finding collisions in SWIFFT is equivalent to finding collisions in the underlying ideal lattice function h, and the claimed collision resistance property of SWIFFT is supported by the connection to worst-case lattice problems on ideal lattices. The algorithm of the SWIFFT hash function is:

Parameters: Integers n, m, d, q such that n is a power of 2, q is prime, and 2n divides q − 1.
Key: m/n vectors chosen independently and uniformly at random in Z_q^n.
Input: m/n vectors y_i with entries in {0, …, d − 1}.
Output: a vector in Z_q^n computed as a sum of component-wise vector products.

Learning with errors (LWE)

Ring-LWE

The learning with errors (LWE) problem has been shown to be as hard as worst-case lattice problems and has served as the foundation for many cryptographic applications. However, these applications are inefficient because of an inherent quadratic overhead in the use of LWE.
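For illustration, the compression function can be sketched with the published SWIFFT parameter sizes (n = 64, q = 257, 16 key vectors, binary input digits). The sketch below is not the real SWIFFT code: it multiplies in Z_q[x]/(x^n + 1) by direct negacyclic convolution instead of the FFT in Z_q, and it omits the invertible matrix W (equivalently, takes W to be the identity), which by the argument above does not affect collisions.

```python
# Unoptimized sketch of a SWIFFT-style compression function.
# n = 64, q = 257, 16 key vectors, binary inputs (d = 2); the FFT speed-up
# and the matrix W are omitted for clarity.
N, Q, BLOCKS = 64, 257, 16

def negacyclic_mul(a, y):
    """a(x)*y(x) mod (x^N + 1, Q): wrapping past x^N flips the sign."""
    out = [0] * N
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                out[k] = (out[k] + a[i] * y[j]) % Q
            else:
                out[k - N] = (out[k - N] - a[i] * y[j]) % Q
    return out

def swifft_like(key, msg_bits):
    """Compress 1024 input bits (16 blocks of 64) to a vector in Z_257^64."""
    assert len(msg_bits) == N * BLOCKS and len(key) == BLOCKS
    acc = [0] * N
    for i in range(BLOCKS):
        block = msg_bits[i * N:(i + 1) * N]
        acc = [(s + p) % Q for s, p in zip(acc, negacyclic_mul(key[i], block))]
    return acc
```

Because the function is a sum of ring products, it is linear in the input over Z_q, which is what ties its collisions to short vectors in the corresponding ideal lattice.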
To get truly efficient LWE applications, Lyubashevsky, Peikert and Regev defined an appropriate version of the LWE problem in a wide class of rings and proved its hardness under worst-case assumptions on ideal lattices in these rings. They called their LWE version ring-LWE.

Let f(x) = x^n + 1, where the security parameter n is a power of 2, making f(x) irreducible over the rationals. (This particular f(x) comes from the family of cyclotomic polynomials, which play a special role in this work.) Let R = Z[x]/⟨f(x)⟩ be the ring of integer polynomials modulo f(x). Elements of R (i.e., residues modulo f(x)) are typically represented by integer polynomials of degree less than n. Let q be a sufficiently large public prime modulus (bounded by a polynomial in n), and let R_q = Z_q[x]/⟨f(x)⟩ be the ring of integer polynomials modulo both f(x) and q. Elements of R_q may be represented by polynomials of degree less than n whose coefficients are from {0, …, q − 1}.

In the above-described ring, the R-LWE problem may be described as follows. Let s ∈ R_q be a uniformly random ring element, which is kept secret. Analogously to standard LWE, the goal of the attacker is to distinguish arbitrarily many (independent) 'random noisy ring equations' from truly uniform ones. More specifically, the noisy equations are of the form (a, b ≈ a · s), where a is uniformly random in R_q and the product a · s is perturbed by some 'small' random error term, chosen from a certain distribution over R.

They gave a quantum reduction from approximate SVP (in the worst case) on ideal lattices in R to the search version of ring-LWE, where the goal is to recover the secret s (with high probability, for any s) from arbitrarily many noisy products. This result follows the general outline of Regev's iterative quantum reduction for general lattices, but ideal lattices introduce several new technical roadblocks in both the 'algebraic' and 'geometric' components of the reduction. They used algebraic number theory, in particular, the canonical embedding of a number field and the Chinese Remainder Theorem, to overcome these obstacles.
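The 'noisy ring equations' are easy to sketch. The toy generator below (illustrative parameters; a small uniform error standing in for the narrow Gaussian-like distribution used in the actual hardness proof) produces pairs (a, b = a·s + e) over R_q = Z_q[x]/⟨x^n + 1⟩:

```python
# Toy ring-LWE sample generator over R_q = Z_q[x]/(x^n + 1), n a power of 2.
# The error is small uniform noise, a stand-in for the real error distribution.
import random

N, Q = 16, 12289          # 12289 is prime and 12289 ≡ 1 (mod 2N)

def ring_mul(a, b):
    """Multiply in Z_Q[x]/(x^N + 1) by negacyclic convolution."""
    out = [0] * N
    for i in range(N):
        for j in range(N):
            k = i + j
            if k < N:
                out[k] = (out[k] + a[i] * b[j]) % Q
            else:
                out[k - N] = (out[k - N] - a[i] * b[j]) % Q
    return out

def rlwe_sample(s):
    """One 'random noisy ring equation' (a, b ≈ a*s)."""
    a = [random.randrange(Q) for _ in range(N)]
    e = [random.randint(-2, 2) % Q for _ in range(N)]   # small error term
    b = [(p + x) % Q for p, x in zip(ring_mul(a, s), e)]
    return a, b
```

The decision problem asks the attacker to tell such pairs apart from pairs where b is uniformly random in R_q.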
They got the following theorem:

Theorem. Let K be an arbitrary number field of degree n. Let … be arbitrary, and let the (rational) integer modulus q be such that …. There is a probabilistic polynomial-time quantum reduction from …-SVP to …-LWE, where ….

In 2013, Güneysu, Lyubashevsky, and Pöppelmann proposed a digital signature scheme based on the Ring Learning with Errors problem. In 2014, Peikert presented a Ring Learning with Errors Key Exchange (RLWE-KEX) in his paper, "Lattice Cryptography for the Internet." This was further developed by the work of Singh.

Ideal-LWE

Stehlé, Steinfeld, Tanaka and Xagawa defined a structured variant of the LWE problem (Ideal-LWE) to describe an efficient public key encryption scheme based on the worst-case hardness of approximate SVP in ideal lattices. This is the first CPA-secure public key encryption scheme whose security relies on the hardness of the worst-case instances of Ideal-SVP against subexponential quantum attacks. It achieves asymptotically optimal efficiency: the public/private key length is Õ(n) bits and the amortized encryption/decryption cost is Õ(1) bit operations per message bit (encrypting many bits at once). The security assumption here is that Ideal-SVP cannot be solved by any subexponential-time quantum algorithm. It is noteworthy that this is stronger than standard public key cryptography security assumptions. On the other hand, contrary to most public-key cryptography, lattice-based cryptography allows security against subexponential quantum attacks. Most of the cryptosystems based on general lattices rely on the average-case hardness of learning with errors (LWE). Their scheme is based on a structured variant of LWE, which they call Ideal-LWE. They needed to introduce some techniques to circumvent two main difficulties that arise from the restriction to ideal lattices.
Firstly, the previous cryptosystems based on unstructured lattices all make use of Regev's worst-case to average-case classical reduction from the Bounded Distance Decoding problem (BDD) to LWE (this is the classical step in the quantum reduction from SVP to LWE). This reduction exploits the unstructuredness of the considered lattices, and does not seem to carry over to the structured lattices involved in Ideal-LWE. In particular, the probabilistic independence of the rows of the LWE matrices allows one to consider a single row. Secondly, the other ingredient used in previous cryptosystems, namely Regev's reduction from the computational variant of LWE to its decisional variant, also seems to fail for Ideal-LWE: it relies on the probabilistic independence of the columns of the LWE matrices.

To overcome these difficulties, they avoided the classical step of the reduction. Instead, they used the quantum step to construct a new quantum average-case reduction from SIS (the average-case collision-finding problem) to LWE. It also works from Ideal-SIS to Ideal-LWE. Combined with the reduction from worst-case Ideal-SVP to average-case Ideal-SIS, they obtained a quantum reduction from Ideal-SVP to Ideal-LWE. This shows the hardness of the computational variant of Ideal-LWE. Because they did not obtain the hardness of the decisional variant, they used a generic hardcore function to derive pseudorandom bits for encryption. This is why they needed to assume the exponential hardness of SVP.

Fully homomorphic encryption

A fully homomorphic encryption (FHE) scheme is one which allows for computation over encrypted data, without first needing to decrypt. The problem of constructing a fully homomorphic encryption scheme was first put forward by Rivest, Adleman and Dertouzos in 1978, shortly after the invention of RSA by Rivest, Shamir and Adleman. An encryption scheme (Gen, Enc, Dec, Eval) is homomorphic for a class of circuits C if, for any circuit C ∈ C and inputs x_1, …, x_t, given a key pair (pk, sk) and ciphertexts c_i = Enc(pk, x_i), it holds that Dec(sk, Eval(pk, C, c_1, …, c_t)) = C(x_1, …, x_t).
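A toy example makes the homomorphism condition concrete. The sketch below is a hypothetical LWE-flavoured symmetric scheme of my own construction that is homomorphic only for a bounded number of additions mod 2 (nothing like a fully homomorphic scheme): decrypting the sum of two ciphertexts yields the XOR of the plaintexts, as long as the accumulated error stays small.

```python
# Toy symmetric LWE-style encryption, homomorphic for a few XORs.
# Illustrative only: parameters and error bounds are chosen so that a
# single homomorphic addition always decrypts correctly.
import random

N, Q = 32, 4093

def keygen():
    return [random.randrange(Q) for _ in range(N)]

def enc(s, bit):
    """Ciphertext (a, <a,s> + e + bit*Q/2): the bit hides in the high part."""
    a = [random.randrange(Q) for _ in range(N)]
    e = random.randint(-4, 4)
    b = (sum(x * y for x, y in zip(a, s)) + e + bit * (Q // 2)) % Q
    return a, b

def dec(s, ct):
    a, b = ct
    v = (b - sum(x * y for x, y in zip(a, s))) % Q
    return 1 if Q // 4 < v < 3 * Q // 4 else 0   # round to a multiple of Q/2

def add_ct(c1, c2):
    """Homomorphic addition: plaintexts add mod 2, and the errors add too."""
    (a1, b1), (a2, b2) = c1, c2
    return [(x + y) % Q for x, y in zip(a1, a2)], (b1 + b2) % Q
```

Each addition grows the error, which is why a scheme like this supports only bounded computation; Gentry's breakthrough was showing how to refresh ciphertexts ("bootstrapping") to evaluate circuits of any size.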
The scheme is fully homomorphic if it is homomorphic for all circuits of size polynomial in the scheme's security parameter. In 2009, Gentry proposed the first solution to the problem of constructing a fully homomorphic encryption scheme. His scheme was based on ideal lattices.

See also

Lattice-based cryptography
Homomorphic encryption
Ring learning with errors key exchange
Post-quantum cryptography
Short integer solution problem

References
Nvidia GameWorks
Nvidia GameWorks is a middleware software suite developed by Nvidia. The Visual FX, PhysX and OptiX SDKs provide a wide range of enhancements pre-optimized for Nvidia GPUs. GameWorks is partially open-source. The competing solution, in development by AMD, is GPUOpen, which was announced as free and open-source software under the MIT License.

Components

Nvidia GameWorks consists of several main components:
VisualFX: for rendering effects such as smoke, fire, water, depth of field, soft shadows, HBAO+, TXAA, FaceWorks and HairWorks.
PhysX: for physics, destruction, particle and fluid simulations.
OptiX: for baked lighting and general-purpose ray tracing.
Core SDK: for facilitating development on Nvidia hardware.

In addition, the suite contains sample code for DirectX and OpenGL developers, as well as tools for debugging, profiling, optimization and Android development.

See also

PhysX
GPUOpen
TressFX
Havok (software)

References

External links

Nvidia GameWorks
Hudson Warehouse
The Hudson Warehouse is a theatre company in New York City that presents classical plays that are accessible, affordable, and exciting to the public. They perform three outdoor plays in the summer months in Riverside Park and fall/winter productions at the Goddard Riverside Bernie Wohl Center. Their mission is to bring the arts to those for whom they are not otherwise accessible; to this end, they perform in jails in collaboration with the New York Department of Corrections. So far, they have performed for inmates in Manhattan, Brooklyn, the Bronx, and at Rikers. Known as "The Other Shakespeare in the Park," the company was founded in 2004 by Nicholas Martin-Smith, who serves as its artistic director. Summer performances take place on the North Patio of the Soldiers' and Sailors' Monument in Riverside Park, at West 89th Street and Riverside Drive in New York City, along the Hudson River. Hudson Warehouse is the resident theater company of the Goddard Riverside Bernie Wohl Center, and its fall/winter season consists of two productions. The theater's full-year season includes five productions of the classics, including Shakespeare, Euripides and Chekhov. Along with Martin-Smith, the Executive Director Susane Lee, and Associate Artistic Director George K. Wells, the Artists in Residence include Bruce Barton, David Palmer Brown, Patrina Caruana, Emily Sarah Cohn, Karen Collazzo, Tommy Demenkoff, Nick DeVita, Ron Hatcher, Roxann Kraemer, Nathan Mattingly, Vince Phillip, Paul Singleton, Roger Dale Stude, and resident costumer John-Ross Winter.

History

Hudson Warehouse's first season in 2004 consisted of a single modest production of The Tempest, performed over two weeks that July. The season has since extended to the whole summer, with three productions that each have a month-long run.
Past productions include Hamlet, Midsummer Night's Dream, The Taming of the Shrew, Pericles, Prince of Tyre, Macbeth, Romeo and Juliet, Merry Wives of Windsor, Cyrano and Trojan Women, adapted from the tragedy by Euripides. Hudson Warehouse productions in 2012 were The Comedy of Errors, The Rover, and Richard III. The company also holds readings and workshops throughout the year, including its 'Shakespeare in the Bar' series and the 'Writers-a-Go-Go' (WAGG) contemporary play reading series. In addition to the summer season and other Shakespeare readings throughout the year, the company also teaches workshops on the classics to high school students, brings its productions into schools, runs the 'Hudson Warehouse Shakespeare Workout' and teaches playwriting to 4th and 5th graders as part of the 'Afterschool Program' at Goddard Riverside. In May 2013 Hudson Warehouse was honored as the recipient of Goddard Riverside's 'Good Neighbor Award' "In Recognition of Your Extraordinary Deeds in Helping Build a Better Community." In the autumn of that year Hudson Warehouse became the Resident Theater Company at Goddard Riverside's Bernie Wohl Arts Center at 647 Columbus Avenue on the Upper West Side of Manhattan. In November 2013 they continued their 11th season with a remounting of their June 2013 production of The Complete Works of William Shakespeare (abridged), by Adam Long, Daniel Singer and Jess Winfield, at the Bernie Wohl Center, directed by Susane Lee. The cast included Ian Harkins, Rafe Terrizzi and Nicholas Martin-Smith. This was followed by a production of Julius Caesar in March 2014. In May 2018 the company was presented with a Proclamation from the City of New York by City Council Member Helen Rosenthal in honor of their 15th Anniversary Season and their commitment to making art accessible to the community.
Productions

2021: Love's Labour's Lost, by William Shakespeare, directed by Nicholas Martin-Smith; The Count of Monte Cristo, adapted by Susane Lee, directed by Nicholas Martin-Smith
2019: Antony and Cleopatra, by William Shakespeare, directed by George K. Wells; The Man in the Iron Mask, adapted by Susane Lee, directed by Nicholas Martin-Smith; The Merry Wives of Windsor, by William Shakespeare
2018: Trojan Women, adapted and directed by Nicholas Martin-Smith, based on the works of Euripides and Charles Mee; Romeo and Juliet, by William Shakespeare, directed by Nicholas Martin-Smith; The Three Musketeers: Twenty Years Later, adapted and written by Susane Lee from The d'Artagnan Romances of Alexandre Dumas, directed by Nicholas Martin-Smith; Hamlet, by William Shakespeare, adapted and directed by George K. Wells
2017: The Triumph of Love, by Pierre de Marivaux, directed by Emily Rose Parman; The Three Musketeers, adapted and written by Susane Lee from the novel by Alexandre Dumas, directed by Nicholas Martin-Smith; Henry V, by William Shakespeare, directed by Nicholas Martin-Smith
2016: Much Ado About Nothing, by Wm. Shakespeare, directed by Nicholas Martin-Smith; Lysisarah: "Let's Make America Great Again!", based on Lysistrata by Aristophanes, adapted and directed by Susane Lee; Othello, by Wm. Shakespeare, directed by Nicholas Martin-Smith
2015: Henry 4.1, by Wm. Shakespeare, directed by Nicholas Martin-Smith; She Stoops to Conquer, by Oliver Goldsmith, directed by Ian Harkins; Titus Andronicus, by Wm. Shakespeare, directed by Nicholas Martin-Smith
2014: The Life and Death of King John, by Wm. Shakespeare; The Importance of Being Earnest, directed by Nicholas Martin-Smith; The Winter's Tale, directed by Nicholas Martin-Smith
2013: The Complete Works of William Shakespeare (abridged), directed by Susane Lee; King Lear, directed by Jesse Michael Mothershed; The Three Musketeers, adapted by Susane Lee, directed by Nicholas Martin-Smith
2012: The Comedy of Errors, directed by Susane Lee; The Rover, directed by Jesse Michael Mothershed; Richard III, directed by Nicholas Martin-Smith
2011: The Merry Wives of Windsor, directed by Eric Nightengale; The Seagull, directed by Tommy Demenkoff; The Taming of the Shrew, directed by Jesse Michael Mothershed
2010: Trojan Women, adapted and directed by Nicholas Martin-Smith; Cyrano, adapted by Joseph Hamel and directed by Nicholas Martin-Smith; Romeo and Juliet, directed by Nicholas Martin-Smith
2009: The Tempest, directed by Jerrod Bogard; Hamlet, adapted by Joseph Hamel and directed by Nicholas Martin-Smith; Midsummer Night's Dream, directed by Richard Harden
2008: Much Ado About Nothing, directed by Nicholas Martin-Smith; Pericles, Prince of Tyre, directed by David Fuller
2007: As You Like It, directed by Nicholas Martin-Smith; Macbeth, directed by Richard Harden
2006: Love's Labour's Lost, directed by Nicholas Martin-Smith; The Bacchae
2005: Twelfth Night, directed by Nicholas Martin-Smith
2004: The Tempest, directed by Nicholas Martin-Smith

Critical reception

"NewYorkCentric has attended several Hudson Warehouse productions over the years and feels strongly that it is one of the greatest free cultural institutions in the city." HappentoLikeNewYork.org called the Hudson Warehouse 2010 adaptation of Romeo and Juliet, set in the turmoil of the modern Middle East, "a 'savage' version of the classic tale set in the sands on Afghanistan." Calling it a "high-intensity cage match," it declared, "the cast of Romeo and Juliet at Hudson Warehouse made me a believer."
Of the company's choice to do Cyrano, Newsday said: "Somebody dares to greet the elements with words by someone other than Shakespeare. Nicholas Martin-Smith directs this revival of Edmond Rostand's irresistible late-Romantic swashbuckling tragedy about the heroism and beauty lost behind a nose." Richard Grayson of 'Dumbo Books of Brooklyn' wrote “The cast always works as a unit, working together – yes, taking their star turns and getting their individual laughs – but ultimately in service of presenting a believable world and moving the story along.” Of the 2011 production of Merry Wives of Windsor, Steven McElroy of the New York Times said, "Clouds loomed over Riverside Park in Manhattan ... but the stars (were) aligned for the cast of this month's Hudson Warehouse production of The Merry Wives of Windsor." The company's production of Hamlet was noted for using multiple actors to play the role of Hamlet. "Most of us are aware that no one Hamlet can express all the manifold variations of the character ... so how about three Hamlets, deployed artfully? These three Hamlets invited multiplicity simply through the actorly presence of each," noted Bernice Kliman in her Shakespeare Newsletter article. The 'L' Magazine voted Hudson Warehouse the “Best Out Door Theatre” in New York City, saying "a combination of the excellent hardworking cast and the sunsets over the Hudson that serve as their backdrop makes these outdoor productions a must." Shakespeare in the Bar and Writers-a-Go-Go reading series The company's 'Writers A Go-Go' was created by executive director Susane Lee in 2012 to promote the work of contemporary playwrights. It features readings of plays by new and emerging writers in an informal barroom setting. It also co-produces with Goddard Riverside's Community Arts Programs both the Valentines Day Monologue Festival:'The Many Faces of Love,' as well as the annual Veteran's Day commemoration. 
The series is run by Hudson Warehouse artist in residence Roger Dale Stude. Since 2010 Hudson Warehouse has also brought its work into the barroom in its Shakespeare in the Bar series, where the acting troupe sit among the bar patrons as if customers themselves as they perform the readings. Regarding the series, John Marshall of the Huffington Post has written, "A natural outgrowth of the Warehouse's critically acclaimed summer productions at the Sailors and Soldiers' Monument, Shakespeare in the Bar seeks to create the same intimate, accessible atmosphere, not just for Shakespeare, but for other classics as well." The 2012/2013 'Shakespeare in the Bar' season included Richard II, Lysistrata by Aristophanes, Othello, The Winter's Tale and Hedda Gabler by Henrik Ibsen. Earlier seasons included productions of The Taming of the Shrew, The Seagull by Anton Chekhov to mark Chekhov's 151st birthday, Henry V, The Merry Wives of Windsor, Richard II, Macbeth, and Tartuffe by Molière.

References

External links

Hudson Warehouse Facebook page
Justin Cappos
Justin Cappos (born February 27, 1977) is a computer scientist and cybersecurity expert whose data-security software has been adopted by a number of widely used open-source projects. His research centers on software update systems, security, and virtualization, with a focus on real-world security problems. Cappos has been a faculty member at New York University Tandon School of Engineering since 2011, and was awarded tenure in 2017. Now an associate professor in the Department of Computer Science and Engineering, he has introduced a number of new software products and system protocols as head of the school's Secure Systems Laboratory. These include technologies that detect and isolate security faults, secure private data, provide a secure mechanism for fixing software flaws in different contexts, and even foster a deeper understanding about how to help programmers avoid security flaws in the first place. Recognizing the practical impact of his work, Popular Science selected Cappos as one of its Brilliant 10 in 2013, naming him as one of 10 brilliant scientists under 40. His awareness of the risks of today's connected culture—a knowledge strong enough to keep him from owning a smartphone or other connected device, or from using social media like Facebook and Twitter—has led to numerous requests to serve as an expert commentator on issues of cyber security and privacy for local, national, and international media. Education and early research initiatives The topic of Cappos' Ph.D. dissertation at the University of Arizona was the Stork Project, a software package manager he built with John H. Hartman, a professor in the Department of Computer Science. Stork is still used today in some applications, but, more importantly, the project called attention to the need for improved security for software update processes, a research area Cappos has continued to pursue. 
While a post-doctoral researcher at the University of Washington in 2009, Cappos also developed a peer-to-peer computing platform called Seattle, which allows device-to-device connectivity in a decentralized network. Seattle is currently used by thousands of developers, who can access, download, and use the program on any type of smart device. In addition, spin-off technologies, such as Sensibility Testbed, have extended the use of Seattle's security and enforced privacy protection strategies, allowing researchers to collect data from sensors at no risk to the privacy of the device owner.

Compromise-resilient strategies

In 2010, Cappos developed The Update Framework (TUF), a flexible software framework that builds system resilience against key compromises and other attacks that can threaten the integrity of a repository. TUF was designed for easy integration into the native programming languages of existing update systems, and since its inception, it has been adopted or is in the process of being integrated by a number of high-profile open-source projects. One of the more significant earlier adoptions was Docker Content Trust, an implementation of the Notary project from Docker, which deploys Linux containers. Notary, which is built on TUF, can certify the validity of the sources of Docker images. In October 2017, Notary and TUF were both adopted as hosted projects by the Linux Foundation as part of its Cloud Native Computing Foundation. In December 2019, TUF became the first specification and first security-focused project to graduate from CNCF. TUF has also been standardized in Python, and been independently implemented in the Go language by Flynn, an open-source platform as a service (PaaS) for running applications in production. To date, the list of tech companies and organizations using TUF includes IBM, VMware, Digital Ocean, Microsoft, Google, Amazon, Leap, Kolide, Docker, and Cloudflare.
Another significant compromise-resilient software update framework by Cappos is Uptane, a TUF-adapted technology launched in 2017. Uptane is designed to secure software updates for automobiles, particularly those delivered via over-the-air programming. Developed in partnership with the University of Michigan Transportation Research Institute and the Southwest Research Institute, and in collaboration with stakeholders in industry, academia, and government, Uptane modifies the TUF design to meet the specific security needs of the automotive industry. These needs include accommodating computing units that vary greatly in terms of memory, storage capability, and access to the Internet, while preserving the customizability manufacturers need to design cars for specific client usage. To date, Uptane has been integrated into OTA Plus and ATS Garage, two over-the-air software update products from Advanced Telematic Systems, and is a key security component of the OTAmatic program created by Airbiquity. The Airbiquity project was honored with a BIG Award for Business in the 2017 New Product Category in January 2018, and Popular Science magazine named Uptane one of the top 100 inventions for 2017. The first standard volume issued for the project, entitled IEEE-ISTO 6100.1.0.0 Uptane Standard for Design and Implementation, was released on July 31, 2019. Uptane is now a Joint Development Foundation project of the Linux Foundation, operating under the formal title of Joint Development Foundation Projects, LLC, Uptane Series.

Other significant research projects

In 2016, Cappos introduced in-toto, an open metadata standard that provides documentation of the end-to-end security of a software supply chain. The framework gathers both key information and signatures from all who can access a piece of software through the various stages of coding, testing, building and packaging, thus making transparent all the steps that were performed, by whom and in what order.
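The record-and-sign idea can be sketched in a few lines. The snippet below is a hypothetical, simplified illustration of in-toto's concept — it is not the real in-toto metadata format or API, and it uses an HMAC where in-toto uses per-functionary public-key signatures — in which each step signs the hashes of what it consumed ("materials") and produced ("products"), and a verifier replays the chain:

```python
# Hypothetical sketch of in-toto's idea (not the real in-toto format/API):
# every supply-chain step records hashes of its inputs and outputs and signs
# the record, so a verifier can check the whole chain end to end.
import hashlib, hmac, json

def digest(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def sign_link(step_name, materials, products, key: bytes):
    """Record one step: hashes of inputs ('materials') and outputs ('products')."""
    link = {
        "step": step_name,
        "materials": {name: digest(data) for name, data in materials.items()},
        "products": {name: digest(data) for name, data in products.items()},
    }
    payload = json.dumps(link, sort_keys=True).encode()
    return link, hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_chain(links, key: bytes):
    """Check every signature, and that each step consumed what the previous produced."""
    prev_products = None
    for link, sig in links:
        payload = json.dumps(link, sort_keys=True).encode()
        good = hmac.new(key, payload, hashlib.sha256).hexdigest()
        if not hmac.compare_digest(sig, good):
            return False
        if prev_products is not None and link["materials"] != prev_products:
            return False
        prev_products = link["products"]
    return True
```

Because each record is signed, an attacker can neither slip altered artifacts between steps nor rewrite the metadata describing them without invalidating the chain.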
By creating accountability, in-toto can prevent attackers from either directly introducing malicious changes into the code, or from altering the metadata that keeps the record of those changes along the supply chain. in-toto has collaborated with open source communities such as Docker and openSUSE. Datadog utilizes both in-toto and TUF. In December 2020, the framework released its first major version.

While working on in-toto, Cappos and the SSL research group identified metadata manipulation as a new threat against version control systems like Git. His team has developed several new approaches to address this problem, including a defense scheme that mitigates these attacks by maintaining a cryptographically signed log of relevant developer actions. By documenting the state of the repository at a particular time when an action is taken, developers are given a shared history, so irregularities are easily detected. One recent accomplishment in this research arena is Arch Linux's integration of a patch that checks for invalid tags in Git into the next release of its pacman utility. More recently, Cappos and his collaborators have focused on the development of a browser extension that can assure users of convenient web-based hosting services, such as GitHub or GitLab, that the server will faithfully carry out their requested actions.

Another Cappos project, developed in 2014, introduced a method to make passwords for databases harder to crack. PolyPasswordHasher is a secure scheme that interrelates stored password data, forcing hackers to crack passwords in sets. By making it significantly harder for attackers to figure out the necessary threshold of passwords needed to gain access, PolyPasswordHasher-enabled databases become very difficult to breach. PPH is currently used in several projects, including the Seattle Clearinghouse and BioBank. Implementations are available for seven languages, including Java, Python, C, and Ruby.

References

External links

Prof. Justin Cappos, New York University profile page
Justin Cappos, New York University Tandon School of Engineering profile page
Secure Systems Laboratory website

Selected publications

List of Publications from Microsoft Academic Search

Media citations and commentary

WLIW 21-PBS TV Long Island (26 March 2018). SciTech Now. "Protecting today's highly computerized cars from hackers"
The Verge (14 Feb 2018). O'Kane, Sean. "Chrysler's over-the-air update fiasco is limited to the Northeast, but customers are still waiting for a fix"
WBRC Ch.6-TV (4 Jan 2018). Gauntt, Joshua. "Charging your phone in ride-sharing services, airports could put your information at risk"
Healthcare Analytics (29 Dec 2017). Steptoe, George. "The Worst Healthcare Cybersecurity Breaches of 2017"
IEEE CyberSecurity (4 Oct 2017). "Justin Cappos on why cars are not like computers when it comes to Cybersecurity"
The Washington Post (11 July 2017). "Hackers have been stealing credit card numbers from Trump's Hotels for Months"
AdAge (27 June 2017). "Pay up or lose everything: What Madison Avenue should know about the WPP Ransom Attack"
KSTX-Texas Public Radio (21 June 2017). All Things Considered. "Software Protecting Future Cars, Starting To Make Inroads"
Financial Times (14 June 2017). "Three US banks chiefs fall victim to email pranksters"
Fox 5-TV News (23 May 2017). Toohey, Joe. "Can big data analysis swing a political election?"
Fox 5-TV News (15 May 2017). Chi'en, Arthur. "WannaCry malware exploited OS weakness to spread"
The Los Angeles Times (15 May 2017). Dave, Paresh and James F. Peltz. "WannaCry cyberattack: When a hack shuts down a hospital, who's to blame"
BBC (4 May 2017). "Google docs users hit by phishing scam"
WBUR-NPR Boston (4 May 2017). On Point. "Phishing, Hacks And Better Online Security"
Reuters (3 May 2017). "Spam campaign targets Google users with malicious link"
Fox 5-TV News (25 Apr 2017). King, Mac. "You really should read an app's service terms"
Fox 5-TV News (30 March 2017). "Selling Your Online Search History"
International Business Times (24 March 2017). "Is Privacy Real? The CIA is Jeopardizing America's Digital Security"
WBUR-NPR Boston (17 March 2017). Here and Now. "Researchers Race To Develop Software To Prevent Car Hacking"
The Washington Post (10 March 2017). "The 42 words you can never say in emails to the D.C. government"
WNBC-TV New York (9 March 2017). "WikiLeaks to Help Shield Tech Firms From CIA's Hacking Tools"
Fox 5-TV News (7 March 2017). "WikiLeaks Publishes 1000s of CIA Cyber-espionage Documents"
Mic (3 March 2017). "Pence Email Hack: How the VP's private email debacle compares to Hillary Clinton's"
KSTX-Texas Public Radio (26 January 2017). Here and Now. "A Future Car May Be Protected From Hacking By Software Developed In San Antonio"
Reuters Live on Facebook (18 January 2017). "How Uptane can Protect Your Car from Hackers"
WWL AM 870 / FM 105.3 (9 January 2017). "How Did the Russian Hacks Happen?"
Fox 5-TV News (18 November 2016). "Are smart devices worth the hacking risk?"
WPIX-11-TV News (31 October 2016). Diaz, Mario. "Clinton email investigation bombshell dominates campaign for both candidates"
CNN Money (15 August 2016). Pagliery, Jose. "Hacker claims to be selling stolen NSA spy tools"
Vice (6 July 2016). Pearl, Mike. "We Asked a Cybersecurity Expert if Clinton's Email System Could Have Jeopardized National Security"
Scientific American (23 March 2016). Sneed, Annie. "The Most Vulnerable Ransomware Targets Are the Institutions We Rely On Most"
PBS Newshour (29 February 2016). "Ransomware attack takes down LA hospital for hours"
PBS Newshour (18 April 2015). "The hack attack that takes your computer hostage till you pay"
New York Daily News (4 March 2015). "Should you check your personal email at work?"
CBS News (3 December 2014). "5 counterintuitive ways to protect against hackers"
CBS-TV News (15 August 2014). "How a password manager can help you stay more secure online"
Varonis (6 January 2015). "Interview With NYU-Poly's Professor Justin Cappos: Security Lessons From Retail Breaches"
MIT Technology Review (21 February 2013). Lim, Dawn. "Startup Red Balloon Security Offers to Protect Printers, Phones, and Other Devices from Hackers"
Rochard
Rochard is a science fiction platform game available for the PlayStation 3 through the PlayStation Network, for Microsoft Windows, Linux and Mac OS X through the Steam online distribution platform, and for Linux as part of the Humble Indie Bundle 6. Developed by Recoil Games, the game revolves around the manipulation of gravity and the use of a gravity device to easily move heavy objects around. The game was launched on the PlayStation Network on 27 September 2011 in the USA and on 28 September 2011 in Europe. It was launched for Windows on 15 November 2011.

Gameplay

Rochard is a two-dimensional side-scrolling platformer taking place in three-dimensional scenes. The player character, John Rochard, works his way through a series of environments, each containing a mix of puzzle and combat encounters. To overcome these challenges, the player has access to several tools and mechanics that relate to gravity, weight and matter properties. Players can change the gravity between "normal" earthlike gravity and low gravity. Controlling the gravity is the key feature of the game and allows John to, for example, jump higher in low gravity, alter trajectories of thrown objects, or swing on certain objects using the Gravity Beam. Some levels have sections where the gravity is inverted. In some other levels, players can invert the gravity themselves. The player is equipped with the "G-Lifter", a modular mining tool hosting various subsystems like a remote gravity controller, a flashlight, and a communication device. When the gravity beam mode is selected, it allows John to grab and shoot or drop certain objects, like crates, explosive containers, etc. With the gravity beam John can also manipulate and move certain objects, like big mining lasers, cargo containers, etc. After an upgrade, the G-Beam is powerful enough to lift John in low G, allowing him to dangle and swing from certain objects.
All objects which can be manipulated with the gravity beam are highlighted with a white swipe effect on their surface. The player upgrades the gravity beam several times over the course of the game, gaining new abilities. In addition to the G-Swing, the player can use the G-Beam as a weapon against flying droids, automated turrets and even human enemies. Force fields block certain objects. There are four types of force fields: Bio force field (red, blocks human characters); Matter force field (blue, blocks inanimate objects); Energy force field (orange, blocks weapon fire and explosions); Omni force field (white, blocks everything). Fuses are used to control power on certain electrically powered items. The controlled item is attached to a fuse socket with a thick visible cable. The player can control the power to the item by attaching or detaching the fuse to or from the socket. The fuses cannot be physically damaged, but they can be disabled temporarily by shooting at them or using explosives on them. Plot John Rochard, leader of the lowest-producing team of astro-miners the Skyrig Corporation ever employed, accidentally discovers an ancient structure hidden deep in an asteroid. Soon afterwards, John’s team goes missing without a trace and he finds himself stranded on the asteroid and under attack by space bandits. John quickly realizes that dangerous forces are at work, determined to use the discovery for their own sinister ends. When the supposed reinforcements sent by his boss Maximillian arrive, it is revealed that they are in league with the space bandits. John fights his way down through the tunnels to reach his trapped teammates Skyler and Zander. As John reaches them, Zander succumbs to his wounds, sending both Skyler and John on a path to uncover the mystery surrounding their recent find and to avenge their fallen colleague. The ancient structure turns out to be an alien temple, containing Native American glyphs. 
Unable to read the engravings, John and Skyler decide to head for Floyd, Skyler's uncle, whose Native American roots might help them make sense of it all. Upon arriving at the seemingly deserted asteroid-based casino, John heads to Floyd’s office. It turns out the space bandits have taken over the casino, and John has to fight his way to the office, which he finds vacant. Skyler suggests that John head to the money vault, where Floyd might have barricaded himself. Upon finding Floyd, John is told about an ancient Native American legend of the Katsina statues, which grant their user divine powers. It is revealed that he has to get a decoder disc from his boss’s office, located in the Skyrig headquarters, to be able to find the real Katsina temple. John fights his way to meet Skyler in an abandoned hangar, from where they take off to the Skyrig headquarters. John infiltrates the headquarters using ventilation shafts and other back doors, avoiding security cameras while sneaking his way towards his boss’s office. Once his presence is noticed, he has to fight his way to the office, where he finds the decoder disc that can be used to decipher the strange writings at the alien temple. John escapes the office and battles his way past sky police and their combat droids to reach a secluded cargo hangar. Skyler picks up John and they head back to the mining asteroid. The pirates have trashed the place, so John decides to take an alternate route to the alien temple. After fighting an army of droids and turrets, John has to use a huge mining laser to get into the alien temple. At the temple he finds out that the decoder disc is actually a power source, which makes the strange markings on the walls glow. A large star map is revealed, pointing to the casino asteroid they visited earlier. The sky police have found John and Skyler, and the two get separated. John has to find an alternative way to the hangar. 
When John gets there he finds Skyler captured by the sky police and his ship being blown up. He decides to use an old race bird, the “Switchblade”, to pursue the sky police and Skyler to the casino. John enters the abandoned part of the casino asteroid, which is an old Skyrig mine. On his way to the second alien temple he finds an old "Helga" G-Lifter, which still has hazardous legacy features active: it is able to grab human characters and shoots anti-gravity charges which lift the objects they attach to. He fights his way to the Katsina temple and finds his boss holding the Katsina statue there. In the ensuing fight, John closes in on Maximillian. Just as he is about to defeat his boss, a giant vortex appears, devouring first Maximillian, then Skyler and John. They are all sucked into another dimension, leaving only an ominously glowing "Helga" G-Lifter behind. Development Rochard was developed using the Unity engine and was launched in late September 2011 as the first PlayStation 3 game to use this engine. Unity's multi-platform capabilities also resulted in a speedy follow-up release on Steam on November 15, 2011. The Mac version followed in December 2011 on the Mac App Store, and the Linux version was made available in Humble Indie Bundle 6 on September 18, 2012. The game features a soundtrack composed by Markus “Captain” Kaarlonen from Poets of the Fall, mixing southern rock/blues and '80s-inspired electronic music, originally composed on the Amiga with ProTracker. A special version of the game that includes the full soundtrack in MP3 format was released on Steam alongside the game's regular edition. Voice actors:
Jon St. John – John Rochard
Lani Minella – Skyler Hanson
Eric Newsome – Zander and Floyd
Marc Biagi – Maximillian
Sam Mowry – Skypolice
Dave Rivas – Skypolice
Expansion An expansion pack for the game entitled Rochard: Hard Times was released in March 2013. The downloadable content features four new, extra-challenging puzzle levels. 
The expansion has an emphasis on puzzle-solving as opposed to combat, and the levels were created to be challenging for even the most experienced players. The expansion is, however, not integrated into the story told by the original game. Reception Rochard has received generally favorable reviews for its initial release on PSN, with a Metacritic average score of 79% and a GameRankings average score of 79%. The following Steam release was well received for its solid conversion of the control mechanics to a keyboard and mouse format, standing at a Metacritic average of 82% and a GameRankings average of 82.5%. Prior to its release, Rochard was awarded "Best of Gamescom 2011" by GamingXP. Following its release, the game has received Editors' Choice awards from IGN, GamePro, Gaming Nexus, GameShark and Pelit and won the Unity Awards for Best Gameplay and Best Graphics at the Unite 2011 event. GamingOnLinux reviewer Hamish Paul Wilson gave the game 8/10, commenting that "in the end it comes across as what it was probably always meant to be: a fun and competent physics platformer that does not take itself too seriously." References External links 2011 video games Android (operating system) games Linux games MacOS games Platform games PlayStation 3 games PlayStation Network games Side-scrolling video games Video games developed in Finland Video games scored by Markus Kaarlonen Video games with 2.5D graphics Windows games Single-player video games
43915231
https://en.wikipedia.org/wiki/Nachum%20Dershowitz
Nachum Dershowitz
Nachum Dershowitz is an Israeli computer scientist, known among other things for the Dershowitz–Manna ordering and the multiset path ordering used to prove termination of term rewrite systems. He obtained his B.Sc. summa cum laude in 1974 in Computer Science–Applied Mathematics from Bar-Ilan University, and his Ph.D. in 1979 in Applied Mathematics from the Weizmann Institute of Science. From 1978, he worked at the Department of Computer Science of the University of Illinois at Urbana-Champaign, until he became a full professor at the School of Computer Science of Tel Aviv University in 1998. He was a guest researcher at the Weizmann Institute, INRIA, ENS Cachan, Microsoft Research, and the universities of Stanford, Paris, Jerusalem, Chicago, and Beijing. He received the Herbrand Award for Distinguished Contributions to Automatic Reasoning in 2011. He has co-authored the standard text on calendar algorithms, Calendrical Calculations, with Edward Reingold. An implementation of the algorithms in Common Lisp has been placed in the public domain and is also distributed with the book. See also New Moon Lunisolar calendar Selected publications Dershowitz, Nachum and Reingold, Edward M., Calendrical Calculations, Cambridge University Press, 1997 Dershowitz, Nachum (2005). The Four Sons of Penrose, in Proceedings of the Eleventh Conference on Logic for Programming, Artificial Intelligence, and Reasoning (LPAR; Jamaica), G. Sutcliffe and A. Voronkov, eds., Lecture Notes in Computer Science, vol. 3835, Springer-Verlag, Berlin, pp. 125–138. References External links Publications at DBLP Home page Nachum Dershowitz at the Mathematics Genealogy Project Video "The Church-Turing Thesis", Nachum Dershowitz on Sixth Israel CS Theory Day, Mar 13, 2013 Israeli computer scientists Theoretical computer scientists Living people Year of birth missing (living people)
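The core device of Calendrical Calculations is converting every calendar to and from a common fixed day count (R.D., with day 1 being 1 January 1 in the proleptic Gregorian calendar). The following is a minimal Python sketch of the Gregorian-to-fixed conversion in that style; the function names are illustrative, not the book's Lisp identifiers:

```python
def is_gregorian_leap_year(year: int) -> bool:
    """Gregorian rule: divisible by 4, except century years not divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

def gregorian_to_fixed(year: int, month: int, day: int) -> int:
    """Fixed day number (R.D.); R.D. 1 = 1 January 1 (proleptic Gregorian)."""
    return (365 * (year - 1)               # ordinary days in prior years
            + (year - 1) // 4              # plus Julian leap days
            - (year - 1) // 100            # minus century years
            + (year - 1) // 400            # plus 400-year corrections
            + (367 * month - 362) // 12    # days in prior months of this year
            + (0 if month <= 2 else (-1 if is_gregorian_leap_year(year) else -2))
            + day)

print(gregorian_to_fixed(2000, 1, 1))      # 730120
print(gregorian_to_fixed(2000, 1, 1) % 7)  # 6, i.e. Saturday (0 = Sunday)
```

The day of the week falls out for free as R.D. mod 7; Python's datetime.date.toordinal uses the same day numbering.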
61510351
https://en.wikipedia.org/wiki/Cadwork%20informatik%20AG
Cadwork informatik AG
cadwork informatik AG is a multinational software company headquartered in Basel, Switzerland. It develops and markets software products primarily for the construction industry. These products include timber industry products in computer-aided design (CAD) and computer-aided manufacturing (CAM) as well as products in building information modeling (BIM) and virtual design and construction (VDC). These products are aimed at designers, structural engineers, construction engineers, civil engineering draftspeople, building contractors, and, in the case of BIMTeam VDC, construction crews. cadwork has been commended for its integration of design, manufacturing, and construction, which has contributed to Europe's 25-year lead on North America in the timber construction industry. cadwork was formed in 1988 at the École Polytechnique Fédérale de Lausanne (EPFL) Department of Timber Construction as a continuation of software research by the Swiss Center for Electronics and Microtechnology (CSEM), a watch industry research department. The company has seven subsidiaries, with offices in Saône, France; Hildesheim, Germany; Rýmařov, Czech Republic; Breitenwang, Austria; Cuarte, Spain; Montréal; and Palo Alto, California. Products Products and technical details cadwork is mainly known for its timber industry products, but in Switzerland it is also a leading software for civil engineers. In 2004, cadwork started the development of Lexocad, a BIM design, planning, and construction software. cadwork 2D: The entry-level module, providing two-dimensional lines and surfaces with different hatches. It is a '2.5D' CAD system: in addition to two-dimensional editing, cadwork 2d also allows for height information. Height information is particularly useful for construction planning when transitioning from 2D to the 3D module, which in turn supports developing tenders for construction project competitions. An alternative to cadwork 2D is AutoCAD. 
A module of cadwork 2d allows for reinforced concrete and geographic information system (GIS) plans. The file format is .2d. cadwork 3D: A timber 3D CAD/CAM design product. In cadwork 3d, one works with volumes to depict walls, generate wooden and steel rakes, and plan three-dimensional node space. The primary areas of application are timber and steel construction. cadwork 3D offers free volume models; an internal axis (length, width, and height) makes it possible to assemble the individual components easily into complex, three-dimensional structures. There are several additional modules, such as a wizard for entering a roof, an assistant for the elementation of walls, and output of the individual components as dimensioned 2D component drawings or to computer numerical control (CNC) joinery machines. The Timber Engineering Reference described cadwork 3d as broadly featured, with modules for heavy timber roofs, heavy timber frames, log structures, panelization, CAM fabrication, a parametric variant module, glulam lamination design, and a bill of materials. The file format is .3d. cadwork 2dr/cadwork Engineer: An infrastructure design product with engineering features for topographic and road design. The file format is .2dr. cadwork Lexocad: A commercial BIM 3D design product, which includes 4D scheduling, 5D pricing, and 6D execution. When combined with cadwork 2dr, Lexocad provides a BIM planning and construction interface for construction crews and construction engineers. Lexocad has free cadwork viewers, which allow zooming, panning, and printing; however, the freeware version does not allow modifying a file. The file format is .lxz. cadwork BIMteam: A webGL model interface with total station functions; an alternative is Autodesk BIM 360. cadwork BIMview: A webGL viewer. Research and development collaborations University: cadwork informatik AG is an active member of the Center for Integrated Facility Engineering (CIFE) at Stanford University. 
cadwork informatik AG partnered on a "Science to Market" project with the Institute of 4D Technologies (i4Ds) at the University of Applied Sciences Northwestern Switzerland. This project included the Swiss federal innovation agency (KTI/CTI) and industry partner Swiss Federal Railways (SBB) in Bern. cadwork informatik AG supported the development of shadow algorithms at Brno University of Technology, Czech Republic. Industry: Swiss Research Centre for Rationalization in Building and Civil Engineering (CRB), Basel; collaboration on integrating ontology representation through BIM of products, processes, and applied resources. Software: buildingSMART; collaboration on infra-BIM for roads, rail, bridges, and waterways. Education and certifications The cadwork customer base is a network of practitioners; through long-term client relationships, cadwork provides ongoing software support, development of custom solutions, and application education. In education, cadwork works with collaborators to provide continuing education opportunities, such as with the Tall Wood Institute. In North America, cadwork supports the Timber Framers Guild apprenticeship program, run in collaboration with the United Brotherhood of Carpenters and Joiners of America. Manufacturing cadwork is the primary fabrication software used by timber manufacturers globally; as such, cadwork has a diverse network of manufacturers throughout the world. To work in timber manufacturing, knowledge of cadwork is often a prerequisite for operating and programming CNC machines. Modeling: Timber Frame Headquarters
Fabricators: Timber Artisans, Rocky Mountain Joinery, Euclid Timber Frames
Equipment: Dürr AG, HOMAG woodworking tools, Hundegger machinery
Company history cadwork informatik AG was founded in 1988 as a continuation of research by the Swiss Center for Electronics and Microtechnology (CSEM) and the École Polytechnique Fédérale de Lausanne (EPFL). 
In 1992, cadwork expanded outside the Swiss market by opening cadwork Germany in Hildesheim.
1980: The initial developers envisioned cadwork as a tool for the watch industry; it was developed by the Centre suisse d'électronique et de microtechnique (CSEM).
1982: The École Polytechnique Fédérale de Lausanne (EPFL) continued developing cadwork in Fortran, compiled in 32 or 64 bits on workstations such as those from Apollo Computer, Digital Equipment Corporation (VAX and Alpha platforms), and Hewlett-Packard.
1989: cadwork informatik was founded as a Swiss company in Basel, Switzerland.
1992: cadwork informatik Software GmbH founded in Hildesheim.
1996: cadwork dropped support for workstations and became Microsoft Windows compliant.
2004: cadwork started the development of Lexocad.
2016: cadwork France Sàrl founded in the east of France.
Recognition of industry practitioners The 'cadwork competition' provides recognition for aesthetically pleasing timber construction. There are various categories; in 2019, Laminated Timber Solutions received a public buildings award for their Molenbeek-Saint-Jean passive building constructed entirely of cross-laminated timber (CLT). This residential structure consists of 17 units (single- and dual-story apartments) with ancillary common parts. See also Virtual design and construction Industry Foundation Classes (IFC) Open Design Alliance (OpenDWG) 3D ACIS Modeler (ACIS) Construction management Construction engineering References Timber framing Building information modeling Computer-aided design software 3D graphics software Computer-aided design software for Windows Software companies of Switzerland
4033944
https://en.wikipedia.org/wiki/Virtual%20Case%20File
Virtual Case File
Virtual Case File (or VCF) was a software application developed by the United States Federal Bureau of Investigation (FBI) between 2000 and 2005. The project was officially abandoned in April 2005, while still in the development stage, having cost the federal government nearly $170 million. In 2006, the Washington Post wrote: "In a 318-page report, completed in January 2005 and obtained by The Post under the Freedom of Information Act, [the Aerospace Corporation] said the SAIC software was incomplete, inadequate and so poorly designed that it would be essentially unusable under real-world conditions. Even in rudimentary tests, the system did not comply with basic requirements, the report said. It did not include network-management or archiving systems—a failing that would put crucial law enforcement and national security data at risk." Origins In September 2000, the FBI announced the "Trilogy" program, intended to modernize the bureau's outdated information technology (IT) infrastructure. The project had three parts: purchasing modern desktop computers for all FBI offices, developing secure high-performance WAN and LAN networks, and modernizing the FBI's suite of investigative software applications. The first two goals of Trilogy were generally met, despite cost overruns. Replacing the Bureau's Automated Case Support (ACS) software system proved more difficult. ACS had been developed in-house by the bureau and was used to manage all documents relating to cases being investigated by the FBI, enabling agents to search and analyze evidence across different cases. The project was originally scheduled to take three years and cost US$380 million. By 2000, ACS was considered a legacy system, made up of many separate stovepipe applications that were difficult and cumbersome to use. ACS was built on top of many obsolete 1970s-era software tools, including the programming language Natural, the ADABAS database management system, and IBM 3270 green-screen terminals. 
Some IT analysts believed that ACS was already obsolete when it was first deployed in 1995. Launch Bob E. Dies, then the bureau's assistant director of information resources and head of the Trilogy project, prepared initial plans in 2000 for a replacement to ACS and several other outdated software applications. In June 2001, a cost-plus contract for the software aspects of the project was awarded to Science Applications International Corporation (SAIC), and the network aspects were contracted to DynCorp. Dies was the first of five people who would eventually be in charge of the project. The software was originally intended to be deployed in mid-2004, and was originally intended to be little more than a web front-end to the existing ACS data. Problems and abandonment Robert Mueller was appointed director of the FBI in September 2001, just one week before the September 11, 2001 attacks. The attacks highlighted the Bureau's information sharing problems and increased pressure for the Bureau to modernize. In December 2001, the scope of VCF was changed with the goal being complete replacement of all previous applications and migration of the existing data into an Oracle database. Additionally, the project's deadline was pushed up to December 2003. Initial development was based on meetings with users of the current ACS system. SAIC broke its programmers up into eight separate and sometimes competing teams. One SAIC security engineer, Matthew Patton, used VCF as an example in an October 24, 2002 post on the InfoSec News mailing list regarding the state of federal information system projects in response to a Senator's public statements a few days earlier about the importance of doing such projects well. His post was regarded by FBI and SAIC management as attempting to "blow the whistle" on what he saw as crippling mismanagement of a national security-critical project. Patton was quickly removed from the project and eventually left SAIC for personal reasons. 
In December 2002, the Bureau asked the United States Congress for increased funding, as the project had fallen behind schedule. Congress approved an additional $123 million for the Trilogy project. In 2003, the project saw a quick succession of three different CIOs come and go before Zal Azmi took the job, which he held until 2008. Despite development snags throughout 2003, SAIC delivered a version of VCF in December 2003. The Bureau quickly deemed the software inadequate. SAIC claimed most of the FBI's complaints stemmed from specification changes the Bureau insisted upon after the fact. On March 24, 2004, Robert Mueller testified to Congress that the system would be operational by the summer, although this seemed impractical and unlikely to happen. SAIC claimed it would require over $50 million to get the system operational, which the Bureau refused to pay. Finally, in May 2004 the Bureau agreed to pay SAIC $16 million extra to attempt to salvage the system, and also brought in the Aerospace Corporation to review the project at a further cost of $2 million. Meanwhile, the Bureau had already begun talks for a replacement project, beginning as early as 2005. Aerospace Corp.'s generally negative report was released in the fall of 2004. Development continued throughout 2004 until the project was officially scrapped in April 2005. 
Reasons for failure The project demonstrated a systematic failure of software engineering practices:
Lack of a strong technical architecture ("blueprint") from the outset, which led to poor architectural decisions
Repeated changes in specification
Repeated turnover of management, which contributed to the specification problem
Micromanagement of software developers
The inclusion of many FBI personnel who had little or no formal training in computer science as managers and even engineers on the project
Scope creep, as requirements were continually added to the system even as it was falling behind schedule
Code bloat due to changing specifications and scope creep: at one point it was estimated the software had over 700,000 lines of code
Planned use of a flash cutover deployment, which made it difficult to adopt the system until it was perfected
Implications The bureau faced a great deal of criticism following the failure of the VCF program. The program lost $104 million in taxpayer money. In addition, the bureau continued to use the antiquated ACS system, which many analysts felt was hampering the bureau's new counter-terrorism mission. In March 2005, the bureau announced it was beginning a new, more ambitious software project code-named Sentinel to replace ACS. After several delays, new leadership, a slightly bigger budget, and the adoption of agile software development methodology, it was completed under budget and was in use agency-wide on July 1, 2012. References External links IEEE Spectrum article: Who killed the virtual case file? 11-page detailed article of the entire timeline The FBI's Upgrade That Wasn't - Washington Post article about the project Testimony of Inspector General Glenn A. Fine before the Department of Justice - February 3, 2005: Project Audit results Testimony of Inspector General Glenn A. Fine before the Department of Justice - July 27, 2005 Matthew Patton's October 24, 2002 posting on InfoSec News about VCF IEEE Spectrum Radio audio discussion of the failure. 
Participants are Peter Neumann, Steve Bellovin, Matt Blaze, and Robert Charette. Federal Bureau of Investigation
8740064
https://en.wikipedia.org/wiki/Basic4GL
Basic4GL
Basic4GL (B4GL; from Basic for openGL) is an interpreted, open source version of the BASIC programming language which features support for 3D computer graphics using OpenGL. While interpreted, it is also able to compile programs on top of its virtual machine to produce standalone executable programs. It uses a syntax similar to traditional dialects of BASIC and features an IDE and a thorough, comprehensive debugger. Basic4GL is not designed to compete with programming languages such as C++; it was intended to replace older languages such as QBasic or GFA BASIC. Basic4GL features the usual commands that you would expect to find in a version of BASIC, such as PRINT, INPUT and GOSUB. It also includes a few features that C programmers will be familiar with, such as support for pointers, structures and, most importantly, the entire OpenGL v1.1 API. History Tom Mulgrew created Basic4GL from a desire to be able to run OpenGL functions easily and quickly, without all of the setup normally required in a language such as C++, and to be more stable. He built a virtual machine similar to one used at his workplace. It started simply, with few OpenGL functions and minimal other functionality. The first version was relatively popular. It was named GLBasic, which also happens to be the name of a commercial programming language; the issue was civilly resolved, and Mulgrew's project was renamed Basic4GL. Mulgrew set himself the goal of expanding Basic4GL to the point that it could load and display an MD2 model. Versions
2.3.0 - Added networking capability
2.3.5 - Support for code compilation at runtime
2.4.2 - Changed sound system from OpenAL to Audiere
2.4.3 - Support for plugin DLLs added
2.5.0 - Support for functions added
2.5.8 - Support for hexadecimal numbers
Platform Basic4GL was designed to run on the Windows operating system, but versions are being developed for Linux and Mac OS. Basic4GL for Linux Currently Basic4GL is being ported to Linux. 
The major difference between Basic4GL for Windows and the new Linux version is that it uses the SDL library rather than Windows-specific libraries to initialize an OpenGL-enabled window. There is also a Linux-based project to create an extended version of Basic4GL that wraps more closely to the SDL library, known as Basic4SDL. Basic4GL for Mac A version for Mac OS is currently under development. No working versions have been released. Example code
Dim A
For A = 0 To 4
    Printr "Hello "; A
Next
When the above code is entered into Basic4GL and executed, the following is output to the screen:
Hello 0
Hello 1
Hello 2
Hello 3
Hello 4
Features Support for sound and music When Basic4GL was first released it could only play sounds, but in 2006 support for music was added using the OpenAL sound engine, which was later replaced with Audiere. Functions and subroutines When Basic4GL was first released it had no support for functions. That changed, however, when version 2.5.0 was released in January 2008. Basic4GL now has full support for local variables, parameters, forward declaration and recursion. Plugins In August 2006 support for plugin DLLs was added to Basic4GL. This means that you can write your own commands and include them in the Basic4GL programming language; all you need is a C++ compiler. Plugins expand the capabilities of Basic4GL and many exist, providing such things as physics engines, TrueType fonts, collision detection, etc. SourceForge Both Basic4GL for Windows and the new Linux version have been placed on SourceForge, meaning that people are free to develop the languages and make improvements to them. Basic4Games A successor to Basic4GL, dubbed "Basic4Games", is currently being developed. Only one preview has been released. See also References External links BASIC programming language Video game development software
41113637
https://en.wikipedia.org/wiki/Vithaldas%20Thackersey
Vithaldas Thackersey
Sir Vithaldas Damodar Thackersey (30 November 1873 – 12 August 1922) was an Indian businessman from Bombay state and a member of the Imperial Legislative Council of India during the 1900s. He chaired the Industrial Conference, a subsidiary conference of the Indian National Congress, in Kolkata in 1903. The vision of Maharshi Karve and the foresight of Sir Vithaldas Thackersey led to the establishment of the first women’s university in India. Recognizing the pioneering work of Karve, Thackersey made a generous contribution of Rs. 15 lakh to commemorate the memory of his mother, Nathibai. In 1920 the university was named Shreemati Nathibai Damodar Thackersey (SNDT) Women's University. He was knighted by the British Government in 1908. After her husband died, Premkunver, Lady Thackersey (Premlila Thackersey), continued his work in the fields of education and philanthropy. She was dedicated specifically to women's education and ultimately became the first vice-chancellor of SNDT Women’s University in Mumbai. Lady Premlila Thackersey also continued her husband's work by donating generously to establish a degree college in home science; this donation started the Sir Vithaldas Thackersey College of Home Science in 1959. References 1873 births 1922 deaths Members of the Imperial Legislative Council of India Indian knights Knights Bachelor Members of the Bombay Legislative Council
37687916
https://en.wikipedia.org/wiki/Software%20architecture%20description
Software architecture description
Software architecture description is the set of practices for expressing, communicating and analysing software architectures (also called architectural rendering), and the result of applying such practices through a work product expressing a software architecture (ISO/IEC/IEEE 42010). Architecture descriptions (ADs) are also sometimes referred to as architecture representations, architecture specifications or software architecture documentation. Concepts Architecture description defines the practices, techniques and types of representations used by software architects to record a software architecture. Architecture description is largely a modeling activity (Software architectural model). Architecture models can take various forms, including text, informal drawings, diagrams or other formalisms (modeling language). An architecture description will often employ several different model kinds to effectively address a variety of audiences, the stakeholders (such as end users, system owners, software developers, system engineers, program managers) and a variety of architectural concerns (such as functionality, safety, delivery, reliability, scalability). Often, the models of an architecture description are organized into multiple views of the architecture such that "each [view] addresses specific concerns of interest to different stakeholders of the system". An architecture viewpoint is a way of looking at a system (RM ODP). Each view in an architecture description should have a viewpoint documenting the concerns and stakeholders it is addressed to, and the model kinds, notations and modeling conventions it utilizes (ISO/IEC/IEEE 42010). The use of multiple views, while effective for communicating with diverse stakeholders and recording and analyzing diverse concerns, does raise potential problems: since views are typically not independent, the potential for overlap means there may be redundancy or inconsistency between views of a single system. 
Various mechanisms can be used to define and manage correspondences between views to share detail, to reduce redundancy and to enforce consistency. A common misunderstanding about architecture descriptions is that ADs only discuss "technical issues", but ADs need to address issues of relevance to many stakeholders. Some issues are technical; many issues are not: ADs are used to help architects, their clients and others manage cost, schedule and process. A related misunderstanding is that ADs only address the structural aspects of a system. However, this rarely satisfies the stakeholders, whose concerns often include structural, behavioral, aesthetic, and other "extra-functional" concerns. History The earliest architecture descriptions used informal pictures and diagrams and associated text. Informal descriptions remain the most widely used representations in industry. Influences on architecture description came from the areas of Software Engineering (such as data abstraction and programming in the large) and from system design (such as SARA). Work on programming in the large, such as module interconnection languages (MILs) focused on the expression of the large-scale properties of software: modules (including programs, libraries, subroutines and subsystems) and module-relationships (dependencies and interconnections between modules). This work influenced both architectural thinking about programming languages (e.g., Ada), and design and architecture notations (such as Buhr diagrams and use case maps and codified in architectural features of UML: packages, subsystems, dependences) and much of the work on architecture description languages. In addition to MILs, under the influence of mature work in the areas of Requirements and Design within Software Engineering, various kinds of models were "lifted" from software engineering and design to be applied to the description of architectures. 
These included function and activity models from Structured Analysis (SADT), data modeling techniques (entity–relationship) and object-oriented techniques.

Perry and Wolf cited the precedent of building architecture for the role of multiple views: "A building architect works with the customer by means of a number of different views in which some particular aspect of the building is emphasized." Perry and Wolf posited that the representation of architectures should include: { elements, form and rationale }, distinguishing three kinds of elements (and therefore three kinds of views):

processing: how the data is transformed;
data: information that is used and transformed;
connecting: glue holding the other elements together.

Perry and Wolf identified four objectives or uses for architecture descriptions (called "architecture specifications" in their paper):

prescribe architectural constraints without overspecifying solutions
separate aesthetics from engineering
express different aspects of the architecture, each in an appropriate manner
conduct architecture analysis, particularly dependency and consistency analyses

Following the Perry and Wolf paper, two schools of thought on software architecture description emerged:

Multiple views school
Structuralist school

Mechanisms for architecture description

There are several common mechanisms used for architecture description. These mechanisms facilitate reuse of successful styles of description so that they may be applied to many systems:

architecture viewpoints
architecture description languages
architecture frameworks

Architecture viewpoints

Software architecture descriptions are commonly organized into views, which are analogous to the different types of blueprints made in building architecture.
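The view/viewpoint distinction that follows can be sketched in code (a toy rendering of the ISO/IEC/IEEE 42010 vocabulary; the class and field names are hypothetical, not from the standard): a viewpoint is the reusable convention, a view is its application to one system.

```python
from dataclasses import dataclass, field

# Hypothetical sketch: a Viewpoint records the concerns it frames, the
# stakeholders it addresses, and the model kinds (notations) it permits;
# a View applies one Viewpoint to a concrete system.

@dataclass(frozen=True)
class Viewpoint:
    name: str
    concerns: frozenset
    stakeholders: frozenset
    model_kinds: frozenset  # notations/diagram types permitted by this viewpoint

@dataclass
class View:
    viewpoint: Viewpoint
    models: list = field(default_factory=list)  # (model_kind, content) pairs

    def check_conforms(self):
        """A view conforms to its viewpoint if it only uses permitted model kinds."""
        return all(kind in self.viewpoint.model_kinds for kind, _ in self.models)

deployment_vp = Viewpoint(
    name="Deployment",
    concerns=frozenset({"installation", "hardware mapping"}),
    stakeholders=frozenset({"operators", "system engineers"}),
    model_kinds=frozenset({"deployment diagram"}),
)
view = View(deployment_vp, models=[("deployment diagram", "web tier -> node-1")])
print(view.check_conforms())  # True
```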
Each view addresses a set of system concerns, following the conventions of its viewpoint, where a viewpoint is a specification that describes the notations and modeling techniques to be used in a view to express the architecture in question from the perspective of a given set of stakeholders and their concerns (ISO/IEC/IEEE 42010). The viewpoint specifies not only the concerns framed (i.e., to be addressed) but also the presentation, model kinds used, conventions used and any consistency (correspondence) rules to keep a view consistent with other views.

Examples of viewpoints include:

Functional viewpoint
Logical viewpoint
Information/Data viewpoint
Module viewpoint
Component-and-connector viewpoint
Requirements viewpoint
Developer/Implementation viewpoint
Concurrency/process/runtime/thread/execution viewpoint
Performance viewpoint
Security viewpoint
Physical/Deployment/Installation viewpoint
User action/feedback viewpoint

The term viewtype is used to refer to categories of similar views sharing a common set of elements and relations.

Architecture description languages

An architecture description language (ADL) is any means of expression used to describe a software architecture (ISO/IEC/IEEE 42010). Many special-purpose ADLs have been developed since the 1990s, including AADL (SAE standard), Wright (developed by Carnegie Mellon), Acme (developed by Carnegie Mellon), xADL (developed by UCI), Darwin (developed by Imperial College London), DAOP-ADL (developed by University of Málaga), and ByADL (University of L'Aquila, Italy).

Early ADLs emphasized modeling systems in terms of their components, connectors and configurations. More recent ADLs (such as ArchiMate and SysML) have tended to be "wide-spectrum" languages capable of expressing not only components and connectors but a variety of concerns through multiple sub-languages.
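The component/connector/configuration vocabulary of the early ADLs can be approximated with a minimal sketch. This is hypothetical Python, not the syntax of Acme, Wright or any real ADL: components expose ports, connectors expose roles, and a configuration attaches ports to roles.

```python
# Minimal component-and-connector configuration in the spirit of early ADLs
# (all names hypothetical; real ADLs have their own textual syntax and richer
# semantics, e.g. protocol specifications on connector roles).

components = {
    "client": {"ports": {"request"}},
    "server": {"ports": {"provide"}},
}
connectors = {
    "rpc": {"roles": {"caller", "callee"}},
}
# A configuration attaches component ports to connector roles.
attachments = [
    ("client", "request", "rpc", "caller"),
    ("server", "provide", "rpc", "callee"),
]

def well_formed(components, connectors, attachments):
    """Every attachment must reference an existing port and an existing role."""
    return all(
        comp in components and port in components[comp]["ports"]
        and conn in connectors and role in connectors[conn]["roles"]
        for comp, port, conn, role in attachments
    )

print(well_formed(components, connectors, attachments))  # True
```

Well-formedness checks of this kind (every attachment resolves, every role is filled) are among the analyses the structuralist ADLs were built to automate.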
In addition to special-purpose languages, existing languages such as UML can be used as ADLs "for analysis, design, and implementation of software-based systems as well as for modeling business and similar processes."

Architecture frameworks

An architecture framework captures the "conventions, principles and practices for the description of architectures established within a specific domain of application and/or community of stakeholders" (ISO/IEC/IEEE 42010). A framework is usually implemented in terms of one or more viewpoints or ADLs. Frameworks of interest in software architecture include:

4+1
RM-ODP (Reference Model of Open Distributed Processing)
TOGAF

Multiple views

Represented by Kruchten's influential 1995 paper on the "4+1 view model", this school emphasized the varying stakeholders and concerns to be modeled.

Structuralism

The second school, reflected in work at CMU and elsewhere, held that architecture is the high-level organization of a system at run time, and that an architecture should be described in terms of its components and connectors: "the architecture of a software system defines that system in terms of computational components and interactions among those components". During the 1990s–2000s, much of the academic work on ADLs took place within the paradigm of components and connectors. However, these ADLs have had very little impact in industry.

Since the 1990s there has been a convergence in approaches toward architecture description, with IEEE 1471 in 2000 codifying best practices: supporting, but not requiring, multiple viewpoints in an AD.

Architecture description via decisions

Elaborating on the rationale aspect of Perry and Wolf's original formula, a third school of thought has emerged, documenting the decisions and the reasons for decisions as an essential way of conceiving and expressing a software architecture.
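Such rationale is often captured as structured decision records. The sketch below is one hypothetical encoding (field names are illustrative, not a standardized format): each decision carries its rationale, the alternatives rejected, and the earlier decisions it depends on, which makes simple analyses possible, such as finding decisions whose premises are missing.

```python
from dataclasses import dataclass

# Hypothetical architecture decision record: decisions as first-class
# elements of the description, with explicit rationale and alternatives.

@dataclass(frozen=True)
class Decision:
    id: str
    statement: str          # the decision itself
    rationale: str          # why it was made
    alternatives: tuple     # options considered and rejected
    depends_on: tuple = ()  # ids of earlier decisions this one assumes

d1 = Decision("AD-1", "Use a message queue between order and billing",
              "Decouples availability of the two services",
              alternatives=("direct RPC",))
d2 = Decision("AD-2", "Billing consumes orders idempotently",
              "Queue delivery may duplicate messages",
              alternatives=("exactly-once middleware",),
              depends_on=("AD-1",))

def unresolved_dependencies(decisions):
    """Ids referenced in depends_on but not present in the decision set."""
    known = {d.id for d in decisions}
    return {ref for d in decisions for ref in d.depends_on if ref not in known}

print(unresolved_dependencies([d1, d2]))  # set()
```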
This approach treats decisions as first-class elements of the architecture description, making explicit what was often implicit in earlier representations.

Uses of architecture descriptions

Architecture descriptions serve a variety of purposes, including (ISO/IEC/IEEE 42010):

to guide system construction and maintenance
to aid system planning, costing and evolution
to serve as a medium for analysis, evaluation or comparison of architectures
to facilitate communication among system stakeholders regarding the architecture and the system
to document architectural knowledge beyond the scope of individual projects (such as software product lines and product families, and reference architectures)
to capture reusable architectural idioms (such as architectural styles and patterns)

See also

Architecture description language
Architecture framework
Separation of concerns (Core concern and Concern (computer science))
Software architectural model
Software architecture documentation
View model
Software architecture