1617964
https://en.wikipedia.org/wiki/Lucida%20Grande
Lucida Grande
Lucida Grande is a humanist sans-serif typeface. It is a member of the Lucida family of typefaces designed by Charles Bigelow and Kris Holmes. It is best known for its implementation throughout the macOS user interface from 1999 to 2014, as well as in other Apple software such as Safari for Windows. As of OS X Yosemite (version 10.10), the system font was changed from Lucida Grande to Helvetica Neue. In OS X El Capitan (version 10.11) the system font changed again, this time to San Francisco. The typeface looks very similar to Lucida Sans and Lucida Sans Unicode. Like Lucida Sans Unicode, Lucida Grande supports the most commonly used characters defined in version 2.0 of the Unicode standard. Three weights of Lucida Grande (Normal, Bold, and Black), in three styles (Roman, Italic, and Oblique), were developed by Bigelow & Holmes. Apple released the Regular (Normal Roman) and Bold Roman with OS X. In June 2014, Bigelow & Holmes released four weights (Light, Normal, Bold, and Black) in three styles (Roman, Italic, and Oblique). B&H also released Narrow versions of those twelve weight/styles, plus four Lucida Grande Monospaced fonts in Regular, Bold, Italic, and Bold Italic styles, with Narrow versions of the four monospaced weight/styles. Lucida Grande fonts obtained directly from Bigelow & Holmes contain the pan-European WGL character set.

Scripts and Unicode ranges Lucida Grande contains 2,826 Unicode-encoded glyphs (2,245 characters) in version 5.0d8e1 (Revision 1.002). Language support by version:

Similarity to Lucida Sans/Lucida Sans Unicode Almost all glyphs in Lucida Grande (and Lucida Grande Bold) look identical to their counterparts in Lucida Sans (and Lucida Sans Demibold) as well as Lucida Sans Unicode, with a few exceptions: the digit "1" has a serif on the baseline; the hyphen "-" is longer, roughly the width of an en dash; and the commercial at "@" uses a larger and more upright letter and circle. These slightly different characters look clearer at small font sizes in display and user-interface (especially graphical and web-based) uses.

Uses Apart from macOS releases prior to OS X Yosemite, many websites and blogs use Lucida Grande as the default typeface for body text, for example Facebook and many phpBB forums. Since this typeface is usually absent from most other operating systems such as Windows and Linux, the CSS style sheets of these websites often list fallback fonts (usually other sans-serif faces such as Tahoma, Verdana, Trebuchet MS, Segoe UI, Calibri, DejaVu Sans, Arial, Open Sans, or even Lucida Sans Unicode) in case Lucida Grande is unavailable for rendering. After the introduction of OS X Yosemite, in which Lucida Grande is no longer used as the default system font, several developers created utilities to bring Lucida Grande back as the default system font. Although it was designed primarily as a screen font, Lucida Grande/Sans also appears frequently in print, due at least in part to the ubiquity of the Mac platform (and thus the typeface) in professional-grade desktop publishing. The Getty-Dubay Italic Handwriting Series of penmanship workbooks in particular is typeset primarily in a specially modified version of Lucida Sans (with a cursive lowercase "y"), as its monoline italic bears a close resemblance to the form of writing that the program teaches.
See also Lucida Sans Unicode List of typefaces included with macOS Unicode fonts Sans-serif References External links Lucida Grande Text Samples - Light, Normal, Bold, Black Lucida Grande Mono Text Samples - Normal, Bold Humanist sans-serif typefaces Unicode typefaces Apple Inc. typefaces Typefaces designed by Charles Bigelow (type designer) Typefaces designed by Kris Holmes
39844459
https://en.wikipedia.org/wiki/Hybris%20%28software%29
Hybris (software)
Hybris or libhybris is a compatibility layer for computers running Linux distributions based on the GNU C library or musl, intended for running software written for Bionic-based Linux systems, which mainly means Android libraries and device drivers.

History Hybris was initially written by Carsten Munk, a Mer developer, who released it on GitHub on 5 August 2012 and publicly announced the project later that month. Munk has since been hired by Jolla as their Chief Research Engineer. Hybris has also been picked up by the Open webOS community for WebOS Ports, by Canonical for Ubuntu Touch and by the AsteroidOS project. In April 2013, Munk announced that Hybris had been extended to allow Wayland compositors to use graphics device drivers written for Android. Weston has had support for libhybris since version 1.3, which was released on 11 October 2013.

Features Hybris loads Android libraries and overrides some symbols from Bionic with glibc calls, making it possible to use Bionic-based software, such as binary-only Android drivers, on glibc-based Linux distributions. Hybris can also translate Android's EGL calls into Wayland EGL calls, allowing Android graphics drivers to be used on Wayland-based systems. This feature was initially developed by Collabora's Pekka Paalanen for his Android port of Wayland.

See also C standard library Free and open-source graphics device driver References External links C (programming language) libraries C standard library Compatibility layers Embedded Linux Free computer libraries Free software programmed in C Software using the Apache license
54503775
https://en.wikipedia.org/wiki/Resurrection%20Remix%20OS
Resurrection Remix OS
Resurrection Remix OS, abbreviated as RR, is a free and open-source operating system for smartphones and tablet computers, based on the Android mobile platform. UX designer and head developer Altan KRK and Varun Date started the project in 2012.

History On February 9, 2018, Resurrection Remix 6.0.0 was released, based on Android 8.1 Oreo, after months in development. In early 2019 Resurrection Remix 7.0.0, 7.0.1 and 7.0.2 were released, based on Android 9 Pie. The project seemed abandoned after a disagreement between two major developers, which caused one of them (Acar) to leave, but in mid-2020 Resurrection Remix came back with 8.5.7, based on Android 10. 8.7.3 is the latest version based on Android 10.

Reviews and studies A DroidViews review of Resurrection Remix OS called it "feature packed" and complimented the large online community, updates, and customization options, as compared with the simplicity of Lineage OS. ZDNet stated Resurrection Remix OS was a custom ROM that could evade SafetyNet exclusions and display the Netflix app in the Play Store. Resurrection Remix OS was one of a few operating systems mentioned as Android upgrade options in Upcycled Technology. Resurrection Remix OS was one of a handful of operating systems supported by the OpenKirin development team for bringing pure Android to Huawei devices, and was one of two suggested for the OnePlus 5. In a detailed 2017 review, Stefanie Enge of Curved.de said Resurrection Remix combined the best of LineageOS, OmniROM and SlimRoms. The camera performance was criticized; however, the extensive customization options, speed and lack of Google services were all acclaimed. In a study of phone sensors, Resurrection Remix OS was one of six Android operating systems used on two Xiaomi devices to compare gyroscope, accelerometer, orientation and light sensor data with values recorded by highly accurate reference sensors.

Supported devices More than 150 devices are supported, some of which are: See also List of custom Android firmware References External links Custom Android firmware Linux distributions
6019
https://en.wikipedia.org/wiki/Computational%20chemistry
Computational chemistry
Computational chemistry is a branch of chemistry that uses computer simulation to assist in solving chemical problems. It uses methods of theoretical chemistry, incorporated into computer programs, to calculate the structures and properties of molecules, groups of molecules, and solids. It is essential because, apart from relatively recent results concerning the hydrogen molecular ion (the dihydrogen cation), the quantum many-body problem cannot be solved analytically, much less in closed form. While computational results normally complement the information obtained by chemical experiments, they can in some cases predict hitherto unobserved chemical phenomena. Computational chemistry is widely used in the design of new drugs and materials. Examples of such properties are structure (i.e., the expected positions of the constituent atoms), absolute and relative (interaction) energies, electronic charge density distributions, dipoles and higher multipole moments, vibrational frequencies, reactivity, or other spectroscopic quantities, and cross sections for collision with other particles.

The methods used cover both static and dynamic situations. In all cases, the computer time and other resources (such as memory and disk space) increase quickly with the size of the system being studied. That system can be a molecule, a group of molecules, or a solid. Computational chemistry methods range from very approximate to highly accurate; the latter are usually feasible for small systems only. Ab initio methods are based entirely on quantum mechanics and basic physical constants. Other methods are called empirical or semi-empirical because they use additional empirical parameters. Both ab initio and semi-empirical approaches involve approximations. These range from simplified forms of the first-principles equations that are easier or faster to solve, to approximations limiting the size of the system (for example, periodic boundary conditions), to fundamental approximations to the underlying equations that are required to achieve any solution to them at all. For example, most ab initio calculations make the Born–Oppenheimer approximation, which greatly simplifies the underlying Schrödinger equation by assuming that the nuclei remain in place during the calculation. In principle, ab initio methods eventually converge to the exact solution of the underlying equations as the number of approximations is reduced. In practice, however, it is impossible to eliminate all approximations, and residual error inevitably remains. The goal of computational chemistry is to minimize this residual error while keeping the calculations tractable.

In some cases, the details of electronic structure are less important than the long-time phase space behavior of molecules. This is the case in conformational studies of proteins and protein-ligand binding thermodynamics. Classical approximations to the potential energy surface are used, typically with molecular mechanics force fields, as they are computationally less intensive than electronic structure calculations, enabling longer simulations of molecular dynamics. Furthermore, cheminformatics uses even more empirical (and computationally cheaper) methods, such as machine learning based on physicochemical properties. One typical problem in cheminformatics is to predict the binding affinity of drug molecules to a given target. Other problems include predicting binding specificity, off-target effects, toxicity, and pharmacokinetic properties.
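To make the cheminformatics idea concrete, the minimal sketch below fits a regression model to predict binding affinities from molecular descriptors. It is an illustration only: the descriptors, the synthetic affinity values, and the choice of a random forest are assumptions made for this example, not methods taken from any particular study; real work would use curated descriptors and experimental data.

```python
# Minimal cheminformatics-style regression sketch (illustrative only).
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Fake physicochemical descriptors (e.g. molecular weight, logP, H-bond donors, ...).
X = rng.normal(size=(200, 4))
# Fake binding affinities with a simple hidden relationship plus noise.
y = 5.0 + 1.2 * X[:, 0] - 0.8 * X[:, 1] + rng.normal(scale=0.3, size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

print("R^2 on held-out molecules:", round(model.score(X_test, y_test), 3))
```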
History Building on the founding discoveries and theories in the history of quantum mechanics, the first theoretical calculations in chemistry were those of Walter Heitler and Fritz London in 1927, using valence bond theory. The books that were influential in the early development of computational quantum chemistry include Linus Pauling and E. Bright Wilson's 1935 Introduction to Quantum Mechanics – with Applications to Chemistry, Eyring, Walter and Kimball's 1944 Quantum Chemistry, Heitler's 1945 Elementary Wave Mechanics – with Applications to Quantum Chemistry, and later Coulson's 1952 textbook Valence, each of which served as primary references for chemists in the decades to follow.

With the development of efficient computer technology in the 1940s, the solutions of elaborate wave equations for complex atomic systems began to be a realizable objective. In the early 1950s, the first semi-empirical atomic orbital calculations were performed. Theoretical chemists became extensive users of the early digital computers. One major advance came with the 1951 paper in Reviews of Modern Physics by Clemens C. J. Roothaan, largely on the "LCAO MO" approach (Linear Combination of Atomic Orbitals Molecular Orbitals), for many years the second-most cited paper in that journal. A very detailed account of such use in the United Kingdom is given by Smith and Sutcliffe. The first ab initio Hartree–Fock method calculations on diatomic molecules were performed in 1956 at MIT, using a basis set of Slater orbitals. For diatomic molecules, a systematic study using a minimum basis set and the first calculation with a larger basis set were published by Ransil and Nesbet respectively in 1960. The first polyatomic calculations using Gaussian orbitals were performed in the late 1950s. The first configuration interaction calculations were performed in Cambridge on the EDSAC computer in the 1950s using Gaussian orbitals by Boys and coworkers. By 1971, when a bibliography of ab initio calculations was published, the largest molecules included were naphthalene and azulene. Abstracts of many earlier developments in ab initio theory have been published by Schaefer.

In 1964, Hückel method calculations (using a simple linear combination of atomic orbitals (LCAO) method to determine electron energies of molecular orbitals of π electrons in conjugated hydrocarbon systems) of molecules ranging in complexity from butadiene and benzene to ovalene were generated on computers at Berkeley and Oxford. These empirical methods were replaced in the 1960s by semi-empirical methods such as CNDO. In the early 1970s, efficient ab initio computer programs such as ATMOL, Gaussian, IBMOL, and POLYATOM began to be used to speed ab initio calculations of molecular orbitals. Of these four programs, only Gaussian, now vastly expanded, is still in use, but many other programs are now in use. At the same time, the methods of molecular mechanics, such as the MM2 force field, were developed, primarily by Norman Allinger. One of the first mentions of the term computational chemistry can be found in the 1970 book Computers and Their Role in the Physical Sciences by Sidney Fernbach and Abraham Haskell Taub, where they state "It seems, therefore, that 'computational chemistry' can finally be more and more of a reality." During the 1970s, widely different methods began to be seen as part of a new emerging discipline of computational chemistry. The Journal of Computational Chemistry was first published in 1980.
Computational chemistry has featured in several Nobel Prize awards, most notably in 1998 and 2013. Walter Kohn, "for his development of the density-functional theory", and John Pople, "for his development of computational methods in quantum chemistry", received the 1998 Nobel Prize in Chemistry. Martin Karplus, Michael Levitt and Arieh Warshel received the 2013 Nobel Prize in Chemistry for "the development of multiscale models for complex chemical systems".

Fields of application The term theoretical chemistry may be defined as a mathematical description of chemistry, whereas computational chemistry is usually used when a mathematical method is sufficiently well developed that it can be automated for implementation on a computer. In theoretical chemistry, chemists, physicists, and mathematicians develop algorithms and computer programs to predict atomic and molecular properties and reaction paths for chemical reactions. Computational chemists, in contrast, may simply apply existing computer programs and methodologies to specific chemical questions. Computational chemistry has two different aspects: computational studies used to find a starting point for a laboratory synthesis, or to assist in understanding experimental data, such as the position and source of spectroscopic peaks; and computational studies used to predict the possibility of so far entirely unknown molecules or to explore reaction mechanisms not readily studied via experiments. Thus, computational chemistry can assist the experimental chemist, or it can challenge the experimental chemist to find entirely new chemical objects. Several major areas may be distinguished within computational chemistry: the prediction of the molecular structure of molecules by the use of the simulation of forces, or more accurate quantum chemical methods, to find stationary points on the energy surface as the positions of the nuclei are varied; storing and searching for data on chemical entities (see chemical databases); identifying correlations between chemical structures and properties (see quantitative structure–property relationship (QSPR) and quantitative structure–activity relationship (QSAR)); computational approaches to help in the efficient synthesis of compounds; and computational approaches to design molecules that interact in specific ways with other molecules (e.g. drug design and catalysis).

Accuracy Computational chemistry is not an exact description of real-life chemistry, as our mathematical models of the physical laws of nature can only provide us with an approximation. However, the majority of chemical phenomena can be described to a certain degree in a qualitative or approximate quantitative computational scheme. Molecules consist of nuclei and electrons, so the methods of quantum mechanics apply. Computational chemists often attempt to solve the non-relativistic Schrödinger equation, with relativistic corrections added, although some progress has been made in solving the fully relativistic Dirac equation. In principle, it is possible to solve the Schrödinger equation in either its time-dependent or time-independent form, as appropriate for the problem in hand; in practice, this is not possible except for very small systems. Therefore, a great number of approximate methods strive to achieve the best trade-off between accuracy and computational cost. Accuracy can always be improved with greater computational cost.
Significant errors can present themselves in ab initio models comprising many electrons, due to the computational cost of fully relativistic methods. This complicates the study of molecules containing heavy (high atomic mass) atoms, such as transition metals, and their catalytic properties. Present algorithms in computational chemistry can routinely calculate the properties of small molecules that contain up to about 40 electrons with errors for energies of less than a few kJ/mol. For geometries, bond lengths can be predicted within a few picometers and bond angles within 0.5 degrees. The treatment of larger molecules that contain a few dozen atoms is computationally tractable by more approximate methods such as density functional theory (DFT). There is some dispute within the field whether or not the latter methods are sufficient to describe complex chemical reactions, such as those in biochemistry. Large molecules can be studied by semi-empirical approximate methods. Even larger molecules are treated by classical mechanics methods that use what are called molecular mechanics (MM). In QM-MM methods, small parts of large complexes are treated quantum mechanically (QM), and the remainder is treated approximately (MM).

Methods One molecular formula can represent more than one molecular isomer: a set of isomers. Each isomer is a local minimum on the energy surface (called the potential energy surface) created from the total energy (i.e., the electronic energy plus the repulsion energy between the nuclei) as a function of the coordinates of all the nuclei. A stationary point is a geometry such that the derivative of the energy with respect to all displacements of the nuclei is zero. A local (energy) minimum is a stationary point where all such displacements lead to an increase in energy. The local minimum that is lowest is called the global minimum and corresponds to the most stable isomer. If there is one particular coordinate change that leads to a decrease in the total energy in both directions, the stationary point is a transition structure and the coordinate is the reaction coordinate. This process of determining stationary points is called geometry optimization. The determination of molecular structure by geometry optimization became routine only after efficient methods for calculating the first derivatives of the energy with respect to all atomic coordinates became available. Evaluation of the related second derivatives allows the prediction of vibrational frequencies if harmonic motion is assumed. More importantly, it allows for the characterization of stationary points. The frequencies are related to the eigenvalues of the Hessian matrix, which contains second derivatives. If the eigenvalues are all positive, then the frequencies are all real and the stationary point is a local minimum. If one eigenvalue is negative (i.e., an imaginary frequency), then the stationary point is a transition structure. If more than one eigenvalue is negative, then the stationary point is a more complex one, and is usually of little interest. When one of these is found, it is necessary to move the search away from it if the experimenter is looking solely for local minima and transition structures.
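The sketch below illustrates the classification of stationary points just described, on a toy two-dimensional potential energy surface. The analytic surface, the finite-difference step, and the candidate point are all assumptions made for illustration; a real calculation would use the Hessian from an electronic structure program.

```python
# Classifying a stationary point by the eigenvalues of a numerically computed Hessian.
import numpy as np

def energy(q):
    x, y = q
    # Hypothetical analytic surface with a saddle point at the origin.
    return x**2 - y**2 + 0.1 * x**4 + 0.1 * y**4

def hessian(f, q, h=1e-4):
    """Central finite-difference Hessian of f at point q."""
    n = len(q)
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            qpp = np.array(q, float); qpp[i] += h; qpp[j] += h
            qpm = np.array(q, float); qpm[i] += h; qpm[j] -= h
            qmp = np.array(q, float); qmp[i] -= h; qmp[j] += h
            qmm = np.array(q, float); qmm[i] -= h; qmm[j] -= h
            H[i, j] = (f(qpp) - f(qpm) - f(qmp) + f(qmm)) / (4 * h * h)
    return H

q0 = np.array([0.0, 0.0])                       # candidate stationary point
eigvals = np.linalg.eigvalsh(hessian(energy, q0))

if np.all(eigvals > 0):
    kind = "local minimum (all real frequencies)"
elif np.sum(eigvals < 0) == 1:
    kind = "transition structure (one imaginary frequency)"
else:
    kind = "higher-order saddle point"
print(eigvals, "->", kind)
```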
The total energy is determined by approximate solutions of the time-dependent Schrödinger equation, usually with no relativistic terms included, and by making use of the Born–Oppenheimer approximation, which allows for the separation of electronic and nuclear motions, thereby simplifying the Schrödinger equation. This leads to the evaluation of the total energy as a sum of the electronic energy at fixed nuclear positions and the repulsion energy of the nuclei. A notable exception is the set of approaches called direct quantum chemistry, which treat electrons and nuclei on a common footing. Density functional methods and semi-empirical methods are variants on the major theme. For very large systems, the relative total energies can be compared using molecular mechanics. The ways of determining the total energy to predict molecular structures are:

Ab initio methods The programs used in computational chemistry are based on many different quantum-chemical methods that solve the molecular Schrödinger equation associated with the molecular Hamiltonian. Methods that do not include any empirical or semi-empirical parameters in their equations – being derived directly from theoretical principles, with no inclusion of experimental data – are called ab initio methods. This does not imply that the solution is an exact one; they are all approximate quantum mechanical calculations. It means that a particular approximation is rigorously defined on first principles (quantum theory) and then solved within an error margin that is qualitatively known beforehand. If numerical iterative methods must be used, the aim is to iterate until full machine accuracy is obtained (the best that is possible with a finite word length on the computer, and within the mathematical and/or physical approximations made). The simplest type of ab initio electronic structure calculation is the Hartree–Fock method (HF), an extension of molecular orbital theory, in which the correlated electron-electron repulsion is not specifically taken into account; only its average effect is included in the calculation. As the basis set size is increased, the energy and wave function tend towards a limit called the Hartree–Fock limit. Many types of calculations (termed post-Hartree–Fock methods) begin with a Hartree–Fock calculation and subsequently correct for electron-electron repulsion, referred to also as electronic correlation. As these methods are pushed to the limit, they approach the exact solution of the non-relativistic Schrödinger equation. To obtain exact agreement with experiment, it is necessary to include relativistic and spin–orbit terms, both of which are far more important for heavy atoms. In all of these approaches, along with the choice of method, it is necessary to choose a basis set. This is a set of functions, usually centered on the different atoms in the molecule, which are used to expand the molecular orbitals with the linear combination of atomic orbitals (LCAO) molecular orbital method ansatz. Ab initio methods need to define a level of theory (the method) and a basis set. The Hartree–Fock wave function is a single configuration or determinant. In some cases, particularly for bond-breaking processes, this is inadequate, and several configurations must be used. Here, the coefficients of the configurations, and of the basis functions, are optimized together. The total molecular energy can be evaluated as a function of the molecular geometry; in other words, the potential energy surface.
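As a concrete illustration of choosing a level of theory and a basis set, the sketch below runs restricted Hartree–Fock calculations on the hydrogen molecule with progressively larger basis sets, showing the approach towards the Hartree–Fock limit. It assumes the third-party PySCF package is available; the geometry and the particular basis sets are illustrative choices, not prescriptions from the text.

```python
# Minimal restricted Hartree-Fock sketch using the PySCF library (assumed installed).
from pyscf import gto, scf

for basis in ("sto-3g", "cc-pvdz", "cc-pvtz"):
    mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis=basis, unit="Angstrom")
    mf = scf.RHF(mol)          # single-determinant wave function
    e_hf = mf.kernel()         # iterate to self-consistency
    print(f"{basis:8s}  E(HF) = {e_hf:.6f} Hartree")
```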
Such a potential energy surface can be used for reaction dynamics. The stationary points of the surface lead to predictions of different isomers and the transition structures for conversion between isomers, but these can be determined without a full knowledge of the complete surface. A particularly important objective, called computational thermochemistry, is to calculate thermochemical quantities such as the enthalpy of formation to chemical accuracy. Chemical accuracy is the accuracy required to make realistic chemical predictions and is generally considered to be 1 kcal/mol or 4 kJ/mol. To reach that accuracy in an economic way it is necessary to use a series of post-Hartree–Fock methods and combine the results. These methods are called quantum chemistry composite methods.

Density functional methods Density functional theory (DFT) methods are often considered to be ab initio methods for determining the molecular electronic structure, even though many of the most common functionals use parameters derived from empirical data, or from more complex calculations. In DFT, the total energy is expressed in terms of the total one-electron density rather than the wave function. In this type of calculation, there is an approximate Hamiltonian and an approximate expression for the total electron density. DFT methods can be very accurate for little computational cost. Some methods combine the density functional exchange functional with the Hartree–Fock exchange term and are termed hybrid functional methods.

Semi-empirical methods Semi-empirical quantum chemistry methods are based on the Hartree–Fock formalism, but make many approximations and obtain some parameters from empirical data. They were very important in computational chemistry from the 1960s to the 1990s, especially for treating large molecules where the full Hartree–Fock method without the approximations was too costly. The use of empirical parameters appears to allow some inclusion of correlation effects into the methods. Primitive semi-empirical methods were designed even earlier, in which the two-electron part of the Hamiltonian is not explicitly included. For π-electron systems, this was the Hückel method proposed by Erich Hückel, and for all valence electron systems, the extended Hückel method proposed by Roald Hoffmann. Sometimes, Hückel methods are referred to as "completely empirical" because they do not derive from a Hamiltonian. Yet, the terms "empirical methods" or "empirical force fields" are usually used to describe molecular mechanics.

Molecular mechanics In many cases, large molecular systems can be modeled successfully while avoiding quantum mechanical calculations entirely. Molecular mechanics simulations, for example, use a single classical expression for the energy of a compound, for instance the harmonic oscillator. All constants appearing in the equations must be obtained beforehand from experimental data or ab initio calculations. The database of compounds used for parameterization, together with the resulting set of parameters and functions (called the force field), is crucial to the success of molecular mechanics calculations. A force field parameterized against a specific class of molecules, for instance proteins, would be expected to have relevance only when describing other molecules of the same class. These methods can be applied to proteins and other large biological molecules, and allow studies of the approach and interaction (docking) of potential drug molecules.
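The sketch below shows the simplest kind of molecular-mechanics energy term discussed above, a harmonic bond stretch. The force constant and equilibrium length are made-up illustrative values loosely resembling a carbon-carbon single bond, not parameters from any published force field.

```python
# Harmonic bond-stretch term E = 1/2 k (r - r0)^2 (illustrative parameters only).
import numpy as np

def bond_energy(r_atom1, r_atom2, k, r0):
    """Harmonic bond-stretch energy for one bond (k in kJ/mol/nm^2, r0 in nm)."""
    r = np.linalg.norm(np.asarray(r_atom1) - np.asarray(r_atom2))
    return 0.5 * k * (r - r0) ** 2

# Hypothetical parameters loosely resembling a C-C single bond.
k, r0 = 250000.0, 0.153            # force constant and equilibrium length
print(bond_energy([0.0, 0.0, 0.0], [0.160, 0.0, 0.0], k, r0))
```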
Methods for solids Computational chemical methods can be applied to solid state physics problems. The electronic structure of a crystal is in general described by a band structure, which defines the energies of electron orbitals for each point in the Brillouin zone. Ab initio and semi-empirical calculations yield orbital energies; therefore, they can be applied to band structure calculations. Since it is time-consuming to calculate the energy for a molecule, it is even more time-consuming to calculate energies for the entire list of points in the Brillouin zone.

Chemical dynamics Once the electronic and nuclear variables are separated (within the Born–Oppenheimer representation), in the time-dependent approach, the wave packet corresponding to the nuclear degrees of freedom is propagated via the time evolution operator associated with the time-dependent Schrödinger equation (for the full molecular Hamiltonian). In the complementary energy-dependent approach, the time-independent Schrödinger equation is solved using the scattering theory formalism. The potential representing the interatomic interaction is given by the potential energy surfaces. In general, the potential energy surfaces are coupled via the vibronic coupling terms. The most popular methods for propagating the wave packet associated with the molecular geometry are: the split operator technique, the Chebyshev (real) polynomial, the multi-configuration time-dependent Hartree method (MCTDH), and the semiclassical method.

Molecular dynamics Molecular dynamics (MD) uses either quantum mechanics, molecular mechanics or a mixture of both to calculate forces, which are then used to solve Newton's laws of motion and examine the time-dependent behaviour of systems. The result of a molecular dynamics simulation is a trajectory that describes how the positions and velocities of particles vary with time. The phase point of a system, described by the positions and momenta of all its particles at a previous time point, determines the next phase point in time by integration of Newton's laws of motion.

Monte Carlo Monte Carlo (MC) methods generate configurations of a system by making random changes to the positions of its particles, together with their orientations and conformations where appropriate. It is a random sampling approach that makes use of so-called importance sampling. Importance sampling methods preferentially generate low-energy states, which enables properties to be calculated accurately (a minimal Metropolis sketch appears at the end of this overview). The potential energy of each configuration of the system can be calculated, together with the values of other properties, from the positions of the atoms.

Quantum mechanics/Molecular mechanics (QM/MM) QM/MM is a hybrid method that attempts to combine the accuracy of quantum mechanics with the speed of molecular mechanics. It is useful for simulating very large molecules such as enzymes.

Interpreting molecular wave functions The quantum theory of atoms in molecules (QTAIM) model of Richard Bader was developed to effectively link the quantum mechanical model of a molecule, as an electronic wavefunction, to chemically useful concepts such as atoms in molecules, functional groups, bonding, the theory of Lewis pairs, and the valence bond model. Bader has demonstrated that these empirically useful chemistry concepts can be related to the topology of the observable charge density distribution, whether measured or calculated from a quantum mechanical wavefunction. QTAIM analysis of molecular wavefunctions is implemented, for example, in the AIMAll software package.
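As promised above, here is a minimal Metropolis Monte Carlo sketch for a single particle in a harmonic well, illustrating how importance sampling preferentially visits low-energy configurations. The potential, step size, and temperature are arbitrary illustrative choices, not values from the text.

```python
# Metropolis Monte Carlo importance sampling for a particle in a harmonic well.
import math
import random

def energy(x):
    return 0.5 * x * x          # toy potential, arbitrary units

kT = 1.0                        # thermal energy
x = 0.0
samples = []
random.seed(0)

for step in range(50000):
    x_new = x + random.uniform(-0.5, 0.5)       # random trial move
    dE = energy(x_new) - energy(x)
    # Accept downhill moves always, uphill moves with Boltzmann probability.
    if dE <= 0 or random.random() < math.exp(-dE / kT):
        x = x_new
    samples.append(x)

mean_E = sum(energy(s) for s in samples) / len(samples)
print("average potential energy, expected ~ kT/2:", round(mean_E, 3))
```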
Software packages Many self-sufficient computational chemistry software packages exist. Some include many methods covering a wide range, while others concentrate on a very specific range or even on a single method. Details of most of them can be found in: biomolecular modelling programs (proteins, nucleic acids), molecular mechanics programs, quantum chemistry and solid state physics software supporting several methods, molecular design software, semi-empirical programs, and valence bond programs.

See also Citations

General bibliography C. J. Cramer, Essentials of Computational Chemistry, John Wiley & Sons (2002). T. Clark, A Handbook of Computational Chemistry, Wiley, New York (1985). A. K. Hartmann, Practical Guide to Computer Simulations, World Scientific (2009). F. Jensen, Introduction to Computational Chemistry, John Wiley & Sons (1999). K. I. Ramachandran, G. Deepa and Krishnan Namboori P. K., Computational Chemistry and Molecular Modeling: Principles and Applications, Springer-Verlag GmbH. P. v. R. Schleyer (Editor-in-Chief), Encyclopedia of Computational Chemistry, Wiley (1998). D. Sherrill, Notes on Quantum Mechanics and Computational Chemistry. J. Simons, An Introduction to Theoretical Chemistry, Cambridge (2003). A. Szabo and N. S. Ostlund, Modern Quantum Chemistry, McGraw-Hill (1982). D. Young, Computational Chemistry: A Practical Guide for Applying Techniques to Real World Problems, John Wiley & Sons (2001). D. Young's Introduction to Computational Chemistry.

Specialized journals on computational chemistry Annual Reports in Computational Chemistry, Computational and Theoretical Chemistry, Computational and Theoretical Polymer Science, Computers & Chemical Engineering, Journal of Chemical Information and Modeling, Journal of Chemical Software, Journal of Chemical Theory and Computation, Journal of Cheminformatics, Journal of Computational Chemistry, Journal of Computer Aided Chemistry, Journal of Computer Chemistry Japan, Journal of Computer-Aided Molecular Design, Journal of Theoretical and Computational Chemistry, Molecular Informatics, Theoretical Chemistry Accounts.

External links NIST Computational Chemistry Comparison and Benchmark DataBase – contains a database of thousands of computational and experimental results for hundreds of systems. American Chemical Society Division of Computers in Chemistry – resources for grants, awards, contacts and meetings. CSTB report Mathematical Research in Materials Science: Opportunities and Perspectives. 3.320 Atomistic Computer Modeling of Materials (SMA 5107), free MIT course. Chem 4021/8021 Computational Chemistry, free University of Minnesota course. Technology Roadmap for Computational Chemistry. Applications of molecular and materials modelling. Impact of Advances in Computing and Communications Technologies on Chemical Science and Technology (CSTB report). MD and Computational Chemistry applications on GPUs. Computational fields of study Theoretical chemistry
14968551
https://en.wikipedia.org/wiki/PyChess
PyChess
PyChess is a free software chess client developed for GNU. It allows users to play offline or online via the Free Internet Chess Server (FICS). PyChess also incorporates a built-in chess engine, which in contrast to most other chess AIs is written in the Python language and focuses more on the fun of play than on raw strength. For more advanced users, PyChess allows virtually any external chess engine to be used with it.

History Development of PyChess was started by Thomas Dybdahl Ahle in 2006, and the first public release was sent out later that year. The release contained the bare minimum of features needed to play a game of chess, and was backed only by the GNU Chess engine. At the end of 2006, PyChess came close to becoming a part of GNOME Games, which was holding a usage survey of aspiring new games to include in the suite. Having only just started at the time, it lost to the more established glChess, which managed to fix its hardware-acceleration dependency before the end of the trial. glChess is still developed as a part of GNOME today. Afterwards there were talks of the two programs merging, but the developers decided they were targeting different user segments, with PyChess aiming towards more advanced users. In 2009 PyChess won Les Trophées du Libre in Paris in the category of hobby computing. PyChess has grown steadily since then, with increasing year-to-year development activity, and would cost more than $500,000 to develop today in terms of the man-hours required to produce such a codebase. By 2011 it was among the seven most frequently used chess clients for accessing the Free Internet Chess Server, which in turn is the only non-web-based chess server available for Linux. Version 0.12 of PyChess uses PyGObject and GTK+ 3; prior versions used the obsolete PyGTK.

Logo The current PyChess logo was contributed by Karol Kreński in 2007. Karol's original design was very cartoonish, but was modified into a slightly calmer expression.

Aims According to the PyChess website: The PyChess project puts heavy emphasis on simplicity, trying to avoid the complicated user interfaces of XBoard and BabasChess. This implies adding new features slowly, so that they can be integrated into the overall usage scheme and make things "just work". At the same time the project strives to contain most of the features known from major Windows chess clients such as Chessbase and Aquarium by ChessOK.

See also GNOME Chess References External links Free chess software Free software programmed in Python PC games that use GTK Software that uses PyGTK Software that uses PyGObject 2006 software
50498591
https://en.wikipedia.org/wiki/Mindbody%20Inc.
Mindbody Inc.
MINDBODY, Inc. is a San Luis Obispo, California-based software-as-a-service company that provides cloud-based online scheduling and other business management software for the wellness services industry. Founded in 2001, the company serves over 58,000 health and wellness businesses with about 35 million consumers in over 130 countries and territories. The company also owns ClassPass. It is majority owned by Vista Equity Partners, a private equity firm. The Mindbody mobile app is integrated with Fitbit and Under Armour's MyFitnessPal.

History Mindbody was originally known as HardBody Software. It was co-founded by Blake Beltram, Rick Stollmeyer, and Robert Murphy in 2000. The company was incorporated as MINDBODY, Inc. in 2001. In 2005, Mindbody Online was launched.

Financing The company received its first financing round of US$1 million in November 2005 from Tech Coast Angels and Pasadena Angels. In April 2009, it received $5.6 million in financing from Catalyst Investors. In August 2010, it raised $11 million from Bessemer Venture Partners and Catalyst Investors. It received another round of funding in November 2012 of US$35 million from Bessemer Venture Partners, Institutional Venture Partners, and Catalyst Investors. In February 2014, Mindbody received its final round of private funding in the amount of US$50 million from Bessemer Venture Partners, Institutional Venture Partners, Catalyst Investors, W Capital Partners and Montreux Equity Partners. In June 2015, the company became a public company via an initial public offering, raising $100 million. In February 2019, the company was acquired by Vista Equity Partners for US$1.9 billion.

Acquisitions In February 2010, Mindbody acquired ClientMagic, a company that provided scheduling and business management software to salons and spas. In June 2013, Mindbody acquired Jill's List, a platform for integrative healthcare practitioners. In February 2015, Mindbody acquired Fitness Mobile Apps; at the time of the acquisition, the company was creating customizable applications for the iOS and Android platforms. In September 2016, Mindbody acquired HealCode, a technology company that designed web tools for the fitness and wellness industry. In March 2017, the company acquired Lymber Wellness. In February 2018, the company acquired FitMetrix. In March 2018, the company acquired Booker Software for $150 million. In October 2018, the company exposed millions of user records because its servers lacked passwords. In May 2019, the company acquired Bowtie.ai. In May 2019, the company acquired Simplicity First. In May 2020, the company acquired ZeeZor, an analytics and staff engagement platform for salon and spa businesses. In October 2021, the company acquired ClassPass.

Awards and recognition 2016 Webby Award and Webby People's Voice Award in the Mobile App: Fitness and Recreation category. Mindbody Connect: Codie Award for Best Consumer Mobile Application 2015. Mindbody Express: Silicon Valley Business Awards, App: Best Finance & Management App 2015. Named among the top 50 best places to work by Glassdoor in 2016, 2015 and 2014. Named among Inc. magazine's fastest-growing companies in the US from 2008 to 2015, and making the Inc. 500 list in 2009. Ranked 8th on the list of Top 10 Most Innovative Companies in Beauty by Fast Company.
References 2001 establishments in the United States 2015 initial public offerings 2019 mergers and acquisitions Business software Companies formerly listed on the Nasdaq Cloud applications Cloud computing providers Privately held companies of the United States Software companies based in California Software companies established in 2001 Software companies of the United States
14010835
https://en.wikipedia.org/wiki/Bruce%20Artwick
Bruce Artwick
Bruce Arthur Artwick (born January 1, 1953) is an American software engineer. He is the creator of the first consumer flight simulator software. He founded Sublogic after graduating from the University of Illinois at Urbana–Champaign in 1977, and released the first version of Flight Simulator for the Apple II in 1979. His original Apple II software was purchased by Microsoft in 1982 and became Microsoft Flight Simulator 1.0. After Sublogic, Artwick founded the Bruce Artwick Organization, which continued development of flight simulator products and was eventually bought out by Microsoft.

Early life and education Artwick was born and raised in Norridge, Illinois, and attended Triton Junior College before transferring to the University of Illinois at Urbana–Champaign to study computer engineering in 1973. When he arrived, Artwick switched his focus to electrical engineering because he believed that degree would be more acceptable in the public eye. As a student at the University of Illinois, Artwick expressed his enthusiasm for aviation by doing research at the Aviation Research Lab. Artwick held a technician position in the DCL (Digital Computer Lab). Between 1975 and 1976, Artwick and his graphics group at the university designed graphic terminals for the DCL. During this time, Artwick also found the time to become a pilot. The number of hours spent doing graphics led to a rich understanding of the topic. Artwick noted, "I learned more working in the basement of the DCL than in classes." Artwick graduated with a bachelor's degree in electrical engineering in 1975 and obtained a master's degree in electrical engineering the following year.

Flight Simulator In his thesis of May 1976, called "A versatile computer generated dynamic flight display", he displayed a model of the flight of an aircraft on a computer screen. With this, Artwick proved that it was possible to use the 6800 microprocessor, which powered some of the first available microcomputers, to handle the graphics and calculations needed to produce real-time flight simulation. After establishing Sublogic in 1977, Artwick took his thesis one step further by developing the first flight simulator program for the Apple II, which was based on the 6502 microprocessor. He followed up the simulator with a Radio Shack TRS-80 version. By 1981, Flight Simulator had become so popular that it was reportedly the best-selling title for Apple. Shortly after, Microsoft entered the fray to obtain a license for Flight Simulator. Microsoft obtained a joint license, and by November 1982 Microsoft's version of Flight Simulator hit the stores as a PC entertainment program. As years passed, computer graphics continued to improve and the Flight Simulator software changed along with it.

Sublogic Bruce Artwick established Sublogic in October 1977. It was incorporated in April 1978 by Bruce's partner, Stu Moment. The business strategy of Sublogic was to sell software through the mail. The company grew substantially in just two years, and Artwick decided to move part of his operation back to Champaign-Urbana. Sublogic continued to grow and developed various versions of the flight simulator program as well as other entertainment programs. At the beginning of 1982, Flight Simulator became a top-selling product for Apple, which purchased the Flight Simulator product.
Microsoft recognized his expertise in the field of flight simulation and asked Artwick to take part in a project that would change the simulator industry. Rather than for flight purposes, Microsoft wanted to showcase the machine's graphics capabilities. By the late 1980s, the Sublogic business started to decline as the 8-bit market shifted to a 16-bit market, so Artwick decided to pursue other interests and left Sublogic. The name Sublogic came from logic circuits Artwick built for the PDP-11 in the University of Illinois' Digital Computer Laboratory (DCL).

BAO Ltd. In 1988, he left Sublogic and founded BAO Ltd. (Bruce Artwick Organization), retaining the copyright to Flight Simulator, which he continued to develop. BAO started off with six employees and grew to over 30 by 1995. BAO continued to grow and oversaw development of many aviation products in many different versions on various systems. By this time the market had expanded to include flight simulator products for the Federal Aviation Administration. BAO produced aviation-related software that was used in applications such as tower control simulation to train air traffic controllers. In 1994, BAO released Microsoft Space Simulator. In 1995, it released Tower, an air traffic control simulator. In January 1995, BAO and the copyright to Flight Simulator were acquired by Microsoft. Artwick remained with the company as a consultant.

See also Airfight flight simulator References External links Bruce Artwick at MobyGames The Flight Simulator Men 1953 births American software engineers American video game designers American video game programmers Engineers from Illinois Living people Microsoft Flight Simulator People from Norridge, Illinois
2214121
https://en.wikipedia.org/wiki/European%20Train%20Control%20System
European Train Control System
The European Train Control System (ETCS) is the signalling and control component of the European Rail Traffic Management System (ERTMS). It is designed to replace the many incompatible legacy train protection and safety systems currently used by European railways. The standard has also been adopted outside Europe and is an option for worldwide application. In technical terms it is a type of positive train control (PTC). ETCS is implemented with standard trackside equipment and unified controlling equipment within the train cab. In its advanced form, all lineside information is passed to the driver wirelessly inside the cab, removing the need for lineside signals watched by the driver. This provides the foundation for automatic train operation, which is yet to be defined. Trackside equipment aims to exchange information with the vehicle for safely supervising train circulation. The information exchanged between track and trains can be either continuous or intermittent, according to the ERTMS/ETCS level of application and to the nature of the information itself.

The need for a system like ETCS stems from more, and longer-running, trains resulting from the economic integration of the European Union (EU) and the liberalisation of national railway markets. At the beginning of the 1990s there were some national high-speed train projects supported by the EU which lacked interoperability of trains. This catalysed Directive 96/48/EC on the interoperability of high-speed trains, followed by Directive 2001/16 extending the concept of interoperability to the conventional rail system. ETCS specifications have become part of, or are referred to by, the Technical Specifications for Interoperability (TSI) for (railway) control-command systems, pieces of European legislation managed by the European Union Agency for Railways (ERA). It is a legal requirement that all new, upgraded or renewed tracks and rolling stock in the European railway system should adopt ETCS, possibly keeping legacy systems for backward compatibility. Many networks outside the EU have also adopted ETCS, generally for high-speed rail projects.

The main goal of achieving interoperability met with mixed success in the beginning. Deployment has been slow, as there is no business case for replacing existing train protection systems, especially in Germany and France, which already had advanced train protection systems installed on most main lines. Even though these legacy systems were developed in the 1960s, they provided similar performance to ETCS Level 2, hence the reluctance of infrastructure managers to replace them with ETCS. There are also significant problems regarding compatibility of the latest software releases or baselines of infrastructure-side equipment with older on-board equipment, in many cases forcing train operating companies to replace ETCS equipment after only a few years. Switzerland, an early adopter of ETCS Limited Supervision, has introduced a moratorium on its planned roll-out of ETCS Level 2 due to cost and capacity concerns, compounded by fears about GSM-R obsolescence starting in 2030.

History The European railway network grew from separate national networks with little more in common than standard gauge. Notable differences include voltages, loading gauge, couplings, signalling and control systems. By the end of the 1980s there were 14 national standard train control systems in use across the EU, and the advent of high-speed trains showed that signalling based on lineside signals was insufficient.
Both factors led to efforts to reduce the time and cost of cross-border traffic. On 4 and 5 December 1989, a working group including Transport Ministers agreed on a master plan for a trans-European high-speed rail network; this was the first time that ETCS was suggested. The Commission communicated the decision to the European Council, which approved the plan in its resolution of 17 December 1990. This led to Directive 91/440/EEC of 29 July 1991, which mandated the creation of a requirements list for interoperability in high-speed rail transport. The rail manufacturing industry and rail network operators had agreed on the creation of interoperability standards in June 1991. By 1993, the organizational framework had been created to start work on technical specifications, which would be published as Technical Specifications for Interoperability (TSI). The mandate for the TSIs was established by 93/38/EEC. In 1995, a development plan first mentioned the creation of the European Rail Traffic Management System (ERTMS). Because ETCS is largely implemented in software, some terminology from software engineering is used. Versions are called system requirements specifications (SRS). An SRS is a bundle of documents, each of which may have its own versioning. A main version is called a baseline (BL).

Baseline 1 The specification was written in 1996 in response to EU Council Directive 96/48/EC of 23 July 1996 on the interoperability of the trans-European high-speed rail system. First the European Railway Research Institute was instructed to formulate the specification, and at about the same time the ERTMS User Group was formed from six railway operators that took over the lead role in the specification. The standardisation went on for the next two years and was felt by some industry partners to be slow – 1998 saw the formation of the Union of Signalling Industry (UNISIG), including Alstom, Ansaldo, Bombardier, Invensys, Siemens and Thales, which took over the finalisation of the standard. In July 1998, the SRS 5a documents were published, forming the first baseline of the technical specifications. UNISIG provided corrections and enhancements of the baseline specification, leading to the Class P specification in April 1999. This baseline specification has been tested by six railways since 1999 as part of the ERTMS.

Baseline 2 The railway companies defined some extended requirements that were included in ETCS (e.g. RBC handover and track profile information), leading to the Class 1 SRS 2.0.0 specification of ETCS (published in April 2000). Further specification continued through a number of drafts until UNISIG published SUBSET-026, defining the current implementation of ETCS signalling equipment – this Class 1 SRS 2.2.2 was accepted by the European Commission in decision 2002/731/EC as mandatory for high-speed rail and in decision 2004/50/EC as mandatory for conventional rail. SUBSET-026 consists of eight chapters, of which chapter seven defines the ETCS language and chapter eight describes the balise telegram structure of ETCS Level 1. Later UNISIG published the corrections as SUBSET-108 (known as Class 1 SRS 2.2.2 "+"), which was accepted in decision 2006/679/EC. The earlier ETCS specification contained many optional elements that limited interoperability. The Class 1 specifications were revised in the following year, leading to the SRS 2.3.0 document series, which was made mandatory by the European Commission in decision 2007/153/EC on 9 March 2007.
Annex A describes the technical specifications on interoperability for high-speed (HS) and conventional rail (CR) transport. Using SRS 2.3.0, a number of railway operators started to deploy ETCS on a large scale; for example, the Italian Sistema Controllo Marcia Treno (SCMT) is based on Level 1 balises. Further development concentrated on compatibility specifications with the earlier Class B systems, leading to specifications like EuroZUB that continued to use the national rail management on top of Eurobalises for a transitional period. Following the experience in railway operation, the European Railway Agency (ERA) published a revised specification, Class 1 SRS 2.3.0d ("debugged"), which was accepted by the European Commission in April 2008. This compilation, SRS 2.3.0d, was declared the final version in this series (later called Baseline 2). There was a list of unresolved functional requests as well as a need for stability in practical rollouts, so development of the Baseline 3 series started in parallel to incorporate open requests, remove unneeded elements and combine them with solutions found for Baseline 2. The structure of functional levels was continued.

Baseline 3 While some countries switched to ETCS with some benefit, German and French railway operators had already introduced modern types of train protection systems, so they would gain no benefit. Instead, ideas were introduced for new modes like "Limited Supervision" (known since at least 2004) that would allow for a low-cost variant, a new and superior model for braking curves, cold movement optimisation and additional track description options. These ideas were compiled into a "baseline 3" series by the ERA and published as a Class 1 SRS 3.0.0 proposal on 23 December 2008. The first consolidation SRS 3.1.0 of the proposal was published by ERA on 26 February 2010 and the second consolidation SRS 3.2.0 on 11 January 2011. The specification GSM-R Baseline 0 was published as Annex A to the baseline 3 proposal on 17 April 2012. At the same time, a change to Annex A of SRS 2.3.0d that includes GSM-R Baseline 0 was proposed to the European Commission, allowing ETCS SRS 3.3.0 trains to run on SRS 2.3.0d tracks. The baseline 3 proposal was accepted by the European Commission with decision 2012/88/EU on 25 January 2012. The update for SRS 3.3.0 and the extension for SRS 2.3.0d were accepted by the European Commission with decision 2012/696/EU on 6 November 2012. The ERA work programme concentrated on the refinement of the test specification SRS 3.3.0, which was to be published in July 2013. In parallel, the GSM-R specification was to be extended into a GSM-R Baseline 1 by the end of 2013. Deutsche Bahn has since announced that it will equip at least the TEN corridors, using either Level 1 Limited Supervision on older tracks or Level 2 on high-speed sections. Current work continues on the Level 3 definition with low-cost specifications (compare ERTMS Regional) and the integration of GPRS into the radio protocol to increase the signalling bandwidth as required in shunting stations. The specifications for ETCS baseline 3 and GSM-R baseline 0 (Baseline 3 Maintenance Release 1) were published as recommendations SRS 3.4.0 by the ERA in May 2014 for submission to the Railway Interoperability and Safety Committee (RISC) at a meeting in June 2014. SRS 3.4.0 was accepted by the European Commission with the amending decision 2015/14/EU on 5 January 2015.
Stakeholders such as Deutsche Bahn have opted for a streamlined development model for ETCS: DB maintains a database of change requests (CRs), which are ranked by priority and impact into a CR list for the next maintenance release (MR); the MRs are to be published on fixed dates through the ERA. SRS 3.4.0 from Q2 2014 corresponds to MR1 of this process. The further steps were planned as MR2, to be published in Q4 2015 (which became SRS 3.5.0), and MR3, to be published in Q3 2017 (although SRS 3.6.0 was settled earlier, in June 2016). Each specification is commented on and handed over to the RISC for subsequent legal adoption in the European Union. Deutsche Bahn has expressed a commitment to keep the Baseline 3 specification backward compatible starting at least with SRS 3.5.0, due in 2015 under the streamlined MR2 process, with MR1 adding requirements from its own tests in preparation for the switch to ETCS (for example better frequency filters for the GSM-R radio equipment). This intention is based on plans to start replacing its PZB train protection system at that time. In December 2015, the ERA published the Baseline 3 Release 2 (B3R2) series, including GSM-R Baseline 1. B3R2 is officially presented as not being an update of the previous Baseline 3 Maintenance Release 1 (B3MR1). The notable change is the inclusion of EGPRS (GPRS with mandatory EDGE support) in the GSM-R specification, corresponding to the new EIRENE FRS 8 / SRS 16 specifications. Additionally, B3R2 includes the ETCS Driver Machine Interface specification and SRS 3.5.0. This Baseline 3 series was accepted by the European Commission with decision 2016/919/EU in late May 2016. The decision references ETCS SRS 3.6.0, which was subsequently published by the ERA as part of Set 3 in June 2016; the publications of the European Commission and the ERA for SRS 3.6.0 were synchronized to the same day, 15 June. Set 3 of B3R2 is marked as the stable basis for subsequent ERTMS deployments in the EU. The name "Set 3" follows the style of the European Commission decisions in which updates to the Baseline 2 and Baseline 3 specifications were accepted at the same time – for example, decision 2015/14/EU of January 2015 has two tables, "Set of specifications # 1 (ETCS baseline 2 and GSM-R baseline 0)" and "Set of specifications # 2 (ETCS baseline 3 and GSM-R baseline 0)". In the decision of May 2016 there are three tables: "Set of specifications # 1 (ETCS Baseline 2 and GSM-R Baseline 1)", "Set of specifications # 2 (ETCS Baseline 3 Maintenance Release 1 and GSM-R Baseline 1)", and "Set of specifications # 3 (ETCS Baseline 3 Release 2 and GSM-R Baseline 1)". In that decision the SRS (System Requirements Specification) and DMI (ETCS Driver Machine Interface) are kept at 3.4.0 for Set 2, while Set 3 is updated to SRS and DMI 3.6.0. All three tables (Set 1, Set 2 and Set 3) are updated to include the latest EIRENE FRS 8.0.0 with the same GSM-R SRS 16.0.0, to ensure interoperability. In that decision the SRS is kept at 2.3.0 for Set 1, and decision 2012/88/EU, which had first introduced the interoperability of Set 1 and Set 2 (with SRS 3.3.0 at the time) based on GSM-R Baseline 0, was repealed. Introducing Baseline 3 on a railway requires installing it on board, which in turn requires re-certification of the trains. This costs less than a first ETCS certification, but still at least €100,000 per vehicle.
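The relationship between the three specification sets, the baselines and the SRS versions named above can be summarised as a small lookup table. The sketch below is illustrative only: the field names are invented, and the version strings are simply the ones quoted in the decisions above.

```python
# Illustrative summary of the specification sets named in the May 2016 decision.
# Field names are hypothetical; version strings are those quoted in the text above.
SPEC_SETS = {
    "Set 1": {"etcs_baseline": "Baseline 2",     "gsm_r_baseline": "Baseline 1", "srs": "2.3.0"},
    "Set 2": {"etcs_baseline": "Baseline 3 MR1", "gsm_r_baseline": "Baseline 1", "srs": "3.4.0"},
    "Set 3": {"etcs_baseline": "Baseline 3 R2",  "gsm_r_baseline": "Baseline 1", "srs": "3.6.0"},
}

def srs_for(set_name: str) -> str:
    """Return the ETCS SRS version associated with a specification set."""
    return SPEC_SETS[set_name]["srs"]

if __name__ == "__main__":
    for name, spec in SPEC_SETS.items():
        print(f"{name}: ETCS {spec['etcs_baseline']}, GSM-R {spec['gsm_r_baseline']}, SRS {spec['srs']}")
```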
The need for re-certification makes Baseline 3 essentially a new, incompatible ETCS whose installation requires the replacement of electronic equipment and software both on board and along the track. Trains with ETCS Baseline 3 are allowed to run on lines equipped with Baseline 2 if certified for it, so lines already fitted with ETCS do not need to change over urgently. The first live tests of Baseline 3 took place in Denmark in July 2016. Denmark intends to install ERTMS on all its railways and will then use Baseline 3. British freight and passenger operators have signed contracts to install Baseline 3 in their trains, the first around 2020.
Deployment planning
The development of ETCS has matured to the point that cross-border traffic is possible, and some countries have announced end dates for their older systems. The first contract covering the full length of a cross-border railway was signed by Germany and France in 2004 for the high-speed line from Paris to Frankfurt, including the LGV Est. The connection opened in 2007 using ICE 3MF trainsets and was to be operational with ETCS trains by 2016. The Netherlands, Germany, Switzerland and Italy committed to opening Corridor A from Rotterdam to Genoa for freight by the start of 2015. Non-European countries are also starting to deploy ERTMS/ETCS, including Algeria, China, India, Israel, Kazakhstan, Korea, Mexico, New Zealand, and Saudi Arabia. Australia will switch to ETCS on some dedicated lines starting in 2013. The European Commission mandated that European railways publish their deployment planning by 5 July 2017. This will be used to create a geographical and technical database (TENtec) that can show the ETCS deployment status on the Trans-European Network. From this comparative overview the Commission wants to identify the need for additional coordination measures to support the implementation. Synchronously with the publication of ETCS SRS 3.6.0 on 15 June 2016, Regulation (EU) 2016/796 was published. It mandated the replacement of the European Railway Agency by the European Union Agency for Railways. The agency was tasked with creating the regulatory framework for a Single European Railway Area (SERA) under the 4th Railway Package, to be resolved in late June 2016. A week later the new EU Agency for Railways emphasized the stability of B3R2 and its use as the foundation for upcoming ETCS implementations in the EU. Based on projections for the Rhine–Alpine corridor, the cross-border ETCS implementation is expected to break even in the early 2030s. A new memorandum of understanding was signed at InnoTrans in September 2016, aiming to complete the first ETCS Deployment Plan targets by 2022. The new planning was accepted by the European Commission in January 2017, with the goal of having 50% of the Core Network Corridors equipped by 2023 and the remainder in a second phase up to 2030. The costs of the switch to ETCS are well documented in the Swiss reports from the railway operator SBB to the railway authority BAV. In December 2016 it was shown that SBB could start switching parts of the system to ETCS Level 2 whenever a section needs renewal. This would not only result in a network where sections with ETCS and the older ZUB alternate along a line; the full transition to ETCS would also last until 2060, at an estimated cost of 9.5 billion Swiss francs. The expected advantages of ETCS, higher safety and up to 30% more throughput, would also be at stake.
Thus legislation favours the second option, in which the internal equipment of the interlocking stations would be replaced by new electronic ETCS interlockings before switching the network to ETCS Level 2. However, at the time of the report the railway equipment manufacturers did not offer enough technology options to start this off, so the plan is to run feasibility studies until 2019, with a projected start of the changeover in 2025. A rough estimate indicates that the switch to ETCS Level 2 could then be completed within 13 years and would cost about 6.1 billion Swiss francs. For comparison, SBB indicated that maintaining the lineside signals would also cost about 6.5 billion Swiss francs; these signals could, however, be removed once Level 2 is in operation. The Swiss findings influenced the German project "Digitale Schiene" (digital rail). It is estimated that 80% of the rail network can be operated by GSM-R without lineside signals, which would allow about 20% more trains to be operated in the country. The project was unveiled in January 2018 and starts with a feasibility study on electronic interlocking stations that should produce a transition plan by mid-2018. It is expected that 80% of the network will have been converted to the radio-controlled system by 2030. This is more extensive than earlier plans, which focused more on ETCS Level 1 with Limited Supervision instead of Level 2.
Alternative implementations
The ETCS standard lists a number of older Automatic Train Control (ATC) systems as Class B systems. While these are slated for obsolescence, the older lineside signal information can be read using Specific Transmission Module (STM) hardware, which feeds the Class B signal information to the new ETCS on-board safety control system for partial supervision. In practice, an alternative transition scheme is sometimes used in which an older ATC is rebased onto Eurobalises. This exploits the fact that a Eurobalise can transmit multiple information packets and that the reserved national datagram (packet number 44) can encode the signal values of the old system in parallel with ETCS datagram packets. The older train-borne ATC system is equipped with an additional Eurobalise reader that converts the datagram signals. This allows a longer transitional period in which the old ATC and the Eurobalises are both mounted on the sleepers until all trains have a Eurobalise reader. Newer ETCS-compliant trains can then be switched to an ETCS operation scheme by a software update of the on-board train computer. In Switzerland, the replacement of the older Integra-Signum and ZUB 121 magnets by Eurobalises in the Euro-Signum plus EuroZUB operation scheme is under way. All trains had been equipped with Eurobalise readers and signal converters by 2005 (generally called "Rucksack"). The general operation scheme will be switched to ETCS by 2017, with an allowance for older trains to run on specific lines with EuroZUB until 2025. In Belgium, the TBL 1 crocodiles were complemented with Eurobalises in the TBL 1+ operation scheme; the TBL 1+ definition already allowed an additional speed restriction to be transmitted to the train computer. Likewise, in Luxembourg the Memor II system (using crocodiles) was extended into a Memor II+ operation scheme. In Berlin, the old mechanical train stops on the local S-Bahn rapid transit system are being replaced by Eurobalises in the newer ZBS train control system. Unlike the other schemes, it is not meant to be transitional towards a later ETCS operation scheme.
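In the transitional schemes described above, a single Eurobalise telegram carries standard ETCS packets alongside the reserved national packet 44, and each on-board system simply extracts the packets it understands. The following sketch is a deliberately simplified model with invented field names; the real telegram and packet coding is defined in SUBSET-026 (chapters 7 and 8).

```python
# Simplified model of a Eurobalise telegram carrying both ETCS and national data.
# Packet numbers 5 (linking) and 44 (national application data) exist in ETCS;
# the field names and payload layout here are invented for illustration only.

def build_telegram():
    return [
        {"packet_nr": 5,  "payload": {"linking_info": "next balise group in 1200 m"}},
        {"packet_nr": 44, "payload": {"legacy_system": "ZUB", "signal_aspect": "warning"}},
    ]

def read_as_etcs(telegram):
    """An ETCS-only on-board unit ignores the national packet 44."""
    return [p for p in telegram if p["packet_nr"] != 44]

def read_as_legacy(telegram):
    """A legacy ATC with a Eurobalise reader only evaluates packet 44."""
    return [p["payload"] for p in telegram if p["packet_nr"] == 44]

if __name__ == "__main__":
    t = build_telegram()
    print("ETCS view:  ", read_as_etcs(t))
    print("Legacy view:", read_as_legacy(t))
```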
The ZBS signalling centres and the on-board computers use ETCS components with a specific software version; manufacturers like Siemens point out that their ETCS systems can be switched to operate on ETCS, TBL, or ZBS lines. The Wuppertal Suspension Railway called for bids to modernize its train protection and management system. Alstom won the tender with a plan largely composed of ETCS components. Instead of GSM-R the system uses TETRA, which had already been in use for voice communication; the TETRA system will be expanded to allow movement authorities to be signalled by digital radio. Because train integrity will not be checked, the solution was branded ETCS Level 2+ by the manufacturer. Train integrity is the level of confidence that the train is complete and has not left coaches or wagons behind. The use of moving blocks was dropped, however; the system was implemented with just 256 balises that check the odometry of the trains, which report their position by radio to the ETCS control centre. It is expected that headways will drop from 3.5 minutes to 2 minutes when the system is activated. The system was inaugurated on 1 September 2019.
Levels of ETCS
ETCS is specified at four numbered levels:
Level 0: ETCS-compliant locomotives or rolling stock do not interact with lineside equipment, e.g. because the line is not fitted with ETCS.
Level NTC (formerly STM): ETCS-compliant driving vehicles are equipped with additional Specific Transmission Modules (STM) for interaction with legacy signalling systems. Inside the cabs are standardised ETCS driver interfaces. In the Baseline 3 definitions this is called National Train Control.
Level 1: ETCS is installed on the lineside (possibly superimposed on legacy systems) and on board; spot transmission of data from track to train (and vice versa) via Eurobalises or Euroloops.
Level 2: As Level 1, but Eurobalises are used only for exact train position detection. Continuous data transmission via GSM-R between the train and the Radio Block Centre (RBC) provides the required signalling information on the driver's display. Lineside equipment is still needed, e.g. for train detection and train integrity supervision.
Level 3: As Level 2, but train location and train integrity supervision no longer rely on trackside equipment such as track circuits or axle counters.
Level 0
Level 0 applies when an ETCS-fitted vehicle is used on a non-ETCS route. The train-borne equipment monitors the maximum speed of that type of train. The train driver observes the trackside signals. Since signals can have different meanings on different railways, this level places additional requirements on drivers' training. If the train has left an area with a higher ETCS level, its speed may still be limited globally by the last balises encountered.
Level 1
Level 1 is a cab signalling system that can be superimposed on the existing signalling system, leaving the fixed signalling system (national signalling and track-release system) in place. Eurobalise radio beacons pick up signal aspects from the trackside signals via signal adapters and telegram coders (Lineside Electronic Unit – LEU) and transmit them to the vehicle as a movement authority, together with route data, at fixed points. The on-board computer continuously monitors these data and calculates the maximum speed and the braking curve from them. Because of the spot transmission of data, the train must travel over a Eurobalise beacon to obtain its next movement authority.
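The supervision principle of Level 1, comparing the train's current speed against a braking curve that ends at the end of the movement authority, can be illustrated with a simple constant-deceleration model. This is only a sketch with assumed values; the real ETCS braking model is far more detailed and is defined normatively in SUBSET-026.

```python
# Minimal constant-deceleration braking-curve check (illustration only; the real
# ETCS braking model accounts for gradients, brake build-up times, confidence
# margins, etc.).
from math import sqrt

def permitted_speed(distance_to_eoa_m: float, deceleration_mps2: float = 0.7,
                    target_speed_mps: float = 0.0) -> float:
    """Highest speed from which the train can still brake to target_speed
    within the remaining distance to the end of authority (EoA)."""
    return sqrt(target_speed_mps**2 + 2.0 * deceleration_mps2 * max(distance_to_eoa_m, 0.0))

def supervise(current_speed_mps: float, distance_to_eoa_m: float) -> str:
    v_perm = permitted_speed(distance_to_eoa_m)
    if current_speed_mps <= v_perm:
        return "normal"
    # A real EVC would first warn the driver, then apply service or emergency brakes.
    return "brake intervention"

if __name__ == "__main__":
    print(supervise(current_speed_mps=44.0, distance_to_eoa_m=1500.0))  # ~160 km/h, 1.5 km to go
    print(supervise(current_speed_mps=44.0, distance_to_eoa_m=800.0))   # too fast for 800 m
```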
In Level 1, for a stopped train to be able to move (when it has not stopped exactly over a balise), optical signals show the permission to proceed. With the installation of additional Eurobalises ("infill balises") or a EuroLoop between the distant signal and the main signal, the new proceed aspect is transmitted continuously. The EuroLoop is an extension of the Eurobalise over a particular distance that essentially allows data to be transmitted continuously to the vehicle over cables emitting electromagnetic waves; a radio version of the EuroLoop is also possible. For example, in Norway and Sweden the meanings of single green and double green are contradictory. Drivers have to know the difference (already with traditional systems) to drive safely across the national border. In Sweden, the ETCS Level 1 list of signal aspects is not fully included in the traditional list, so a special marking indicates that such signals have slightly different meanings.
Limited Supervision
Whereas ETCS L1 Full Supervision requires supervision to be provided at every signal, ETCS L1 Limited Supervision allows only a part of the signals to be included, so the equipment can be installed only at those points of the network where the increase in functionality justifies the cost. Formally this is possible for all ETCS levels, but it is currently only applied with Level 1. As supervision is not provided at every signal, cab signalling is not available and the driver must still look out for trackside signals. For this reason the level of safety is not as high, since not all signals are included and there is still reliance on the driver seeing and respecting the trackside signalling. Studies have shown that ETCS L1 LS has the same capacity as plain Level 1 FS for half the cost. The cost advantages come from the reduced effort needed for calibrating, configuring and designing the track equipment and the ETCS telegrams. Another advantage is that Limited Supervision places few requirements on the underlying interlocking; it can therefore be applied even on lines with mechanical interlockings, as long as LEUs can read the respective signal aspects. In contrast, Level 2 requires older interlockings to be replaced with electronic or digital interlockings. This led railway operators to push for the inclusion of Limited Supervision in ETCS Baseline 3. Although interoperable according to the TSI, implementations of Limited Supervision are much more diverse than other ETCS modes; for example, the functionality of L1 LS in Germany is strongly based on PZB operating principles and typical signal distances. Limited Supervision mode was proposed by RFF/SNCF (France) based on a proposal by SBB (Switzerland). Several years later a steering group was announced, in spring 2004. After the UIC workshop on 30 June 2004 it was agreed that the UIC should produce an FRS document as a first step. The resulting proposal was distributed to the eight administrations that had been identified: ÖBB (Austria), SNCB/NMBS (Belgium), BDK (Denmark), DB Netze (Germany), RFI (Italy), CFR (Romania), Network Rail (UK) and SBB (Switzerland). After 2004, Deutsche Bahn took over responsibility for the change request. In Switzerland, the Federal Office of Transport (BAV) announced in August 2011 that, beginning in 2018, the Eurobalise-based EuroZUB/EuroSignum signalling would be switched to Level 1 Limited Supervision. High-speed lines are already using ETCS Level 2.
According to the international agreements on TEN-T Corridor A from Rotterdam to Genoa (the European backbone), the north–south corridor was to be switched to ETCS by 2015, but this was delayed and the corridor only became usable with the December 2017 timetable change.
Level 2
Level 2 is a digital radio-based system. Movement authorities and other signal aspects are displayed in the cab for the driver. Apart from a few indicator panels, it is therefore possible to dispense with trackside signalling. However, train detection and train integrity supervision still remain in place at the trackside. Train movements are monitored continually by the Radio Block Centre using this trackside-derived information. The movement authority is transmitted to the vehicle continuously via GSM-R or GPRS, together with speed information and route data. The Eurobalises are used at this level as passive positioning beacons or "electronic milestones". Between two positioning beacons, the train determines its position via sensors (axle transducers, accelerometer and radar); the positioning beacons serve as reference points for correcting distance measurement errors. The on-board computer continuously monitors the transferred data and the maximum permissible speed.
Level 3
With Level 3, ETCS goes beyond pure train protection functionality by implementing full radio-based train spacing. Fixed train detection devices (GFM) are no longer required. As with Level 2, trains find their position themselves by means of positioning beacons and sensors (axle transducers, accelerometer and radar), and must also be capable of determining train integrity on board to the very highest degree of reliability. By transmitting the positioning signal to the Radio Block Centre, it is always possible to determine the point on the route that the train has safely cleared. The following train can already be granted another movement authority up to this point. The route is thus no longer cleared in fixed track sections. In this respect, Level 3 departs from classic operation with fixed block sections: given sufficiently short positioning intervals, continuous line-clear authorisation is achieved and train headways come close to the principle of operation with absolute braking-distance spacing ("moving block"). Level 3 uses radio to pass movement authorities to the train and uses the train's reported position and integrity to determine whether it is safe to issue a movement authority. Level 3 is currently under development. Solutions for reliable train integrity supervision are highly complex and are hardly suitable for transfer to older types of freight rolling stock. The Confirmed Safe Rear End (CSRE) is the point in rear of the train at the furthest extent of the safety margin; if the safety margin is zero, the CSRE coincides with the Confirmed Rear End. Some kind of end-of-train device is needed, or lines must be reserved for rolling stock with built-in integrity checks, such as commuter multiple units or high-speed passenger trains. A ghost train is a vehicle in the Level 3 area that is not known to the Level 3 trackside.
ERTMS Regional
A variant of Level 3 is ERTMS Regional, which has the option of being used with virtual fixed blocks or with true moving-block signalling. It was defined early and implemented in a cost-sensitive environment in Sweden. In 2016, with SRS 3.5+, it was adopted into the core standards and is now officially part of Baseline 3 Level 3.
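Between two balise groups the on-board position estimate in Level 2 and Level 3 is pure dead reckoning, so it carries a growing confidence interval that is reset each time a balise group is passed. The sketch below illustrates that bookkeeping; the 5% tolerance is an invented placeholder, not a value from the specification.

```python
# Minimal dead-reckoning position estimate between balises (illustration only).
# The 5 % over/under-reading tolerance used here is an invented placeholder.

class OdometryEstimate:
    def __init__(self, tolerance: float = 0.05):
        self.last_balise_position_m = 0.0    # position of the last balise group passed
        self.travelled_since_balise_m = 0.0  # measured by wheel sensors / radar
        self.tolerance = tolerance

    def pass_balise(self, balise_position_m: float) -> None:
        """A balise acts as a reference point: reset the accumulated error."""
        self.last_balise_position_m = balise_position_m
        self.travelled_since_balise_m = 0.0

    def move(self, measured_distance_m: float) -> None:
        self.travelled_since_balise_m += measured_distance_m

    def position_window_m(self) -> tuple[float, float]:
        """(min, max) estimated position, widening with distance travelled."""
        error = self.tolerance * self.travelled_since_balise_m
        centre = self.last_balise_position_m + self.travelled_since_balise_m
        return centre - error, centre + error

if __name__ == "__main__":
    odo = OdometryEstimate()
    odo.pass_balise(10_000.0)
    odo.move(1_200.0)
    print(odo.position_window_m())  # (11140.0, 11260.0)
    odo.pass_balise(11_200.0)       # next balise group corrects the estimate
    print(odo.position_window_m())  # (11200.0, 11200.0)
```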
ERTMS Regional can either use train integrity supervision or accept limited speed and traffic volume to reduce the probability and the consequences of colliding with detached rail vehicles. ERTMS Regional has lower commissioning and maintenance costs, since trackside train detection devices are not routinely used, and is suitable for lines with low traffic volume. These low-density lines usually have no automatic train protection system today and will therefore benefit from the added safety.
GNSS
Instead of using fixed balises to detect the train location, there may be "virtual balises" based on satellite navigation and GNSS augmentation. Several studies on the use of GNSS in railway signalling have been carried out by the UIC (GADEROS/GEORAIL) and ESA (RUNE/INTEGRAIL). Experience from the LOCOPROL project shows that real balises are still required in railway stations, junctions and other areas where greater positional accuracy is required. The successful use of satellite navigation in the GLONASS-based Russian ABTC-M block control triggered the creation of the ITARUS-ATC system, which integrates Level 2 RBC elements; the manufacturers Ansaldo STS and VNIIAS aim for certification of the ETCS compatibility of this system. The first real implementation of the virtual balise concept was carried out during the ESA project 3InSat on 50 km of track of the Cagliari–Golfo Aranci Marittima railway on Sardinia, in which SIL-4 train localisation at signalling-system level was developed using differential GPS. A pilot project, "ERSAT EAV", has been running since 2015 with the objective of verifying the suitability of EGNSS as an enabler of cost-efficient and economically sustainable ERTMS signalling solutions for safety-related railway applications. Ansaldo STS has come to lead the UNISIG working group on GNSS integration into ERTMS within Next Generation Train Control (NGTC) WP7, whose main scope is to specify the ETCS virtual balise functionality, taking the interoperability requirement into account. Following the NGTC specifications, future interoperable GNSS positioning systems supplied by different manufacturers will reach the defined positioning performance at the locations of the virtual balises.
Level 4
Level 4 is an idea that has been mooted, envisaging train convoys or virtual coupling as ways to increase track capacity; at the moment it is merely under discussion.
Train-borne equipment
All ETCS-compliant trains are fitted with on-board systems certified by Notified Bodies. This equipment comprises wireless communication, track-path sensing, a central logic unit, cab displays and control devices for the driver.
Man Machine Interface
The Man Machine Interface (MMI) is the standardised interface for the driver, also called the "Driver Machine Interface" (DMI). It consists of a set of colour displays with touch input, one for ETCS and a separate one for GSM-R communication, supplemented by control devices specific to the train type.
Specific Transmission Module
The Specific Transmission Module (STM) is a special interface of the EVC for communicating with legacy Class B ATP systems like PZB, Memor and ATB. It consists of specific sensing elements for the lineside installations and a hardware and logic interface adapting them to the EVC. The EVC needs special software to translate the legacy signals into the unified internal ETCS communication, so the driver uses the standard ETCS cab equipment even on non-ETCS lines.
The STM therefore enables an ETCS-equipped driving vehicle to be used on non-equipped networks and is today essential for interoperability.
Balise Transmission Module
The Balise Transmission Module (BTM) is the set of antennas and the wireless interface for reading data telegrams from, and writing to, Eurobalises.
Odometric sensors
The odometric sensors are essential for exact position determination. In ETCS Level 2 installations, Eurobalises are installed only sparsely, as fixed reference points ("milestones"); between such milestones the position is measured and estimated relative to the last milestone passed. Early tests showed that under difficult adhesion conditions axle revolution transmitters alone would not give the required precision.
European Vital Computer
The European Vital Computer (EVC), also called Eurocab, is the heart of the on-board computing capability of the driving vehicle. It is connected to the external data communication, the internal speed-regulation controls of the locomotive, the location sensors and all cab devices used by the driver.
Euroradio
The Euroradio communication unit is compulsory and is used for voice and data communication. Because in ETCS Level 2 all signalling information is exchanged via GSM-R, the equipment is fully duplicated, with two simultaneous connections to the RBC.
Juridical Recording Unit
The Juridical Recording Unit (JRU) is the part of the EVC that records the most recent actions of the driver, the most recent signalling parameters and the machine conditions. Such a train event recorder is functionally equivalent to the flight recorder of an aircraft.
Train Interface Unit
The Train Interface Unit (TIU) is the interface of the EVC to the train and/or the locomotive for submitting commands or receiving information.
Lineside equipment
Lineside equipment is the fixed, installed part of an ETCS installation. The higher the ETCS level, the less track-related installation is needed: while in Level 1 groups of two or more Eurobalises are needed for the signal exchange, in Level 2 balises are used only as milestones, their role otherwise being taken over by mobile communication and more sophisticated software. In Level 3 even less fixed installation is used; in 2017 the first successful tests of satellite positioning were carried out.
Eurobalise
The Eurobalise is a passive or active antenna device mounted on rail sleepers that mostly transmits information to the driving vehicle; balises can be arranged in groups to transfer larger amounts of information. There are fixed and transparent data balises: transparent data balises send changing information from the LEU to the trains, e.g. signal indications, while fixed balises are programmed with static information such as gradients and speed restrictions.
Euroloop
The Euroloop is an extension of the Eurobalise in ETCS Level 1. It is a special leaky feeder for transmitting information telegrams to the vehicle.
Lineside Electronic Unit
The Lineside Electronic Unit (LEU) is the unit connecting transparent data balises to the signals or to the signalling control system in ETCS Level 1.
Radio Block Centre
A Radio Block Centre (RBC) is a specialised computing device built to Safety Integrity Level 4 (SIL 4) for generating movement authorities (MA) and transmitting them to the trains. It receives information from the signalling control system and from the trains in its area, hosts the specific geographic data of its railway section, and receives cryptographic keys from trains entering the area. Depending on the conditions, the RBC serves the trains with movement authorities until they leave its area.
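The core task of the RBC under the Level 3 principle described above can be reduced to a single rule: the movement authority for a following train ends at the confirmed safe rear end reported for the train ahead. The sketch below is a toy model with invented names and margins, not the SUBSET-026 definition.

```python
# Toy moving-block calculation for ETCS Level 3 (illustration only; names and
# numbers are invented, not taken from SUBSET-026).

def confirmed_safe_rear_end(front_position_m: float, train_length_m: float,
                            safety_margin_m: float) -> float:
    """CSRE = confirmed rear end of the train, shifted back by the safety margin.
    With a zero margin it coincides with the confirmed rear end."""
    confirmed_rear_end = front_position_m - train_length_m
    return confirmed_rear_end - safety_margin_m

def end_of_authority_for_follower(leader_front_m: float, leader_length_m: float,
                                  safety_margin_m: float = 50.0) -> float:
    """Under moving block the follower may be authorised up to the leader's CSRE."""
    return confirmed_safe_rear_end(leader_front_m, leader_length_m, safety_margin_m)

if __name__ == "__main__":
    print(end_of_authority_for_follower(leader_front_m=12_500.0, leader_length_m=400.0))
    # -> 12050.0 : the follower's authority ends 450 m behind the leader's front
```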
RBCs have standardised interfaces towards the trains, but there are no regulated interfaces to the signalling control systems, which remain subject only to national regulation.
Operation modes in ETCS
ETCS test laboratories
Three ETCS test laboratories work together to support the industry. Multitel has been accredited to ISO 17025 for EVC testing (Subset-076 / Subset-094) since 22 February 2011; to act as a reference laboratory, ERA requires the laboratories to be accredited to ISO 17025.
Future
GSM is no longer being developed outside of GSM-R. However, as of 2021, ERA expected GSM-R equipment suppliers to support the technology until at least 2030. ERA is considering what action is needed to transition smoothly to a successor system such as GPRS or EDGE. ETCS Baseline 3 already contains functionality for this.
Deployment
In July 2009, the European Commission announced that ETCS is mandatory for all EU-funded projects that include new or upgraded signalling, and that GSM-R is required when radio communications are upgraded. Some short stretches in Switzerland, Italy, the Netherlands, Germany, France, Sweden, and Belgium are equipped with Level 2 and in operation.
ETCS corridors
Based on the 2003 proposal for 30 TEN-T Priority Axes and Projects, a cost/benefit analysis was performed by the UIC and presented in December 2003. This identified ten rail corridors, covering about 20% of the TEN network, that should be given priority in changing to ETCS, and these were included in decision 884/2004/EC by the European Commission. In 2005 the UIC combined the axes into the following ETCS corridors, subject to international development contracts:
Corridor A: Rotterdam – Duisburg – Basel – Genoa
Corridor B: Naples – Bologna – Innsbruck – Munich – Berlin – Stockholm
Corridor C: Antwerp – Strasbourg – Basel/Antwerp – Dijon – Lyon
Corridor D: Valencia – Barcelona – Lyon – Turin – Milan – Trieste – Ljubljana – Budapest
Corridor E: Dresden – Prague – Vienna – Budapest – Constanta
Corridor F: Aachen – Duisburg – Hanover – Magdeburg – Berlin – Poznań – Warsaw – Belarus
The Trans-European Transport Network Executive Agency (TEN-T EA) publishes ETCS funding announcements showing the progress of trackside and on-board equipment installation. Corridor A received trackside equipment funding for January 2007 – December 2012 (2007-DE-60320-P, German section Betuweroute – Basel) and June 2008 – December 2013 (2007-IT-60360-P, Italian section); the Betuweroute in the Netherlands is already using Level 2 and Switzerland will switch to ETCS in 2017. Corridor B: January 2007 – December 2012 (2007-AT-60450-P, Austrian part), January 2009 – December 2013 (2009-IT-60149-P, Italian section Brenner – Verona). Corridor C: May 2006 – December 2009 (2006-FR-401c-S, LGV Est). Corridor D: January 2009 – December 2013 (2009-EU-60122-P, Valencia – Montpellier, Turin – Ljubljana/Murska). Corridor E: June 2008 – December 2012 (2007-CZ-60010-P, Czech section), May 2009 – December 2013 (2009-AT-60148-P, Austrian section via Vienna). Corridor F: January 2007 – December 2012 (2007-DE-60080-P, Aachen – Duisburg/Oberhausen). Corridor A has two routes in Germany – the double track east of the Rhine (rechte Rheinstrecke) will be ready with ETCS in 2018 (Emmerich, Oberhausen, Duisburg, Düsseldorf, Köln-Kalk, Neuwied, Oberlahnstein, Wiesbaden, Darmstadt, Mannheim, Schwetzingen, Karlsruhe, Offenburg, Basel), while the upgrade of the double track west of the Rhine (linke Rheinstrecke) will be postponed.
Corridor F will be developed in coordination with Poland as far as it offers ETCS traffic: Frankfurt – Berlin – Magdeburg will be ready in 2012, and Hanover to Magdeburg – Wittenberg – Görlitz in 2015. At the other end, Aachen to Oberhausen will be ready in 2012 and the missing section from Oberhausen to Hanover in 2020. The other two corridors are postponed, and Germany has chosen to support the fitting of locomotives with STMs to fulfil the requirement of ETCS traffic on the corridors.
Australia
Implementation in Adelaide, SA is planned for mid/late 2014. Implementation of ETCS Level 2 in South East Queensland is planned to be operational from 2021. A trial in Central Queensland with electric coal trains west of Rockhampton is planned from 2019. ETCS L2 is fundamental to the implementation of Rio Tinto Iron Ore's AutoHaul system and is implemented throughout the majority of its heavy-haul network. ETCS L1 LS is being progressively rolled out on Sydney and NSW's electrified heavy-rail suburban lines, with the northern and southern lines operational in 2020. Portions of the electrified network are planned to be equipped with ETCS L2 plus ATO; the implementation project is called 'Digital Systems'.
Austria
Implementation in Austria started in 2001 with a Level 1 test section on the Eastern Railway between Vienna and Nickelsdorf. By the end of 2005 the whole line between Vienna and Budapest had been equipped with ETCS L1. The newly built stretches of the Western Railway between Vienna and St. Pölten and the New Lower Inn Valley Railway are equipped with ETCS L2, as is the North railway from Vienna to Bernhardstal. As of 2019, a total of 484 km is operational under ETCS.
Belgium
In Belgium the state railway company SNCB (in French; NMBS in Dutch, NGBE in German) has led all activities for the introduction of ETCS since the end of the 1990s. The interest resulted from the new high-speed lines (HSL) under construction, the development of the Atlantic ports and technically ageing national signalling systems. In 1999 the board of SNCB decided to open HSL 2 with the proprietary TBL 2 system, but that all following lines should use ETCS. To raise the level of safety on conventional lines, the use of ETCS L1 was considered for compatibility, but because of the high cost of a full implementation on the rolling stock it was decided to select standard ETCS components for the interfaces on the locomotives (receivers) and on the track (balises) so as to support the existing infrastructure easily. The balises sent their information in the reserved national packet 44, compatible with the conventional signalling; the system was named TBL1+ and can later be complemented with standardised ETCS information. This is the same migration path as chosen in Italy (SCMT) and Switzerland (Euro-Signum and Euro-ZUB). In 2003 SNCB selected a consortium to supply ETCS for the next high-speed lines with Level 2 and a Level 1 fallback. It was decided to supply ETCS L1 LS first and to migrate to L1 FS later. A tender for renewing 4000 signals with TBL1+ and L1, including 20 years of support, was therefore started in 2001; in 2006 Siemens was selected for delivery. Following the privatisation of SNCB in 2006, the split-off company Infrabel became responsible for the whole state railway infrastructure and continued the introduction of ETCS on the infrastructure, while SNCB remained responsible for the rolling stock. Following some serious accidents (e.g.
the Halle train collision) caused by missing or malfunctioning protection systems, there was a clear objective to raise the level of safety across the whole network. The first line in ETCS operation was the 56 km (35 mi) HSL 3 in 2007. Because of a lack of trains equipped with ETCS, commercial operation only started in 2009 with ICE 3 and Thalys trains. Operation started with ETCS SRS 2.2.2 and was later upgraded to 2.3.0. The HSL 4 high-speed line was constructed at the same time as HSL 3 and therefore received the same ETCS equipment. Testing began in 2006 and commercial traffic started around 2008 with locomotive-hauled trains under Level 1. In 2009 commercial high-speed traffic started under ETCS L2 with suitably equipped Thalys and ICE trains, as on HSL 3. A special feature is the first full-speed, gapless border crossing under ETCS L2 supervision, onto HSL-Zuid. By 2009 all railway lines in Belgium were covered by GSM-R, a prerequisite for ETCS L2 and also useful in L1 operation. A first national ETCS master plan was released in 2011 and renewed in 2016. It names the following four phases of ETCS introduction:
Phase 1: TBL1+ programme completed (until end of 2015, achieved);
Phase 2: Network fully equipped with ETCS and TBL1+ (2016–2022, in progress);
Phase 3: Making ETCS the only technical standard and removing TBL1+ (until 2025);
Phase 4: Convergence towards a homogeneous version of ETCS L2 (about 2030–2035).
The first conventional railway line to be equipped with ETCS L1 was Brussels–Liège; it entered public service in March 2012. Next, in December 2014, came the Liefkenshoek rail link in Antwerp with ETCS L2, connecting the north and south banks of the Scheldt by tunnel for freight traffic. Infrabel budgeted about €332 million for signalling, including ETCS, in 2015. After tendering, a long-term order was placed in summer 2015 with the consortium of Siemens Mobility and Cofely-Fabricom for the installation of ETCS L2 on more than 2,200 km of track; the order includes the delivery of computer-based interlockings for the full network by 2025. The complete Belgian part of the European north–south Corridor C (port of Antwerp–Mediterranean Sea), with a length of about 430 km, has been passable with ETCS L1 since the end of 2015; according to Infrabel, this was the longest conventional railway equipped with ETCS in Europe. In total, by the end of 2015 there were 1,225 km of main lines (about a fifth of the network) usable with ETCS L1 or L2. In 2016 an order was placed for 1,362 double-deck coaches of the Belgian type M7; they are to be delivered between 2018 and 2021 and are fully equipped with ETCS, replacing older types. As of December 2021, 36% of the Infrabel network was equipped with a form of ETCS.
China (People's Republic)
October 2008: Opening of the Beijing–Tianjin Intercity Railway, equipped with ETCS Level 1.
December 2009: Opening of the Wuhan–Guangzhou High-Speed Railway, equipped with CTCS Level 3 (based on ETCS Level 2).
Croatia
In Croatia, Croatian Railways deployed Level 1 on the Vinkovci–Tovarnik line in 2012.
Denmark
December 2008: In Denmark, plans were announced for the conversion of the entire national network to Level 2. This was necessitated by the near-obsolete state of parts of the network. The total cost of the project is estimated at €3.3bn, with conversion beginning in 2009 and projected for completion in 2021. Denmark decided to drop its older ATC, which will reach the end of its life between 2015 and 2020, switching the 2,100 km network to ETCS.
The S-train network in Copenhagen will use the Siemens Trainguard system. Two suppliers will equip the rest of the country with Level 2, with an option for Level 3 (ERTMS Regional) in rural areas; implementation will take place between 2014 and 2018. Denmark will be the first to introduce GPRS support on its network, by 2017. Banedanmark has therefore been driving this development together with other ETCS users in Europe, which led to its inclusion in B3R2 in late 2015. Due to the complexity, the completion date was moved by two years to 2023, especially for testing in the S-train network, while the equipment of the first three main lines was to be completed in 2018. November 2017: Further delays of the complete roll-out, from 2023 to 2030, were announced. The following dilemma has appeared:
ETCS must be introduced before electrification.
Electrification must be introduced before new trains are obtained.
New trains must be purchased before ETCS is introduced.
The old signalling system was not built to be compatible with electrification, and many components (which often have to be newly developed and certified) would have to be replaced to make it compatible, which is expensive, time-consuming and fairly pointless if the system is soon to be replaced by ETCS anyway. Diesel trains largely have to be custom-made and are expensive (like the IC4) because of the small demand in Europe, and DSB wants electric trains for the future; but most lines are not yet electrified. The plan was to fit the existing older diesel trains such as the IC3 with ETCS, but that has proven difficult, partly because they are poorly documented after various ad hoc spare parts were fitted in various ways, among other problems. Furthermore, the new Copenhagen–Ringsted high-speed line was planned to open in 2018 with ETCS only, creating a deadline, but it was decided to install the old signalling there and delay the ETCS roll-out by several years (the dilemma still has to be solved by fitting ETCS to the trains).
France
June 2007: The LGV Est from Vaires-sur-Marne (Seine-et-Marne) near Paris to Baudrecourt (Moselle) opens with ETCS. It is an extension of the French high-speed TGV network, connecting Paris and Strasbourg.
July 2017: The LGV BPL from Connerré (near Le Mans) to Rennes opens with ETCS L2.
July 2017: The LGV SEA from Tours to Bordeaux opens with ETCS L2.
Germany
Germany intends to use Level 1 only as Limited Supervision – neither Full Supervision nor Euroloops will be installed. The first project intended to implement ETCS was the Köln–Frankfurt high-speed rail line, under construction since 1995; due to the delays in the ETCS specification, a new variant of LZB (CIR ELKE-II) was implemented instead. The next planned, and first actual, implementation was on the Leipzig–Ludwigsfelde main line to Berlin. There, SRS 2.2.2 was tested together with a mixed PZB and LZB installation under conditions of fast and mixed traffic. The section was co-financed by the EU and DB to gain more experience with the ETCS Level 2 mode. From April 2002 the ETCS section was in daily use, and in March 2003 it was announced that it had reached the same degree of reliability as before the introduction of ETCS. From 6 December 2005 an ETCS train ran at 200 km/h as part of the normal operating plan on the line north of Leipzig to obtain long-term records. As of 2009, ETCS had been decommissioned on the line, which has since been operated with LZB and PZB; parts of the ETCS equipment apparently could not be upgraded.
In 2011, the installation of ETCS L2 (SRS 2.3.0d) was ordered for €14 million as part of the reconstruction and upgrade of the Berlin–Rostock railway line. A first section of 35 km between Lalendorf and Kavelstorf was finished at the end of 2013. The newly built Ebensfeld–Erfurt segment of the Nuremberg–Erfurt high-speed railway, as well as the Erfurt–Leipzig/Halle high-speed railway and the upgraded Erfurt–Eisenach segment of the Halle–Bebra railway, are equipped with ETCS L2. The north-eastern part (Erfurt–Leipzig/Halle) has been in commercial use since December 2015, exclusively with ETCS L2 SRS 2.3.0d. The southern part (Ebensfeld–Erfurt) started test running and driver training at the end of August 2017 and regular operation with ETCS L2 in December 2017. Since December 2017 there have been about 20 high-speed trains per day between Munich and Berlin. ETCS on the western part (Erfurt–Eisenach) was also scheduled to enter operation in December 2017, but commissioning was delayed until August 2018. Germany started replacing some of its PZB and LZB systems in 2015. During 2014 it was planned to use dual equipment on the four main freight corridors to comply with Regulation (EC) 913/2010. Further testing showed that a full ETCS system can increase capacity by 5–10%, leading to a new concept, "Zukunft Bahn", to accelerate deployment, presented in December 2015. The overall cost reduction of about half a billion euros may be reinvested to complete the switch to ETCS, which may take about 15 years. Deutsche Bahn expected to receive further federal funding after the 2017 German federal election. In a first step, another 1,750 km of existing railway lines are planned to be equipped with ETCS by 2023, focusing on the Rhine–Alpine corridor, the Paris–Southwest Germany corridor and border-crossing lines. With Germany pressing for Baseline 3, neighbouring countries like Austria intend to update their vehicle fleets, especially by modernizing the GSM-R radio on the trains. One of the last additions to B3R2 was the use of EDGE in GSM-R, which is already widely deployed in the German rail network (including better frequency filters for the GSM-R radio equipment). In January 2018 the project "Digitale Schiene" (digital rail) was unveiled, intended to produce a transition plan by mid-2018. Deutsche Bahn intends to equip 80% of the rail network with GSM-R by 2030, removing the lineside signals in the process; this would allow about 20% more trains to be operated in the country. In the process, 160,000 signals and 400,000 km of interlocking cable would become dispensable. The Digitale Schiene project came about shortly after the Nuremberg–Erfurt high-speed railway went into operation in December 2017 as the first high-speed line without any lineside signals. After some teething problems with radio reception, it settled within the expected range of usability. Priority is given to the 1,450 km Rhine corridor, which is to be equipped with ETCS Level 2. Bringing ETCS to the corridor was agreed at EU level in 2016 as part of the TEN core network, with expectations set for 2023. The Digitale Schiene project of 2018 set the completion date for ETCS Level 2 operation at 2022, while Switzerland intends to switch to ETCS Level 2 no later than 2025. Switzerland expects a capacity increase of 30%, which will probably be similar on the congested sections along the Rhine.
Greece
The new high-speed line from Athens to Thessaloniki will be the first ETCS Level 1 installation in Greece.
The system is expected to be ready by the end of 2021.
Hungary
In Hungary, the Zalacséb–Hodoš line was equipped with Level 1 as a pilot project in 2006. Level 1 on the Budapest–Hegyeshalom line was launched in 2008 and extended to Rajka (GYSEV) in 2015. The Békéscsaba–Lőkösháza line was equipped with Level 1 as an extension of the Level 2 network until further refurbishment takes place. Level 2 is under construction on the Kelenföld–Székesfehérvár line as part of a full reconstruction, originally planned to be ready before 2015, but due to problems with the installation of GSM-R all Level 2 projects are delayed. The Level 2 system is being built in several phases; currently the Boba–Hodoš, Székesfehérvár station, Székesfehérvár–Ferencváros, Ferencváros–Monor, Monor–Szajol, Szajol–Gyoma and Gyoma–Békéscsaba sections are under construction. GYSEV is currently installing Level 2 on the Sopron–Szombathely–Szentgotthárd line.
India
The National Capital Region Transport Corporation has decided to install the European Train Control System (ETCS) at its Sarai Kale Khan hub on India's first rapid rail corridor, the Delhi–Meerut RRTS route.
Indonesia
LRT Palembang is equipped with ETCS Level 1 as its train protection system, with PT LEN Industri (Persero) providing the trackside fixed-block signalling. The line is slated to open in mid-2018.
Italy
December 2005: The Rome–Naples high-speed railway opens with ETCS Level 2.
February 2006: ETCS Level 2 is extended to the Turin–Milan high-speed line on the section between Turin and Novara.
December 2008: Opening of the Milan–Bologna line.
Autumn/Winter 2009: Opening of the high-speed lines Novara–Milan and Bologna–Florence, completing the whole Turin–Naples high-speed route.
December 2016: Opening of the high-speed line Treviglio–Brescia, part of the Milan–Verona line.
December 2016: Italy has 704 km of high-speed lines using Level 2. These lines are not overlaid with national signalling systems and have no lineside light signals. They connect Turin to Naples in five and a half hours and Milan to Rome in 2 hours 50 minutes.
Israel
In Israel, ETCS Level 2 will begin replacing PZB in 2020. Three separate tenders were issued in 2016 for this purpose (one contract each for trackside infrastructure, rolling-stock integration, and the erection of a GSM-R network). Initial test runs of the system began on 31 March 2020. Concurrent with the implementation of ERTMS are railway electrification works and an upgrade of the signalling system in the northern portion of Israel Railways' network from relay-based to electronic interlockings (the southern portion of the network already uses electronic signalling).
Libya
In Libya, Ansaldo STS was awarded a contract in July 2009 to install Level 2. This has stalled because of the civil war.
Luxembourg
Procurement for ETCS started in 1999 and the tender was won by Alcatel SEL in July 2002. By 1 March 2005 a small network had been established that ran under ETCS Level 1. The trackside installations were completed in 2014 after spending about €33 million. Equipping the rolling stock took somewhat longer: in early 2016 it became known that the new Class 2200 could not run on Belgian lines, in February 2017 the changeover of Class 3000 had not even started, and Class 4000 had just one prototype installation. However, the problems were later resolved, with the complete rolling stock fitted with ETCS by December 2017.
The government had pushed for the changeover following the rail accident of Bettembourg on 14. February 2017. With the rolling stock being ready as well, the end date of the usage of the old Memor-II+-systems was set to 31. December 2019. With the decision of 29. January 2018 all trains have to use ETCS by default and it should be continued to use on tracks in Belgium and France as far as possible. Mexico ETCS at Level 1 equips the commuter line 1 of the Tren Suburbano (in service since 2018) which is about 27 km long. ETCS Level 2 will be used on the Toluca–Mexico City commuter rail that will have about 57 km. Morocco ETCS equips and will equip the high-speed lines that link Tangier to Kénitra (in service from 2018) and Kénitra to Casablanca via Rabat (under construction, planned to open in 2020). Other high-speed lines planned to link Casablanca to Agadir and Rabat to Oujda from 2030 will likely be equipped as well. Netherlands 2001: ETCS Pilot Projects. Bombardier Transportation Rail Control Solutions and Alstom Transportation each equipped a section of line and two test trains with ETCS Level 1 and Level 2. The Bombardier Transportation project was installed between Steenwijk and Heerenveen. The Alstom project was installed between Maastricht and Heerlen. The trains used were former "Motorpost" self-propelled postal vans. One of these - 3024 - is still operational with Bombardier equipment in 2018. The pilot line equipment was dismantled in 2005. June 2007: The Betuweroute, a new cargo line with ETCS Level 2 between the port of Rotterdam and the German border opens for commercial traffic. September 2009: HSL-Zuid/HSL 4 opened to commercial traffic. It is a new 125-km long high-speed line between the Netherlands and Belgium that uses ETCS Level 2 with a fallback option to ETCS Level 1 (although restricted to 160 km/h in the Netherlands). December 2011: Entry to operation of the rebuilt and 4-tracked Holendrecht - Utrecht line with dual-signalling Class B ATB-EG/vV and ETCS Level 2 December 2012: The newly constructed Hanzelijn between Lelystad and Zwolle entered service with dual-signalling Class B ATB-EG/vV and ETCS Level 2 New Zealand April 2009: ETCS will be used in Auckland. 2010: New Zealand begins rolling out ETCS together with new solid-state interlocking for electrification in Auckland. April 2014: The first true ETCS Level 1 system in the Southern Hemisphere was commissioned for KiwiRail by Siemens Rail Automation, in conjunction with the introduction of the ETCS-compliant AM class electric multiple units. Norway In August 2015 the eastern branch of the Østfold Line becomes first line with ETCS functionality in Norway. Philippines In 2022, Level 1 was installed by Alstom on the Manila LRT Line 1 in preparation for the Cavite extension of the line. Level 1 shall also be installed for the South Main Line as part of the PNR South Long Haul project, and as a minimum requirement on the Mindanao Railway. Level 2 shall also be installed on the North–South Commuter Railway with a maximum speed of . Hitachi Rail STS (formerly Ansaldo STS) is the sole bidder for the supply of such equipment. Poland In Poland, Level 1 was installed in 2011 on the CMK high-speed line between Warsaw and Katowice-Kraków, to allow speeds to be raised from to , and eventually to . The CMK line, which was built in the 1970s, was designed for a top speed of 250 km/h, but was not operated above 160 km/h due to lack of cab signalling. 
The ETCS signalling on the CMK was certified on 21 November 2013, allowing trains on the CMK to operate at . In Poland, Level 2 has been installed as part of a major upgrading of the 346 km Warsaw-Gdańsk-Gdynia line that reduced Warsaw – Gdańsk travel times from five to two hours and 39 minutes in December 2015. Level 2 has been installed on line E30 between Legnica – Węgliniec – Bielawa Dolna on the German border and is being installed on the Warsaw-Łódź line. Slovakia In Slovakia, the system has been deployed as part of the Bratislava–Košice mainline modernisation program, currently between Bratislava (east of Bratislava-Rača station) and Nové Mesto nad Váhom, with the rest of the line to follow. The current implementation is limited to 160 km/h due to limited braking distances between the control segments. Spain December 2004: Zaragoza – Huesca High Speed line in Spain opens with ETCS Level 1. December 2007:Córdoba-Málaga High speed line in Spain opens with ETCS Level 1, in addition with LZB and the spanish ATP "ASFA". Also, the line has been equipped with level 2. December 2007: Madrid-Segovia-Valladolid High speed line opens with ETCS Level 1, but has also been equipped to update to Level 2 in the future. December 2009: Madrid-Zaragoza-Barcelona High speed line fully opens with ETCS level 2. First line in the world to run ETCS level 2. December 2010: Madrid-Cuenca-Valencia and Madrid-Cuenca-Albacete High speed line opens with ETCS Level 1, but has also been equipped to upgrade to level 2 in the future. October 2011: ETCS Level 2 was commissioned on the Madrid-Barcelona high speed line, allowing the speed to be raised to with Madrid-Barcelona travel times reduced to 2 hours 30 minutes. December 2011: Orense-Santiago high speed line opens with ETCS level 1, but has also been equipped to upgrade to level 2 in the future. January 2013: Barcelona-Girona-Figueres high speed line opens with ETCS level 1. This line connects France to Spain. Sweden August 2010: In Sweden, the Bothnia Line was inaugurated using ETCS Level 2. November 2010: On West Dalarna Line in mid Sweden a demonstration run was made using ETCS Level 3 (ERTMS Regional). February 2012: Full commissioning of West Dalarna Line (Repbäcken-Malung) under ETCS Level 3 without lineside signals or track detection devices. May 2012, the Transport Administration in Sweden decided to delay the introduction of ERTMS into more Swedish railways a few years, because of the trouble on Botniabanan and Ådalsbanan railways, and unclear financing of rebuilding the rolling stock. Switzerland December 2004: ETCS Level 2 is to be installed on the Mattstetten-Rothrist new line, a high-speed line opened in 2004 between Bern and Zürich for train speeds of . This ETCS Level 2 installation was the pioneering ETCS installation in Switzerland. Technical problems with the new ETCS technology caused ETCS operation to be put off past the planned starting date. February 2006: ETCS Level 2 is finally installed on the Mattstetten–Rothrist line. ETCS Level 2 operation was fully implemented in March 2007. June 2007: The Lötschberg Base Tunnel, part of the Swiss NRLA project, opens with ETCS Level 2 and went in commercial use in December. Switzerland has announced in 2011 that it will switch from its national ZUB/Signum to ETCS Level 1 for conventional rail by enabling L1 LS packets on its transitional Euro-ZUB balises during 2017. A switch to Level 2 is planned for 2025 as a cost reduction of 30% is expected. 
Thailand State Railway of Thailand selected ETCS Level 1 for signalling for Bangkok's Suburban Commuter (SRT Red Lines) to be open in early 2021. ETCS Level 1 will also be installed in mainlines extended from Bangkok to Chumphon (Southern Line), Nakhon Sawan (Northern Line), Khon Kaen (Northeastern Line), Si Racha (Eastern Coast Line) and in shortcut line from Chachoengsao to Kaeng Khoi (Shortcut from Eastern Line to North/Northeastern Line) along with Double Tracking Phase I projects and ATP system upgrade of existing double track lines, both scheduled to be completed in 2022. Turkey In Turkey, Level 2 is installed on the Ankara–Konya high-speed line designed for . The new high-speed line has reduced Ankara-Konya travel times from hours to 75 minutes. United Kingdom October 2006: Network Rail announced that ETCS would be operational on the Cambrian line in December 2008 and would cost £59million. 2008: On the Cambrian line Network Rail will install In-Cab ETCS Level 2, specification 2.3.0d. This level does not require conventional fixed signals – existing signals and RETB boards will be removed. Additionally, the lineside speed signs will be redundant – drivers are given the appropriate maximum speed on the cab display. The main supplier was Ansaldo STS. Interfleet Technology of Derby were commissioned to carry out the design for the passenger rolling stock and subsequently managed the installation on site at LNWR, Crewe under contract to Ansaldo STS. Eldin Rail were contracted by Ansaldo STS as its infrastructure partner managing and installing all aspects of lineside infrastructure including the purpose built Control Centre. During the design phase the key project stakeholders; Network Rail, Arriva Trains Wales and Angel Trains were all consulted to ensure the design was robust due to the criticality of the project, as the first installation of its kind in the UK. Twenty-four Class 158s were fitted as well as three Class 97/3 locomotives (formerly Class 37s) to be used for piloting services. The Class 97/3 design and installation was provided by Transys Projects of Birmingham for Ansaldo STS. 2010: Begin of the national roll-out of ETCS in the United Kingdom. February 2010: The Cambrian ETCS – Pwllheli to Harlech Rehearsal commenced on 13 February 2010 and successfully finished on 18 February 2010. The driver familiarisation and practical handling stage of the Rehearsal has provided an excellent opportunity to monitor the use of GSM-R voice in operation on this route. The first train departed Pwllheli at 0853hrs in ERTMS Level 2 Operation with GSM-R voice being used as the only means of communication between the driver and the signaller. October 2010: The commercial deployment of ETCS Level 2 by passenger trains started on the Cambrian Line between Pwllheli and Harlech in Wales without lineside signals. March 2011: Full commissioning of Cambrian Line (Sutton Bridge Junction-Aberystwyth or Pwllheli) in Wales under ETCS level 2. In 2013, a Network Rail class 97/3 locomotive with Hitachi's Level 2 onboard equipment successfully completed demonstration tests. July 2015: As part of the Thameslink Programme, ETCS is used for the first time in the Core using new British Rail Class 700 rolling stock. This upgrade is in order to raise capacity in the core to up to 24tph. 2020: The Heathrow branch of the Elizabeth line started using ETCS. 
See also Communications-based train control Interoperable Communications Based Signaling References External links ERTMS website at the European Union Agency for Railways (including ETCS specs) ETCS homepage of the UIC BNET United Kingdom: Can ERTMS/ETCS become URTMS/UTCS? European Rail Traffic Management System Railway signalling block systems Train protection systems
1043769
https://en.wikipedia.org/wiki/MAC%20times
MAC times
MAC times are pieces of file system metadata which record when certain events pertaining to a computer file occurred most recently. The events are usually described as "modification" (the data in the file was modified), "access" (some part of the file was read), and "metadata change" (the file's permissions or ownership were modified), although the acronym is derived from the "mtime", "atime", and "ctime" structures maintained by Unix file systems. Windows file systems do not update ctime when a file's metadata is changed, instead using the field to record the time when a file was first created, known as "creation time" or "birth time". Some other systems also record birth times for files, but there is no standard name for this metadata; ZFS, for example, stores birth time in a field called "crtime". MAC times are commonly used in computer forensics. The name Mactime was originally coined by Dan Farmer, who wrote a tool with the same name. Modification time (mtime) A file's modification time describes when the content of the file most recently changed. Because most file systems do not compare data written to a file with what is already there, if a program overwrites part of a file with the same data as previously existed in that location, the modification time will be updated even though the contents did not technically change. Access time (atime) A file's access time identifies when the file was most recently opened for reading. Access times are usually updated even if only a small portion of a large file is examined. A running program can maintain a file as "open" for some time, so the time at which a file was opened may differ from the time data was most recently read from the file. Because some computer configurations are much faster at reading data than at writing it, updating access times after every read operation can be very expensive. Some systems mitigate this cost by storing access times at a coarser granularity than other times; by rounding access times only to the nearest hour or day, a file which is read repeatedly in a short time frame will only need its access time updated once. In Windows, this is addressed by waiting for up to an hour to flush updated access dates to the disk. Some systems also provide options to disable access time updating altogether. In Windows, starting with Vista, file access time updating is disabled by default. Change time and creation time (ctime) Unix and Windows file systems interpret 'ctime' differently: Unix systems maintain the historical interpretation of ctime as being the time when certain file metadata, not its contents, were last changed, such as the file's permissions or owner (e.g. 'This file's metadata was changed on 05/05/02 12:15pm'). Windows systems use ctime to mean 'creation time' (also called 'birth time') (e.g. 'This file was created on 05/05/02 12:15pm'). This difference in usage can lead to incorrect presentation of time metadata when a file created on a Windows system is accessed on a Unix system and vice versa. Most Unix file systems don't store the creation time, although some, such as HFS+, ZFS, and UFS2 do. NTFS stores both the creation time and the change time. The semantics of creation times is the source of some controversy. One view is that creation times should refer to the actual content of a file: e.g. for a digital photo the creation time would note when the photo was taken or first stored on a computer. A different approach is for creation times to stand for when the file system object itself was created, e.g. 
when the photo file was last restored from a backup or moved from one disk to another. Metadata issues As with all file system metadata, user expectations about MAC times can be violated by programs which are not metadata-aware. Some file-copying utilities will explicitly set MAC times of the new copy to match those of the original file, while programs that simply create a new file, read the contents of the original, and write that data into the new copy, will produce new files whose times do not match those of the original. Some programs, in an attempt to avoid losing data if a write operation is interrupted, avoid modifying existing files. Instead, the updated data is written to a new file, and the new file is moved to overwrite the original. This practice loses the original file metadata unless the program explicitly copies the metadata from the original file. Windows is not affected by this due to a workaround feature called File System Tunneling. See also Computer forensics References External links Discussion about Windows and Unix timestamps (Cygwin project mailing list) Computer file systems Computer forensics
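As an illustration of the timestamps described above, the following minimal sketch reads them with Python's standard os.stat call; the file name is only a placeholder, and st_birthtime is exposed only on platforms that record a creation ("birth") time, such as macOS and FreeBSD.

```python
import os
import time

def print_mac_times(path):
    """Print the MAC times recorded by the file system for one file."""
    st = os.stat(path)

    def fmt(t):
        return time.strftime("%Y-%m-%d %H:%M:%S", time.localtime(t))

    print("mtime (content modified):", fmt(st.st_mtime))
    print("atime (last read):       ", fmt(st.st_atime))
    # On Unix, st_ctime is the metadata-change time; on Windows it is the creation time.
    print("ctime (change/creation): ", fmt(st.st_ctime))
    # Birth time is only present on some platforms (e.g. macOS, FreeBSD).
    birth = getattr(st, "st_birthtime", None)
    if birth is not None:
        print("birthtime (created):     ", fmt(birth))

print_mac_times("example.txt")  # "example.txt" is an illustrative placeholder path
```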
16247198
https://en.wikipedia.org/wiki/Internet%20Explorer%209
Internet Explorer 9
Internet Explorer 9 or IE9 (officially Windows Internet Explorer 9) is a web browser for Windows. It was released by Microsoft on March 14, 2011, as the ninth version of Internet Explorer and the successor to Internet Explorer 8, and can replace previous versions of Internet Explorer on Windows Vista, Windows Server 2008, Windows 7 and Windows Server 2008 R2, but unlike version 8, this version does not support Windows XP or Windows Server 2003. It and older versions of Internet Explorer are no longer supported. Microsoft released Internet Explorer 9 as a major out-of-band version that was not tied to the release schedule of any particular version of Windows, unlike previous versions. It is the first version since Internet Explorer 2 not to be bundled with a Windows operating system, although some OEMs have installed it with Windows 7 on their PCs, as well as on new Windows 7 laptops. Internet Explorer 9 requires Windows Vista SP2, Windows Server 2008 SP2 or Windows 7 at the minimum. It is the last version of Internet Explorer to support Windows Vista SP2, Windows Server 2008 SP2, Windows 7 below SP1 and Windows Server 2008 R2 below SP1; the following version, Internet Explorer 10, only supports Windows 7 SP1 or later and Windows Server 2008 R2 SP1 or later. Both IA-32 and x64 builds are available. Internet Explorer 9 supports ECMAScript 5 (ES5), several CSS3 properties and embedded ICC v2 or v4 color profiles via the Windows Color System, and has improved JavaScript performance. It is the last of the five major web browsers to implement support for Scalable Vector Graphics (SVG). It also features hardware-accelerated graphics rendering using Direct2D, hardware-accelerated text rendering using DirectWrite, hardware-accelerated video rendering using Media Foundation, imaging support provided by Windows Imaging Component, and high-fidelity printing powered by the XML Paper Specification (XPS) print pipeline. Internet Explorer 9 also supports the HTML5 video and audio tags and the Web Open Font Format. Support for Internet Explorer 9 on most Windows versions ended on January 12, 2016, when Microsoft began requiring customers to use the latest version of Internet Explorer available for each Windows version. For versions of Windows where Internet Explorer 9 was the final version of Internet Explorer available, support ended alongside the end of support for that version of Windows. On January 14, 2020, Microsoft released the final IE9 update for Windows Server 2008, marking the end of IE9 support on all platforms. Release history Development Development of Internet Explorer 9 began shortly after Internet Explorer 8 was released. Microsoft began taking feature suggestions through Microsoft Connect soon after Internet Explorer 8 was released. The Internet Explorer team focused on improving support and performance for HTML5, CSS3, SVG, XHTML, JavaScript, hardware acceleration, and the user interface featuring agility and "a clean new design". Microsoft first announced Internet Explorer 9 at PDC 2009 and spoke mainly about how it takes advantage of hardware acceleration in DirectX to improve the performance of web applications and improve the quality of web typography. Later, Microsoft announced that they had joined the W3C's SVG Working Group, which sparked speculation that Internet Explorer 9 would support the SVG W3C recommendation. This was proven to be true at MIX 10, where they demonstrated support for basic SVG markup and improved support for HTML5. 
They also announced that they would increase the support greatly by the time the first Internet Explorer 9 Beta was released. The Internet Explorer team also introduced the new JavaScript engine for 32-bit Internet Explorer 9, codenamed Chakra, which uses just-in-time compilation to execute JavaScript as native code. In mid-September 2011, the Acid3 test was revised to remove a few "antiquated and unusual" tests, and as a result IE9 now passes the test with a score of 100/100. At MIX 10, the first Internet Explorer 9 Platform Preview was released, which featured support for CSS3 and SVG, a new JavaScript engine called Chakra, and a score of 55/100 on the Acid3 test, up from 20/100 for Internet Explorer 8. On May 5, 2010, the second Internet Explorer 9 Platform Preview was released, which featured a score of 68/100 on the Acid3 test and faster performance on the WebKit SunSpider JavaScript benchmark than the first Internet Explorer 9 Platform Preview. On June 23, 2010, the third Internet Explorer 9 Platform Preview was released, which featured a score of 83/100 on the Acid3 test and a faster JavaScript engine than the second Internet Explorer 9 Platform Preview. The third Internet Explorer 9 Platform Preview also includes support for HTML5 audio, video, and canvas tags, and WOFF. On August 4, 2010, the fourth Internet Explorer 9 Platform Preview was released, which features a score of 95/100 on the Acid3 test and a faster JavaScript engine than the third Internet Explorer 9 Platform Preview. On September 15, 2010, the Internet Explorer 9 Public Beta was released alongside Platform Preview 5, featuring a new user interface. In contrast to the previews, the Beta replaces any previously installed version of Internet Explorer. The sixth Internet Explorer 9 Platform Preview was released on October 28, 2010, and includes support for CSS3 2D transforms and HTML5 semantic elements. The seventh Internet Explorer 9 Platform Preview was released on November 17, 2010, and features better JavaScript performance. These previews were not full builds of Internet Explorer 9, as they were for testing the latest version of the MSHTML (Trident) browser engine. They were for web developers to send feedback on the improvements made, functioned in parallel with any other installed browsers, and were previews of the renderer technology only, containing minimalistic user interfaces and lacking traditional interface elements such as an address bar and navigation buttons. Microsoft updated these previews approximately every eight weeks. On November 23, 2010, two updates for the Internet Explorer 9 Public Beta were released. KB2448827 brings improvements to reliability and fixes stability issues from the previous beta release. Microsoft did not disclose many details of the resolved issues. Moreover, KB2452648 resolves the in-built feedback issue with Internet Explorer 9 and the latest version of Windows Live Sign-in Assistant. These updates can be fetched from Windows Update or the Microsoft Download Center website. On the same day, Internet Explorer build 9.0.8027.6000 based on Internet Explorer 9 Platform Preview 7 was leaked. On February 10, 2011, the Internet Explorer 9 Release Candidate and Platform Preview 8 were released. The Release Candidate version featured improved performance, a Tracking Protection feature, a refined UI, support for more web standards, and other improvements. 
The final version of Internet Explorer 9 was publicly released during the South by Southwest (SXSW) Interactive conference in Austin, Texas, on March 14, 2011. Changes from previous versions User Interface Internet Explorer 9 includes significant alterations to its user interface when compared with previous versions. These include: Pinned Sites: Integrates with the Windows 7 taskbar to make the web site experience more like an application, where users may "pin" a site and then return to it later like a shortcut. In the release candidate, users can pin a site and add more homepages to that site (e.g. pin Facebook and add Twitter as another homepage to that pinned site, so it would become a social program). Security-enabled Download Manager: Manages file transfers, can pause and resume downloads, and informs the user if a file may be malicious. Enhanced Tabs and Tab Page: the new tab page can show most visited sites, and tabs are shown next to the address bar (there is an option to have a separate row, like in Internet Explorer 8) with the ability to close an inactive tab. Tabs can be "torn off", which means they can be dragged up and down to be moved from one IE window to another. This also ties in with the Aero Snap feature. Add-on Performance Advisor: Shows which third-party add-ons may be slowing down browser performance and then allows the option to disable or remove them. Compact user interface, which includes the removal of the separate search box found in Internet Explorer 7 and 8. Also removed is the tab menu list found in Internet Explorer 8. Scripting JavaScript engine Internet Explorer 9 (32-bit) features a faster JavaScript engine than Internet Explorer 8's, internally known as Chakra. Chakra has a separate background thread for compiling JavaScript. Windows runs that thread in parallel on a separate core when one is available. Compiling in the background enables users to keep interacting with webpages while Internet Explorer 9 generates even faster code. By running separately in the background, this process can take advantage of modern multi-core machines. In Microsoft's preliminary SunSpider benchmarks for the third 32-bit Internet Explorer 9 Platform Preview, it outperformed the Internet Explorer 8 engine by a factor of 10 and also outperformed the newest Firefox 4.0 pre-release. Microsoft provided information that its new JavaScript engine uses dead-code elimination for faster performance, an optimization which treated a small section of code in the SunSpider test as dead code. Robert Sayre, a Mozilla developer, investigated this further, showing that the dead-code elimination in Internet Explorer 9 Platform Preview 3 had bugs, and provided test cases exposing these bugs that resulted in incorrect compilation. After its final release, 32-bit Internet Explorer 9 was tested to be the leading mainstream browser in the SunSpider performance test. The engine significantly improves support for the ECMA-262 (ECMAScript Language Specification) standard, including features new to the recently finalized Fifth Edition of ECMA-262 (often abbreviated ES5). The Internet Explorer 9 browser release scored only 3 faults from 10440 tests in the Test262 ECMAScript conformance test (Ver. 0.6.2 5-Apr-2011) created by Ecma International. The 64-bit version of Internet Explorer 9, which is not the default browser even on 64-bit systems, does not have the JIT compiler and performs up to 4 times slower. 
DOM DOM improvements include: DOM Traversal and Range Full DOM L2 and L3 events getComputedStyle from DOM Style DOMContentLoaded CSS Internet Explorer 9 has improved Cascading Style Sheets (CSS) support. The Internet Explorer 9 implementation report, which was created using Internet Explorer 9 Beta, shows Internet Explorer 9 passing 97.7% of all tests on the W3C CSS 2.1 test suite. This is the highest pass rate amongst CSS 2.1 implementation reports submitted to W3C. CSS3 improvements include support for the following modules: CSS3 2D Transforms CSS3 Backgrounds and Borders CSS3 Color CSS3 Fonts CSS3 Media Queries CSS3 Namespaces CSS3 Values and Units CSS3 Selectors HTML5 HTML5 Media Internet Explorer 9 includes support for the HTML5 video and audio tags. The audio tag will include native support for the MP3 and AAC codecs, while the video tag will natively support H.264/MPEG-4 AVC. Support for other video formats, such as WebM, will require third-party plugins. HTML5 Canvas Internet Explorer 9 includes support for the HTML5 canvas element. HTML5 Inline SVG support The first Internet Explorer 9 Platform Preview has support for: Methods of embedding: inline HTML, inline XHTML, , full .svg documents Structure: , , , , Shapes: , , , , , , Text Filling, Stroking, (CSS3) Color DOML2 Core and SVGDOM Events Presentation Attributes and CSS Styling Transform definitions: translate, skewX, skewY, scale, rotate SVG elements that are supported in the Platform Preview are fully implemented. Elements that exist in the Platform Preview have corresponding SVGDOM support and can be styled with CSS/presentation attributes. The final build of Internet Explorer 9 also supports: Methods of embedding: , , , css image, .svgz Gradients and Patterns Clipping, Masking, and Compositing Cursor, Marker Remainder of Text, Transforms, Events Web typography Internet Explorer was the first browser to support web fonts through the @font-face rule, but only supported the Embedded OpenType (EOT) format, and lacked support for parts of the CSS3 fonts module. Internet Explorer 9 completed support for the CSS3 fonts module and added WOFF support. It is the first version of Internet Explorer to support TTF fonts, but will only use them if none of their embedding permission bits are set. Navigation Timings Internet Explorer 9 implements the new W3C Navigation Timing specification. Microsoft took part in creating this specification during the development of Internet Explorer 9. Tracking Protection Internet Explorer 9 includes a Tracking Protection feature which improves upon Internet Explorer 8's InPrivate Filtering. Internet Explorer 8's InPrivate Filtering blocked third-party content using an XML list which had to be imported, or by automatically building a list of third-party servers that users kept interacting with as they browsed the web; once a server showed up more than a set number of times, InPrivate Filtering would block future connections to it. Internet Explorer 9 supports two methods of tracking protection. The primary method is through the use of Tracking Protection Lists (TPL) which are now supplied by internet privacy-related organizations or companies. Tracking Protection remains on by default once enabled, unlike InPrivate Filtering which had to be enabled each time Internet Explorer 8 started. When a TPL is selected, Internet Explorer 9 blocks or allows third-party URI downloads based on rules in the TPL. Users can create their personal TPLs or select a TPL supplied by a third party. 
The other method is the use of a Do Not Track header and DOM property. Browser requests from Internet Explorer 9 include this header whenever a TPL is selected. Websites that follow this header should not deliver tracking mechanisms in their websites. At the moment, following this header is a voluntary code of conduct, but this method could in future be enforced by government legislation. These tracking protection methods were submitted to W3C for standardization. Malware protection Internet Explorer 9 uses layered protection against malware. It uses technical measures to protect its memory, like the DEP/NSX protection, Safe Exception handlers (SafeSEH) and ASLR protection used in Internet Explorer 8. In addition to those existing forms of memory protection, Internet Explorer 9 now opts in to SEHOP (Structured Exception Handler Overwrite Protection), which works by validating the integrity of the exception handling chain before dispatching exceptions. This helps ensure that structured exception handling cannot be used as an exploit vector, even when running outdated browser add-ons that have not been recompiled to take advantage of SafeSEH. In addition, Internet Explorer 9 is compiled with the new C++ compiler provided with Visual Studio 2010. This compiler includes a feature known as Enhanced GS, also known as Stack Buffer Overrun Detection, which helps prevent stack buffer overruns by detecting stack corruption and avoiding execution if such corruption is encountered. Internet Explorer 8 used SmartScreen technology, which, according to Microsoft, was successful against phishing or other malicious sites and in blocking socially engineered malware. In Internet Explorer 9, the protection against malware downloads is extended with SmartScreen Application Reputation. This warns users if they are downloading an application without a safe reputation from a site that does not have a safe reputation. In late 2010, the results of browser malware testing undertaken by NSS Labs were published. The study looked at the browser's capability to prevent users following socially engineered links of a malicious nature and downloading malicious software. It did not test the browser's ability to block malicious web pages or code. According to NSS, Internet Explorer 9 blocked 99% of malware downloads, compared to 90% for Internet Explorer 8, which does not have the SmartScreen Application Reputation feature. In early 2010, similar tests gave Internet Explorer 8 an 85% passing grade, the 5% improvement being attributed to "continued investments in improved data intelligence". By comparison, the same research showed that Chrome 6, Firefox 3.6 and Safari 5, which all rely on Google's Safe Browsing Service, scored 6%, 19% and 11%, respectively. Opera 10 scored 0%, failing to "detect any of the socially engineered malware samples". Manufacturers of other browsers criticized the test, focusing upon the lack of transparency of URLs tested and the lack of consideration of layered security additional to the browser, with Google commenting that "The report itself clearly states that it does not evaluate browser security related to vulnerabilities in plug-ins or the browsers themselves", and Opera commenting that it appeared "odd that they received no results from our data providers" and that "social malware protection is not an indicator of overall browser security". 
Internet Explorer 9's dual-pronged approach to blocking access to malicious URLs (SmartScreen Filter to block bad URLs, and Application Reputation to detect untrustworthy executables) provides the best socially engineered malware blocking of any stable browser version. Internet Explorer 9 blocked 92 percent of malware with its URL-based filtering, and 100 percent with Application-based filtering enabled. Internet Explorer 8, in second place, blocked 90 percent of malware. Tied for third place were Safari 5, Chrome 10, and Firefox 4, each blocking just 13 percent. Bringing up the rear was Opera 11, blocking just 5 percent of malware. User agent string Due to technical improvements of the browser, the Internet Explorer developer team decided to change the user agent (UA) string. The Mozilla/4.0 token was changed to Mozilla/5.0 to match the user agent strings of other recent browsers and to indicate that Internet Explorer 9 is more interoperable than previous versions. The Trident/4.0 token was likewise changed to Trident/5.0. Because long, extended UA strings cause compatibility issues, Internet Explorer 9's default UA string does not include .NET identifiers or other "pre-platform" and "post-platform" tokens that were sent by previous versions of the browser. The extended string is still available to websites via the browser's .userAgent property, and is sent when a web page is displayed in compatibility mode. Extensibility In Internet Explorer 9, the extensibility mechanisms for Browser Helper Objects (BHOs) and toolbars remain the same. Not loading BHOs or toolbars improves startup time, but limits the ability of developers to augment the user experience through these extensibility mechanisms. Removed features Separate search box Security zone information and Protected Mode status, progress bar, and other status bar elements except for the Zoom button Support for DirectX page transitions Possibility to place the menu bar above the address bar Reception Release candidate A release candidate was launched on February 10, 2011, in San Francisco. New features since the last beta version were tracking protection and use of hardware-accelerated graphics, and improvements included faster performance and more support for emerging HTML5 standards. Noting that according to Net Applications, Internet Explorer's share fell to 56% in January 2011, the BBC quoted Microsoft's claims that Internet Explorer 9 is "playing catch up, but it leapfrogs everything" and "you are seeing innovation after innovation that other folks are catching up to." In The Register, Tim Anderson said Internet Explorer 9 was Microsoft's answer to the fall in Internet Explorer's market share (from 68.5% in July 2008 to 46% in January 2011, according to StatCounter). He felt it was "fast and polished", a "remarkable improvement" over version 8, noting "superb" development tools and "real and significant" support for HTML5, though "not as comprehensive as the company's publicity implies." However, configuration options are "strewn all over the user interface", and the "distinctive and excellent" ActiveX filtering and Tracking Protection features might be "perplexing for less technical users." Having reached release candidate status eleven months after it was originally announced at the March 2010 MIX conference, he concluded that "Microsoft's development process is too slow." 
The new version is "a good modern browser" but "the competition is moving faster." Computing observed that "the feature set has piled up" since development began, with recent changes including "a completely rejigged JavaScript engine, and far better web standards support." It reported that Internet Explorer 9 RC ranked above Firefox, slightly above Safari, and below Chrome and Opera on Futuremark's Peacekeeper browser benchmark. Internet Explorer 9 scored 95% on the unofficial Acid3 standards test. Michael Muchmore's first impressions in PC Magazine were broadly positive, praising features of the InPrivate mode (which "I'm surprised other browser makers haven't included") and concluding that Internet Explorer 9's tracking protection was "more flexible and comprehensive" than Mozilla's. The review reported that Internet Explorer 9 "now wins the SunSpider JavaScript Benchmark" and had achieved "a hefty improvement" on Google's JavaScript benchmark – though it was still far behind Chrome 9. However, "in normal browsing, I was hard pressed to see a [performance] difference between Chrome and Internet Explorer." The release candidate was also "perfectly" compatible with far more sites than the beta, but there are still issues with some sites because their developers are not yet testing with the new browser. The RC scores 4 out of 5 ("very good") for now. Final release On its first day of commercial availability, Internet Explorer 9 was downloaded over 2.35 million times. Blogging his March 2011 performance tests for ZDNet, Adrian Kingsley-Hughes concluded that Chrome 10, Internet Explorer 9 (32-bit) Final Release, Opera 11.01 and Firefox 4's Release candidate were "pretty evenly matched.... Microsoft has worked hard on IE, taking it from being the slowest in the pack to one of the fastest. Bottom line, I really don’t think that JavaScript performance is an issue any more, and certainly in real-world testing it’s hard to see a difference between the browsers." On 31 October 2011, PC World ranked Internet Explorer 9 as #19 on its 100 Best Products of 2011. The other web browser listed was Maxthon 3.1, a hybrid browser based on Google Chrome and Internet Explorer. A review of the IE9 beta in PC World noted a performance improvement over IE8. Mobile version At the February 2011 Mobile World Congress, Steve Ballmer announced a major update to Windows Phone due towards the end of 2011, which would include a mobile version of Internet Explorer 9 that supports the same web standards (e.g. HTML5) and hardware-accelerated graphics as the PC version. Microsoft demonstrated hardware-accelerated performance of a fish-tank demo using a development build of mobile Internet Explorer 9 compared with slow performance on the November 2010 iOS 4.2.1 RTM of Safari on iPhone 4. See also Browser wars Comparison of web browsers List of web browsers Timeline of web browsers Usage share of web browsers References Further reading External links Beauty of the Web: Showcasing Internet Explorer – a Microsoft website that showcases Internet Explorer in general Internet Explorer Test Drive – a Microsoft website that features web browser benchmark tests Build My Pinned Site – a Microsoft website that teaches how to use site pinning capabilities of Internet Explorer 9 and later 2011 software Internet Explorer Windows web browsers
233488
https://en.wikipedia.org/wiki/Machine%20learning
Machine learning
Machine learning (ML) is the study of computer algorithms that can improve automatically through experience and by the use of data. It is seen as a part of artificial intelligence. Machine learning algorithms build a model based on sample data, known as training data, in order to make predictions or decisions without being explicitly programmed to do so. Machine learning algorithms are used in a wide variety of applications, such as in medicine, email filtering, speech recognition, and computer vision, where it is difficult or unfeasible to develop conventional algorithms to perform the needed tasks. A subset of machine learning is closely related to computational statistics, which focuses on making predictions using computers; but not all machine learning is statistical learning. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a related field of study, focusing on exploratory data analysis through unsupervised learning. Some implementations of machine learning use data and neural networks in a way that mimics the working of a biological brain. In its application across business problems, machine learning is also referred to as predictive analytics. Overview Learning algorithms work on the basis that strategies, algorithms, and inferences that worked well in the past are likely to continue working well in the future. These inferences can be obvious, such as "since the sun rose every morning for the last 10,000 days, it will probably rise tomorrow morning as well". They can be nuanced, such as "X% of families have geographically separate species with color variants, so there is a Y% chance that undiscovered black swans exist". Machine learning programs can perform tasks without being explicitly programmed to do so. Machine learning involves computers learning from data provided so that they carry out certain tasks. For simple tasks assigned to computers, it is possible to program algorithms telling the machine how to execute all steps required to solve the problem at hand; on the computer's part, no learning is needed. For more advanced tasks, it can be challenging for a human to manually create the needed algorithms. In practice, it can turn out to be more effective to help the machine develop its own algorithm, rather than having human programmers specify every needed step. The discipline of machine learning employs various approaches to teach computers to accomplish tasks where no fully satisfactory algorithm is available. In cases where vast numbers of potential answers exist, one approach is to label some of the correct answers as valid. This can then be used as training data for the computer to improve the algorithm(s) it uses to determine correct answers. For example, to train a system for the task of digital character recognition, the MNIST dataset of handwritten digits has often been used. History and relationships to other fields The term machine learning was coined in 1959 by Arthur Samuel, an American IBMer and pioneer in the field of computer gaming and artificial intelligence. The synonym self-teaching computers was also used in this time period. A representative book of machine learning research during the 1960s was Nilsson's book on Learning Machines, dealing mostly with machine learning for pattern classification. Interest related to pattern recognition continued into the 1970s, as described by Duda and Hart in 1973. 
In 1981 a report was given on using teaching strategies so that a neural network learns to recognize 40 characters (26 letters, 10 digits, and 4 special symbols) from a computer terminal. Tom M. Mitchell provided a widely quoted, more formal definition of the algorithms studied in the machine learning field: "A computer program is said to learn from experience E with respect to some class of tasks T and performance measure P if its performance at tasks in T, as measured by P, improves with experience E." This definition of the tasks with which machine learning is concerned offers a fundamentally operational definition rather than defining the field in cognitive terms. This follows Alan Turing's proposal in his paper "Computing Machinery and Intelligence", in which the question "Can machines think?" is replaced with the question "Can machines do what we (as thinking entities) can do?". Modern-day machine learning has two objectives: one is to classify data based on models which have been developed; the other is to make predictions for future outcomes based on these models. A hypothetical algorithm specific to classifying data may use computer vision of moles coupled with supervised learning in order to train it to classify cancerous moles. A machine learning algorithm for stock trading, by contrast, may inform the trader of potential future movements. Artificial intelligence As a scientific endeavor, machine learning grew out of the quest for artificial intelligence. In the early days of AI as an academic discipline, some researchers were interested in having machines learn from data. They attempted to approach the problem with various symbolic methods, as well as what was then termed "neural networks"; these were mostly perceptrons and other models that were later found to be reinventions of the generalized linear models of statistics. Probabilistic reasoning was also employed, especially in automated medical diagnosis. However, an increasing emphasis on the logical, knowledge-based approach caused a rift between AI and machine learning. Probabilistic systems were plagued by theoretical and practical problems of data acquisition and representation. By 1980, expert systems had come to dominate AI, and statistics was out of favor. Work on symbolic/knowledge-based learning did continue within AI, leading to inductive logic programming, but the more statistical line of research was now outside the field of AI proper, in pattern recognition and information retrieval. Neural networks research had been abandoned by AI and computer science around the same time. This line, too, was continued outside the AI/CS field, as "connectionism", by researchers from other disciplines including Hopfield, Rumelhart and Hinton. Their main success came in the mid-1980s with the reinvention of backpropagation. Machine learning (ML), reorganized as a separate field, started to flourish in the 1990s. The field changed its goal from achieving artificial intelligence to tackling solvable problems of a practical nature. It shifted focus away from the symbolic approaches it had inherited from AI, and toward methods and models borrowed from statistics and probability theory. The difference between ML and AI is frequently misunderstood. ML learns and predicts based on passive observations, whereas AI implies an agent interacting with the environment to learn and take actions that maximize its chance of successfully achieving its goals. As of 2020, many sources continue to assert that ML remains a subfield of AI. 
Others hold the view that not all ML is part of AI, and that only an 'intelligent subset' of ML should be considered AI. Data mining Machine learning and data mining often employ the same methods and overlap significantly, but while machine learning focuses on prediction, based on known properties learned from the training data, data mining focuses on the discovery of (previously) unknown properties in the data (this is the analysis step of knowledge discovery in databases). Data mining uses many machine learning methods, but with different goals; on the other hand, machine learning also employs data mining methods as "unsupervised learning" or as a preprocessing step to improve learner accuracy. Much of the confusion between these two research communities (which do often have separate conferences and separate journals, ECML PKDD being a major exception) comes from the basic assumptions they work with: in machine learning, performance is usually evaluated with respect to the ability to reproduce known knowledge, while in knowledge discovery and data mining (KDD) the key task is the discovery of previously unknown knowledge. Evaluated with respect to known knowledge, an uninformed (unsupervised) method will easily be outperformed by other supervised methods, while in a typical KDD task, supervised methods cannot be used due to the unavailability of training data. Optimization Machine learning also has intimate ties to optimization: many learning problems are formulated as minimization of some loss function on a training set of examples. Loss functions express the discrepancy between the predictions of the model being trained and the actual problem instances (for example, in classification, one wants to assign a label to instances, and models are trained to correctly predict the pre-assigned labels of a set of examples). Generalization The difference between optimization and machine learning arises from the goal of generalization: while optimization algorithms can minimize the loss on a training set, machine learning is concerned with minimizing the loss on unseen samples. Characterizing the generalization of various learning algorithms is an active topic of current research, especially for deep learning algorithms. Statistics Machine learning and statistics are closely related fields in terms of methods, but distinct in their principal goal: statistics draws population inferences from a sample, while machine learning finds generalizable predictive patterns. According to Michael I. Jordan, the ideas of machine learning, from methodological principles to theoretical tools, have had a long pre-history in statistics. He also suggested the term data science as a placeholder for the overall field. Leo Breiman distinguished two statistical modeling paradigms: data model and algorithmic model, wherein "algorithmic model" means more or less machine learning algorithms like random forests. Some statisticians have adopted methods from machine learning, leading to a combined field that they call statistical learning. Theory A core objective of a learner is to generalize from its experience. Generalization in this context is the ability of a learning machine to perform accurately on new, unseen examples/tasks after having experienced a learning data set. 
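As an illustration of the optimization and generalization points above, the following minimal sketch (plain Python with invented one-dimensional data) fits a line by gradient descent on a squared-error loss over a training set, then evaluates the same loss on held-out points, since minimizing training loss is not the same as performing well on unseen samples:

```python
# Fit y ≈ w*x + b by gradient descent on mean squared error (illustrative data only).
train = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2), (4.0, 8.1)]
held_out = [(5.0, 9.8), (6.0, 12.3)]

def mse(w, b, data):
    """Mean squared error of the linear model w*x + b on a data set."""
    return sum((w * x + b - y) ** 2 for x, y in data) / len(data)

w, b, lr = 0.0, 0.0, 0.01
for _ in range(5000):
    # Gradients of the mean squared error with respect to w and b.
    gw = sum(2 * (w * x + b - y) * x for x, y in train) / len(train)
    gb = sum(2 * (w * x + b - y) for x, y in train) / len(train)
    w, b = w - lr * gw, b - lr * gb

print("training loss:", round(mse(w, b, train), 4))
print("held-out loss:", round(mse(w, b, held_out), 4))
```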
The training examples come from some generally unknown probability distribution (considered representative of the space of occurrences) and the learner has to build a general model about this space that enables it to produce sufficiently accurate predictions in new cases. The computational analysis of machine learning algorithms and their performance is a branch of theoretical computer science known as computational learning theory. Because training sets are finite and the future is uncertain, learning theory usually does not yield guarantees of the performance of algorithms. Instead, probabilistic bounds on the performance are quite common. The bias–variance decomposition is one way to quantify generalization error. For the best performance in the context of generalization, the complexity of the hypothesis should match the complexity of the function underlying the data. If the hypothesis is less complex than the function, then the model has underfitted the data. If the complexity of the model is increased in response, then the training error decreases. But if the hypothesis is too complex, then the model is subject to overfitting and generalization will be poorer. In addition to performance bounds, learning theorists study the time complexity and feasibility of learning. In computational learning theory, a computation is considered feasible if it can be done in polynomial time. There are two kinds of time complexity results: Positive results show that a certain class of functions can be learned in polynomial time. Negative results show that certain classes cannot be learned in polynomial time. Approaches Machine learning approaches are traditionally divided into three broad categories, depending on the nature of the "signal" or "feedback" available to the learning system: Supervised learning: The computer is presented with example inputs and their desired outputs, given by a "teacher", and the goal is to learn a general rule that maps inputs to outputs. Unsupervised learning: No labels are given to the learning algorithm, leaving it on its own to find structure in its input. Unsupervised learning can be a goal in itself (discovering hidden patterns in data) or a means towards an end (feature learning). Reinforcement learning: A computer program interacts with a dynamic environment in which it must achieve a certain goal (such as driving a vehicle or playing a game against an opponent). As it navigates its problem space, the program is provided feedback that's analogous to rewards, which it tries to maximize. Supervised learning Supervised learning algorithms build a mathematical model of a set of data that contains both the inputs and the desired outputs. The data is known as training data, and consists of a set of training examples. Each training example has one or more inputs and the desired output, also known as a supervisory signal. In the mathematical model, each training example is represented by an array or vector, sometimes called a feature vector, and the training data is represented by a matrix. Through iterative optimization of an objective function, supervised learning algorithms learn a function that can be used to predict the output associated with new inputs. An optimal function will allow the algorithm to correctly determine the output for inputs that were not a part of the training data. An algorithm that improves the accuracy of its outputs or predictions over time is said to have learned to perform that task. 
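A minimal sketch of this supervised setup, with each training example stored as a feature vector paired with a desired output; the tiny fruit data set and the nearest-neighbour classification rule are purely illustrative choices:

```python
import math

# Training data: (feature vector, label) pairs, e.g. [weight_g, width_cm] of fruit.
training_data = [
    ([150.0, 7.0], "apple"),
    ([170.0, 7.5], "apple"),
    ([120.0, 6.0], "lemon"),
    ([110.0, 5.5], "lemon"),
]

def predict(features, k=3):
    """Classify a new feature vector by majority vote of its k nearest neighbours."""
    neighbours = sorted(training_data, key=lambda ex: math.dist(ex[0], features))[:k]
    labels = [label for _, label in neighbours]
    return max(set(labels), key=labels.count)

print(predict([160.0, 7.2]))  # an input that was not part of the training data
```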
Types of supervised learning algorithms include active learning, classification and regression. Classification algorithms are used when the outputs are restricted to a limited set of values, and regression algorithms are used when the outputs may have any numerical value within a range. As an example, for a classification algorithm that filters emails, the input would be an incoming email, and the output would be the name of the folder in which to file the email. Similarity learning is an area of supervised machine learning closely related to regression and classification, but the goal is to learn from examples using a similarity function that measures how similar or related two objects are. It has applications in ranking, recommendation systems, visual identity tracking, face verification, and speaker verification. Unsupervised learning Unsupervised learning algorithms take a set of data that contains only inputs, and find structure in the data, like grouping or clustering of data points. The algorithms, therefore, learn from test data that has not been labeled, classified or categorized. Instead of responding to feedback, unsupervised learning algorithms identify commonalities in the data and react based on the presence or absence of such commonalities in each new piece of data. A central application of unsupervised learning is in the field of density estimation in statistics, such as finding the probability density function, though unsupervised learning encompasses other domains involving summarizing and explaining data features. Cluster analysis is the assignment of a set of observations into subsets (called clusters) so that observations within the same cluster are similar according to one or more predesignated criteria, while observations drawn from different clusters are dissimilar. Different clustering techniques make different assumptions on the structure of the data, often defined by some similarity metric and evaluated, for example, by internal compactness, or the similarity between members of the same cluster, and separation, the difference between clusters. Other methods are based on estimated density and graph connectivity. Semi-supervised learning Semi-supervised learning falls between unsupervised learning (without any labeled training data) and supervised learning (with completely labeled training data). Some of the training examples are missing training labels, yet many machine-learning researchers have found that unlabeled data, when used in conjunction with a small amount of labeled data, can produce a considerable improvement in learning accuracy. In weakly supervised learning, the training labels are noisy, limited, or imprecise; however, these labels are often cheaper to obtain, resulting in larger effective training sets. Reinforcement learning Reinforcement learning is an area of machine learning concerned with how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward. Due to its generality, the field is studied in many other disciplines, such as game theory, control theory, operations research, information theory, simulation-based optimization, multi-agent systems, swarm intelligence, statistics and genetic algorithms. In machine learning, the environment is typically represented as a Markov decision process (MDP). Many reinforcement learning algorithms use dynamic programming techniques. 
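As a sketch of how a reward-driven agent can learn without an explicit model of its environment, the following toy tabular Q-learning loop (a standard model-free method; the four-state chain environment is invented for illustration) updates action values purely from sampled rewards:

```python
import random

# Toy chain environment: states 0..3, action 0 moves left, action 1 moves right.
# Reaching state 3 yields reward 1.0 and ends the episode.
def step(state, action):
    next_state = max(0, state - 1) if action == 0 else min(3, state + 1)
    reward = 1.0 if next_state == 3 else 0.0
    return next_state, reward, next_state == 3

Q = {(s, a): 0.0 for s in range(4) for a in (0, 1)}
alpha, gamma, epsilon = 0.1, 0.9, 0.2   # learning rate, discount factor, exploration rate

for episode in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            action = random.choice((0, 1))
        else:
            action = max((0, 1), key=lambda a: Q[(state, a)])
        next_state, reward, done = step(state, action)
        # Q-learning update: nudge the estimate toward reward + discounted best future value.
        best_next = max(Q[(next_state, a)] for a in (0, 1))
        Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
        state = next_state

print({s: max((0, 1), key=lambda a: Q[(s, a)]) for s in range(4)})  # learned greedy action per state
```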
Reinforcement learning algorithms do not assume knowledge of an exact mathematical model of the MDP, and are used when exact models are infeasible. Reinforcement learning algorithms are used in autonomous vehicles or in learning to play a game against a human opponent. Dimensionality reduction Dimensionality reduction is a process of reducing the number of random variables under consideration by obtaining a set of principal variables. In other words, it is a process of reducing the dimension of the feature set, also called "number of features". Most of the dimensionality reduction techniques can be considered as either feature elimination or extraction. One of the popular methods of dimensionality reduction is principal component analysis (PCA). PCA involves changing higher-dimensional data (e.g., 3D) to a smaller space (e.g., 2D). This results in a smaller dimension of data (2D instead of 3D), while keeping all original variables in the model without changing the data. The manifold hypothesis proposes that high-dimensional data sets lie along low-dimensional manifolds, and many dimensionality reduction techniques make this assumption, leading to the area of manifold learning and manifold regularization. Other types Other approaches have been developed which don't fit neatly into this three-fold categorisation, and sometimes more than one is used by the same machine learning system. Examples include topic modeling and meta-learning. As of 2020, deep learning has become the dominant approach for much ongoing work in the field of machine learning. Self learning Self-learning as a machine learning paradigm was introduced in 1982 along with a neural network capable of self-learning named crossbar adaptive array (CAA). It is learning with no external rewards and no external teacher advice. The CAA self-learning algorithm computes, in a crossbar fashion, both decisions about actions and emotions (feelings) about consequence situations. The system is driven by the interaction between cognition and emotion. The self-learning algorithm updates a memory matrix W = ||w(a,s)|| such that in each iteration it executes the following machine learning routine: in situation s perform action a; receive consequence situation s’; compute emotion of being in consequence situation v(s’); update crossbar memory w’(a,s) = w(a,s) + v(s’). It is a system with only one input, situation s, and only one output, action (or behavior) a. There is neither a separate reinforcement input nor an advice input from the environment. The backpropagated value (secondary reinforcement) is the emotion toward the consequence situation. The CAA exists in two environments: one is the behavioral environment where it behaves, and the other is the genetic environment, from which it initially and only once receives initial emotions about situations to be encountered in the behavioral environment. After receiving the genome (species) vector from the genetic environment, the CAA learns a goal-seeking behavior, in an environment that contains both desirable and undesirable situations. Feature learning Several learning algorithms aim at discovering better representations of the inputs provided during training. Classic examples include principal components analysis and cluster analysis. Feature learning algorithms, also called representation learning algorithms, often attempt to preserve the information in their input but also transform it in a way that makes it useful, often as a pre-processing step before performing classification or predictions. 
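A sketch of principal component analysis used as such a dimensionality-reducing, feature-learning transformation (using NumPy; the 3-D data set is synthetic), projecting centred data onto its leading principal components obtained from the singular value decomposition:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic 3-D data that mostly varies along a single direction, plus a little noise.
X = rng.normal(size=(200, 1)) @ np.array([[2.0, 1.0, 0.5]]) + 0.1 * rng.normal(size=(200, 3))

# Centre the data, then take the top-k right singular vectors as principal components.
X_centred = X - X.mean(axis=0)
_, singular_values, Vt = np.linalg.svd(X_centred, full_matrices=False)
k = 2
X_reduced = X_centred @ Vt[:k].T   # a 200 x 2 representation of the 200 x 3 data

explained = (singular_values[:k] ** 2).sum() / (singular_values ** 2).sum()
print(X_reduced.shape, f"fraction of variance retained: {explained:.3f}")
```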
Such feature learning allows reconstruction of the inputs coming from the unknown data-generating distribution, while not being necessarily faithful to configurations that are implausible under that distribution. This replaces manual feature engineering, and allows a machine to both learn the features and use them to perform a specific task. Feature learning can be either supervised or unsupervised. In supervised feature learning, features are learned using labeled input data. Examples include artificial neural networks, multilayer perceptrons, and supervised dictionary learning. In unsupervised feature learning, features are learned with unlabeled input data. Examples include dictionary learning, independent component analysis, autoencoders, matrix factorization and various forms of clustering. Manifold learning algorithms attempt to do so under the constraint that the learned representation is low-dimensional. Sparse coding algorithms attempt to do so under the constraint that the learned representation is sparse, meaning that the mathematical model has many zeros. Multilinear subspace learning algorithms aim to learn low-dimensional representations directly from tensor representations for multidimensional data, without reshaping them into higher-dimensional vectors. Deep learning algorithms discover multiple levels of representation, or a hierarchy of features, with higher-level, more abstract features defined in terms of (or generating) lower-level features. It has been argued that an intelligent machine is one that learns a representation that disentangles the underlying factors of variation that explain the observed data. Feature learning is motivated by the fact that machine learning tasks such as classification often require input that is mathematically and computationally convenient to process. However, real-world data such as images, video, and sensory data has not yielded to attempts to algorithmically define specific features. An alternative is to discover such features or representations through examination, without relying on explicit algorithms. Sparse dictionary learning Sparse dictionary learning is a feature learning method where a training example is represented as a linear combination of basis functions, with the combination coefficients assumed to be sparse. The method is strongly NP-hard and difficult to solve approximately. A popular heuristic method for sparse dictionary learning is the K-SVD algorithm. Sparse dictionary learning has been applied in several contexts. In classification, the problem is to determine the class to which a previously unseen example belongs. For a dictionary where each class has already been built, a new example is associated with the class that is best sparsely represented by the corresponding dictionary. Sparse dictionary learning has also been applied in image de-noising. The key idea is that a clean image patch can be sparsely represented by an image dictionary, but the noise cannot. Anomaly detection In data mining, anomaly detection, also known as outlier detection, is the identification of rare items, events or observations which raise suspicions by differing significantly from the majority of the data. Typically, the anomalous items represent an issue such as bank fraud, a structural defect, medical problems or errors in a text. Anomalies are referred to as outliers, novelties, noise, deviations and exceptions. 
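A minimal sketch of unsupervised outlier detection on a single numeric feature, flagging values that lie far from the bulk of the data; the sensor-style readings and the z-score threshold are illustrative choices, not part of any standard method named above:

```python
from statistics import mean, stdev

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2, 10.1, 55.0, 10.0, 9.7]

def z_score_outliers(values, threshold=2.5):
    """Return the values whose z-score magnitude exceeds the threshold."""
    mu, sigma = mean(values), stdev(values)
    return [v for v in values if abs(v - mu) / sigma > threshold]

print(z_score_outliers(readings))  # flags the 55.0 reading as an outlier
```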
In particular, in the context of abuse and network intrusion detection, the interesting objects are often not rare objects, but unexpected bursts of inactivity. This pattern does not adhere to the common statistical definition of an outlier as a rare object, and many outlier detection methods (in particular, unsupervised algorithms) will fail on such data unless it has been aggregated appropriately. Instead, a cluster analysis algorithm may be able to detect the micro-clusters formed by these patterns. Three broad categories of anomaly detection techniques exist. Unsupervised anomaly detection techniques detect anomalies in an unlabeled test data set under the assumption that the majority of the instances in the data set are normal, by looking for instances that seem to fit least to the remainder of the data set. Supervised anomaly detection techniques require a data set that has been labeled as "normal" and "abnormal" and involves training a classifier (the key difference to many other statistical classification problems is the inherently unbalanced nature of outlier detection). Semi-supervised anomaly detection techniques construct a model representing normal behavior from a given normal training data set and then test the likelihood of a test instance to be generated by the model. Robot learning Robot learning is inspired by a multitude of machine learning methods, starting from supervised learning, reinforcement learning, and finally meta-learning (e.g. MAML). Association rules Association rule learning is a rule-based machine learning method for discovering relationships between variables in large databases. It is intended to identify strong rules discovered in databases using some measure of "interestingness". Rule-based machine learning is a general term for any machine learning method that identifies, learns, or evolves "rules" to store, manipulate or apply knowledge. The defining characteristic of a rule-based machine learning algorithm is the identification and utilization of a set of relational rules that collectively represent the knowledge captured by the system. This is in contrast to other machine learning algorithms that commonly identify a singular model that can be universally applied to any instance in order to make a prediction. Rule-based machine learning approaches include learning classifier systems, association rule learning, and artificial immune systems. Based on the concept of strong rules, Rakesh Agrawal, Tomasz Imieliński and Arun Swami introduced association rules for discovering regularities between products in large-scale transaction data recorded by point-of-sale (POS) systems in supermarkets. For example, the rule found in the sales data of a supermarket would indicate that if a customer buys onions and potatoes together, they are likely to also buy hamburger meat. Such information can be used as the basis for decisions about marketing activities such as promotional pricing or product placements. In addition to market basket analysis, association rules are employed today in application areas including Web usage mining, intrusion detection, continuous production, and bioinformatics. In contrast with sequence mining, association rule learning typically does not consider the order of items either within a transaction or across transactions. 
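A small sketch of mining one association rule from point-of-sale style transactions, computing the support and confidence "interestingness" measures for the onions-and-potatoes example mentioned above (the transaction data is invented):

```python
transactions = [
    {"onions", "potatoes", "burger"},
    {"onions", "potatoes", "burger", "beer"},
    {"potatoes", "beer"},
    {"onions", "potatoes", "burger"},
    {"milk", "bread"},
]

def support(itemset):
    """Fraction of transactions containing every item in the itemset."""
    return sum(itemset <= t for t in transactions) / len(transactions)

def confidence(antecedent, consequent):
    """Estimated probability of the consequent given the antecedent."""
    return support(antecedent | consequent) / support(antecedent)

rule_lhs, rule_rhs = {"onions", "potatoes"}, {"burger"}
print("support:", support(rule_lhs | rule_rhs))       # baskets containing all three items
print("confidence:", confidence(rule_lhs, rule_rhs))  # how often onions+potatoes imply burger
```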
Learning classifier systems (LCS) are a family of rule-based machine learning algorithms that combine a discovery component, typically a genetic algorithm, with a learning component, performing either supervised learning, reinforcement learning, or unsupervised learning. They seek to identify a set of context-dependent rules that collectively store and apply knowledge in a piecewise manner in order to make predictions. Inductive logic programming (ILP) is an approach to rule-learning using logic programming as a uniform representation for input examples, background knowledge, and hypotheses. Given an encoding of the known background knowledge and a set of examples represented as a logical database of facts, an ILP system will derive a hypothesized logic program that entails all positive and no negative examples. Inductive programming is a related field that considers any kind of programming language for representing hypotheses (and not only logic programming), such as functional programs. Inductive logic programming is particularly useful in bioinformatics and natural language processing. Gordon Plotkin and Ehud Shapiro laid the initial theoretical foundation for inductive machine learning in a logical setting. Shapiro built their first implementation (Model Inference System) in 1981: a Prolog program that inductively inferred logic programs from positive and negative examples. The term inductive here refers to philosophical induction, suggesting a theory to explain observed facts, rather than mathematical induction, proving a property for all members of a well-ordered set. Models Performing machine learning involves creating a model, which is trained on some training data and then can process additional data to make predictions. Various types of models have been used and researched for machine learning systems. Artificial neural networks Artificial neural networks (ANNs), or connectionist systems, are computing systems vaguely inspired by the biological neural networks that constitute animal brains. Such systems "learn" to perform tasks by considering examples, generally without being programmed with any task-specific rules. An ANN is a model based on a collection of connected units or nodes called "artificial neurons", which loosely model the neurons in a biological brain. Each connection, like the synapses in a biological brain, can transmit information, a "signal", from one artificial neuron to another. An artificial neuron that receives a signal can process it and then signal additional artificial neurons connected to it. In common ANN implementations, the signal at a connection between artificial neurons is a real number, and the output of each artificial neuron is computed by some non-linear function of the sum of its inputs. The connections between artificial neurons are called "edges". Artificial neurons and edges typically have a weight that adjusts as learning proceeds. The weight increases or decreases the strength of the signal at a connection. Artificial neurons may have a threshold such that the signal is only sent if the aggregate signal crosses that threshold. Typically, artificial neurons are aggregated into layers. Different layers may perform different kinds of transformations on their inputs. Signals travel from the first layer (the input layer) to the last layer (the output layer), possibly after traversing the layers multiple times. The original goal of the ANN approach was to solve problems in the same way that a human brain would. 
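A sketch of the forward pass just described: artificial neurons arranged in layers, each computing a non-linear function of the weighted sum of its inputs plus a bias (the weights here are arbitrary illustrative numbers, not trained values):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights, biases):
    """Each output neuron applies a non-linearity to the weighted sum of its inputs."""
    return [sigmoid(sum(w * x for w, x in zip(ws, inputs)) + b)
            for ws, b in zip(weights, biases)]

# A 2-input, 2-hidden-neuron, 1-output network with arbitrary example weights.
hidden_w, hidden_b = [[0.5, -0.4], [0.3, 0.8]], [0.1, -0.2]
output_w, output_b = [[1.2, -0.7]], [0.05]

x = [0.9, 0.1]                      # input signals
h = layer(x, hidden_w, hidden_b)    # signals travel from the input layer...
y = layer(h, output_w, output_b)    # ...through the hidden layer to the output layer
print(y)
```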
However, over time, attention moved to performing specific tasks, leading to deviations from biology. Artificial neural networks have been used on a variety of tasks, including computer vision, speech recognition, machine translation, social network filtering, playing board and video games and medical diagnosis. Deep learning consists of multiple hidden layers in an artificial neural network. This approach tries to model the way the human brain processes light and sound into vision and hearing. Some successful applications of deep learning are computer vision and speech recognition. Decision trees Decision tree learning uses a decision tree as a predictive model to go from observations about an item (represented in the branches) to conclusions about the item's target value (represented in the leaves). It is one of the predictive modeling approaches used in statistics, data mining, and machine learning. Tree models where the target variable can take a discrete set of values are called classification trees; in these tree structures, leaves represent class labels and branches represent conjunctions of features that lead to those class labels. Decision trees where the target variable can take continuous values (typically real numbers) are called regression trees. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making. In data mining, a decision tree describes data, but the resulting classification tree can be an input for decision making. Support-vector machines Support-vector machines (SVMs), also known as support-vector networks, are a set of related supervised learning methods used for classification and regression. Given a set of training examples, each marked as belonging to one of two categories, an SVM training algorithm builds a model that predicts whether a new example falls into one category or the other. The resulting SVM model is a non-probabilistic, binary, linear classifier, although methods such as Platt scaling exist to use SVMs in a probabilistic classification setting. In addition to performing linear classification, SVMs can efficiently perform a non-linear classification using what is called the kernel trick, implicitly mapping their inputs into high-dimensional feature spaces. Regression analysis Regression analysis encompasses a large variety of statistical methods for estimating the relationship between input variables and an output (dependent) variable. Its most common form is linear regression, where a single line is drawn to best fit the given data according to a mathematical criterion such as ordinary least squares. The latter is often extended by regularization methods to mitigate overfitting and bias, as in ridge regression. When dealing with non-linear problems, go-to models include polynomial regression (for example, used for trendline fitting in Microsoft Excel), logistic regression (often used in statistical classification) or even kernel regression, which introduces non-linearity by taking advantage of the kernel trick to implicitly map input variables to a higher-dimensional space. Bayesian networks A Bayesian network, belief network, or directed acyclic graphical model is a probabilistic graphical model that represents a set of random variables and their conditional independence with a directed acyclic graph (DAG). For example, a Bayesian network could represent the probabilistic relationships between diseases and symptoms.
Given symptoms, the network can be used to compute the probabilities of the presence of various diseases. Efficient algorithms exist that perform inference and learning. Bayesian networks that model sequences of variables, like speech signals or protein sequences, are called dynamic Bayesian networks. Generalizations of Bayesian networks that can represent and solve decision problems under uncertainty are called influence diagrams. Genetic algorithms A genetic algorithm (GA) is a search algorithm and heuristic technique that mimics the process of natural selection, using methods such as mutation and crossover to generate new genotypes in the hope of finding good solutions to a given problem. In machine learning, genetic algorithms were used in the 1980s and 1990s. Conversely, machine learning techniques have been used to improve the performance of genetic and evolutionary algorithms. Training models Machine learning models typically require a large amount of reliable data in order to make accurate predictions. When training a machine learning model, machine learning engineers need to target and collect a large and representative sample of data. Data from the training set can be as varied as a corpus of text, a collection of images, sensor data, and data collected from individual users of a service. Overfitting is something to watch out for when training a machine learning model. Trained models derived from biased or non-evaluated data can result in skewed or undesired predictions. Biased models may result in detrimental outcomes, thereby furthering the negative impacts on society or on the model's objectives. Algorithmic bias is a potential result of data not being fully prepared for training. Machine learning ethics is becoming a field of study and is notably being integrated within machine learning engineering teams. Federated learning Federated learning is an adapted form of distributed artificial intelligence for training machine learning models that decentralizes the training process, allowing users' privacy to be maintained by not needing to send their data to a centralized server. This also increases efficiency by decentralizing the training process to many devices. For example, Gboard uses federated machine learning to train search query prediction models on users' mobile phones without having to send individual searches back to Google.
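The basic idea behind such federated training can be sketched as follows (a schematic of federated averaging with made-up local updates; real systems such as Gboard's add device sampling, secure aggregation and many other refinements not shown here):

```python
import random

def local_update(global_weights, local_data, lr=0.1):
    # Placeholder for on-device training: each client nudges the global
    # weights using only its own data, which never leaves the device.
    return [w - lr * random.uniform(-1, 1) for w in global_weights]

def federated_average(client_weights):
    # The central server only ever sees model updates, never raw user data.
    n = len(client_weights)
    return [sum(ws[i] for ws in client_weights) / n
            for i in range(len(client_weights[0]))]

global_weights = [0.0, 0.0, 0.0]
for _ in range(5):                      # communication rounds
    updates = [local_update(global_weights, local_data=None) for _ in range(10)]
    global_weights = federated_average(updates)
print(global_weights)
```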
Applications There are many applications for machine learning, including: Agriculture Anatomy Adaptive website Affective computing Astronomy Banking Bioinformatics Brain–machine interfaces Cheminformatics Citizen science Climate science Computer networks Computer vision Credit-card fraud detection Data quality DNA sequence classification Economics Financial market analysis General game playing Handwriting recognition Information retrieval Insurance Internet fraud detection Knowledge graph embedding Linguistics Machine learning control Machine perception Machine translation Marketing Medical diagnosis Natural language processing Natural language understanding Online advertising Optimization Recommender systems Robot locomotion Search engines Sentiment analysis Sequence mining Software engineering Speech recognition Structural health monitoring Syntactic pattern recognition Telecommunication Theorem proving Time-series forecasting User behavior analytics Behaviorism In 2006, the media-services provider Netflix held the first "Netflix Prize" competition to find a program to better predict user preferences and improve the accuracy of its existing Cinematch movie recommendation algorithm by at least 10%. A joint team made up of researchers from AT&T Labs-Research in collaboration with the teams Big Chaos and Pragmatic Theory built an ensemble model to win the Grand Prize in 2009 for $1 million. Shortly after the prize was awarded, Netflix realized that viewers' ratings were not the best indicators of their viewing patterns ("everything is a recommendation") and they changed their recommendation engine accordingly. In 2010 The Wall Street Journal wrote about the firm Rebellion Research and its use of machine learning to predict the financial crisis. In 2012, Vinod Khosla, co-founder of Sun Microsystems, predicted that 80% of medical doctors' jobs would be lost in the next two decades to automated machine learning medical diagnostic software. In 2014, it was reported that a machine learning algorithm had been applied in the field of art history to study fine art paintings and that it may have revealed previously unrecognized influences among artists. In 2019 Springer Nature published the first research book created using machine learning. In 2020, machine learning technology was used to help make diagnoses and aid researchers in developing a cure for COVID-19. Machine learning has recently been applied to predict the green (pro-environmental) behavior of humans. Machine learning technology has also recently been applied to optimise a smartphone's performance and thermal behaviour based on the user's interaction with the phone. Limitations Although machine learning has been transformative in some fields, machine-learning programs often fail to deliver expected results. Reasons for this are numerous: lack of (suitable) data, lack of access to the data, data bias, privacy problems, badly chosen tasks and algorithms, wrong tools and people, lack of resources, and evaluation problems. In 2018, a self-driving car from Uber failed to detect a pedestrian, who was killed in the resulting collision. Attempts to use machine learning in healthcare with the IBM Watson system failed to deliver even after years of effort and billions of dollars invested. Machine learning has been used as a strategy to update the evidence base of systematic reviews in the face of the increased reviewer burden caused by the growth of biomedical literature.
While it has improved with training sets, it has not yet developed sufficiently to reduce the workload burden without limiting the necessary sensitivity for the research findings themselves. Bias Machine learning approaches in particular can suffer from different data biases. A machine learning system trained specifically on current customers may not be able to predict the needs of new customer groups that are not represented in the training data. When trained on man-made data, machine learning is likely to pick up the constitutional and unconscious biases already present in society. Language models learned from data have been shown to contain human-like biases. Machine learning systems used for criminal risk assessment have been found to be biased against black people. In 2015, Google Photos would often tag black people as gorillas, and in 2018 this still was not well resolved; Google reportedly was still using the workaround of removing all gorillas from the training data, and was thus not able to recognize real gorillas at all. Similar issues with recognizing non-white people have been found in many other systems. In 2016, Microsoft tested a chatbot that learned from Twitter, and it quickly picked up racist and sexist language. Because of such challenges, the effective use of machine learning may take longer to be adopted in other domains. Concern for fairness in machine learning, that is, reducing bias in machine learning and propelling its use for human good, is increasingly expressed by artificial intelligence scientists, including Fei-Fei Li, who reminds engineers that "There's nothing artificial about AI...It's inspired by people, it's created by people, and—most importantly—it impacts people. It is a powerful tool we are only just beginning to understand, and that is a profound responsibility." Overfitting Settling on a bad, overly complex theory gerrymandered to fit all the past training data is known as overfitting. Many systems attempt to reduce overfitting by rewarding a theory in accordance with how well it fits the data, but penalizing the theory in accordance with how complex the theory is. Other limitations Learners can also disappoint by "learning the wrong lesson". A toy example is that an image classifier trained only on pictures of brown horses and black cats might conclude that all brown patches are likely to be horses. A real-world example is that, unlike humans, current image classifiers often don't primarily make judgments from the spatial relationship between components of the picture, and they learn relationships between pixels that humans are oblivious to, but that still correlate with images of certain types of real objects. Modifying these patterns on a legitimate image can result in "adversarial" images that the system misclassifies. Adversarial vulnerabilities can also arise in nonlinear systems, or from non-pattern perturbations. Some systems are so brittle that changing a single adversarial pixel predictably induces misclassification. Model assessments Classification of machine learning models can be validated by accuracy estimation techniques like the holdout method, which splits the data into a training and a test set (conventionally a 2/3 training set and a 1/3 test set designation) and evaluates the performance of the trained model on the test set.
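A minimal sketch of the holdout estimate just described might look as follows (using scikit-learn, assumed to be available; the decision-tree classifier, the iris dataset and the 2/3–1/3 split ratio are arbitrary choices for illustration):

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

X, y = load_iris(return_X_y=True)

# Hold out 1/3 of the data for testing; train on the remaining 2/3.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=1/3, random_state=0)

model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print(accuracy_score(y_test, model.predict(X_test)))
```

Replacing the single split with scikit-learn's cross_val_score(model, X, y, cv=K) yields the K-fold estimate discussed next.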
In comparison, the K-fold cross-validation method randomly partitions the data into K subsets, and then K experiments are performed, each respectively using 1 subset for evaluation and the remaining K-1 subsets for training the model. In addition to the holdout and cross-validation methods, bootstrap, which samples n instances with replacement from the dataset, can be used to assess model accuracy. In addition to overall accuracy, investigators frequently report sensitivity and specificity, meaning the true positive rate (TPR) and the true negative rate (TNR) respectively. Similarly, investigators sometimes report the false positive rate (FPR) as well as the false negative rate (FNR). However, these rates are ratios that fail to reveal their numerators and denominators. The total operating characteristic (TOC) is an effective method to express a model's diagnostic ability. TOC shows the numerators and denominators of the previously mentioned rates; thus TOC provides more information than the commonly used receiver operating characteristic (ROC) and ROC's associated area under the curve (AUC). Ethics Machine learning poses a host of ethical questions. Systems that are trained on datasets collected with biases may exhibit these biases upon use (algorithmic bias), thus digitizing cultural prejudices. For example, in 1988, the UK's Commission for Racial Equality found that St. George's Medical School had been using a computer program trained from the data of previous admissions staff, and that this program had denied nearly 60 candidates who were found to be either women or to have non-European sounding names. Using job hiring data from a firm with racist hiring policies may lead to a machine learning system duplicating the bias by scoring job applicants by similarity to previous successful applicants. Responsible collection of data and documentation of the algorithmic rules used by a system is thus a critical part of machine learning. AI can be well-equipped to make decisions in technical fields, which rely heavily on data and historical information. These decisions rely on objectivity and logical reasoning. Because human languages contain biases, machines trained on language corpora will necessarily also learn these biases. Other forms of ethical challenges, not related to personal biases, are seen in health care. There are concerns among health care professionals that these systems might not be designed in the public's interest but as income-generating machines. This is especially true in the United States, where there is a long-standing ethical dilemma between improving health care and increasing profits. For example, the algorithms could be designed to provide patients with unnecessary tests or medication in which the algorithm's proprietary owners hold stakes. There is potential for machine learning in health care to provide professionals an additional tool to diagnose, medicate, and plan recovery paths for patients, but this requires these biases to be mitigated. Hardware Since the 2010s, advances in both machine learning algorithms and computer hardware have led to more efficient methods for training deep neural networks (a particular narrow subdomain of machine learning) that contain many layers of non-linear hidden units. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced CPUs as the dominant method of training large-scale commercial cloud AI.
OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute required, with a doubling-time trendline of 3.4 months. Neuromorphic/Physical Neural Networks A physical neural network or neuromorphic computer is a type of artificial neural network in which an electrically adjustable material is used to emulate the function of a neural synapse. "Physical" neural network is used to emphasize the reliance on physical hardware used to emulate neurons, as opposed to software-based approaches. More generally, the term is applicable to other artificial neural networks in which a memristor or other electrically adjustable resistance material is used to emulate a neural synapse. Embedded Machine Learning Embedded machine learning is a sub-field of machine learning where the machine learning model is run on embedded systems with limited computing resources, such as wearable computers, edge devices and microcontrollers. Running machine learning models on embedded devices removes the need to transfer and store data on cloud servers for further processing, thereby reducing the data breaches and privacy leaks that can occur when data is transferred, and also minimizes theft of intellectual property, personal data and business secrets. Embedded machine learning can be applied through several techniques, including hardware acceleration, approximate computing and optimization of machine learning models, among others. Software Software suites containing a variety of machine learning algorithms include the following: Free and open-source software Caffe Microsoft Cognitive Toolkit Deeplearning4j DeepSpeed ELKI Infer.NET Keras LightGBM Mahout Mallet ML.NET mlpack MXNet Neural Lab OpenNN Orange pandas (software) ROOT (TMVA with ROOT) scikit-learn Shogun Spark MLlib SystemML TensorFlow Torch / PyTorch Weka / MOA XGBoost Yooreeka Proprietary software with free and open-source editions KNIME RapidMiner Proprietary software Amazon Machine Learning Angoss KnowledgeSTUDIO Azure Machine Learning Ayasdi IBM Watson Studio Google Prediction API IBM SPSS Modeler KXEN Modeler LIONsolver Mathematica MATLAB Neural Designer NeuroSolutions Oracle Data Mining Oracle AI Platform Cloud Service PolyAnalyst RCASE SAS Enterprise Miner SequenceL Splunk STATISTICA Data Miner Journals Journal of Machine Learning Research Machine Learning Nature Machine Intelligence Neural Computation Conferences AAAI Conference on Artificial Intelligence Association for Computational Linguistics (ACL) European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) International Conference on Machine Learning (ICML) International Conference on Learning Representations (ICLR) International Conference on Intelligent Robots and Systems (IROS) Conference on Knowledge Discovery and Data Mining (KDD) Conference on Neural Information Processing Systems (NeurIPS) See also References Sources Further reading Nils J. Nilsson, Introduction to Machine Learning. Trevor Hastie, Robert Tibshirani and Jerome H. Friedman (2001). The Elements of Statistical Learning, Springer. Pedro Domingos (September 2015), The Master Algorithm, Basic Books. Ian H. Witten and Eibe Frank (2011). Data Mining: Practical Machine Learning Tools and Techniques, Morgan Kaufmann, 664 pp. Ethem Alpaydin (2004). Introduction to Machine Learning, MIT Press. David J. C. MacKay.
Information Theory, Inference, and Learning Algorithms. Cambridge: Cambridge University Press, 2003. Richard O. Duda, Peter E. Hart, David G. Stork (2001). Pattern Classification (2nd edition), Wiley, New York. Christopher Bishop (1995). Neural Networks for Pattern Recognition, Oxford University Press. Stuart Russell & Peter Norvig (2009). Artificial Intelligence – A Modern Approach. Pearson. Ray Solomonoff, An Inductive Inference Machine, IRE Convention Record, Section on Information Theory, Part 2, pp. 56–62, 1957. Ray Solomonoff, An Inductive Inference Machine, a privately circulated report from the 1956 Dartmouth Summer Research Conference on AI. Kevin P. Murphy (2021). Probabilistic Machine Learning: An Introduction, MIT Press. External links International Machine Learning Society mloss is an academic database of open-source machine learning software. Cybernetics Learning
37541199
https://en.wikipedia.org/wiki/BEEBUG
BEEBUG
BEEBUG was a magazine published for users of the BBC Microcomputer between 1982 and 1994. It was the first subscription magazine for computers made by Acorn Computers. History BBC Micro User Group The group was formed in 1982 by Sheridan Williams and Lee Calcraft. Calcraft and Williams were contributors to Personal Computer World magazine (PCW) at the time. Calcraft was writing under pseudonyms in PCW, Acorn User and The Micro User. Williams was a founding contributor to PCW. When Acorn announced that they had won the contract to provide the computer to support the BBC's Computer Literacy Project, BEEBUG was formed to provide a magazine and support group. It turned out that Acorn were unable to supply the BBC Micro for many months, and customers who had ordered the computer were anxious to learn as much as possible about it before its arrival. Within 6 months membership reached 10,000 and by 1985 membership exceeded 30,000; in the final issue, the editors estimated 60,000 people had subscribed at one time or another during the magazine's lifetime. The company is still in existence and nowadays the core business involves providing computer networks in schools. Magazine The first issue of the Beebug Newsletter appeared in April 1982 and the last issue, volume 12 no 10, in April 1994. For a time the magazine was also sold through newsagents. At the start the cover was monochrome, but a colour printed cover was introduced in March 1983, when membership was 16,000. At the beginning each issue had 28 pages, but it expanded to 50 pages by 1985, when membership exceeded 30,000. The content included hints, program listings, hardware and software reviews, brain teasers and competitions. Illustrations were rudimentary. The magazine sometimes included special members' offers for items such as operating system upgrades. Cover-mounted tape cassettes containing programs, binders and an advertising supplement were also published. It was published 10 times a year in A5 format by BEEBUG Publications Ltd, based in St Albans, UK. In 1985 membership including a postal subscription in the UK cost £11.90 a year (10 issues). Reception The magazine and its younger Acorn Archimedes companion RISC User were considered by Archive in 1990 as "friendly rival[s]". The magazine was remembered in 1998 as being "an essential source of information and tips for BBC Micro and Master users". Professor Krisantha Weerasuriya of Sri Lanka's University of Colombo noted the user group and its magazine to be "very helpful" in a 1988 issue of the BMJ. Legacy A review from a 1984 issue of the magazine was cited in a United States patent in 1993. Some of the topics covered in the magazine listings included fractal trees, Lorenz attractors and modelling of 3D functions. Such basic principles have been included in the 2004 book Flash Math Creativity, with reference to the magazine's coverage of the topics. An enhanced version of one listing was included in the 1996 book An Introduction to Experimental Physics. See also Acorn User The Micro User / Acorn Computing Archive (magazine) Electron User References External links BEEBUG Magazine covers Cambridge University library reference Digitized BeeBug Magazines at 8 bit software 1982 establishments in the United Kingdom 1994 disestablishments in the United Kingdom Defunct computer magazines published in the United Kingdom Magazines established in 1982 Magazines disestablished in 1994 Ten times annually magazines
41904425
https://en.wikipedia.org/wiki/Luftrausers
Luftrausers
Luftrausers is a shoot 'em up video game developed by Netherlands-based indie developer studio Vlambeer and published by Devolver Digital for Microsoft Windows, OS X, Linux, PlayStation 3 and PlayStation Vita. It was released on March 18, 2014 and ported to Android by General Arcade on December 20, 2014. A demake of the game, titled LuftrauserZ, was developed by Paul Koller for Commodore 64, Commodore 128 and Commodore 64 Games System, and released by RGCD and Vlambeer on December 8, 2017. Gameplay Luftrausers is an airplane-based shoot 'em up. Unlike traditional shoot 'em ups, where the direction of the player craft is fixed, Luftrausers allows 360 degrees of motion, more akin to a multidirectional shooter. However, the player's main weapon can only fire from the front, forcing the player to take into account the position and angle of the airplane. Players must take into account the momentum of their plane while flying, using gravity and drift to maneuver as much as forward propulsion. There is no health bar: player health is indicated by a circle around the aircraft that shrinks as damage increases and naturally regenerates when not firing the weapon. Aircraft can be customized with dozens of combinations of engines, weapons, and hulls that affect the soundtrack as much as the gameplay. Each weapon part comes with sets of challenges that give objects to strive for, on top of the arcade nature of the game. Development Jan Willem Nijman of Vlambeer started Luftrausers while on the airplane home from the March 2012 Game Developers Conference (GDC) in San Francisco. Polygon reported that he was inspired by the "beauty of his view" and did not have on-flight television. At the time, Vlambeer was finishing development on Ridiculous Fishing, a game whose development was marked by a high-profile struggle with a similar, subsequent game known as a clone. Luftrausers is an update of Luftrauser, an earlier, free Flash game created by Vlambeer's Rami Ismail, artist Paul Veer, and composer Kozilek in the GameMaker: Studio engine and was ported to pure C++ by Michel Paulissen via his Dex converter tool. Vlambeer originally announced the game for release in Q2 2013 on PlayStation 3 and PlayStation Vita, having chosen those platforms due to their positive working relationship with Sony. In the weeks before the expected release, the Bangalore-based Rubiq Lab released a similar airplane dogfight game called SkyFar in April 2013 and Vlambeer contacted Apple and Google to intervene so as to avoid "another clone war". Rami Ismail's backpack containing "everything Vlambeer" including electronics and the game's code was stolen during E3 2013. Although their work was backed up, Ismail described the theft as "a giant pain". The game was simultaneously released on Microsoft Windows, OS X, Linux, PlayStation 3, and PlayStation Vita (via Steam and the Humble Store) on March 18, 2014, being published by Devolver Digital. The game's two-and-a-half-year development cycle was profitable within three days of its release. On December 20, 2014, the game was ported to Amazon Fire TV by General Arcade. Reception The game received "generally favorable" reviews, according to video game review score aggregator Metacritic. Giant Bomb awarded the game Best Music of 2014. At the 2014 National Academy of Video Game Trade Reviewers (NAVGTR) awards the game was nominated for Control Precision. 
References External links Luftrausers at Devolver Digital Luftrauser at (Not) Vlambeer 2014 video games Indie video games Android (operating system) games Windows games MacOS games Linux games PlayStation 3 games PlayStation Network games PlayStation Vita games Single-player video games Video games developed in the Netherlands Video games scored by Jukio Kallio Vlambeer games Devolver Digital games Multidirectional shooters Retro-style video games
17027
https://en.wikipedia.org/wiki/Kermit%20%28protocol%29
Kermit (protocol)
Kermit is a computer file transfer/management protocol and a set of communications software tools primarily used in the early years of personal computing in the 1980s. It provides a consistent approach to file transfer, terminal emulation, script programming, and character set conversion across many different computer hardware and operating system platforms. Technical The Kermit protocol supports text and binary file transfers on both full-duplex and half-duplex 8-bit and 7-bit serial connections in a system- and medium-independent fashion, and is implemented on hundreds of different computer and operating system platforms. On full-duplex connections, a Sliding Window Protocol is used with selective retransmission which provides excellent performance and error recovery characteristics. On 7-bit connections, locking shifts provide efficient transfer of 8-bit data. When properly implemented, as in the Columbia University Kermit Software collection, its authors claim performance is equal to or better than other protocols such as ZMODEM, YMODEM, and XMODEM, especially on poor connections. On connections over RS-232 Statistical Multiplexers where some control characters cannot be transmitted, Kermit can be configured to work, unlike protocols like XMODEM that require the connection to be transparent (i.e. all 256 possible values of a byte to be transferable). Kermit can be used as a means to bootstrap other software, even itself. To distribute Kermit through non 8-bit clean networks Columbia developed .boo, a binary-to-text encoding system similar to BinHex. For instance, IBM PC compatibles and Apple computers with a Compatibility Card installed can connect to otherwise incompatible systems such as a mainframe computer to receive MS-DOS Kermit in .boo format. Users can then type in a "baby Kermit" in BASIC on their personal computers that downloads Kermit and converts it into binary. Similarly, CP/M machines use many different floppy disk formats, which means that one machine often cannot read disks from another CP/M machine, and Kermit is used as part of a process to transfer applications and data between CP/M machines and other machines with different operating systems. The CP/M file-copy program PIP can usually access a computer's serial (RS-232) port, and if configured to use a very low baud rate (because it has no built-in error correction) can be used to transfer a small, simple version of Kermit from one machine to another over a null modem cable, or failing that, a very simple version of the Kermit protocol can be hand coded in binary in less than 2K using DDT, the CP/M Dynamic Debugging Tool. Once done, the simple version of Kermit can be used to download a fully functional version. That version can then be used to transfer any CP/M application or data. Newer versions of Kermit included scripting language and automation of commands. The Kermit scripting language evolved from its TOPS-20 EXEC-inspired command language and was influenced syntactically and semantically by ALGOL 60, C, BLISS-10, PL/I, SNOBOL, and LISP. The correctness of the Kermit protocol has been verified with formal methods. History In the late 1970s, users of Columbia University's mainframe computers had only 35 kilobytes of storage per person. Kermit was developed at the university so students could move files between them and floppy disks at various microcomputers around campus, such as IBM or DEC DECSYSTEM-20 mainframes and Intertec Superbrains running CP/M. 
IBM mainframes used an EBCDIC character set and CP/M and DEC machines used ASCII, so conversion between the two character sets was one of the early functions built into Kermit. The first file transfer with Kermit occurred in April 1981. The protocol was originally designed in 1981 by Frank da Cruz and Bill Catchings. Columbia University coordinated development of versions of Kermit for many different computers at the university and elsewhere, and distributed the software for free; Kermit for the new IBM Personal Computer became especially popular. In 1986 the university founded the Kermit Project, which took over development and started charging fees for commercial use; the project was financially self-sufficient. For non-commercial use, Columbia University continued to allow the software to be used and shared freely. By 1988 Kermit was available on more than 300 computers and operating systems. The protocol became a de facto data communications standard for transferring files between dissimilar computer systems, and by the early 1990s it could convert multilingual character encodings. Kermit software has been used in many countries, for tasks ranging from simple student assignments to solving compatibility problems aboard the International Space Station. It was ported to a wide variety of mainframe, minicomputer and microcomputer systems down to handhelds and electronic pocket calculators. Most versions had a user interface based on the original TOPS-20 Kermit. Later versions of some Kermit implementations also support network as well as serial connections. Implementations that are presently supported include C-Kermit (for Unix and OpenVMS) and Kermit 95 (for versions of Microsoft Windows from Windows 95 onwards and OS/2), but other versions remain available as well. As of 1 July 2011, Columbia University ceased to host the project and released it as open source. In June 2011, the Kermit Project released a beta version of C-Kermit v9.0 under an open-source Revised 3-Clause BSD License. As well as the implementations developed and/or distributed by Columbia University, the Kermit protocol was implemented in a number of third-party communications software packages, among them ProComm and ProComm Plus. The term "SuperKermit" was coined by third-party vendors to refer to higher speed Kermit implementations offering features such as full duplex operation, sliding windows, and long packets; however, that term was deprecated by the original Kermit team at Columbia University, who saw these as simply features of the core Kermit protocol. Naming and copyright Kermit was named after Kermit the Frog from The Muppets, with permission from Henson Associates. The program's icon in the Apple Macintosh version was a depiction of Kermit the Frog. A backronym was nevertheless created, perhaps to avoid trademark issues: KL10 Error-Free Reciprocal Microprocessor Interchange over TTY lines. Kermit is an open protocol: anybody can base their own program on it, but some Kermit software and source code is copyrighted by Columbia University. See also IND$FILE BLAST (protocol) References Further reading External links Computer-related introductions in 1981 BBS file transfer protocols Communication software File transfer protocols Terminal emulators Columbia University
3233
https://en.wikipedia.org/wiki/Acceptance%20testing
Acceptance testing
In engineering and its various subdisciplines, acceptance testing is a test conducted to determine if the requirements of a specification or contract are met. It may involve chemical tests, physical tests, or performance tests. In systems engineering, it may involve black-box testing performed on a system (for example: a piece of software, lots of manufactured mechanical parts, or batches of chemical products) prior to its delivery. In software testing, the ISTQB defines acceptance testing as formal testing with respect to user needs, requirements and business processes, conducted to determine whether a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether to accept the system. Acceptance testing is also known as user acceptance testing (UAT), end-user testing, operational acceptance testing (OAT), acceptance test-driven development (ATDD) or field (acceptance) testing. Acceptance criteria are the criteria that a system or component must satisfy in order to be accepted by a user, customer, or other authorized entity. A smoke test may be used as an acceptance test prior to introducing a build of software to the main testing process. Overview Testing is a set of activities conducted to facilitate discovery and/or evaluation of properties of one or more items under test. Each individual test, known as a test case, exercises a set of predefined test activities, developed to drive the execution of the test item to meet test objectives, including correct implementation, error identification, quality verification and other valued details. The test environment is usually designed to be identical, or as close as possible, to the anticipated production environment. It includes all facilities, hardware, software, firmware, procedures and/or documentation intended for or used to perform the testing of software. UAT and OAT test cases are ideally derived in collaboration with business customers, business analysts, testers, and developers. It is essential that these tests include both business logic tests and operational environment conditions. The business customers (product owners) are the primary stakeholders of these tests. As the test conditions successfully achieve their acceptance criteria, the stakeholders are reassured the development is progressing in the right direction. User acceptance test (UAT) criteria (in agile software development) are usually created by business customers and expressed in a business domain language. These are high-level tests to verify the completeness of a user story or stories 'played' during any sprint/iteration. Operational acceptance test (OAT) criteria (regardless of whether agile, iterative or sequential development is used) are defined in terms of functional and non-functional requirements, covering key quality attributes of functional stability, portability and reliability. Process The acceptance test suite may need to be performed multiple times, as all of the test cases may not be executed within a single test iteration. The acceptance test suite is run using predefined acceptance test procedures to direct the testers on which data to use, the step-by-step processes to follow and the expected result following execution. The actual results are retained for comparison with the expected results. If the actual results match the expected results for each test case, the test case is said to pass. If the quantity of non-passing test cases does not breach the project's predetermined threshold, the test suite is said to pass. If it does, the system may either be rejected or accepted on conditions previously agreed between the sponsor and the manufacturer.
The anticipated result of a successful test execution is that test cases are executed using predetermined data, actual results are recorded, actual and expected results are compared, and test results are determined. The objective is to provide confidence that the developed product meets both the functional and non-functional requirements. The purpose of conducting acceptance testing is that once completed, and provided the acceptance criteria are met, it is expected the sponsors will sign off on the product development/enhancement as satisfying the defined requirements (previously agreed between business and product provider/developer). User acceptance testing User acceptance testing (UAT) consists of a process of verifying that a solution works for the user. It is not system testing (ensuring software does not crash and meets documented requirements) but rather ensures that the solution will work for the user (i.e. tests that the user accepts the solution); software vendors often refer to this as "Beta testing". This testing should be undertaken by a subject-matter expert (SME), preferably the owner or client of the solution under test, who should provide a summary of the findings for confirmation to proceed after trial or review. In software development, UAT as one of the final stages of a project often occurs before a client or customer accepts the new system. Users of the system perform tests in line with what would occur in real-life scenarios. It is important that the materials given to the tester be similar to the materials that the end user will have. Testers should be given real-life scenarios such as the three most common or difficult tasks that the users they represent will undertake. The UAT acts as a final verification of the required business functionality and proper functioning of the system, emulating real-world conditions on behalf of the paying client or a specific large customer. If the software works as required and without issues during normal use, one can reasonably extrapolate the same level of stability in production. User tests, usually performed by clients or by end-users, do not normally focus on identifying simple cosmetic problems such as spelling errors, nor on showstopper defects, such as software crashes; testers and developers identify and fix these issues during earlier unit testing, integration testing, and system testing phases. UAT should be executed against test scenarios. Test scenarios usually differ from system or functional test cases in that they represent a "player" or "user" journey. The broad nature of the test scenario ensures that the focus is on the journey and not on technical or system-specific details, staying away from "click-by-click" test steps to allow for a variance in users' behaviour. Test scenarios can be broken down into logical "days", which are usually where the actor (player/customer/operator) or system (backoffice, front end) changes. In industry, a common UAT is a factory acceptance test (FAT). This test takes place before installation of the equipment. Most of the time testers not only check that the equipment meets the specification, but also that it is fully functional. A FAT usually includes a check of completeness, a verification against contractual requirements, a proof of functionality (either by simulation or a conventional function test) and a final inspection. The results of these tests give clients confidence in how the system will perform in production.
There may also be legal or contractual requirements for acceptance of the system. Operational acceptance testing Operational acceptance testing (OAT) is used to assess the operational readiness (pre-release) of a product, service or system as part of a quality management system. OAT is a common type of non-functional software testing, used mainly in software development and software maintenance projects. This type of testing focuses on the operational readiness of the system to be supported, and/or to become part of the production environment. Acceptance testing in extreme programming Acceptance testing is a term used in agile software development methodologies, particularly extreme programming, referring to the functional testing of a user story by the software development team during the implementation phase. The customer specifies scenarios to test when a user story has been correctly implemented. A story can have one or many acceptance tests, whatever it takes to ensure the functionality works. Acceptance tests are black-box system tests. Each acceptance test represents some expected result from the system. Customers are responsible for verifying the correctness of the acceptance tests and reviewing test scores to decide which failed tests are of highest priority. Acceptance tests are also used as regression tests prior to a production release. A user story is not considered complete until it has passed its acceptance tests. This means that new acceptance tests must be created for each iteration or the development team will report zero progress. Types of acceptance testing Typical types of acceptance testing include the following: User acceptance testing This may include factory acceptance testing (FAT), i.e. the testing done by a vendor before the product or system is moved to its destination site, after which site acceptance testing (SAT) may be performed by the users at the site. Operational acceptance testing Also known as operational readiness testing, this refers to the checking done to a system to ensure that processes and procedures are in place to allow the system to be used and maintained. This may include checks done to back-up facilities, procedures for disaster recovery, training for end users, maintenance procedures, and security procedures. Contract and regulation acceptance testing In contract acceptance testing, a system is tested against acceptance criteria as documented in a contract, before the system is accepted. In regulation acceptance testing, a system is tested to ensure it meets governmental, legal and safety standards. Factory acceptance testing Acceptance testing conducted at the site at which the product is developed and performed by employees of the supplier organization, to determine whether a component or system satisfies the requirements, normally including hardware as well as software. Alpha and beta testing Alpha testing takes place at developers' sites, and involves testing of the operational system by internal staff, before it is released to external customers. Beta testing takes place at customers' sites, and involves testing by a group of customers who use the system at their own locations and provide feedback, before the system is released to other customers. The latter is often called "field testing".
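As a concrete illustration, an automated acceptance check for a single business scenario might be scripted as below (a hypothetical order-total example using Python's built-in unittest module; the OrderService class is a stand-in for the real system under test, and in practice UAT is often driven through the user interface or with a BDD tool such as those listed next):

```python
import unittest

class OrderService:
    """Stand-in for the real system under test (hypothetical)."""
    VAT_RATE = 0.20

    def place_order(self, unit_price, quantity):
        net = unit_price * quantity
        return round(net * (1 + self.VAT_RATE), 2)

class CustomerPlacesOrderAcceptanceTest(unittest.TestCase):
    # Acceptance criterion agreed with the business: the total charged to
    # the customer is the net price plus 20% VAT, rounded to 2 decimals.
    def test_total_includes_vat(self):
        total = OrderService().place_order(unit_price=10.00, quantity=3)
        self.assertEqual(total, 36.00)

if __name__ == "__main__":
    unittest.main()
```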
List of acceptance-testing frameworks Concordion, Specification by example (SbE) framework Concordion.NET, acceptance testing in .NET Cucumber, a behavior-driven development (BDD) acceptance test framework Capybara, Acceptance test framework for Ruby web applications Behat, BDD acceptance framework for PHP Lettuce, BDD acceptance framework for Python Fabasoft app.test for automated acceptance tests Framework for Integrated Test (Fit) FitNesse, a fork of Fit Gauge (software), Test Automation Framework from Thoughtworks iMacros ItsNat Java Ajax web framework with built-in, server based, functional web testing capabilities. Maveryx Test Automation Framework for functional testing, regression testing, GUI testing, data-driven and codeless testing of Desktop and Web applications. Mocha, a popular web acceptance test framework based on Javascript and Node.js Ranorex Robot Framework Selenium Specification by example (Specs2) Watir See also Acceptance sampling Conference room pilot Development stage Dynamic testing Engineering validation test Grey box testing Test-driven development White box testing Functional testing (manufacturing) References Further reading External links Acceptance Test Engineering Guide by Microsoft patterns & practices "Using Customer Tests to Drive Development" from Methods & Tools "Acceptance TDD Explained" from Methods & Tools Software testing Hardware testing Procurement Agile software development
14780172
https://en.wikipedia.org/wiki/Service-oriented%20programming
Service-oriented programming
Service-oriented programming (SOP) is a programming paradigm that uses "services" as the unit of computer work, to design and implement integrated business applications and mission critical software programs. Services can represent steps of business processes and thus one of the main applications of this paradigm is the cost-effective delivery of standalone or composite business applications that can "integrate from the inside-out" Introduction SOP inherently promotes service-oriented architecture (SOA), however, it is not the same as SOA. While SOA focuses on communication between systems using "services", SOP provides a new technique to build agile application modules using in-memory services as the unit of work. An in-memory service in SOP can be transparently externalized as a web service operation. Due to language and platform independent Web Service standards, SOP embraces all existing programming paradigms, languages and platforms. In SOP, the design of the programs pivot around the semantics of service calls, logical routing and data flow description across well-defined service interfaces. All SOP program modules are encapsulated as services and a service can be composed of other nested services in a hierarchical manner with virtually limitless depth to this service stack hierarchy. A composite service can also contain programming constructs some of which are specific and unique to SOP. A service can be an externalized component from another system accessed either through using web service standards or any proprietary API through an in-memory plug-in mechanism. While SOP supports the basic programming constructs for sequencing, selection and iteration, it is differentiated with a slew of new programming constructs that provide built-in native ability geared towards data list manipulation, data integration, automated multithreading of service modules, declarative context management and synchronization of services. SOP design enables programmers to semantically synchronize the execution of services in order to guarantee that it is correct, or to declare a service module as a transaction boundary with automated commit/rollback behavior. Semantic design tools and runtime automation platforms can be built to support the fundamental concepts of SOP. For example, a service virtual machine (SVM) that automatically creates service objects as units of work and manages their context can be designed to run based on the SOP program metadata stored in XML and created by a design-time automation tool. In SOA terms, the SVM is both a service producer and a service consumer. Fundamental concepts SOP concepts provide a robust base for a semantic approach to programming integration and application logic. There are three significant benefits to this approach: Semantically, it can raise the level of abstraction for creating composite business applications and thus significantly increase responsiveness to change (i.e. business agility) Gives rise to the unification of integration and software component development techniques under a single concept and thus significantly reduces the complexity of integration. This unified approach enables "inside-out integration" without the need to replicate data, therefore, significantly reducing the cost and complexity of the overall solution Automate multi-threading and virtualization of applications at the granular (unit-of-work) level. 
The following are some of the key concepts of SOP: Encapsulation In SOP, in-memory software modules are strictly encapsulated through well-defined service interfaces that can be externalized on-demand as web service operations. This minimal unit of encapsulation maximizes the opportunities for reusability within other in-memory service modules as well as across existing and legacy software assets. By using service interfaces for information hiding, SOP extends the service-oriented design principles used in SOA to achieve separation of concerns across in-memory service modules. Service interface A service interface in SOP is an in-memory object that describes a well-defined software task with well-defined input and output data structures. Service interfaces can be grouped into packages. An SOP service interface can be externalized as a WSDL operation, and a single service or a package of services can be described using WSDL. Furthermore, service interfaces can be assigned to one or many service groups based on shared properties. In SOP, runtime properties stored on the service interface metadata serve as a contract with the service virtual machine (SVM). One example of the use of runtime properties is declarative service synchronization. A service interface can be declared as a fully synchronized interface, meaning that only a single instance of that service can run at any given time. Or, it can be synchronized based on the actual value of key inputs at runtime, meaning that no two service instances of that service with the same value for their key input data can run at the same time. Furthermore, synchronization can be declared across service interfaces that belong to the same service group. For example, if two services, "CreditAccount" and "DebitAccount", belong to the same synchronization service group and are synchronized on the accountName input field, then no two instances of "CreditAccount" and "DebitAccount" with the same account name can execute at the same time. Service invoker A service invoker makes service requests. It is a pluggable in-memory interface that abstracts, from the SOP runtime environment such as an SVM, the location of a service producer as well as the communication protocol used between the consumer and producer when going across computer memory boundaries. The producer can be in-process (i.e. in-memory), outside the process on the same server machine, or virtualized across a set of networked server machines. The use of a service invoker in SOP is the key to location transparency and virtualization. Another significant feature of the service invoker layer is the ability to optimize bandwidth and throughput when communicating across machines. For example, a "SOAP Invoker" is the default service invoker for remote communication across machines using the web service standards. This invoker can be dynamically swapped out if, for example, the producer and consumer wish to communicate through a packed proprietary API for better security and more efficient use of bandwidth. Service listener A service listener receives service requests. It is a pluggable in-memory interface that abstracts the communication protocol for incoming service requests made to the SOP runtime environment, such as the SVM. Through this abstract layer, the SOP runtime environment can be virtually embedded within the memory address space of any traditional programming environment or application service.
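These ideas can be loosely illustrated in code (a hypothetical Python sketch with invented class names; an actual SOP platform would drive this from declarative metadata interpreted by a service virtual machine rather than from hand-written classes):

```python
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ServiceInterface:
    """A well-defined task with named inputs and outputs (hypothetical)."""
    name: str
    inputs: tuple
    outputs: tuple

class ServiceInvoker:
    """Abstracts where and how the producer runs (in-memory, remote, ...)."""
    def invoke(self, interface: ServiceInterface, data: Dict) -> Dict:
        raise NotImplementedError

class InMemoryInvoker(ServiceInvoker):
    def __init__(self, registry: Dict[str, Callable]):
        self.registry = registry          # service name -> in-memory producer

    def invoke(self, interface, data):
        return self.registry[interface.name](data)

# A consumer only sees the interface and the invoker, never the producer.
credit = ServiceInterface("CreditAccount",
                          inputs=("accountName", "amount"),
                          outputs=("newBalance",))
invoker = InMemoryInvoker({"CreditAccount":
                           lambda d: {"newBalance": 100 + d["amount"]}})
print(invoker.invoke(credit, {"accountName": "savings", "amount": 25}))
```

Swapping InMemoryInvoker for, say, a SOAP-based invoker would leave the consumer code unchanged, which is the location transparency described above.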
Service implementation In SOP, a service module can be implemented as either a Composite or an Atomic service. It is important to note that service modules built through the SOP paradigm have an extroverted nature and can be transparently externalized through standards such as SOAP or any proprietary protocol. Semantic-based approach One of the most important characteristics of SOP is that it can support a fully semantic-based approach to programming. Furthermore, this semantic-based approach can be layered into a visual environment built on top of a fully metadata-driven layer for storing the service interface and service module definitions. Furthermore, if the SOP runtime is supported by an SVM capable of interpreting the metadata layer, the need for automatic code generation can be eliminated. The result is a tremendous productivity gain during development, ease of testing and significant agility in deployment. Service implementation: composite service A composite service implementation is the semantic definition of a service module based on SOP techniques and concepts. If you look inside of a black-boxed interface definition of a composite service, you may see other service interfaces connected to each other and connected to SOP programming constructs. A Composite service has a recursive definition, meaning that any service inside ("inner service") may be another atomic or composite service. An inner service may be a recursive reference to the same containing composite service. Programming constructs SOP supports the basic programming constructs for sequencing, selection and iteration as well as built-in, advanced behavior. Furthermore, SOP supports semantic constructs for automatic data mapping, translation, manipulation and flow across inner services of a composite service. Sequencing A service inside of the definition of a composite service (an "inner service") is implicitly sequenced through the semantic connectivity of built-in success or failure ports of other inner services with its built-in activation port. When an inner service runs successfully, all the inner services connected to its success port will run next. If an inner service fails, all the services connected to its failure port will run next. Selection Logical selection is accomplished through data-driven branching constructs and other configurable constructs. In general, configurable constructs are services built into the SOP platform with inputs and outputs that can assume the input/output shape of other connected services. For example, a configurable construct used for filtering output data of services can take a list of sales orders, purchase orders or any other data structure, and filter its data based on user-declared filter properties stored on the interface of that instance of the filter construct. In this example, the structure to be filtered becomes the input of the particular instance of the filter construct and the same structure representing the filtered data becomes the output of the configurable construct. Iteration A composite service can be declared to loop. The loop can be bound by a fixed number of iterations with an optional built-in delay between iterations, and it can dynamically terminate using a "service exit with success" or "service exit with failure" construct inside of the looping composite service. Furthermore, any service interface can automatically run in a loop or "foreach" mode, if it is supplied with two or more input components upon automatic preparation.
This behavior is supported at design time when a data list structure from one service is connected to a service that takes a single data structure (i.e. non-plural) as its input. If a runtime property of the composite service interface is declared to support "foreach" in parallel, then the runtime automation environment can automatically multi-thread the loop and run it in parallel. This is an example of how SOP programming constructs provide built-in advanced functionality. Data transformation, mapping, and translation Data mapping, translation, and transformation constructs enable automatic transfer of data across inner services. An inner service is prepared to run when it is activated and all of its input dependencies are resolved. All the prepared inner services within a composite service run in a parallel burst called a "hypercycle". This is one of the means by which automatic parallel processing is supported in SOP. The definition of a composite service contains an implicit directed graph of inner service dependencies. The runtime environment for SOP can create an execution graph based on this directed graph by automatically instantiating and running inner services in parallel whenever possible. Exception handling Exception handling in SOP is accomplished simply by connecting the failure port of inner services to another inner service, or to a programming construct. "Exit with failure" and "exit with success" constructs are examples of constructs used for exception handling. If no action is taken on the failure port of a service, then the outer (parent) service will automatically fail and the standard output messages from the failed inner service will automatically bubble up to the standard output of the parent. Transactional boundary A composite service can be declared as a transaction boundary. The runtime environment for SOP automatically creates and manages a hierarchical context for composite service objects which are used as a transaction boundary. This context is automatically committed upon the successful execution of the composite service, or rolled back upon its failure. Service compensation Special composite services, called compensation services, can be associated with any service within SOP. When a composite service that is declared as a transaction boundary fails without an exception handling routine, the SOP runtime environment automatically dispatches the compensation services associated with all the inner services which have already executed successfully. Service implementation: atomic service An atomic service is an in-memory extension of the SOP runtime environment through a service native interface (SNI); it is essentially a plug-in mechanism. For example, if SOP is automated through an SVM, a service plug-in is dynamically loaded into the SVM when any associated service is consumed. An example of a service plug-in would be a SOAP communicator plug-in that can translate, on the fly, any in-memory service input data to a Web Service SOAP request, post it to a service producer, and then translate the corresponding SOAP response to in-memory output data on the service. Another example of a service plug-in is a standard database SQL plug-in that supports data access, modification and query operations. A further example that can help establish the fundamental importance of atomic services and service plug-ins is the use of a service invoker as a service plug-in to transparently virtualize services across different instances of an SOP platform. 
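The "hypercycle" idea, running every inner service whose dependencies are resolved in one parallel burst and then repeating, amounts to executing the dependency graph in topological waves. The sketch below illustrates this with Python's standard thread pool; the service names and the dictionary-based graph are invented for the example and are not part of any SOP specification.

from concurrent.futures import ThreadPoolExecutor

# Each hypothetical inner service maps to (set of dependencies, work to perform).
tasks = {
    "LoadOrder":    (set(),                         lambda: "order loaded"),
    "LoadCustomer": (set(),                         lambda: "customer loaded"),
    "PriceOrder":   ({"LoadOrder"},                 lambda: "order priced"),
    "CheckCredit":  ({"LoadCustomer"},              lambda: "credit ok"),
    "Confirm":      ({"PriceOrder", "CheckCredit"}, lambda: "confirmed"),
}

done = set()
with ThreadPoolExecutor() as pool:
    while len(done) < len(tasks):
        # One "hypercycle": every prepared service (dependencies resolved) runs in parallel.
        prepared = [name for name, (deps, _) in tasks.items()
                    if name not in done and deps <= done]
        for name, result in pool.map(lambda n: (n, tasks[n][1]()), prepared):
            print(name, "->", result)
            done.add(name)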
This unique, component-level virtualization is termed "service grid virtualization" in order to distinguish it from traditional application- or process-level virtualization. Cross-cutting concerns SOP presents significant opportunities to support cross-cutting concerns for all applications built using the SOP technique. The following sections describe some of these opportunities: Service instrumentation The SOP runtime environment can systematically provide built-in and optimized profiling, logging and metering for all services in real time. Declarative and context-sensitive service caching Based on the declared key input values of a service instance, the outputs of a non-time-sensitive inner service can be cached by the SOP runtime environment when running in the context of a particular composite service. When a service is cached for particular key input values, the SOP runtime environment fetches the cached outputs corresponding to the keyed inputs from its service cache instead of consuming the service. The availability of this built-in mechanism to the SOP application developer can significantly reduce the load on back-end systems. Service triggers SOP provides a mechanism for associating a special kind of composite service, called a trigger service, with any other service. When that service is consumed, the SOP platform automatically creates and consumes an instance of the associated trigger service with an in-memory copy of the inputs of the triggering service. This consumption is non-intrusive to the execution of the triggering service. A service trigger can be declared to run upon the activation, failure or successful completion of the triggering service. Inter-service communication In addition to the ability to call any service, service request events and shared memory are two of the built-in SOP mechanisms provided for inter-service communication. The consumption of a service is treated as an event in SOP. SOP provides a correlation-based event mechanism that results in the pre-emption of a running composite service that has declared, through a "wait" construct, the need to wait for one or more other service consumption events to happen with specified input data values. The execution of the composite service continues when services are consumed with the specific correlation key inputs associated with the wait construct. SOP also provides a shared memory space with access control where services can access and update a well-defined data structure that is similar to the input/output structure of services. The shared memory mechanism within SOP can be programmatically accessed through service interfaces. Service overrides In SOP, customizations are managed through a feature called service overrides. Through this feature, a service implementation can be statically or dynamically overridden by one of many possible implementations at runtime. This feature is analogous to polymorphism in object-oriented programming. Each possible override implementation can be associated with one or more override configuration portfolios in order to manage the activation of groups of related overrides throughout different SOP application installations at the time of deployment. Consumer account provisioning Selected services can be deployed securely for external programmatic consumption by a presentation (GUI) layer or other applications. Once service accounts are defined, the SOP runtime environment automatically manages access through consumer account provisioning mechanisms. 
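Declarative, context-sensitive caching keyed on declared key inputs can be pictured as a small lookup layer in front of the service. The sketch below is a simplified, hypothetical illustration; the names CachingRuntime and lookup_rate are invented for the example, and a real SOP runtime would also have to honor time-sensitivity declarations and cache expiry.

from typing import Any, Callable, Dict, Tuple

class CachingRuntime:
    """Caches the outputs of a non-time-sensitive service per composite context,
    keyed on its declared key input fields."""
    def __init__(self):
        self._cache: Dict[Tuple, Dict[str, Any]] = {}

    def consume(self, context: str, service: Callable[[Dict], Dict],
                key_fields: Tuple[str, ...], inputs: Dict[str, Any]) -> Dict[str, Any]:
        key = (context, service.__name__) + tuple(inputs[f] for f in key_fields)
        if key not in self._cache:
            self._cache[key] = service(inputs)   # cache miss: actually consume the service
        return self._cache[key]                  # cache hit: the back-end call is skipped

def lookup_rate(inputs: Dict[str, Any]) -> Dict[str, Any]:
    print("hitting back-end for", inputs["currency"])
    return {"rate": 1.1}

runtime = CachingRuntime()
runtime.consume("QuoteOrder", lookup_rate, ("currency",), {"currency": "EUR", "amount": 10})
runtime.consume("QuoteOrder", lookup_rate, ("currency",), {"currency": "EUR", "amount": 99})  # served from cache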
Security The SOP runtime environment can systematically provide built-in authentication and service authorization. For the purpose of authorization, SOP development projects, consumer accounts, packages and services are treated as resources with access control. In this way, the SOP runtime environment can provide built-in authorization. Standard or proprietary authorization and communication security can be customized through service overrides, plug-in invoker and service listener modules. Virtualization and automatic multithreading Since all artifacts of SOP are well-encapsulated services and all SOP mechanisms, such as shared memory, can be provided as distributable services, large-scale virtualization can be automated by the SOP runtime environment. Also, the hierarchical service stack of a composite service, with the multiple execution graphs associated with its inner services at each level, provides significant opportunities for automated multithreading to the SOP runtime environment. History The term service-oriented programming was first published in 2002 by Alberto Sillitti, Tullio Vernazza and Giancarlo Succi in a book called "Software Reuse: Methods, Techniques, and Tools." SOP, as described above, reflects some aspects of the use of the term proposed by Sillitti, Vernazza and Succi. Today, the SOP paradigm is in the early stages of mainstream adoption. There are four market drivers fueling this adoption: Multi-core processor architecture: due to heat dissipation issues with increasing processor clock speeds beyond 4 GHz, leading processor vendors such as Intel have turned to multi-core architecture to deliver ever-increasing performance; refer to the article "The Free Lunch Is Over". This change in design forces a change in the way software modules and applications are developed: applications must be written for concurrency in order to utilize multi-core processors, and writing concurrent programs is a challenging task. SOP provides a built-in opportunity for automated multithreading. Application virtualization: SOP promotes built-in, fine-grained control over the location transparency of the service constituents of any service module. This results in automatic and granular virtualization of application components (versus an entire application process) across a cluster or grid of SOP runtime platforms. Service-oriented architecture (SOA) and demand for integrated and composite applications: initially, the adoption of SOP will follow the adoption curve of SOA with a small lag. This is because services generated through SOA can be easily assembled and consumed through SOP. The more Web services proliferate, the more it makes sense to take advantage of the semantic nature of SOP. On the other hand, since SOA is inherent in SOP, SOP provides a cost-effective way to deliver SOA to mainstream markets. Software as a service (SaaS): the capabilities of current SaaS platforms cannot address the customization and integration complexities required by large enterprises. SOP can significantly reduce the complexity of integration and customization. This will drive SOP into next-generation SaaS platforms. See also Microservices External links http://nextaxiom.com DOI.org "Service-Oriented Programming: A New Paradigm of Software Reuse" https://web.archive.org/web/20090505205415/http://blog.itaniumsolutions.org/2008/01/ http://in.sys-con.com/node/467329 Service-oriented (business computing) Programming paradigms
247921
https://en.wikipedia.org/wiki/Gray%20Areas
Gray Areas
Gray Areas was a quarterly magazine published from 1992 to 1995 by publisher Netta Gilboa. The magazine was based in Phoenix, Arizona. It won several awards including "One Of The Top Ten Magazines of 1992" by Library Journal. It discussed subcultures involving drugs (narcotics), phreaking, cyberpunk, pornography, the Grateful Dead and related issues. It only published 7 issues, but continues on as a website. Issue Contents Issue 1 - Fall 1992 - Volume 1, No. 1 (84 pages) Interview: John Perry Barlow on computer crimes Interview: Kay Parker on the Adult Film Industry Tape History: Grateful Dead Live Video Tapes Interview: The Zen Tricksters Rock Band Issue 2 - Spring 1993 - Volume 2, No. 1 (116 pages) Interview: Attorney/Musician Barry Melton Interview: Adult Film Director Candida Royalle Tape History: Little Feat Live Audio Tapes Tape History: Grateful Dead Bootleg CDs Things To Know About Urine Tests It's Later Than You Thought: Your 4th Amendment Rights Computer Privacy and the Common Man An Essay On The Fan-Artist Relationship My Pilgrimage To Jim Morrison's Grave Interview: Paul Quinn of the Soup Dragons Issue 3 - Summer 1993 - Volume 2, No. 2 (132 pages) Interview: Musician GG Allin Interview: Grateful Dead Hour host David Gans Richard Pacheco on Adult Films Tape History: Jefferson Airplane Live Video Tapes Phone Phun Phenomena: Notes On The Prank Call Underground Interview: Prank Call Expert John Trubee Interview: Urnst Kouch, Computer Virus Writer Howard Stern Is Here To Stay Worlds At War, Pt. 1: The Gray Gods: A Theory of UFO Encounters Issue 4 - Fall 1993 - Volume 2, No. 3 (148 pages) Interview: Ivan Stang of the Church of the SubGenius Interview: RIAA Piracy Director Steven D'Onofrio Interview: Phone Sex Fantasy Girl Review: Defcon I hackers convention Gray Travel: Amsterdam Confessions of an Amerikan LSD Eater Worlds At War, Pt. 2: The Ultimate Sin: UFO Conspiracies A Day With The KKK Run For Your Life: Why The Music Industry Wants You To Record Concerts Plagiarism: Thoughts on Sampling, Originality and Ownership Interview: Solar Circus Rock Band Issue 5 - Spring 1994 - Volume 3, No. 1 (148 pages) Interview: Breaking Into The WELL Interview: S/M Dominatrix Paul Melka on computer viruses Review: Pumpcon II and HoHoCon IV hacker conventions Inside Today's Hacking Mind Interview: Phone Phreak Improving The Prison Environment Teenager Joins In The Fight Against AIDS Concealing My Identity: A Silence Imposed By Society All About Smart Drugs Lollapalooza 1993 Review Entertainment Industry Lawyers Deadheads and the Constitution Tape History: Jefferson Airplane Bootlegs Tape History: A Guide To Vintage Live Soul Tapes The New Music Seminar July 1993 Issue 6 - Fall 1994 - Volume 3, No. 2 (148 pages) Interview: Hacker/Phrack Publisher Erik Bloodaxe Interview: Adult Film Star Taylor Wane Review: Computers, Freedom and Privacy conference Stormbringer of Phalcon/Skism on computer virus writers A Cyberpunk Manifesto For The 90s Interview: Cable TV Thief True Cop, Blue Cop, Gray Cop Is Jury tampering A Crime? Interview: Folk Singer Melanie Jimi Hendrix: The Bootleg CDs Tape History: Big Brother and the Holding Company The Genitorturers Mean You Some Harm Issue 7 - Spring 1995 - Volume 4, No. 
1 (148 pages) Interview: Mike Gordon of Phish Interview: Internet Liberation Front Review: Defcon II, HoHoCon and HOPE hacker conventions Tape History: Jethro Tull Live Video Tapes Scanners: Tuning In Illegally On Phone Calls Confessions of an AIDS Activist Adventures In The Porn Store New Conflicts Over The Oldest Profession Two Thousand In Two Years: A Housewife Hooker's Story Psychoanalysis and Feminism Prozac: The Controversial Future of Psychopharmacology The Gray Area of Drug addiction The Art of Deception: Polygraph Lie Detection Voices of Adoptees: The Silent Society The Gray Art of John Wayne Gacy Lollapalooza 1994 Review References External links Official Website Cultural magazines published in the United States Quarterly magazines published in the United States Defunct literary magazines published in the United States Hacker magazines Magazines established in 1992 Magazines disestablished in 1995 Magazines published in Arizona Works about computer hacking Mass media in Phoenix, Arizona
32174372
https://en.wikipedia.org/wiki/Global%20kOS
Global kOS
Global kOS ('kOS' pronounced as "chaos") were a grey hat (leaning black hat) computer hacker group active from 1996 through 2000, considered a highly influential group who were involved in multiple high-profile security breaches and defacements as well as releasing notable network security and intrusion tools. Global kOS were heavily involved with the media and were interviewed and profiled by journalist Jon Newton in his blog titled "On The Road in Cyberspace" (OTRiCS). The group were reported multiple times to the FBI by Carolyn Meinel, who attempted to bring the group to justice while members of Global kOS openly mocked her. The FBI had a San Antonio-based informant within the group and individually raided several members after contact with the informant. Overview Global kOS were a loose collective of members of other hacking groups active in 1996 who released network and security tools for Windows and Linux. The group did not release exploit code as other groups did; each member maintained their own webpage and often acted individually or with their original group. Global kOS continually reiterated that their goal was entirely about rampaging across the internet and creating as much chaos as they could by releasing automated denial of service tools. The group was humorous and irreverent in interviews and press releases, and they and their affiliates openly taunted webmasters through website defacements. The group's members hacked and defaced a large number of websites in the nightly hacking contests over IRC on EFNet. Releases Up Yours! denial of service tool Digital destruction suite collection of hacker tools Panther modern denial of service tool kOS Crack password cracking utilities BattlePong IRC flooding utility Up Yours! Up Yours!, the flagship release for Global kOS, was an early point-and-click denial of service tool which helped to spawn the term 'script kiddie'. Up Yours! first appeared in 1996 and updated versions were released three times. Up Yours! was the denial of service tool used in the well-documented Nizkor attack. It is believed that the hacker 'johnny xchaotic', aka 'Angry Johnny', used Up Yours! to take down the websites of 40 politicians, MTV, Rush Limbaugh, the Ku Klux Klan and multiple others. The author claims he came up with the name Up Yours! because he wanted to hear Dan Rather say it on national television. Members Membership between Global kOS and other hacking groups of the time is difficult to determine as most members were involved in multiple groups; crossover appears between Global kOS, Technophoria, SiN and GlobalHell (gH). AcidAngel - Author of the denial of service tools 'Up Yours!' and 'Panther modern'. Materva Glitch The Assassin NaCHoMaN - cracker The Raven - author of 'kOS Crack' Shadow Hunter Silicon Toad Spidey That Guy Zaven Digital Kid Affiliated members Modify BroncBuster Banshee - author of bitchslap The Messiah Deprave Cyan Ide Kiss dvdman The boss Revelation References External links http://gkos.org http://web.textfiles.com/hacking/globlkos.nfo https://web.archive.org/web/19971015025943/http://www.otrics.com/hackr8.html Hacker groups
52632446
https://en.wikipedia.org/wiki/Private%20Cemeteries%20Act%20%28Minnesota%29
Private Cemeteries Act (Minnesota)
The Private Cemeteries Act is a state Act which provides legislation respecting private cemeteries, human remains and burial sites in the state of Minnesota, United States. The Act is divided into fourteen sections, including: Plat and Record Effect of Recorded Plat Religious Corporations may Acquire Existing Cemeteries Conveyance of Lots Gifts for Proprietary Care of Lots Transfer to Association; How Effected Effect of Transfer Damages; Illegal Molestation of Human Remains; Burials; Cemeteries; Penalty; Authentication Civil Actions Exemptions Vacation; Change of Name Abandoned Lots; Recovery Correction of Interment Errors Relocation Passage and Revisions The Act was first brought into force in 1976 and has undergone fifteen revisions since that time. 1976 Revision 1980 Revision 1983 Revision 1984 Revision 1985 Revision 1986 Revision 1989 Revision 1993 Revision 1994 Revision 1999 Revision 2003 Revision 2005 Revision 2007 Revision 2010 Revision 2013 Revision Effects The passage of the Private Cemeteries Act impacted the practice of archaeology and the treatment of human remains in Minnesota in several ways. The Act: legislates the creation, recording, and transfer of private cemeteries, as well as the sale of burial plots within them; mandates state fiduciary responsibility for burial authentications; requires that data regarding burial locations be made publicly available through state agencies such as the Minnesota Geospatial Information Office and the Unplatted Burial Sites and Earthworks in Minnesota website, and assigns state fiduciary responsibility over associated costs; establishes the roles and responsibilities of the Minnesota Office of the State Archaeologist (MOSA) with respect to human remains, burial sites and cemeteries; assigns the office prerogative over the licensure and process requirements for burial authentications; establishes a legislated requirement for proponents whose proposed development area contains known human remains or burials to submit their plans to the State Archaeologist at the MOSA for review; and enacts penalties for violations and non-compliance. Archaeological protection The Act affords protection to state archaeological sites under Section MS 307.08, in that the state of Minnesota declares: "that all human burials, human remains, and human burial grounds shall be accorded equal treatment and respect for human dignity without reference to their ethnic origins, cultural backgrounds, or religious affiliations. The provisions of this section shall apply to all human burials, human remains, or human burial grounds found on or in all public or private lands or waters in Minnesota" Compliance and enforcement Penalties for contraventions against the Act are detailed in Section 307.08 of the Act, and include: Felony; gross misdemeanor A person who intentionally, willfully, and knowingly does any of the following …: destroys, mutilates, or injures human burials or human burial grounds; without the consent of the appropriate authority, disturbs human burial grounds or removes human remains. 
Gross misdemeanor A person who, without the consent of the appropriate authority and the landowner, intentionally, willfully, and knowingly does any of the following: removes any tombstone, monument, or structure placed in any public or private cemetery or authenticated human burial ground; removes any fence, railing, or other work erected for protection or ornament, or any tree, shrub, or plant or grave goods and artifacts within the limits of a public or private cemetery or authenticated human burial ground; or discharges any firearms upon or over the grounds of any public or private cemetery or authenticated burial ground. Minnesota Office of the State Archaeologist The Minnesota Office of the State Archaeologist (MOSA) oversees a portion of the Act and has the following duties stemming from the legislation: Authenticates all unrecorded burial sites over 50 years old; Grants permission for disturbances in unrecorded non-Indian cemeteries; Allows posting and approves signs for authenticated non-Indian cemeteries; Maintains unrecorded cemetery and burial site locations and data; Provides burial sites data to the Minnesota Geospatial Information Office (MnGEO), formerly the Land Management Information Center (LMIC); Determines the ethnic identity of burials over 50 years old; Helps determine the tribal affiliation of Indian burials; Determines if osteological analysis should be done on recovered remains; Helps establish provisions for dealing with unaffiliated Indian remains; Reviews development plans that may impact unrecorded burials; Answers public and agency inquiries about known or suspected burial sites; Coordinates with the Minnesota Indian Affairs Council (MIAC) when Indian burials are threatened; Formally determines the presence or absence of burial grounds through field work in particular areas (authentication); Advises agencies and landowners on legal and management requirements for unrecorded burial grounds. The MOSA has adopted an internal policy, called the State Archaeologist’s Procedures for Implementing Minnesota’s Private Cemeteries Act, which helps guide the implementation and execution of the Act within MOSA operations. This procedural manual provides detailed sections on: Authentication; Burial Removal and Analysis; Reinterment; Management of Unrecorded Burial Grounds; Record Keeping and Data Sharing; and Development Plan Review. Archaeological licensing The Minnesota Office of the State Archaeologist has the mandate over the issuance of archaeological investigation licenses in the state, in conjunction with the Minnesota Historical Society. There are four kinds of licenses: Yearly license for reconnaissance/Phase I survey. Site-specific license for site evaluations/Phase II. Site-specific license for major excavations/Phase III. Site-specific license for burial authentications. Burial authentication licenses, governed by Section MS 307.08 of the Private Cemeteries Act, are issued by the MOSA. These authentications determine the absence or presence of human remains at a site, and often involve extensive sampling of a specific locality. As the State Archaeologist is the only one legislated by the Field Archaeology Act (Minnesota) to authenticate burials more than 50 years old, the State Archaeologist often serves as Co-Principal on these permits. 
In addition to the five basic professional standards of the MOSA outlined here, the following additional requirements are necessary for burial authentication licenses: "Demonstrated experience or training in dealing with human remains, identifying fragmentary human remains, and with assessing soil conditions and features commonly associated with human burials"; and "If a suspected Indian burial is involved… some experience in consulting with tribal groups and the Minnesota Indian Affairs Council" Coroner; Medical Examiner Act One additional Act which governs the archaeological methods associated with human remains in Minnesota is the Coroner; Medical Examiner Act, enacted in 1965. This Act makes provisions for the discovery of unidentified deceased persons, and outlines the chain of command with respect to these remains. Section 390.25, Subdivision 5, "Notice to State Archaeologist", reads: "After the coroner or medical examiner has completed the investigation, the coroner or medical examiner shall notify the state archaeologist, according to section 307.08, of all unidentified human remains found outside of platted, recorded, or identified cemeteries and in contexts which indicate antiquity of greater than 50 years". References External links Minnesota Office of the State Archaeologist Minnesota Historical Society SHPM Manual for Archaeological Projects in Minnesota State Archaeologist’s Manual for Archaeological Projects in Minnesota State Archaeologist’s Procedures for Implementing Minnesota’s Private Cemeteries Act 1976 in American law 1976 in Minnesota Cemeteries in Minnesota Minnesota law Minnesota statutes
62907350
https://en.wikipedia.org/wiki/Minecraft%20server
Minecraft server
A Minecraft server is a player-owned or business-owned multiplayer game server for the 2009 Mojang video game Minecraft. In this context, the term "server" often colloquially refers to a network of connected servers, rather than a single machine. Players can start their own server either by setting one up on a computer using software provided by Mojang, or by using a hosting provider so they can have their server run on dedicated machines with guaranteed uptime. The largest and most popular server is Hypixel. Minecraft multiplayer servers are controlled by server operators, who have access to server commands such as setting the time of day, teleporting players and setting the world spawn. The server owner (or users that have access to the live server files) can also set up and install plugins to change the mechanics of the server, add commands among other features, and can also set up restrictions concerning which usernames or IP addresses are allowed or disallowed to enter the server. Multiplayer servers have a wide range of activities, with some servers having unique premises, rules, and customs. Player versus player combat (PvP) can be enabled to allow fighting between players. Many servers have custom plugins that allow actions that are not normally possible in the vanilla form of the game. History Multiplayer was first added to Minecraft on May 31, 2009, during the Classic phase of the game. The oldest server map is called "Freedonia", in the Minecraft server MinecraftOnline. The server and map were created on August 4, 2010, within the first hour of Minecraft survival multiplayer being released. In 2013, Mojang announced Minecraft Realms, a server hosting service intended to enable players to run server multiplayer games easily and safely without having to set up their own. Unlike a standard server, only invited players can join Realms servers, and these servers do not use IP addresses. At Electronic Entertainment Expo 2016, it was announced that Realms would enable Minecraft to support cross-platform play between Windows 10, iOS, and Android platforms starting in June 2016, with other platforms releasing over the next two years. In June 2014, Mojang began enforcing the EULA of the computer versions of the game to prevent servers from selling microtransactions that unfairly affected gameplay, such as pay-to-win items, only allowing servers to sell cosmetic items. Many servers shut down due to this. On September 20, 2017, the Better Together Update was released for Bedrock codebase-derived editions of the game, which added multiplayer servers, along with a set of official featured servers: Mineplex, Lifeboat, CubeCraft, Mineville City, Pixel Paradise, and The Hive. Management Managing a Minecraft server can be a full-time job for many server owners. Several large servers employ a staff of developers, managers, and artists. As of 2014, the Shotbow server employed three full-time and five part-time employees. According to Matt Sundberg, the server's owner, "large server networks are incredibly expensive to run and are very time consuming." According to Chad Dunbar, the founder of MCGamer, "it really costs to run networks above 1000 concurrent players." This includes salaries, hardware, bandwidth, and DDoS protection, and so monthly expenses can cost thousands of dollars. Dunbar stated that MCGamer, which has had over 50,000 daily players, has expenses that can be "well into the five-figure marks" per month. As of 2015, expenses of Hypixel, the largest server, were nearly $100,000 per month. 
Many servers sell in-game ranks and cosmetics to pay for their expenses. Software Vanilla server software provided by Mojang is maintained alongside client software. While servers must update to support features provided by new updates, many different kinds of modified server software exist. Modifications typically include optimizations, allowing more players to use a server simultaneously, or for larger portions of the world to be loaded at the same time. Modified software almost always acts as a base for plug-ins, which may be added and removed to customize server functionality. These are typically written in Java for the Java Edition, although JavaScript and PHP are used in some Bedrock Edition software. As the vanilla software for Bedrock is made compatible with only Ubuntu and Windows, modifications may allow for added compatibility. Notable plug-in software includes CraftBukkit, Spigot, Paper and Sponge for Java, and Pocketmine-MP, Nukkit, Altay and Jukebox for Bedrock. Vanilla and modified servers alike communicate with the client using a consistent protocol but may have vastly different internal mechanisms. Certain server software allows servers to be linked, letting players dynamically cross worlds without "signing out"; examples include BungeeCord and Waterfall for Java and WaterDog and Nemisys for Bedrock. In a similar vein, due to close feature parity between up-to-date editions of the game, Java servers may utilize a proxy server such as DragonProxy or Geyser to communicate with both protocols, allowing Bedrock players to join. Notable servers The most popular Java Edition server is Hypixel, which, released in April 2013, has had over 20 million unique players, around half of all active players of the Java Edition itself. Other popular servers include MCGamer, released in April 2012, which has over 3.5 million unique players; Wynncraft, released in April 2013, which has over 1 million unique players; and Emenbee, released in 2011, which also has over 1 million unique players. As of 2014, servers such as Mineplex, Hypixel, Shotbow and Hive Games receive "well over a million unique users every month", according to Polygon. Oldest server The identity of the oldest server in Minecraft is a common debate within the community. The community generally considers either MinecraftOnline and its map "Freedonia", or Nerd.Nu, a server that has some of the oldest maps on a server, to be the oldest server in Minecraft. Proponents of Nerd.Nu being the oldest server argue in YouTube video essays that people have built on nerd.nu's maps for longer, even if some of those maps are lost to time. Those who argue that MinecraftOnline is the oldest server, again using articles and YouTube video essays, disregard servers from Minecraft's browser or classic form and treat the release of Minecraft's official survival multiplayer as the starting point for determining the oldest server. Additionally, MinecraftOnline's map has never been reset, while nerd.nu's map has gone through over 25 map revisions. While it has since been disproven, the server 2b2t was previously presented as the oldest server in Minecraft by YouTube creator TheCampingRusher. 2b2t's age and lack of formal rules currently make it the second oldest map and the oldest anarchy server in Minecraft. List References Further reading
27854734
https://en.wikipedia.org/wiki/1992%20USC%20Trojans%20football%20team
1992 USC Trojans football team
The 1992 USC Trojans football team represented the University of Southern California (USC) in the 1992 NCAA Division I-A football season. In their sixth and final year under head coach Larry Smith, the Trojans compiled a 6–5–1 record (5–3 against conference opponents), finished in a tie for third place in the Pacific-10 Conference (Pac-10), and outscored their opponents by a combined total of 264 to 249. USC's hundredth football season was also Larry Smith's last. Though they placed third in the Pac-10 and secured a bowl berth, they lost their last three games including their rivalry games against Notre Dame and UCLA. Smith was replaced at the end of the season by John Robinson, who returned to USC for a rare second tenure as head coach. Quarterback Rob Johnson led the team in passing, completing 163 of 285 passes for 2,118 yards with 12 touchdowns and 14 interceptions. Estrus Crayton led the team in rushing with 183 carries for 700 yards and five touchdowns. Curtis Conway led the team in receiving with 49 catches for 764 yards and five touchdowns; Johnnie Morton also had 49 catches for 756 yards and six touchdowns. Schedule Roster Game summaries Oklahoma Source: Gainesville Sun References USC USC Trojans football seasons USC Trojans football
67286453
https://en.wikipedia.org/wiki/Elitegroup%20Computer%20Systems%20Headquarters
Elitegroup Computer Systems Headquarters
The Elitegroup Computer Systems Headquarters () is a 21-storey skyscraper office building completed in 2007 and located in Neihu District, Taipei, Taiwan. One of the most prominent landmarks in the Neihu Science Park along the bank of the Keelung River, the building serves as the corporate headquarters of the Taiwanese electronics firm Elitegroup Computer Systems. See also List of tallest buildings in Taiwan List of tallest buildings in Taipei Elitegroup Computer Systems Lite-On Technology Center References 2007 establishments in Taiwan Skyscrapers in Taipei Skyscraper office buildings in Taiwan Office buildings completed in 2007
57068453
https://en.wikipedia.org/wiki/Software%20Tools%20Users%20Group
Software Tools Users Group
The Software Tools Users Group (STUG) was a technical organization started in 1976, in parallel with Usenix. The STUG goal was to develop a powerful and portable Unix-like system that could be implemented on top of virtually any operating system, providing the capabilities and features of Unix in a non-proprietary system. With its focus on building clean, portable, reusable code shared amongst multiple applications and runnable on any operating system, the Software Tools movement reestablished the tradition of open source and the concepts of empowering users to define, develop, control, and freely distribute their computing environment. History In 1976, Brian Kernighan (then of Bell Labs) and P. J. Plauger published Software Tools, the first of their books on programming inspired by the recent creation of the Unix operating system by Kernighan's colleagues at Bell Labs. The "Software Tools" series spread the essence of "C/Unix thinking" with makeovers for Fortran and Pascal. Kernighan's Ratfor (rational FORTRAN preprocessor) was eventually put in the public domain. Deborah K. Scherrer, Dennis E. Hall, and Joseph S. Sventek, then researchers at the Lawrence Berkeley National Laboratory, quickly picked up the Software Tools book and philosophy. They expanded the initial set of a few dozen tools from the book into an entire Virtual Operating System (VOS), providing an almost complete set of the Unix tools, a Unix-like programming library, and an operating system interface that could be implemented on top of virtually any system. They freely distributed their VOS collection worldwide. Their work generated ports of the software to over 50 operating systems and a users group of more than 2,000 members. An LBNL research report appeared in Communications of the ACM in September 1980. Scherrer, also on the Usenix Board at the time, established and coordinated the Software Tools Users Group, aligning it with Usenix. Starting in 1979, STUG and Usenix held parallel conferences. STUG also produced a series of newsletters, coordinated with the European Unix Users Group, and spawned similar groups in other parts of the world. The Software Tools movement eventually prompted several commercial companies to port and distribute the Software Tools to microcomputer systems such as CP/M and MS-DOS. Awards On January 24, 1996, Scherrer's, Hall's, and Sventek's work was recognized with a USENIX Lifetime Achievement Award (“The Flame”). In 1993, Scherrer had previously been honored with a “UNIX Academic Driver” award presented by Bell Labs, for “Outstanding Contributions to the UNIX community”. Her work included the Software Tools movement as well as contributions to USENIX. Other Major Contributors The Software Tools project was the result of efforts from hundreds of people at many sites. The USENIX STUG Lifetime Achievement Award includes the names of many, but certainly not all, major contributors to the Software Tools project. Legacy By the late 1980s, Unix was becoming more available, Microsoft had taken over the PC market, and the need for the VOS environment started to subside. The STUG group decided to disband, choosing to donate the group's financial legacy to endow a yearly USENIX “STUG Award”. This award “recognizes significant contributions to the community that reflect the spirit and character demonstrated by those who came together in the Software Tools Users Group. 
Recipients of the annual STUG Award conspicuously exhibit a contribution to the reusable code base available to all and/or the provision of a significant enabling technology to users in a widely available form.” See also USENIX Unix Open-source model Brian Kernighan “The Unix Programming Environment”. Software: Practice and Experience, Vol 9, 1979. Peter H. Salus A Quarter Century of UNIX. Addison-Wesley: 1994. A complete copy of the Software Tools distributions from LBNL, the ports for Unix, CP/M, and MS-DOS, Pascal, and the original set from Addison-Wesley are available in the Computer History Museum and The Unix Heritage Society https://web.archive.org/web/20050831153956/http://www.tuhs.org/. These archives also contain most of the STUG newsletters and related articles. References Lifetime achievement awards Unix history Unix programming tools User groups
10904051
https://en.wikipedia.org/wiki/Terminfo
Terminfo
Terminfo is a library and database that enables programs to use display terminals in a device-independent manner. Mary Ann Horton implemented the first terminfo library in 1981–1982 as an improvement over termcap. The improvements include faster access to stored terminal descriptions; longer, more understandable names for terminal capabilities; and general expression evaluation for strings sent to the terminal. Terminfo was included with UNIX System V Release 2 and soon became the preferred form of terminal descriptions in System V, rather than termcap (which BSD continued to use). This was imitated in pcurses in 1982–1984 by Pavel Curtis, and was available on other UNIX implementations, adapting or incorporating fixes from Mary Horton. For more information, refer to the posting on the comp.sources.unix newsgroup from December 1986. A terminfo database can describe the capabilities of hundreds of different display terminals. This allows external programs to produce character-based display output independent of the type of terminal. Some configurations are: Number of lines on the screen Mono mode; suppress color Use visible bell instead of beep Data model Terminfo databases consist of one or more descriptions of terminals. Indices Each description must contain the canonical name of the terminal. It may also contain one or more aliases for the name of the terminal. The canonical name or aliases are the keys by which the library searches the terminfo database. Data values The description contains one or more capabilities, which have conventional names. The capabilities are typed: boolean, numeric and string. The terminfo library has predetermined types for each capability name. It checks the types of each capability by the syntax: string capabilities have an "=" between the capability name and its value, numeric capabilities have a "#" between the capability name and its value, and boolean capabilities have no associated value (they are always true if specified). Applications which use terminfo know the types for the respective capabilities, and obtain the values of capabilities from the terminfo database using library calls that return successfully only when the capability name corresponds to one of the predefined typed capabilities. Like termcap, some of the string capabilities represent escape sequences which may be sent to the host by pressing special keys on the keyboard. Other capabilities represent strings that may be sent by an application to the terminal. In the latter case, the terminfo library provides functions (as does a termcap library) for substituting application parameters into the string which is sent. These functions provide a stack-based expression parser, which is primarily used to help minimize the number of characters sent for control sequences which have optional parameters, such as SGR (Select Graphic Rendition). In contrast, termcap libraries provide a limited set of operations which are useful for most terminals. Hierarchy Terminfo descriptions can be constructed by including the contents of one description in another, suppressing capabilities from the included description, or overriding or adding capabilities. No matter what storage model is used, the terminfo library returns the terminal description for the requested terminal, using data which is compiled with a standalone tool (e.g., tic). Storage model Terminfo data is stored as a binary file, making it less simple to modify than termcap. 
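Before turning to how the compiled data is stored, it may help to see how the typed capabilities described above are queried from a program. Python's standard curses module wraps the underlying terminfo query calls; the sketch below assumes a Unix-like system with a terminfo database installed and uses the standard capability names "am", "cols" and "cup".

import curses

# Load the terminfo description for the terminal named in $TERM.
curses.setupterm()

# Boolean capability: "am" (automatic margins) is true if present.
has_auto_margins = curses.tigetflag("am") > 0

# Numeric capability: "cols" is the number of columns.
columns = curses.tigetnum("cols")

# String capability: "cup" is the parameterized cursor-addressing sequence.
cup = curses.tigetstr("cup")                          # bytes, or None if absent
move_to = curses.tparm(cup, 5, 10) if cup else b""    # substitute row 5, column 10

print("auto margins:", has_auto_margins)
print("columns:", columns)
print("cursor-addressing bytes:", move_to)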
The data can be retrieved by the terminfo library from the files where it is stored. The data itself is organized as tables for the boolean, numeric and string capabilities, respectively. This is the scheme devised by Mary Horton, and except for some differences regarding the available names is used in most terminfo implementations. X/Open does not specify the format of the compiled terminal description. In fact, it does not even mention the common tic or infocmp utilities. Because the compiled terminfo entries do not contain metadata identifying the indices within the tables to which each capability is assigned, they are not necessarily compatible between implementations. However, since most implementations use the same overall table structure (including sizes of header and data items), it is possible to automatically construct customized terminfo libraries which can read data for a given implementation. For example, ncurses can be built to match the terminfo data for several other implementations. Directory tree The original (and most common) implementation of the terminfo library retrieves data from a directory hierarchy. By using the first character of the name of the terminal description as one component of the pathname, and the name of the terminal description as the name of the file to retrieve, the terminfo library usually outperforms searching a large termcap file. Hashed database Some implementations of terminfo store the terminal description in a hashed database (e.g., something like Berkeley DB version 1.85). These store two types of records: aliases which point to the canonical entry, and the canonical entry itself, which contains the data for the terminal capabilities. Limitations and extensions The Open Group documents the limits for terminfo (minimum guaranteed values), which apply only to the source file. Two of these are of special interest: 14 character maximum for terminal aliases 32,767 maximum for numeric quantities The 14-character limit addresses very old filesystems which could represent filenames no longer than that. While those filesystems are generally obsolete, these limits were as documented from the late 1980s, and unreviewed since then. The 32,767 limit is for positive values in a signed two's complement 16-bit value. A terminfo entry may use negative numbers to represent cancelled or absent values. Unlike termcap, terminfo has both a source and compiled representation. The limits for the compiled representation are unspecified. However, most implementations note in their documentation for tic (terminal information compiler) that compiled entries cannot exceed 4,096 bytes in size. See also Computer terminals Curses (programming library) Termcap tput References External links Current terminfo data Termcap/Terminfo Resources Page at Eric S. Raymond's website man terminfo(5) Database management systems Unix software Text mode 1980s software
19383106
https://en.wikipedia.org/wiki/Advance%20Steel
Advance Steel
Advance Steel is a CAD software application for 3D modeling and detailing of steel structures and the automatic creation of fabrication drawings, bills of materials and NC files. It was initially developed by GRAITEC, but was acquired by Autodesk in 2013. The software runs on AutoCAD. Features The application supports all basic AutoCAD concepts and functions (snap points, grip points, copy, etc.). When running on its own CAD platform, the application provides the same capabilities as running on AutoCAD. The Advance Steel CAD platform serves both as a graphic engine and an object-oriented database. Compatibility is maintained between versions of Advance Steel running on its own CAD platform and running on AutoCAD. The Advance Steel information is stored in DWG format. The application includes AutoLisp (enhancing standard AutoLisp to include Advance Steel commands) and COM (VBA, C++) programming interfaces. This means that users can create their own customized macros for specialist requirements. Advance Steel imports and exports the following file formats: GTC DWG IFC 2x3 CIS/2 SDNF PSS KISS (“Keep it Simple, Steel”) DSTV DXF The main functions of Advance Steel concern: Creation of a 3D model using a library of construction elements (i.e., beams, plates, bolts, welds, etc.) Sheet metal and plate work Advanced tools for element collision detection Clear workshop drawings, automatically labeled and dimensioned Checking the model in order to ensure a correctly built 3D model and accurate bills of materials Automatic creation of general arrangement drawings, shop drawings, fabrication drawings, fitting drawings and isometric views Drawing creation workflow management (revision control, automatic update, etc.) Automatic creation of lists / bills of materials and NC files Multi-user technology - all users involved in a project can work simultaneously and securely on the same model, without errors. Advance Steel provides instruments for modeling complex structures such as straight and spiral stairs, railings, and ladders. The program creates all necessary documents (including NC files) for the stair fabrication. Specific features Parametric steel connections Advance Steel has a library of more than 300 preset parametric steel connections to connect Advance elements, grouped in the following categories: beam end-to-end joints, base plate joints, general bracing joints, cantilever beam-to-column joints, plate joints, clip angle joints, pylon joints, tube brace joints, purlin joints, stiffener joints, and turnbuckle bracings. The user creates all connecting elements in a single operation. At the same time, the connected elements are processed (shortened, coped, etc.). The software allows users to customize the connections: Set the parameters of the joint Process the connected elements Transfer the properties from one steel connection to another Update the steel connection Joint Design engine The software dimensions and checks joints according to Eurocode 3 standards and AISC North American standards. A design report can be created. Drawing styles Based on the 3D model, dimensioned and labeled 2D general arrangement and shop drawings can be automatically created using drawing styles. The drawings are created in separate DWG files; however, they are linked to track changes. Thus, the drawings can be updated after any model modifications, and the drawing revision can be managed. 
The software has a variety of predefined drawing styles for the creation of general arrangement drawings and shop drawings for single parts and assemblies. A drawing style is a set of rules used to create a detail drawing and defines the elements that are displayed including labeling and dimensioning preferences. Drawing styles provide the option to automatically create drawings and to modify the layout exactly to user requirements. Drawing styles are used in a similar way to AutoCAD dimension styles, line styles, etc. The predefined drawing styles are different for each installation and country. Custom drawing styles can also be defined. Software compatibility The application is compatible with: Windows 10, Windows 8, Windows 7, Windows Vista, Windows XP Pro (32-bit and 64-bit) AutoCAD 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017 AutoCAD Architecture 2010, 2011, 2012, 2013, 2014, 2015, 2016, 2017 Software interoperability Advance Steel integrates GRAITEC's data synchronization technology, GTC (GRAITEC Transfer Center). This technology offers: Import/export data in standard formats: CIS/2, SDNF, PSS, IFC 2.x3 Multiple Advance Steel users that work simultaneously on the same project and synchronize their models Synchronization in Advance Steel of modifications made by engineers in other GRAITEC software (section changes, addition of structural elements, etc.) Release history See also Comparison of CAD editors for CAE Tekla Structures References External links Advance Steel official page Projects created with Advance Steel Advance Steel demos Computer-aided engineering software Civil engineering 3D graphics software Product lifecycle management Computer-aided design software Autodesk acquisitions Building information modeling GRAITEC products
582329
https://en.wikipedia.org/wiki/OpenMPT
OpenMPT
OpenMPT is an open-source audio module tracker for Windows (with intended Wine compatibility for UNIX and Linux x86 systems). It was previously called ModPlug Tracker, and was first released by Olivier Lapicque in September 1997. Computer Music magazine listed OpenMPT among the top five free music trackers in 2007, and it is one of the most widely used trackers. History MOD Plugin and ModPlug Tracker OpenMPT was initially developed as a browser plug-in called MOD Plugin, which enabled users to play music and other sounds encoded in module files. ModPlug Tracker, along with a player application named ModPlug Player, evolved from this plug-in. In December 1999, Olivier Lapicque sent the module-playing parts of ModPlug Tracker's source code to Kenton Varda, under the GPL-2.0-or-later, to write a plugin for XMMS based on the code. In 2001, the source code was released into the public domain, and the mod-playing code was split off into a separate library, libmodplug, maintained as part of the ModPlug XMMS Plugin project. This project lay dormant from late 2003 until early 2006, when it was picked up again. Today, libmodplug is included in many Linux distributions as a default audio plugin for playing module files and is a part of the popular open source multimedia framework gstreamer. Due to lack of time, Olivier Lapicque discontinued development of ModPlug Tracker itself, and in January 2004, he released the entire source code under an open-source license. The ModPlug Player source code is still closed as of May 2020. OpenMPT Lapicque's MPT code was taken up by a group of tracker musicians/programmers and is now known as OpenMPT. Also based on the ModPlug code is OpenMPT's "sister project" Schism Tracker, which contributed several backports of bugfixes to OpenMPT. OpenMPT is distributed as free software and is, as of May 2020, under active development. Until May 2009 (v1.17.02.53), OpenMPT was licensed under the copyleft GPL-2.0-or-later license; it was then relicensed under the terms of the permissive BSD-3-Clause license. Since OpenMPT 1.23 (March 2014), OpenMPT has also been available as a 64-bit application. This allows musicians to use 64-bit VST plugins and make use of the entire physical memory on 64-bit systems. For this purpose, OpenMPT provides its own plugin bridge, which can be used to run plugins with a different bitness than the host in a separate process, or to run plugins in a sandbox and prevent them from crashing the host application. Features OpenMPT's main distinguishing feature is its native Windows user interface. Most trackers, even newer ones such as Renoise, have interfaces modelled after the older DOS trackers such as FastTracker II. It supports samples, VST plugins and OPL3 instruments as sound sources. OpenMPT makes use of features common to Microsoft Windows programs, such as context menus for effect selection, "tree" views (for files, samples, and patterns), drag and drop functionality throughout, and the native look and feel of the Windows platform. It supports both loading and saving of IT (Impulse Tracker), XM (FastTracker Extended Module), MOD (ProTracker and similar), S3M (Scream Tracker 3) and MPTM (its own file format) files, imports many more module and sample file formats, and has some support for DLS banks and SoundFonts. OpenMPT was also one of the first trackers to support opening and editing of multiple tracker modules in parallel. OpenMPT supports up to 127 tracks/channels, VST plugins and VST instruments, and has ASIO support. 
MPTM file format Due to limitations of the various mod file formats it is able to save, a new module format called MPTM was created in 2007. OpenMPT introduced some non-standard additions to the older file formats. For example, one can use stereo samples or add VST Plugins to XM and IT modules, which were not supported in the original trackers. Many of these features have gradually been removed from IT and XM files and made available only in MPTM files. libopenmpt libopenmpt is a cross-platform module playing library based on the OpenMPT code with interfaces for C, C++ and other programming languages. To ensure that the code bases do not diverge like in the case of ModPlug Tracker and libmodplug, libopenmpt development takes place in the same code repository as OpenMPT. Official input plug-ins for popular audio players (XMPlay, Winamp and foobar2000) based on libopenmpt are also available from the website. FFmpeg also offers an optional module decoder based on libopenmpt. libopenmpt can also serve as a drop-in replacement for libmodplug and thus offer up-to-date module playback capabilities for software that relies on the libmodplug API. Reception and users Nicolay of the Grammy-nominated The Foreign Exchange has revealed that ModPlug is his "Secret Weapon". Movie and video game music composer Raphaël Gesqua made known his use of OpenMPT in an interview. Peter Hajba and Alexander Brandon used OpenMPT to compose the soundtracks for Bejeweled 2, Bejeweled 3 and other PopCap games. Electronic rock musician Blue Stahli has mentioned that he used ModPlug Tracker and other trackers in the past. References External links ModPlug ModPlug XMMS Plugin (using libmodplug) Audio trackers Free audio software Free software programmed in C++ Windows-only free software Software using the BSD license
7715165
https://en.wikipedia.org/wiki/United%20States%20House%20Homeland%20Security%20Subcommittee%20on%20Cybersecurity%2C%20Infrastructure%20Protection%20and%20Innovation
United States House Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection and Innovation
The Homeland Security Subcommittee on Cybersecurity, Infrastructure Protection and Innovation is a subcommittee within the House Homeland Security Committee. Established in 2007 as a new subcommittee, it handles many of the duties of the former Commerce Subcommittee on Economic Security, Infrastructure Protection, and Cybersecurity. Members, 117th Congress Historical membership rosters 115th Congress 116th Congress External links Official Site Homeland Cybersecurity 2007 establishments in Washington, D.C.
46315592
https://en.wikipedia.org/wiki/Conda%20%28package%20manager%29
Conda (package manager)
Conda is an open-source, cross-platform, language-agnostic package manager and environment management system. It was originally developed to solve difficult package management challenges faced by Python data scientists, and today is a popular package manager for Python and R. Originally part of the Anaconda Python distribution developed by Anaconda, Inc., it proved useful on its own and for software other than Python, so it was spun out as a separate package and released under the BSD license. The Conda package and environment manager is included in all versions of Anaconda, Miniconda, and Anaconda Repository. Conda is a NumFOCUS-affiliated project. Features Conda allows users to easily install different versions of binary software packages and any required libraries appropriate for their computing platform. It also allows users to switch between package versions and to download and install updates from a software repository. Conda is written in the Python programming language, but can manage projects containing code written in any language (e.g., R), including multi-language projects. Conda can install Python itself, while similar Python-based cross-platform package managers (such as pip) cannot. A popular Conda channel for bioinformatics software is Bioconda, which provides multiple software distributions for computational biology. Comparison to pip The key difference between Conda and the pip package manager is how package dependencies are managed, which is a significant challenge for Python data science and the reason Conda was created. Before pip 20.3, pip installed all required Python package dependencies, whether or not they conflicted with packages installed previously. So a working installation of, for example, Google TensorFlow would suddenly stop working when a user pip-installs a new package that needs a different version of the NumPy library. Everything might still appear to work, but the user could get different results, or would be unable to reproduce the same results on a different system because the packages were not pip-installed in the same order. Conda checks the current environment, everything that has been installed, and any version limitations that the user specifies (e.g., the user only wants TensorFlow >= 2.0), and figures out how to install a compatible set of dependencies. If no such set exists, it tells the user that the request cannot be satisfied. pip, by contrast, will simply install the specified package and its dependencies, even if doing so breaks other packages. See also Anaconda (Python distribution) References External links Free package management systems Software distribution Free software programmed in Python
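To make the pip comparison above concrete, the following is a minimal sketch of a Conda workflow driven from Python via subprocess. The environment name "sandbox" and the particular package pins are illustrative assumptions rather than anything prescribed by Conda; the subcommands used (create, install, run) are standard Conda CLI commands, and the script assumes the conda executable is on PATH.

```python
# Minimal sketch of the Conda workflow described above, driven from Python.
# Assumes the `conda` executable is on PATH; the environment name "sandbox"
# and the package pins are illustrative only.
import subprocess

def conda(*args):
    """Run a conda subcommand and fail loudly if it returns non-zero."""
    cmd = ["conda", *args]
    print("$", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Create an isolated environment; the solver picks a mutually compatible
# set of builds for Python and NumPy up front.
conda("create", "--yes", "--name", "sandbox", "python=3.9", "numpy")

# Adding a constrained package later re-runs the solver against everything
# already in the environment. If no compatible combination exists, conda
# reports the conflict instead of silently breaking installed packages.
conda("install", "--yes", "--name", "sandbox", "tensorflow>=2.0")

# Run code inside the environment without activating it in a shell.
conda("run", "--name", "sandbox", "python", "-c",
      "import numpy; print(numpy.__version__)")
```

By contrast, running a bare pip install of a new package inside the same environment (with a pre-20.3 pip) could pull in a different NumPy build without checking whether the already-installed TensorFlow still had a consistent dependency set.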
973934
https://en.wikipedia.org/wiki/TickIT
TickIT
TickIT is a certification program for companies in the software development and computer industries, supported primarily by the United Kingdom and Swedish industries through UKAS and SWEDAC respectively. Its general objective is to improve software quality. History In the 1980s, the UK government's CCTA organisation promoted the use of IT standards in the UK public sector, with work on BS5750 (Quality Management) leading to the publishing of the Quality Management Library and the inception of the TickIT assessment scheme with the DTI, the MoD and the participation of software development companies. TickITplus The TickIT scheme has been updated to become TickITplus, which has its own website. TickITplus adds a new dimension to the existing TickIT scheme, combining industry best practice with international IT standards. It provides ISO 9001:2008 accredited certification with a Capability Grading for all sizes and types of IT organisations. It cross-references ISO/IEC 15504 (Information technology — Process assessment) and ISO/IEC 12207 (Systems and software engineering — Software life cycle processes), amongst others. In addition, it promotes auditor and practitioner competency and training within established qualification standards. Functions In addition to a general objective of improving software quality, one of the principles of TickIT is to improve and regulate the behaviour of auditors working in the information technology sector through training and subsequent certification of auditors. The International Register of Certificated Auditors manages the registration scheme for TickIT auditors. Software development organizations seeking TickIT certification are required to show conformity with ISO 9000. A major objective was to provide industry with a practical framework for the management of software development quality by developing more effective quality management system certification procedures. These involved: publishing guidance material to assist software organizations in interpreting the requirements of ISO 9001; training, selecting and registering auditors with IT experience and competence; and introducing rules for the accreditation of certification bodies practising in the software sector. The TickIT Guide TickIT also includes a guide. This provides guidance in understanding and applying ISO 9001 in the IT industry. It gives a background to the TickIT scheme, including its origins and objectives. Furthermore, it provides detailed information on how to implement a Quality System and the expected structure and content relevant to software activities. The TickIT Guide also assists in defining appropriate measures and/or metrics. The TickIT Guide contains the official guidance material for TickIT. It is directed at a wide audience: senior managers and operational staff of software suppliers and in-house development teams, purchasers and users of software-based systems, certification bodies and accreditation authorities, third-party and internal auditors, auditor training course providers and IT consultants. Part A: Introduction to TickIT and the Certification Process This presents general information about the operation of TickIT and how it relates to other quality initiatives such as Process Improvement. 
Part B: Guidance for Customers This describes the issues relating to quality management system certification in the software field from the viewpoint of the customer who is initiating a development project, and explains how the customer can contribute to the quality of the delivered products and services. Part C: Guidance for Suppliers This presents information and guidance to software and software service providing organizations, including in house developers, on the construction of their quality management systems using the TickIT procedures. This part also indicates how organizations can assess and improve the effectiveness of their quality management systems. Part D: Guidance for Auditors This gives guidance to auditors on the conduct of assessments using the TickIT procedures. Part E: Software Quality Management System Requirements – Standards Perspective This contains guidance to help organizations producing software products and providing software-related services interpret the requirements of BS EN ISO 9001:2000. It follows the clause sequence of the Standard. Part F: Software Quality Management System Requirements – Process Perspective This identifies and elaborates upon the good practice required to provide effective and continuous control of a software quality management system. It is organized around the basic processes required for software development, maintenance and support and follows the structure set out in ISO/IEC 12207:1995. Appendix 1: Management and Assessment of IT Processes Appendix 2: Case study: Using the EFQM Excellence Model Appendix 3: Case Study: ISO/IEC 15504 - Compatible Process Assessments Appendix 4: Case study: Software Process Improvement The CMMSM Way Standards information and references Glossary of terms References Bamford, Robert; Deibler, William (2003). ISO 9001: 2000 for Software and Systems Providers: An Engineering Approach (1st ed.). CRC-Press. External links TickITplus IRCA Certification bodies Information assurance standards Information technology governance Information technology organisations based in the United Kingdom Software quality
33883830
https://en.wikipedia.org/wiki/Metamorphic%20testing
Metamorphic testing
Metamorphic testing (MT) is a property-based software testing technique, which can be an effective approach for addressing the test oracle problem and test case generation problem. The test oracle problem is the difficulty of determining the expected outcomes of selected test cases or to determine whether the actual outputs agree with the expected outcomes. Metamorphic relations (MRs) are necessary properties of the intended functionality of the software, and must involve multiple executions of the software. Consider, for example, a program that implements sin x correct to 100 significant figures; a metamorphic relation for sine functions is "sin (π − x) = sin x". Thus, even though the expected value of sin x1 for the source test case x1 = 1.234 correct to the required accuracy is not known, a follow-up test case x2 = π − 1.234 can be constructed. We can verify whether the actual outputs produced by the program under test from the source test case and the follow-up test case are consistent with the MR in question. Any inconsistency (after taking rounding errors into consideration) indicates a failure of the program, caused by a fault in the implementation. MRs are not limited to programs with numerical inputs or equality relations. As an example, when testing a booking website, a web search for accommodation in Sydney, Australia, returns 1,671 results; are the results of this search correct and complete? This is a test oracle problem. Based on a metamorphic relation, we may filter the price range or star rating and apply the search again; it should return a subset of the previous results. A violation of this expectation would similarly reveal a failure of the system. Metamorphic testing was invented by T.Y. Chen in the technical report in 1998. Since then, more than 150 international researchers and practitioners have applied the technique to real-life applications. Some examples include web services, computer graphics, embedded systems, simulation and modeling, machine learning, decision support, bioinformatics, components, numerical analysis, code generators and compilers. The first major survey of the field of MT was conducted in 2016. It was followed by another major survey in 2018, which highlights the challenges and opportunities and clarifies common misunderstandings. Although MT was initially proposed as a software verification technique, it was later developed into a paradigm that covers verification, validation, and other types of software quality assessment. MT can be applied independently, and can also be combined with other static and dynamic software analysis techniques such as proving and debugging. In August 2018, Google acquired GraphicsFuzz, a startup from Imperial College London, to apply metamorphic testing to graphics device drivers for Android smartphones. References External links Software testing
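To illustrate the sine example in the article above, here is a minimal sketch of a metamorphic test in Python. It uses math.sin as a stand-in for the hypothetical high-precision implementation under test; the tolerance value, the number of trials, and the input range are illustrative assumptions, not part of the original technique description.

```python
# Minimal sketch of a metamorphic test using the relation sin(pi - x) = sin(x).
# math.sin stands in for the implementation under test; the tolerance and the
# input range are illustrative assumptions.
import math
import random

def program_under_test(x):
    """The implementation being verified; no oracle for its exact output is assumed."""
    return math.sin(x)

def run_metamorphic_tests(trials=1000, tol=1e-9):
    for _ in range(trials):
        source = random.uniform(-10.0, 10.0)   # source test case
        follow_up = math.pi - source           # follow-up test case
        y_source = program_under_test(source)
        y_follow_up = program_under_test(follow_up)
        # The metamorphic relation must hold up to rounding error; any larger
        # discrepancy indicates a failure caused by a fault in the implementation.
        if abs(y_source - y_follow_up) > tol:
            raise AssertionError(
                f"MR violated: sin({source}) = {y_source}, "
                f"sin(pi - {source}) = {y_follow_up}")

run_metamorphic_tests()
print("No metamorphic relation violations detected.")
```

The booking-site example works the same way: the follow-up search with an added price or star-rating filter is expected to return a subset of the source search's results, and any result that appears only in the filtered search reveals a failure.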
3228812
https://en.wikipedia.org/wiki/M%20%28comic%20strip%29
M (comic strip)
M is a Norwegian comic strip, written by Mads Eriksen. It was published in daily newspapers such as Dagbladet, as well as in its own monthly magazine M, until December 2012. Initially published as a guest strip in Pondus magazine, it established its own dedicated magazine near the end of 2006. The strip gained a large fan base in Norway, much due to its quirky humour and numerous pop-cultural references. To this day, reprints are regularly published by Kollektivet Synopsis A surreal, semi-autobiographical, strip, its events revolve around Mads Eriksen and The Madam, his live-in girlfriend. The Madam is always drawn with asterisks for eyes, because, Eriksen claims, it is impossible to know what is on her mind (in one strip, Eriksen states that The Madam is drawn with asterisks because "she mixes pills and alcohol"). One characteristic of the comic is the unique T-shirts the characters wear for every strip, often containing pop-culture references. Allegations of blasphemy In November 2006, a 94-year-old woman from Selbu brought blasphemy charges against Adresseavisen, a Trondheim-based newspaper that published M, over the content of two strips. In both, Jesus Christ is seen advertising for different products, such as "Pilate's crucifixion cream for manly men". The woman who pressed charges stated that "this has nothing to do with freedom of speech. This is much worse than the Muhammad cartoons." Eriksen responded that M is a strip that pokes fun at all sorts of myths, both old and new. A few days later, Dagbladet published a strip in which Mads wins a boxing match against God on a walkover. No further action was taken, and there has been no censorship in Norway over charges of blasphemy content since Life of Brian was banned from cinemas in 1979. Pop-cultural references Pop cultural phenomena are frequently cited. Among these are: Alien—Eriksen gets attacked by an alien that hatched from an expired egg. Douglas Adams and his The Hitch Hiker's Guide to the Galaxy science fiction novel and radio show. Firefly—M refuses to breathe before the cancelled TV-show is brought back on the air. However, the character of Mal is incorrectly referred to as Mel in one strip. Gremlins—M brings a Mogwai home and wants to feed it after midnight... Hunter S. Thompson—Following the demise of Eriksen's coffee pot, it was blown out of a cannon, while Eriksen was wearing a T-shirt that read gonzo, mimicking Hunter S. Thompson's burial. Linkin Park and Fred Durst— He has, both in strips and in a national newspaper interview stated that he would rather have his pubic hairs individually removed than listen to Linkin Park. Eriksen himself looks quite similar to Fred Durst, exploited in some strips Linux—In one strip, Mads' brother (a computer-oracle) offers advice on how to get rid of trojans, malware, etc., always concluding with "install Linux". At one point a reader asks which Linux distribution he should use, at which point Mads' brother replies: "Gentoo for you, Ubuntu for grandma, HA-HA-HA!!!" In a current ongoing series, Mads has gotten rid of windows and tries to install Linux, but gets help from "Linux Man", A.K.A. his brother. LOST -There has been one reference to the TV-series LOST The Lord of the Rings—The Madam is especially fond of Viggo Mortensen and Aragorn. Marvel Comics—Galactus, antagonist of The Silver Surfer, has appeared in M on some occasions. The Phantom—seen intimately embracing a member of "The Singh Brotherhood", The Phantom's arch-enemies, in an M-strip. 
Poltergeist—in a story arc, Eriksen's dead goldfish comes to haunt his apartment, much in the fashion of Poltergeist. Pondus and Frode Øverli—In an ongoing "strip war" the two comics have regularly made friendly stabs at each other. It started with Eriksen doing a strip about Øverli sending text messages to all comics artists in Norway, parodying the style of John Arne Riise. Øverli retorted with an attack on M, and everything escalated from there. Stephen King's The Shining The Simpsons—There have been several references to The Simpsons in the comic. The Silence of the Lambs—In one strip, Mads dresses up like the villain Jame Gumb. Star Wars—Mads Eriksen is a big fan of the movie series, and the strip frequently makes references, some so specific that getting the joke is restricted to extreme fans. As Eriksen is only satisfied with the original trilogy, the strip's slogan is "Mye morsommere enn Jar-Jar" ("a lot funnier than Jar-Jar"). The Transformers—referenced several times. Twin Peaks—Eriksen once uses the voice recording feature on his mobile phone to record a message to "Diane" in exactly the same fashion as Agent Cooper and his tape recorder in Twin Peaks. Wikipedia—A door-to-door salesman attempts to sell Mads an encyclopedia, something which causes Mads to go into a laughing fit. The Madam remarks "What are we? The Flintstones", commenting that encyclopedias in book form are obsolete. Throughout the strip, Mads is wearing a T-shirt with a symbol resembling the Wikipedia logo. Windows Vista—Mads' brother reviews Vista, beginning by making farting sounds with his mouth and continuing on to pull his T-shirt over his head while still making farting sounds. World of Warcraft—Mads' friend Øyvind is once seen with a shirt saying "Loot Ninja", a term used in the online game. Yuggoth—Eriksen finds a homeless creature and takes it home. The creature turns out to be Cthulhu from H. P. Lovecraft's The Call of Cthulhu. At one point Eriksen says "Cthulhu Fhtagn"—"Cthulhu Dreams"—while the creature is napping in his living room. David Hasselhoff—David Hasselhoff saves M from a monster clown doll after M dresses in women's clothing and a blond wig and begins to scream. Hasselhoff hears the screams from a beach, says "that's a blond girl scream!", and comes to M's rescue, fighting off the monster clown doll. Tom Cruise and Katie Holmes—The Madam reads the latest gossip about the then-couple, and Eriksen pretends he wants to write it down in an imaginary book where he collects "all the important news" about the couple, only to discover that he has filled up the "book" and therefore must hurry to the store to buy a new one. Sources M albums M book, M magazine Schibsted Footnotes Comics characters introduced in 2004 Norwegian comic strips Gag-a-day comics Autobiographical comics Satirical comics 2004 comics debuts Fictional Norwegian people Norwegian comics characters 2004 establishments in Norway
15822958
https://en.wikipedia.org/wiki/PlayStation%202
PlayStation 2
The PlayStation 2 (PS2) is a home video game console developed and marketed by Sony Computer Entertainment. It was first released in Japan on March 4, 2000, in North America on October 26, 2000, in Europe on November 24, 2000, and in Australia on November 30, 2000. It is the successor to the original PlayStation, as well as the second installment in the PlayStation brand of consoles. As a sixth-generation console, it competed with Sega's Dreamcast, Nintendo's GameCube, and Microsoft's Xbox. Announced in 1999, the PS2 offered backward-compatibility for its predecessor's DualShock controller, as well as its games. The PS2 is the best-selling video game console of all time, having sold over 155 million units worldwide. Over 3,800 game titles have been released for the PS2, with over 1.5 billion copies sold. Sony later manufactured several smaller, lighter revisions of the console known as Slimline models in 2004. Even with the release of its successor, the PlayStation 3, the PS2 remained popular well into the seventh generation. It continued to be produced until 2013 when Sony finally announced that it had been discontinued after over twelve years of production—one of the longest lifespans of a video game console. Despite the announcement, new games for the console continued to be produced until the end of 2013, including Final Fantasy XI: Seekers of Adoulin for Japan, FIFA 14 for North America, and Pro Evolution Soccer 2014 for Europe. Repair services for the system in Japan ended on September 7, 2018. History Background Released in 1994, the original PlayStation proved to be a phenomenal worldwide success and signalled Sony's rise to power in the video game industry. Its launch elicited critical acclaim and strong sales; it eventually became the first computer entertainment platform to ship over 100 million units. The PlayStation enjoyed particular success outside Japan in part due to Sony's refined development kits, large-scale advertising campaigns, and strong third-party developer support. By the late 1990s Sony had dethroned established rivals Sega and Nintendo in the global video game market. Sega, spurred on by its declining market share and significant financial losses, launched the Dreamcast in 1998 as a last-ditch attempt to stay in the industry. By this time rumours of an upcoming PlayStation 2 were already circulating among the press. Development Though Sony has kept details of the PlayStation 2's development secret, Ken Kutaragi, the chief designer of the original PlayStation, reportedly began working on a second console around the time of its launch in late 1994. In 1997, several employees from Argonaut Games, under contract for semiconductor manufacturer LSI Corporation at the time, were instructed to fly to the US West Coast and design a rendering chip for Sony's upcoming console. Jez San, founder of Argonaut, recalled that his team had no direct contact with Sony during the development process. Unbeknownst to him, Sony was designing their own chip in-house and had instructed other companies to design rendering chips merely to diversify their options. By late 1997, reports surfaced that the PlayStation 2 was being developed and would house a built-in DVD player and internet connectivity. Officially, however, Sony continued to deny that a successor to the PlayStation was being developed. Sony announced the PlayStation 2 (PS2) on March 1, 1999. 
The video game console was positioned as a competitor to Sega's Dreamcast, the first sixth-generation console to be released, although ultimately the main rivals of the PS2 were Nintendo's GameCube and Microsoft's Xbox. The Dreamcast enjoyed a successful US launch later that year; fuelled by a large marketing campaign, it sold over 500,000 units within two weeks. Soon after the Dreamcast's North American launch, Sony unveiled the PlayStation 2 at the Tokyo Game Show on September 20, 1999. Sony showed fully playable demos of upcoming PlayStation 2 games including Gran Turismo 2000 (later released as Gran Turismo 3: A-Spec) and Tekken Tag Tournament—which showed the console's graphic abilities and power. Launch The PS2 was launched in March 2000 in Japan, October in North America, and November in Europe. Sales of the console, games and accessories pulled in $250 million on the first day, beating the $97 million made on the first day of the Dreamcast. Directly after its release, it was difficult to find PS2 units on retailer shelves due to manufacturing delays. Another option was purchasing the console online through auction websites such as eBay, where people paid over a thousand dollars for the console. The PS2 initially sold well partly on the basis of the strength of the PlayStation brand and the console's backward-compatibility, selling over 980,000 units in Japan by March 5, 2000, one day after launch. This allowed the PS2 to tap the large install base established by the PlayStation—another major selling point over the competition. Later, Sony added new development kits for game developers and more PS2 units for consumers. The PS2's built-in functionality also expanded its audience beyond the gamer, as its debut pricing was the same or less than a standalone DVD player. This made the console a low-cost entry into the home theater market. The success of the PS2 at the end of 2000 caused Sega problems both financially and competitively, and Sega announced the discontinuation of the Dreamcast in March 2001, just 18 months after its successful Western launch. Despite the Dreamcast still receiving support through 2001, the PS2 remained the only sixth-generation console for over 6 months before it faced competition from new rivals: Nintendo's GameCube and Microsoft's Xbox. Many analysts predicted a close three-way matchup among the three consoles. The Xbox had the most powerful hardware, while the GameCube was the least expensive console, and Nintendo changed its policy to encourage third-party developers. While the PlayStation 2 theoretically had the weakest specification of the three, it had a head start due to its installed base plus strong developer commitment, as well as a built-in DVD player (the Xbox required an adapter, while the GameCube lacked support entirely). While the PlayStation 2's initial games lineup was considered mediocre, this changed during the 2001 holiday season with the release of several blockbuster games that maintained the PS2's sales momentum and held off its newer rivals. Sony also countered the Xbox by securing timed PlayStation 2 exclusives for highly anticipated games such as the Grand Theft Auto series and Metal Gear Solid 2: Sons of Liberty. Sony cut the price of the console in May 2002 from US$299 to $199 in North America, making it the same price as the GameCube and $100 less than the Xbox. It also planned to cut the price in Japan around that time. It cut the price twice in Japan in 2003. 
In 2006, Sony cut the cost of the console in anticipation of the release of the PlayStation 3. Unlike Sega with the Dreamcast, Sony originally placed little emphasis on online gaming during the console's first few years, although that changed upon the launch of the online-capable Xbox. Coinciding with the release of Xbox Live, Sony released the PlayStation Network Adapter in late 2002, with several online first-party titles released alongside it, such as SOCOM U.S. Navy SEALs, to demonstrate its active support for Internet play. Sony also advertised heavily, and its online model had the support of Electronic Arts (EA); EA did not offer online Xbox titles until 2004. Although Sony and Nintendo both started late, and although both followed a decentralized model of online gaming in which the responsibility for providing servers rests with the developer, Sony's moves made online gaming a major selling point of the PS2. In September 2004, in time for the launch of Grand Theft Auto: San Andreas, Sony revealed a newer, slimmer model of the PlayStation 2. In preparation for the launch of the new models (SCPH-700xx-9000x), Sony stopped making the older models (SCPH-3000x-500xx) to let the distribution channel empty its stock of the units. An apparent manufacturing issue—Sony reportedly underestimated demand—caused an initial slowdown in producing the new unit, due in part to shortages between the time Sony cleared out the old units and the time the new units were ready. The issue was compounded in Britain when a Russian oil tanker became stuck in the Suez Canal, blocking a ship from China carrying PS2s bound for the UK. During one week in November, British sales totalled 6,000 units—compared to 70,000 units a few weeks prior. There were shortages in more than 1,700 shops in North America on the day before Christmas. In 2010, Sony introduced a TV with a built-in PlayStation 2. Hardware Technical specifications The PlayStation 2's main central processing unit (CPU) is the 128-bit R5900-based "Emotion Engine", custom-designed by Sony and Toshiba. The Emotion Engine consists of eight separate "units", each performing a specific task, integrated onto the same die. These units include a central CPU core, two Vector Processing Units (VPU), a 10-channel DMA unit, a memory controller, and an Image Processing Unit (IPU). There are three interfaces: an input/output interface to the I/O processor, a graphics interface to the Graphics Synthesiser, and a memory interface to the system memory. The Emotion Engine CPU has a clock rate of 294.9 MHz (299 MHz on newer versions), performs about 6,000 MIPS, and has a floating-point performance of 6.2 GFLOPS. The GPU is likewise custom-designed for the console and named the "Graphics Synthesiser". It has a fillrate of 2.4 gigapixels per second and is capable of rendering up to 75 million polygons per second. The GPU runs at a clock frequency of 150 MHz and, with its 4 MB of DRAM, is capable of outputting a display of up to 1280 x 1024 pixels on both PAL and NTSC televisions. The PlayStation 2 has a maximum colour depth of 16.7 million true colours. When accounting for features such as lighting, texture mapping, artificial intelligence, and game physics, the console has a real-world performance of 25 million polygons per second. The PlayStation 2 also features two USB ports and, on SCPH-10000 to 3900x models only, one IEEE 1394 (FireWire) port. A hard disk drive can be installed in an expansion bay on the back of the console, and is required to play certain games, notably the popular Final Fantasy XI. 
Software for the PlayStation 2 was distributed primarily on blue-tinted DVD-ROMs, with some titles published in CD-ROM format. In addition, the console can play audio CDs and DVD films and is backward-compatible with almost all original PlayStation games. The PlayStation 2 also supports PlayStation memory cards and controllers, although original PlayStation memory cards only work with original PlayStation games and the controllers may not support all functions (such as analogue buttons) for PlayStation 2 games. The standard PlayStation 2 memory card has an 8 megabyte (MB) capacity. A variety of non-Sony-manufactured memory cards are available for the PlayStation 2, allowing for a memory capacity larger than the standard 8 MB. Models The PlayStation 2 has undergone many revisions, some only of internal construction and others involving substantial external changes. The PS2 is primarily differentiated between models featuring the original "fat" case design and "slimline" models, which were introduced at the end of 2004. In 2010, the Sony Bravia KDL-22PX300 was made available to consumers. It was a 22" HD-Ready television which incorporated a built-in PlayStation 2. The PS2's standard color is matte black. Several variations in color were produced in different quantities and regions, including ceramic white, light yellow, metallic blue (aqua), metallic silver, navy (star blue), opaque blue (astral blue), opaque black (midnight black), pearl white, sakura purple, satin gold, satin silver, snow white, super red, and transparent blue (ocean blue), as well as a limited-edition pink, which was distributed in some regions such as Oceania and parts of Asia. In September 2004, Sony unveiled its third major hardware revision. Available in late October 2004, it was smaller, thinner, and quieter than the original versions and included a built-in Ethernet port (in some markets it also had an integrated modem). Due to its thinner profile, it did not contain the 3.5" expansion bay and therefore did not support the internal hard disk drive. It also lacked an internal power supply until a later revision (excluding the Japan version), similar to the GameCube, and had a modified Multitap expansion. The removal of the expansion bay was criticized as a limitation because of titles such as Final Fantasy XI, which required the HDD. Sony also manufactured a consumer device called the PSX that can be used as a digital video recorder and DVD burner in addition to playing PS2 games. The device was released in Japan on December 13, 2003, and was the first Sony product to include the XrossMediaBar interface. It did not sell well in the Japanese market and was not widely released anywhere else. Video and audio The PlayStation 2 can natively output video resolutions on SDTV and HDTV from 480i to 480p, while some games, such as Gran Turismo 4 and Tourist Trophy, are known to support up-scaled 1080i resolution, using any of the following standards: composite video (480i), S-Video (480i), RGB (480i/p), VGA (for progressive-scan games and PS2 Linux only), component video (which displays most original PlayStation games in their native 240p mode, a mode most HDTV sets do not support), and D-Terminal. Cables are available for all of these signal types; these cables also output analogue stereo audio. Additionally, an RF modulator is available for the system to connect to older TVs. 
Online support PlayStation 2 users had the option to play select games over the Internet, using dial-up or a broadband Internet connection. The PlayStation 2 Network Adaptor was required for the original models, while the slim models included networking ports on the console. Instead of having a unified, subscription-based online service like Xbox Live, as competitor Microsoft later chose for its Xbox console, online multiplayer functionality on the PlayStation 2 was the responsibility of the game publisher and ran on third-party servers. Many games that supported online play exclusively supported broadband Internet access. Controllers The PlayStation 2's DualShock 2 controller retains most of the same functionality as its predecessor. However, it includes analogue pressure sensitivity with over 100 individual levels of depth on the face, shoulder and D-pad buttons, replacing the digital buttons of the original. Like its predecessor, the DualShock 2 controller has force feedback, or "vibration" functionality. It is lighter and includes two more levels of vibration. Specialized controllers include light guns (GunCon), fishing rod and reel controllers, a Dragon Quest VIII "slime" controller, a Final Fantasy X-2 "Tiny Bee" dual pistol controller, an Onimusha 3 katana controller, and a Resident Evil 4 chainsaw controller. Peripherals Optional hardware includes additional DualShock or DualShock 2 controllers, a PS2 DVD remote control, an internal or external hard disk drive (HDD), a network adapter, horizontal and vertical stands, PlayStation or PS2 memory cards, the multitap for PlayStation or PS2, a USB motion camera (EyeToy), a USB keyboard and mouse, and a headset. The original PS2 multitap (SCPH-10090) cannot be plugged into the newer slim models. The multitap connects to the memory card slot and the controller slot, and the memory card slot on the slimline is shallower. New slim-design multitaps (SCPH-70120) were manufactured for these models; however, third-party adapters also permit original multitaps to be used. Early versions of the PS2 could be networked via an i.LINK port, though this had little game support and was dropped. Some third-party manufacturers have created devices that allow disabled people to access the PS2 through ordinary switches and other adapted inputs. Some third-party companies, such as JoyTech, have produced LCD monitor and speaker attachments for the PS2, which attach to the back of the console. These allow users to play games without access to a television as long as there is access to mains electricity or a similar power source. These screens can fold down onto the PS2 in a similar fashion to laptop screens. There are many accessories for musical games, such as dance pads for Dance Dance Revolution, In the Groove, and Pump It Up titles and High School Musical 3: Senior Year Dance; Konami microphones for use with the Karaoke Revolution games; dual microphones (sold with and used exclusively for SingStar games); various "guitar" controllers (for the Guitar Freaks and Guitar Hero series); a drum set controller (sold in a box set, or by itself, with a "guitar" controller and a USB microphone, for use with Rock Band and the Guitar Hero series from World Tour onward); and a taiko drum controller for Taiko: Drum Master. Unlike the PlayStation, which requires the use of an official Sony PlayStation Mouse to play mouse-compatible games, the few PS2 games with mouse support work with a standard USB mouse as well as a USB trackball. 
In addition, some of these games also support the usage of a USB keyboard for text input, game control (instead of a DualShock or DualShock 2 gamepad, in tandem with a USB mouse), or both. Game library PlayStation 2 software is distributed on CD-ROM and DVD-ROM; the two formats are differentiated by their discs' bottoms, with CD-ROMs being blue and DVD-ROMs being silver. The PlayStation 2 offered some particularly high-profile exclusive games. Most main entries in the Grand Theft Auto, Final Fantasy, and Metal Gear Solid series were released exclusively for the console. Several prolific series got their start on the PlayStation 2, including God of War, Ratchet & Clank, Jak and Daxter, Devil May Cry, Kingdom Hearts, and Sly Cooper. Grand Theft Auto: San Andreas was the best-selling game on the console. Game releases peaked in 2004, but declined with the release of the PlayStation 3 in 2006. The last new games for the console were Final Fantasy XI: Seekers of Adoulin in Asia, FIFA 14 in North America, and Pro Evolution Soccer 2014 in Europe. As of June 30, 2007, a total of 10,035 software titles had been released worldwide including games released in multiple regions as separate titles. Reception Initial reviews in 2000 of the PlayStation 2 highly acclaimed the console, with reviewers commending its hardware and graphics capabilities, its ability to play DVDs, and the system's backwards compatibility with games and hardware for the original PlayStation. Early points of criticism included the lack of online support at the time, its inclusion of only two controller ports, and the system's price at launch compared to the Dreamcast in 2000. PC Magazine in 2001 called the console "outstanding", praising its "noteworthy components" such as the Emotion Engine CPU, 32MB of RAM, support for IEEE 1394 (branded as "i.LINK" by Sony and "FireWire" by Apple), and the console's two USB ports while criticizing its "expensive" games and its support for only two controllers without the multitap accessory. Later reviews, especially after the launch of the competing GameCube and Xbox systems, continued to praise the PlayStation 2's large game library and DVD playback, while routinely criticizing the PlayStation 2's lesser graphics performance compared to the newer systems and its rudimentary online service compared to Xbox Live. In 2002, CNET rated the console 7.3 out of 10, calling it a "safe bet" despite not being the "newest or most powerful", noting that the console "yields in-game graphics with more jagged edges". CNET also criticized the DVD playback functionality, claiming that the console's video quality was "passable" and that the playback controls were "rudimentary", recommending users to purchase a remote control. The console's two controller ports and expensiveness of its memory cards were also a point of criticism. The slim model of the PlayStation 2 received positive reviews, especially for its incredibly small size and built-in networking. The slim console's requirement for a separate power adapter was often criticized while the top-loading disc drive was often noted as being far less likely to break compared to the tray-loading drive of the original model. Sales Demand for the PlayStation 2 remained strong throughout much of its lifespan, selling over 1.4 million units in Japan by March 31, 2000. Over 10.6 million units were sold worldwide by March 31, 2001. 
In 2005, the PlayStation 2 became the fastest game console to reach 100 million units shipped, accomplishing the feat within 5 years and 9 months from its launch; this was surpassed 4 years later when the Nintendo DS reached 100 million shipments in 4 years and 5 months from its launch. By July 2009, the system had sold 138.8 million units worldwide, with 51 million of those units sold in PAL regions. Overall, over 155 million PlayStation 2 units were sold worldwide by March 31, 2012, the year Sony officially stopped supplying updated sales numbers of the system. Homebrew development Using homebrew programs, it is possible to play various audio and video file formats on a PS2. Homebrew programs can also play patched backups of original PS2 DVD games on unmodified consoles and install retail discs to an installed hard drive on older models. Homebrew emulators of older computer and gaming systems have been developed for the PS2. Sony released a Linux-based operating system, Linux for PlayStation 2, for the PS2 in a package that also includes a keyboard, mouse, Ethernet adapter and HDD. In Europe and Australia, the PS2 comes with a free Yabasic interpreter on the bundled demo disc. This allows users to create simple programs for the PS2. A port of the NetBSD project and BlackRhino GNU/Linux, an alternative Debian-based distribution, are also available for the PS2. Successor The PlayStation 3 was released in Japan and North America in November 2006 and Europe in March 2007. See also Linux for PlayStation 2 PCSX2 – PlayStation 2 (PS2) emulator for Microsoft Windows, Linux, and macOS PlayStation Broadband Navigator References Citations Bibliography 2000s in video gaming 2000s toys 2010s toys 2000 in video gaming Backward-compatible video game consoles Computer-related introductions in 2000 Discontinued products Home video game consoles PlayStation (brand) Products introduced in 2000 Products and services discontinued in 2013 Sixth-generation video game consoles Sony consoles
55711036
https://en.wikipedia.org/wiki/History%20of%20computing%20in%20the%20Soviet%20Union
History of computing in the Soviet Union
The history of computing in the Soviet Union began in the late 1940s, when the country began to develop its first Small Electronic Calculating Machine (MESM) at the Kiev Institute of Electrotechnology in Feofaniya. Initial ideological opposition to cybernetics in the Soviet Union gave way to Khrushchev-era policies that encouraged computer production. By the early 1970s, the uncoordinated work of competing government ministries had left the Soviet computer industry in disarray. Due to a lack of common standards for peripherals and a lack of digital storage capacity, the Soviet Union developed a significant technological lag behind the Western semiconductor industry. The Soviet government decided to abandon the development of original computer designs and encouraged the cloning of existing Western systems (for example, the 1801 CPU series was scrapped in favor of the PDP-11 ISA by the early 1980s). Soviet industry lacked the technological means to mass-produce computers to acceptable quality standards, and locally manufactured copies of Western hardware were unreliable. As personal computers spread to industries and offices in the West, the Soviet Union's technological lag increased. Nearly all Soviet computer manufacturers ceased operations after the breakup of the Soviet Union. The few companies that survived into the 1990s used foreign components and never achieved large production volumes. History Early history In 1936, an analog computer known as a water integrator was designed by Vladimir Lukyanov. It was the world's first computer for solving partial differential equations. The Soviet Union began to develop digital computers after World War II. A universally programmable electronic computer was created by a team of scientists directed by Sergey Lebedev at the Kiev Institute of Electrotechnology in Feofaniya. The computer, known as MESM, became operational in 1950. Some authors have also depicted it as the first such computer in continental Europe, even though the Zuse Z4 and the Swedish BARK preceded it. The MESM's vacuum tubes were obtained from radio manufacturers. The attitude of Soviet officials to computers was skeptical or hostile during the Stalinist era. Government rhetoric portrayed cybernetics in the Soviet Union as a capitalist attempt to further undermine workers' rights. The Soviet weekly newspaper Literaturnaya Gazeta published a 1950 article strongly critical of Norbert Wiener and his book, Cybernetics: Or Control and Communication in the Animal and the Machine, describing Wiener as one of the "charlatans and obscurantists whom capitalists substitute for genuine scientists". After the publication of the article, his book was removed from Soviet research libraries. The first large-scale computer, the BESM-1, was assembled in Moscow at the Lebedev Institute of Precision Mechanics and Computer Engineering. Soviet work on computers was first made public at the Darmstadt Conference in 1955. Post-Stalin era As in the United States, early computers were intended for scientific and military calculations. Automatic data processing systems made their debut by the mid-1950s with the Minsk and Ural systems, both designed by the Ministry of Radio Technology. The Ministry of Instrument Making also entered the computer field with the ASVT system, which was based on the PDP-8. The Strela computer, commissioned in December 1956, performed calculations for Yuri Gagarin's first manned spaceflight. The Strela was designed by Special Design Bureau 245 (SKB-245) of the Ministry of Instrument Making. 
Strela chief designer Y. Y. Bazilevsky received the Hero of Socialist Labor title for his work on the project. Setun, an experimental ternary computer, was designed and manufactured in 1959. The Khrushchev Thaw relaxed ideological limitations, and by 1961 the government encouraged the construction of computer factories. The Mir-1, Mir-2 and Mir-3 computers were produced at the Kiev Institute of Cybernetics during the 1960s. Victor Glushkov began his work on OGAS, a real-time, decentralised, hierarchical computer network, in the early 1960s, but the project was never completed. Soviet factories began manufacturing transistor computers during the early years of the decade. At that time, ALGOL was the most common programming language in Soviet computing centers. ALGOL 60 was used with a number of domestic variants, including ALGAMS, MALGOL and Alpha. ALGOL remained the most popular language for university instruction into the 1970s. The MINSK-2 was a solid-state digital computer that went into production in 1962, and the Central Intelligence Agency attempted to obtain a model. The BESM-6, introduced in 1965, performed at about 800 KIPS on the Gibson Mix benchmark—ten times greater than any other serially-produced Soviet computer of the period, and similar in performance to the CDC 3600. From 1968 to 1987, 355 BESM-6 units were produced. With instruction pipelining, memory interleaving and virtual address translation, the BESM-6 was advanced for the era; however, it was less well known at the time than the MESM. The Ministry of the Electronics Industry was established in 1965, ending the Ministry of Radio Technology's primacy in computer production. The following year, the Soviet Union signed a cooperation agreement with France to share research in the computing field after the United States prevented France from purchasing a CDC 6600 mainframe. In 1967, the Unified System of Electronic Computers project was launched to create a general-purpose computer with the other Comecon countries. Soyuz 7K-L1 was the first Soviet piloted spacecraft with an onboard digital computer, the Argon-11S. Construction of the Argon-11S was completed in 1968 by the Scientific Research Institute of Electronic Machinery. According to Piers Bizony, lack of computing power was a factor in the failure of the Soviet manned lunar program. 1970s By the early 1970s, the lack of common standards in peripherals and digital capacity led to a significant technological lag behind Western producers. Hardware limitations forced Soviet programmers to write programs in machine code until the early 1970s. Users were expected to maintain and repair their own hardware; local modifications made it difficult (or impossible) to share software, even between similar machines. According to the Ninth five-year plan (1971–1975), Soviet computer production would increase by 2.6 times to a total installed base of 25,000 by 1975, implying about 7,000 computers in use . The plan discussed producing in larger quantities the integrated circuit-based Ryad, but BESM remained the most common model, with ASVT still rare. Rejecting Stalin's opinion, the plan foresaw using computers for national purposes such as widespread industrial automation, econometrics, and a statewide central planning network. 
Some experts such as Barry Boehm of RAND and Victor Zorza thought that Soviet technology could catch up to the West with intensive effort like the Soviet space program, but others such as Marshall Goldman believed that such was unlikely without capitalist competition and user feedback, and failures of achieving previous plans' goals. The government decided to end original development in the industry, encouraging the pirating of Western systems. An alternative option, a partnership with the Britain-based International Computers Limited, was considered but ultimately rejected. The ES EVM mainframe, launched in 1971, was based on the IBM/360 system. The copying was possible because although the IBM/360 system implementation was protected by a number of patents, IBM published a description of the system's architecture (enabling the creation of competing implementations). The Soviet Academy of Sciences, which had been a major player in Soviet computer development, could not compete with the political influence of the powerful ministries and was relegated to a monitoring role. Hardware research and development became the responsibility of research institutes attached to the ministries. By the early 1970s, with chip technology becoming increasingly relevant to defense applications, Zelenograd emerged as the center of the Soviet microprocessing industry; foreign technology designs were imported, legally or otherwise. The Ninth five-year plan approved a scaled-back version of the earlier OGAS project, and the EGSVT network, which was to link the higher echelons of planning departments and administrations. The poor quality of Soviet telephone systems impeded remote data transmission and access. The telephone system was barely adequate for voice communication, and a Western researcher deemed it unlikely that it could be significantly improved before the end of the 20th century. In 1973, Lebedev stepped down from his role as director of the Institute of Precision Mechanics and Computer Engineering. He was replaced by Vsevolod Burtsev, who promoted development of the Elbrus computer series. In the spirit of detente, in 1974 the Nixon administration decided to relax export restrictions on computer hardware and raised the allowed computing power to 32 million bits per second. In 1975, the Soviet Union placed an order with IBM to supply process-control and management computers for its new Kamaz truck plant. IBM systems were also purchased for Intourist to establish a computer reservation system before the 1980 Summer Olympics. Early 1980s The Soviet computer industry continued to stagnate through the 1980s. As personal computers spread to offices and industries in the United States and most Western countries, the Soviet Union failed to keep up. By 1989, there were over 200,000 computers in the country. In 1984 the Soviet Union had about 300,000 trained programmers, but they did not have enough equipment to be productive. Although the Ministry of Radio Technology was the leading manufacturer of Soviet computers by 1980, the ministry's leadership viewed the development of a prototypical personal computer with deep skepticism and thought that a computer could never be personal. The following year, when the Soviet government adopted a resolution to develop microprocessor technology, the ministry's attitude changed. 
The spread of computer systems in Soviet companies was similarly slow, with one-third of Soviet plants with over 500 workers having access to a mainframe computer in 1984 (compared to nearly 100 percent in the United States). The success of Soviet managers was measured by the degree to which they met plan goals, and computers made it more difficult to alter accounting calculations to artificially reach targets; companies with computer systems seemed to perform worse than companies without them. The computer hobby movement emerged in the Soviet Union during the early 1980s, drawing from a long history of radio and electric hobbies. In 1978, three employees of the Moscow Institute of Electronic Engineering built a computer prototype based on the new KR580IK80 microprocessor and named it Micro-80. After failing to elicit any interest from the ministries, they published schematics in Radio magazine and made it into the first Soviet DIY computer. The initiative was successful (although the necessary chips could then only be purchased on the black market), leading to the Radio-86RK and several other computer projects. Piracy was especially common in the software industry, where copies of Western applications were widespread. American intelligence agencies, having learned about Soviet piracy efforts, placed bugs in copied software which caused later, catastrophic failures in industrial systems. One such bug caused an explosion in a Siberian gas pipeline in 1982, after pump and valve settings were altered to produce pressures far beyond the tolerance of pipeline joints and welds. The explosion caused no casualties, but led to significant economic damage. In July 1984, the COCOM sanctions prohibiting the export of a number of common desktop computers to the Soviet Union were lifted; at the same time, the sale of large computers was further restricted. In 1985, the Soviet Union purchased over 10,000 MSX computers from Nippon Gakki. The state of scientific computing was particularly backwards, with the CIA commenting that "to the Soviets, the acquisition of a single Western supercomputer would give a 10%–100% increase in total scientific computing power." Perestroika A program to expand computer literacy in Soviet schools was one of the first initiatives announced by Mikhail Gorbachev after he came to power in 1985. That year, the Elektronika BK-0010 was the first Soviet personal computer in common use in schools and as a consumer product. It was the only Soviet personal computer to be manufactured in more than a few thousand units. Between 1986 and 1988, Soviet schools received 87,808 computers out of a planned 111,000. About 60,000 were BK-0010s, as part of the KUVT-86 computer-facility systems. Although Soviet hardware copies lagged somewhat behind their Western counterparts in performance, their main issue was generally-poor reliability. The Agat, an Apple II clone, was particularly prone to failure; disks read by one system could be unreadable by others. An August 1985 issue of Pravda reported, "There are complaints about computer quality and reliability". The Agat was ultimately discontinued due to problems with supplying components, such as disk drives. The Vector-06C, released in 1986, was noted for its relatively advanced graphics capability. The Vector could display up to 256 colors when the BK-0010 had only four hard-coded colors, without palettes. 
In 1987, it was learned that Kongsberg Gruppen and Toshiba had sold CNC milling machines to the Soviet Union in what became known as the Toshiba-Kongsberg scandal. The president of Toshiba resigned, and the company was threatened with a five-year ban from the US market. The passage of the Law on Cooperatives in May 1987 led to a rapid proliferation of companies trading computers and hardware components. Many software cooperatives were established, employing as much as one-fifth of all Soviet programmers by 1988. The Tekhnika cooperative, created by Artyom Tarasov, managed to sell its own software to state agencies including Gossnab. IBM-compatible Soviet-made computers were introduced during the late 1980s, but their cost put them beyond the reach of Soviet households. The Poisk, released in 1989, was the most common IBM-compatible Soviet computer. Because of production difficulties, no personal computer model was ever mass-produced. As Western technology embargoes were relaxed during the late perestroika era, the Soviets increasingly adopted foreign systems. In 1989, the Moscow Institute of Thermal Technology acquired 70 to 100 IBM XT-AT systems with 8086 microprocessors. The poor quality of domestic manufacturing led the country to import over 50,000 personal computers from Taiwan in 1989. Increasingly-large import deals were signed with Western manufacturers but, as the Soviet economy unraveled, companies struggled to obtain hard currency to pay for them and deals were postponed or canceled. Control Data Corporation reportedly agreed to barter computers for Soviet Christmas cards. Human-rights groups in the West pressured the Soviet government to grant exit visas to all computer experts who wanted to emigrate. Soviet authorities eventually complied, leading to a massive loss of talent in the computing field. 1990s and legacy In August 1990, RELCOM (a UUCP computer network working on telephone lines) was established. The network connected to EUnet through Helsinki, enabling access to Usenet. By the end of 1991, it had about 20,000 users. In September 1990, the .su domain was created. By early 1991, the Soviet Union was on the verge of collapse; procurement orders were cancelled en masse, and half-finished products from computer plants were discarded as the breakdown of the centralized supply system made it impossible to complete them. The large Minsk Computer Plant attempted to survive the new conditions by switching to the production of chandeliers. Western export restrictions on civilian computer equipment were lifted in May 1991. Although this technically allowed the Soviets to export computers to the West, their technological lag gave them no market there. News of the August 1991 Soviet coup attempt was spread to Usenet groups through Relcom. With the fall of the Soviet Union, many prominent Soviet computer developers and engineers (including future Intel processor architect Vladimir Pentkovski) moved abroad. The large companies and plants which had manufactured computers for the Soviet military ceased to exist. The few computers made in post-Soviet countries during the early 1990s were aimed at the consumer market and assembled almost exclusively with foreign components. None of these computers had large production volumes. Soviet computers remained in common use in Russia until the mid-1990s. Post-Soviet Russian consumers preferred to buy Western-manufactured computers, due to the machines' higher perceived quality. 
Western sanctions Since computers were considered strategic goods by the United States, their sale by Western countries was generally not allowed without special permission. As a result of the CoCom embargo, companies from Western Bloc countries could not export computers to the Soviet Union (or service them) without a special license. Even when sales were not forbidden by CoCom policies, the US government might still ask Western European countries to refrain from exporting computers because of foreign-policy matters, such as protesting the arrest of Soviet dissidents. Software sales were not regulated as strictly, since Western policymakers realized that software could be copied (or smuggled) much more easily. Appraisal Soviet computer software and hardware designs were often on par with Western ones, but the country's persistent inability to improve manufacturing quality meant that it could not make practical use of theoretical advances. Quality control, in particular, was a major weakness of the Soviet computing industry. The decision to abandon original development in the early 1970s, rather than closing the gap with Western technology, is seen as another factor causing the Soviet computer industry to fall further behind. According to Vlad Strukov, this decision destroyed the country's indigenous computer industry. The software industry followed a similar path, with Soviet programmers moving their focus to duplicating Western operating systems (including DOS/360 and CP/M). According to Boris Babayan, the decision was costly in terms of time and resources; Soviet scientists had to study obsolete Western software and then rewrite it, often in its entirety, to make it work with Soviet equipment. Valery Shilov considered this view subjective and nostalgic. Dismissing the notion of a "golden age" of Soviet computing hardware, he argued that except for a few world-class achievements, Soviet computers had always been far behind their Western equivalents (even before large-scale cloning). Computer manufacturers in countries such as Japan also based their early computers on Western designs, but had unrestricted access to foreign technology and manufacturing equipment. They also focused their production on the consumer market (rather than military applications), allowing them to achieve better economies of scale. Unlike Soviet manufacturers, they gained experience in marketing their products to consumers. Piracy of Western software such as WordStar, SuperCalc and dBase was endemic in the Soviet Union, a situation attributed to the inability of the domestic software industry to meet the demand for high-quality applications. Software was not shared as commonly or easily as in the West, leaving Soviet scientific users highly dependent on the applications available at their institutions. The State Committee for Computing and Informatics estimated that out of 700,000 computer programs developed by 1986, only 8,000 had been officially registered, and only 500 were deemed good enough to be distributed as production systems. According to Hudson Institute researchers Richard W. Judy and Robert W. Clough, the situation in the Soviet software industry was such that "it does not deserve to be called an industry". The Soviet Union, unlike contemporary industrializing countries such as Taiwan and South Korea, did not establish a sustainable computer industry. Robert W. 
Strayer attributed this failure to the shortcomings of the Soviet command economy, where monopolistic ministries closely controlled the activities of factories and companies. Three government ministries (the Ministry of Instrument Making, the Ministry of the Radio Industry and the Ministry of the Electronics Industry) were responsible for developing and manufacturing computer hardware. They had scant resources and overlapping responsibilities. Instead of pooling resources and sharing development, they were locked in conflicts and rivalries and jockeyed for money and influence. Soviet academia still made notable contributions to computer science, such as Leonid Khachiyan's paper, "Polynomial Algorithms in Linear Programming". The Elbrus-1, developed in 1978, implemented a two-issue out-of-order processor with register renaming and speculative execution; according to Keith Diefendorff, this was almost 15 years ahead of Western superscalar processors. Timeline November 1950 – MESM, the first universally programmable electronic computer in the Soviet Union, becomes operational. 1959 – Setun, an experimental ternary computer, is designed and manufactured. 1965 – the Ministry of the Electronics Industry is established, ending the Ministry of Radio Technology's primacy in computer production. 1971 – the ES EVM mainframe, based on the IBM System/360, is launched. 1974 – NPO Tsentrprogrammsistem (Центрпрограммсистем) is established under the Ministry of Instrument Making to act as a centralized fund and distributor of software. November 1975 – the State Committee on Inventions and Discovery rules that computer programs are ineligible for protection under the Soviet Law of Inventions. 1982 – the Belle chess machine is impounded by the United States Customs Service before it can reach a Moscow chess exhibition, on the grounds that it might be useful to the Soviet military. 1984 – the popular video game Tetris is invented by Alexey Pajitnov. August 1988 – the Soviet Union's first computer virus, known as DOS-62, is detected at the Institute of Program Systems of the Soviet Academy of Sciences. August 1990 – RELCOM (a UUCP computer network working over telephone lines) is established. December 1991 – the Soviet Union is dissolved. See also History of computer hardware in Soviet Bloc countries List of Soviet computer systems List of Soviet microprocessors List of Russian IT developers List of Russian microprocessors List of computer hardware manufacturers in the Soviet Union Internet in Russia Information technology in Russia Notes References External links Russian Virtual Computer Museum Museum of the USSR Computers history Pioneers of Soviet Computing Archive of software and documentation for Soviet computers UK-NC, DVK and BK0010. Oral history interview with Seymour E. Goodman, Charles Babbage Institute, University of Minnesota: discusses social and political analysis of computers, especially in the Soviet Union and other East Bloc states, notably the MOSAIC project, including Trip Reports, 1957-1970, 1981-1992. History of computing
41948397
https://en.wikipedia.org/wiki/10th%20British%20Academy%20Games%20Awards
10th British Academy Games Awards
The 10th British Academy Video Games Awards, awarded by the British Academy of Film and Television Arts, was an award ceremony held on 12 March 2014 at Tobacco Dock. The ceremony honoured achievement in video gaming in 2013 and was hosted by Dara Ó Briain. Winners and nominees The nominees for the 10th British Academy Video Games Awards were announced on 12 February 2014. The winners were announced during the awards ceremony on 12 March 2014. Awards Winners are shown first in bold.
{| class=wikitable
| valign="top" width="50%" |
'''The Last of Us''' – Naughty Dog/Sony Computer Entertainment
Assassin's Creed IV: Black Flag – Ubisoft Montreal/Ubisoft
Badland – Johannes Vuorinen and Juhana Myllys, Frogmind Games/Frogmind Games
Grand Theft Auto V – Rockstar North/Rockstar Games
Lego Marvel Super Heroes – Jon Burton, Arthur Parsons and Phillip Ring, Traveller's Tales/Warner Bros. Interactive Entertainment
Tomb Raider – Crystal Dynamics/Square Enix
| valign="top" |
'''Brothers: A Tale of Two Sons''' – Starbreeze Studios/505 Games
Grand Theft Auto V – Rockstar North/Rockstar Games
Papers, Please – Lucas Pope, 3909 LLC/3909 LLC
The Stanley Parable – Galactic Cafe/Galactic Cafe
Tearaway – Tarsier Studios and Media Molecule/Sony Computer Entertainment Europe
Year Walk – Simogo/Simogo
|-
| valign="top" width="50%" |
'''Tearaway''' – Tarsier Studios and Media Molecule/Sony Computer Entertainment Europe
Beyond: Two Souls – John Rostron, David Cage and Guillaume De Fondaumiere, Quantic Dream/Sony Computer Entertainment
BioShock Infinite – Patrick Balthrop, Scott Haraldsen and James Bonney, Irrational Games/2K Games
Device 6 – Simogo/Simogo
The Last of Us – Naughty Dog/Sony Computer Entertainment
Ni no Kuni: Wrath of the White Witch – Yoshiyuki Momose, Level-5/Bandai Namco Games
| valign="top" |
'''Tearaway''' – Tarsier Studios and Media Molecule/Sony Computer Entertainment Europe
Badland – Johannes Vuorinen and Juhana Myllys, Frogmind Games/Frogmind Games
Device 6 – Simogo/Simogo
Plants vs. Zombies 2: It's About Time – PopCap Games/Electronic Arts
Ridiculous Fishing – Vlambeer/Vlambeer
The Room Two – Fireproof Games
|-
| valign="top" width="50%" |
'''The Last of Us''' – Naughty Dog/Sony Computer Entertainment
Battlefield 4 – EA DICE/Electronic Arts
BioShock Infinite – Patrick Balthrop, Scott Haraldsen and James Bonney, Irrational Games/2K Games
Device 6 – Simogo/Simogo
Grand Theft Auto V – Rockstar North/Rockstar Games
Tomb Raider – Crystal Dynamics/Square Enix
| valign="top" |
'''Grand Theft Auto V''' – Rockstar North/Rockstar Games
World of Tanks – Wargaming/Wargaming
Super Mario 3D World – Nintendo EAD Tokyo and 1-Up Studio/Nintendo
The Last of Us – Naughty Dog/Sony Computer Entertainment
Dota 2 – Valve/Valve
Battlefield 4 – EA DICE/Electronic Arts
|-
| valign="top" width="50%" |
'''The Last of Us''' – Naughty Dog/Sony Computer Entertainment
Assassin's Creed IV: Black Flag – Ubisoft Montreal/Ubisoft
Grand Theft Auto V – Rockstar North/Rockstar Games
Papers, Please – Lucas Pope, 3909 LLC/3909 LLC
Super Mario 3D World – Nintendo EAD Tokyo and 1-Up Studio/Nintendo
Tearaway – Tarsier Studios and Media Molecule/Sony Computer Entertainment Europe
| valign="top" |
'''BioShock Infinite''' – James Bonney and Garry Schyman, Irrational Games/2K Games
Tearaway – Kenneth C M Young and Brian D'Oliveira, Tarsier Studios and Media Molecule/Sony Computer Entertainment Europe
Super Mario 3D World – Mahito Yokota and Koji Kondo, Nintendo EAD Tokyo and 1-Up Studio/Nintendo
The Last of Us – Gustavo Santaolalla, Naughty Dog/Sony Computer Entertainment
Beyond: Two Souls – Lorne Balfe, Quantic Dream/Sony Computer Entertainment
Assassin's Creed IV: Black Flag – Brian Tyler and Aldo Sampaio, Ubisoft Montreal/Ubisoft
|-
| valign="top" width="50%" |
'''Grand Theft Auto V''' – Rockstar North/Rockstar Games
Tearaway – Tarsier Studios and Media Molecule/Sony Computer Entertainment Europe
The Room Two – Fireproof Games
Lego Marvel Super Heroes – Jon Burton, Arthur Parsons and Phillip Ring, Traveller's Tales/Warner Bros. Interactive Entertainment
Gunpoint – Tom Francis, John Roberts and Ryan Ike, Suspicious Developments/Suspicious Developments
DmC: Devil May Cry – Ninja Theory/Capcom
| valign="top" |
'''Ashley Johnson''' – The Last of Us as Ellie
Courtnee Draper – BioShock Infinite as Elizabeth
Ellen Page – Beyond: Two Souls as Jodie
Kevan Brighting – The Stanley Parable as The Narrator
Steven Ogg – Grand Theft Auto V as Trevor Philips
Troy Baker – The Last of Us as Joel
|-
| valign="top" width="50%" |
'''Gone Home''' – Fullbright/Fullbright
The Stanley Parable – Galactic Cafe/Galactic Cafe
Remember Me – Jean-Maxime Moris, Hervé Bonin and Oskar Guilbert, Dontnod Entertainment/Capcom
Gunpoint – Tom Francis, John Roberts and Ryan Ike, Suspicious Developments/Suspicious Developments
Castles in the Sky – Jack de Quidt and Dan Pearce
Badland – Johannes Vuorinen, Juhana Myllys, Frogmind Games/Frogmind Games
| valign="top" |
'''FIFA 14''' – EA Canada/EA Sports
F1 2013 – Codemasters/Codemasters
NBA 2K14 – Visual Concepts/2K Sports
GRID 2 – Codemasters/Codemasters
Forza Motorsport 5 – Bill Giese, Dave Gierok and Barry Feather, Turn 10 Studios/Microsoft Studios
Football Manager 2014 – Sports Interactive/Sega
|-
| valign="top" width="50%" |
'''Tearaway''' – Tarsier Studios and Media Molecule/Sony Computer Entertainment Europe
Animal Crossing: New Leaf – Nintendo EAD/Nintendo
Super Mario 3D World – Nintendo EAD Tokyo and 1-Up Studio/Nintendo
Skylanders: Swap Force – Vicarious Visions/Activision
Rayman Legends – Michael Ancel, Christophe Heral and Jean-Christophe Alessandri, Ubisoft/Ubisoft
Brothers: A Tale of Two Sons – Starbreeze Studios/505 Games
| valign="top" |
'''The Last of Us''' – Neil Druckmann and Bruce Straley, Naughty Dog/Sony Computer Entertainment
The Stanley Parable – Galactic Cafe/Galactic Cafe
Ni no Kuni: Wrath of the White Witch – Akihiro Hino, Level-5/Bandai Namco Games
Grand Theft Auto V – Dan Houser, Rupert Humphries, Rockstar North/Rockstar Games
Gone Home – Fullbright/Fullbright
Brothers: A Tale of Two Sons – Starbreeze Studios/505 Games
|-
| valign="top" width="50%" |
'''Grand Theft Auto V''' – Rockstar North/Rockstar Games
Tomb Raider – Crystal Dynamics/Square Enix
Tearaway – Tarsier Studios and Media Molecule/Sony Computer Entertainment Europe
Papers, Please – Lucas Pope, 3909 LLC/3909 LLC
The Last of Us – Naughty Dog/Sony Computer Entertainment
Assassin's Creed IV: Black Flag – Ubisoft Montreal/Ubisoft
| valign="top" |
'''Papers, Please''' – Lucas Pope, 3909 LLC/3909 LLC
Civilization V: Brave New World – Firaxis Games/2K Games
Democracy 3 – Cliff Harris, Positech Games/Positech Games
Forza Motorsport 5 – Bill Giese, Dave Gierok and Barry Feather, Turn 10 Studios/Microsoft Studios
Surgeon Simulator 2013 – Bossa Studios/Bossa Studios
XCOM: Enemy Within – Firaxis Games/2K Games
|}
BAFTA Fellowship Award: Rockstar Games. BAFTA Ones to Watch Award: Size DOES Matter – Mattis Delerud, Silje Dahl, Lars Anderson, Trond Fasteraune and Nick La Rooy. Games with multiple nominations and wins Nominations Wins References External links 10th BAFTA Video Games Awards page British Academy Video Games Awards ceremonies 2014 awards in the United Kingdom 2013 in video gaming March 2014 events in the United Kingdom
2050495
https://en.wikipedia.org/wiki/Princess%20Fiona
Princess Fiona
Princess Fiona is a fictional character in DreamWorks' Shrek franchise, first appearing in the animated film Shrek (2001). One of the film series' main characters, Fiona is introduced as a beautiful princess placed under a curse that transforms her into an ogre at night. She is initially determined to break the enchantment by kissing a prince, only to meet and fall in love with Shrek, an ogre, instead. The character's origins and relationships with other characters are further explored in subsequent films; she introduces her new husband Shrek to her parents in Shrek 2 (2004), becomes a mother by Shrek the Third (2007), and is an empowered warrior in Shrek Forever After (2010), much of which takes place in an alternate reality in which Fiona and Shrek never meet. Created by screenwriters Ted Elliott and Terry Rossio, Fiona is loosely based on the unsightly princess in William Steig's children's book Shrek! (1990), from which her role and appearance were significantly modified. The screenwriters adapted the character into a princess under a shapeshifting enchantment, an idea initially greatly contested by other filmmakers. Fiona is voiced by actress Cameron Diaz. Comedian and actress Janeane Garofalo was originally cast as the character until she was fired from the first film with little explanation. Fiona was one of the first human characters to have a lead role in a computer-animated film, thus the animators aspired to make her both beautiful and realistic in appearance. However, an early test screening resulted in children reacting negatively towards the character's uncanny realism, prompting the animators to re-design Fiona into a more stylized, cartoonish heroine. Several revolutionary achievements in computer animation were applied to the character to render convincing skin, hair, clothing and lighting. The character is considered a parody of traditional princesses in both fairy tales and animated Disney films. Reception towards Fiona has been mostly positive, with critics commending her characterization, martial arts prowess and Diaz's performance. However, reviewers were divided over the character's human design, some of whom were impressed by her technological innovations, while others found her realism unsettling and too similar to Diaz. Several media publications consider Fiona a feminist icon, crediting her with subverting princess and gender stereotypes by embracing her flaws. Diaz also became one of Hollywood's highest-paid actresses due to her role in the Shrek franchise, earning $3 million for her performance in the first film and upwards of $10 million for each sequel. Development Creation and writing Shrek is loosely based on William Steig's children's book Shrek! (1990), but significantly deviates from its source material, particularly pertaining to its main characters. In Steig's story, a witch foretells that Shrek will marry an unnamed princess, who she describes as uglier in appearance than Shrek himself, enticing the ogre to seek her. Described as "the most stunningly ugly princess on the surface of the planet", Steig's princess bears little resemblance to Fiona, but the two characters are immediately attracted to each other and wed with little conflict. Animation historian Maureen Furniss, writing for Animation World Network, identified the fact that Shrek's love interest is altered from "a really ugly woman" into a beautiful princess as the film's most significant modification. 
In an effort to expand the plot while making its characters more visually appealing and marketable "from a Hollywood perspective", the writers decided to adapt Shrek!'s princess into a beautiful maiden who has been cursed to become ugly only during evenings, which she is forced to conceal from the film's other characters, thus providing "narrative motivation for not showing her ogre manifestation." Furthermore, Furniss observed that Lord Farquaad's romantic interest in Fiona is more practical since he is vain and only attracted to her beauty, while his main motivation remains to marry a princess so that he can rule Duloc. Feeling that her curse remaining undiscovered until the end was unsuitable for a feature-length film, screenwriters Ted Elliott and Terry Rossio introduced the concept of a shapeshifting princess, which was rejected by the other filmmakers for six months because they found it "too complex" for a fairy tale. Elliott and Rossio countered that similar ideas had been used successfully in Disney's The Little Mermaid (1989) and Beauty and the Beast (1991), ultimately convincing the studio by referring to Fiona as an enchanted princess instead. Some writers expressed concerns over whether turning Fiona into an ogre full-time once she professes her love for Shrek suggested "that ugly people belong with ugly people." Rossio explained that since Fiona shape-shifts, the best moral is "'Even princesses who change their shapes can find love too.' And Shrek would love her in all of her varied forms." Elliott elaborated that this prompts audiences to debate whether Fiona's "true form" is beautiful or unattractive: "Her true form is beautiful by day, ugly by night ... and she was trying to rid herself of part of who she truly was, because society maintained that was wrong." The studio ultimately conceded that Fiona remain an ogre, which Elliott considers to be "a more conventional idea". In early drafts of the script, Fiona is born an ogre to human parents, who lock her in a tower to conceal the true nature of their daughter's appearance, lying to the kingdom that she is a beautiful princess. One day, Fiona escapes and seeks assistance from a witch named Dama Fortuna, who offers her a choice between two potions: one will turn the princess beautiful, while the other guarantees Fiona's happily ever after. Fiona unwittingly drinks the "Beauty" potion without realizing there is a catch: it renders her human during the day, only to revert her to an ogre every night. The writers originally intended for Fiona's backstory to be fully animated and used as the film's prologue, but discarded the idea after test audiences deemed it too depressing. Entitled "Fiona's Prologue", the sequence was storyboarded but never animated. A second abandoned scene, entitled "Fiona Gets Them Lost", follows Fiona, Shrek and Donkey after she gets them lost and they become trapped in a cave; an action sequence inspired by the film Indiana Jones and the Temple of Doom (1984) ensues. In the writers' original draft, Fiona's monstrous form was to have a physical altercation with Shrek, reminiscent of Hong Kong action films, once he discovers her and assumes that the monster has harmed Fiona. The idea was abandoned because, according to Elliott, few were familiar with Hong Kong cinema's "emphasis on action and physicality" in comparison to more violent American films: "no matter how much we described it, [the studio] ... 
imagined this violent, knock-down, Steven Seagal-type, bone-cracking fight", while some female crew members protested that the concept was misogynistic towards Fiona. Elliott and Rossio had suggested revisiting the discussion about whether Fiona's true nature is beautiful or an ogre in a potential sequel, but the idea was rejected. The directors spent four months brainstorming several new ideas for the sequel, before ultimately determining that the only logical "jump off point" was one of the few areas not explored in the first film: Fiona's parents' reaction to their daughter both marrying and remaining an ogre. Shrek 2 director Kelly Asbury explained that introducing Fiona's parents presented an entirely "new story to go on, and a whole new place to go." Additionally, Shrek 2 reveals why Fiona was locked in a tower in the first place, with the filmmakers realizing they could use some of the first film's abandoned concepts to gradually uncover more details about Fiona's story throughout the remainder of the series. For Shrek 2, the filmmakers decided to resurrect the idea of Dama Fortuna, re-imagining her as Fiona's conniving fairy godmother and the sequel's main villain, who uses magic against Fiona and Shrek's marriage. Voice Fiona is voiced by American actress Cameron Diaz, one of the franchise's three main cast members. Diaz voiced Fiona in all four installments of the film series over the course of ten years. The role was originally intended for comedian and actress Janeane Garofalo, who was fired from the first film and ultimately replaced with Diaz. Garofalo maintains that she was fired without an explanation, joking, "I assume [it is] because I sound like a man sometimes". However, it is believed that the re-casting of Fiona resulted from the death of comedian Chris Farley, who was originally cast as Shrek and had already recorded most of the character's dialogue when he died during production, at which point he was replaced with actor Mike Myers. According to film historian Jim Hill, the filmmakers originally cast Garofalo as Fiona because they felt that the actress's "abrasive, sarcastic comic persona" would serve as an ideal foil to Farley's positive approach to the titular character, but eventually concluded that Garofalo was "too downbeat" for the film's lighter tone and offered the role to Diaz. With a "sweeter" version of Fiona introduced, Shrek was developed into a more pessimistic character. Fiona was Diaz's first animated role. DreamWorks invited Diaz to star in an animated film about an ogre and a princess who learn to accept both themselves and each other. In addition to the film's positive message, Diaz was drawn to the idea of co-starring alongside Myers, Eddie Murphy and John Lithgow. Approaching her role as though it were a dramatic performance, Diaz recorded most of her dialogue before a full script had been written, working closely with director Andrew Adamson to stage scenes before the film had been storyboarded. Prior to Shrek, Diaz had starred in the action-comedy film Charlie's Angels (2000), a role for which she had undergone martial arts training. While recording the scene in which her character fights Monsieur Hood and his Merry Men, Diaz became quite animated, gesturing and occasionally uttering Cantonese phrases; her martial arts background is credited with benefiting the sequence. Diaz once burped during a recording session, which was written into a scene for Fiona. 
Without a proper screenplay to aid her, Diaz found the improvisation required for some scenes one of the most challenging aspects of the recording process. The actress did not see the film's completed story until after she had finished working on the project on-and-off for two years, by which point she finally truly understood her "character and ... what she was going through". Myers was both impressed with and inspired by Diaz's commitment to her role, to the point that he felt he was acting opposite Fiona herself. Asbury recalled that Diaz immediately "nailed" her character, elaborating, "She had this certain thing about her voice where she could be headstrong and know exactly what she wants and be confident, but also have this touch of sweet naivete and all make it completely believable." Despite admiring the performances of her predominately male co-stars, Diaz seldom worked directly with them throughout the Shrek series. Diaz enjoyed "the good feeling" she experienced playing Fiona, and preferred voicing her character as an ogre over a princess, the former of which she finds truly beautiful. Apart from the Charlie's Angels sequel Charlie's Angels: Full Throttle (2003), Shrek is the only franchise in which Diaz reprised a role. The origins of Fiona's parents had not yet been disclosed in the first film, therefore Diaz voiced Fiona using an American accent. After discovering that English actors Julie Andrews and John Cleese would voice her parents Queen Lillian and King Harold, respectively, in Shrek 2, Diaz regretted voicing her character with her default Californian accent as opposed to a British accent. She identified her accent as one of the few things she would have changed about her performance in retrospect. Bob Thompson of the Ottawa Citizen observed that few, if any, critics took issue with Diaz's inconsistency. Although admitting that working on the films for only a few hours at a time sporadically sometimes resulted in her feeling as though she is not "100 per cent involved ... at the same time, that character is so my character. I feel very possessive of Fiona. It's interesting to see something that's not actually tangible so fully embody your essence. It feels like I've lent something to this film that I could never give to any other film, in a weird way." Diaz would often defend Fiona's appearance from the press asking how she feels about playing an "ugly" character, explaining, "It's shocking to me that that's the perception, just because she's big and round ... Her body is everything that she is inside. I love that she is the princess who isn’t like all the other princesses. She doesn’t look like them, and she's just as beloved and accepted." In Shrek the Third (2007), Diaz co-starred alongside her ex-boyfriend, singer Justin Timberlake, with whom she had broken up the previous year. Timberlake plays her character's cousin Arthur Pendragon, heir to her late father's throne. Shrek 2 features a brief reference to Timberlake; a picture of a young knight named "Sir Justin" appears in Fiona's childhood bedroom, which is believed to be a reference to their relationship. Diaz was unaware of Timberlake's cameo until watching the film, believing it had been finalized before they were a couple. Although Timberlake was initially cast as Arthur while he was still dating Diaz, producer Aron Warner maintains that Timberlake's involvement was not influenced by their relationship, insisting that he earned the role based on his own merit and comedic timing. 
The film's May 2007 premiere in Los Angeles was the first media event at which the former couple had been photographed since the end of their relationship. Director Mike Mitchell denied media speculation that Timberlake and his character's omission from Shrek Forever After (2010) was connected to Diaz and Timberlake's breakup, explaining that Arthur was written out solely to allow more screen time for more relevant characters. A filmmaker described Diaz as "the rock" of the franchise because "She brings such a great spirit to these movies." Following the release of Shrek Forever After, the series' final installment, Diaz reflected that the Shrek films had remained her "safety net" for several years, describing the period as "a decade of knowing that you finish one and for the next two years we'll be making another one". She remains hopeful for future sequels, joking, "I'm ready for 'Shrek 18,' if they haven't killed Fiona off by then." Diaz was saddened to bid farewell to her character, admitting that she took the films and Fiona for granted until the end because she always assumed she would be invited back within a few months for another installment. Considering the role "a privilege and honor", Diaz maintains that Fiona is the role for which she is most recognized by children, but she prefers when parents allow children to pretend that her character truly exists rather than revealing her as the voice actress, and often tries to prevent parents from exposing the truth. Diaz elaborated that Fiona has become "part of my screen persona. Rather than me putting myself through her I think she comes through me in a weird way. When people think of me they think of Fiona, it's not the other way around." Diaz believes that her popularity has greatly increased since voicing the character. Although a fifth film is currently in development, Diaz has yet to confirm whether she will reprise her role; she had previously said that she would return for a fifth installment if asked. Diaz's role in the Shrek series is believed to have contributed to her becoming one of Hollywood's wealthiest actresses by 2008. After being paid $3 million for the first film, Diaz originally re-negotiated to receive $5 million for Shrek 2, estimated to be an hourly salary of $35,000. She ultimately earned between $10 million and $15 million for reprising her role. For Shrek the Third, Diaz was paid $30 million, her highest salary at that point, due to securing a significant portion of the installment's profits. She earned $10 million for Shrek Forever After. In 2010, Forbes ranked Diaz Hollywood's second highest-earning voice actor, behind only Myers. On the actress's lucrative earnings, filmmaker Herschell Gordon Lewis wrote in an article for the Sun-Sentinel, "Sure, she captured the character well. Yes, the 'Shrek' movies invariably are box office successes. But can anyone say that if the voice of Princess Fiona were that of a competent actress other than Cameron Diaz, the movie would have flopped?" Actress Holly Fields has provided the character's singing voice in the film, in addition to voicing the character in several video games, toys, commercials and amusement park rides. Fields is often hired to imitate Diaz, describing the experience as one of her "coolest jobs". Design and animation Fiona is the franchise's female lead and Shrek's romantic interest. Shrek was the first computer-animated film to feature human characters in lead roles, so director Vicky Jenson believed its heroine should be beautiful yet convincing. 
Elliott and Rossio had originally envisioned Fiona's monstrous form as furry in appearance, wanting her to resemble an entirely unique character as opposed to simply a female version of Shrek, but the filmmakers struggled to agree upon her final design. Aiming to achieve stylized realism, the animators found that they could emphasize Fiona's face most efficiently by focusing "on the subtleties of the human form" and compiling translucent layers of skin to prevent the character from resembling plastic, a task they found particularly daunting due to people's familiarity with human skin. To make Fiona's skin more believable, the animators studied dermatology books to learn how various light sources interact with human skin, which visual effects supervisor Ken Bielenberg approached as though they were lighting Diaz herself. Bielenberg joked, "You want the sunset to reflect off her face in a way that's flattering ... Fiona may be a computerized princess, but she has her bad side." The animators painted a combination of freckles and warmer tones onto some of her skin's deeper layers, through which they then filtered light. A shader was used to penetrate, refract and re-emerge layers of light, the concentration of which was adjusted to achieve Fiona's desired radiance; they learned that too much exposure resulted in a mannequin-like appearance. The lighting department consulted with makeup artist Patty York to learn different approaches to creating realistic effects on Fiona's face, while the computer graphics software Maya was used to animate her hair, which consists of more than 1 million polygons. The animators felt that Fiona's design was "too real" at times. When the film was previewed to test audiences, some children cried because they found Fiona's hyperrealism disturbing; the character was suffering from a phenomenon known as the uncanny valley. Consequently, DreamWorks ordered that the character be re-animated to appear more like a cartoon and less like a human simulation. Animator Lucia Modesto recalled that her team was instructed to "pull back" on the character's design because her realism was growing unpleasant. Subsequently, Fiona was modified to fit in among the film's more fantastical characters, which supervising animator Raman Hui credits with improving the believability of Fiona and Shrek's relationship. To make Fiona a more "cartoony-looking love interest," the animators enlarged her eyes and smoothed her skin. Hui acknowledged that Fiona was much more difficult to animate as a human because any errors were quite apparent. In total, Fiona's face required a year of constant experimentation before the animators were satisfied with her final design: a realistic yet softer interpretation of the princess. Director Andrew Adamson admitted that making Fiona beautiful yet viscerally familiar posed several unique challenges for the filmmakers. For example, her eyebrows sometimes cast shadows over her eyes, while her upturned lip and large eyes resulted in a "spooky" appearance. They wanted Fiona's appearance to be relatable without "stick[ing] out among Shrek and the other fantastic characters and distract from the fairy-tale mood." Adamson identified Fiona as the film's most difficult character to animate due to people's familiarity with human mannerisms and expressions, whereas audiences are not nearly as accustomed to talking animals, such as Donkey. Hui maintains that Fiona's appearance was not based on that of any specific individual. 
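The layered, light-filtering approach described above can be illustrated with a small sketch. The following Python snippet is only a hypothetical, simplified toy model of the general idea of tinted translucent layers absorbing and re-emitting light; it is not DreamWorks' actual shader, and the layer tints, translucency values and function names are invented for demonstration.

def shade_skin(light_rgb, layers, view_attenuation=0.85):
    """Filter incoming light through tinted translucent layers and
    accumulate what each layer scatters back out (a crude stand-in
    for the multi-layer skin shading described above)."""
    r, g, b = light_rgb
    emitted = [0.0, 0.0, 0.0]
    for tint, translucency in layers:
        # The portion of light this layer does not transmit is scattered
        # back out, colored by the layer's tint (freckles, warm tones, etc.).
        scattered = [(1.0 - translucency) * c * t for c, t in zip((r, g, b), tint)]
        emitted = [e + s for e, s in zip(emitted, scattered)]
        # The rest of the light continues down to the deeper layers.
        r, g, b = (translucency * c for c in (r, g, b))
    # Soften the total contribution on the way back out; too much
    # "exposure" gives the mannequin-like look mentioned above.
    return tuple(min(1.0, e * view_attenuation) for e in emitted)

# Hypothetical three-layer stack: (tint RGB, translucency).
layers = [
    ((1.00, 0.92, 0.85), 0.60),  # pale outer layer
    ((0.95, 0.75, 0.65), 0.45),  # warmer, freckled middle layer
    ((0.85, 0.45, 0.40), 0.20),  # deep reddish layer
]

print(shade_skin((1.0, 1.0, 1.0), layers))  # e.g. a plain white key light

In this toy model, light a layer does not transmit is scattered back out tinted by that layer, so deeper, warmer layers contribute a softer share of the final color, loosely mirroring the freckles and warm tones painted into the character's deeper skin layers.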
Although the animators wanted to avoid making the character resemble Diaz too closely, elements of the actress's movements and mannerisms, which were videotaped during recording sessions, were incorporated into Fiona nonetheless, which they drew onto a different face to create a unique new character. Studying Diaz's mannerisms inspired the animators to exaggerate Fiona's expressions and reactions, instead of striving for realism. For example, Adamson believes Fiona squinting her eyes and compressing her lips while listening to someone else offers "a richness you’ve never seen before", despite their difficulty to animate. Diaz was shocked and ran out of the studio screaming joyfully when she saw her character animated to her voice for the first time. Although Diaz did not think the character resembles her, she recognized that Fiona had many of her mannerisms in addition to her own voice, appearing "more real than she had imagined". The actress explained that "the experience was so weird she felt like she was watching some kind of strange sister." Fiona's body consists of 90 muscles, but her entire model is made up of more than 900 movable muscles. Even in her ogre form, Fiona is significantly smaller than Shrek, with layout supervisor Nick Walker confirming that Shrek is capable of swallowing Fiona's head whole. Actor Antonio Banderas, who voices Puss in Boots, originally found it challenging to accept Fiona's unconventional appearance. Banderas explained that he experienced "a certain resistance as a spectator for her to be an ogre", initially wishing for her and Shrek to end the film as humans before finally accepting the character's appearance and sequel's ending. The actor believes several audience members "went through this process when they were observing this movie" because "We are used to rejecting ugliness without reason." Costume designer Isis Mussenden designed the character's costumes for the first two Shrek films, for which she helped develop new technology to animate clothing in the then-new computer animation medium. The filmmakers wanted a more realistic approach to costumes than previous computer animated films, in which clothing was typically depicted as a tight layer over the figure, adorned with a few wrinkles. The filmmakers had envisioned Fiona's velvet gown as one that moves independently from her body, therefore one of the film's producers recruited Mussenden, with whom they had worked prior, to assist them with the process. Mussenden began by creating a one-quarter scale replica of the skirt. To determine the gown's volume, fullness and where certain areas would rest on the character's form, the costume designer worked with both a pattern maker and designer. The patterns and seams were labeled and forwarded to the animators, who would replicate the images on the computer. Mussenden decided to give Fiona's dresses tight sleeves as opposed to the long, flowing sleeves associated with traditional medieval clothing due to the difficulty the latter would have been for the animators. Unlike Shrek, Fiona has several costume changes in Shrek 2. In the sequel, both Fiona's ogre and human forms are shown wearing the same green dress. To ensure that both forms looked equally flattering in the same outfit, Mussenden lowered the dress' waistline to make it more medieval in appearance than the costume she wears in the first film. Fiona's first costume is a lilac dress, which Mussenden designed to appear "organic and textured, because she's been living in the swamp". 
Towards the end of the film, she changes into a white ballgown with rhinestones, inspired by an image of a 1958 dress the costume designer had found. The scene in which Fiona single-handedly defeats Monsieur Hood and his Merry Men references the slow-motion special effects popularized by The Matrix (1999), as well as Diaz's own Charlie's Angels films. In a DVD bonus feature, Fiona explains that she performed her own stunts in the film, claiming that she based her kung fu on Charlie's Angels. Despite concerns that references to The Matrix would eventually date the film, Rossio believes the gag will remain funny because it is a parody instead of merely an imitation. A similar reference is made when Fiona defeats a mob at the beginning of Shrek 2, a complex sequence for which animators used powerful data processors to store and manipulate millions of computer-generated images. Modesto created new character models for Fiona and Shrek in Shrek the Third, while new software and servers were implemented to animate individual strands of the princess's hair much faster than had been possible during production of the first film. In Shrek Forever After's alternate reality, the character wears her hair unbraided for the first time, a look inspired by singer Janis Joplin. Due to its costliness, Fiona's new hairstyle first needed to be approved by DreamWorks, with Mitchell likening the process to "prepar[ing] like a lawyer". The redesign was a difficult, expensive process that required 20 animators to animate each strand individually, as Mitchell was particularly determined to render it correctly due to audiences' familiarity with long hair. One group was specifically tasked with setting up Fiona's hair, which head of production technology Darin Grant believes "allowed the process to be optimized and work across many, many shots" as it "flows and cascades throughout" the entire film, reinforcing Fiona's liberated personality. Personality According to Rossio, the first film's four main characters are written "around the concept of self-esteem, and appropriate and/or inappropriate reactions to appropriate or inappropriate self-assessment", explaining that Fiona seeks validation from others because she believes "there's something not correct about herself". Adamson elaborated that the character's main issue revolves around living up to stereotypes and ideas "represented in fairy tales that if ... you look a certain way and act a certain way and put the right dress and slippers on a handsome man is going to come", dismissing this as an unrealistic and unhealthy approach to finding romance. Diaz confirmed that Fiona only becomes her true self once she is freed from the tower and realizes her Prince Charming differs from who she had been taught to expect. A scene in which Fiona duets with a bird that explodes once the princess sings a high note, after which she fries its eggs for breakfast, is considered a parody of Disney fairy tales such as Cinderella (1950), which, Adamson explained, "pok[es] fun at people's expectations" of princesses. Diaz believes her character's personality "shattered" children's perception of princess characters from the moment she was freed from the tower, explaining that Fiona had always been capable of freeing herself but chose to remain in the tower solely because she was "following the rules of a fairy tale book". 
In the sequel, Diaz explained that Fiona "has a lot of pressure from all the people who told her about Prince Charming to take everything materialistically and monetarily. And she literally is just kind of baffled by it and says, 'Sorry, but I don't need any of those things.' All she needs is this man who she loves and loves her and accepts her." Diaz considers her character to be an empowered, positive role model for young girls, explaining, "She's never depended on anyone to rescue her, which is a different message from Snow White and Rapunzel ... She was capable of getting out of the tower herself" and "took on Shrek as her partner rather than as her rescuer." She believes that the moment Fiona accepts herself as an ogre is her most empowered moment, as well as "the biggest stride in her evolution as a person". Diaz considers Fiona to be "the anchor that holds all these kooky characters", identifying her as the comedy's straight man. Revealing that she "hate[s] naggy women", Diaz sometimes found herself wishing that Fiona would be "less naggy" and more compassionate and understanding towards the difficult changes Shrek has been undergoing since marrying her. During production of Shrek the Third, Diaz observed that the filmmakers had made Fiona into more of a nag and asked that they tone this down, explaining, "just because she got married it doesn't mean she has to become a nag." This was one of only a few things Diaz asked that they adjust about Fiona. In Shrek Forever After's alternate reality, Fiona frees herself from the tower and subsequently becomes a warrior and leader of an army of ogres, which some commentators found to be a more empowering approach to the princess; Diaz countered that her character has "always been a warrior ... of love through all these films. What she's worked for, what she's fought for is the love that she has for herself and the love that she has for Shrek and her family and her friends." Diaz concluded that, due to the fourth film's tone, Fiona's responsibilities are simply more apparent, believing that in this film she is "fighting for what she believes in." Characterization and themes Todd Anthony of the Sun-Sentinel cited Fiona among several elements that initially make Shrek resemble an archetypal fairy tale. Furniss identified Fiona's character arc as struggling with insecurities about her identity and appearance before finally "accept[ing] herself in a so-called 'ugly' physical manifestation", which she described as merely "cute" as opposed to "push[ing] the boundaries of true ugliness." Citing her "very definite ideas about how she wants to be rescued," Bob Waliszewski of Plugged In believes Fiona "has bought into the conventions of fairy tale romanticism hook, line and sinker", writing, "Her skewed perspective on love and marriage undermines agape love and spiritual discernment in relationships." Similarly, TV Guide film critic Frank Lovece described Fiona as a "beautiful and headstrong princess" who has spent too much time thinking about true love. Michael Sragow, film critic for The Baltimore Sun, agreed that the character is "fixated on being treated like a fairy-tale princess", resulting in a precarious outlook on reality. Although Fiona is originally disappointed upon discovering her rescuer is not a Prince Charming, her expectations are grounded more in "rituals of self-loathing". Furniss believes Fiona's story takes aim at Disney films in which princesses are constantly rescued from "horrible fates by knights". 
However, despite her efforts to look, speak and behave like a traditional princess, Fiona soon proves to be a nontraditional one, as shown by her skill as a fighter, her unusual diet, which occasionally includes wild animals, and her tendency to belch spontaneously. James Clarke, author of Animated Films – Virgin Film, described Fiona as "both an old-school and new-school heroine, in love with the notion of a charming prince who will rescue her but also tough talking and tough acting". Although she originally possesses traits associated with a traditional princess, being tall and slender, both Shrek and audiences soon agree that Fiona is different, and the princess is merely "following a script from a storybook" herself. Paul Byrnes of The Sydney Morning Herald wrote that Fiona's depiction in the first film offers "a sense of how gender roles had shifted" by resembling "a bottom-kicking heroine". Noting her unusual characteristics, John Anderson of Newsday observed that Fiona is "perfectly capable of taking care of herself. She's just been waiting for some classic romance." Although in the context of the film Shrek initially observes Fiona's differences once she belches, "it rapidly also becomes apparent that she is indeed not a prototypical fairy-tale princess", according to author Johnny Unger. The New York Press observed that Shrek emphasizes "that the ogre falls in love with the heroine not because of her conventional good looks, but in spite of them ... looking past Fiona's skinny, blond human surface and seeing the belching, bug-eating ogre beneath." Journalist Steve Sailer, writing for UPI, similarly wrote that "Fiona wins Shrek's heart by belching, beating up Robin Hood's Merry Men (who act like Broadway chorus boys) with cool "Matrix"-style kung fu, and cooking the Blue Bird of Happiness' eggs for breakfast." Elliott believes that Fiona's storyline explores "the actual prevalence of attitudes about appearance in society", identifying a theme of lacking self-esteem as particularly prevalent with Fiona. Film critic Emanuel Levy shared that "Fiona suffers/benefits from duality", transitioning from a "sexy, opinionated, and feisty" character into an outcast once "her secret is revealed", after which she becomes closer to Shrek. Matt Zoller Seitz, film critic for the New York Press, wrote that Fiona takes the film's metaphor pertaining to people "passing for something they're not" to "a whole different level", explaining, "At first you think she's a standard-issue princess who's willing to let her hair down and hang with the riffraff", describing her as "a modern-day Disney heroine." Seitz also observed "interracial overtones" in Fiona and Shrek's relationship. PopMatters contributor Evan Sawdey wrote that the Shrek films use Fiona to promote acceptance, particularly the moment she "discovers that her true form is that of an ogre", by which she is not saddened. Believing that Fiona would happily battle and defend whatever she loves or believes in, Diaz identified the character as "the anchor everyone has attached themselves to", to whom Shrek looks for guidance, which she would not have been able to provide unless she possessed the strength herself. In terms of character development and evolution, Diaz recalled that, despite having been raised in a "storybook life", Fiona eventually comes to terms with the fact that "her Prince Charming didn't come in the package she thought he would. 
She's learned to have patience with Shrek, accept him for who he is", going against what she had been taught about the way her Prince Charming must look and act. Thus, Adamson considers Fiona to be "an empowering character" for young girls. Unlike Farquaad, Shrek respects Fiona for speaking up for and defending herself. Fiona's final transformation, in which she becomes an ogre permanently, is considered a parody and critique of the Beast's transformation into a human in Disney's Beauty and the Beast (1991), with Fiona coming to realize that her "true love's true form" is in fact an ogre. Novelist and film critic Jeffrey Overstreet considered it to be "part of society's downfall that we embrace the Princess Fionas when they're glamorous rather than real." Film critic Roger Ebert observed that Fiona is the only princess competing to be Farquaad's bride (opposite Cinderella and Snow White) "who has not had the title role in a Disney animated feature", which he considered to be "inspired by feelings DreamWorks partner Jeffrey Katzenberg has nourished since his painful departure from Disney". In a review for Salon, film critic Stephanie Zacharek observed that Fiona "has two little frecklelike beauty spots, one on her cheek and one on her upper chest", which she interpreted as "symbols of her human authenticity, but they also serve as a sort of factory trademark left by her creators: 'You see, we've thought of every last detail.'" Rick Groen of The Globe and Mail observed that Fiona "appears to replicate the body of Cameron Diaz", describing her as "a cute brunette with a retroussé nose, ample curves, and cleavage that broadens whenever she bends low in her scoop-neck frock." Fiona is skilled in hand-to-hand combat and martial arts. The New York Times journalist A. J. Jacobs wrote that Fiona's kung fu skills rival those of actor Bruce Lee, abilities she is said to have inherited from her mother, Queen Lillian. Describing Fiona as tough and clever, museum curator Sarah Tutton observed that, despite being a love interest, the character "doesn't play the typical supporting role ... Just because Princess Fiona subverts the idea of beauty, it doesn't mean that beauty is not important. It means that the film isn't taking it as a cliche." In the third film, when Prince Charming takes over the kingdom, Fiona teaches the classic princesses, who are naturally inclined "to assume passive positions", not to wait for their princes to rescue them but to stand up for themselves, making them over into action heroines. Several critics considered this moment to be about girl power and female empowerment, as well as a Charlie's Angels reference. Diaz believes that the films and her character "retain the best qualities of" classic fairy tale characters, "infusing them with contemporary wit, style and relevance" for a new generation. Diaz elaborated, "We do love those girls ... But now they have a whole new life. They can exist in our current culture, our pop culture again ... Where before, they were forgotten. It's a celebration of them. It's a rebirth." Furthermore, Diaz believes that the princesses' independence is a positive message for both women and men, interpreting it as "a message for everyone ... You have to be proactive in your own life." Miller believes that Fiona's skills as a martial artist prove naturally beneficial to her adjustment to motherhood because "she can use her whole body. She's very adaptable." 
Appearances Film series Fiona first appears in Shrek (2001) as a bride chosen by Lord Farquaad, who intends to marry the princess solely so that he can become King of Duloc. In order to regain ownership of his swamp, Shrek and Donkey agree to retrieve Fiona from her dragon-guarded tower and deliver her to Farquaad. Fiona is rescued successfully but disappointed upon discovering that Shrek is an ogre instead of a knight, proceeding to act coldly towards him at the beginning of their journey back to Duloc. However, her attitude softens once she overhears Shrek explain that he is constantly misjudged by his appearance, and the two gradually develop a camaraderie as Fiona falls in love with Shrek. Late one evening, Donkey discovers that Fiona is under an enchantment that transforms her into an ogre every night, and she wishes to break the spell by kissing Farquaad before the next sunset. When she finally decides to tell Shrek the truth the following morning, she learns that Shrek has already summoned Farquaad to take her back to Duloc himself, having overheard and misinterpreted some of her conversation with Donkey. The princess and ogre part ways, Fiona returning to Duloc with Farquaad and Shrek returning to his swamp alone. Shrek and Donkey soon interrupt Fiona and Farquaad's wedding ceremony, where Shrek professes his love for her. With the sun setting, Fiona allows herself to transform into an ogre in front of Shrek for the first time, prompting Farquaad to threaten to lock her back in her tower for eternity. However, the dragon that had once imprisoned Fiona eats Farquaad, killing him. Fiona finally confesses her feelings for Shrek and, upon kissing him, turns into an ogre full-time; the two ogres marry. In Shrek 2 (2004), Fiona and Shrek return home from their honeymoon to find that Fiona's parents are inviting them to the kingdom of Far, Far Away to celebrate and bless their marriage. Shrek is apprehensive about meeting his parents-in-law, but Fiona insists. Fiona's parents, King Harold and Queen Lillian, are surprised to find that their daughter married an ogre, with Harold acting particularly coldly towards his new son-in-law, which in turn strains Fiona and Shrek's relationship. When a tearful Fiona unintentionally summons her Fairy Godmother, who discovers that the princess married someone other than Prince Charming – her own son – she conspires with Harold to kill Shrek and trick Fiona into falling in love with Charming, as per their original agreement. Fiona is briefly returned to her human form when Shrek consumes a potion that turns both him and his true love beautiful, but Shrek must obtain a kiss from Fiona before midnight, otherwise the spell will revert. However, Fairy Godmother, from whom Shrek steals the potion, tricks Fiona into believing Charming is Shrek's human form. Despite their efforts, Fiona continues to resent Charming's impression of her husband to the point where Fairy Godmother imprisons Shrek and insists that Harold feed Fiona a potion that will force her to fall in love with whomever she kisses first, intending for this to be Charming. However, the king refuses upon seeing how unhappy Fiona has become, thwarting Fairy Godmother's plan. Fairy Godmother and Charming are defeated by Fiona, Shrek and their friends. Although Shrek offers to kiss Fiona so that they can remain human forever, Fiona refuses, insisting that she would rather spend forever with the ogre she fell in love with and married, and they turn back into ogres. 
In Shrek the Third (2007), Fiona and Shrek take on the roles of acting Queen and King of Far Far Away while Harold is ill. When Harold passes away, Shrek is reluctantly named next in line to the throne, a position he declines because becoming king would prevent him and Fiona from returning to their swamp. Determined to locate a suitable heir, Shrek sets out to find Fiona's cousin Arthur Pendragon and convince him to assume the throne. Before Shrek departs, Fiona finally reveals that she is pregnant, forcing Shrek to come to terms with the idea of fatherhood. While Shrek, Donkey and Puss venture to Camelot to recruit Arthur, Fiona remains at Far Far Away, where her princess friends Rapunzel, Snow White, Sleeping Beauty, Cinderella and her stepsister Doris host a baby shower for her. The shower is interrupted by Prince Charming, still bitter over losing both the kingdom and Fiona to Shrek. Charming stages an invasion so that he can proclaim himself king of Far Far Away. Finding the idea of waiting to be rescued appalling, Fiona encourages the princesses to free themselves and fight back. After escaping the dungeon, Fiona, Lillian and the princesses (except for Rapunzel, who has betrayed them to marry Charming) organize a resistance to defend themselves and the kingdom, and Artie makes a speech to convince the villains to go straight. In the end, Fiona and Shrek return to the swamp, where Fiona gives birth to ogre triplets named Felicia, Fergus and Farkle. Shrek Forever After (2010) reveals that, during the events of the first film, Fiona's parents had nearly lost the kingdom to Rumpelstiltskin, almost signing it over in return for their daughter's freedom; his plans were thwarted when Fiona was rescued by, and fell in love with, Shrek. Fiona confronts Shrek, who has grown frustrated with his mundane, repetitive life since becoming a father, about losing his temper during their children's birthday party; a heated argument between the two prompts Shrek to wish he had never rescued Fiona from the tower, a comment that hurts Fiona. When Shrek makes his deal with Rumpelstiltskin, he is taken for a single day to an alternate reality in which he was never born. Here, Rumpelstiltskin has seized power by tricking Fiona's parents out of ruling the kingdom of Far Far Away. Since Shrek never frees Fiona from the tower, she escapes on her own and is still under the witch's spell – human by day and ogre by night – and has subsequently become the leader of a group of ogre resistance fighters. Shrek initially believes that his relationship with Fiona still exists there, but when she does not even recognize him, he accepts that Rumpelstiltskin has truly altered reality so that he never existed. Fiona is still kindhearted and caring but has become bitterly cynical and disillusioned about the power of true love; having never been rescued from her tower, she remains traumatized by her years of solitary imprisonment. She begins to fall in love with Shrek again when he starts training with her, but does not yet kiss him, having only just begun to find him likable. Her attitude towards him changes further as she and the other ogres head off to take down Rumpelstiltskin once and for all. During the day, Shrek realizes that a loophole will negate the deal if he can receive true love's kiss from Fiona. Although the kiss at first appears to have failed, they realize it has worked when Fiona's curse is broken. 
The timeline returns to normal, and Shrek finds himself back at his children's birthday party, moments before his outburst, where he warmly greets Fiona. Television specials and shorts Fiona has appeared in two holiday-themed television specials: Shrek the Halls (2007) and Scared Shrekless (2010). The animated short Shrek in the Swamp Karaoke Dance Party! (2001) is included on home video releases of Shrek, featuring several of the film's characters performing covers of well-known songs. In the short, Fiona sings an excerpt from Madonna's song "Like a Virgin" (1984). Fiona appears in the short Shrek 4-D, a 4-D film originally shown at various amusement and theme parks. The short was renamed Shrek 3-D and The Ghost of Lord Farquaad for home video and streaming service releases. In it, Fiona and Shrek's honeymooning plans are interrupted by Farquaad's ghost, who abducts Fiona and intends to kill the princess so that he can remarry her ghost in the afterlife. Shrek and Donkey, assisted by Dragon, pursue Farquaad, determined to rescue her. Fiona appears in the short film Far, Far Away Idol, a parody of the reality television singing competition American Idol, which is included as a bonus feature on home video releases of Shrek 2. First serving as a judge alongside Shrek and an animated version of American Idol judge Simon Cowell, offering feedback on the other characters' performances, Fiona eventually performs a duet of The Romantics' "What I Like About You" with Shrek. Stage Fiona appeared in the stage musical adaptation of the film, which ran on Broadway from 2008 to 2010. The role was originated by actress Sutton Foster, who had been involved in the project three years before its premiere, having learned about it from composer Jeanine Tesori and director Jason Moore. She was drawn towards the idea of playing a princess for the first time, the prospect of which she found "fun", as well as the opportunity to collaborate with lyricist and librettist David Lindsay-Abaire. Actresses Keaton Whittaker and Marissa O'Donnell portrayed younger versions of the character. Before production, Foster described Fiona as an atypical princess who is "a little bipolar, but rightfully so", having "grown up, like we all have, with ideas of how the world works" while trying to surround herself with, and emulate, fairy tales. Foster believes Fiona constantly struggles with her "inner ogre" despite trying to be perfect: "Everything she's been told is that she's supposed to look a certain way and act a certain way, but everything on the inside is telling her something different." Although Fiona longs to be a "proper princess", Foster identifies herself as "more of a tomboy", while Fiona's body contradicts her desires: "as soon as she starts farting and burping, she has a really great time! And I just love that, that she finds herself in just having fun with an ogre, with Shrek. And I love that she falls in love with him through something crude." Foster found it "fun to play a truly conflicted character and to be a princess who burps and farts and gets to do silly things." Foster earned a Tony Award nomination for Best Actress in a Musical. Despite being a fan of the musical adaptation, Diaz has stated that she has no intention of reprising her role on stage. Reception Critical response During early press screenings, critics were amused by Fiona's bluebird scene to the point where they laughed hysterically. David Ansen of Newsweek reported that the sequence consistently "sends audiences into fits of delight". 
Time film critic Richard Schickel called Fiona "an excellent character," highlighting her confrontation with Monsieur Hood. Similarly, the New York Post film critic Lou Lumenick identified Fiona's encounters with Monsieur Hood and the bluebird as clever, delightful "sendups of a long line of Disney classics". Kelly Vance of the East Bay Express wrote, "Armed with Diaz' vocal portrayal ... Fiona is more charming, more vulnerable, perkier, and even more sensitive than if she were played by a human actress." Film critic Emanuel Levy believes Shrek benefits from Fiona, writing that "Diaz applies well skills she had acquired for Charlie's Angels". Hollywood.com's Robert Sims joked that "Fiona could teach Charlie's Angels a lesson or two in romance and survival skills." Malcolm Johnson of the Hartford Courant lauded Fiona as "a marvel, as beautiful and shapely as a real star but capable of moves that go beyond the wirework in The Matrix." Johnson continued, "Every turn of Fiona's head, every glance, every shift of mouth lift character animation to new heights." Similarly, the London Evening Standard wrote that "every bright ringlet on Princess Fiona ... the liquefaction flow of her velvet robe, even her skin tones have the feel of organic root, thread or cell." Slant Magazine's Ed Gonzalez identified Fiona's struggles with self-loathing as the film's strongest asset. Reviewing Shrek the Third, Entertainment Weekly film critic Lisa Schwarzbaum described Fiona as "fabulously resourceful", identifying the moment she reinvents her princess friends into independent women as the film's sole "Cool Thing." Diaz has also received positive attention for her voice acting. The Washington Post film critic Desson Howe wrote that Diaz's performance offers "a funny, earthy princess." GamesRadar+ wrote Fiona "nestle[s] comfortably between the movie's storybook style and photo-realistic convincingness," continuing that Diaz's performance "reinforces her game-for-a-laugh reputation". Kim Morgan of OregonLive.com said, "Diaz's sweet yet tough demeanor shines through all her computer-generated-imagery beauty," citing her vulnerability as an asset. The Daily Telegraph's film critic Andrew O'Hagan believes Diaz imbues Fiona with "easygoing shrillness that modern eight-year-olds may find likeable", while the Deseret News' Jeff Vice wrote that Diaz proves more than merely "a pretty face." Bruce Westbrook of the Houston Chronicle reviewed Diaz as an improvement upon "the spunkiness of today's heroines" by "packing surprise punches that would have suited her role in Charlie's Angels". Turner Classic Movies believes Diaz's performance earned the actress "a legion of younger fans", a sentiment with which TV Guide agreed. PopMatters' Cynthia Fuchs, reviewing the fourth film, described the princess as "always at least a little wonderful, patient, and smart (and now awesomely Amazonian)," and found herself wishing Fiona would discover a parallel universe in which she is truly appreciated. Not all reviews were positive. Finding Fiona's fight scene unnecessary, Derek Armstrong of AllMovie wrote that it "leaves things feeling scattershot" despite its appealing visuals. In a negative review, CNN's Paul Tatara dismissed Fiona as "bland" and the film's "only miscue among the characters". Criticizing her design, Tatara felt the princess "gives off the creepy air of a possessed Barbie Doll" while "Diaz's California-girl line readings simply don't fit the character." 
Similarly, the Chicago Tribune's Mark Caro found Fiona's design generic and Barbie-like, but admitted these characteristics benefit the film's plot and themes. Anthony Quinn of The Independent found Fiona's realism particularly troubling, suggesting that the animators should have simply "invite[d] Cameron Diaz to play her as well as voice her." Similarly, The New Yorker film critic Anthony Lane felt the character was too realistic, writing, "What I don't want is to gaze at Princess Fiona ... and wonder if she is supposed to resemble Cameron Diaz". Peter Bradshaw, film critic for The Guardian, dismissed Fiona and the film's human characters as "disappointingly ordinary looking and unexpressive," comparing them to claymation, while New York's Peter Rainer agreed that human characters such as Fiona "are less interesting". Paul Malcolm of LA Weekly described Diaz's performance as "insuperably flat". Philippa Hawker of The Age felt the third film could have benefited from Fiona being named Harold's heir, opposing the idea of relegating her to "a cursory girl-power scenario". Feminist analysis Some media publications have regarded Fiona as a feminist icon. Upon her debut, Fiona was celebrated by most critics "as a radical new take on the princess myth". Fiona's subversion of common princess tropes continues to be widely discussed in the media. Wired contributor Claudia Puig felt the first film boasts "a wonderfully affirming message for girls courtesy of Fiona". Jack Rear, writing for Pretty 52, described Fiona as "feminism goals" due to her martial arts proficiency. Affinity Magazine contributor Isabel Tovar identified the moment Fiona defeats Monsieur Hood as "female empowering", believing "Fiona has been feminist queen since day one." Teresa Brickey of The Odyssey said Fiona contested the patriarchy by "accept[ing] her body ... who she loved, and fought for right to do her thing." Reviewing Shrek Forever After, Rachel Giese of CBC found the character's "girl-power turn as a warrior princess" to be one of the installment's most endearing changes. Crowning the character "the best feminist action hero around", Emily Shire of The Week deemed Fiona "the kind of feminist action hero movies need more of", describing her as a strong heroine who "saves herself and loved ones" while accepting the "'ugly' and 'gross' aspects of herself". Shire also voiced her preference for Fiona over The Hunger Games' Katniss Everdeen and the superheroine Wonder Woman. Allison Maloney of The Shriver Report shared Shire's sentiments. Felicity Sleeman, a writer for Farrago, believes "Fiona completely dispels any misconceptions of the passive princess trope", citing her as a strong female character "able to stand up for herself and fight in ways that would typically be considered masculine." Sleeman continued that one of the most important components of Fiona's personality "is that the films don’t ignore or degrade any of her qualities that are considered typically feminine", elaborating that her struggles over her appearance are "significant in that it presents the ways in which so many girls are pressured by society to uphold a certain standard of beauty." Sleeman concluded, "In an industry where female characters have so often portrayed as secondary characters defined by their beauty ... Fiona is a well-rounded character who represents an eclectic mix of traits that are representative of real women", remaining feminine yet strong. 
Rachel O'Neill, a writer for The Daily Edge, identified Fiona as "the first badass princess ... able to speak for herself", joking, "nobody can fling a mermaid quite like Fiona." In 2008, BBC News named Fiona "the next feminist icon", believing the character retains "a certain sex appeal which continues even after she changes into an ogre - perfectly underlining how attitudes have changed towards women in the 21st Century." HuffPost contributor Hayley Krischer cited Fiona as a rare example of a princess who "br[oke] the mold". Iona Tytler of Babe.net recognized Fiona among childhood feminist characters "who got you where you are today". Praising her independence, Tytler said Fiona "overc[ame] the societal prejudice in her world that came with being an ogre" while becoming "more comfortable in her own skin." Sarah Tutton, curator of the Australian Centre for the Moving Image's DreamWorks exhibit, credits Fiona with "br[eaking] the mould of the helpless princess," citing her as a modern-day feminist. Tutton also said the character "completely subverts what it means to be a beautiful princess." Forbes contributor Dani Di Placido believes Fiona embodied characteristics associated with the unconventional, rebellious warrior princess several years before such traits became standard in film and television. Similarly, the British Film Institute's So Mayer wrote that heroines such as Merida and Elsa from Disney's Brave (2012) and Frozen (2013), respectively, were both "late to the party compared to" Fiona, reflecting, "over the course of the trilogy she wanders the wilderness, turns down Lord Farquaad, survives imprisonment, decides she prefers being ogre to being human, and organises a resistance composed of fairytale princesses." Furthermore, Female Action Heroes: A Guide to Women in Comics, Video Games, Film, and Television author Gladys L. Knight wrote that Fiona challenged the manner in which medieval women are portrayed on screen. Mary Zeiss Stange, author of Encyclopedia of Women in Today's World, Volume 1, cited Fiona as an example of an "outstanding female action hero". Refinery29's Anne Cohen felt Fiona remains a strong heroine despite Shrek's "un-feminist plot" featuring several men making decisions about her future without her involvement. Cohen praised Fiona for defending herself, defying stereotypes, speaking her mind and accepting her own flaws. Crowning Fiona an "important cultural milestone", the author concluded that she is "fierce, honest [and] wonderful" despite her unconventional appearance. Some critics felt Fiona's fighting prowess was otherwise undermined by her insecurities and motivations. Despite being impressed with the character's fighting ability, Furniss believes this contradicts "her need to seek authentication from a male romantic partner", arguing that a true martial artist would have few concerns about outward appearance. Although acknowledging that the film demonstrates themes of inner beauty among "women of all types", the author argued that Fiona's understanding relies on male approval, referring to her relationships with both Farquaad and Shrek, and further observed that she struggles to use this same martial arts prowess to fend off Farquaad's guards. Furniss found it disappointing that her arc is "activated by the kiss of a man", but admitted the completion of Shrek's character development is similarly determined by him kissing Fiona. 
Furniss doubts Fiona would have been able to accept her ogre form had Shrek decided to retreat to his swamp alone after kissing her. Author Margot Mifflin, writing for Salon, felt that some of Fiona's actions contradict the film's morals about looks being less important, citing that she dislikes Farquaad more for his short stature than for his cruelty towards others. She also found the princess in Steig's original story to be more liberated and less of a damsel in distress than Fiona. Despite describing the character's ogre form as "an overfed Cabbage Patch doll with the drowning eyes and apologetic expression of a Hummel figurine", Mifflin found the fact that Fiona remains an ogre, fights, talks back and has more realistic body proportions to be ground-breaking, while describing her musical solo as one of the film's "hilarious" highlights. The Conversation's Michelle Smith was unimpressed, writing that despite the character's fighting skills, Fiona remains "desperate to follow the fairy tale script" and believes marrying her rescuer is "her ultimate reward". Recognition Fiona was celebrated as a positive role model by the Girl Scouts of the USA, who used the character's likeness in several pieces of tie-in media to promote the organization's "Issues for Girl Scouts" movement and encourage "girls to develop self-confidence and embrace diversity." The organization also hosted a free screening of the film in 2001, which was attended by an audience of 340. For her performance in Shrek, Diaz won a Kids' Choice Award for Best Burp, which the actress claims is one of her greatest achievements. According to Daniel Kurland of Screen Rant, Diaz "remains a crucial component of what makes the movie work" despite resembling an "unsung hero" throughout the franchise. Summarizing the actress' career, Kendall Fisher of E! Online said Diaz "voiced one of our favorite animated characters". The Ringer ranked Shrek as Diaz's best film, believing her performance as Fiona aged better than the film's soundtrack and animation. Author Alison Herman elaborated that Fiona embraced her flaws and offered children "an important lesson in both self-esteem and the comedic value of fart jokes", while the actress "holds her own against" Myers and Murphy; "as a character, Fiona subverts the pretty-princess trope enough to provide fuel for undergrad media studies papers for decades to come". Marie Claire ranked Fiona third among Diaz's "Movie Moments That Made Us Fall In Love With Her". In addition to ranking Fiona the fourth best role of Diaz's career following her retirement in 2018, Samarth Goyal of the Hindustan Times crowned Fiona "one of the most loved animated characters of the 21st century", crediting her with making Diaz "a big star." In 2011, Gulf News ranked Diaz among "Hollywood's A-list of most popular voice actors", with Forbes reporting in 2010 that the actress was mentioned in the media approximately 1,809 times while promoting the most recent Shrek film. Teen Vogue considered Fiona among the "17 Best Princesses in Movies and TV", praising the character for learning "to love herself." NBC New York's Bryan Alexander described Fiona as "the world's hottest ogre", while Stephen Hunter, film critic for The Washington Post, found hearing Diaz's voice from a computer-animated character "kind of hot". To promote Shrek 2, the ice cream chain Baskin-Robbins named a flavor after the character, called Fiona's Fairytale. Described as "pink and purple swirled", the ice cream was cotton candy-flavored. 
References Animated human characters Animated film characters introduced in 2001 Characters created by Ted Elliott and Terry Rossio Female characters in animation Female characters in film Fictional female martial artists Fictional ogres Fictional princesses Martial artist characters in films Shrek characters Universal Pictures cartoons and characters Fictional feminists and women's rights activists Fictional shapeshifters
48483869
https://en.wikipedia.org/wiki/Mktemp
Mktemp
mktemp is a command available in many Unix-like operating systems that creates a temporary file or directory. It was originally released in 1997 as part of OpenBSD 2.1; a separate implementation exists as part of GNU Coreutils. A similarly named C library function also existed, but it is now deprecated as unsafe and has safer alternatives. See also Filesystem Hierarchy Standard Temporary folder TMPDIR Unix filesystem References Unix software 1997 software
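The deprecated C function was unsafe because the predictable name it produced could be raced by another process between name generation and file creation; the safer alternatives alluded to above, such as mkstemp(3), generate the name and create the file atomically. The following is a minimal C sketch of that pattern; the template string and the cleanup behaviour are illustrative choices, not anything mandated by the command or the library.

#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main(void) {
    /* The trailing "XXXXXX" is replaced in place with random characters, and
       the file is created with exclusive semantics so no other process can
       pre-create it under the same name. */
    char path[] = "/tmp/example-XXXXXX";
    int fd = mkstemp(path);
    if (fd == -1) {
        perror("mkstemp");
        return EXIT_FAILURE;
    }

    printf("temporary file: %s\n", path);

    /* Use the descriptor, then remove the file when done. */
    close(fd);
    unlink(path);
    return EXIT_SUCCESS;
}

The mktemp command itself plays the same role for shell scripts, printing the name of a freshly created temporary file or directory that the script can then use.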
31268
https://en.wikipedia.org/wiki/TurboGrafx-16
TurboGrafx-16
The TurboGrafx-16, known as the PC Engine in Japan, is a fourth-generation home video game console designed by Hudson Soft and sold by NEC Home Electronics. It was the first console marketed in the fourth generation of game consoles, commonly known as the 16-bit era, though the console has an 8-bit central processing unit (CPU). It was released in Japan in 1987 and in North America in 1989. In Japan, the system was launched as a competitor to the Famicom, but the delayed United States release meant that it ended up competing with the Sega Genesis and later the Super NES. The console has an 8-bit CPU and a dual 16-bit graphics processing unit (GPU) chipset consisting of a video display controller (VDC) and video color encoder. The GPUs are capable of displaying 482 colors simultaneously, out of 512. With dimensions of just 14 cm × 14 cm × 3.8 cm (5.5 in × 5.5 in × 1.5 in), the Japanese PC Engine is the smallest major home game console ever made. Games were released on HuCard cartridges and later the CD-ROM optical format with the TurboGrafx-CD add-on. The "16" in its North American name and the marketing of the console as a 16-bit platform despite its 8-bit CPU were criticized by some as deceptive. In Japan, the PC Engine was very successful. It gained strong third-party support and outsold the Famicom at its 1987 debut, eventually becoming the Super Famicom's main rival. However, the TurboGrafx-16 failed to break into the North American market and sold poorly, which has been blamed on the delayed release and inferior marketing. In Europe, the Japanese models were grey-market imported, modified, and distributed in France and the United Kingdom beginning in 1988, but an official PAL model (named simply "TurboGrafx" without the "16") planned for 1990 was cancelled following the disappointing North American launch, with the already-manufactured stock of systems liquidated via mail-order retailers. At least 17 distinct models of the console were made, including portable versions and those that integrated the CD-ROM add-on. An enhanced model, the PC Engine SuperGrafx, was rushed to market in 1989. It offered several performance improvements and was intended to supersede the standard PC Engine. It failed to catch on—only six titles were released that took advantage of the added power, and it was quickly discontinued. The final model was discontinued in 1994. It was succeeded by the PC-FX, which was released only in Japan and was not successful. History The PC Engine was created as a collaborative effort between Hudson Soft, which created video game software, and NEC, a company which was dominant in the Japanese personal computer market with its PC-88 and PC-98 platforms. NEC lacked vital experience in the video game industry and approached numerous video game studios for support. NEC's interest in entering the lucrative video game market happened to coincide with Hudson's failed attempt to sell designs for then-advanced graphics chips to Nintendo. The two companies then joined forces to develop the new system. The PC Engine made its debut in the Japanese market on October 30, 1987, and it was a tremendous success. The PC Engine had an elegant, "eye-catching" design, and it was very small compared to its rivals. This, coupled with a strong software lineup and third-party support from high-profile developers such as Namco and Konami, gave NEC a temporary lead in the Japanese market. The PC Engine sold 500,000 units in its first week of release. 
The CD-ROM expansion was a major success for the CD-ROM format, selling 60,000 units in its first five months of release in Japan. By 1989, NEC had sold over consoles and more than 80,000 CD-ROM units in Japan. In 1988, NEC decided to expand to the American market and directed its U.S. operations to develop the system for the new audience. NEC Technologies boss Keith Schaefer formed a team to test the system. The team found that the name "PC Engine" generated little enthusiasm and felt that its small size would not appeal to American consumers, who would generally prefer a larger and more "futuristic" design. They decided to call the system the "TurboGrafx-16", a name representing its graphical speed and strength and its 16-bit GPU. They also completely redesigned the hardware into a large, black casing. This lengthy redesign process and NEC's questions about the system's viability in the United States delayed the TurboGrafx-16's debut. The TurboGrafx-16 was eventually released in the New York City and Los Angeles test markets in late August 1989. However, this was two weeks after Sega of America released the Sega Genesis, with its 16-bit CPU, to test markets. Unlike NEC, Sega did not waste time redesigning the original Japanese Mega Drive system, making only slight aesthetic changes. The Genesis quickly eclipsed the TurboGrafx-16 after its American debut. NEC's decision to pack in Keith Courage in Alpha Zones, a Hudson Soft game unknown to Western gamers, proved costly, as Sega packed in a port of the hit arcade title Altered Beast with the Genesis. NEC's American operations in Chicago were also overly optimistic about the system's potential and quickly produced 750,000 units, far above actual demand. This was very profitable for Hudson Soft, as NEC paid Hudson Soft royalties for every console produced, whether sold or not. By 1990, it was clear that the system was performing very poorly, severely edged out by Nintendo's and Sega's marketing. In late 1989, NEC announced plans for a coin-op arcade video game version of the TurboGrafx-16. However, NEC cancelled the plans in early 1990. In Europe, Japanese PC Engine imports drew a cult following, with a number of unauthorized units available along with NTSC-to-PAL adapters in the United Kingdom during the late 1980s. In 1989, a British company called Mention manufactured an adapted PAL version called the PC Engine Plus. However, the system was not officially supported by NEC. From November 1989 to 1993, PC Engine consoles as well as some add-ons were imported from Japan by French importer Sodipeng (Société de Distribution de la PC Engine), a subsidiary of Guillemot International. This came after considerable enthusiasm in the French press. The PC Engine was widely available in France and Benelux through major retailers. It came with French-language instructions and an AV cable for compatibility with SECAM television sets. After seeing the TurboGrafx-16 falter in America, NEC decided to cancel its European release. Units for the European markets had already been produced; these were essentially US models modified to run on PAL television sets. NEC sold this stock to distributors; in the United Kingdom, Telegames released the console in 1990 in extremely limited quantities. By March 1991, NEC claimed that it had sold 750,000 TurboGrafx-16 consoles in the United States and 500,000 CD-ROM units worldwide. 
In an effort to relaunch the system in the North American market, in mid-1992 NEC and Hudson Soft transferred management of the system in North America to a new joint venture called Turbo Technologies Inc. and released the TurboDuo, an all-in-one unit that included the CD-ROM drive built in. However, the North American console gaming market continued to be dominated by the Genesis and the Super NES, the latter of which was released in North America in August 1991. In May 1994, Turbo Technologies announced that it was dropping support for the Duo, though it would continue to offer repairs for existing units and provide ongoing software releases through independent companies in the U.S. and Canada. In Japan, NEC had sold a total of PC Engine units and CD-ROM² units. This adds up to a total of more than PC Engine/TurboGrafx-16 units sold in Japan and the United States, and CD-ROM² units sold in Japan. The final licensed release for the PC Engine was Dead of the Brain Part 1 & 2 on June 3, 1999, on the Super CD-ROM² format. Add-ons TurboGrafx-CD/CD-ROM² The CD-ROM² is an add-on attachment for the PC Engine that was released in Japan on December 4, 1988. The add-on allows the core versions of the console to play PC Engine games in CD-ROM format in addition to standard HuCards. This made the PC Engine the first video game console to use CD-ROM as a storage medium. Moreover, the PC Engine was also the very first machine of any type, computer or game console, to offer game software in CD-ROM format. (The first CD-ROM game software on a computer, a black-and-white conversion from floppy disc of Mediagenic/Activision's The Manhole for the Macintosh, was released in December 1989, a year after the PC Engine's Fighting Street, a conversion of Capcom's arcade game Street Fighter, and No-Ri-Ko, an adventure/dating simulator notable for being the first multimedia game, using Red Book audio digital speech and digitized sprite graphics.) The PC Engine CD-ROM² add-on consisted of two devices: the CD player itself and the interface unit, which connects the CD player to the console and provides a unified power supply and output for both. It was later released as the TurboGrafx-CD in the United States in November 1989, with a remodeled interface unit in order to suit the different shape of the TurboGrafx-16 console. The TurboGrafx-CD had a launch price of $399.99 and did not include any bundled games. Fighting Street and Monster Lair were the TurboGrafx-CD launch titles; Ys Book I & II soon followed. Super CD-ROM² In 1991, NEC introduced an upgraded version of the CD-ROM² System known as the Super CD-ROM², which updates the BIOS to Version 3.0 and increases buffer RAM from 64 kB to 256 kB. This upgrade was released in several forms: the first was the PC Engine Duo on September 21, a new model of the console with a CD-ROM drive and upgraded BIOS/RAM already built into the system. This was followed by the Super System Card released on October 26, an upgrade for the existing CD-ROM² add-on that serves as a replacement for the original System Card. PC Engine owners who did not already own the original CD-ROM² add-on could instead opt for the Super CD-ROM² unit, an updated version of the add-on released on December 13, which combines the CD-ROM drive, interface unit and Super System Card into one device. Arcade Card On March 12, 1994, NEC introduced a third upgrade known as the Arcade Card, which increases the amount of onboard RAM of the Super CD-ROM² System to 2 MB. 
This upgrade was released in two models: the Arcade Card Duo, designed for PC Engine consoles already equipped with the Super CD-ROM² System, and the Arcade Card Pro, a model for the original CD-ROM² System that combines the functionalities of the Super System Card and Arcade Card Duo into one. The first games for this add-on were ports of the Neo-Geo fighting games Fatal Fury 2 and Art of Fighting. Ports of World Heroes 2 and Fatal Fury Special were later released for this card, along with several original games released under the Arcade CD-ROM² standard. By this point support for both the TurboGrafx-16 and Turbo Duo was already waning in North America; thus, no North American version of either Arcade Card was produced, though a Japanese Arcade Card can still be used on a North American console through a HuCard converter. Variations Many variations and related products of the PC Engine were released. CoreGrafx The PC Engine CoreGrafx is an updated model of the PC Engine, released in Japan on December 8, 1989. It has the same form factor as the original PC Engine, but it changes the color scheme from white and red to black and blue and replaces the original's RF connector with an A/V port. It also uses a revised CPU, the HuC6280a, which supposedly fixed some minor audio issues. A recolored version of the model, known as the PC Engine CoreGrafx II, was released on June 21, 1991. Aside from the different coloring (light grey and orange), it is nearly identical to the original CoreGrafx except that the CPU was changed back to the original HuC6280. SuperGrafx The PC Engine SuperGrafx, released on the same day as the CoreGrafx in Japan, is an enhanced variation of the PC Engine hardware with updated specs. This model has a second HuC6270A (VDC), a HuC6202 (VDP) that combines the output of the two VDCs, four times as much RAM, twice as much video RAM, and a second layer/plane of scrolling. It also uses the revised HuC6280a CPU, but the sound and color palette were not upgraded, making its high price a significant disadvantage. As a result, only five exclusive SuperGrafx games were released, along with two hybrid games (Darius Plus and Darius Alpha, standard HuCards that take advantage of the extra video hardware when played on a SuperGrafx), and the system was quickly discontinued. The SuperGrafx has the same expansion port as previous PC Engine consoles, but requires an adapter in order to utilize the original CD-ROM² System add-on, due to the SuperGrafx console's large size. Shuttle The PC Engine Shuttle was released in Japan on November 22, 1989, as a less expensive model of the console, retailing at ¥18,800. It was targeted primarily towards younger players with its spaceship-like design and came bundled with a TurboPad II controller, which is shaped differently from the other standard TurboPad controllers. The reduced price was made possible by slimming down the expansion port on the back, making it the first model of the console that was not compatible with the CD-ROM² add-on. However, it does have a slot for a memory backup unit, which is required for certain games. The RF output used on the original PC Engine was also replaced with an A/V port for the Shuttle. TurboExpress The PC Engine GT is a portable version of the PC Engine, released in Japan on December 1, 1990, and then in the United States as the TurboExpress. It can play only HuCard games. 
It has a backlit, active-matrix color LCD screen, the most advanced on the market for a portable video game unit at the time. The screen contributed to its high price and short battery life, however, which hurt its performance in the market. It also has a TV tuner adapter as well as a two-player link cable. LT The PC Engine LT is a model of the console in a laptop form, released on December 13, 1991, in Japan, retailing at ¥99,800. The LT does not require a television display (and does not have any AV output) as it has a built-in flip-up screen and speakers, just as a laptop would have, but, unlike the GT, the LT runs on a power supply. Its expensive price meant that few units were produced compared to other models. The LT has full expansion port capability, so the CD-ROM² unit is compatible with the LT the same way as it is with the original PC Engine and CoreGrafx. However, the LT requires an adapter to use the enhanced Super CD-ROM² unit. Duo NEC Home Electronics released the PC Engine Duo in Japan on September 21, 1991, which combined the PC Engine and Super CD-ROM² unit into a single console. The system can play HuCards, audio CDs, CD+Gs, standard CD-ROM² games and Super CD-ROM² games. The North American version, the TurboDuo, was launched in October 1992. Two updated variants were released in Japan: the PC Engine Duo-R on March 25, 1993, and the PC Engine Duo-RX on June 25, 1994. The changes were mostly cosmetic, but the RX included a new 6-button controller. Third-party models The PC-KD863G is a CRT monitor with built-in PC Engine console, released on September 27, 1988, in Japan for ¥138,000. Following NEC's PCs' naming scheme, the PC-KD863G was designed to eliminate the need to buy a separate television set and a console. It output its signals in RGB, so it was clearer at the time than the console which was still limited to RF and composite. However, it has no BUS expansion port, which made it incompatible with the CD-ROM² System and memory backup add-ons. The X1-Twin was the first licensed PC Engine-compatible hardware manufactured by a third-party company, released by Sharp in April 1989 for ¥99,800. It is a hybrid system that can run PC Engine games and X1 computer software. Pioneer Corporation's LaserActive supports an add-on module which allows the use of PC Engine games (HuCard, CD-ROM² and Super CD-ROM²) as well as new "LD-ROM²" titles that work only on this device. NEC also released their own LaserActive unit (NEC PCE-LD1) and PC Engine add-on module, under an OEM license. A total of eleven LD-ROM2 titles were produced, with only three of them released in North America. Other foreign markets Outside North America and Japan, the TurboGrafx-16 console was released in South Korea by a third-party company, Haitai, under the name Vistar 16. It was based on the American version but with a new curved design. Daewoo Electronics distributed the PC Engine Shuttle into the South Korean market as well. Technical specifications The TurboGrafx-16 uses a Hudson Soft HuC6280 CPU—an 8-bit CPU modified with two 16-bit graphics processors—running at 7.6 MHz. It features 8 KB of RAM, 64 KB of Video RAM, and the ability to display 482 colors at once from a 512-color palette. The sound hardware, built into the HuC6280 CPU, includes a PSG running at 3.58 MHz and a 5-10 bit stereo PCM. TurboGrafx-16 games use the HuCard ROM cartridge format, thin credit card-sized cards that insert into the front slot of the console. 
PC Engine HuCards feature 38 connector pins, while TurboGrafx-16 HuCards (alternatively referred to as "TurboChips") feature eight of these pins in reverse order as a region lockout method. The power switch on the console also acts as a lock that prevents HuCards from being removed while the system is powered on. The European release of the TurboGrafx-16 did not have its own PAL-formatted HuCards as a result of its limited release, with the system instead supporting standard HuCards and outputting a PAL 50 Hz video signal. Peripherals In Japan the PC Engine was originally sold with a standard controller known simply as the Pad, which features a rectangular shape with a directional pad, two action buttons numbered "I" and "II", and two rubber "Select" and "Run" buttons, matching the number of buttons on the Famicom's primary controller (as well as a standard NES controller). Another controller known as the TurboPad was also launched separately with the console, which added two "Turbo" switches for the I and II buttons with three speed settings. The switches allow a held button to register repeated presses automatically (for instance, enabling rapid fire in scrolling shooters). The TurboPad became standard-issue with the TurboGrafx-16 in North America, as well as with subsequent models of the PC Engine in Japan starting with the PC Engine CoreGrafx, immediately phasing out the original PC Engine Pad. All PC Engine and TurboGrafx-16 consoles have only one controller port; in order to use multiple controllers on the same system and play multiplayer games, a separate peripheral, known in Japan as the MultiTap and in North America as the TurboTap, was required, which allowed up to five controllers to be plugged into the system. The Cordless Multitap was also available exclusively in Japan, sold as a set with a single Cordless Pad, with additional wireless controllers available separately. Because the two console lines use controller ports of different diameters, PC Engine controllers and peripherals are not compatible with TurboGrafx-16 consoles, and vice versa. The TurboDuo reverted to the same controller port that the PC Engine uses, so new TurboDuo-branded versions of the TurboPad and TurboTap peripherals, known as the DuoPad and the DuoTap respectively, were made. An official TurboGrafx-16/Duo Adapter was also produced, an extension cable that allowed any TurboGrafx-16 controller or peripheral to be connected to the TurboDuo console (and, as a side effect, to any PC Engine console). Many peripherals were produced for both the TurboGrafx-16 and PC Engine. The TurboStick is a tabletop joystick designed to replicate the standard control layout of arcade games from the era. Other similar joystick controllers were produced by third-party manufacturers, such as the Python 4 by QuickShot and the Stick Engine by ASCII Corporation. The TurboBooster attached to the back of the system and allowed it to output composite video and stereo audio. Hudson released the Ten no Koe 2 in Japan, which allowed players to save progress in compatible HuCard titles. In 1991, NEC Avenue released the Avenue Pad 3, which added a third action button labelled "III" that could be assigned via a switch to function as either the Select or Run button, as many games had begun to use one of those for in-game commands. 
The Avenue Pad 6 was released in 1993 in conjunction with the PC Engine port of Street Fighter II: Champion Edition, adding four action buttons numbered "III" through "VI"; unlike the three-button pad, these buttons did not duplicate existing buttons, and instead added new functionalities in compatible titles. Another six-button controller, the Arcade Pad 6, was released by NEC Home Electronics in 1994, replacing the TurboPad as the bundled controller of the PC Engine Duo-RX (the last model of the console). Library A total of 686 commercial games were released for the TurboGrafx-16. In North America, the system featured Keith Courage in Alpha Zones as a pack-in game, which was absent from the PC Engine. The PC Engine console received strong third-party support in Japan, while the TurboGrafx-16 console struggled to gain the attention of other developers. Hudson brought over many of its popular franchises, such as Bomberman, Bonk, and Adventure Island, to the system with graphically impressive follow-ups. Hudson also designed and published several original titles such as Air Zonk and Dungeon Explorer. Compile published Alien Crush and Devil's Crush, two well-received virtual pinball games. Namco contributed several high-quality conversions of its arcade games, such as Valkyrie no Densetsu, Pac-Land, Galaga '88, Final Lap Twin, and Splatterhouse, as did Capcom with a port of Street Fighter II': Champion Edition. A large portion of the TurboGrafx-16's library is made up of horizontally and vertically scrolling shooters. Examples include Konami's Gradius and Salamander, Hudson's Super Star Soldier and Soldier Blade, Namco's Galaga '88, Irem's R-Type, and Taito's Darius Alpha, Darius Plus and Super Darius. The console is also known for its platformers and role-playing games; Victor Entertainment's The Legendary Axe won numerous awards and is seen as one of the TurboGrafx-16's definitive titles. Ys I & II, a compilation of two games from Nihon Falcom's Ys series, was particularly successful in Japan. Cosmic Fantasy 2 was an RPG localized from Japan for the United States that earned Electronic Gaming Monthly's RPG of the Year award in 1993. Reception In Japan, the PC Engine was very successful, and at one point it was the top-selling console in the nation. In North America and Europe the situation was reversed, with both Sega and Nintendo dominating the console market at the expense of NEC. Initially, the TurboGrafx-16 sold well in the U.S., but eventually it suffered from lack of support from third-party software developers and publishers. In 1990, ACE magazine praised the console's racing game library, stating that, compared to "all the popular consoles, the PC Engine is way out in front in terms of the range and quality of its race games." Reviewing the Turbo Duo model in 1993, GamePro gave it a "thumbs down". Though they praised the system's CD sound, graphics, and five-player capability, they criticized the outdated controller and the games library, saying the third-party support was "almost nonexistent" and that most of the first-party games were localizations of games better suited to the Japanese market. In 2009, the TurboGrafx-16 was ranked the 13th greatest video game console of all time by IGN, citing "a solid catalog of games worth playing," but also a lack of third-party support and the absence of a second controller port. The controversy over marketing a console by its bit width reappeared with the advent of the Atari Jaguar console. 
Mattel did not market its 1979 Intellivision system with bit width, although it used a 16-bit CPU. Legacy In 1994, NEC released a new console, the Japan-only PC-FX, a 32-bit system with a tower-like design; it enjoyed a small but steady stream of games until 1998, when NEC finally abandoned the video games industry. NEC supplied rival Nintendo with the RISC-based V810 CPU (the same one used in the PC-FX) for the Virtual Boy and the VR4300 CPU for the Nintendo 64, released in 1995 and 1996 respectively; SNK used an updated VR4300 (a 64-bit MIPS III design) in the Hyper Neo Geo 64; and NEC supplied former rival Sega with a version of its PowerVR 2 GPU for the Dreamcast, released in 1997–1998. Emulation programs for the TurboGrafx-16 exist for several modern and retro operating systems and architectures. Popular and regularly updated programs include Mednafen and BizHawk. In 2006, a number of TurboGrafx-16 (TurboChip/HuCard), TurboGrafx-CD (CD-ROM²) and Turbo Duo (Super CD-ROM²) games were released on Nintendo's Virtual Console download service for the Wii, and later for the Wii U and Nintendo 3DS, including several that were originally never released outside Japan. In 2011, ten TurboGrafx-16 games were released on the PlayStation Network for play on the PlayStation 3 and PlayStation Portable in the North American region. In 2010, Hudson released an iPhone application entitled "TurboGrafx-16 GameBox" which allowed users to buy and play a number of select TurboGrafx games via in-app purchases. The 2012 JRPG Hyperdimension Neptunia Victory features a character known as Peashy, who pays homage to the console. In 2016, rapper Kanye West's eighth solo album was initially announced to be titled "Turbo Grafx 16"; the album, however, was eventually scrapped. In 2019, Konami announced the TurboGrafx-16 Mini, a dedicated console featuring many built-in games, at E3 2019 and at Tokyo Game Show 2019. It is the first release of official TurboGrafx-16 family hardware since the closure of Hudson Soft in 2012. On March 6, 2020, Konami announced that the TurboGrafx-16 Mini and its peripheral accessories would be delayed indefinitely from their previous March 19, 2020, launch date due to the COVID-19 pandemic disrupting supply chains in China. It was released in North America on May 22, 2020, and in Europe on June 5, 2020. Notes References External links PC Engine / TurboGrafx-16 Architecture: A Practical Analysis 1980s toys 1990s toys CD-ROM-based consoles Discontinued products Home video game consoles Fourth-generation video game consoles NEC consoles Products introduced in 1987
18559380
https://en.wikipedia.org/wiki/Ksplice
Ksplice
Ksplice is an open-source extension of the Linux kernel that allows security patches to be applied to a running kernel without the need for reboots, avoiding downtime and improving availability (a technique broadly referred to as dynamic software updating). Ksplice supports only patches that do not make significant semantic changes to the kernel's data structures. Ksplice has been implemented for Linux on the IA-32 and x86-64 architectures. It was developed by Ksplice, Inc. until 21 July 2011, when Oracle acquired Ksplice and then offered support for Oracle Linux. Support for Red Hat Enterprise Linux was dropped and turned into a free 30-day trial for RHEL customers as an incentive to migrate to Oracle Linux Premier Support. At the same time, use of the Oracle Unbreakable Enterprise Kernel (UEK) became a requirement for using Ksplice on production systems. Ksplice is available for free on desktop Linux installations, with official support available for the Fedora and Ubuntu Linux distributions. Design Ksplice takes as input a unified diff and the original kernel source code, and it updates the running kernel in memory. Using Ksplice does not require any preparation before the system is originally booted (the running kernel needs no special prior compilation, for example). In order to generate an update, Ksplice must determine what code within the kernel has been changed by the source code patch. Ksplice performs this analysis at the Executable and Linkable Format (ELF) object code layer, rather than at the C source code layer. To apply a patch, Ksplice first freezes execution of the computer so that it is the only code running. The system verifies that no processors were in the middle of executing functions that will be modified by the patch. Ksplice modifies the beginning of changed functions so that they instead point to new, updated versions of those functions, and modifies data and structures in memory that need to be changed. Finally, Ksplice resumes each processor running where it left off. To be fully automatic, Ksplice's design was originally limited to patches that did not introduce semantic changes to data structures, since most Linux kernel security patches do not make these kinds of changes. An evaluation against Linux kernel security patches from May 2005 to May 2008 found that Ksplice was able to apply fixes for all 64 significant kernel vulnerabilities discovered in that interval. In 2009, major Linux vendors asked their customers to install a kernel update more than once per month. For patches that do introduce semantic changes to data structures, Ksplice requires a programmer to write a short amount of additional code to help apply the patch. This was necessary for about 12% of the updates in that time period. History The Ksplice software was created by four MIT students based on Jeff Arnold's master's thesis, and they later founded Ksplice, Inc. Around May 2009, the company won the MIT $100K Entrepreneurship Competition and the Cyber Security Challenge of the Global Security Challenge. Whereas the Ksplice software was provided under an open-source license, Ksplice, Inc. provided a service to make it easier to use the software. Ksplice, Inc. provided prebuilt and tested updates for the Red Hat, CentOS, Debian, Ubuntu and Fedora Linux distributions. The virtualization technologies OpenVZ and Virtuozzo were also supported. Updates for Ubuntu Desktop and Fedora systems were provided free of charge, whereas other platforms were offered on a subscription basis. 
On 21 July 2011, Oracle Corporation announced that it had acquired Ksplice, Inc. At the time the company was acquired, Ksplice, Inc. claimed to have over 700 companies using the service to protect over 100,000 servers. While the service had been available for multiple Linux distributions, it was stated at the time of acquisition that "Oracle believes it will be the only enterprise Linux provider that can offer zero downtime updates." More explicitly, "Oracle does not plan to support the use of Ksplice technology with Red Hat Enterprise Linux." Existing legacy customers continue to be supported by Ksplice, but no new customers are being accepted for other platforms. Ksplice is available for free on Fedora and Ubuntu. In January 2016, Ksplice was integrated into Oracle's Unbreakable Enterprise Kernel Release 4 for Oracle Linux 6 and 7, Oracle's redistribution of Red Hat Enterprise Linux. See also kexec, a method for loading a whole new kernel from a running system kGraft, kpatch and KernelCare, other Linux kernel live patching technologies developed by SUSE, Red Hat and CloudLinux, respectively Loadable kernel module References External links Free security software programmed in C Linux kernel live patching Linux-only free software
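The redirection step described in the Design section above can be illustrated, in greatly simplified form, in user space. The sketch below is a conceptual illustration only and not Ksplice's kernel implementation: it overwrites the first bytes of a hypothetical old_version() with an x86-64 relative jump to new_version(), assuming a Linux/x86-64 system that allows mprotect with PROT_WRITE | PROT_EXEC on the program's own code pages.

#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical "old" and "new" versions of a function; noinline keeps the
   compiler from inlining the call we redirect below. */
__attribute__((noinline)) static int old_version(void) { return 1; }
__attribute__((noinline)) static int new_version(void) { return 2; }

int main(void) {
    uint8_t *target = (uint8_t *)old_version;
    long page = sysconf(_SC_PAGESIZE);
    uint8_t *page_start = (uint8_t *)((uintptr_t)target & ~((uintptr_t)page - 1));

    /* Make the code pages writable; a kernel-level patcher changes page
       protections through entirely different means. */
    if (mprotect(page_start, 2 * (size_t)page,
                 PROT_READ | PROT_WRITE | PROT_EXEC) != 0) {
        perror("mprotect");
        return 1;
    }

    /* Encode "jmp rel32" (opcode 0xE9); the displacement is measured from the
       end of the 5-byte instruction to the replacement function. */
    int32_t rel = (int32_t)((uint8_t *)new_version - (target + 5));
    target[0] = 0xE9;
    memcpy(&target[1], &rel, sizeof rel);

    /* Calls to old_version() now land in new_version(); this prints 2. */
    int (*volatile redirected)(void) = old_version;
    printf("old_version() returns %d\n", redirected());
    return 0;
}

Ksplice itself performs the analogous rewrite inside the running kernel, working from the ELF object-code comparison described above and only after verifying that no processor is executing the functions being replaced.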
19114867
https://en.wikipedia.org/wiki/Samuel%20Madden%20%28computer%20scientist%29
Samuel Madden (computer scientist)
Samuel R. Madden (born August 4, 1976) is an American computer scientist specializing in database management systems. He is currently a professor of computer science at the Massachusetts Institute of Technology. Career Madden was born and raised in San Diego, California. After completing bachelor's and master's degrees at MIT, he earned a Ph.D. specializing in database management at the University of California, Berkeley under Michael Franklin and Joseph M. Hellerstein. Before joining MIT as a tenure-track professor, Madden held a postdoctoral position at Intel's Berkeley research center. Madden has been involved in a number of database research projects, including TinyDB, TelegraphCQ, Aurora/Borealis, C-Store, and H-Store. In 2005, at the age of 29, he was named to the TR35 as one of the Top 35 Innovators Under 35 by MIT Technology Review magazine. Recent projects include DataHub, a "GitHub for data" platform that provides hosted database storage, versioning, ingest, search, and visualization (commercialized as Instabase); CarTel, a distributed wireless platform that monitors traffic and on-board diagnostic conditions in order to generate road surface reports; and Relational Cloud, a project investigating research issues in building a database-as-a-service. Madden has published more than 250 scholarly articles, which have received more than 59,000 citations, and he has an h-index of 101. In addition, Madden is a co-founder of Cambridge Mobile Telematics and Vertica Systems. Before enrolling at MIT and while an undergraduate student there, Madden wrote printer driver software for Palomar Software, a San Diego-area Macintosh software company. He is also a Technology Expert Partner at Omega Venture Partners. Education Ph.D., Computer Science, 2003. University of California, Berkeley. M.Eng., Electrical Engineering and Computer Science, 1999. Massachusetts Institute of Technology. B.S., Electrical Engineering and Computer Science, 1999. Massachusetts Institute of Technology. Morse High School, 1994. Samuel F.B. Morse High School. References 1976 births Living people American computer scientists Massachusetts Institute of Technology alumni Massachusetts Institute of Technology faculty People from San Diego University of California, Berkeley alumni
4271664
https://en.wikipedia.org/wiki/List%20of%20atmospheric%20dispersion%20models
List of atmospheric dispersion models
Atmospheric dispersion models are computer programs that use mathematical algorithms to simulate how pollutants in the ambient atmosphere disperse and, in some cases, how they react in the atmosphere. US Environmental Protection Agency models Many of the dispersion models developed by or accepted for use by the U.S. Environmental Protection Agency (U.S. EPA) are accepted for use in many other countries as well. Those EPA models are grouped below into four categories. Preferred and recommended models IBS The IBS flow and dispersion model was developed as part of a project funded by the Ministry for the Environment and Agriculture of the State of Saxony-Anhalt (Schenk, 1996). The mathematics and mechanics used are described in Schenk's habilitation thesis (Schenk, 1980). The wind field and pollutant dispersion are calculated by numerically solving the coupled system of differential equations for momentum, heat and mass transfer. The model is described in detail in Schenk (1996). Flow models based on the complete solution of the three-dimensional Navier–Stokes equations are regarded as high-quality. IBS also does not assume divergence-free flow; this reflects the fact that air is treated as a compressible fluid rather than an incompressible one such as a lubricant or water. Before numerical algorithms can be used, they must be validated. Reference solutions are used for this purpose, such as the three-dimensional Gaussian distribution function, which is often used to validate dispersion models because analytical and numerical solutions can easily be compared, both for spatially distributed fields and for ground-level concentrations. AERMOD – An atmospheric dispersion model based on atmospheric boundary layer turbulence structure and scaling concepts, including treatment of multiple ground-level and elevated point, area and volume sources. It handles flat or complex, rural or urban terrain and includes algorithms for building effects and plume penetration of inversions aloft. It uses Gaussian dispersion for stable atmospheric conditions (i.e., low turbulence) and non-Gaussian dispersion for unstable conditions (high turbulence). Algorithms for plume depletion by wet and dry deposition are also included in the model. This model was in development for approximately 14 years before being officially accepted by the U.S. EPA. CALPUFF – A non-steady-state puff dispersion model that simulates the effects of time- and space-varying meteorological conditions on pollution transport, transformation, and removal. CALPUFF can be applied for long-range transport and for complex terrain. BLP – A Gaussian plume dispersion model designed to handle unique modelling problems associated with industrial sources where plume rise and downwash effects from stationary line sources are important. CALINE3 – A steady-state Gaussian dispersion model designed to determine pollution concentrations at receptor locations downwind of highways located in relatively uncomplicated terrain. CAL3QHC and CAL3QHCR – CAL3QHC is a CALINE3 based model with queuing calculations and a traffic model to calculate delays and queues that occur at signalized intersections. CAL3QHCR is a more refined version based on CAL3QHC that requires local meteorological data. CTDMPLUS – A complex terrain dispersion model (CTDM) plus algorithms for unstable situations (i.e., highly turbulent atmospheric conditions). 
It is a refined point source Gaussian air quality model for use in all stability conditions (i.e., all conditions of atmospheric turbulence) for complex terrain. OCD – Offshore and coastal dispersion model (OCD) is a Gaussian model developed to determine the impact of offshore emissions from point, area or line sources on the air quality of coastal regions. It incorporates overwater plume transport and dispersion as well as changes that occur as the plume crosses the shoreline. Alternative models ADAM – Air force dispersion assessment model (ADAM) is a modified box and Gaussian dispersion model which incorporates thermodynamics, chemistry, heat transfer, aerosol loading, and dense gas effects. ADMS 3 – Atmospheric dispersion modelling system (ADMS 3) is an advanced dispersion model developed in the United Kingdom for calculating concentrations of pollutants emitted both continuously from point, line, volume and area sources, or discretely from point sources. AFTOX – A Gaussian dispersion model that handles continuous or puff, liquid or gas, elevated or surface releases from point or area sources. DEGADIS – Dense gas dispersion (DEGADIS) is a model that simulates the dispersion at ground level of area source clouds of denser-than-air gases or aerosols released with zero momentum into the atmosphere over flat, level terrain. HGSYSTEM – A collection of computer programs developed by Shell Research Ltd. and designed to predict the source-term and subsequent dispersion of accidental chemical releases with an emphasis on dense gas behavior. HOTMAC and RAPTAD – HOTMAC is a model for weather forecasting used in conjunction with RAPTAD which is a puff model for pollutant transport and dispersion. These models are used for complex terrain, coastal regions, urban areas, and around buildings where other models fail. HYROAD – The hybrid roadway model integrates three individual modules simulating the pollutant emissions from vehicular traffic and the dispersion of those emissions. The dispersion module is a puff model that determines concentrations of carbon monoxide (CO) or other gaseous pollutants and particulate matter (PM) from vehicle emissions at receptors within 500 meters of the roadway intersections. ISC3 – A Gaussian model used to assess pollutant concentrations from a wide variety of sources associated with an industrial complex. This model accounts for: settling and dry deposition of particles; downwash; point, area, line, and volume sources; plume rise as a function of downwind distance; separation of point sources; and limited terrain adjustment. ISC3 operates in both long-term and short-term modes. OBODM – A model for evaluating the air quality impacts of the open burning and detonation (OB/OD) of obsolete munitions and solid propellants. It uses dispersion and deposition algorithms taken from existing models for instantaneous and quasi-continuous sources to predict the transport and dispersion of pollutants released by the open burning and detonation operations. PANACHE – Fluidyn-PANACHE is an Eulerian (and Lagrangian for particulate matter), 3-dimensional finite volume fluid mechanics model designed to simulate continuous and short-term pollutant dispersion in the atmosphere, in simple or complex terrain. PLUVUEII – A model that estimates atmospheric visibility degradation and atmospheric discoloration caused by plumes resulting from the emissions of particles, nitrogen oxides, and sulfur oxides. 
The model predicts the transport, dispersion, chemical reactions, optical effects and surface deposition of such emissions from a single point or area source. SCIPUFF – A puff dispersion model that uses a collection of Gaussian puffs to predict three-dimensional, time-dependent pollutant concentrations. In addition to the average concentration value, SCIPUFF predicts the statistical variance in the concentrations resulting from the random fluctuations of the wind. SDM – Shoreline dispersion model (SDM) is a Gaussian dispersion model used to determine ground-level concentrations from tall stationary point source emissions near a shoreline. SLAB – A model for denser-than-air gaseous plume releases that utilizes the one-dimensional equations of momentum, conservation of mass and energy, and the equation of state. SLAB handles point source ground-level releases, elevated jet releases, releases from volume sources and releases from the evaporation of volatile liquid spill pools. Screening models These are models that are often used before applying a refined air quality model to determine if refined modelling is needed. AERSCREEN – The screening version of AERMOD. It produces estimates of concentrations, without the need for meteorological data, that are equal to or greater than the estimates produced by AERMOD with a full set of meteorological data. The U.S. EPA released version 11060 of AERSCREEN on 11 March 2010 with a subsequent update, version 11076, on 17 March 2010. The U.S. EPA published the "Clarification memorandum on AERSCREEN as the recommended screening model" on 11 April 2011. CTSCREEN – The screening version of CTDMPLUS. SCREEN3 – The screening version of ISC3. TSCREEN – Toxics screening model (TSCREEN) is a Gaussian model for screening toxic air pollutant emissions and their subsequent dispersion from possible releases at superfund sites. It contains 3 modules: SCREEN3, PUFF, and RVD (Relief Valve Discharge). VALLEY – A screening, complex terrain, Gaussian dispersion model for estimating 24-hour or annual concentrations resulting from up to 50 point and area emission sources. COMPLEX1 – A multiple point source screening model with terrain adjustment that uses the plume impaction algorithm of the VALLEY model. RTDM3.2 – Rough terrain diffusion model (RTDM3.2) is a Gaussian model for estimating ground-level concentrations of one or more co-located point sources in rough (or flat) terrain. VISCREEN – A model that calculates the impact of specified emissions for specific transport and dispersion conditions. Photochemical models Photochemical air quality models have become widely utilized tools for assessing the effectiveness of control strategies adopted by regulatory agencies. These models are large-scale air quality models that simulate the changes of pollutant concentrations in the atmosphere by characterizing the chemical and physical processes in the atmosphere. These models are applied at multiple geographical scales ranging from local and regional to national and global. Models-3/CMAQ – The latest version of the community multi-scale air quality (CMAQ) model has state-of-the-science capabilities for conducting urban to regional scale simulations of multiple air quality issues, including tropospheric ozone, fine particles, toxics, acid deposition, and visibility degradation. CAMx – The comprehensive air quality model with extensions (CAMx) simulates air quality over many geographic scales. 
It handles a variety of inert and chemically active pollutants, including ozone, particulate matter, inorganic and organic PM2.5/PM10, and mercury and other toxics. REMSAD – The regional modeling system for aerosols and deposition (REMSAD) calculates the concentrations of both inert and chemically reactive pollutants by simulating the atmospheric processes that affect pollutant concentrations over regional scales. It includes processes relevant to regional haze, particulate matter and other airborne pollutants, including soluble acidic components and mercury. UAM-V – The urban airshed model was a pioneering effort in photochemical air quality modelling in the early 1970s and has been used widely for air quality studies focusing on ozone. Other models developed in the United States CHARM – A model capable of simulating dispersion of toxics and particles. It can calculate impacts of thermal radiation from fires, overpressures from mechanical failures and explosions, and nuclear radiation from radionuclide releases. CHARM is capable of handling effects of complex terrain and buildings. A Lagrangian puff screening version and an Eulerian full-function version are available. HYSPLIT – Hybrid Single Particle Lagrangian Integrated Trajectory Model, developed at NOAA's Air Resources Laboratory. The HYSPLIT model is a complete system for computing everything from simple air parcel trajectories to complex dispersion and deposition simulations. PUFF-PLUME – A Gaussian chemical/radionuclide dispersion model that includes wet and dry deposition, real-time input of meteorological observations and forecasts, dose estimates from inhalation and gamma shine, and puff or plume dispersion modes. It is the primary model for emergency response use for atmospheric releases of radioactive materials at the Savannah River Site of the United States Department of Energy. It was first developed by the Pacific Northwest National Laboratory (PNNL) in the 1970s. Puff model – Puff is a volcanic ash tracking model developed at the University of Alaska Fairbanks. It requires NWP wind field data on a geographic grid covering the area over which ash may be dispersed. Representative ash particles are initiated at the volcano's location and then allowed to advect, diffuse, and settle within the atmosphere. The location of the particles at any time after the eruption can be viewed using the post-processing software included with the model. Output data is in netCDF format and can also be viewed with a variety of software. Models developed in the United Kingdom ADMS-3 – See the description of this model in the alternative models section of the models accepted by the U.S. EPA. ADMS-URBAN – A model for simulating dispersion on scales ranging from a street scale to citywide or county-wide scale, handling most relevant emission sources such as traffic, industrial, commercial, and domestic sources. It is also used for air quality management and assessments of current and future air quality vis-a-vis national and regional standards in Europe and elsewhere. ADMS-Roads – A model for simulating dispersion of vehicular pollutant emissions from small road networks in combination with emissions from industrial plants.
It handles multiple road sources as well as multiple point, line or area emission sources, and the model operation is similar to the other ADMS models. ADMS-Screen – A screening model for rapid assessment of the air quality impact of a single industrial stack to determine if more detailed modelling is needed. It combines the dispersion modelling algorithms of the ADMS models with a user interface requiring minimal input data. GASTAR – A model for simulating accidental releases of denser-than-air flammable and toxic gases. It handles instantaneous and continuous releases, releases from jet sources, releases from evaporation of volatile liquid pools, variable terrain slopes and ground roughness, obstacles such as fences and buildings, and time-varying releases. NAME – Numerical atmospheric-dispersion modelling environment (NAME) is a local to global scale model developed by the UK's Met Office. It is used for: forecasting of air quality, air pollution dispersion, and acid rain; tracking radioactive emissions and volcanic ash discharges; analysis of accidental air pollutant releases and assisting in emergency response; and long-term environmental impact analysis. It is an integrated model that includes boundary layer dispersion modelling. UDM – The Urban dispersion model is a Gaussian puff-based model for predicting the dispersion of atmospheric pollutants in the range of 10 m to 25 km throughout the urban environment. It was developed by the Defence Science and Technology Laboratory for the UK Ministry of Defence. It handles instantaneous, continuous, and pool releases, and can model gases, particulates, and liquids. The model has a three-regime structure: single building (area density < 5%), urban array (area density > 5%) and open. The model can be coupled with the US model SCIPUFF to replace the open regime and extend the model's prediction range. Models developed in continental Europe The European Topic Centre on Air and Climate Change, which is part of the European Environment Agency (EEA), maintains an online Model Documentation System (MDS) that includes descriptions and other information for almost all of the dispersion models developed by the countries of Europe. The MDS currently (July 2012) contains 142 models, mostly developed in Europe. Of those 142 models, some were subjectively selected for inclusion here. The complete MDS can be accessed online. Some of the European models listed in the MDS are public domain and some are not. Many of them include a pre-processor module for the input of meteorological and other data, and many also include a post-processor module for graphing the output data and/or plotting the area impacted by the air pollutants on maps. The country of origin is included for each of the European models listed below. AEROPOL (Estonia) – The AERO-POLlution model developed at the Tartu Observatory in Estonia is a Gaussian plume model for simulating the dispersion of continuous, buoyant plumes from stationary point, line and area sources over flat terrain on a local to regional scale. It includes plume depletion by wet and/or dry deposition as well as the effects of buildings in the plume path. Airviro Gauss (Sweden) – A Gaussian dispersion model that handles point, road, area and grid sources, developed by SMHI. Plumes follow trajectories from a wind model and each plume has a cutoff dependent on wind speed. The model also supports irregular calculation grids. Airviro Grid (Sweden) – A simplified Eulerian model developed by SMHI.
Can handle point, road, area and grid sources. Includes dry and wet deposition and sedimentation. Airviro Heavy Gas (Sweden) – A model for heavy gas dispersion developed by SMHI. Airviro receptor model (Sweden) – An inverse dispersion model developed by SMHI, used to find emission sources. ATSTEP (Germany) – Gaussian puff dispersion and deposition model used in the decision support system RODOS (real-time on-line decision support) for nuclear emergency management. RODOS is operated in Germany by the Federal Office for Radiation Protection (BfS) and is in test operation in many other European countries. AUSTAL2000 (Germany) – The official air dispersion model to be used in the permitting of industrial sources by the German Federal Environmental Agency. The model accommodates point, line, area and volume sources of buoyant plumes. It has capabilities for building effects, complex terrain, plume depletion by wet or dry deposition, and first-order chemical reactions. It is based on the LASAT model developed by Ingenieurbüro Janicke Gesellschaft für Umweltphysik. BUO-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) specifically for estimating the atmospheric dispersion of neutral or buoyant plume gases and particles emitted from fires in warehouses and chemical stores. It is a hybrid of a local scale Gaussian plume model and another model type. Plume depletion by dry deposition is included but wet deposition is not included. CAR-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) for evaluating atmospheric dispersion and chemical transformation of vehicular emissions of inert (CO, NOx) and reactive (NO, NO2, O3) gases from a road network of line sources on a local scale. It is a Gaussian line source model which includes an analytical solution for the chemical cycle NO-O3-NO2. CAR-International (The Netherlands) – Calculation of air pollution from road traffic (CAR-International) is an atmospheric dispersion model developed by the Netherlands Organisation for Applied Scientific Research. It is used for simulating the dispersion of vehicular emissions from roadway traffic. DIPCOT (Greece) – Dispersion over complex terrain (DIPCOT) is a model developed at the National Centre of Scientific Research "DEMOKRITOS" of Greece that simulates dispersion of buoyant plumes from multiple point sources over complex terrain on a local to regional scale. It does not include wet deposition or chemical reactions. DISPERSION21 (Sweden) – This model was developed by the Swedish Meteorological and Hydrological Institute (SMHI) for evaluating air pollutant emissions from existing or planned industrial or urban sources on a local scale. It is a Gaussian plume model for point, area, line and vehicular traffic sources. It includes plume penetration of inversions aloft, building effects, NOx chemistry and it can handle street canyons. It does not include wet or dry deposition, complex atmospheric chemistry, or the effects of complex terrain. DISPLAY-2 (Greece) – A vapour cloud dispersion model for neutral or denser-than-air pollution plumes over irregular, obstructed terrain on a local scale. It accommodates jet releases as well as two-phase (i.e., liquid-vapor mixtures) releases. This model was also developed at the National Centre of Scientific Research "DEMOKRITOS" of Greece.
EK100W (Poland) – A Gaussian plume model used for air quality impact assessments of pollutants from industrial point sources as well as for urban air quality studies on a local scale (a minimal sketch of the basic Gaussian plume formula that such models share is given below, after the LAPMOD entry). It includes wet and dry deposition. The effects of complex terrain are not included. FARM (Italy) – The Flexible Air quality Regional Model (FARM) is a multi-grid Eulerian model for dispersion, transformation and deposition of airborne pollutants in gas and aerosol phases, including photo-oxidants, aerosols, heavy metals and other toxics. It is suited for case studies, air quality assessments, scenario analyses and pollutant forecasting. FLEXPART (Austria/Germany/Norway) – An efficient and flexible Lagrangian particle transport and diffusion model for regional to global applications, with the capability to run in forward and backward mode. Freely available. Developed at BOKU Vienna, the Technical University of Munich, and NILU. GRAL (Austria) – The GRAz Lagrangian model was initially developed at the Graz University of Technology and is a dispersion model for buoyant plumes from multiple point, line, area and tunnel portal sources. It handles flat or complex terrain (mesoscale prognostic flow field model) including building effects (microscale prognostic flow field model) but it has no chemistry capabilities. The model is freely available: http://lampz.tugraz.at/~gral/ HAVAR (Czech Republic) – A Gaussian plume model integrated with a puff model and a hybrid plume-puff model, developed by the Czech Academy of Sciences and intended for routine and/or accidental releases of radionuclides from single point sources within nuclear power plants. The model includes radioactive plume depletion by dry and wet deposition as well as by radioactive decay. For the decay of some nuclides, the in-growth of daughter products in the plume is taken into account. IFDM (Belgium) – The immission frequency distribution model, developed at the Flemish Institute for Technological Research (VITO), is a Gaussian dispersion model used for point and area sources dispersing over flat terrain on a local scale. The model includes plume depletion by dry or wet deposition and has been updated to handle building effects and the O3-NOx chemistry. It is not designed for complex terrain or other chemically reactive pollutants. INPUFF-U (Romania) – This model was developed by the National Institute of Meteorology and Hydrology in Bucharest, Romania. It is a Gaussian puff model for calculating the dispersion of radionuclides from passive emission plumes on a local to urban scale. It can simulate accidental or continuous releases from stationary or mobile point sources. It includes wet and dry deposition. Building effects, buoyancy effects, chemical reactions and effects of complex terrain are not included. LAPMOD (Italy) – The LAPMOD (LAgrangian Particle MODel) modeling system was developed by Enviroware and is available free of charge. LAPMOD is a Lagrangian particle model fully coupled to the diagnostic meteorological model CALMET and can be used to simulate the dispersion of inert pollutants as well as odors and radioactive substances. It includes dry and wet deposition algorithms and advanced numerical schemes for plume rise (Janicke and Janicke, Webster and Thomson). It is part of ARIES, the official Italian modeling system for nuclear emergencies operated by ISPRA and by the regional environmental protection agency of Emilia-Romagna, Italy.
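Several entries in this list, for example DISPERSION21, EK100W and IFDM above and PLUME and OML further below, are steady-state Gaussian plume models. As a rough illustration of the basic ground-reflection formula that such models build on (not an implementation of any particular model listed here), the following Python sketch evaluates the standard Gaussian plume equation for a continuous point source; the dispersion coefficients and emission figures in the example call are placeholder assumptions:
import math

def gaussian_plume_concentration(Q, u, y, z, H, sigma_y, sigma_z):
    """Steady-state Gaussian plume concentration at a receptor, with ground reflection.

    Q: emission rate (g/s); u: mean wind speed (m/s); y: crosswind distance (m);
    z: receptor height (m); H: effective stack height (m); sigma_y, sigma_z:
    horizontal and vertical dispersion coefficients (m) at the receptor's
    downwind distance.
    """
    lateral = math.exp(-y ** 2 / (2.0 * sigma_y ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2.0 * sigma_z ** 2))
                + math.exp(-(z + H) ** 2 / (2.0 * sigma_z ** 2)))  # ground reflection term
    return Q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical

# Example: ground-level, centreline concentration downwind of a 50 m stack.
# All numbers are placeholder values, not taken from any model listed here.
c = gaussian_plume_concentration(Q=100.0, u=5.0, y=0.0, z=0.0,
                                 H=50.0, sigma_y=70.0, sigma_z=35.0)
In the actual models listed here, sigma_y and sigma_z are not free inputs but are derived from downwind distance and atmospheric stability, and additional terms handle plume rise, deposition, chemistry and terrain.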
LOTOS-EUROS (The Netherlands) – The long-term ozone simulation – European operational smog (LOTOS-EUROS) model was developed by the Netherlands Organisation for Applied Scientific Research (TNO) and the Netherlands National Institute for Public Health and the Environment (RIVM) in The Netherlands. It is designed for modelling the dispersion of pollutants (such as photo-oxidants, aerosols and heavy metals) over all of Europe. It includes simple reaction chemistry as well as wet and dry deposition. MATCH (Sweden) – The multi-scale atmospheric transport and chemistry model (MATCH) is a three-dimensional Eulerian model suitable for scales ranging from urban to global. MEMO (Greece) – A Eulerian non-hydrostatic prognostic mesoscale model for wind flow simulation. It was developed by the Aristotle University of Thessaloniki in collaboration with the Universität Karlsruhe. This model is designed for describing atmospheric transport phenomena on the local-to-regional scale; such models are often referred to as mesoscale air pollution models. MERCURE (France) – An atmospheric dispersion modeling CFD code developed by Electricite de France (EDF) and distributed by ARIA Technologies, a French company. The code is a version of the CFD software ESTET, developed by EDF's Laboratoire National d'Hydraulique. MODIM (Slovak Republic) – A model for calculating the dispersion of continuous, neutral or buoyant plumes on a local to regional scale. It integrates a Gaussian plume model for single or multiple point and area sources with a numerical model for line sources, street networks and street canyons. It is intended for regulatory and planning purposes. MSS (France) – Micro-Swift-Spray is a Lagrangian particle model used to predict the transport and dispersion of contaminants in urban environments. The SWIFT portion of this model predicts a mass-consistent wind field that considers terrain; no-penetration conditions for building boundaries; Rockle zones for recirculation, edge, and rooftop separation; and background and locally generated turbulence. The SPRAY portion of the tool handles the dispersion of passive gases, dense gases, and particulates. SPRAY also accounts for plume buoyancy effects, wet and dry depositions, and calculates microscale pressure fields for integration with building models. The MSS development team is found at ARIA Technologies (France) and U.S. integration activities are led by Leidos. Validation testing of MSS has been done in conjunction with JEM and HPAC tool releases and the model is coupled with SCIPUFF/UDM to create a nested dispersion capability inside HPAC. For more information on MSS see http://www.aria.fr. MUSE (Greece) – A photochemical atmospheric dispersion model developed by Professor Nicolas Moussiopoulos at the Aristotle University of Thessaloniki in Greece. It is intended for the study of photochemical smog formation in urban areas and assessment of control strategies on a local to regional scale. It can simulate dry deposition, and the transformation of pollutants can be treated using any suitable chemical reaction mechanism. OML (Denmark) – A model for dispersion calculations of continuous neutral or buoyant plumes from single or multiple, stationary point and area sources. It has some simple methods for handling photochemistry (primarily for NO2) and for handling complex terrain. The model was developed by the National Environmental Research Institute of Denmark. It is now maintained by the Department of Environmental Science, Aarhus University.
For further reference, see also the OML home page. ONM9440 (Austria) – A Gaussian dispersion model for continuous, buoyant plumes from stationary sources for use in flat terrain areas. It includes plume depletion by dry deposition of solid particulates. OSPM (Denmark) – The operational street pollution model (OSPM) is a practical street pollution model, developed by the National Environmental Research Institute of Denmark. It is now maintained by the Department of Environmental Science, Aarhus University. For almost 20 years, OSPM has been routinely used in many countries for studying traffic pollution, performing analyses of field campaign measurements, studying the efficiency of pollution abatement strategies, carrying out exposure assessments and as a reference in comparisons to other models. OSPM is generally considered state-of-the-art in applied street pollution modelling. For further reference, see also the OSPM home page. PANACHE (France) – fluidyn-PANACHE is a self-contained, fully 3D fluid dynamics software package designed to simulate accidental or continuous industrial and urban pollutant dispersion into the atmosphere. It simulates the release and dispersion of toxic and flammable pollutants under various weather conditions, using computed three-dimensional complex wind and turbulence fields. Flow and transport/diffusion of gases, particles and droplets are simulated with the Navier-Stokes equations for jet-like, dense, cold, cryogenic, hot or buoyant releases. The application covers the very short scale (tens of meters) and the local scale (about ten kilometers), where the complex flow patterns related to obstacles, variable land use and topography are calculated explicitly. PROKAS-V (Germany) – A Gaussian dispersion model for evaluating the atmospheric dispersion of air pollutants emitted from vehicular traffic on a road network of line sources on a local scale. PLUME (Bulgaria) – A conventional Gaussian plume model used in many regulatory applications. The basis of the model is a single simple formula which assumes constant wind speed and reflection from the ground surface (see the Gaussian plume sketch given earlier in this list). The horizontal and vertical dispersion parameters are a function of downwind distance and stability. The model was developed for routine applications in air quality assessment, regulatory purposes and policy support. POLGRAPH (Portugal) – This model was developed at the University of Aveiro, Portugal by Professor Carlos Borrego. It was designed for evaluating the impact of industrial pollutant releases and for air quality assessments. It is a Gaussian plume dispersion model for continuous, elevated point sources to be used on a local scale over flat or gently rolling terrain. RADM (France) – The random-walk advection and dispersion model (RADM) was developed by ACRI-ST, an independent research and development organization in France. It can model gas plumes and particles (including pollutants with exponential decay or formation rates) from single or multiple stationary, mobile or area sources. Chemical reaction, radioactive decay, deposition, complex terrain, and inversion conditions are accommodated. RIMPUFF (Denmark) – A local and regional scale real-time puff diffusion model developed by the Risø National Laboratory for Sustainable Energy, Technical University of Denmark (Risø DTU). RIMPUFF is an operational emergency response model in use for assisting emergency management organisations dealing with chemical, nuclear, biological and radiological (CBRN) releases to the atmosphere.
RIMPUFF is in operation in several European national emergency centres for preparedness and prediction of nuclear accidental releases (RODOS, EURANOS, ARGOS) and chemical gas releases (ARGOS), and also serves as a decision support tool during active combatting of airborne transmission of various biological infections, including, for example, Foot-and-Mouth Disease outbreaks. SAFE AIR II (Italy) – The simulation of air pollution from emissions II (SAFE AIR II) was developed at the Department of Physics, University of Genoa, Italy to simulate the dispersion of air pollutants above complex terrain at local and regional scales. It can handle point, line, area and volume sources and continuous plumes as well as puffs. It includes first-order chemical reactions and plume depletion by wet and dry deposition, but it does not include any photochemistry. SEVEX (Belgium) – The Seveso expert model simulates the accidental release of toxic and/or flammable material over flat or complex terrain from multiple pipe and vessel sources or from evaporation of volatile liquid spill pools. The accidental releases may be continuous, transient or catastrophic. The integrated model can handle denser-than-air gases as well as neutral gases (i.e., neither denser nor lighter than air). It does not include handling of multi-component material, nor does it provide for chemical transformation of the releases. The model's name is derived from the major disaster caused by the accidental release of highly toxic gases that occurred in Seveso, Italy in 1976. SPRAY (Italy, France) – A Lagrangian particle dispersion model (LPDM) which simulates the transport, dispersion and deposition of pollutants emitted from sources of different kinds over complex terrain and in the presence of obstacles. The model readily takes into account complex situations, such as the presence of breeze cycles, strong meteorological inhomogeneities, non-stationary low-wind calm conditions and recirculations. Simulations can cover areas ranging from very local (less than one kilometer) to regional (hundreds of kilometres) scales. Plume rise of hot emissions from stacks is taken into account using a Briggs formulation. Algorithms for particle-oriented dry/wet deposition processes and for gravitational settling are included. Dry deposition can be computed on the ground and also on the roofs and lateral faces of obstacles. Dispersion in generalized geometries such as arches, tunnels and walkways can be simulated. Dense gas dispersion is simulated using five conservation equations (mass, energy, vertical momentum and two horizontal momenta) based on Glandening et al. (1984) and Hurley and Manins (1995). Plume spread at the ground due to gravity is also simulated by a method (Anfossi et al., 2009) based on Eidsvik (1980). STACKS (The Netherlands) – A Gaussian plume dispersion model for point and area buoyant plumes to be used over flat terrain on a local scale. It includes building effects, NO2 chemistry and plume depletion by deposition. It is used for environmental impact studies and evaluation of emission reduction strategies. STOER.LAG (Germany) – A dispersion model designed to evaluate accidental releases of hazardous and/or flammable materials from point or area sources in industrial plants. It can handle neutral and denser-than-air gases or aerosols from ground-level or elevated sources.
The model accommodates building and terrain effects, evaporation of volatile liquid spill pools, and combustion or explosion of flammable gas-air mixtures (including the impact of heat and pressure waves caused by a fire or explosion). SYMOS'97 (Czech Republic) – A model developed by the Czech Hydrometeorological Institute for dispersion calculations of continuous neutral or buoyant plumes from single or multiple point, area or line sources. It can handle complex terrain and it can also be used to simulate the dispersion of cooling tower plumes. TCAM (Italy) – A multiphase, three-dimensional Eulerian grid model designed by the ESMA group of the University of Brescia for modelling the dispersion of pollutants (in particular photochemical pollutants and aerosols) at the mesoscale. UDM-FMI (Finland) – This model was developed by the Finnish Meteorological Institute (FMI) as an integrated Gaussian urban scale model intended for regulatory pollution control. It handles multiple point, line, area and volume sources and it includes chemical transformation (for NO2), wet and dry deposition (for SO2), and downwash phenomena (but no building effects). VANADIS (Poland) – A three-dimensional, unsteady-state Eulerian dispersion model, distributed as a demonstration version with accompanying documentation. Models developed in Australia AUSPLUME – A dispersion model that has been designated as the primary model accepted by the Environmental Protection Authority (EPA) of the Australian state of Victoria. (Update: from 1 January 2014, AUSPLUME V6 is no longer the regulatory air pollution dispersion model in Victoria; it has been replaced by AERMOD.) pDsAUSMOD – Australian graphical user interface for AERMOD pDsAUSMET – Australian meteorological data processor for AERMOD LADM – An advanced model developed by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO) for simulating the dispersion of buoyant pollution plumes and predicting the photochemical formation of smog over complex terrain on a local to regional scale. The model can also handle fumigated plumes (see the "Further reading" section below for an explanation of fumigated plumes). TAPM – An advanced dispersion model integrated with a pre-processor for providing meteorological data inputs. It can handle multiple pollutants, and point, line, area and volume sources on a local, city or regional scale. The model capabilities include building effects, plume depletion by deposition, and a photochemistry module. This model was also developed by Australia's Commonwealth Scientific and Industrial Research Organisation (CSIRO). DISPMOD – A Gaussian atmospheric dispersion model for point sources located in coastal regions. It was designed specifically by the Western Australian Department of Environment to simulate the plume fumigation that occurs when an elevated onshore pollution plume intersects a growing thermal internal boundary layer (TIBL) contained within offshore air flow coming onshore. AUSPUFF – A Gaussian puff model designed for regulatory use by CSIRO. It includes some simple algorithms for the chemical transformation of reactive air pollutants. Models developed in Canada MLCD – Modèle Lagrangien à courte distance is a Lagrangian particle dispersion model (LPDM) developed in collaboration by Environment Canada's Canadian Meteorological Centre (CMC) and the Department of Earth and Atmospheric Sciences of the University of Alberta.
This atmospheric dispersion and deposition model is designed to estimate air concentrations and surface deposition of pollutants for very short range emergency problems (less than ~10 km from the source). MLDPn – Modèle Lagrangien de dispersion de particules d'ordre n is a Lagrangian particle dispersion model (LPDM) developed by Environment Canada's Canadian Meteorological Centre (CMC). This atmospheric and aquatic transport and dispersion model is designed to estimate air and water concentrations and ground deposition of pollutants for various emergency response problems at different scales (local to global). It is used to forecast and track volcanic ash, radioactive material, forest fire smoke and hazardous chemical substances, as well as oil slicks. Trajectory – The trajectory model, developed by Environment Canada's Canadian Meteorological Centre (CMC), is a simple tool designed to calculate the trajectory of a few air parcels moving in the 3D wind field of the atmosphere. The model provides a quick estimate of the expected trajectory of an air parcel by the advection transport mechanism, originating from (forward trajectory) or arriving at (backward trajectory) a specified geographical location and a vertical level. Models developed in India HAMS-GPS – Software used for environment, health and safety (EHS) management. It can be used for training as well as research involving dispersion modeling, accident analysis, fires, explosions, risk assessments and other related subjects. Air pollution dispersion models ADMS 3 AERMOD CALPUFF DISPERSION21 PUFF-PLUME MERCURE NAME OSPM SAFE AIR RIMPUFF HAMS-GPS EIA modeling Others Air pollution dispersion terminology Atmospheric dispersion modeling Bibliography of atmospheric dispersion modeling Roadway air dispersion modeling Wind profile power law References Schenk R (1996): Entwicklung von IBS Verkehr, Fördervorhaben des Ministeriums für Umwelt und Landwirtschaft des Landes Sachsen-Anhalt, FKZ 76213//95, 1996. Schenk R (1980): Numerische Behandlung instationärer Transportprobleme, Habilitation an der TU Dresden, 1980. Further reading For those who would like to learn more about atmospheric dispersion models, several introductory and reference books are available, for example from CRC Press (www.crcpress.com). External links Air Quality Modeling – From the website of Stuff in the Air The Model Documentation System (MDS) of the European Topic Centre on Air and Climate Change (part of the European Environment Agency) USA EPA Preferred/Recommended Models Alternative Models Screening Models Photochemical Models Wiki on Atmospheric Dispersion Modelling. Addresses the international community of atmospheric dispersion modellers – primarily researchers, but also users of models. Its purpose is to pool experiences gained by dispersion modellers during their work. The ADMS models and the GASTAR model The AUSPLUME model The CHARM model Fluidyn-PANACHE: 3D Computational Fluid Dynamics (CFD) model for Dispersion Analysis The HAMS-GPS software The LADM, DISPMOD and AUSPUFF models The LAPMOD model The NAME model The RIMPUFF model The SPRAY model The TAPM model Validation of the Urban Dispersion Model (UDM) Atmospheric dispersion modeling
Netochka Nezvanova (author)
Netochka Nezvanova is the pseudonym used by the author(s) of nato.0+55+3d, a real-time, modular, video and multi-media processing environment. Alternate aliases include "=cw4t7abs", "punktprotokol", "0f0003", "maschinenkunst" (preferably spelled "m2zk!n3nkunzt"), "integer", and "antiorp". The name itself is adopted from the main character of Fyodor Dostoyevsky's first novel Netochka Nezvanova (1849) and translates as "nameless nobody." Identity Netochka Nezvanova has been described by cultural critics as "an elusive online identity" and "a collective international project". In 2020, art critic Amber Husain described NN as an "avatar of avant-garde internet performance" that "became as known for her abstract and usable software artworks as she did for aggressive displays of anonymous cyber-domination". In 2001, Netochka Nezvanova was named as one of the Top 25 Women on the Web by a San Francisco non-profit group. In her article published in March 2002 for the online magazine Salon, Katharine Mieszkowski dubbed NN the "most feared woman on the Internet", and speculated on her real identity – "A female New Zealander artist, a male Icelander musician or an Eastern European collective conspiracy?". Florian Cramer stated that NN "was a collective international project" that "presented itself as a sectarian cult, with its software as the object of worship". Cramer describes how the origin of NN's messages was obscured by "a web of servers and domain registrations spanning New Zealand, Denmark and Italy", while Mieszkowski observes that "e-mail from Netochka's various aliases has also been sent from ISPs in Chicago, New Zealand, Australia and Amsterdam". In 2006, the Austrian Institut für Medienarchäologie (IMA) released a 20-minute documentary film, directed by Elisabeth Schimana, titled Rebekah Wilson aka Netochka Nezvanova. In this video documentary, Rebekah Wilson discusses her central role in the NN collective, disclosing how she legally changed her name to Netochka Nezvanova in 1999, before changing it back to her birth name in 2005. Software works Besides the numerous software projects, her CD entitled "KROP3ROM||A9FF" was released by Decibel Records in 1997. A second CD entitled sin(x) was released by 0f0003 in 2000. Other software created by NN 0f0003 propaganda (1998) - this program algorithmically generates animated graphics and synthetic sounds. b1257+12 (1998) - software for sound deconstruction and composition. The intricate operator interface allows for radical manipulation of soundloops in realtime, offering a large number of control parameters which, every now and then, take on a life of their own. The name of the software refers to a rapidly rotating neutron star. @¶31®�≠ Ÿ (1998) - this software extracts random samples from a CD and creates a stochastic remix, accompanied by futuristic-looking graphics (according to the reference documents, it is intended for use with the krop3rom||a9ff release). m9ndfukc.0+99 and k!berzveta.0+2 (1999) - two programs written in Java interpreting network data, very likely preliminary versions of nebula.m81. kinematek.0+2 (1999) - another Java application that performs "animated image generation from internet www data", incorporating parts of nebula.m81. nebula.m81 (1999) - an experimental web browser written in Java, rendering HTML code into abstract sounds and graphics. Awarded at the International Music Software Competition in Bourges 1999 and at Transmediale 2001 (first prize in the category "Artistic Software").
Described by jury member Florian Cramer as "an experimental web browser that turned browsing into something resembling measurement data evaluation". !=z2c!ja.0+38 (1999) - an application that generates a dense visual texture based on the user's keyboard input. It (ab)uses Mac OS' QuickDraw capability and can therefore be seen as a preliminary step towards nato.0+55. Musical Works "krop3ropm||a9ff", audio CD, Decibel Records, 1997 (re-released by 0f0003 in 1998, with additional material by The Hafler Trio). "A9FF" (1997), a piece for tape by Rebekah Wilson, performed at Victoria University of Wellington in February 1998. "8'|sin.x(2^n)", hybrid CD containing audio and mp2 data tracks (1999, self-produced). Trilogy of theatre pieces in collaboration with vocalist Ayelet Harpaz, based on haikus by the Japanese poet Masaoka Shiki ("Two Autumns", "I Keep Asking How Deep The Snow's Gotten"; "Spring: Still Unfurled"; presented in Utrecht, Amsterdam and Moscow, 2001–2002). "Poztgenom!knuklearporekomplekz", included on the CD Strewth! An Abstract Electronic Compilation, published in 2002 by the Australian record label Synaesthesia. "La lumière, la lumière .. c'est la seule .." (2002), for viola, piano, percussion and electronics. Commissioned by Ensemble Intégrales, performed in Ireland, Belgium, Switzerland, Austria and Germany (2003–2005). "A History of Mapmaking" or "Aerial Photography and 31 Variations on a Cartographer's Theme", for amplified cello and light controller, composed and performed by Rebekah Wilson. Performances in Auckland (Transacoustic Festival 2005), Ljubljana (City of Women 2006 Festival), Berlin (Transmediale 2007), Linz (Ars Electronica 2007). See also Decoder (film) Notes References Neue Kraft, Neues Werk (Transcodeur Express), a documentary film by Ninon Liotet, Olivier Schulbaum and Platoniq, shown on ARTE on April 25, 2002, features an interview with NN. http://www.platoniq.net/nknw/ IMA fiction: portrait #2 06, a video portrait of Rebekah Wilson, directed by Elisabeth Schimana and produced by The Austrian Institute for Media Archeology. Presented at the Transmediale festival on January 31, 2007. http://www.ima.or.at External links Salon.com Technology: The most feared woman on the Internet, by Katharine Mieszkowski (March 1, 2002). A portrait of Netochka Nezvanova, artistic software by Adrian Ward. Classic version (2001) / Mac OS X version (2003) A conversation between Netochka Nezvanova and Frederic Madre (March 22, 2000) for the Walker's Art Entertainment Network Living in Limnos, Betwixt and Between: A Trans-Reality Balkan Odyssey, by Gheorghe Dan - "The Strange Adventures of Netochka Nezvanova in the Lands Without" - "Dramatical NN", seminar/lecture script by Amanda Steggell, June 11, 2002 Net.artists Internet celebrities Anonymity pseudonyms Living people Digital artists Women digital artists New media artists 21st-century women artists Year of birth missing (living people)
PL-3
PL-3 or POS-PHY Level 3 is a network protocol. It is the name of the interface that the Optical Internetworking Forum's SPI-3 Interoperability Agreement is based on. It was proposed by PMC-Sierra to the Optical Internetworking Forum and adopted in June 2000. The name means Packet Over SONET Physical layer level 3. PL-3 was developed by PMC-Sierra in conjunction with the SATURN Development Group. The name is a nested acronym: the P in PL stands for "POS-PHY", the S in POS-PHY stands for "SONET" (Synchronous Optical Network), and the L in PL stands for "Layer". Context There are two broad categories of chip-to-chip interfaces. The first, exemplified by PCI-Express and HyperTransport, supports reads and writes of memory addresses. The second broad category carries user packets over one or more channels and is exemplified by the IEEE 802.3 family of Media Independent Interfaces and the Optical Internetworking Forum family of System Packet Interfaces. Of these last two, the family of System Packet Interfaces is optimized to carry user packets from many channels. The family of System Packet Interfaces is the most important packet-oriented, chip-to-chip interface family used between devices in Packet over SONET and the Optical Transport Network, which are the principal protocols used to carry Internet traffic between cities. Applications It was designed to be used in systems that support OC-48 SONET interfaces. A typical application of PL-3 (SPI-3) is to connect a framer device to a network processor. It has been widely adopted by the high speed networking marketplace. Technical details The interface consists of (per direction): 32 TTL signals for the data path; 8 TTL signals for control; one TTL signal for the clock; and 8 TTL signals for optional additional multi-channel status. There are several clocking options. The interface operates around 100 MHz. Implementations of SPI-3 (PL-3) have been produced which allow somewhat higher clock rates. This is important when overhead bytes are added to incoming packets. PL-3 in the marketplace PL-3 and SPI-3 were highly successful interfaces, with many semiconductor devices produced to them. See also System Packet Interface SPI-4.2 External links OIF Interoperability Agreements Network protocols
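As a back-of-the-envelope check of the figures in the Technical details section above, the following Python lines compare the raw capacity of a 32-bit data path with the OC-48 line rate; the 104 MHz clock value is an assumption used for illustration (the interface is described above only as operating around 100 MHz):
clock_hz = 104e6                          # assumed PL-3/SPI-3 clock rate
bus_bits = 32                             # width of the data path
raw_gbps = bus_bits * clock_hz / 1e9      # ~3.33 Gbit/s of raw transfer capacity
oc48_gbps = 2.48832                       # OC-48/STM-16 line rate
headroom = raw_gbps / oc48_gbps           # ~1.34x
print(f"raw: {raw_gbps:.2f} Gbit/s, headroom over OC-48: {headroom:.2f}x")
The roughly one-third headroom over the OC-48 line rate is what makes the slightly higher clock rates mentioned above useful when overhead bytes are added to incoming packets.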
WBAdmin
In computing, WBAdmin is a command-line utility built into the Windows Vista, Windows Server 2008, Windows 7, Windows Server 2008 R2, Windows 8, Windows Server 2012, and Windows 10 operating systems. The command is used to perform backups and restores of operating systems, drive volumes, computer files, folders, and applications from a command-line interface. Features WBAdmin is a disk-based backup system. It can create a "bare metal" backup used to restore the Windows operating system to similar or dissimilar hardware. The backup file(s) created are primarily in the form of Microsoft's Virtual Hard Disk (.VHD) files with some accompanying .xml configuration files. The backup .VHD file can be mounted in Windows Disk Manager to view its content. However, the .VHD backup file is not a direct "disk clone". The utility replaces the previous Microsoft Windows Backup command-line tool, NTBackup, which came built into Windows NT 4.0, Windows 2000, Windows XP and Windows Server 2003. It is the command-line version of Backup and Restore. WBAdmin also has a Graphical User Interface (GUI) option available to simplify creating (and restoring) backups. Workstation editions such as Windows 7 use a backup wizard located in Control Panel. On server editions, the GUI is provided through an (easily installed) Windows feature that uses the Windows Management Console snap-in WBAdmin.msc. The WBAdmin Management Console simplifies restoration, whether of a single file or of multiple folders. Using the command line or the Graphical User Interface, WBAdmin creates a backup which can be quickly restored using just the Windows installation DVD and the backup files located on a removable USB disk, without the need to reinstall from scratch. WBAdmin uses a differencing engine to update the backup files: once the original backup file is created, the Volume Shadow Copy Service tracks changes, so subsequent full backups take a matter of moments rather than the many minutes needed to create the original backup file. Automatic backups can be scheduled on a regular basis using a wizard. Two kinds of restore operations are supported: bare metal restore, in which the Windows Recovery Environment is used to complete a full server restoration to either the same server or a server with dissimilar hardware (known as Hardware Independent Restore – HIR); and individual file, folder, and system state restore, in which files, folders, or the machine's system state can be restored from the command line using WBAdmin. References Further reading External links Guide to scheduling Complete PC Backup using WBAdmin NTBackup Restore utility for Windows Server 2008 Backup Basics from Microsoft Microsoft's Windows Server Backup Guide WBAdmin commands list WBAdmin.info for general discussion and information to assist users with WBAdmin Microsoft Guide to Windows Vista Backup Microsoft Backup and Recovery overview How does Windows Server 2008 Backup work? Backup software Backup software for Windows Windows administration
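As an illustration of the command-line usage described above, the following sketch drives WBAdmin from Python to create a one-time backup suitable for bare-metal recovery and then lists the available backup versions. The target drive letter is an assumption, and the commands must be run from an elevated (administrator) prompt on Windows:
import subprocess

# One-time backup of all critical volumes (everything needed for bare-metal
# recovery) to an attached USB disk. Drive letter E: is an assumed example.
subprocess.run(
    ["wbadmin", "start", "backup",
     "-backupTarget:E:",   # backup destination (assumed removable disk)
     "-allCritical",       # include all volumes required for bare-metal restore
     "-quiet"],            # do not prompt for confirmation
    check=True,
)

# List the backup versions available for a later restore.
subprocess.run(["wbadmin", "get", "versions"], check=True)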
Crypto-PAn
Crypto-PAn (Cryptography-based Prefix-preserving Anonymization) is a cryptographic algorithm for anonymizing IP addresses while preserving their subnet structure. That is, the algorithm encrypts any string of bits s to a new string E(s), while ensuring that for any pair of bit-strings which share a common prefix of length k, their images also share a common prefix of length k. A mapping with this property is called prefix-preserving. In this way, Crypto-PAn is a kind of format-preserving encryption. The mathematical outline of Crypto-PAn was developed by Jinliang Fan, Jun Xu, Mostafa H. Ammar (all of Georgia Tech) and Sue B. Moon. It was inspired by the IP address anonymization done by Greg Minshall's TCPdpriv program circa 1996. Algorithm Intuitively, Crypto-PAn encrypts a bit-string of length n by descending a binary tree of depth n, one step for each bit in the string. Each of the binary tree's non-leaf nodes has been given a value of "0" or "1", according to some pseudo-random function seeded by the encryption key. At each step of the descent, the algorithm computes the i-th bit of the output by XORing the i-th bit of the input with the value of the current node. The reference implementation takes a 256-bit key. The first 128 bits of the key material are used to initialize an AES-128 cipher in ECB mode. The second 128 bits of the key material are encrypted with the cipher to produce a 128-bit padding block P. Given a 32-bit IPv4 address a = a_1 a_2 ... a_32, the reference implementation performs the following operation for each bit a_i of the input: Compose a 128-bit input block x_i whose first i - 1 bits are the first i - 1 bits of a and whose remaining bits are taken from the padding block P. Encrypt x_i with the cipher to produce a 128-bit output block y_i. Finally, XOR the most significant bit of that output block with the i-th bit of a, and append the result, MSB(y_i) XOR a_i, onto the output bitstring. Once all 32 bits of the output bitstring have been computed, the result is returned as the anonymized output E(a) which corresponds to the original input a. The reference implementation does not implement deanonymization; that is, it does not provide a function D such that D(E(a)) = a. However, decryption can be implemented almost identically to encryption, just making sure to compose each input block using the plaintext bits of a decrypted so far, rather than using the ciphertext bits of E(a). The reference implementation does not implement encryption of bitstrings of lengths other than 32; for example, it does not support the anonymization of 128-bit IPv6 addresses. In practice, the 32-bit Crypto-PAn algorithm can be used in "ECB mode" itself, so that a 128-bit string s = s_1 || s_2 || s_3 || s_4 (four 32-bit chunks) might be anonymized as E(s_1) || E(s_2) || E(s_3) || E(s_4). This approach preserves the prefix structure of the 128-bit string, but does leak information about the lower-order chunks; for example, an anonymized IPv6 address consisting of the same 32-bit ciphertext repeated four times is likely the special address ::, which thus reveals the encryption of the 32-bit plaintext 0000:0000:0000:0000. In principle, the reference implementation's approach (building 128-bit input blocks from the plaintext bits consumed so far and the padding block P) can be extended up to 128 bits. Beyond 128 bits, a different approach would have to be used; but the fundamental algorithm (descending a binary tree whose nodes are marked with a pseudo-random function of the key material) remains valid. Implementations Crypto-PAn's C++ reference implementation was written in 2002 by Jinliang Fan. In 2005, David Stott of Lucent made some improvements to the C++ reference implementation, including a deanonymization routine.
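For illustration, the per-bit construction described in the Algorithm section above can be sketched in Python. This is a minimal sketch rather than the reference implementation: it assumes the third-party cryptography package for AES-128 in ECB mode, the function and variable names are illustrative, and its bit-ordering conventions may differ in detail from the C++ code.
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def _aes_block(key16, block16):
    # Encrypt one 16-byte block with AES-128 in ECB mode.
    enc = Cipher(algorithms.AES(key16), modes.ECB()).encryptor()
    return enc.update(block16) + enc.finalize()

def cryptopan_anonymize(addr, key256):
    """Prefix-preservingly anonymize a 32-bit IPv4 address given as an int."""
    aes_key = key256[:16]                      # first 128 bits key the cipher
    pad = _aes_block(aes_key, key256[16:32])   # second 128 bits -> padding block P
    pad_int = int.from_bytes(pad, "big")

    otp = 0                                    # per-bit one-time pad, built up bit by bit
    for i in range(32):
        # Input block: the i plaintext bits seen so far, then the low 128 - i bits of P.
        prefix = addr >> (32 - i)
        block = (prefix << (128 - i)) | (pad_int & ((1 << (128 - i)) - 1))
        out = _aes_block(aes_key, block.to_bytes(16, "big"))
        otp = (otp << 1) | (out[0] >> 7)       # most significant bit of the output block
    return addr ^ otp                          # XOR with the pad yields the anonymized address

# Example with a toy key: anonymize 192.168.0.1.
# anon = cryptopan_anonymize(0xC0A80001, bytes(range(32)))
Because each pad bit depends only on the corresponding prefix of the input and on the key-derived padding block P, two addresses that agree in their first k bits are XORed with the same first k pad bits, which is what makes the mapping prefix-preserving.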
Stott also observed that the algorithm preserves prefix structure while destroying suffix structure; running the Crypto-PAn algorithm on a bit-reversed string will preserve any existing suffix structure while destroying prefix structure. Thus, running the algorithm first on the input string, and then again on the bit-reversed output of the first pass, destroys both prefix and suffix structure. (However, once the suffix structure has been destroyed, destroying the remaining prefix structure can be accomplished far more efficiently by simply feeding the non-reversed output to AES-128 in ECB mode. There is no particular reason to reuse Crypto-PAn in the second pass.) A Perl implementation was written in 2005 by John Kristoff. Python and Ruby implementations also exist. Versions of the Crypto-PAn algorithm are used for data anonymization in many applications, including NetSniff and CAIDA's CoralReef library. References Advanced Encryption Standard Symmetric-key algorithms
ReBoot: The Guardian Code
ReBoot: The Guardian Code is a Canadian live-action/CGI-animated television series produced by Mainframe Studios. Originally announced in 2013, the first ten episodes debuted on Netflix worldwide (excluding Canada) on March 30, 2018. YTV aired all twenty episodes from June 4 to July 5, 2018. Plot Four teenaged gamers, who are members of an online game's highest-scoring team, meet in person on their first day at Alan Turing High School. Their enrollment was arranged by Vera, an artificial intelligence who has recruited the team as "Guardians" to physically enter and protect cyberspace. Early in the series, Vera is given a human body and locked out of cyberspace, so she enrolls as an exchange student. The Guardians battle the Sourcerer, a human hacker. Dark code is the Sourcerer's primary weapon against the world's computer systems. After his initial run-in with the Guardians, the Sourcerer reactivates the computer virus named Megabyte, the main antagonist of the original ReBoot, to help him from inside cyberspace. Cast Guardians, high school students who physically enter cyberspace under pseudonyms to fight viruses and hackers: Ty Wood as Austin Carter, a.k.a. Vector, the leader of the team, whose color is red; he is the son of Adam Carter/the Sourcerer. Sydney Scotia as Tamra, a.k.a. Enigma, the agile, ninja-like member of the team, whose color is yellow. Ajay Friese as Parker, a.k.a. Googz, the brains of the team and a hacker on the internet; his color is green. Gabriel Darku as Trey Davies, a.k.a. D-Frag, the muscle of the team, who is an athlete in the real world and a hulking strong man in cyberspace; his color is blue. Hannah Vandenbygaart as V.E.R.A. (Virtual Evolutionary Recombinary Avatar), an artificial intelligence given a human body in the first episode, and the Guardians' leader. Bob Frazer as Adam Carter, Austin's father, who became infected by dark code and turned into the Sourcerer, an evil hacker trying to send the world back to the Dark Ages, or at least the early 1900s, before everything ran on computers. He is Megabyte's master. Kirsten Robek as Judy Carter, Austin's mother, who is unaware of her son's Guardian double life. Luvia Petersen as Special Agent Nance, an agent at the Department of Internet Security (DIS), who once threatened Adam Carter into giving up his information in the mainframe. She wishes to weaponize the Guardian tech so DIS can destroy the computers of America's enemies. Nicholas Lea as Mark Rowan, an agent at the Department of Internet Security and a friend of Adam Carter. Timothy E. Brummund as Megabyte (voice), a virus from the original series and the Sourcerer's servant. Because he is over 20 years out of date, Megabyte was upgraded by the Sourcerer upon reactivation. Alex Zahara as the Alpha Sentinels (voice), Megabyte's lead minion at any given time. Megabyte destroys one per episode, even if they carry out his orders flawlessly. Shirley Millner as Hexadecimal (voice), a virus from the original series and Megabyte's sister, who was imprisoned by the Guardians in Viruslym and later freed by Megabyte. Unlike her brother, Hex is chaos incarnate and wishes to turn cyberspace upside down. Episodes Season 1 (YTV) (2018) Season 1 (Netflix) (2018) Season 2 (Netflix) (2018) Development On October 3, 2013, Rainmaker announced the development of a new ReBoot television series alongside the reintroduction of the Mainframe company brand for its small screen productions.
Speaking to Canada.com later that month, Rainmaker's president and Chief Creative Officer Michael Hefferon stated that the show wouldn't be the same, as the "world of technology has changed drastically in the 20 years from when ReBoot first started", and cautioned that the original show's characters would likely be limited to cameo appearances. He then said that the company planned to pitch the series in February, with the hope of getting YTV on board as the broadcast partner. In November 2014, Rainmaker revealed the show would be called ReBoot: The Guardian Code. The following May, Deadline reported the series would be a live-action/CG-animated hybrid distributed by The Weinstein Company. On June 8, 2015, Corus Entertainment, owners of YTV, ordered a 26-episode first season, stating that the series was created by Hefferon and confirming the details of the Deadline story. Shortly after, various characters from the original series, including Bob, Dot, Enzo and Megabyte, were confirmed to appear in the series, though focus had shifted to a group of four teenagers recruited into protecting cyberspace by the Guardian program V.E.R.A. The four teens were named as Austin, Parker, Grey and Tamra. A poster showcasing Austin in his guardian form was released. Commenting on the inclusion of live-action material in the series, Hefferon stated, "I talked with broadcasters around the world. The one [resounding] thing — and I hate to break it to the fans — was nobody wanted the reboot of what [the show originally] was. Nobody was willing to buy it." He later added that two thirds of an average episode would be animated content. At the time, the series was planned for a late 2016/early 2017 launch. Production Casting calls for the series went out in May 2016. They listed a shoot date between August and November of that year in Vancouver, British Columbia, with YTV attached as the broadcast partner and the episode count reduced to 20. Production was delayed, with filming taking place in British Columbia in February and March 2017. On March 28, 2017, Corus confirmed the information revealed in the casting calls and announced the show's executive staff and main cast. Worldwide distribution, licensing, and merchandising rights had moved to Corus's Nelvana Enterprises with YTV set to debut the series in 2018. A mobile virtual reality experience and digital trading card game were confirmed to be in development. Rainmaker's parent company Wow Unlimited Media reported that the first 8 episodes of the series had been delivered to broadcast partners in the third quarter of 2017, with the remaining 12 scheduled for the fourth quarter of that year. ReBoot: The Guardian Code is modeled, rigged, and animated in Autodesk Maya, and rendered in 4K resolution using Unreal Engine 4. Hefferon stated that using Unreal gave them an advantage in speed: "Some of these shots could have taken 3 to 13 hours in a traditional pipeline, per frame," whereas Unreal rendered each frame in seconds or minutes. It also easily allowed the crew to reuse animation assets for the virtual reality tie-in. Casting Ty Wood, Sydney Scotia, Ajay Parikh-Friese and Gabriel Darku were cast as a group of teenagers who enter Mainframe to protect the virtual and real world from viruses such as Megabyte. Three voice actors from the original series reprise their voice roles in this series: Michael Benyaer returns as Bob, Kathleen Barr as Dot and Shirley Millner as Hexadecimal. Timothy E.
Brummund replaces Tony Jay as Megabyte due to the original actor's death in 2006. Distribution A trailer for the series and the virtual reality experience debuted on February 21, 2018. The first ten episodes debuted on Netflix globally, excluding Canada, on March 30, followed by ten more episodes on September 28 as a "second" season.<ref name="TGC on Netflix">{{cite web|website=Netflix|title=ReBoot: The Guardian Code|url=https://www.netflix.com/title/80217957|access-date=September 16, 2018}}</ref> All twenty episodes aired on YTV in Canada between June 4 and July 5. Other media Following the debut of the show's first 10 episodes on Netflix, the official ReBoot: The Guardian Code YouTube channel uploaded a series of 10 one-minute virtual reality shorts. Similarly, YTV released a series of live-action shorts featuring the characters from the show alongside the Canadian broadcast. A number of these shorts are directly tied to events in the series. A free-to-play mobile game, titled ReBoot: The Guardian Code – Code Hacker was released for iOS and Android devices on March 22, 2018. Developed by A.C.R.O.N.Y.M. Digital, the game is a match-5 puzzle title that features audio detection technology allowing users watching the YTV broadcast to unlock in-game cards. A web-browser version was also hosted on the official website for the series, as well as on YTV's site. As of January 5, 2022 the mobile game is no longer on the Google Play Store. Reception Pre-release On February 21, 2018, an official trailer was released. It was not well received; by March 10, 2018, the trailer had reached 12,000 dislikes and 983 likes on YouTube. On February 25, 2018, the French website Codelyoko.fr, a fansite for the TV show Code Lyoko, published a negative review of the trailer. The review accused ReBoot: The Guardian Code of plagiarizing Code Lyoko, as the trailer showed many similarities to Code Lyokos premise and characters. Shamus Kelley from Den of Geek! also noticed the similarities, claiming that "ReBoot: The Guardian Code is going for the whole Code Lyoko thing" and added that "There isn't a single reference to the old series outside of the term Guardians. It feels more like a teen drama with elements from Code Lyoko and Super Human Samurai [Syber-Squad]." Another concurring opinion came from Digital Spy writer Jon Anderton, who claimed that the original show "took place inside a computer system and there was no schoolkid element, making The Guardian Code more similar to 2000's series Code Lyoko (or Tron, to use a more mainstream example)". Shortly after the trailer's release, Code Lyoko co-creator Thomas Romain responded to the official ReBoot: The Guardian Code Twitter account, stating, "Wow you really liked Code Lyoko, didn’t you?" The Guardian Code has also been seen as a derivative of Zixx, an earlier CG/live-action TV show Rainmaker helped produce. Release Reviewing the first ten episodes of the series for Collider, Dave Trumbore had mixed feelings, giving the show a 2-star rating. While he praised the performance of Hannah Vandenbygaart, the character interactions and most of the visual aesthetic, he felt the show's animation was of poor quality for a 2018 series. He criticized the writing, acting and camerawork, saying that it is "stuck in the mid-'90s." His biggest issue was the show's pacing, commenting that the series took too long to introduce emotional moments and callbacks to the original show. Emily Ashby awarded the show 3 stars in her review for Common Sense Media. 
She felt the show had a number of positive role models for kids, and while the series wasn't educational in nature, its use of technology could spur interest in STEM fields. Shamus Kelley of Den of Geek! was especially critical of the tenth episode, describing it as "one of the worst episodes of television" he had ever seen. He derided the decision to include a character mocking fans and felt the cameos from the original characters were superficial. Conversely, io9's Charles Pulliam-Moore found it to be the only episode of the first ten worth watching. Calling it "legitimately fantastic," he enjoyed the appearance of characters from the original series and said "you can get away with not watching the rest of the season, jumping to the finale, and actually having a good time." References External links 2018 Canadian television series debuts 2018 Canadian television series endings 2010s Canadian animated television series 2010s Canadian children's television series 2010s Canadian comic science fiction television series 2010s Canadian high school television series Canadian children's action television series Canadian children's adventure television series Canadian children's drama television series Canadian children's science fiction television series Canadian computer-animated television series Canadian television series with live action and animation Cyberpunk television series English-language television shows Space adventure television series Television series reboots Television shows about virtual reality Television shows filmed in Vancouver Television series about teenagers Television series by Rainmaker Studios Television shows about video games YTV (Canadian TV channel) original programming English-language Netflix original programming Netflix children's programming
2451142
https://en.wikipedia.org/wiki/XOSL
XOSL
xOSL (meaning Extended Operating System Loader) is the name of a bootloader, which is a program product class that launches operating systems from a bootable device such as a hard disk or floppy drive. xOSL was originally developed by Geurt Vos. History xOSL is free software released under the GPL-2.0-only license. The project was actively developed by Geurt Vos between 1999 and 2001 and spanned four major revisions and two minor revisions after its initial creation. From its origin in xOSL version 1.0.0, xOSL underwent major changes in ver. 1.1.0, 1.1.1, 1.1.2 and 1.1.3. These revisions were significant departures from one another, and introduced new features to the program. These features ranged from drastic user interface improvements to improved compatibility on diverse hardware platforms. xOSL ver. 1.1.4 and 1.1.5 only introduced improvements to existing functionality and repaired features that should have been functional in their predecessors. Although their improvements were subtle, they did serve to stabilize a developing protocol, and are the most polished revisions of the original to date. The project lapsed into a dormant state and was abandoned by its original developer from 2001 to 2007. xOSL remained available for download and use throughout this period. Survivability Despite the lack of active product development, an enthusiastic community of xOSL users began exchanging ideas and product results through the use of Yahoo! Groups and other support sites on the internet. These groups became the foundation of the 'xOSL Culture'. The xOSL groups assisted fellow members with advice and accomplishments through the use of xOSL. After the original xOSL web site expired it was mirrored in multiple locations by Filip Komar and Mikhail Ranish. Very few enhancements to the original product occurred during this time, most of them being fairly inconsequential. One such enhancement gave the user the ability to change wallpapers and the image displayed at startup, and like most other revisions, it did not add a great deal to the program in terms of core functionality. Other revisions included the translation of xOSL into several different languages, including German, Czech and French, among others. XOSL-OW XOSL-OW is an Open Watcom Port of XOSL. XOSL is developed by Geurt Vos using the Borland C++ 3.1 tool set while XOSL-OW is based on the Open Watcom version 1.8 tool set. The XOSL-OW Open Watcom Port allows for future development of XOSL using an Open Source development tool set. XOSL-OW has no new functionality compared to XOSL but it does give improved behavior on specific PC hardware. In fact stability issues with XOSL on some PC platforms have been the reason for porting XOSL to the Open Watcom tool set. Examples of stability issues on specific PC hardware are: Launching the Ranish Partition Manager from within the XOSL boot manager (Ctrl-P) results in a non-responsive keyboard. Booting into the Smart Boot Manager (used to support booting from CD/DVD) results in a non-responsive keyboard. Booting into Linux using the XOSL boot manager is not successful because of a non-responsive keyboard after the XOSL boot manager hands over control to the Linux boot process. In XOSL-OW these stability issues have been solved by an improved A20 line switching algorithm and flushing the keyboard buffer before the XOSL boot manager hands over control to either the Ranish Partition manager, the Smart Boot Manager or the Operating System Bootloader. 
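To make the boot-manager side of this more concrete, the Python sketch below shows roughly how a tool in xOSL's category locates bootable partitions: it reads the 64-byte partition table that a classic master boot record keeps at offset 446 and reports each entry's active flag and partition-type byte. This is a generic illustration of the on-disk structures such boot managers work with, not code from xOSL itself, and the image path is a placeholder.

```python
import struct

# A few common MBR partition-type bytes (not an exhaustive list).
PARTITION_TYPES = {
    0x01: "FAT12",
    0x06: "FAT16",
    0x07: "NTFS",
    0x0B: "FAT32 (CHS)",
    0x0C: "FAT32 (LBA)",
    0x83: "Linux (ext2/ext3/ReiserFS)",
}

def read_partition_table(image_path):
    """Parse the four primary entries of a classic MBR partition table."""
    with open(image_path, "rb") as disk:
        mbr = disk.read(512)
    if mbr[510:512] != b"\x55\xaa":              # MBR boot signature
        raise ValueError("no MBR boot signature found")
    entries = []
    for i in range(4):
        raw = mbr[446 + i * 16 : 446 + (i + 1) * 16]
        boot_flag, ptype, lba_start, sectors = struct.unpack("<B3xB3xII", raw)
        if ptype == 0:                           # empty slot
            continue
        entries.append({
            "slot": i,
            "active": boot_flag == 0x80,         # 0x80 marks the bootable partition
            "type": PARTITION_TYPES.get(ptype, f"unknown (0x{ptype:02X})"),
            "first_lba": lba_start,
            "sectors": sectors,
        })
    return entries

if __name__ == "__main__":
    # "disk.img" is a placeholder path to a raw disk image.
    for entry in read_partition_table("disk.img"):
        print(entry)
```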
Compatible file-systems xOSL is capable of booting operating systems from partitions formatted with a variety of file systems. These include, but are not necessarily limited to: Windows FAT12 (File Allocation Table) FAT16 FAT32 NTFS (New Technology File System) Linux EXT2 EXT3 ReiserFS External links XOSL Hq (Official site) XOSL Users Group at Yahoo XOSL Historical Archive from 2000 xOSL and Windows Vista XOSL Mirror at www.ranish.com xOSL2 Sourceforge Page XOSL-OW Home page Czech Translation Understanding MultiBooting by Dan Goodell Boot loaders
384541
https://en.wikipedia.org/wiki/Lilith%20%28computer%29
Lilith (computer)
The DISER Lilith is a custom built workstation computer based on the Advanced Micro Devices (AMD) 2901 bit slicing processor, created by a group led by Niklaus Wirth at ETH Zürich. The project began in 1977, and by 1984 several hundred workstations were in use. It has a high resolution full page portrait oriented cathode ray tube display, a mouse, a laser printer interface, and a computer networking interface. Its software is written fully in Modula-2 and includes a relational database program named Lidas. The Lilith processor architecture is a stack machine. Citing from Sven Erik Knudsen's contribution to "The Art of Simplicity": "Lilith's clock speed was around 7 MHz and enabled Lilith to execute between 1 and 2 million instructions (called M-code) per second. (...) Initially, the main memory was planned to have 65,536 16-bit words memory, but soon after its first version, it was enlarged to twice that capacity. For regular Modula-2 programs however, only the initial 65,536 words were usable for storage of variables." History The development of Lilith was influenced by the Xerox Alto from the Xerox PARC (1973) where Niklaus Wirth spent a sabbatical from 1976 to 1977. Unable to bring back one of the Alto systems to Europe, Wirth decided to build a new system from scratch between 1978 and 1980, selling it under the company name DISER (Data Image Sound Processor and Emitter Receiver System). In 1985, he had a second sabbatical leave to PARC, which led to the design of the Oberon System. Ceres, the follow-up to Lilith, was released in 1987. Operating system The Lilith operating system (OS), named Medos-2, was developed at ETH Zurich, by Svend Erik Knudsen with advice from Wirth. It is a single user, object-oriented operating system built from modules of Modula-2. Its design influenced design of the OS Excelsior, developed for the Soviet Kronos workstation (see below), by the Kronos Research Group (KRG). Soviet variants From 1986 into the early 1990s, Soviet Union technologists created and produced a line of printed circuit board systems, and workstations based on them, all named Kronos. The workstations were based on Lilith, and made in small numbers. Mouse The computer mouse of the Lilith was custom-designed, and later used with the Smaky computers. It then inspired the first mice produced by Logitech. References External links Documentation on BitSavers Geissman, L et al. (August 1982) Lilith Handbook Wirth, N (1981) The Personal Computer Lilith Emulith emulator for the Lilith, homepage and documentation Lilith and Modula-2 ETHistory - Lilith Workstation AMD AM2901DC entry on CPU World Computer workstations Computers using bit-slice designs High-level language computer architecture
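To illustrate the stack-machine execution model described above, here is a small Python sketch of how such a processor evaluates an expression: operands are pushed onto an evaluation stack and each operation consumes the values on top. The opcodes are invented for the example and are not Lilith's actual M-code instruction set.

```python
def run(program):
    """Interpret a tiny, invented stack-machine instruction set.

    This mirrors the general shape of M-code execution (push operands,
    operate on the top of the stack); the mnemonics are illustrative only.
    """
    stack = []
    for instruction in program:
        op, *args = instruction
        if op == "LIT":              # push a literal word
            stack.append(args[0])
        elif op == "ADD":            # pop two values, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":            # pop two values, push their product
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "RET":            # return the value left on top
            return stack.pop()
        else:
            raise ValueError(f"unknown opcode: {op}")

# Evaluates (2 + 3) * 4 the way a stack machine would.
result = run([("LIT", 2), ("LIT", 3), ("ADD",), ("LIT", 4), ("MUL",), ("RET",)])
print(result)  # 20
```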
18618196
https://en.wikipedia.org/wiki/California%20NanoSystems%20Institute
California NanoSystems Institute
The California NanoSystems Institute (CNSI) is an integrated research center operating jointly at UCLA and UC Santa Barbara. Its missions are to foster interdisciplinary collaborations for discoveries in nanosystems and nanotechnology; train the next generation of scientists, educators and technology leaders; and facilitate partnerships with industry, fueling economic development and the social well-being of California, the United States and the world. CNSI was created by Governor Gray Davis as part of a science and innovation initiative, it was established in 2000 with $100 million from the state of California and an additional $250 million in federal research grants and industry funding. At the institute, scientists in the areas of biology, chemistry, biochemistry, physics, mathematics, computational science and engineering measure, modify and manipulate the building blocks of our world – atoms and molecules. These scientists benefit from an integrated laboratory culture enabling them to conduct dynamic research at the nanoscale, leading to significant breakthroughs in the areas of health, energy, the environment and information technology. History On December 7, 2000, California Governor Gray Davis announced the location of the federally sponsored California NanoSystems Institute section of the California Institutes for Science and Innovation (Cal ISI) initiative. The California legislature put forth $100 million for three research facilities to advance the future of the state's economy. The California NanoSystems Institute (CNSI) was selected out of the proposals along with three other Cal ISIs: California Institute for Quantitative Biosciences (QB3), California Institute for Telecommunications and Information Technology (Cal-(IT)2), and Center for Information Technology Research in the Interest of Society (CITRIS). In August, 2000, CNSI was founded on both campuses of UCSB and UCLA. Martha Krebs, the former director of the U.S. Department of Energy's Office of Science, was named the founder. Active leaders UCLA The people in charge of UCLA CNSI fall into two categories: directors and associate directors. Directorship Jeff F. Miller, Ph.D. - Director Associate Directors Heather Maynard, Ph.D. - Associate Director of Technology & Development Andre Nel, M.B., CH.B., Ph.D. - Associate Director of Research Aydogan Ozcan, Ph.D. - Associate Director of Entrepreneurship, Industry and Academic Exchange Leonard H. Rome, Ph.D. - Associate Director of Facilities Management Adam Z. Stieg, Ph.D. - Associate Director of Technology Centers UCSB The people in charge of UCSB CNSI fall into two categories: administrative staff and the faculty. Directorship Craig Hawker - Director Javier Read de Alaniz - Associate Director Megan Valentine - Associate Director Stephen Wilson - Associate Director Administrative staff Holly Woo - Assistant Director, Administration Eva Deloa - Financial Manager Bob Hanson - Building Manager The building manager is responsible for the maintenance, facility resource leads, and infrastructure of CNSI. The building manager oversees any changes in infrastructure or maintenance to the labs or the building as a whole. Biology and biomedical The research fields of nanobiology (nanobiotechnology) and biomedicine show promise in the connection of nanoscale science to biological/nonbiological matter. New diagnostic methods as well as new ways to administer increasingly efficient disease specific treatments are also being researched and developed. 
Energy efficiency Nanotechnology shows promise in helping to fight global warming. Nanoscale research can lead to more efficient, less wasteful technologies, and working at the nanoscale allows energy to be controlled, transformed and stored more efficiently. Information technologies Both UCLA and UCSB CNSI labs show potential to develop improvements in the processing, storage, and transmission of information, as well as increases in the speed of information processing. Partnerships The California NanoSystems Institute depends on partnerships with technology companies to help fund and run its research facilities. Partnerships fund the operation and expansion of CNSI in addition to the $250 million in government research grants received in 2000. Increasing numbers of partnerships were created due to budget cuts by the state. UCLA CNSI has international partnerships with the Chinese Academy of Sciences, the Beijing Nano Center, the University of Tokyo, the University of Kyoto, Kyushu University, Yonsei University, Seoul National University, KAIST, University of Bristol, and Zhejiang University. Founding partners Partners that joined when the institute was originally created include: Abraxis BioScience BASF The Chemical Company Intel HP Associate partners Partners that joined after creation include: NEC Solarmer Energy, Inc. Keithley Instruments Company Photron UCSB Applied Materials Hewlett-Packard Labs Intel Microsoft Research Sputtered Films / Tegal Corporation Sun Microsystems VEECO Educational opportunities K-12 Both campuses offer several educational opportunities, including hands-on laboratory research experience for junior high students and their teachers. These activities are done in collaboration with graduate students doing research in similar fields. UCSB scientists and researchers run family science nights at local junior highs to give families the opportunity to participate in scientific activities with their children, along with an after-school engineering and science club for grades 3-8 that explores science with UCSB undergraduate club leaders. CNSI also hosts research opportunities for high school juniors and local Santa Barbara teachers on the UCSB campus. In addition, CNSI at UCSB holds a summer program called SIMS (Summer Institute of Math and Science) for incoming freshmen. Undergraduate Both UCLA and UCSB contribute to various scholarships for incoming freshmen. They offer undergraduate courses that give insight into all fields and majors of math and science. Undergraduates have the opportunity to act as club leaders and mentors to younger students in grades K-12. Undergraduates also have extensive research opportunities in several fields during the year and through the summer on either campus. Students within CNSI's UCSB affiliation, the UCSB Department of Electrical and Computer Engineering, can choose to intern or volunteer at the institute for lab experience. Graduate Graduate opportunities are limited to: Mentoring: community college students incoming freshmen high school juniors high school teachers undergraduates Assisting researchers in the lab See also AlloSphere Scientists, Technologists and Artists Generating Exploration (STAGE) Incubator Program References External links California NanoSystems Institute - CNSI UCLA chapter (main) California NanoSystems Institute - CNSI UCSB chapter Nanotechnology institutions NanoSystems Institute University of California, Los Angeles University of California, Santa Barbara 2000 establishments in California
1643236
https://en.wikipedia.org/wiki/GXL
GXL
GXL (Graph eXchange Language) is designed to be a standard exchange format for graphs. GXL is an extensible markup language (XML) sublanguage and the syntax is given by an XML document type definition (DTD). This exchange format offers an adaptable and flexible means to support interoperability between graph-based tools. Overview In particular, GXL was developed to enable interoperability between software reengineering tools and components, such as code extractors (parsers), analyzers and visualizers. GXL allows software reengineers to combine single-purpose tools especially for parsing, source code extraction, architecture recovery, data flow analysis, pointer analysis, program slicing, query techniques, source code visualization, object recovery, restructuring, refactoring, remodularization, etc., into a single powerful reengineering workbench. There are two innovative features in GXL that make it well-suited to an exchange format for software data. The conceptual data model is a typed, attributed, directed graph. This is not to say that all software data ought to be manipulated as graphs, but rather that they can be exchanged as graphs. It can be used to represent instance data as well as schemas for describing the structure of the data. Moreover, the schema can be explicitly stated along with instance data. The structure of graphs exchanged by GXL streams is given by a schema represented as a Unified Modeling Language (UML) class diagram. Since GXL is a general graph exchange format, it can also be used to interchange any graph-based data, including models between computer-aided software engineering (CASE) tools, data between graph transformation systems, or graph visualization tools. GXL includes support for hypergraphs and hierarchical graphs, and can be extended to support other types of graphs. GXL originated in the merger of GRAph eXchange format (GraX: University of Koblenz, DE) for exchanging typed, attributed, ordered, directed graphs (TGraphs), Tuple Attribute Language (TA: University of Waterloo, CA), and the graph format of the PROGRES graph rewriting system (University Bw München, DE). Furthermore, GXL includes ideas from exchange formats from reverse engineering, including Relation Partition Algebra (RPA: Philips Research Eindhoven, NL) and Rigi Standard Format (RSF: University of Victoria, CA). The development of GXL was also influenced by various formats used in graph drawing (e.g. daVinci, Graph Modelling Language (GML), Graphlet, GraphXML) and current discussions on exchange formats for graph transformation systems. Presentations of former GXL versions At the 2000 International Conference on Software Engineering (ICSE 2000) Workshop on Standard Exchange Formats (WoSEF), GXL was accepted as working draft for an exchange format by numerous research groups working in the domain of software reengineering and graph transformation. During the APPLIGRAPH Subgroup Meeting on Exchange Formats for Graph Transformation, an overview of GXL was given [Schürr, 2000] and participants decided to use GXL to represent graphs within their exchange format for graph transformation systems (GTXL). The 2000 IBM Centers for Advanced Studies Conference (CASCON 2000) included two half-day workshops on GXL. In the morning, 'Software Data Interchange with GXL: Introduction and Tutorial' gave a primer on the syntax and concepts in the format, while the afternoon workshop, 'Software Data Interchange with GXL: Implementation Issues' discussed the development of converters and standard schemas. 
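To give a feel for what a GXL stream looks like in practice, the short Python sketch below (using the standard xml.etree.ElementTree module) emits a minimal instance graph: two nodes and one directed, attributed edge. The element names (gxl, graph, node, edge, attr, string) follow the GXL 1.0 DTD as published; the identifiers and the "calls" relation are invented for the example.

```python
import xml.etree.ElementTree as ET

# Build a tiny GXL instance graph: two functions and a "calls" edge.
gxl = ET.Element("gxl")
graph = ET.SubElement(gxl, "graph",
                      {"id": "callGraph", "edgeids": "true", "edgemode": "directed"})

def add_node(node_id, name):
    """Create a <node> carrying a single string-valued <attr>."""
    node = ET.SubElement(graph, "node", {"id": node_id})
    attr = ET.SubElement(node, "attr", {"name": "name"})
    ET.SubElement(attr, "string").text = name
    return node

add_node("n1", "main")
add_node("n2", "parseFile")

# "from" is a Python keyword, so the edge attributes are passed as a dict.
edge = ET.SubElement(graph, "edge", {"id": "e1", "from": "n1", "to": "n2"})
kind = ET.SubElement(edge, "attr", {"name": "kind"})
ET.SubElement(kind, "string").text = "calls"

print(ET.tostring(gxl, encoding="unicode"))
```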
At the Seventh Working Conference on Reverse Engineering (WCRE 2000), GXL was presented in a tutorial [Holt et al., 2000] and during the workshop on exchange formats [Holt/Winter, 2000]. Central results were a simpler representation of ordering information, the usage of UML class diagrams to present graph schemata and the representation of UML class diagrams by GXL graphs. The Dagstuhl Seminar on Interoperability of Reengineering Tools ratified GXL 1.0 as a standard interchange format for exchanging reengineering related data. Numerous groups from industry and research committed to using GXL, to import and export GXL documents to their tools, and to write various GXL tools. GXL Partners During various conferences and workshops the following groups from industry and academics committed to refining GXL to be the standard graph exchange format, write GXL filters and tools or use GXL as exchange format in their tools: Bell Canada (Datrix Group) Centrum Wiskunde & Informatica (CWI), The Netherlands (Interactive Software Development and Renovation and Information Visualization) IBM Centre for Advanced Studies, Canada Mahindra British Telecom, India Merlin Software-Engineering GmbH, Germany Nokia Research Center, Finland (Software Technology Laboratory) Philips Research, The Netherlands (Software Architecture Group) RWTH Aachen, Germany (Department of Computer Science III) TU Berlin, Germany (Theoretical CS/Formal Specification Group) University of Berne, Switzerland (Software Composition Group) University of Bremen, Germany (Software Engineering Group) Bundeswehr University Munich, Germany (Institute for Software Technology) University of Edinburgh, UK, (Edinburgh Concurrency Workbench) University of Koblenz, Germany (GUPRO Group) University of Oregon, USA (Department of Computer Science) University of Paderborn, Germany (AG Softwaretechnik) University of Stuttgart, Germany (BAUHAUS Group) University of Szeged, Hungary (Research Group on Artificial Intelligence) University of Toronto, Canada (Software Architecture Group) University of Victoria, Canada (RIGI Group) University of Waterloo, Canada (Software Architecture Group) External links GXL homepage XML-based standards Computer file formats Graph description languages
1686326
https://en.wikipedia.org/wiki/Combat%20Mission%3A%20Beyond%20Overlord
Combat Mission: Beyond Overlord
Combat Mission: Beyond Overlord is a 2000 computer wargame developed and published by Big Time Software. It is a simulation of tactical land battles in World War II. Combat Mission began development at Big Time Software as Computer Squad Leader, an adaptation of the board wargame Advanced Squad Leader. It was set to be published by Avalon Hill. Big Time and Avalon parted ways shortly before the publisher's closure by Hasbro, and Big Time continued development independently, under the new title Combat Mission. Combat Mission was a commercial and critical hit, and began the Combat Mission series. Gameplay Rather than a purely turn-based system, Combat Mission uses the "WeGo" structure for handling player turns. Development Origins at Avalon Hill In January 1997, Avalon Hill contracted Big Time Software to create a computer version of the publisher's board wargame Advanced Squad Leader. Earlier collaborations between the two included Flight Commander 2 and Over the Reich. Avalon Hill had considered a computer adaptation of Squad Leader for several years, as its board incarnations had sold over 1 million copies by 1997, but the company was initially hesitant to pursue this idea because of the series' complexity. While it ultimately attempted the project with Atomic Games, this version fell through and became the unrelated Close Combat in 1996. Terry Coleman of Computer Gaming World called the decision to try again with Big Time "a breath of fresh air". Discussing the new effort in early 1997, Bill Levay of Avalon forecast a release date of late 1998 or potentially 1999 for Big Time's version, as the developer was slated to create Achtung Spitfire! and an unannounced title for Avalon beforehand. The game was set primarily to adapt Advanced Squad Leader, rather than the original Squad Leader, and Levay remarked that it would stay closer to its board roots than had Atomic's project. However, it was not planned to be a one-to-one translation of the board version. Coleman noted at the time, "The only thing for sure is that the ASL design will be turn-based". Later in 1997, the title of Big Time Software's game was revealed as Computer Squad Leader. The developer's Charles Moylan explained that the project would automate many of the board version's numerical calculations, and would offer scaling levels of complexity to accommodate novices and veterans. Support for online and play-by-email multiplayer modes was planned. Moylan described Computer Squad Leader as an effort to merge Advanced Squad Leader with "new material" from Big Time. It began production in mid-1997. However, Moylan "suspected problems with Avalon Hill [...] and acted accordingly" as the year progressed, Tom Chick of CNET Gamecenter later wrote. To safeguard his project, he avoided the inclusion of features specific to Advanced Squad Leader until the end of development; early production was focused entirely on material whose copyright was owned by Big Time, rather than Avalon Hill. As a result, the project had "maximum flexibility" in its response to changing circumstances, Moylan said. Computer Squad Leader was canceled in July 1998 when Big Time parted ways with Avalon Hill, a decision that Moylan attributed to "boring business stuff". The following month, Hasbro purchased Avalon Hill and laid off its entire staff. Despite the shakeup, Computer Squad Leaders lack of copyrighted material allowed Big Time to continue development independently and without any delays, under the new title Combat Mission. 
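To make the "WeGo" structure mentioned under Gameplay more concrete, here is a rough Python sketch of the turn cycle as it is usually described for Combat Mission: both players plot orders while the clock is stopped, then a fixed-length action phase (commonly described as 60 seconds) resolves simultaneously for both sides and cannot be interrupted. This is an illustration of the idea, not code from the game, and the helper functions are placeholders.

```python
def collect_orders(player, turn):
    """Placeholder: in the real game, each side plots moves while paused."""
    return [f"{player} orders for turn {turn}"]

def advance_battlefield(second, order_queues):
    """Placeholder: advance the shared simulation by one second."""
    return f"t={second:02d}s, resolving {len(order_queues)} order queues"

def wego_turn(turn, players=("Allied", "Axis"), phase_seconds=60):
    # 1. Orders phase: both sides plan concurrently with the game paused.
    orders = {player: collect_orders(player, turn) for player in players}

    # 2. Action phase: the plotted orders play out simultaneously; neither
    #    player can intervene until the next orders phase begins.
    for second in range(1, phase_seconds + 1):
        advance_battlefield(second, list(orders.values()))
    print(f"Turn {turn} resolved after {phase_seconds} simulated seconds")

for turn in range(1, 3):
    wego_turn(turn)
```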
A tentative release date was set for spring 1999. By the time of Computer Squad Leaders cancellation, Big Time had implemented the WeGo structure, and a 3D graphics engine had been created that rendered both the environments and characters as polygonal models. The inspiration for the WeGo system came from the wargame TacOps, and was an attempt to merge the strategic depth of turn-based gameplay with the intensity and realism of real-time gameplay. Independent production Following the split from Avalon Hill, Moylan continued work on Combat Mission as Big Time Software's sole employee. He began meeting with Impressions Games' Steve Grammont—who had worked on Robert E. Lee: Civil War General and Lords of the Realm 2—to complete Combat Missions core design. In a few weeks, the pair devised the game's WeGo system as a compromise between turn-based and real-time strategy mechanics, both of which Moylan and Grammont considered to be flawed approaches. Moylan later said that his core goal with Combat Mission was to create "the first strategy/war game to combine serious simulation with compelling 3D graphics and sound", and to match the technological advances of first-person shooters and flight simulators that he believed had outpaced wargames. Publication Battlefront.com was established in May 1999. Combat Mission was released on May 31, 2000. The Special Edition of the game was released three years later. Demo scenarios Three playable public demo scenarios were offered by Battlefront.com at various times. A beta demo was released first in October 1999. A gold demo was released later, and because the file format changed, two new scenarios were included on it that were not compatible with the earlier beta version. Neither demo included access to the mission editor, but did permit solo, hotseat, or email play. Last Defense and Riesberg were included with the original beta demo. Chance Encounter was originally released as an add-on for the beta demo, at Christmas 1999. It was later also released as part of the Gold Demo. It depicted a meeting engagement late in the war between an American rifle company supported by Sherman tanks, and a German rifle company supported by assault guns. The terrain depicted a rural crossroads overlooked by a forested hill. Chance Encounter has remained a popular scenario and versions of this battle have been recreated by third parties in Combat Mission: Afrika Korps and even Combat Mission: Shock Force. Valley of Trouble was a new scenario for the gold demo that depicted an American assault through a fortified valley. Reception Sales According to Moylan, Combat Mission was a hit for Big Time Software, and accrued "good sales" in its first two months. Its simultaneous launch for Mac OS brought a 20% increase in purchases. The game's initial print run was intended to last for one year, but unexpectedly high demand exhausted Big Time's supplies by June 26, following the game's release on May 31. Moylan wrote that the team "drastically underestimated" the game's sales potential, which left them unprepared for and "overwhelmed" by its popularity. The second run was put up for sale on June 29. Writer T. Liam McDonald highlighted Combat Missions online distribution model as an element of its success, since word of mouth would allow its audience to grow indefinitely, while wargames sold at brick and mortar retailers had limited shelf lives. 
By February 2001, the lack of publisher and retailer fees allowed Big Time's revenues on the game to equal roughly "200,000 to 250,000 sales in retail", according to Moylan. A writer for Computer Gaming World considered this proof "that a developer can sell its games exclusively over the Internet and make money", after the relative failure of Sid Meier's Antietam! to break into this market. McDonald similarly called Battlefront.com and Combat Mission positive signs for the future of computer wargames, after mainstream publishers had lost interest in the genre. However, GameSpot proceeded to nominate Combat Mission for its 2000 "Best Game No One Played" award. The website's editors wrote, "[A]lthough sales have exceeded the publisher's initial expectations, they are much lower than typical retail releases." Critical reviews Combat Mission received "favorable" reviews according to the review aggregation website GameRankings; McDonald noted in 2000 that it received "the most positive reviews of any wargame in recent memory". GameSpot named it an "Editor's Choice" and noted that "It's sure to appeal to anyone interested in serious military simulations, but even those just looking for a good World War II computer game should find that it has a lot to offer." The editors of Computer Gaming World named Combat Mission the best wargame of 2000. They argued that the game "altered the basic idea of what a wargame can be", and revolutionized wargames "the way Doom changed first-person shooters." PC Gamer US named it the best turn-based strategy game of 2000, The Electric Playground named it 2000's "Best Independent PC Game" and the editors of Computer Games Magazine nominated it for their 2000 "Wargame of the Year" award. Combat Mission was also a nominee for GameSpot's "Strategy Game of the Year" and Electric Playgrounds "Best Strategy Game for PC" awards, which went to Shogun: Total War and Sacrifice, respectively. Legacy In 2010, the editors of PC Gamer US named Combat Mission the 53rd best computer game ever, and wrote, "Before CMBO, PC wargames were dusty, hexagon-littered creatures. Battlefront's dramatic 3D depiction changed all that." References External links Official website 2000 video games Battlefront.com games Computer wargames Classic Mac OS games Multiplayer hotseat games Video games developed in the United States Video games set in France Windows games World War II video games
161935
https://en.wikipedia.org/wiki/Animator
Animator
An animator is an artist who creates multiple images, known as frames, which give an illusion of movement called animation when displayed in rapid sequence. Animators can work in a variety of fields including film, television, and video games. Animation is closely related to filmmaking and like filmmaking is extremely labor-intensive, which means that most significant works require the collaboration of several animators. The methods of creating the images or frames for an animation piece depend on the animators' artistic styles and their field. Other artists who contribute to animated cartoons, but who are not animators, include layout artists (who design the backgrounds, lighting, and camera angles), storyboard artists (who draw panels of the action from the script), and background artists (who paint the "scenery"). Animated films share some film crew positions with regular live action films, such as director, producer, sound engineer, and editor, but differ radically in that for most of the history of animation, they did not need most of the crew positions seen on a physical set. In hand-drawn Japanese animation productions, such as in Hayao Miyazaki's films, the key animator handles both layout and key animation. Some animators in Japan such as Mitsuo Iso take full responsibility for their scenes, making them become more than just the key animator. Specialized fields Animators often specialize. One important distinction is between character animators (artists who specialize in character movement, dialogue, acting, etc.) and special effects animators (who animate anything that is not a character; most commonly vehicles, machinery, and natural phenomena such as rain, snow, and water). Stop-motion animators don't draw their images, instead they move models or cut-outs frame-by-frame, famous animators of this genre being Ray Harryhausen and Nick Park. Inbetweeners and cleanup artists In large-scale productions by major studios, each animator usually has one or more assistants, "inbetweeners" and "clean-up artists", who make drawings between the "key poses" drawn by the animator, and also re-draw any sketches that are too roughly made to be used as such. Usually, a young artist seeking to break into animation is hired for the first time in one of these categories, and can later advance to the rank of full animator (usually after working on several productions). Methods Historically, the creation of animation was a long and arduous process. Each frame of a given scene was hand-drawn, then transposed onto celluloid, where it would be traced and painted. These finished "cels" were then placed together in sequence over painted backgrounds and filmed, one frame at a time. Animation methods have become far more varied in recent years. Today's cartoons could be created using any number of methods, mostly using computers to make the animation process cheaper and faster. These more efficient animation procedures have made the animator's job less tedious and more creative. Audiences generally find animation to be much more interesting with sound. Voice actors and musicians, among other talent, may contribute vocal or music tracks. Some early animated films asked the vocal and music talent to synchronize their recordings to already-extant animation (and this is still the case when films are dubbed for international audiences). 
For the majority of animated films today, the soundtrack is recorded first in the language of the film's primary target market and the animators are required to synchronize their work to the soundtrack. Evolution of animator's roles As a result of the ongoing transition from traditional 2D to 3D computer animation, the animator's traditional task of redrawing and repainting the same character 24 times a second (for each second of finished animation) has now been superseded by the modern task of developing dozens (or hundreds) of movements of different parts of a character in a virtual scene. Because of the transition to computer animation, many additional support positions have become essential, with the result that the animator has become but one component of a very long and highly specialized production pipeline. Nowadays, visual development artists will design a character as a 2D drawing or painting, then hand it off to modelers who build the character as a collection of digital polygons. Texture artists "paint" the character with colorful or complex textures, and technical directors set up rigging so that the character can be easily moved and posed. For each scene, layout artists set up virtual cameras and rough blocking. Finally, when a character's bugs have been worked out and its scenes have been blocked, it is handed off to an animator (that is, a person with that actual job title) who can start developing the exact movements of the character's virtual limbs, muscles, and facial expressions in each specific scene. At that point, the role of the modern computer animator overlaps in some respects with that of his or her predecessors in traditional animation: namely, trying to create scenes already storyboarded in rough form by a team of story artists, and synchronizing lip or mouth movements to dialogue already prepared by a screenwriter and recorded by vocal talent. Despite those constraints, the animator is still capable of exercising significant artistic skill and discretion in developing the character's movements to accomplish the objective of each scene. There is an obvious analogy here between the art of animation and the art of acting, in that actors also must do the best they can with the lines they are given; it is often encapsulated by the common industry saying that animators are "actors with pencils". More recently, Chris Buck has remarked that animators have become "actors with mice." Some studios bring in acting coaches on feature films to help animators work through such issues. Once each scene is complete and has been perfected through the "sweat box" feedback process, the resulting data can be dispatched to a render farm, where computers handle the tedious task of actually rendering all the frames. Each finished film clip is then checked for quality and rushed to a film editor, who assembles the clips together to create the film. While early computer animation was heavily criticized for rendering human characters that looked plastic or even worse, eerie (see uncanny valley), contemporary software can now render strikingly realistic clothing, hair, and skin. The solid shading of traditional animation has been replaced by very sophisticated virtual lighting in computer animation, and computer animation can take advantage of many camera techniques used in live-action filmmaking (i.e., simulating real-world "camera shake" through motion capture of a cameraman's movements). 
As a result, some studios now hire nearly as many lighting artists as animators for animated films, while costume designers, hairstylists, choreographers, and cinematographers have occasionally been called upon as consultants to computer-animated projects. See also Animation Computer animation Computer graphics Key frame List of animators Sweat box References External links Animation Toolworks Glossary: Who Does What In Animation How An Animated Cartoon Is Made Visual arts occupations Filmmaking occupations Computer occupations
34986915
https://en.wikipedia.org/wiki/Google%20Play
Google Play
Google Play, also branded as the Google Play Store and formerly Android Market, is a digital distribution service operated and developed by Google. It serves as the official app store for certified devices running on the Android operating system and its derivatives, as well as Chrome OS, allowing users to browse and download applications developed with the Android software development kit (SDK) and published through Google. Google Play also serves as a digital media store, offering music, books, movies, and television programs. Content that has been purchased on Google Play Movies & TV and Google Play Books can be accessed in a web browser and through the Android and iOS apps. Applications are available through Google Play either free of charge or at a cost. They can be downloaded directly to an Android device through the proprietary Google Play Store mobile app or by deploying the application to a device from the Google Play website. Applications utilizing hardware capabilities of a device can be targeted to users of devices with specific hardware components, such as a motion sensor (for motion-dependent games) or a front-facing camera (for online video calling). The Google Play Store had over 82 billion app downloads in 2016 and reached over 3.5 million published apps in 2017; after a later purge of apps, the catalog stands at just over 3 million. It has been the subject of multiple issues concerning security, in which malicious software has been approved and uploaded to the store and downloaded by users, with varying degrees of severity. Google Play was launched on March 6, 2012, bringing together Android Market, Google Music, and the Google eBookstore under one brand, marking a shift in Google's digital distribution strategy. The services currently included in Google Play are Play Books, Play Games and Google TV. Google Play Music remained in service until September 2020 and was discontinued in favor of YouTube Music in December 2020. Following the re-branding, Google gradually expanded the geographical support for each of the services. Catalog content Android applications By 2017, Google Play featured more than 3.5 million Android applications. After Google purged a large number of apps from the Google Play Store, the number of apps has climbed back to over 3 million Android applications. As of 2017, developers in more than 150 locations could distribute apps on Google Play, though not every location supports merchant registration. Developers receive 85% of the application price, while the remaining 15% goes to the distribution partner and operating fees. Developers can set up sales, with the original price struck out and a banner underneath informing users when the sale ends. Google Play allows developers to release early versions of apps to a select group of users, as alpha or beta tests. Users can pre-order select apps (as well as movies, music, books, and games) to have the items delivered as soon as they are available. Some network carriers offer billing for Google Play purchases, allowing users to opt for charges on the monthly phone bill rather than on a credit card. Users can request refunds within 48 hours after a purchase. Games At the Google I/O 2013 Developer Conference, Google announced the introduction of Google Play Games, an online gaming service for Android that features real-time multiplayer gaming capabilities, cloud saves, social and public leaderboards, and achievements. Its standalone mobile app was launched on July 24, 2013. Books Google Play Books is an ebook digital distribution service.
Google Play offers over five million ebooks available for purchase, and users can also upload up to 1,000 of their own ebooks in PDF or EPUB file formats. Google Play Books is available in 75 countries. Movies and TV shows Google Play Movies & TV was a video on demand service offering movies and television shows available for purchase or rental, depending on availability. Movies are available in over 110 countries, while TV shows are available only in Australia, Austria, Canada, France, Germany, Japan, Switzerland, the United States and the United Kingdom. In October 2020, Google Play Movies & TV was renamed Google TV. Play Pass On September 23, 2019, Google launched its Google Play Pass games and apps subscription service in the US. As of September 2019, subscribers could access the games and apps without ads and in-app purchases. The program is invitation-only for app developers, who can then integrate the service into their existing apps. Device updates Google introduced Project Mainline in Android 10, allowing core OS components to be updated via the Google Play Store without requiring a complete system update. Android 10 supports updates for core OS components including: Security: Media Codecs, Media Framework Components, DNS Resolver, Conscrypt Privacy: Documents UI, Permission Controller, ExtServices Consistency: Time zone data, ANGLE (developers opt-in), Module Metadata, Networking components, Captive Portal Login, Network Permission Configuration On December 4, 2019, Qualcomm announced that its Snapdragon 865 supports GPU drivers that can be updated via the Google Play Store. This feature was initially introduced with Android Oreo, but vendors had not yet added support for it. Teacher Approved In 2020, Google launched a new children-focused 'Teacher Approved' section for the Google Play Store. Apps marked as 'Teacher Approved' meet higher standards intended for educational purposes. History Google Play (previously styled Google play) originated from three distinct products: Android Market, Google Music and the Google eBookstore. Android Market was announced by Google on August 28, 2008, and was made available to users on October 22. In December 2010, content filtering was added to Android Market, each app's details page started showing a promotional graphic at the top, and the maximum size of an app was raised from 25 megabytes to 50 megabytes. The Google eBookstore was launched on December 6, 2010, debuting with three million ebooks, making it "the largest ebooks collection in the world". In November 2011, Google announced Google Music, a section of the Google Play Store offering music purchases. In March 2012, Google increased the maximum allowed size of an app by allowing developers to attach two expansion files to an app's basic download; each expansion file can be up to 2 gigabytes, giving app developers a total of 4 gigabytes. Also in March 2012, Android Market was re-branded as Google Play. The Google Play Store, including all Android apps, came to Chrome OS in September 2016. In May 2021, Google Play announced plans to implement a new section with privacy information for all applications in its storefront. The project is similar to the App Store's privacy labels and is expected to be released in full in the first half of 2022. The feature will show users what kind of information each app collects, whether the data it stores is encrypted and whether users can opt out of being tracked by the application.
Music Google Play Music was a music and podcast streaming service and online music locker. It featured over 40 million songs and gave users free cloud storage for up to 50,000 songs. Google Play Music was available in 64 countries. In June 2018, Google announced plans to shut down Play Music by 2020 and offered users the option to migrate to YouTube Music; migration to Google Podcasts was announced in May 2020. In October 2020, the music store for Google Play Music was shut down. Google Play Music was shut down entirely in December 2020 and was replaced by YouTube Music and Google Podcasts. News publications and magazines Google Play Newsstand was a news aggregator and digital newsstand service offering subscriptions to digital magazines and topical news feeds. Google released Newsstand in November 2013, combining the features of Google Play Magazines and Google Currents into a single product. The basic Newsstand service was available worldwide. As of 2017, paid Newsstand content was available in more than 35 countries. On May 15, 2018, the mobile app merged with Google News & Weather to form Google News. The Newsstand section continued to appear on the Google Play website until November 5, 2018, but is now only available through the Google News app. Devices Until March 2015, Google Play had a "Devices" section for users to purchase Google Nexus devices, Chromebooks, Chromecasts, other Google-branded hardware, and accessories. A separate online hardware retailer called the Google Store was introduced on March 11, 2015, replacing the Devices section of Google Play. User interface Apart from searching for content by name, apps can also be searched through keywords provided by the developer. When searching for apps, users can tap suggested search filters, helping them find apps matching those filters. For the discoverability of apps, the Google Play Store consists of lists featuring top apps in each category, including "Top Free", a list of the most popular free apps of all time; "Top Paid", a list of the most popular paid apps of all time; "Top Grossing", a list of apps generating the highest amounts of revenue; "Trending Apps", a list of apps with recent installation growth; "Top New Free", a list of the most popular new free apps; "Top New Paid", a list of the most popular new paid apps; "Featured", a list of new apps selected by the Google Play team; "Staff Picks", a frequently-updated list of apps selected by the Google Play team; "Editors' Choice", a list of apps considered the best of all time; and "Top Developer", a list of apps made by developers considered the best. In March 2017, Google added a "Free App of the Week" section, offering one normally paid app for free. In July 2017, Google expanded its "Editors' Choice" section to feature curated lists of apps deemed to provide good Android experiences within overall themes, such as fitness, video calling and puzzle games. Google Play lets users gauge the popularity of an app by displaying the number of times it has been downloaded. The download count is a color-coded badge, with special color designations for surpassing certain app download milestones, including grey for 100, 500, 1,000 and 5,000 downloads, blue for 10,000 and 50,000 downloads, green for 100,000 and 500,000 downloads, and red/orange for 1 million, 5 million, 10 million and 1 billion downloads. Users can submit reviews and ratings for apps and digital content distributed through Google Play, which are displayed publicly. Ratings are based on a 5-point scale.
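The milestone-based badge colors described above are essentially a lookup table. The Python sketch below reproduces the mapping exactly as stated in this section (grey up to 5,000 downloads, blue up to 50,000, green up to 500,000, red/orange from 1 million upward); it is an illustration of the rule, not code from the Play Store.

```python
def badge_color(downloads: int) -> str:
    """Map a download count to the badge color bracket described above."""
    brackets = [
        (5_000, "grey"),         # 100, 500, 1,000 and 5,000 downloads
        (50_000, "blue"),        # 10,000 and 50,000 downloads
        (500_000, "green"),      # 100,000 and 500,000 downloads
    ]
    for upper_bound, color in brackets:
        if downloads <= upper_bound:
            return color
    return "red/orange"          # 1 million through 1 billion downloads

for count in (4_200, 35_000, 120_000, 2_000_000):
    print(f"{count:>9,} downloads -> {badge_color(count)}")
```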
App developers can respond to reviews using the Google Play Developer Console. Design Google has redesigned Google Play's interface on several occasions. In February 2011, Google introduced a website interface for the then-named Android Market that provides access through a computer. Applications purchased are downloaded and installed on an Android device remotely, with a "My Market Account" section letting users give their devices a nickname for easy recognition. In May 2011, Google added new application lists to Android Market, including "Top Paid", "Top Free", "Editor's Choice", "Top Grossing", "Top Developers", and "Trending". In July, Google introduced an interface with a focus on featured content, more search filters, and (in the US) book sales and movie rentals. In May 2013, a redesign to the website interface matched the then-recently redesigned Android app. In July 2014, the Google Play Store Android app added new headers to the Books/Movies sections, a new Additional Information screen offering a list featuring the latest available app version, installed size, and content rating, and simplified the app permissions prompt into overview categories. A few days later, it got a redesign consistent with the then-new Material Design design language, and the app was again updated in October 2015 to feature new animations, divide the content into "Apps and Games" and "Entertainment" sections, and add support for languages read right-to-left. In April 2016, Google announced a redesign of all the icons used for its suite of Play apps, adding a similar style and consistent look. In May 2017, Google removed the shopping bag from the Google Play icon, with only the triangle and associated colors remaining. In March 2018, Google experimented by changing the format of the screenshots used for the App pages from the WebP format to PNG, but reverted the change after it caused the images to load more slowly. The update also saw small UI tweaks to the Google Play Store site, with the reviews section now opening to a dedicated page and larger images in the light box viewer. Google Play Instant Apps Launched in 2017, Google Play Instant, also known as Google Instant Apps, allows users to use an app or game without installing it first. App monetization Google states in its Developer Policy Center that "Google Play supports a variety of monetization strategies to benefit developers and users, including paid distribution, in-app products, subscriptions, and ad-based models", and requires developers to comply with the policies in order to "ensure the best user experience". It requires that developers charging for apps and downloads through Google Play must use Google Play's payment system. In-app purchases unlocking additional app functionality must also use the Google Play payment system, except in cases where the purchase "is solely for physical products" or "is for digital content that may be consumed outside of the app itself (e.g. songs that can be played on other music players)." Support for paid applications was introduced on February 13, 2009, for developers in the United States and the United Kingdom, with support expanded to an additional 29 countries on September 30, 2010. The in-app billing system was originally introduced in March 2011. All developers on Google Play are required to feature a physical address on the app's page in Google Play, a requirement established in September 2014.
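The revenue splits discussed in this App monetization section lend themselves to a small worked example: a flat 15% fee on one-time app sales (noted under Catalog content earlier in the article), and a 30% subscription fee that drops to 15% once a subscriber passes 12 paid months (detailed under Subscriptions below). The Python sketch that follows applies only those two figures; real Play billing involves further rules (taxes, later fee tiers) that it deliberately ignores.

```python
def app_sale_payout(price: float) -> float:
    """One-time purchase: the developer keeps 85% of the price."""
    return round(price * 0.85, 2)

def subscription_payout(monthly_price: float, paid_months: int) -> float:
    """Subscriptions: 70% to the developer for the first 12 paid months,
    85% for every paid month after that (the post-2018 schedule)."""
    first_year_months = min(paid_months, 12)
    later_months = max(paid_months - 12, 0)
    return round(monthly_price * (0.70 * first_year_months + 0.85 * later_months), 2)

# A $4.99 one-time purchase, and a $2.99/month subscriber kept for 18 months.
print(app_sale_payout(4.99))
print(subscription_payout(2.99, 18))
```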
In February 2017, Google announced that it would let developers set sales for their apps, with the original price struck out and a banner underneath informing users when the sale ends. Google also announced that it had made changes to its algorithms to promote games based on user engagement and not just downloads. Finally, it announced new editorial pages for what it considers "optimal gaming experiences on Android", further promoting and curating games. Payment methods Google allows users to purchase content with credit or debit cards, carrier billing, gift cards, or through PayPal. Google began rolling out carrier billing for purchases in May 2012, followed by support for PayPal in May 2014. Gift cards Rumors of Google Play gift cards started circulating online in August 2012 after references to them were discovered by Android Police in the 3.8.15 version update of the Google Play Store Android app. Soon after, images of the gift cards started to leak, and on August 21, 2012, they were made official by Google and rolled out over the next few weeks. As of April 2017, Google Play gift cards are available in the following countries: Australia, Austria, Belgium, Brazil, Canada, Denmark, Finland, France, Germany, Greece, Hong Kong, India, Indonesia, Ireland, Italy, Japan, Malaysia, Mexico, Netherlands, New Zealand, Norway, Poland, Portugal, Saudi Arabia, Singapore, South Africa, South Korea, Spain, Sweden, Switzerland, Thailand, Turkey, the United Kingdom and the United States. Subscriptions Google introduced in-app subscriptions to Google Play in May 2012. In June 2016, some sources reported that Google had announced that subscriptions charged through Google Play would now split the revenue 85/15, where developers receive 85% of revenue and Google takes 15%, a change from the traditional 70/30 split of years prior. The move followed Apple's then-recently announced change to the same model, although commentators were quick to point out that while Apple grants the 85/15 revenue share after one year of active subscriptions, Google's subscription change was reported to take effect immediately. As of January 1, 2018, the transaction fee for subscription products decreased to 15% for any subscribers developers retain after 12 paid months, establishing that, contrary to those earlier reports, Google uses the same model as Apple does for in-app subscriptions on the App Store. Google Play Store on Android The Google Play Store, shortened to Play Store on the Home screen and App screen, is Google's official pre-installed app store on Android-certified devices. It provides access to content on Google Play, including apps, books, magazines, music, movies, and television programs. Devices in China do not ship with the Google Play Store, with manufacturers offering their own alternatives. The Google Play Store filters the list of apps to those compatible with the user's device. Developers can target specific hardware components (such as a compass), software components (such as a widget), and Android versions (such as 7.0 Nougat). Carriers can also ban certain apps from being installed on users' devices, for example tethering applications. There is no requirement that Android applications be acquired using the Google Play Store. Users may download Android applications from a developer's website or through a third-party app store alternative. Google Play Store applications are self-contained Android Package files (APK), similar to the .exe files used to install programs on Microsoft Windows computers.
On Android devices, an "Unknown sources" feature in Settings allows users to bypass the Google Play Store and install APKs from other sources. Depending on developer preferences, some apps can be installed to a phone's external storage card. Installation history The Google Play Store app features a history of all installed apps. Users can remove apps from the list, with the changes also synchronizing to the Google Play website interface, where the option to remove apps from the history does not exist. Compatibility Google publishes the source code for Android through its "Android Open Source Project", allowing enthusiasts and developers to program and distribute their own modified versions of the operating system. However, not all these modified versions are compatible with apps developed for Google's official Android versions. The "Android Compatibility Program" serves to "define a baseline implementation of Android that is compatible with third-party apps written by developers". Only Android devices that comply with Google's compatibility requirements may install and access the Google Play Store application. As stated in a help page for the Android Open Source Project, "Devices that are "Android compatible" may participate in the Android ecosystem, including Android Market; devices that don't meet the compatibility requirements exist outside that ecosystem. In other words, the Android Compatibility Program is how we separate "Android compatible devices" from devices that merely run derivatives of the source code. We welcome all uses of the Android source code, but only Android compatible devices—as defined and tested by the Android Compatibility Program—may participate in the Android ecosystem." Some device manufacturers choose to use their own app store instead of or in addition to the Google Play Store. Google Play Services In 2012, Google began decoupling certain aspects of its Android operating system (particularly its core applications) so they could be updated through the Google Play store independently of the OS. One of those components, Google Play Services, is a closed-source system-level process providing APIs for Google services, installed automatically on nearly all devices running Android 2.2 "Froyo" and higher. With these changes, Google can add new system functionality through Play Services and update apps without having to distribute an upgrade to the operating system itself. As a result, Android 4.2 and 4.3 "Jelly Bean" contained relatively fewer user-facing changes, focusing more on minor changes and platform improvements. History of app growth Google Play Awards and yearly lists In April 2016, Google announced the Google Play Awards, described as "a way to recognize our incredible developer community and highlight some of the best apps and games". The awards showcase five nominees across ten award categories, and the apps are featured in a dedicated section of Google Play. Google stated that "Nominees were selected by a panel of experts on the Google Play team based on criteria emphasizing app quality, innovation, and having a launch or major update in the last 12 months", with the winners announced in May. Google has also previously released yearly lists of apps it deemed the "best" on Google Play. On March 6, 2017, five years after Google Play's launch, Google released lists of the best-selling apps, games, movies, music, and books over the past five years. 
In June 2017, Google introduced "Android Excellence", a new editorial program to highlight the apps deemed the highest quality by the Google Play editors. In 2020, Google Play named Disney+ the top app of the year for users in the US, with SpongeBob: Krusty Cook-Off taking the honors in the gaming category. Application approval Google places some restrictions on the types of apps that can be published, in particular not allowing sexually explicit content, child endangerment, violence, bullying and harassment, hate speech, gambling, or illegal activities, and it requires precautions for user-generated content. In March 2015, Google disclosed that over the previous few months it had begun using a combination of automated tools and human reviewers to check apps for malware and terms of service violations before they are published in the Google Play Store. At the same time, it began rolling out a new age-based ratings system for apps and games, based on a given region's official ratings authority (for example, ESRB in the US). In October 2016, Google announced a new detection and filtering system designed to provide "additional enhancements to protect the integrity of the store". The new system aims to detect and filter cases where developers have been attempting to "manipulate the placement of their apps through illegitimate means like fraudulent installs, fake reviews, and incentivized ratings". In April 2019, Google announced changes to the store's app review process, stating that it would take several days to review app submissions from new and less-established developers. The company later clarified that, in exceptional cases, certain apps may be subject to an expanded review process, delaying publication by seven days or longer. Application bans Some mobile carriers can block users from installing certain apps. In March 2009, reports surfaced that several tethering apps were banned from the store. However, the apps were later restored, with a new ban preventing only T-Mobile subscribers from downloading the apps. Google released a statement in response. In April 2011, Google removed the Grooveshark app from the store due to unspecified policy violations. CNET noted that the removal came "after some of the top music labels have accused the service of violating copyright law". TechCrunch wrote approximately two weeks later that Grooveshark had returned to Android, "albeit not through the official App Market", but rather "Playing on Android's ability to install third-party applications through the browser, Grooveshark has taken on the responsibility of distributing the application themselves". In May 2011, Google banned the account of the developer of several video game emulators. Neither Google nor the developer publicly revealed the reason for the ban. In March 2013, Google began to pull ad blocking apps from the Google Play Store, per section 4.4 of the developers' agreement, which prohibits apps that interfere with servers and services. Apps that exempt themselves from the power management policies introduced in Android Marshmallow without being "adversely affected" by them are banned. 
In July 2018, Google banned additional categories of apps, including those that perform cryptocurrency mining on-device, apps that "facilitate the sale of explosives, firearms, ammunition, or certain firearms accessories", apps that are only used to present ads, apps that contain adult content but are aimed towards children, "multiple apps with highly similar content and user experience," and "apps that are created by an automated tool, wizard service, or based on templates and submitted to Google Play by the operator of that service on behalf of other persons." Application security In February 2012, Google introduced a new automated antivirus system, called Google Bouncer, to scan both new and existing apps for malware (e.g. spyware or trojan horses). In 2017, the Bouncer feature and other safety measures within the Android platform were rebranded under the umbrella name Google Play Protect, a system that regularly scans apps for threats. Android apps can ask for or require certain permissions on the device, including access to body sensors, calendar, camera, contacts, location, microphone, phone, SMS, storage, Wi-Fi, and Google accounts. In July 2017, Google described a new security effort called "peer grouping", in which apps performing similar functionalities, such as calculator apps, are grouped together and their attributes compared. If one app stands out, such as by requesting more device permissions than others in the same group, Google's systems automatically flag the app and security engineers take a closer look. Peer grouping is based on app descriptions, metadata, and statistics such as download count. Security issues In early March 2011, DroidDream, a trojan rootkit exploit, was released to the then-named Android Market in the form of several free applications that were, in many cases, pirated versions of existing priced apps. This exploit allowed hackers to steal information such as IMEI and IMSI numbers, phone model, user ID, and service provider. The exploit also installed a backdoor that allowed the hackers to download more code to the infected device. The exploit only affected devices running Android versions earlier than 2.3 "Gingerbread". Google removed the apps from the Market immediately after being alerted, but the apps had already been downloaded more than 50,000 times, according to Android Police's estimate. Android Police wrote that the only method of removing the exploit from an infected device was to reset it to a factory state, although community-developed solutions for blocking some aspects of the exploit were created. A few days later, Google confirmed that 58 malicious apps had been uploaded to Android Market, and had been downloaded to 260,000 devices before being removed from the store. Google emailed affected users with information that "As far as we can determine, the only information obtained was device-specific (IMEI/IMSI, unique codes which are used to identify mobile devices, and the version of Android running on your device)" as opposed to personal data and account information. It also announced the then-new "remote kill" functionality, alongside a security update, that lets Google remotely remove malicious apps from users' devices. However, days later, a malicious version of the security update was found on the Internet, though it did not contain the specific DroidDream malware. New apps featuring the malware, renamed DroidDream Light, surfaced the following June, and were also removed from the store. 
At the Black Hat security conference in 2012, security firm Trustwave demonstrated its ability to upload an app that would circumvent the Bouncer blocker system. The application used a JavaScript exploit to steal contacts, SMS messages, and photos, and was also capable of making the phone open arbitrary web pages or launch denial-of-service attacks. Nicholas Percoco, senior vice president of Trustwave's SpiderLabs advanced security team, stated that "We wanted to test the bounds of what it's capable of". The app stayed on Google Play for more than two weeks, being repeatedly scanned by the Bouncer system without detection, with Percoco further saying that "As an attack, all a malware attacker has to do to get into Google Play is to bypass Bouncer". Trustwave reached out to Google to share its findings, but noted that more manual testing of apps might be necessary to detect apps using malware-masking techniques. According to a 2014 research study released by RiskIQ, a security services company, malicious apps introduced through Google Play increased 388% between 2011 and 2013, while the proportion of malicious apps removed by Google dropped from 60% in 2011 to 23% in 2013. The study further revealed that "Apps for personalizing Android phones led all categories as most likely to be malicious". According to PC World, "Google said it would need more information about RiskIQ's analysis to comment on the findings." In October 2016, Engadget reported on a blog post named "Password Storage in Sensitive Apps" from freelance Android hacker Jon Sawyer, who decided to test the top privacy apps on Google Play. Testing two applications, one named "Hide Pictures Keep Safe Vault" and the other named "Private Photo Vault", Sawyer found significant errors in password handling in both, and commented, "These companies are selling products that claim to securely store your most intimate pieces of data, yet are at most snake oil. You would have near equal protection just by changing the file extension and renaming the photos." In April 2017, security firm Check Point announced that a malware named "FalseGuide" had been hidden inside approximately 40 "game guide" apps in Google Play. The malware is capable of gaining administrator access to infected devices, where it then receives additional modules that let it show popup ads. The malware, a type of botnet, is also capable of launching DDoS attacks. After being alerted to the malware, Google removed all instances of it in the store, but by that time, approximately two million Android users had already downloaded the apps, the oldest of which had been around since November 2016. In June 2017, researchers from the Sophos security company announced their finding of 47 apps using a third-party development library that shows intrusive advertisements on users' phones. Even after such apps are force-closed by the user, advertisements remain. Google removed some of the apps after receiving reports from Sophos, but some apps remained. When asked for comment, Google did not respond. In August 2017, 500 apps were removed from Google Play after security firm Lookout discovered that the apps contained an SDK that allowed for malicious advertising. The apps had been collectively downloaded over 100 million times, and covered a wide variety of use cases, including health, weather, photo-editing, Internet radio and emoji. In all of 2017, over 700,000 apps were banned from Google Play due to abusive content, a 70% increase over the number of apps banned in 2016. 
In March 2020, Check Point discovered 56 apps containing a malware program that had infected a total of 1 million devices. The program, called Tekya, was designed to evade detection by Google Play Protect and VirusTotal and then fraudulently click on ads. Around the same time, Dr. Web discovered at least six apps, with 700,000 total downloads, containing at least 18 modifications of a program called Android.Circle.1. In addition to performing click fraud, Android.Circle.1 can also operate as adware and perform phishing attacks. On July 1, 2021, Dr. Web discovered malicious apps on Google Play that steal Facebook users' logins and passwords. Its specialists uncovered nine trojans that were available on the Google Play Store with over 5.8 million installs among them. The apps tricked victims into logging into their Facebook accounts and hijacked the credentials via JavaScript code. Google later removed these apps. On September 29, 2021, Zimperium zLabs reported that a large-scale malware campaign had infected more than 10 million Android devices in over 70 countries and had likely stolen hundreds of millions from its victims by subscribing them to paid services without their knowledge. GriftHorse, the trojan used in these attacks, was discovered by the researchers who first spotted this illicit global premium services campaign. The campaign had been active for roughly five months, between November 2020 and April 2021, when its malicious apps were last updated. The malware was delivered using over 200 trojanized Android applications distributed through Google's official Play Store and third-party app stores. Google removed the apps after being notified of their malicious nature, but the malware remained available for download on third-party repositories. On November 30, 2021, researchers at ThreatFabric explained how they had discovered four different malware dropper campaigns distributing banking trojans on the Google Play Store. The campaigns used small, realistic-looking apps focused on common themes such as fitness, cryptocurrency, QR codes, and PDF scanning to trick users into installing them. Once these "dropper" apps are installed, they silently communicate with the threat actor's server to receive commands. When it is ready to distribute the banking trojan, the threat actor's server tells the installed app to perform a fake "update" that "drops" and launches the malware on the Android device. Patent issues Some developers publishing on Google Play have been sued for patent infringement by "patent trolls", people who own broad or vaguely worded patents that they use to target small developers. If the developer manages to successfully challenge the initial assertion, the "patent troll" changes the claim of the violation in order to accuse the developer of having violated a different assertion in the patent. This situation continues until the case goes into the legal system, which can have substantial economic costs, prompting some developers to settle. In February 2013, Austin Meyer, a flight simulator game developer, was sued for having used a copy-protection system in his app, a system that he said "Google gave us! And, of course, this is what Google provides to everyone else that is making a game for Android!" Meyer claimed that Google would not assist in the lawsuit, and he stated that he would not settle the case. 
His battle with the troll continued for several years; in June 2016 he uploaded a video explaining that he was then being sued for uploading his app to Google Play because, he said, the patent troll claimed rights over the Google Play Store itself. Android Authority wrote that "This scenario has played out against many other app developers for many years", and that it has prompted discussions over "a larger issue at stake", in which developers stop making apps out of fear of patent problems. Availability Users outside the supported countries and regions have access only to free apps and games through Google Play. See also List of mobile app distribution platforms List of most-downloaded Google Play applications Notes References External links Android (operating system) Android (operating system) software E-book sources E-book suppliers Play Mobile software distribution platforms Android Market Products introduced in 2012 Software update managers Software distribution platforms Online marketplaces of the United States
219066
https://en.wikipedia.org/wiki/Arial
Arial
Arial, sometimes marketed or displayed in software as Arial MT, is a sans-serif typeface and set of computer fonts in the neo-grotesque style. Fonts from the Arial family are packaged with all versions of Microsoft Windows from Windows 3.1 onwards, some other Microsoft software applications, Apple's macOS and many PostScript 3 computer printers. The typeface was designed in 1982 by Robin Nicholas and Patricia Saunders for Monotype Typography. It was created to be metrically identical to the popular typeface Helvetica, with all character widths identical, so that a document designed in Helvetica could be displayed and printed correctly without having to pay for a Helvetica license. The Arial typeface comprises many styles: Regular, Italic, Medium, Medium Italic, Bold, Bold Italic, Black, Black Italic, Extra Bold, Extra Bold Italic, Light, Light Italic, Narrow, Narrow Italic, Narrow Bold, Narrow Bold Italic, Condensed, Light Condensed, Bold Condensed, and Extra Bold Condensed. The extended Arial type family includes more styles: Rounded (Light, Regular, Bold, Extra Bold); Monospaced (Regular, Oblique, Bold, Bold Oblique). Many of these have been issued in multiple font configurations with different degrees of language support. The most widely used and bundled Arial fonts are Arial Regular, Italic, Bold, and Bold Italic; the same styles of Arial Narrow; and Arial Black. More recently, Arial Rounded has also been widely bundled. In Office 2007, Arial was replaced by Calibri as the default typeface in PowerPoint, Excel, and Outlook. Design characteristics Version 3.0 of the OpenType version of Arial embeds a description of the typeface in its font metadata. In 2005, Robin Nicholas said, "It was designed as a generic sans serif; almost a bland sans serif." Arial is a neo-grotesque typeface: a design based on nineteenth-century sans-serifs, but regularized to be more suited to continuous body text and to form a cohesive font family. Apart from the need to match the character widths and approximate/general appearance of Helvetica, the letter shapes of Arial are also strongly influenced by Monotype's own Monotype Grotesque designs, released in or by the 1920s, with additional influence from 'New Grotesque', an abortive redesign from 1956. The designs of the R, G and r also resemble Gill Sans. The changes cause the typeface to nearly match Linotype Helvetica in both proportion and weight, and to match it perfectly in width. Monotype executive Allan Haley observed, "Arial was drawn more rounded than Helvetica, the curves softer and fuller and the counters more open. The ends of the strokes on letters such as c, e, g and s, rather than being cut off on the horizontal, are terminated at the more natural angle in relation to the stroke direction." Matthew Carter, a consultant for IBM during its design process, described it as "a Helvetica clone, based ostensibly on their Grots 215 and 216". The styling of the Arabic glyphs comes from Times New Roman; they have more varied stroke widths than the Latin, Greek, and Cyrillic glyphs found in the font. Arial Unicode MS uses monotone stroke widths on Arabic glyphs, similar to Tahoma. The Cyrillic, Greek and Coptic, and Spacing Modifier Letters glyphs, initially introduced in Arial Unicode MS and later debuting in Arial version 5.00, have different appearances in the two fonts. History IBM debuted two printers for the in-office publishing market in 1982: the 240-DPI 3800-3 laser xerographic printer, and the 600-DPI 4250 electro-erosion laminate typesetter. 
Monotype was under contract to supply bitmap fonts for both printers. The fonts for the 4250, delivered to IBM in 1983, included Helvetica, which Monotype sub-licensed from Linotype. For the 3800-3, Monotype replaced Helvetica with Arial. The hand-drawn Arial artwork was completed in 1982 at Monotype by a 10-person team led by Robin Nicholas and Patricia Saunders and was digitized by Monotype at 240 DPI expressly for the 3800-3. IBM named the font Sonoran Sans Serif due to licensing restrictions and the manufacturing facility's location (Tucson, Arizona, in the Sonoran Desert), and announced in early 1984 that the Sonoran Sans Serif family, "a functional equivalent of Monotype Arial", would be available for licensed use in the 3800-3 by the fourth quarter of 1984. There were initially 14 point sizes, ranging from 6 to 36, and four style/weight combinations (Roman medium, Roman bold, italic medium, and italic bold), for a total of 56 fonts in the family. Each contained 238 graphic characters, providing support for eleven national languages: Danish, Dutch, English, Finnish, French, German, Italian, Norwegian, Portuguese, Spanish, and Swedish. Monotype and IBM later expanded the family to include 300-DPI bitmaps and characters for additional languages. In 1989, Monotype produced PostScript Type 1 outline versions of several Monotype fonts, but an official PostScript version of Arial was not available until 1991. In the meantime, a company called Birmy marketed a version of Arial in a Type 1-compatible format. In 1990, Robin Nicholas, Patricia Saunders and Steve Matteson developed a TrueType outline version of Arial which was licensed to Microsoft. In 1992, Microsoft chose Arial to be one of the four core TrueType fonts in Windows 3.1, announcing the font as an "alternative to Helvetica". Matthew Carter has noted that the deal was complex and included a bailout of Monotype, which was in financial difficulties, by Microsoft. Microsoft would later extensively fund the development of Arial as a font that supported many languages and scripts. Monotype employee Rod MacDonald noted: As to the widespread notion that Microsoft did not want to pay licensing fees [for Helvetica], [Monotype director] Allan Haley has publicly stated, more than once, that the amount of money Microsoft paid over the years for the development of Arial could finance a small country. Arial ultimately became one of several clones of PostScript standard fonts created by Monotype in collaboration with or sold to Microsoft around this time, including Century Gothic (a clone of ITC Avant Garde), Book Antiqua (Palatino) and Bookman Old Style (ITC Bookman). Distribution TrueType editions of Arial have shipped as part of Microsoft Windows since the introduction of Windows 3.1 in 1992; Arial was the default font. From 1999 until 2016, Microsoft Office shipped with Arial Unicode MS, a version of Arial that includes many international characters from the Unicode standard. This version of the typeface was for a time the most widely distributed pan-Unicode font. The font was dropped from Microsoft Office 2016 and has been deprecated; continuing growth of the number of characters in Unicode and limitations on the number of characters in a font meant that Arial Unicode could no longer perform the job it was originally created for. Arial MT, a PostScript version of the Arial font family, was distributed with Acrobat Reader 4 and 5. 
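Because Arial ships with Windows and macOS but not with most Linux distributions, cross-platform applications often check at runtime whether the family is installed and fall back to a metric-compatible substitute (such as the free alternatives discussed later in this article). The following Java sketch is illustrative only; the class name and the fallback order are assumptions, not part of any Arial specification or license:

```java
import java.awt.Font;
import java.awt.GraphicsEnvironment;
import java.util.Arrays;
import java.util.List;

// Illustrative only: choose Arial if it is installed, otherwise fall back to a
// metrically compatible substitute, and finally to Java's logical SansSerif family.
public final class ArialFallback {

    // Assumed preference order; Liberation Sans and Arimo are metric-compatible clones.
    private static final List<String> PREFERRED =
            Arrays.asList("Arial", "Liberation Sans", "Arimo", "Helvetica");

    private ArialFallback() {}

    public static Font resolve(int style, int size) {
        List<String> installed = Arrays.asList(
                GraphicsEnvironment.getLocalGraphicsEnvironment().getAvailableFontFamilyNames());

        for (String family : PREFERRED) {
            if (installed.contains(family)) {
                return new Font(family, style, size);
            }
        }
        // The logical "SansSerif" family is always available in Java.
        return new Font(Font.SANS_SERIF, style, size);
    }

    public static void main(String[] args) {
        System.out.println("Resolved family: " + resolve(Font.PLAIN, 12).getFamily());
    }
}
```

This mirrors the common practice in CSS font stacks on the web, where Arial is typically listed ahead of Helvetica and a generic sans-serif fallback.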
PostScript does not require support for a specific set of fonts, but Arial and Helvetica are among the 40 or so typeface families that PostScript Level 3 devices typically support. Mac OS X (now known as macOS) was the first Mac OS version to include Arial; it was not included in classic Mac OS. The operating system ships with Arial, Arial Black, Arial Narrow, and Arial Rounded MT. However, the default macOS font for sans-serif/Swiss generic font family is Helvetica. The bundling of Arial with Windows and macOS has contributed to it being one of the most widely distributed and used typefaces in the world. In 1996, Microsoft launched the Core fonts for the Web project to make a standard pack of fonts for the Internet. Arial in TrueType format was included in this project. The project allowed anyone to download and install these fonts for their own use (on end user's computers) without any fee. The project was terminated by Microsoft in August 2002, allegedly due to frequent EULA violations. For MS Windows, the core fonts for the web were provided as self-extracting executables (.exe); each included an embedded cabinet file, which can be extracted with appropriate software. For the Macintosh, the files were provided as BinHexed StuffIt archives (.sit.hqx). The latest font version that was available from Core fonts for the Web was 2.82, published in 2000. Later versions (such as version 3 or version 5 which include many new characters) were not available from this project. A Microsoft spokesman declared in 2002 that members of the open source community "will have to find different sources for updated fonts. ... Although the EULA did not restrict the fonts to just Windows and Mac OS, they were only ever available as Windows .exe's and Mac archive files." The chief technical officer of Opera Software cited the cancellation of the project as an example of Microsoft resisting interoperability. Arial variants The known variants of Arial include: Arial: Sometimes called Arial Regular to distinguish its width from Arial Narrow, it contains Arial (Roman text weight), Arial Italic, Arial Bold, Arial Bold Italic Arial Unicode MS Arial Black: Arial Black, Arial Black Italic. This weight is known for being particularly heavy. This is because the face was originally drawn as a bitmap, and to increase the weight, stroke widths for bold went from a single pixel width to two pixels in width. It only supports Latin, Greek and Cyrillic. Arial Narrow: Arial Narrow Regular, Arial Narrow Bold, Arial Narrow Italic, Arial Narrow Bold Italic. This family is a condensed version. Arial Rounded: Arial Rounded Light, Arial Rounded Regular, Arial Rounded Medium, Arial Rounded Bold, Arial Rounded Extra Bold. The regular versions of the rounded glyphs can be found in Gulim, Microsoft's Korean font set. Originally only available in bold form as Arial Rounded MT Bold, extra fonts appeared as retail products. In Linotype's retail version, only Arial Rounded Regular supports WGL character set. Arial Special: Arial Special G1, Arial Special G2. They are included with Microsoft Encarta Virtual Globe 99, Expedia Streets and Trips 2000, MapPoint 2000. Arial Light, Arial Medium, Arial Extra Bold, Arial Light Condensed, Arial Condensed, Arial Medium Condensed, Arial Bold Condensed: These fonts first appeared in the Linotype online stores. The condensed fonts do not have italic counterparts. Arial Monospaced: In this monospaced variant, letters such as @, I (uppercase i), i, j, l (lowercase L), M, W are redesigned. 
Arial Alternative Arial Alternative Regular and Arial Alternative Symbol are standard fonts in Windows ME, and can also be found on Windows 95 and Windows XP installation discs, and on Microsoft's site. Both fonts are Symbol-encoded. These fonts emulate the monospaced font used in Minitel/Prestel teletext systems, but vectorized with Arial styling. These fonts are used by HyperTerminal. Arial Alternative Regular contains only ASCII characters, while Arial Alternative Symbol contains only 2 × 3 semigraphics characters. Code page variants Arial Baltic, Arial CE, Arial Cyr, Arial Greek, Arial Tur are aliases created in the FontSubstitutes section of WIN.INI by Windows. These entries all point to the master font. When an alias font is specified, the font's character map contains different character set from the master font and the other alias fonts. In addition, Monotype also sells Arial in reduced character sets, such as Arial CE, Arial WGL, Arial Cyrillic, Arial Greek, Arial Hebrew, Arial Thai. Arial Unicode is a version supporting all characters assigned with Unicode 2.1 code points. Arial Nova Arial Nova's design is based on the 1982's Sonora Sans bitmapped fonts, which were in fact Arial renamed to avoid licensing issues. It was bundled with Windows 10, and is offered free of charge on Microsoft Store. It contains Regular, Bold and Light weights, corresponding italics and corresponding Condensed widths. Monotype/Linotype retail versions Arial The TrueType core Arial fonts (Arial, Arial Bold, Arial Italic, Arial Bold Italic) support the same character sets as the version 2.76 fonts found in Internet Explorer 5/6, Windows 98/ME. Version sold by Linotype includes Arial Rounded, Arial Monospaced, Arial Condensed, Arial Central European, Arial Central European Narrow, Arial Cyrillic, Arial Cyrillic Narrow, Arial Dual Greek, Arial Dual Greek Narrow, Arial SF, Arial Turkish, Arial Turkish Narrow. In addition, Monotype also sells Arial in reduced character sets, such as Arial CE, Arial WGL, Arial Cyrillic, Arial Greek, Arial Hebrew, Arial Thai, Arial SF. Arial WGL It is a version that covers only the Windows Glyph List 4 (WGL4) characters. They are only sold in TrueType format. The family includes Arial (regular, bold, italics), Arial Black, Arial Narrow (regular, bold, italics), Arial Rounded (regular, bold). Ascender Corporation fonts Ascender Corporation sells the font in Arial WGL family, as well as the Arial Unicode. Arial in other font families Arial glyphs are also used in fonts developed for non-Latin environments, including Arabic Transparent, BrowalliaUPC, Cordia New, CordiaUPC, Miriam, Miriam Transparent, Monotype Hei, Simplified Arabic. Free alternatives Arial is a proprietary typeface to which Monotype Imaging owns all rights, including software copyright and trademark rights (under U.S. copyright law, Monotype cannot legally copyright the shapes of the actual glyphs themselves). Its licensing terms prohibit derivative works and free redistribution. There are some free software metric-compatible fonts used as free Arial alternatives or used for Arial font substitution: Liberation Sans is a metrically equivalent font to Arial developed by Ascender Corp. and published by Red Hat in 2007, initially under the GPL license with some exceptions. Versions 2.00.0 onwards are published under SIL Open Font License. It is used in some Linux distributions as default font replacement for Arial. 
Liberation Sans Narrow is a metrically equivalent font to Arial Narrow contributed to Liberation fonts by Oracle in 2010, but is not included in 2.00.0. Google commissioned a variation named Arimo for Chrome OS. URW++ produced a version of Helvetica called Nimbus Sans L in 1987, and it was eventually released under the GPL and AFPL (as Type 1 font for Ghostscript) in 1996. It is one of the Ghostscript fonts, free alternatives to 35 basic PostScript fonts (which include Helvetica). FreeSans, a free font descending from URW++ Nimbus Sans L, which in turn descends from Helvetica. It is one of free fonts developed in GNU FreeFont project, first published in 2002. It is used in some free software as Arial replacement or for Arial font substitution. TeX Gyre Heros, a free font descending from URW++ Nimbus Sans L, which in turn descends from Helvetica. It is one of free fonts developed by the Polish TeX Users Group (GUST), first published in 2007. It is licensed under the GUST Font License. See also Core fonts for the Web List of fonts :Category:Monotype typefaces – typefaces owned by or designed for Monotype Imaging References External links Microsoft Typography: Arial, Arial Black, Arial Narrow, Arial Rounded MT, Arial Special G1/G2, Arial Narrow Special G1/G2, Arial Unicode MS Linotype/Monotype Arial families: Arial, Arial WGL, Arial Arabic, Arial Nova, Arial OS, Arial Unicode Fonts in Use Monotype typefaces Neo-grotesque sans-serif typefaces Typefaces and fonts introduced in 1982 Microsoft typefaces Windows XP typefaces
3673825
https://en.wikipedia.org/wiki/Tony%20Slaton
Tony Slaton
Anthony Tyrone Slaton (born April 12, 1961) is a former American college and professional football player who was an offensive lineman in the National Football League (NFL) for six seasons during the 1980s and early 1990s. Slaton played college football for the University of Southern California, and thereafter he played professionally for the Los Angeles Rams and Dallas Cowboys of the NFL. Early years Slaton was born in Merced, California. He went to Merced High School. College career Slaton attended the University of Southern California, where he played for the USC Trojans football team from 1980 to 1983. As the Trojans' senior center in 1983, he received consensus first-team All-American honors. Professional career Slaton was drafted in the sixth round of the 1984 NFL draft by the Buffalo Bills but did not make that team's opening day roster. He played for the Los Angeles Rams between 1985 and 1989 and the Dallas Cowboys in 1990. Personal life He is currently the executive director at the Boys and Girls Clubs of America in Merced and participates in other local youth development initiatives. References 1961 births Living people American football offensive linemen Dallas Cowboys players Los Angeles Rams players USC Trojans football players All-American college football players People from Merced, California Players of American football from California
67739688
https://en.wikipedia.org/wiki/Air%20India%20data%20breach
Air India data breach
The 2021 Air India cyberattack was a cyberattack that affected more than 4.5 million customers of the airline Air India. Cyberattack On 21 May 2021, it was reported that Air India had been subjected to a cyberattack in which the personal details of about 4.5 million customers around the world were compromised, including passport details, credit card details, birth dates, names and ticket information. Air India's data processor, SITA, a Swiss technology company known for offering passenger processing and reservation system services, reported the data breach to Air India around February 2021. The data breach involved all information registered with the SITA data processor between 26 August 2011 and 20 February 2021. It was also revealed that the cyberattackers had access to the systems for a period of 22 days. It was reported that the compromised servers were later secured and that Air India engaged external data security specialists. Air India also assured its passengers that there was no conclusive evidence that any misuse of the personal data had occurred. The airline also urged and encouraged passengers to immediately change their passwords. References See also EasyJet hack 2018 British Airways cyberattack List of security hacking incidents List of data breaches Cybercrime in India Computer security Air India Cyberattacks on airlines
11969012
https://en.wikipedia.org/wiki/Kerbango
Kerbango
Kerbango was both a company acquired by 3Com and its lead product. Kerbango was founded in 1998 in Silicon Valley by former executives from Apple Computer and Power Computing Corporation. On June 27, 2000, 3Com announced it was acquiring the Kerbango company in an $80 million deal. As part of the deal, Kerbango's CEO, Jon Fitch, became vice president and general manager of 3Com's Internet Audio division, working under Julie Shimer, then vice president and general manager of 3Com's Consumer Networks Business. Kerbango Internet Radio The "Kerbango Internet Radio" was intended to be the first stand-alone product that let users listen to Internet radio without a computer. Linux Journal quipped that the Kerbango 100E, the prototype, looked "like a cross between an old Wurlitzer jukebox and the dashboard of a '54 Buick." This initial model was even advertised on Amazon.com in anticipation of its sale, although it was never released. The Kerbango 100E was an embedded Linux device (running Montavista's Hard Hat Linux), reportedly using RealNetworks' G2 Player to play Internet audio streams (RealAudio G2, 5.0, 4.0, and 3.0 streams as well as streaming MP3). A broadband connection to the Internet was required as dial-up connections were not supported. In addition to Internet streams, the 100E featured an AM/FM tuner. The Kerbango radio's tuning user interface was designed by Alan Luckow and long-time Apple QuickTime developer Jim Reekes and was later adopted for use within iTunes. The Kerbango radio also had a companion website which allowed the user to control various aspects of the radio, save presets and edit account information. The website also acted as a streaming radio search engine, where users could search for, and listen to streaming radio stations through their browser. References Internet audio players Online companies of the United States Internet radio Electronics companies of the United States
4537037
https://en.wikipedia.org/wiki/213th%20Regional%20Support%20Group%20%28United%20States%29
213th Regional Support Group (United States)
The 213th Regional Support Group (213th RSG) is a unit of the Pennsylvania Army National Guard (PA ARNG). The 213th RSG's mission is to provide command and control of the twenty-two separate Pennsylvania Army National Guard units assigned to the headquarters for operational and administrative control. This force consists of more than 1,000 soldiers from the eastern and central parts of the state. The 213th RSG is one of the two major commands in the Pennsylvania Army National Guard, the other being the 28th Infantry Division (Keystone). History The Group traces antecedents to 1849, but the designation '213th Support Group' did not appear until 1975, and the '213th Regional Support Group' until 2011. The 1st and 2nd Battalions, 176th Air Defense Artillery, were part of the 218th Artillery Group (AD) from 1 June 1959 to 1 April 1963, after which the 2-176 joined the 213th Artillery Group (Air Defense) until 17 February 1968, and thereafter until 1974 just with the PA ARNG. Headquarters and Headquarters Battery, 213th Artillery Group was converted and re-designated 1 December 1971 as Headquarters and Headquarters Detachment, 213th Military Police Group, and then, four years later, again converted and re-designated 1 April 1975 as Headquarters and Headquarters Company, 213th Support Group. The units of the 213th Group continually provide support to units and organizations throughout the United States and the world. In 2009, the Group provided abbreviated Reception, Staging, and Onward Integration services for the G-20 Summit in Pittsburgh, Pennsylvania, processing over 2,500 troops into the Area of Responsibility. In 2007–2008 the Group deployed to Balad, Iraq in support of Operation Iraqi Freedom. During this mission, it served as one of five sustainment brigades assigned to the 316th Expeditionary Sustainment Command headquartered in Coraopolis, Pennsylvania. In 2005, immediately after Hurricane Katrina made landfall, the Group quickly deployed to Metairie, Louisiana, near New Orleans, in support of Katrina relief, acting as a command and control element for the Intermediate Staging Base at Zephyr Stadium. A tour in Afghanistan from 2003 to 2004 found the area support group providing command and control for bases in Bagram, Kandahar, and Kabul, Afghanistan, along with an additional command and control cell in Uzbekistan. In 1996 the group provided combat service support to an active-duty brigade during a field training exercise at the Joint Readiness Training Center, Fort Polk, Louisiana. In 1997, the group Headquarters Company sent 50 members to Hungary to support the Stabilisation Force in Bosnia and Herzegovina (S-FOR). Throughout the past decade, Group subordinate units have performed yeoman's work in fighting the War on Terror. During blizzards, floods and civil emergencies, all the units of the Group stand ready to assist the communities and people of the Commonwealth. Missions Federal The mission of the 213th Regional Support Group is to provide command and control, structure for non-major combat operations, and assist assigned Active Component or Reserve Component units in meeting training, readiness and deployment requirements. State Provide command and control, assist assigned units in meeting training, readiness and deployment requirements. Support State Emergency Operations as required, and operate Logistics Base Operations at multiple locations. 
Units The peacetime structure of the 213th Regional Support Group consists of the following elements: Headquarters and Headquarters Detachment (HHD); Allentown, Pennsylvania 108th Area Support Medical Company (108th ASMC); Allentown, Pennsylvania 109th Mobile Public Affairs Detachment (109th MPAD); Fort Indiantown Gap, Pennsylvania 1902nd Contingency Contracting Detachment (1902nd CCD), Allentown, Pennsylvania 1928th Senior Contingency Contracting Detachment (1928th SCCD), Allentown, Pennsylvania 1955th Contingency Contracting Detachment (1955th CCT), Allentown, Pennsylvania 228th Transportation Battalion, Annville, Pennsylvania Headquarters Detachment, 228th Transportation Battalion, Annville, Pennsylvania 131st Transportation Company; Williamstown, Pennsylvania Detachment 1, 131st Transportation Company; Lehighton, Pennsylvania 1067th Transportation Company; Phoenixville, Pennsylvania Detachment 1, 1067th Transportation Company; Philadelphia, Pennsylvania Detachment 2, 1067th Transportation Company; Annville, Pennsylvania 728th Combat Sustainment Support Battalion (728th CSSB); Spring City, Pennsylvania 28th Financial Management Support Unit (28th FMSU); Lebanon, Pennsylvania 528th Finance Detachment; Lebanon, Pennsylvania 628th Finance Detachment; Lebanon, Pennsylvania 828th Finance Detachment; Lebanon, Pennsylvania 928th Finance Detachment; Lebanon, Pennsylvania 213th Personnel Company (Human Resources); Fort Indiantown Gap, Pennsylvania 252nd Quartermaster Company; Philadelphia, Pennsylvania 3622nd Support Maintenance Company (3622nd SMC); Fort Indiantown Gap, Pennsylvania Lineage HHD/ 213th Regional Support Group Organized 6 August 1849 in the Pennsylvania militia at Allentown as the Lehigh Fencibles. Re-designated 10 July 1850 as the Allen rifles. Consolidated 18 April 1861 with the Jordan Artillerists and consolidated unit reorganized and re-designated as the Union Rifles. Mustered into Federal service 20 April 1861 at Harrisburg, Pennsylvania as Company I, 1st Pennsylvania Volunteer Infantry Regiment; mustered out of Federal service 27 July 1861 at Harrisburg. Former Allen Rifles reorganized and mustered into Federal service 30 August 1861 at Harrisburg as Company B, 47th Pennsylvania Volunteer Infantry Regiment; mustered out of Federal service 25 December 1865 Reorganized 3 June 1870 in the Pennsylvania National Guard at Allentown as the Allen Rifles. Re-designated 30 June 1874 as Company D, 4th Infantry Regiment. Mustered into Federal service 8 July 1899 at Allentown as Company D, 4th Infantry Regiment *Mustered into Federal service 27 July 1916 at Mount Gretna; mustered out of Federal service 5 August 1917. Reorganized and re-designated 11 October 1917 as Company, 109th Machine Gun Battalion, an element of the 28th Division. Demobilized 4 May 1919 at Camp Dix, New Jersey. Reorganized 24 March 1921 in the Pennsylvania National Guard at Allentown as Company D, 3rd Separate Battalion of Infantry; federally recognized 8 April 1921. Converted and re-designated 1 May 1922 as Headquarters Detachment and Combat Train, 1st Battalion, 213th Artillery (Antiaircraft). Re-designated 1 August 1924 as headquarters Detachment and Combat Train, 1st Battalion, 213th Coast Artillery. Reorganized and re-designated 1 April 1939 as Headquarters Battery, 213th Coast Artillery. Inducted into Federal service 16 September 1940 at Allentown. Reorganized and Federally recognized 9 October 1946 at Allentown. 
Consolidated 1 June 1959 with the 151st Antiaircraft Artillery Detachment and consolidated unit re-designated as Headquarters and Headquarters Battery, 213th Artillery Group. Converted and re-designated 1 December 1971 as Headquarters and Headquarters Detachment, 213th Military Police Group Converted and re-designated 1 April 1975 as Headquarters and Headquarters Company, 213th Support Group. HHC (-Det 1), 213th Area Support Group ordered into active Federal service on 7 January 1997 in support of Operation Joint Endeavor/Guard HHC (-Det 1), 213th Area Support Group released from active Federal service on 2 October 1997. Headquarters and Headquarters Company, 213th Area Support Group mobilized and ordered into active Federal service on 15 March 2003 at Allentown in support of Operation Enduring Freedom. Trained at Fort Dix, New Jersey and then deployed, serving in Afghanistan and Uzbekistan. Released from active Federal Service and returned to Fort Dix on 18 April 2004. Headquarters and Headquarters Company, 213th Area Support Group mobilized and ordered into active Federal service on 23 April 2007 at Allentown in support of Operation Iraqi Freedom. Trained at Fort Bragg, North Carolina and then deployed to Kuwait and Iraq. Released from active Federal service and returned to Fort Bragg on 13 April 2008. Headquarters and Headquarters Company, 213th Area Support Group reorganized 1 September 2011 as Headquarters and Headquarters Detachment, 213th Regional Support Group. Honors Campaign participation credit The 213th Regional Support Group has the following campaign participation credit. Civil War Shenandoah Florida 1862 South Carolina 1862 Louisiana 1864 War with Spain Puerto Rico World War I Champagne-Marne Aisne-Marne Oise-Aisne Meuse-Argonne Champagne 1918 Lorraine 1918 World War II Tunisia Naples-Foggia Rome-Arno Normandy Northern France Southern France Rhineland Central Europe Southwest Asia Defense of Saudi Arabia Liberation and Defense of Kuwait Cease-Fire War on Terrorism Iraq: Iraqi Surge Afghanistan: Consolidation I Decorations The 213th Regional Support Group has been awarded the following decorations. Meritorious Unit Commendation (Army), Streamer embroidered SOUTHWEST ASIA 1990–1991 Meritorious Unit Commendation (Army), Streamer embroidered IRAQ 2007-2008 References External links 213th Area Support Group Unit Insignia, United States Army Institute of Heraldry Regional Support 213 Military units and formations in Pennsylvania Groups of the United States Army National Guard Military units and formations established in 2011
18934946
https://en.wikipedia.org/wiki/Uniform%20Computer%20Information%20Transactions%20Act
Uniform Computer Information Transactions Act
The Uniform Computer Information Transactions Act (UCITA) was an attempt to introduce a uniform act for the United States to follow. As a model law, it only specifies a set of guidelines, and each state decides separately whether to pass it. UCITA was drafted by the National Conference of Commissioners on Uniform State Laws (NCCUSL). It was designed to clarify issues that were not addressed by the existing Uniform Commercial Code. "Few disagree that the current Uniform Commercial Code is ill-suited for use with licensing and other intangible transactions", said practicing attorney Alan Fisch. UCITA has been passed in only two states, Virginia and Maryland; the law did not pass in other states. Nevertheless, legal scholars, such as noted commercial law professor Jean Braucher, believe that the UCITA offers academic value. A resolution recommending approval of UCITA by the American Bar Association (ABA) was withdrawn by the NCCUSL in 2003, indicating that UCITA lacked the consensus necessary for it to become a uniform act. UCITA has faced significant opposition from various groups. Provisions UCITA focuses on adapting current commercial trade laws to the modern software era. It is particularly controversial with respect to computer software. The code would automatically make a software maker liable for defects and errors in the program. However, it allows a shrinkwrap license to override any of UCITA's provisions. As a result, commercial software makers can include such a license in the box and not be liable for errors in the software. Free software that is distributed gratis and through downloads, however, would not be able to force a shrinkwrap license and would therefore be liable for errors. Small software makers without legal knowledge would also be at risk. UCITA would explicitly allow software makers to impose any legal restrictions they want on their software by calling the software a license in the EULA, rather than a sale. This would therefore take away purchasers' right to resell used software under the first sale doctrine. Without UCITA, courts have often ruled that despite the EULA claiming a license, the actual actions of the software company and purchaser clearly show that it was a purchase, meaning that the purchaser has the right to resell the software to anyone. History UCITA started as an attempt to modify the Uniform Commercial Code by introducing a new article: Article 2B (also known as UCC2B) on Licenses. The committee drafting UCC2B consisted of members from both the NCCUSL and the American Law Institute (ALI). At a certain stage of the process, ALI withdrew from the drafting process, effectively killing UCC2B. Afterwards, the NCCUSL renamed UCC2B to UCITA and proceeded on its own. Passage record Before ratification, each state may amend its practices, thus creating different conditions in each state. This means that the final "as read" UCITA document is what is actually passed and signed into law by each state governor. The passage record typically indicates each version of UCITA submitted for ratification. Two states, Virginia and Maryland, passed UCITA in 2000, shortly after its completion by the NCCUSL in 1999. However, beginning with Iowa that same year, numerous additional states have passed so-called "bomb-shelter" laws enabling citizen protections against UCITA-like provisions. 
Passage of UCITA Maryland (passed April 2000): http://mlis.state.md.us/2000rs/billfile/hb0019.htm Virginia (passed 2000): http://leg1.state.va.us/cgi-bin/legp504.exe?001+ful+SB372ER Passage of Anti-UCITA Bomb Shelter Laws (UETA) Iowa (passed 2000) North Carolina (passed 2001) West Virginia (2001) Vermont (passed 2003) Idaho Reception There have been concerns regarding the "one size fits all" approach of the UCITA, the UCITA's favoring of software manufacturers, and the UCITA's deference to new industry standards. Other criticisms regarding the adoption of the UCITA concerned its self-help provisions, which existed in a limited form in the first version of the UCITA and were later prohibited. References See also List of Uniform Acts (United States) Computing legislation Computer information transactions 1999 in American law
5445060
https://en.wikipedia.org/wiki/Live%20PA
Live PA
Live PA (meaning live public address, or live personal appearance) is the act of performing live electronic music in settings typically associated with DJing, such as nightclubs, raves, and more recently dance music festivals. In a performative context, the term was originally used to refer to live appearances, initially at rave events in the late 1980s, of studio based producers of electronic dance music who released music using fixed media formats such as 12-inch single, CD, or music download. The concept of the live PA helped provide a public face for a scene that was criticized as "faceless" by the mainstream music press. The trend was quickly exploited by a music industry desperate to market dance music to a popular audience. Execution Generally, live PA artists and performers use a central sequencer which triggers and controls sound generating devices like synthesizers, drum machines, and digital samplers. The resulting audio outputs of these devices are then mixed and modified with effects using a mixing console. Interconnected drum machines and synthesizers allow the electronic live PA artist to effectively orchestrate a single-person concert. Even though the live PA artist performs alone, she or he may be triggering a large number of musical parts, including a bassline, drum beats on a drum machine, synthesizer chords, and sampled riffs from other recording. Live PA artists typically add to these sequenced and triggered parts with hand-played electronic keyboards, hand-triggered audio samples, live vocals/singing, and other live instruments. Some artists like Brian Transeau and Jamie Lidell utilize hardware and software tools custom-designed for live expression and improvisation. By arranging, muting, and cueing pre-composed basic musical data (notes, loops, patterns, and sequences), the live PA artist has the freedom to manipulate major elements of the performance and alter a song's progression in real-time. As such, each performance may be different, as the live PA artist changes the loops and patterns. Many live PA artists try to combine the qualities of both traditional bands and dancefloor DJs, taking the live music element from bands, and the buildup and progression from song to song of DJs, as well as the sheer volume of music controlled by a single person (of a DJ as opposed to a band). From hardware to software Technological progress has kept live PAs evolving to the 2010s. In the 1980s, a live PA artist would need a van full of synthesizer keyboards, drum machines, and large cases of rackmounted effects units. With advances in computer processing power and in software-based audio tools and instruments, the live PA artist in the 2010s can pack a single laptop (loaded with digital audio workstation software and digital effects and mixers) into a bag, go out to a venue that has a house sound reinforcement system and perform a show. This possibility creates a point of discussion, as the ability to perform one's own music live using a single, generic device creates yet another range of performative styles. On one end, a laptop-based performer has the option of simply playing a polished, premade audio file that she or he prepared in the recording and editing studio. On the other end, the performer can be creating sound completely from scratch using software-based synthesizers, sequencers, etc. Somewhere in the middle is where the majority of performance setups fall. Incredibly popular is the software tool Ableton Live. 
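The trigger-and-generate pattern described in this section, a sequencer sending note events to a sound generator, can be sketched independently of any particular product or rig using Java's built-in javax.sound.midi API. This is a minimal, illustrative example only, not a description of how any specific live PA setup or the software mentioned above is implemented:

```java
import javax.sound.midi.MidiChannel;
import javax.sound.midi.MidiSystem;
import javax.sound.midi.Synthesizer;

// Minimal illustration of programmatic note triggering: a simple loop acts as a
// "sequencer", sending note-on/note-off events to the JVM's built-in synthesizer.
public final class TinyTrigger {

    public static void main(String[] args) throws Exception {
        Synthesizer synth = MidiSystem.getSynthesizer();
        synth.open();
        try {
            MidiChannel channel = synth.getChannels()[0];

            // A four-note pattern, analogous to a pre-composed loop being cued live.
            int[] pattern = {60, 63, 67, 70};   // MIDI note numbers: C4, Eb4, G4, Bb4
            for (int note : pattern) {
                channel.noteOn(note, 90);       // velocity 90
                Thread.sleep(250);              // 250 ms per step
                channel.noteOff(note);
            }
        } finally {
            synth.close();
        }
    }
}
```

In practice, performers rely on dedicated hardware sequencers or software such as the digital audio workstations discussed in this article rather than hand-written code; the sketch only illustrates the underlying note-triggering model.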
Ableton Live gives a laptop-based performing artist the ability to sequence and trigger software synthesizers, external MIDI-controlled instruments, and internally stored sampled audio clips and loops. This can all be achieved in real-time, with the resulting audio being manipulated by Ableton Live's mixer and effect processors. The feasibility of using a laptop computer as an all-in-one electronic music creation and performance tool created a massive wave of new artists, performers, and performance events. An international contest known as the Laptop Battle has gained momentum. Degree of "liveness" A topic of debate amongst listeners, critics, and artists themselves is to what degree a performance is actually "live". A possible determining factor could be the degree to which the performing artist has real-time control over individual elements of the final musical output. Using this criterion, an artist who mimics or mimes the playing of instruments whilst simply having a prerecorded CD or digital audio track sound over the PA system or broadcast might not be considered particularly "live" by most people. On the far opposite end of the spectrum, some artists choose to take only an idea or motif (e.g. a bassline, rhythm pattern, or chord progression), realize it from scratch with electronic instruments on-the-spot, and then build upon it, modify it, and continue in this way for the entire performance. This requires a degree of discipline, technical musical skill and creativity to achieve. Additionally, some electronic musicians are also able to play keyboards, percussion and other conventional instruments, and will incorporate instrument playing with live manipulations of samples, effects, and electronics. Such situations meet the criteria of a live musical performance, since physical movements directly affect music. Some might argue that the visual aspect of a performance, such as the movements of the performer and the light show, would be sufficient to call it "live". Codifying what defines "live" and what does not has been an ongoing topic of debate for many years. To date, nobody has successfully created a definition with which everyone involved seems satisfied. See also Laptop battle References Electronic music Music performance
15500490
https://en.wikipedia.org/wiki/1880%20Troy%20Trojans%20season
1880 Troy Trojans season
The 1880 Troy Trojans improved slightly from the previous season, finishing with a 41–42 record and in 4th place in the National League. Regular season Season standings Record vs. opponents Roster Player stats Batting Starters by position Note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in Other batters Note: G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in Pitching Starting pitchers Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts Relief pitchers Note: G = Games pitched; W = Wins; L = Losses; SV = Saves; ERA = Earned run average; SO = Strikeouts References 1880 Troy Trojans season at Baseball Reference Troy Trojans (MLB team) seasons Troy Trojans season Troy Trojans
21345214
https://en.wikipedia.org/wiki/Phorm
Phorm
Phorm, formerly known as 121Media, was a digital technology company known for its contextual advertising software. Phorm was incorporated in Delaware, United States, but relocated to Singapore as Phorm Corporation (Singapore) Ltd in 2012. Founded in 2002, the company originally distributed programs that were considered spyware, from which they made millions of dollars in revenue. It stopped distributing those programs after complaints from groups in the United States and Canada, and announced it was talking with several United Kingdom Internet service providers (ISPs) to deliver targeted advertising based on the websites that users visited. Phorm partnered with ISPs Oi, Telefonica in Brazil, Romtelecom in Romania, and TTNet in Turkey. In June 2012, Phorm made an unsuccessful attempt to raise £20 million for a 20% stake in its Chinese subsidiary. The company's proposed advertising system, called Webwise, was a behavioral targeting service (similar to NebuAd) that used deep packet inspection to examine traffic. Phorm said that the data collected would be anonymous and would not be used to identify users, and that their service would include protection against phishing (fraudulent collection of users' personal information). Nonetheless, World Wide Web creator Tim Berners-Lee and others spoke out against Phorm for tracking users' browsing habits, and the ISP BT Group was criticised for running secret trials of the service. The UK Information Commissioner's Office voiced legal concerns over Webwise, and has said it would only be legal as an "opt-in" service, not an opt-out system. The European Commission called on the UK to protect Web users' privacy, and opened an infringement proceeding against the country in regard to ISPs' use of Phorm. Some groups, including Amazon.com and the Wikimedia Foundation (the non-profit organization that operates collaborative wiki projects), requested an opt-out of their websites from scans by the system. Phorm changed to an opt-in policy. According to Phorm’s website, the company would not collect any data from users who had not explicitly opted in to its services. Users had to provide separate consent for each web browsing device they used. Due to increasing issues, Phorm ceased trading on 14 April 2016. Company history In its previous incarnation as 121Media, the company made products that were described as spyware by The Register. 121Media distributed a program called PeopleOnPage, which was classified as spyware by F-Secure. PeopleOnPage was an application built around their advertising engine, called ContextPlus. ContextPlus was also distributed as a rootkit called Apropos, which used tricks to prevent the user from removing the application and sent information back to central servers regarding a user's browsing habits. The Center for Democracy and Technology, a U.S.-based advocacy group, filed a complaint with the U.S. Federal Trade Commission in November 2005 over distribution of what it considered spyware, including ContextPlus. They stated that they had investigated and uncovered deceptive and unfair behaviour. This complaint was filed in concert with the Canadian Internet Policy and Public Internet Center, a group that was filing a similar complaint against Integrated Search Technologies with Canadian authorities. ContextPlus shut down its operations in May 2006 and stated they were "no longer able to ensure the highest standards of quality and customer care". The shutdown came after several major lawsuits against adware vendors had been launched. 
By September 2007, 121Media had become known as Phorm, and admitted to a company history in adware, stating it had closed down the multimillion-dollar revenue stream from its PeopleOnPage toolbar and citing consumers' identification of adware with spyware as the primary cause for the decision. In early 2008 Phorm admitted to editing its article on Wikipedia—removing a quotation from The Guardian's commercial executives describing their opposition to its tracking system, and deleting a passage explaining how BT admitted misleading customers over covert Phorm trials in 2007. The changes were quickly noticed and reversed by the online encyclopedia's editors. Phorm was based in Mortimer Street, London, UK, with a staff of around 35. Trading in Phorm's shares was suspended on London's AIM market on 24 February 2016, pending "clarification of the company's financial position". According to Phorm, it had been "unable to secure the requisite equity funding..." and was in "advanced discussions with certain of its shareholders and other parties regarding possible alternative financing..." and that there was "no guarantee" that such discussions would "result in any funds being raised. Pending conclusion of those discussions the Company has requested suspension of its shares from trading on AIM." Financial losses The company made a loss of $32.1 million in 2007, a loss of $49.8 million in 2008 and a loss of $29.7 million in 2009. 2010 was by no means better, with a net loss of $27.9 million. By the end of 2010 the company had lost more than $107 million, with no discernible revenue stream. In 2011, Phorm reported losses of $30.5 million and conducted an equity placing of £33.6 million, which paid off the company's debt. Cessation of trading On 14 April 2016 Phorm's Board of Directors announced to the London Stock Exchange that the company was ceasing to trade and that shareholders were unlikely to recover any of their investments. According to RNS Number 2561Y (FTSE, 13 May 2016), Phorm (UK) was deleted as a constituent of the FTSE AIM All-Share Index with effect from the start of trading on 18 May 2016. Proposed advertisement service Phorm had worked with major U.S. and British ISPs—including BT Group (formerly British Telecom), Virgin Media, and TalkTalk (at the time owned by The Carphone Warehouse)—on a behavioral targeting advertisement service to monitor browsing habits and serve relevant advertisements to the end user. Phorm said these deals would have given it access to the surfing habits of 70% of British households with broadband. The service, which used deep packet inspection to check the content of requested web pages, has been compared to those of NebuAd and Front Porch. The service, which would have been marketed to end users as "Webwise" (in 2009 the BBC took legal advice over the trade mark Webwise), would work by categorising user interests and matching them with advertisers who wished to target that type of user. "As you browse we're able to categorise all of your Internet actions", said Phorm COO Virasb Vahidi. "We actually can see the entire Internet". The company said that data collected would be completely anonymous, that Phorm would never be aware of the identity of the user or what they had browsed, and that Phorm's advertising categories excluded sensitive terms and had been widely drawn so as not to reveal the identity of the user.
Phorm also said that, by monitoring users' browsing, it was able to offer some protection against online fraud and phishing. Phorm formerly maintained an opt-out policy for its services. However, according to a spokesman for Phorm, the way the opt-out worked meant the contents of the websites visited would still be mirrored to its system. All computers, all users, and all http applications used by each user of each computer would need to be configured (or supplemented with add-ons) to opt out. The Information Commissioner's Office has since declared that Phorm would only be legal under UK law if it were an opt-in service. Implementation Richard Clayton, a Cambridge University security researcher, attended an on-the-record meeting with Phorm, and published his account of how their advertising system is implemented. Phorm's system, like many websites, uses HTTP cookies (small pieces of text) to store user settings. The company said that an initial web request is redirected three times (using HTTP 307 responses) within their system, so that they can inspect cookies to determine if the user has opted out. The system then sets a unique Phorm tracking identifier (UID) for the user (or collects it if it already exists), and adds a cookie that is forged to appear to come from the requested website. In an analysis titled "Stealing Phorm Cookies", Clayton wrote that Phorm's system stores a tracking cookie for each website visited on the user's PC, and that each contains an identical copy of the user's UID. Where possible, Phorm's system strips its tracking cookies from http requests before they are forwarded across the Internet to a website's server, but it cannot prevent the UID from being sent to websites using https. This would allow websites to associate the UID with any details the website collects about the visitor. Phorm Senior Vice President of Technology Marc Burgess said that the collected information also includes a timestamp. Burgess said, "This is enough information to accurately target an ad in [the] future, but cannot be used to find out a) who you are, or b) where you have browsed". (A schematic sketch of this redirect-and-cookie flow is given below.) Incentives In 2008 Phorm considered offering an incentive, in addition to the phishing protection it originally planned, as a means to convince end users to opt into its Webwise system. The alternative incentives, suggested in a Toluna.com market research survey carried out on behalf of Phorm, included further phishing protection, a donation to charity, a free technical support line, or one pound off opted-in users' monthly broadband subscriptions. Following the decision by the Wikimedia Foundation and Amazon to opt their websites out of being profiled by Phorm's Webwise system, and as an incentive for websites to remain opted into the Phorm profiling system, Phorm launched Webwise Discover. The Korean launch of this web publisher incentive was announced at a press conference in Covent Garden in London on 3 June 2009. A survey by polling firm Populus revealed that after watching a demonstration video, 66% of the 2,075 individuals polled claimed to either like the idea or like it a lot. Website publishers were invited to upload a web widget which would provide a small frame displaying recommended web links, based on the tracked interests of any Phorm-tracked website visitors (those whose ISP uses Phorm deep packet inspection to intercept and profile web traffic).
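The redirect-and-cookie handling described under Implementation above can be illustrated with a deliberately simplified, self-contained sketch. This is not Phorm's code: the cookie names, the opt-out marker and the UID format here are invented for illustration, and the real system operated inside the ISP's network equipment rather than in a few Python functions.

```python
import uuid
from http.cookies import SimpleCookie

OPTOUT_COOKIE = "webwise_optout"   # hypothetical opt-out marker
UID_COOKIE = "uid"                 # hypothetical tracking-identifier cookie name

def classify_request(cookie_header: str):
    """Decide, as the intercepting system would, whether to profile this request.

    Returns (action, uid): 'pass-through' with no UID for opted-out users,
    otherwise 'profile' with the existing or a freshly minted identifier.
    """
    jar = SimpleCookie(cookie_header or "")
    if OPTOUT_COOKIE in jar:
        return "pass-through", None
    uid = jar[UID_COOKIE].value if UID_COOKIE in jar else uuid.uuid4().hex
    return "profile", uid

def forge_domain_cookie(uid: str, requested_host: str) -> str:
    """Build a per-site cookie that appears to come from the visited domain."""
    return f"{UID_COOKIE}={uid}; Domain=.{requested_host}; Path=/"

def strip_tracking_cookie(cookie_header: str) -> str:
    """Remove the tracking cookie before the request is forwarded upstream.

    This is only possible for plain http; as noted above, the UID cannot be
    stripped from end-to-end https traffic.
    """
    jar = SimpleCookie(cookie_header or "")
    kept = [f"{name}={morsel.value}" for name, morsel in jar.items() if name != UID_COOKIE]
    return "; ".join(kept)

if __name__ == "__main__":
    action, uid = classify_request("theme=dark")
    print(action, uid)                                    # 'profile' plus a fresh UID
    print(forge_domain_cookie(uid, "example.org"))
    print(strip_tracking_cookie(f"theme=dark; {UID_COOKIE}={uid}"))
```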
There would be no charge to the website, and Phorm did not stand to make any money from Webwise Discover, although there were plans to display targeted adverts in the future. The widget would only deliver link recommendations if the user was signed up for targeted advertising with a Phorm-affiliated ISP; the widget would be invisible to everyone else. At the press launch Phorm spokespersons admitted that, at that point, not a single UK ISP or website had yet signed up to the Webwise Discover system, although they emphasised it was part of the current Korea Telecom Webwise trials. Susan Singleton has offered legal advice to websites considering signing up to the OIX system. Legality The Open Rights Group (ORG) raised questions about Phorm's legality and asked for clarification of how the service would work. The Foundation for Information Policy Research (FIPR) has argued that Phorm's online advert system is illegal in the UK. Nicholas Bohm, general counsel at FIPR, said: "The need for both parties to consent to interception in order for it to be lawful is an extremely basic principle within the legislation, and it cannot be lightly ignored or treated as a technicality." His open letter to the Information Commissioner has been published on the FIPR web site. The Conservative peer Lord Northesk has questioned whether the UK government is taking any action on the targeted advertising service offered by Phorm in the light of the questions about its legality under the Data Protection and Regulation of Investigatory Powers Acts. On 9 April 2008, the Information Commissioner's Office ruled that Phorm would only be legal under UK law if it were an opt-in service. The Office stated it would closely monitor the testing and implementation of Phorm, in order to ensure data protection laws are observed. The UK Home Office has indicated that Phorm's proposed service is only legal if users give explicit consent. The Home Office itself became a subject of controversy when emails between it and Phorm were released. The emails showed that the company edited a draft legal interpretation by the Home Office, and that an official responded "If we agree this, and this becomes our position do you think your clients and their prospective partners will be comforted". Liberal Democrat spokeswoman on Home Affairs, Baroness Sue Miller, considered it an act of collusion: "The fact the Home Office asks the very company they are worried is actually falling outside the laws whether the draft interpretation of the law is correct is completely bizarre." The Register reported in May 2008 that Phorm's logo strongly resembled that of an unrelated UK company called Phorm Design. They quoted the smaller company's owner, Simon Griffiths: "I've had solicitors look at it and they say we'd have to go to court. [Phorm are] obviously a big player with a lot of clout. I'm a small design agency in Sheffield that employs three people." Until 21 September 2010, Phorm's Webwise service also shared the same name as BBC WebWise. Monitoring of the Phorm website using a website change detection service alerted interested parties to changes on 21 September 2010. Phorm's website had been edited to remove references to the word 'Webwise'. Phorm's Webwise product had become 'PhormDiscover'. The CTM-Online database on the website of the Office for Harmonization in the Internal Market (OHIM), the European Union's trade marks and designs registration office, lists Phorm's application for use of the 'Webwise' trade mark name. The British Broadcasting Corporation is listed as an opponent on grounds of 'Likelihood of confusion'.
The City of London-based legal firm Bristows wrote to the OHIM on 22 September 2010, withdrawing the BBC's opposition saying, "The British Broadcasting Corporation have instructed us to request the withdrawal of the above Opposition No. B11538985" On 28 October 2010, BT removed the Webwise pages from their company website although it was not until 12 November 2010 that all pages had finally been confirmed as removed by forum contributors at the campaign group called "NoDPI.org". , Virgin Media had not removed their Phorm and Webwise FAQs from their customer-news section. European Commission case against UK over Phorm European Union communications commissioner Viviane Reding has said that the commission was concerned Phorm was breaching consumer privacy directives, and called on the UK Government to take action to protect consumers' privacy. The European Commission wrote to the UK government on 30 June 2008 to set out the context of the EU's interest in the controversy, and asked detailed questions ahead of possible Commission intervention. It required the UK to respond to the letter one month after it was sent. A spokeswoman for the Department for Business, Enterprise and Regulatory Reform (BERR) admitted on 16 August that the UK had not met the deadline. On 16 September, BERR refused The Register request to release the full text of their reply to the European Commission, but released a statement to the effect that the UK authorities consider Phorm's products are capable of being operated in a lawful, appropriate and transparent fashion. Unsatisfied by the response, the European Commission wrote to the UK again on 6 October. Martin Selmayr, spokesman for Reding's Information Society and Media directorate-general said, "For us the matter is not finished. Quite the contrary." The UK government responded again in November, but the Commission sent another letter to the government in January 2009. This third letter was sent because the Commission was not satisfied with explanations about implementation of European law in the context of the Phorm case. Selmayr was quoted in The Register as saying, "The European Commission's investigation with regard to the Phorm case is still ongoing", and he went on to say that the Commission may have to proceed to formal action if the UK authorities do not provide a satisfactory response to the Commission's concerns. On 14 April, the European Commission said they had "opened an infringement proceeding against the United Kingdom" regarding ISPs' use of Phorm: That day, in response to a news item by The Register regarding the European Commission's preparations to sue the UK government, Phorm said their technology "is fully compliant with UK legislation and relevant EU directives. This has been confirmed by BERR and by the UK regulatory authorities and we note that there is no suggestion to the contrary in the Commission's statement today." However, BERR denied such confirmation when they responded to a Freedom of Information (FOI) request also made that day: In January 2012, the EU dropped its case against the UK government. Reaction Initial reaction to the proposed service highlighted deep concerns with regards to individual privacy and property rights in data. Phorm has defended its technology in the face of what it called "misinformation" from bloggers claiming it threatens users' privacy. Most security firms classify Phorm's targeting cookies as adware. 
Kaspersky Lab, whose anti-virus engine is licensed to many other security vendors, said it would detect the cookie as adware. Trend Micro said there was a "very high chance" that it would add detection for the tracking cookies as adware. PC Tools echoed Trend's concerns about privacy and security, urging Phorm to apply an opt-in approach. Specialist anti-spyware firm Sunbelt Software also expressed concerns, saying Phorm's tracking cookies were candidates for detection by its anti-spyware software. Ross Anderson, professor of security engineering at Cambridge University, said: "The message has to be this: if you care about your privacy, do not use BT, Virgin or Talk-Talk as your internet provider." He added that, historically, anonymising technology had never worked. Even if it did, he stressed, it still posed huge privacy issues. Phorm has engaged a number of public relations advisers including Freuds, Citigate Dewe Rogerson and ex-House of Commons media adviser John Stonborough in an attempt to save its reputation, and has engaged with audiences via moderated online webchats. The creator of the World Wide Web, Sir Tim Berners-Lee, has criticised the idea of tracking his browsing history saying that "It's mine - you can't have it. If you want to use it for something, then you have to negotiate with me. I have to agree, I have to understand what I'm getting in return." He also said that he would change his ISP if they introduced the Phorm system. As Director of the World Wide Web Consortium, Berners-Lee also published a set of personal design notes titled "No Snooping", in which he explains his views on commercial use of packet inspection and references Phorm. Simon Davies, a privacy advocate and founding member of Privacy International, said "Behavioural advertising is a rather spooky concept for many people." In a separate role at 80/20 Thinking, a consultancy start-up, he was engaged by Phorm to look at the system. He said: "We were impressed with the effort that had been put into minimising the collection of personal information". He was subsequently quoted as saying "[Privacy International] DOES NOT endorse Phorm, though we do applaud a number of developments in its process". "The system does appear to mitigate a number of core privacy problems in profiling, retention and tracking ... [but] we won't as PI support any system that works on an opt-out basis." Kent Ertugrul later said he made a mistake when he suggested Privacy International had endorsed Phorm: "This was my confusion I apologise. The endorsement was in fact from Simon Davies, the MD of 80 / 20 who is also a director of privacy international." Stopphoulplay.com Ertugrul has set up a website called "Stopphoulplay.com", in reaction to Phorm critics Alexander Hanff and Marcus Williamson. Ertugrul called Hanff a "serial agitator" who has run campaigns against both Phorm and other companies such as Procter & Gamble, and says Williamson is trying to disgrace Ertugrul and Phorm through "serial letter writing". Hanff believes the Stopphoulplay website's statements are "completely irrelevant" to his campaign and that they will backfire on Ertugrul, while Williamson laments that Phorm "has now stooped to personal smears". When it launched on 28 April 2009, Stopphoulplay.com discussed a petition to the UK Prime Minister on the Downing Street website. 
When originally launched the web page claimed, "The website managers at 10 Downing Street recognised their mistake in allowing a misleading petition to appear on their site, and have since provided assurances to Phorm that they will not permit this to happen again". That same day, the Freedom of Information (FOI) Act was used to request confirmation of the claim by Phorm and on 29 April Phorm removed the quoted text from the website and replaced it with nothing. The Prime Minister's Office replied to the FOI request on 28 May, stating they held no information in relation to the request concerning Phorm's claim. A day after the site's launch, BBC correspondent Darren Waters wrote, "This is a battle with no sign of a ceasefire, with both sides [Phorm and anti-Phorm campaigners] settling down to a war of attrition, and with governments, both in the UK and the EU, drawn into the crossfire." The site was closed down in September 2009 and is now an online casino. However, the pages http://stopphoulplay.com/this-is-how-they-work/ and http://stopphoulplay.com/this-is-who-they-are/ still contain the comments against Hanff and NoDPI. BT trials After initial denials, BT Group confirmed they ran a small scale trial, at one exchange, of a "prototype advertising platform" in 2007. The trial involved tens of thousands of end users. BT customers will be able to opt out of the trial—BT said they are developing an improved, non-cookie based opt-out of Phorm—but no decision has been made as to their post-trial approach.The Register reported that BT ran an earlier secret trial in 2006, in which it intercepted and profiled the web browsing of 18,000 of its broadband customers. The technical report states that customers who participated in the trial were not made aware of the profiling, as one of the aims of the validation was not to affect their experience. On 4 June 2008, a copy of a 52-page report allegedly from inside BT, titled "PageSense External Technical Validation", was uploaded to WikiLeaks. The report angered many members of the public; there are questions regarding the involvement of charity ads for Oxfam, Make Trade Fair and SOS Children's Villages, and whether or not they were made aware that their ads were being used in what many feel were highly illegal technical trials. FIPR's Nicholas Bohm has said that trials of an online ad system carried out by BT involving more than 30,000 of its customers were potentially illegal. BT's third trial of Phorm's Webwise system repeatedly slipped. The trial was to last for approximately two weeks on 10,000 subscribers, and was originally due to start in March 2008, then pushed to April and again to the end of May; it has yet to occur. The company is facing legal action over trials of Phorm that were carried out without user consent. On 2 September 2008, while investigating a complaint made by anti-Phorm protestors, the City of London Police met with BT representatives to informally question them about the secret Phorm trials. On 25 September the Police announced that there will be no formal investigation of BT over its secret trials of Phorm in 2006 and 2007. According to Alex Hanff, the police said there was no criminal intent on behalf of BT and there was implied consent because the service was going to benefit customers. Bohm said of that police response: On 29 September 2008, it was announced in BT's support forum that their trial of Phorm's Webwise system would commence the following day. 
BT press officer Adam Liversage stated that BT is still working on a network-level opt-out, but that it will not be offered during the trial. Opted-out traffic will pass through the Webwise system but will not be mirrored or profiled. The final full roll-out of Webwise across BT's national network will not necessarily depend the completion of the work either. The Open Rights Group urged BT's customers not to participate in the BT Webwise trials, saying their "anti-fraud" feature is unlikely to have advantages over features already built into web browsers. Subscribers to BT forums had used the Beta forums to criticise and raise concerns about BT's implementation of Phorm, but BT responded with a statement: According to Kent Ertugrul, BT would have completed the rollout of its software by the end of 2009. The Wall Street Journal, however, reported in July 2009 that BT had no plans to do so by then, and was concentrating on "other opportunities". Phorm's share price fell 40% on the news. On 6 July 2009 BT's former chief press officer, Adam Liversage, described his thoughts using Twitter: "A year of the most intensive, personal-reputation-destroying PR trench warfare all comes to nothing". He ended his comment with "Phantastic". In October 2009, Sergeant Mike Reed of the City of London Police answered a Freedom of Information (FOI) request. He confirmed the crime reference number as 5253/08. In his response, he stated that after originally passing case papers to the Crown Prosecution Service (CPS) in December 2008, the police were '"asked to provide further evidence, by the CPS in October 2009". Asked to "Disclose the date when that investigation was reopened", he said that it was "on instruction of the CPS in October 2009". In Sergeant Reed's response he named the officer in charge as "D/S Murray". On 25 February 2010, it was reported that the CPS continued to work on a potential criminal case against BT over its secret trials of Phorm's system. Prosecutors considered whether or not to press criminal charges against unnamed individuals under Part I of the Regulation of Investigatory Powers Act. It was not until April 2011 the CPS decided not to prosecute as it would not be in the public interest, stating that neither Phorm nor BT had acted in bad faith and any penalty imposed would be nominal. In April 2012, reports said that an officer of the City of London Police had been taken to lunch by Phorm. A police spokesperson was quoted as saying they were aware of the allegation, and that while no formal complaint had been received, "The force is reviewing the information available to it before deciding the best course of action." The spokesperson also highlighted that, "City of London Police were not involved in an investigation into BT Phorm and that the decision not to investigate was prompted by CPS advice". Advertisers and websites Advertisers which had initially expressed an interest about Phorm include: Financial Times, The Guardian, Universal McCann, MySpace, iVillage, MGM OMD, Virgin Media and Unanimis. The Guardian has withdrawn from its targeted advertising deal with Phorm; in an email to a reader, advertising manager Simon Kilby stated "It is true that we have had conversations with them [Phorm] regarding their services but we have concluded at this time that we do not want to be part of the network. Our decision was in no small part down to the conversations we had internally about how this product sits with the values of our company." 
In response to an article published in The Register on 26 March 2008, Phorm has stated that MySpace has not joined OIX as a Publisher. The Financial Times has decided not to participate in Phorm's impending trial. The ORG's Jim Killock said that many businesses "will think [commercial] data and relationships should simply be private until they and their customers decide," and might even believe "having their data spied upon is a form of industrial espionage". David Evans of the British Computer Society has questioned whether the act of publishing a website on the net is the same as giving consent for advertisers to make use of the site's content or to monitor the site's interactions with its customers. Pete John created an add on, called Dephormation, for servers and web users to opt out and remain opted-out of the system; however, John ultimately recommends that users switch from Phorm-equipped Internet providers: "Dephormation is not a solution. It is a fig leaf for your privacy. Do not rely on Dephormation to protect your privacy and security. You need to find a new ISP." In April 2009, Amazon.com announced that it would not allow Phorm to scan any of its domains. The Wikimedia Foundation has also requested an opt-out from scans, and took the necessary steps to block all Wikimedia and Wikipedia domains from being processed by the Phorm system on the 16th of that month. In July 2009 the Nationwide Building Society confirmed that it would prevent Phorm from scanning its website, in order to protect the privacy of its customers. Internet service providers MetroFi, an American municipal wireless network provider linked to Phorm, ceased operations in 2008. Three other ISPs linked to Phorm all changed or clarified their plans since first signing on with the company. In response to customer concerns, TalkTalk said that its implementation would have been "opt-in" only (as opposed to BT's "opt-out") and those that don't "opt in" will have their traffic split to avoid contact with a WebWise (Phorm) server. In July 2009, the company confirmed it would not implement Phorm; Charles Dunstone, boss of its parent company, told the Times "We were only going to do it [Phorm] if BT did it and if the whole industry was doing it. We were not interested enough to do it on our own." Business news magazine New Media Age reported on 23 April that Virgin Media moved away from Phorm and was expected to sign a deal with another company named Audience Science, while BT would meet with other advertising companies to gain what the ISP calls "general market intelligence" about Phorm.NMA'' had called the moves "a shift in strategy by the two media companies". A day later, the magazine said both companies' relationships with Phorm actually remain unchanged. Although Virgin Media were reported to have "moved away from Phorm", in November 2010 they were the only UK-based ISP to still carry information about Phorm's Webwise system on their website. In addition, Phorm partners with international ISPs Oi, Telefónica in Brazil, TTNET-Türk Telekom in Turkey, and Romtelecom in Romania. Countries Post United Kingdom South Korea Phorm announced the beginning of a market trial in South Korea via the London Stock Exchange's Regulatory News Service (RNS) on 30 March 2009. Subsequently, they announced via RNS on 21 May 2009 that they had commenced the trial. On 8 July 2009 Phorm indicated that the trials were proceeding as expected. 
In their Notice of 2009 Interim Report & Accounts, published on 14 September 2009, Phorm stated they were "Nearing completion of a substantial market trial, launched in May, with KT, the largest ISP in South Korea". The existence of the trial in South Korea was publicised by OhmyNews on 2 September 2009. On 9 September 2009 OhMyNews announced that the trial had been shut down. Brazil On 26 March 2010, Phorm announced that its plans for commercial deployment in Brazil. In May 2012, the Brazilian Internet Steering Committee issued a resolution recommending against the use of Phorm products by any internet service providers in the country, citing privacy risks and concerns that Phorm's products would degrade the quality of internet services. In respect of the proposed partnership with Telemar (now known as Oi) the claim is that iG, a web portal, only has 5% penetration in the market and Phorm did not clear R$400 million "last year". Turkey Since launching with TTNET, a subsidiary of Türk Telecom Group, in Turkey in 2012, Phorm has launched its platform with five additional ISPs. Accordingly, on a global basis, there are now over 20 million daily users on Phorm's platform. According to RNS Number : 3504C, as of 16 January 2015, Phorm moved to a remote cookie option whilst scaling back its operations in Turkey. China Phorm announced on 3 October 2013 that it had launched operationally in China and had commenced a nationwide opt-in process. The company announced that it has commenced commercial operations in China and is serving advertisements on a paid basis. Privacy concerns in China and Hong Kong are growing, and there have been significant developments in privacy regulation, which could impact on Phorm operations in both the mainland and Hong Kong. In May 2012 mainland China passed new regulations which implement measures protecting consumer privacy from commercial exploitation. Further privacy legislation arrived in April 2013, with the publication of two draft rules from the Ministry of Industry and Information Technology: "Provisions on the Protection of the Personal Information of Telecommunications (Provisions for Telecommunications and Internet Users)", and "Internet Users and the Provisions on Registration of the True Identity Information of Phone Users" (Provisions on Phone Users), along with draft amendments to the 1993 Law of Consumer Rights. The laws are emerging as e-commerce in China becomes an increasingly significant part of the Chinese economy. These new regulations, which include provisions regulating data collection by smart devices, are discussed in an article published by the International Association of Privacy Professionals' "Privacy Tracker" blog called "Making Sense of China's New Privacy Laws". In Hong Kong, The Office of the Privacy Commissioner for Personal Data ("PCPD") has taken a robust approach to the protection of consumer privacy, as they seek to enforce the provisions of the Personal Data (Privacy) (Amendment) Ordinance 2012 ("Amendment Ordinance") which came into force in April 2013. Notes References External links Internet privacy Online advertising Spyware Rootkits
30860321
https://en.wikipedia.org/wiki/Linux%20Intrusion%20Detection%20System
Linux Intrusion Detection System
In computer security, the Linux Intrusion Detection System (LIDS) is a patch to the Linux kernel, with associated administrative tools, that enhances the kernel's security by implementing mandatory access control (MAC). When LIDS is in effect, chosen file access, all system and network administration operations, any capability use, and raw device, memory, and I/O access can be made impossible, even for root. One can define which programs can access specific files. It uses and extends the system capabilities bounding set to control the whole system, and adds some network and filesystem security features to the kernel to enhance security. One can finely tune the security protections online, hide sensitive processes, receive security alerts through the network, and more. LIDS supports Linux kernels 2.4 and 2.6. LIDS is released under the terms of the GNU General Public License (GPL). Current status As of 2013, the project appeared to be dead. The last updates on the homepage and in the associated forum were from 2010, and as of 2018 the website is no longer running. Awards Top 75 security tools in 2003 Top 50 security tools in 2000 Best of Linux for October 9, 2000 See also AppArmor Security-Enhanced Linux (SELinux) References External links LIDS homepage (archive) Linux security software
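LIDS itself was configured with its own administration tools shipped alongside the kernel patch (not reproduced here). As a small, hedged illustration of the stock kernel facility the article mentions — the capability bounding set — the Linux-only sketch below uses prctl(2) via ctypes to probe whether individual capabilities are still available to the current process; the capability list chosen is arbitrary, and this does not demonstrate LIDS's own rule language.

```python
import ctypes
import os

PR_CAPBSET_READ = 23          # prctl option from <linux/prctl.h>
CAPS = {                      # a few capability numbers from <linux/capability.h>
    "CAP_CHOWN": 0,
    "CAP_NET_RAW": 13,
    "CAP_SYS_MODULE": 16,
    "CAP_SYS_ADMIN": 21,
}

libc = ctypes.CDLL(None, use_errno=True)   # resolve prctl from libc; Linux only

def in_bounding_set(cap: int) -> bool:
    """True if `cap` has not been dropped from this process's bounding set."""
    result = libc.prctl(PR_CAPBSET_READ, cap, 0, 0, 0)
    if result < 0:
        err = ctypes.get_errno()
        raise OSError(err, os.strerror(err))
    return bool(result)

if __name__ == "__main__":
    for name, number in CAPS.items():
        state = "present" if in_bounding_set(number) else "dropped"
        print(f"{name:>15}: {state}")
```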
6075929
https://en.wikipedia.org/wiki/Timex%20Computer%202048
Timex Computer 2048
The Timex Computer 2048 or TC 2048 is a 1984 computer developed by Timex Portugal (the Portuguese branch of Timex Corporation), at the time part of Timex Sinclair. It was based on the Timex Sinclair 2048 prototype (see below), with a similarly redesigned case, composite video output, Kempston joystick interface, and additional video modes, while being highly compatible with the Sinclair ZX Spectrum computer (although ROM differences prevented 100% compatibility). After connecting an external disk drive, the Timex FDD3000, the computer could run TOS (Timex Operating System) or CP/M. As Timex Portugal sold the Timex Sinclair models in Portugal and Poland under the Timex Computer brand, this computer is named "Timex Computer 2048", even though the "Timex Sinclair 2048" was never produced. Timex Sinclair 2048 (prototype) The Timex Sinclair 2048 was not released by Timex Sinclair because of the failure of the T/S 1500. According to an early Timex Sinclair 2000 computer flyer, it would have been a cut-down Timex Sinclair 2068 with 16 KB of RAM. It had an added Kempston-compatible joystick interface and a monochrome high-resolution mode for 80-column text. History In contrast with the ZX Spectrum, which was the best-selling computer in Britain at the time, the T/S 2068 and T/S 1500 were considered failures. Timex Corporation withdrew from the U.S. home computer market in February 1984. Timex Portugal continued to manufacture and sell the TC 2048 in Portugal and Poland, where it was very successful, selling more than 10,000 units. An NTSC version was also sold in Chile. The TC 2048 went on sale in Poland in 1986, with 5,000 units available at the "Central Scouting Store" for a price of PLZ 106,000. This was the equivalent of about four average salaries (24,095 PLZ each), and slightly more than a ZX Spectrum (PLZ 70,000–80,000). Peripherals were also made available at the time of release. Production of the computer ended in 1989. Further developments The computer formed the basis of a proposal for an improved Spectrum-compatible machine, the ZX Spectrum SE. Based on the Timex TC 2048 and the ZX Spectrum 128, with Timex graphic modes and 280K RAM, it was proposed by Andrew Owen and Jarek Adamski in 2000. A prototype was created, and this configuration is supported by different emulators. Two modifications of the TC 2048 exist: the TC 2128 (by STAVI) and the TC 2144 (by Jarek Adamski). Both extend the RAM to 128K and upgrade the ULA to use four screen areas. Technical specifications CPU Zilog Z80A @ 3.50 MHz ROM 16 KB RAM 48 KB Display Improved ULA offering additional Extended Color, Dual Screen and High Resolution screen modes: Text: 32×24 characters (8×8 pixels, rendered in graphics mode) Graphics: 256×192 pixels, 15 colours (two simultaneous colours - "attributes" - per 8×8 pixels, causing attribute clash; see the address-layout sketch at the end of this article) Extended Color: 256×192 pixels, 15 colours with colour resolution of 32×192 (two simultaneous colours - "attributes" - per 1×8 pixels) Dual Screen: two 256×192 pixel screens can be placed in memory High Resolution: 512×192 mode with 2 colours (four palettes: Black & White, Blue & Yellow, Red & Cyan, Magenta & Green). Sound Beeper (1 channel, 10 octaves and 10+ semitones via internal speaker) [By separate purchase the Joystick/Sound Unit was available to enhance sound and provide a joystick port.]
I/O Z80 bus in/out Line audio in/out for external cassette tape storage RF television out Composite video monitor out Kempston Joystick input Storage External cassette tape recorder 1–8 external ZX Microdrives (using ZX Interface 1) Timex FDD (Floppy Disk Drive System Power Supply, Controller and Disk Drive in separate cases. 16K RAM, Timex Operating System (TOS)) Timex FDD3000 (Enhanced version (all in one case) of the Timex FDD but upgraded to 64K RAM & TOS with two Hitachi 3″ disk drives) See also Timex Sinclair Timex Sinclair 2068 References External links Timex Computer World Computer-related introductions in 1984 ZX Spectrum clones
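To make the standard (Spectrum-compatible) display layout above concrete: the 256×192 bitmap uses an interleaved address scheme, and a single attribute byte is shared by each 8×8 cell, which is what produces the attribute clash mentioned in the specification. The sketch below computes those addresses for the standard mode only; it illustrates the well-known ZX Spectrum layout and does not model the Timex-specific extended-colour, dual-screen or 512×192 modes.

```python
SCREEN_BASE = 0x4000   # start of the bitmap in the Z80 address space
ATTR_BASE   = 0x5800   # start of the 32x24 attribute table

def pixel_byte_address(x: int, y: int) -> int:
    """Address of the byte holding pixel (x, y); bit 7 - (x % 8) is the pixel."""
    assert 0 <= x < 256 and 0 <= y < 192
    return (SCREEN_BASE
            | ((y & 0xC0) << 5)    # which third of the screen
            | ((y & 0x07) << 8)    # pixel row within the character cell
            | ((y & 0x38) << 2)    # character row within that third
            | (x >> 3))            # character column

def attribute_address(x: int, y: int) -> int:
    """Address of the INK/PAPER/BRIGHT/FLASH byte shared by the 8x8 cell at (x, y)."""
    assert 0 <= x < 256 and 0 <= y < 192
    return ATTR_BASE + (y >> 3) * 32 + (x >> 3)

print(hex(pixel_byte_address(0, 0)))     # 0x4000
print(hex(pixel_byte_address(0, 1)))     # 0x4100 - the next pixel row is 256 bytes away
print(hex(attribute_address(255, 191)))  # 0x5aff - the last attribute byte
```

Because every pixel in an 8×8 cell shares the single byte returned by attribute_address, only two colours can coexist in that cell in the standard mode; the TC 2048's extended-colour mode relaxes this to one attribute per 8×1 row, as listed in the specification.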
46445121
https://en.wikipedia.org/wiki/Samsung
Samsung
The Samsung Group (or simply Samsung, stylized in its logo as SΛMSUNG) is a South Korean multinational manufacturing conglomerate headquartered in Samsung Town, Seoul, South Korea. It comprises numerous affiliated businesses, most of them united under the Samsung brand, and is the largest South Korean chaebol (business conglomerate). Samsung has the 8th-highest global brand value. Samsung was founded by Lee Byung-chul in 1938 as a trading company. Over the next three decades, the group diversified into areas including food processing, textiles, insurance, securities, and retail. Samsung entered the electronics industry in the late 1960s and the construction and shipbuilding industries in the mid-1970s; these areas would drive its subsequent growth. Following Lee's death in 1987, Samsung was separated into five business groups – Samsung Group, Shinsegae Group, CJ Group, Hansol Group and JoongAng Group. Notable Samsung industrial affiliates include Samsung Electronics (the world's largest information technology company, consumer electronics maker and chipmaker), Samsung Heavy Industries (the world's 2nd-largest shipbuilder), and Samsung Engineering and Samsung C&T Corporation (respectively the world's 13th- and 36th-largest construction companies). Other notable subsidiaries include Samsung Life Insurance (the world's 14th-largest life insurance company), Samsung Everland (operator of Everland Resort, the oldest theme park in South Korea) and Cheil Worldwide (the world's 15th-largest advertising agency). Etymology According to Samsung's founder, the meaning of the Korean hanja word Samsung (三星) is "three stars". The word "three" represents something "big, numerous and powerful", while "stars" means "everlasting" or "eternal", like stars in the sky. History 1938–1970 In 1938, in Japanese-ruled Korea, Lee Byung-chul (1910–1987), from a large landowning family in Uiryeong county, moved to nearby Daegu city and founded the Mitsuboshi Trading Company, or Samsung Sanghoe (주식회사 삼성상회). Samsung started out as a small trading company with forty employees located in Su-dong (now Ingyo-dong). It dealt in dried fish, locally grown groceries and noodles. The company prospered and Lee moved its head office to Seoul in 1947. When the Korean War broke out, he was forced to leave Seoul. He started a sugar refinery in Busan named Cheil Jedang. In 1954, Lee founded Cheil Mojik and built its plant in Chimsan-dong, Daegu; it was the largest woollen mill the country had ever had. Samsung diversified into many different areas, as Lee sought to establish Samsung as a leader in a wide range of industries. Samsung moved into lines of business such as insurance, securities, and retail. In 1947, Cho Hong-jai, the Hyosung group's founder, jointly invested in a new company called Samsung Mulsan Gongsa, or the Samsung Trading Corporation, with Samsung's founder Lee Byung-chul. The trading firm grew to become the present-day Samsung C&T Corporation. After a few years, Cho and Lee separated due to differences in management style. Cho wanted a 30 percent equity share. Samsung Group was then separated into Samsung Group and Hyosung Group, Hankook Tire and other businesses. In the late 1960s, Samsung Group entered the electronics industry. It formed several electronics-related divisions, such as Samsung Electronics Devices, Samsung Electro-Mechanics, Samsung Corning and Samsung Semiconductor & Telecommunications, and built a facility in Suwon. Its first product was a black-and-white television set.
1970–1990 In 1980, Samsung acquired the Gumi-based Hanguk Jeonja Tongsin and entered telecommunications hardware. Its early products were switchboards. The facility was developed into a telephone and fax manufacturing plant and became the center of Samsung's mobile phone manufacturing. They have produced over 800 million mobile phones to date. The company grouped these divisions together under Samsung Electronics in the 1980s. After founder Lee's death in 1987, Samsung Group was separated into five business groups—Samsung Group, Shinsegae Group, CJ Group, Hansol Group and the JoongAng Group. Shinsegae (discount store, department store) was originally part of Samsung Group and was separated in the 1990s from the Samsung Group along with CJ Group (food/chemicals/entertainment/logistics), Hansol Group (paper/telecom), and the JoongAng Group (media). Today these separated groups are independent and are not part of or connected to the Samsung Group. One Hansol Group representative said, "Only people ignorant of the laws governing the business world could believe something so absurd", adding, "When Hansol separated from the Samsung Group in 1991, it severed all payment guarantees and share-holding ties with Samsung affiliates." One Hansol Group source asserted, "Hansol, Shinsegae, and CJ have been under independent management since their respective separations from the Samsung Group". One Shinsegae department store executive director said, "Shinsegae has no payment guarantees associated with the Samsung Group". In the 1980s, Samsung Electronics began to invest heavily in research and development, investments that were pivotal in pushing the company to the forefront of the global electronics industry. In 1982, it built a television assembly plant in Portugal; in 1984, a plant in New York; in 1985, a plant in Tokyo; in 1987, a facility in England; and another facility in Austin, Texas, in 1996. As of 2012, Samsung has invested more than in the Austin facility, which operates under the name Samsung Austin Semiconductor. This makes the Austin location the largest foreign investment in Texas and one of the largest single foreign investments in the United States. In 1987, the United States International Trade Commission ruled that the Samsung Group of South Korea had unlawfully sold computer chips in the United States without licenses from the chip inventor, Texas Instruments Inc. The order required Samsung to pay a penalty to Texas Instruments within the coming weeks; otherwise, sales of all dynamic random access memory chips made by Samsung, and all products using the chips, would be banned in the United States. The ban would include circuit boards and equipment called single-in-line packages made by other companies that used D-RAMs made by Samsung with 64,000 or 256,000 characters of memory. It would also cover computers, facsimile machines and certain telecommunications equipment and printers bearing either of the Samsung chips. 1990–2000 Since 1990, Samsung has increasingly globalised its activities and electronics; in particular, its mobile phones and semiconductors have become its most important source of income. It was in this period that Samsung rose as an international corporation. Samsung's construction branch was awarded contracts to build one of the two Petronas Towers in Malaysia, Taipei 101 in Taiwan and the Burj Khalifa in the United Arab Emirates.
In 1993, Lee Kun-hee sold off ten of Samsung Group's subsidiaries, downsized the company, and merged other operations to concentrate on three industries: electronics, engineering and chemicals. In 1996, the Samsung Group reacquired the Sungkyunkwan University foundation. Samsung became the world's largest producer of memory chips in 1992 and is the world's second-largest chipmaker after Intel (see Worldwide Top 20 Semiconductor Market Share Ranking Year by Year). In 1995, it created its first liquid-crystal display screen. Ten years later, Samsung grew to be the world's largest manufacturer of liquid-crystal display panels. Sony, which had not invested in large-size TFT-LCDs, approached Samsung to cooperate, and in 2006 S-LCD was established as a joint venture between Samsung and Sony in order to provide a stable supply of LCD panels for both manufacturers. S-LCD was owned by Samsung (50% plus one share) and Sony (50% minus one share) and operated its factories and facilities in Tanjung, South Korea. On 26 December 2011, it was announced that Samsung had acquired Sony's stake in the joint venture. Compared to other major Korean companies, Samsung survived the 1997 Asian financial crisis relatively unharmed. However, Samsung Motor was sold to Renault at a significant loss. Renault Samsung is 80.1 percent owned by Renault and 19.9 percent owned by Samsung. Additionally, Samsung manufactured a range of aircraft from the 1980s to the 1990s. In 1999 this business became part of Korea Aerospace Industries (KAI), the result of a merger of the then three major domestic aerospace divisions: Samsung Aerospace, Daewoo Heavy Industries and Hyundai Space and Aircraft Company. However, Samsung still manufactures aircraft engines and gas turbines. 2000–present In 2000, Samsung opened a development center in Warsaw, Poland. Its work began with set-top-box technology before moving into digital TV and smartphones. The smartphone platform was developed with partners and officially launched with the original Samsung Solstice line of devices and other derivatives in 2008; it was later developed into the Samsung Galaxy line of devices, including the Note, Edge and other products. In 2007, former Samsung chief lawyer Kim Yong Chul claimed that he was involved in bribing and fabricating evidence on behalf of the group's chairman, Lee Kun-hee, and the company. Kim said that Samsung lawyers trained executives to serve as scapegoats in a "fabricated scenario" to protect Lee, even though those executives were not involved. Kim also told the media that he was "sidelined" by Samsung after he refused to pay a $3.3 million bribe to the U.S. Federal District Court judge presiding over a case in which two of the company's executives were found guilty on charges related to memory chip price-fixing. Kim revealed that the company had amassed a large amount of secret funds through bank accounts illegally opened under the names of up to 1,000 Samsung executives—under his own name, four accounts were opened to manage 5 billion won. In 2010, Samsung announced a ten-year growth strategy centered around five businesses. One of these businesses was to be focused on biopharmaceuticals, to which it committed substantial investment. In the first quarter of 2012, Samsung Electronics became the world's largest mobile phone maker by unit sales, overtaking Nokia, which had been the market leader since 1998.
On 24 August 2012, nine American jurors ruled that Samsung Electronics had to pay Apple $1.05 billion in damages for violating six of its patents on smartphone technology. The award was still less than the $2.5 billion requested by Apple. The decision also ruled that Apple did not violate five Samsung patents cited in the case. Samsung decried the decision saying that the move could harm innovation in the sector. It also followed a South Korean ruling stating that both companies were guilty of infringing on each other's intellectual property. In first trading after the ruling, Samsung shares on the Kospi index fell 7.7%, the largest fall since 24 October 2008, to 1,177,000 Korean won. Apple then sought to ban the sales of eight Samsung phones (Galaxy S 4G, Galaxy S2 AT&T, Galaxy S2 Skyrocket, Galaxy S2 T-Mobile, Galaxy S2 Epic 4G, Galaxy S Showcase, Droid Charge and Galaxy Prevail) in the United States which has been denied by the court. As of 2013, the Fair Trade Commission of Taiwan is investigating Samsung and its local Taiwanese advertising agency for false advertising. The case was commenced after the commission received complaints stating that the agency hired students to attack competitors of Samsung Electronics in online forums. Samsung Taiwan made an announcement on its Facebook page in which it stated that it had not interfered with any evaluation report and had stopped online marketing campaigns that constituted posting or responding to content in online forums. In 2015, Samsung has been granted more U.S. patents than any other company – including IBM, Google, Sony, Microsoft and Apple. The company received 7,679 utility patents through 11 December. The Galaxy Note 7 smartphone went on sale on 19 August 2016. However, in early September 2016, Samsung suspended sales of the phone and announced an informal recall. This occurred after some units of the phones had batteries with a defect that caused them to produce excessive heat, leading to fires and explosions. Samsung replaced the recalled units of the phones with a new version; however, it was later discovered that the new version of the Galaxy Note 7 also had the battery defect. Samsung recalled all Galaxy Note 7 smartphones worldwide on 10 October 2016, and permanently ended production of the phone the following day. In 2018, Samsung launched the world's largest mobile manufacturing facility in Noida, India, with guest of honour including Indian Prime Minister Narendra Modi. In the 2021 review of WIPO's annual World Intellectual Property Indicators Samsung was ranked as 3rd in the world for its 170 industrial design registrations published under the Hague System in 2020, slightly down from their 2nd place ranking in 2019 for 166 design registrations being published. Influence Samsung has a powerful influence on South Korea's economic development, politics, media and culture and has been a major driving force behind the "Miracle on the Han River". Its affiliate companies produce around a fifth of South Korea's total exports. Samsung's revenue was equal to 17% of South Korea's $1,082 billion GDP in 2013. "You can even say the Samsung chairman is more powerful than the President of South Korea. [South] Korean people have come to think of Samsung as invincible and above the law", said Woo Suk-hoon, host of a popular economics podcast in a Washington Post article headlined "In South Korea, the Republic of Samsung", published on 9 December 2012. 
Critics claimed that Samsung knocked out smaller businesses, limiting choices for South Korean consumers and sometimes colluded with fellow giants to fix prices while bullying those who investigate. Lee Jung-hee, a South Korean presidential candidate, said in a debate, "Samsung has the government in its hands. Samsung manages the legal world, the press, the academics and bureaucracy". Operations Samsung comprises around 80 companies. Its activities include construction, consumer electronics, financial services, shipbuilding and medical services. As of April 2011, the Samsung Group comprised 59 unlisted companies and 19 listed companies, all of which had their primary listing on the Korea Exchange. In FY 2009, Samsung reported consolidated revenues of 220 trillion KRW ($172.5 billion). In FY 2010, Samsung reported consolidated revenues of 280 trillion KRW ($258 billion), and profits of 30 trillion KRW ($27.6 billion) based upon a KRW-USD exchange rate of 1,084.5 KRW per USD, the spot rate . These amounts do not include the revenues from Samsung's subsidiaries based outside South Korea. Leadership Lee Byung-chul (1938–1966, 1968–1987) Lee Maeng-hee (1966–1968), Lee Byung-chul's first son Lee Kun-hee (1987–2008, 2010–2020), Lee Byung-chul's third son Lee Soo-bin (2008–2010) Affiliates Samsung Electronics is a multinational electronics and information technology company headquartered in Suwon and the flagship company of the Samsung Group. Its products include air conditioners, computers, digital television sets, active-matrix organic light-emitting diodes (AMOLEDs), mobile phones, display monitors, computer printers, refrigerators, semiconductors and telecommunications networking equipment. It was the world's largest mobile phone maker by unit sales in the first quarter of 2012, with a global market share of 25.4%. It was also the world's second-largest semiconductor maker by 2011 revenues (after Intel). Steco is the joint venture established between Samsung Electronics and Japan's Toray Industries in 1995. Toshiba Samsung Storage Technology Corporation (TSST) is a joint venture between Samsung Electronics and Toshiba of Japan which specialises in optical disc drive manufacturing. TSST was formed in 2004, and Toshiba owns 51 percent of its stock, while Samsung owns the remaining 49 percent. Samsung Electronics is listed on the Korea Exchange stock market (number 005930). Samsung Biologics is a biopharmaceutical division of Samsung, founded in 2011. It has contract development and manufacturing (CDMO) services including drug substance and product manufacturing and bioanalytical testing services. The company is headquartered in Incheon, South Korea and its existing three plants comprises the largest biologic contract manufacturing complex. It expanded its contract development service lab to San Francisco, U.S. Samsung Biologics is listed on the Korean Exchange stock market (number 207940). Samsung Bioepis is a biosimilar medicine producer and joint venture between Samsung Biologics (50 percent plus one share) and the U.S.-based Biogen Idec (50 percent). In 2014, Biogen Idec agreed to commercialize future anti-TNF biosimilar products in Europe through Samsung Bioepis. Samsung Engineering is a multinational construction company headquartered in Seoul. It was founded in January 1969. 
Its principal activity is the construction of oil refining plants; upstream oil and gas facilities; petrochemical plants and gas plants; steel making plants; power plants; water treatment facilities; and other infrastructure. It achieved total revenues of 9,298.2 billion won (US$8.06 billion) in 2011. Samsung Engineering is listed on the Korea Exchange stock market (number 02803450). Samsung Fire & Marine Insurance is a multinational general insurance company headquartered in Seoul. It was founded in January 1952 as Korea Anbo Fire and Marine Insurance and was renamed Samsung Fire & Marine Insurance in December 1993. Samsung Fire & Marine Insurance offers services including accident insurance, automobile insurance, casualty insurance, fire insurance, liability insurance, marine insurance, personal pensions and loans. As of March 2011 it had operations in 10 countries and 6.5 million customers. Samsung Fire & Marine Insurance had a total premium income of $11.7 billion in 2011 and total assets of $28.81 billion on 31 March 2011. It is the largest provider of general insurance in South Korea. Samsung Fire has been listed on the Korea Exchange stock market since 1975 (number 000810). Samsung Heavy Industries is a shipbuilding and engineering company headquartered in Seoul. It was founded in August 1974. Its principal products are bulk carriers, container vessels, crude oil tankers, cruisers, passenger ferries, material handling equipment steel and bridge structures. It achieved total revenues of 13,358.6 billion won in 2011 and is the world's second-largest shipbuilder by revenues (after Hyundai Heavy Industries). Samsung Heavy Industries is listed on the Korea Exchange stock market (number 010140). Samsung Life Insurance is a multinational life insurance company headquartered in Seoul. It was founded in March 1957 as Dongbang Life Insurance and became an affiliate of the Samsung Group in July 1963. Samsung Life's principal activity is the provision of individual life insurance and annuity products and services. As of December 2011 it had operations in seven countries, 8.08 million customers and 5,975 employees. Samsung Life had total sales of 22,717 billion won in 2011 and total assets of 161,072 billion won at 31 December 2011. It is the largest provider of life insurance in South Korea. Samsung Air China Life Insurance is a 50:50 joint venture between Samsung Life Insurance and China National Aviation Holding. It was established in Beijing in July 2005. Siam Samsung Life Insurance: Samsung Life Insurance holds a 37 percent stake while the Saha Group also has a 37.5 percent stake in the joint venture, with the remaining 25 percent owned by Thanachart Bank. Samsung Life Insurance is listed on the Korea Exchange stock market (number 032830). Samsung SDI is listed on the Korea Exchange stock-exchange (number 006400). On 5 December 2012, the European Union's antitrust regulator fined Samsung SDI and several other major companies for fixing prices of TV cathode-ray tubes in two cartels lasting nearly a decade. SDI also builds lithium-ion batteries for electric vehicles such as the BMW i3, and acquired Magna Steyr's battery plant near Graz in 2015. SDI began using the "21700" cell format in August 2015. Samsung plans to convert its factory in Göd, Hungary to supply 50,000 cars per year. Samsung SDI also produced CRTs and VFD displays until 2012. Samsung SDI uses lithium-ion technology for its phone and portable computer batteries. 
Samsung SDS is a multinational IT service company headquartered in Seoul. It was founded in March 1985. Its principal activity is providing IT systems and services (ERP, IT infrastructure, IT consulting, IT outsourcing and data centers). Samsung SDS is Korea's largest IT service company. It achieved total revenues of 6,105.9 billion won (US$5.71 billion) in 2012. Samsung C&T Corporation is listed on the Korea Exchange stock market (000830). Samsung Electro-Mechanics, established in 1973 as a manufacturer of key electronic components, is headquartered in Suwon, Gyeonggi-do, South Korea. It is listed on the Korea Exchange stock market (number 009150). Samsung Advanced Institute of Technology (SAIT), established in 1987, is headquartered in Suwon and operates research labs around the world. Ace Digitech is listed on the Korea Exchange stock market (number 036550). Cheil Industries is listed on the Korea Exchange stock market (number 001300). Cheil Worldwide is listed on the Korea Exchange stock market (number 030000). Credu is listed on the Korea Exchange stock market (number 067280). Imarket Korea is listed on the Korea Exchange stock market (number 122900). Samsung Card is listed on the Korea Exchange stock market (number 029780). Hotel Shilla (also known as "The Shilla") opened in March 1979, in accordance with the intention of the late Lee Byung-chul, the founder of the Samsung Group. Shilla Hotels and Resorts is listed on the Korea Exchange stock market (number 008770). Samsung Everland covers the three main sectors of Environment & Asset, Food Culture and Resort. The Samsung Medical Center was founded on 9 November 1994, under the philosophy of "contributing to improving the nation's health through the best medical service, advanced medical research and development of outstanding medical personnel". The Samsung Medical Center consists of a hospital and a cancer center, which is the largest in Asia. The hospital is located in an intelligent building with floor space of more than 200,000 square meters and 20 floors above ground and 5 floors underground, housing 40 departments, 10 specialist centers, 120 special clinics and 1,306 beds. The 655-bed Cancer Center has 11 floors above ground and 8 floors underground, with floor space of over 100,000 square meters. SMC is a tertiary hospital staffed by approximately 7,400 people, including over 1,200 doctors and 2,300 nurses. Since its foundation in the 1990s, the Samsung Medical Center has successfully incorporated and developed an advanced model with the motto of becoming a "patient-centered hospital", a new concept in Korea. Samsung donates around US$100 million per annum to the Samsung Medical Center. It incorporates Samsung Seoul Hospital, Kangbuk Samsung Hospital, Samsung Changwon Hospital, Samsung Cancer Center and Samsung Life Sciences Research Center. In 2010, the Samsung Medical Center and pharmaceutical multinational Pfizer agreed to collaborate on research to identify the genomic mechanisms responsible for clinical outcomes in hepatocellular carcinoma. Divested Samsung Techwin was listed on the Korea Exchange stock exchange (number 012450), with its principal activities being the development and manufacture of surveillance systems (including security cameras), aeronautics, optoelectronics, automation and weapons technology. Its sale to Hanwha Group was announced in December 2014, and the takeover was completed in June 2015. It was later renamed Hanwha Techwin. Samsung Thales Co., Ltd. (until 2001 known as Samsung Thomson-CSF Co., Ltd.)
was a joint venture between Samsung Techwin and the France-based aerospace and defence company Thales. It was established in 1978 and is based in Seoul. Samsung's involvement was passed on to Hanwha Group as part of the Techwin transaction. Samsung General Chemicals was sold to Hanwha. Another chemical division was sold to Lotte Corporation in 2016. Samsung Total was a 50/50 joint venture between Samsung and the France-based oil group Total S.A. (more specifically, Samsung General Chemicals and Total Petrochemicals). Samsung's stake was inherited by Hanwha Group in its acquisition of Samsung General Chemicals. Defunct In 1998, Samsung created a U.S. joint venture with Compaq, called Alpha Processor Inc. (API), to help it enter the high-end processor market. The venture was also aimed at expanding Samsung's non-memory chip business by fabricating Alpha processors. At the time, Samsung and Compaq invested $500 million in Alpha Processor. GE Samsung Lighting was a joint venture between Samsung and the GE Lighting subsidiary of General Electric. The venture was established in 1998 and was broken up in 2009. Global Steel Exchange was a joint venture formed in 2000 between Samsung, the U.S.-based Cargill, the Switzerland-based Duferco Group, and the Luxembourg-based Tradearbed (now part of ArcelorMittal), to handle their online buying and selling of steel. Samtron was a subsidiary of Samsung until 1999, when it became independent. After that, it continued to make computer monitors and plasma displays until 2003, when the Samtron brand was phased out in favour of the Samsung name; in 2003 its website began redirecting to Samsung. S-LCD Corporation was a joint venture between Samsung Electronics (50% plus one share) and the Japan-based Sony Corporation (50% minus one share) which was established in April 2004. On 26 December 2011, Samsung Electronics announced that it would acquire all of Sony's shares in the venture. Joint ventures Samsung Fine Chemicals is listed on the Korea Exchange stock exchange (number 004000). Samsung Machine Tools of America is a national distributor of machines in the United States. Samsung GM Machine Tools is the head office in China; it is a legally incorporated company of SMEC. Samsung Securities is listed on the Korea Exchange stock exchange (number 016360). S-1 was founded as Korea's first specialized security business in 1997 and has maintained its position at the top of the industry with a consistent willingness to take on challenges. S1 Corporation is listed on the Korea Exchange stock exchange (number 012750.KS). State-run Korea Agro-Fisheries Trade Corp. set up the venture, aT Grain Co., in Chicago, with three other South Korean companies. Korea Agro-Fisheries owns 55 percent of aT Grain, while Samsung C&T Corp, Hanjin Transportation Co. and STX Corporation each hold 15 percent. Brooks Automation Asia Co., Ltd. is a joint venture between Brooks Automation (70%) and Samsung (30%) which was established in 1999. The venture locally manufactures and configures vacuum wafer handling platforms and 300mm Front-Opening Unified Pod (FOUP) load port modules, and designs, manufactures and configures atmospheric loading systems for flat panel displays. The company POSS – SLPC s.r.o. was founded in 2007 as a subsidiary of Samsung C & T Corporation, Samsung C & T Deutschland and the company POSCO. Samsung BP Chemicals, based in Ulsan, is a 49:51 joint venture between Samsung and the UK-based BP, which was established in 1989 to produce and supply high-value-added chemical products.
Its products are used in rechargeable batteries and liquid crystal displays. Samsung Corning Precision Glass is a joint venture between Samsung and Corning, which was established in 1973 to manufacture and market cathode ray tube glass for black and white televisions. The company's first LCD glass substrate manufacturing facility opened in Gumi, South Korea, in 1996. Samsung Sumitomo LED Materials is a Korea-based joint venture between Samsung LED Co., Ltd., an LED maker based in Suwon, South Korea, and the Japan-based Sumitomo Chemical. The joint venture carries out research and development, manufacturing and sales of sapphire substrates for LEDs. SD Flex Co., Ltd. was founded in October 2004 as a joint venture corporation by Samsung and DuPont, one of the world's largest chemical companies. Sermatech Korea owns 51% of its stock, while Samsung owns the remaining 49%; the venture with the U.S. firm Sermatech International is a business specializing in aircraft construction processes such as special welding and brazing. Siltronic Samsung Wafer Pte. Ltd, the joint venture between Samsung and Siltronic, a wholly owned Wacker Chemie subsidiary, was officially opened in Singapore in June 2008. SMP Ltd. is a joint venture between Samsung Fine Chemicals and MEMC. In 2011, MEMC Electronic Materials Inc. and an affiliate of Korean conglomerate Samsung formed a joint venture to build a polysilicon plant. Stemco is a joint venture established between Samsung Electro-Mechanics and Toray Industries in 1995. SB LiMotive is a 50:50 joint company of Robert Bosch GmbH (commonly known as Bosch) and Samsung SDI founded in June 2008. The joint venture develops and manufactures lithium-ion batteries for use in hybrid, plug-in hybrid and electric vehicles. Partially owned companies Samsung Heavy Industries owns 10% of the Brazilian shipbuilder Atlântico Sul, whose Atlântico Sul Shipyard is the largest shipyard in South America. The Joao Candido, Brazil's largest ship, was built by Atlântico Sul with technology licensed by Samsung Heavy Industries. The companies have a technical assistance agreement through which industrial design, vessel engineering and other know-how is being transferred to Atlântico Sul. Samsung Life Insurance currently holds a 7.4% stake in the South Korean banking company DGB Financial Group, making it the largest shareholder. DGB Financial Group is a Korea-based company that specialises in banking. The company is divided into six segments of operation, and each segment's primary source of funds comes from general public deposits. Samsung acquired 7.4% of Gorilla Glass maker Corning, signing a long-term supply deal. Corning is an American company experienced in glass chemistry, ceramics science and optical physics, as well as the related manufacturing and engineering, creating goods that support industries and improve living standards. Corning is committed to long-term research and development. Samsung Heavy Industries currently holds a 14.1% stake in Doosan Engine, making it the second-largest shareholder. Doosan Group is a South Korean company founded in 1896 by Park Seung-jik. The company specializes in heavy industries and construction such as power plants and desalination plants. Samsung Techwin currently holds a 10% stake in Korea Aerospace Industries (KAI). Other major shareholders include the state-owned Korea Finance Corporation (26.75%), Hyundai Motor (10%) and Doosan (10%). Korea Aerospace Industries is a South Korean defense and aerospace company founded in 1999.
Korea Aerospace Industries specializes in developing aerospace products, satellites, and aircraft. In April 2021, Korea Aerospace Industries announced that it was going to invest 1 trillion won ($880 million) over five consecutive years to expand its space business. MEMC also had a joint venture with Samsung Electronics Company, Ltd.: in 1990, MEMC entered into a joint venture agreement to construct a silicon plant in Korea. MEMC Korea Company is a Korean manufacturer and distributor of electronic components, ingots, silicon wafers, and other products. Samsung bought a 10% stake in rival phone maker Pantech. Pantech is a South Korean company founded in 1991. Pantech manufactures mobile phones and tablets. Pantech serves many markets, including South Korea, the United States, Japan, Europe, Vietnam, and China. Samsung currently owns 4.19% of Rambus Incorporated. Rambus Incorporated is an American technology company founded in 1990. The company specializes in licensing chip interface technologies and architectures used in digital electronic products. Samsung currently owns 19.9% of the automobile manufacturer Renault Samsung Motors. Renault Samsung Motors is a South Korean automotive company founded in 1994. The company began car-related operations in 1998 and has since expanded into a range of car and electric car models. Samsung currently owns 9.6% of Seagate Technology, making it the second-largest shareholder. Under a shareholder agreement, Samsung has the right to nominate an executive to Seagate's board of directors. Seagate Technology is an American company that works in the computer storage industry. Seagate Technology was founded in 1979. The company is a major supplier of hard disks for microcomputers. Samsung owns 3% of Sharp Corporation, a rival company. Sharp Corporation is a Japanese company founded in 1912. The company specializes in designing and manufacturing electronic products, such as phones, microwave ovens, and air conditioners. Samsung Engineering holds a 10% stake in Sungjin Geotec, an offshore oil drilling company that is a subsidiary of POSCO. Sungjin Geotec is a South Korean company founded in 1989. The company specializes in manufacturing and developing offshore facilities, oil sand modules, petrochemical plant components, and desalination plants. Taylor Energy is an independent American oil and gas company, founded in 1979 and based in New Orleans, Louisiana, that drills in the Gulf of Mexico. Samsung Oil & Gas USA Corp., a subsidiary of Samsung, currently owns 20% of Taylor Energy. Acquisitions and attempted acquisitions Samsung has made the following acquisitions and attempted acquisitions: Samsung Techwin acquired the German camera manufacturer Rollei in 1995. Samsung (Rollei) used its optic expertise on the crystals of a new line of 100% Swiss-made watches. But on 11 March 1995, the Cologne District Court prohibited the advertising and sale of Rollei watches on German territory. In 1999, Rollei management bought out the company. Samsung lost a chance to revive its failed bid to take over Dutch aircraft maker Fokker when other airplane makers rejected its offer to form a consortium. The three proposed partners, Hyundai, Hanjin and Daewoo, notified the South Korean government that they would not join Samsung Aerospace Industries.
Samsung bought a 40% stake in AST Research in 1995, in a failed attempt to break into the North American computer market. Samsung was forced to close the California-based computer maker following mass resignations of research staff and a string of losses. In 1995, Samsung's textile department invested in FUBU, an American hip hop apparel company, after the founder placed an advertisement asking for funding in The New York Times newspaper. Samsung Securities Co., Ltd. and City of London-based N M Rothschild & Sons (more commonly known simply as Rothschild) agreed to form a strategic alliance in the investment banking business. The two parties will jointly work on cross-border mergers and acquisitions. In December 2010, Samsung Electronics bought MEDISON Co., a South Korean medical-equipment company, the first step in a long-discussed plan to diversify from consumer electronics. In July 2011, Samsung announced that it had acquired spin-transfer torque random access memory (MRAM) vendor Grandis Inc. Grandis will become a part of Samsung's R&D operations and will focus on the development of next-generation random-access memory. On 26 December 2011 the board of Samsung Electronics approved a plan to buy Sony's entire stake in their 2004 joint liquid crystal display (LCD) venture for 1.08 trillion won ($938.97 million). On 9 May 2012, mSpot announced that it had been acquired by Samsung Electronics with the intention of launching a cloud-based music service. The succeeding service was Samsung Music Hub. In December 2012, Samsung announced that it had acquired the privately held storage software vendor NVELO, Inc., based in Santa Clara, California. NVELO will become part of Samsung's R&D operations, and will focus on software for intelligently managing and optimizing next-generation Samsung SSD storage subsystems for consumer and enterprise computing platforms. In January 2013, Samsung announced that it had acquired medical imaging company NeuroLogica, part of the multinational conglomerate's plans to build a leading medical technology business. Terms of the deal were not disclosed. On 14 August 2014, Samsung acquired SmartThings, a fast-growing home automation startup. The company did not release the acquisition price, but TechCrunch reported a $200 million price tag when it first caught word of the deal in July 2014. On 19 August 2014, Samsung said it had acquired U.S. air conditioner distributor Quietside LLC as part of its push to strengthen its "smart home" business. A Samsung Electronics spokesman said the South Korean company acquired 100 percent of Quietside. On 3 November 2014, Samsung announced it had acquired Proximal Data, Inc., a San Diego, California-based pioneer of server-side caching software with I/O intelligence that works within virtualized systems. On 18 February 2015, Samsung acquired the U.S.-based mobile payments firm LoopPay, allowing Samsung to enter smartphone payment transactions. On 5 March 2015, Samsung acquired YESCO Electronics, a small U.S.-based manufacturer of light-emitting diode displays which focuses on making digital billboards and message signs. On 5 October 2016, Samsung announced it had acquired Viv, a company working on artificial intelligence. On 15 November 2016, Samsung Canada announced it had acquired Rich Communications Services, a company working on a new technology for text messaging.
Major clients Major clients of Samsung include: Royal Dutch Shell Samsung Heavy Industries will be the sole provider of liquefied natural gas (LNG) storage facilities worth up to US$50 billion to Royal Dutch Shell for the next 15 years. Shell unveiled plans to build the world's first floating liquefied natural gas (FLNG) platform. In October 2012, at Samsung Heavy Industries' shipyard on Geoje Island in South Korea, work started on a "ship" that, when finished and fully loaded, will weigh 600,000 tonnes, making it the world's biggest "ship", six times larger than the largest U.S. aircraft carrier. United Arab Emirates government A consortium of South Korean firms, including Samsung, Korea Electric Power Corporation and Hyundai, won a deal worth $40 billion to build nuclear power plants in the United Arab Emirates. Ontario government The government of the Canadian province of Ontario signed off on one of the world's largest renewable energy projects, a deal worth $6.6 billion for additional new wind and solar energy capacity. Under the agreement, a consortium led by Samsung and the Korea Electric Power Corporation will manage the development of new wind farms and solar capacity, while also building a manufacturing supply chain in the province. Corporate image The basic color in the logo is blue, which Samsung has employed for years, supposedly symbolizing stability, reliability and corporate social responsibility. Audio logo Samsung has an audio logo, which consists of the notes E♭, A♭, D♭, E♭; after the initial E♭ tone it is up a perfect fourth to A♭, down a perfect fifth to D♭, then up a major second to return to the initial E♭ tone. The audio logo was produced by Musikvergnuegen and written by Walter Werzowa. This audio logo has been discontinued as of 2015. Font In 2014, Samsung unveiled its Samsung Sharp Sans font. In July 2016, Samsung unveiled its SamsungOne font, a typeface intended to give a consistent and universal visual identity to the wide range of Samsung products. SamsungOne was designed to be used across Samsung's diverse device portfolio, with a focus on legibility for everything from smaller devices like smartphones to larger connected TVs or refrigerators, as well as Samsung marketing and advertisements. The font family supports 400 different languages through over 25,000 characters. Sponsorships Samsung Electronics spent an estimated $14 billion (U.S.) on advertising and marketing in 2013. At 5.4% of annual revenue, this was a larger proportion than that of any of the world's top-20 companies by sales (Apple spent 0.6% and General Motors spent 3.5%). Samsung became the world's biggest advertiser in 2012, spending $4.3 billion, compared to Apple's $1 billion. Samsung's global brand value of $39.6 billion is less than half that of Apple. Controversies Labor abuses Samsung was the subject of several complaints about child labor in its supply chain from 2012 to 2015. In July 2014, Samsung cut its contract with Shinyang Electronics after it received a complaint about the company violating child labor laws. Samsung said that its investigation turned up evidence of Shinyang using underage workers and that it severed relations immediately per its "zero tolerance" policy for child labor violations. One of Samsung's Chinese supplier factories, HEG, was criticized for using underage workers by China Labor Watch (CLW) in July 2014. HEG denied the charges and has sued China Labor Watch.
CLW issued a statement in August 2014 claiming that HEG employed over ten children under the age of 16 at a factory in Huizhou, Guangdong. The group said the youngest child identified was 14 years old. Samsung said that it conducted an onsite investigation of the production line that included one-on-one interviews but found no evidence of child labor being used. CLW responded that HEG had already dismissed the workers described in its statement before Samsung's investigators arrived. CLW also claimed that HEG violated overtime rules for adult workers. CLW said a female college student was only paid her standard wage despite working four hours of overtime per day, even though Chinese law requires overtime pay at 1.5 to 2.0 times standard wages. Union busting activity Samsung has a no-union policy and has been engaged in union-busting activities around the world. Samsung has also been sued by a union for stealing the corpse of a dead worker. On 6 May 2020, Samsung vice chairman Lee Jae-yong apologized for the union-busting scandals. 2007 Slush Fund scandal Kim Yong-chul, the former head of the legal department at Samsung's Restructuring Office, and the Catholic Priests Association for Justice uncovered Lee Kun-hee's slush fund on 29 October 2007. He presented a list of 30 artworks that the family of Samsung Group Chairman Lee Kun-hee purchased with some of the slush funds, which were found in a Samsung warehouse south of Seoul, along with documents about bribes to prosecutors, judges, lawmakers and tax collectors paid through thousands of bank accounts opened under borrowed names. The court sentenced Lee Kun-hee to three years' imprisonment, suspended for five years, and fined him 11 billion won. But on 29 December 2009, the South Korean president Lee Myung-bak specially pardoned Lee, stating that the intent of the pardon was to allow Lee to remain on the International Olympic Committee. It is the only individual amnesty to have occurred in South Korean history. Kim Yong-chul published a book, 'Thinking about Samsung', in 2010. In it he wrote detailed accounts of Samsung's behavior and how the company lobbied governmental authorities, including court officials, prosecutors and national tax service officials, to facilitate the transfer of Samsung's management rights to Lee Jae-yong. Lee Kun-hee's prostitution scandal In July 2016, the investigative journal KCIJ-Newstapa released a video concerning Samsung chairman Lee Kun-hee's alleged years-long employment of female sex workers. On 12 April 2018, the Supreme Court of South Korea sentenced the former general manager of CJ CheilJedang, who filmed the prostitution video, to four years and six months in prison for blackmail and intimidation. Supporting far-right groups The investigative team of Special Prosecutors of the 2016 South Korean political scandal announced that the Blue House received money from South Korea's four largest chaebols (Samsung, Hyundai Motor Group, SK Group and LG Group) to fund pro-government demonstrations by conservative and far-right organizations such as the Korean Parent Federation (KPF) and the Moms Brigade. Price fixing On 19 October 2011, Samsung companies were fined €145,727,000 for being part of a price cartel of ten companies for DRAMs which lasted from 1 July 1998 to 15 June 2002. The companies received, like most of the other members of the cartel, a 10% reduction for acknowledging the facts to investigators.
Samsung had to pay 90% of their share of the settlement, but Micron avoided payment as a result of having initially revealed the case to investigators. In Canada, during 1999, some DRAM microchip manufacturers conspired to fix prices; Samsung was among the accused. The price fixing was investigated in 2002. A recession started to occur that year, and the price fixing ended; however, in 2014, the Canadian government reopened the case and investigated quietly. Sufficient evidence was found and presented to Samsung and two other manufacturers during a class action lawsuit hearing. The companies agreed to a $120 million settlement, with $40 million as a fine and $80 million to be paid back to Canadians who purchased a computer, printer, MP3 player, gaming console or camera from April 1999 to June 2002. References External links 1938 establishments in Korea Companies based in Seoul Conglomerate companies established in 1938 Conglomerate companies of South Korea Electronics companies established in 1938 Electronics companies of South Korea Holding companies of South Korea Lens manufacturers Manufacturing companies of South Korea Mobile phone manufacturers Multinational companies headquartered in South Korea Retail companies established in 1938 Retail companies of South Korea Technology companies established in 1938 Technology companies of South Korea Capacitor manufacturers
29383692
https://en.wikipedia.org/wiki/Unity%20%28user%20interface%29
Unity (user interface)
Unity is a graphical shell for the GNOME desktop environment originally developed by Canonical Ltd. for its Ubuntu operating system, and now being developed by the Unity7 Maintainers (Unity7) and UBports (Unity8/Lomiri). Unity debuted in the netbook edition of Ubuntu 10.10. It was initially designed to make more efficient use of space given the limited screen size of netbooks, including, for example, a vertical application switcher called the launcher, and a space-saving horizontal multipurpose top menu bar. Unity was part of the Ayatana project, an initiative with the stated intention of improving the user experience within Ubuntu. Unlike GNOME, KDE Software Compilation, Xfce, or LXDE, Unity is not a collection of applications. It is designed to use existing programs. On 5 April 2017, Mark Shuttleworth announced that Canonical's work on Unity would end. Ubuntu 18.04 LTS, a year away from release at the time, would abandon the Unity desktop and employ the GNOME 3 desktop instead. The Unity7 Maintainers took over Unity7 development, while UBports founder Marius Gripsgård announced that the organization would continue Unity8 development. On 27 February 2020, UBports announced that it had renamed Unity8 to Lomiri. In May 2020, a new unofficial Ubuntu version, Ubuntu Unity, was first released; it uses the Unity7 desktop. Ubuntu Unity and the Unity7 Maintainers have started working on the successor of Unity7, UnityX. Features The Unity user interface consists of several components: Top menu bar: a multipurpose top bar, saving space, and containing the menu bar of the active application; the title bar of the main window of the active application, including the maximize, minimize and exit buttons; the session menu, including the global system settings, logout, and shut down; and the diverse global notification indicators, including the time, weather, and the state of the underlying system. Launcher: a taskbar. Multiple instances of an application are grouped under the same icon, with an indicator showing how many instances are open. The user has a choice whether or not to lock an application to the launcher. If it is not locked, an application may be started using the Dash or via a separately installed menu. Quicklist: the accessible menu of launcher items. Dash: a desktop search utility that enables searching for information both locally (e.g. installed applications, recent files, or bookmarks) and online (e.g. Twitter or Google Docs). It displays previews of the results. Head-up display (HUD): allows hotkey searching for top menu bar items from the keyboard, without the need for using the mouse, by pressing and releasing the Alt key. Indicators: a notification area containing the clock, network status, battery status, and audio volume controls. Dash Dash is a desktop search utility with preview ability. It enables searching for applications and files. Dash supports search plug-ins, known as Scopes (formerly Lenses). Out of the box, it can query Google Docs, Ubuntu One Music Store, YouTube, Amazon, and social networks (for example, Twitter, Facebook, and Google+). Starting with Ubuntu 13.10, online search queries are sent to a Canonical web service which determines the type of query and directs them to the appropriate third-party web service. Pornographic results are filtered out. None of Ubuntu's official derivatives (Kubuntu, Xubuntu, Lubuntu, or Ubuntu GNOME) include this feature or any variation of it. One of the new features of Unity in Ubuntu 12.10 is the shopping lens.
As of October 2012, it sends (through a secure HTTPS connection) the user's queries from the home lens to productsearch.ubuntu.com, which then polls Amazon.com to find relevant products; Amazon then sends product images directly to the user's computer (initially, through unsecured HTTP). If the user clicks on one of these results and then buys something, Canonical receives a small commission on the sale. Many reviewers criticized it: as the home lens is the natural means to search for content on the local machine, reviewers were concerned about the disclosure of queries that were intended to be local, creating a privacy problem. The feature is active by default (instead of opt-in) and many users could be unaware of it. On 23 September 2012, Mark Shuttleworth defended the feature. He posted "the Home Lens of the Dash should let you find *anything* anywhere" and that the shopping lens is a step in that direction. He argued that anonymity is preserved because Canonical servers mediate the communication between Unity and Amazon and users could trust Ubuntu. Ubuntu Community Manager Jono Bacon posted "These features are neatly and unobtrusively integrated into the dash, and they not only provide a more useful and comprehensive dash in giving you visibility on this content, but it also generates revenue to help continue to grow and improve Ubuntu." Steven J. Vaughan-Nichols from ZDNet said the feature does not bother him and wrote "If they can make some users happy and some revenue for the company at the same time, that's fine by me." Ted Samson at InfoWorld reported the responses from Shuttleworth and Bacon, but he still criticized the feature. On 29 October 2012, the Electronic Frontier Foundation criticized the feature. It argued that since product images were (as of October 2012) returned via insecure HTTP, a passive eavesdropper, such as someone on the same wireless network, could get a good idea of the queries. Also, Amazon could correlate the queries with IP addresses. It recommended Ubuntu developers make the feature opt-in and make Ubuntu's privacy settings more fine-grained. It noted that the Dash can be stopped from searching the Internet by switching off "Include online search results" in Ubuntu's privacy settings. On 7 December 2012, Richard Stallman claimed that Ubuntu contains spyware and should not be used by free software supporters. Jono Bacon rebuked him; he said that Ubuntu had responded and implemented many of the requirements the community found important. Since September 2013, images are anonymized before being sent to the user's computer. A legal notice in the Dash informs users of the sharing of their data. It states that unless the user has opted out, by turning the searches off, their queries and IP address will be sent to productsearch.ubuntu.com and "selected third parties" for online search results. Ubuntu's Third Party Privacy Policies page lists all of the third parties that may receive users' queries and IP addresses, and states: "For information on how our selected third parties may use your information, please see their privacy policies." Soon after the feature was introduced, doubts emerged about its conformance with the European Data Protection Directive. By late 2013, these doubts formed the grounds for a formal complaint about the shopping lens filed with the Information Commissioner's Office (ICO), the UK data privacy office.
Almost one year later the ICO ruled in favour of Canonical, considering that the various improvements introduced to the feature in the meantime rendered it compliant with the Data Protection Directive. However, the ruling also made clear that at the time of its introduction the feature was not legal, among other reasons because it lacked a privacy policy statement. In March 2014, Michael Hall, speaking for Canonical Ltd, indicated that in Unity 8 users would have to opt in for each search, which would be conducted by opening a special scope and then choosing where to search. These changes would address all criticisms leveled at Canonical and Unity in the past. As of April 2016, with the release of Ubuntu 16.04 LTS, the setting is off by default. Variants Unity for Ubuntu TV Ubuntu TV, running a Unity variant, was introduced at the 2012 Consumer Electronics Show. Created for smart TVs, Ubuntu TV provides access to popular Internet services and can stream content to mobile devices running Android, iOS and Ubuntu. Unity for Ubuntu Touch On 2 January 2013, Canonical announced a smartphone variant of Unity running on the Mir display server. Unity 2D Initially Canonical maintained two discrete versions of Unity, which were visually almost indistinguishable but technically different. Unity is written as a plugin for Compiz and uses an uncommon OpenGL toolkit called Nux. Being a plugin for Compiz gives Unity GPU-accelerated performance on compatible systems. It is written in the programming languages C++ and Vala. Unity 2D was a set of individual applications developed for environments that Compiz does not run on, such as when the graphics card does not support OpenGL. They were written in the GUI-building language QML from the widespread Qt framework. By default Unity 2D used the Metacity window manager but could also use accelerated window managers like Compiz or KWin. In Ubuntu 11.10, Unity 2D used Metacity's XRender-based compositor to achieve transparency effects. Starting with Ubuntu 11.10, Unity 2D replaced the classic GNOME Panel as the fall-back for users whose hardware could not run the Compiz version of Unity. Unity 2D was discontinued for the release of Ubuntu 12.10 in October 2012, as the 3D version became more capable of running on lower-powered hardware. Availability As Unity and the supporting Ayatana projects are developed primarily for Ubuntu, Ubuntu was the first to offer new versions. Outside of Ubuntu, other Linux distributors have tried to pick up Ayatana, with varying success. The Ayatana components require modification of other applications, which increases the complexity for adoption by others. Arch Linux offers many Ayatana components, including Unity and Unity 2D, via an unofficial repository or through the AUR. Fedora does not offer Unity in its default repositories because Unity requires unsupported patches to GTK. However, Unity 6 has been ported to Fedora 17 and can be installed through a branch in the openSUSE repositories where the patches are applied. Newer Fedora and Unity versions are not supported. Frugalware had adopted Ayatana, including Unity and Unity 2D, as part of the development branch for an upcoming Frugalware release, but the project is no longer maintained. openSUSE offers many Ayatana components for GNOME. After the packager abandoned the project because of problems with the then-current version of Compiz, new developers picked up the task and provide packages for openSUSE 12.2 (along with versions for Arch Linux and Fedora 17).
Newer openSUSE and Unity versions are not supported. Manjaro has a Unity version of its distribution. Ubuntu Unity uses the Unity 7 desktop. Development Ubuntu originally used the full GNOME desktop environment; Ubuntu founder Mark Shuttleworth cited philosophical differences with the GNOME team over the user experience to explain why Ubuntu would use Unity as the default user interface instead of GNOME Shell, beginning April 2011, with Ubuntu 11.04 (Natty Narwhal). In November 2010, Ubuntu Community Manager Jono Bacon explained that Ubuntu would continue to ship the GNOME stack, GNOME applications, and optimize Ubuntu for GNOME. The only difference, he wrote, would be that Unity is a different shell for GNOME. Canonical announced it had engineered Unity for desktop computers as well and would make Unity the default shell for Ubuntu in version 11.04. GNOME Shell was not included in Ubuntu 11.04 Natty Narwhal because work on it was not completed at the time 11.04 was frozen, but it was available from a PPA, and is available in Ubuntu 11.10 and later releases through the official repositories. In November 2010, Mark Shuttleworth announced the intention to eventually run Unity on Wayland instead of the currently used X Window System, although this plan has since been dropped, replacing Wayland with Mir for Unity 8. In December 2010, some users requested that the Unity launcher (or dock) be movable from the left to other sides of the screen, but Mark Shuttleworth stated in reply, "I'm afraid that won't work with our broader design goals, so we won't implement that. We want the launcher always close to the Ubuntu button." However, with Ubuntu 11.10, the Ubuntu button was moved into the launcher. A third-party plugin that moved Unity 3D's launcher to the bottom was available. An option to move the launcher to the bottom of the screen was officially implemented in Ubuntu 16.04. The Unity shell interface developers use a toolkit called Nux instead of Clutter. Unity is a plugin of the Compiz window manager, which Canonical states is faster than Mutter, the window manager for which GNOME Shell is a plugin. On 14 January 2011, Canonical also released a technical preview of a "2D" version of Unity based on Qt and written in QML. Unity-2D was not shipped on the Ubuntu 11.04 CD; instead, the classic GNOME desktop was the fall-back for hardware that could not run Unity. In March 2011, public indications emerged of friction between Canonical (and its development of Unity) and the GNOME developers. As part of Unity development, Ubuntu developers had submitted API code for inclusion in GNOME as an external dependency. According to Dave Neary, "... an external dependency is a non-GNOME module which is a dependency of a package contained in one of the GNOME module sets," and the reasons why libappindicator was not accepted as an external dependency are that "... it does not fit that definition," it has "... duplicate functionality with libnotify," (the current GNOME Shell default) and its CLA does not meet current GNOME policy. Mark Shuttleworth responded. In April 2011, Mark Shuttleworth announced that Ubuntu 11.10 Oneiric Ocelot would not include the classic GNOME desktop as a fall-back to Unity, unlike Ubuntu 11.04 Natty Narwhal. Instead, Ubuntu 11.10 used the Qt-based Unity 2D for users whose hardware cannot support the 3D version. However, the classic GNOME desktop (GNOME Panel) can be installed separately in Ubuntu 11.10 and later versions through gnome-panel, a package in the Ubuntu repositories.
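The user-facing switches mentioned above, the "Include online search results" privacy toggle, the launcher-position option added in Ubuntu 16.04, and the gnome-panel fall-back, map onto ordinary command-line tooling rather than any Unity-specific API. A minimal sketch follows; it assumes the commonly documented gsettings schemas com.canonical.Unity.Lenses (key remote-content-search) and com.canonical.Unity.Launcher (key launcher-position), and the gnome-panel package name from the Ubuntu repositories. None of these identifiers are stated in this article, so they should be verified against the target release before use.

```python
#!/usr/bin/env python3
"""Thin wrappers around the stock CLI tools for three Unity-related tweaks.

Assumed (not confirmed by this article): the gsettings keys
com.canonical.Unity.Lenses remote-content-search and
com.canonical.Unity.Launcher launcher-position, and the 'gnome-panel'
Ubuntu package name.
"""
import subprocess


def disable_online_dash_results() -> None:
    # Roughly equivalent to switching off "Include online search results"
    # in the privacy settings; 'all' would re-enable remote scopes.
    subprocess.run(
        ["gsettings", "set", "com.canonical.Unity.Lenses",
         "remote-content-search", "none"],
        check=True,
    )


def move_launcher(position: str = "Bottom") -> None:
    # Ubuntu 16.04 added an official launcher-position option;
    # passing 'Left' restores the default placement.
    subprocess.run(
        ["gsettings", "set", "com.canonical.Unity.Launcher",
         "launcher-position", position],
        check=True,
    )


def install_classic_gnome_panel() -> None:
    # Installs the gnome-panel package that provides the classic GNOME
    # desktop as a separate session (requires root privileges).
    subprocess.run(
        ["sudo", "apt-get", "install", "-y", "gnome-panel"],
        check=True,
    )


if __name__ == "__main__":
    disable_online_dash_results()
    move_launcher("Bottom")
```

Each helper simply shells out to gsettings or apt-get, so running `gsettings reset <schema> <key>` undoes the corresponding change; nothing here depends on Unity internals.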
At the November 2011 Ubuntu Developer Summit, it was announced that Unity for Ubuntu 12.04 would not re-enable the systray, would have better application integration and the ability to drag lenses onto the launcher, and that the 2D version of Unity would use the same decoration buttons as the 3D version. During the planning conference for Ubuntu 12.10 it was announced that Unity 2D would probably be dropped in favour of making Unity 3D run better on lower-end hardware. In July 2012, at OSCON, Shuttleworth explained some of the historical reasoning behind Unity's development. The initial decision to develop a new interface in 2008 was driven by a desire to innovate and to pass Microsoft and Apple in user experience. This meant a family of unified interfaces that could be used across many device form factors, including desktop, laptop, tablet, smart phones and TV. Shuttleworth said, "The old desktop would force your tablet or your phone into all kinds of crazy or funny postures. So we said: Screw it. We're going to move the desktop to where it needs to be for the future. [This] turned out to be a deeply unpopular process." Initial testing of Unity during development was done in a laboratory setting and showed the success of the interface, despite public opposition. Real-world shipping return rates also indicated acceptance. Shuttleworth explained, "ASUS ran an experiment where they shipped half a million [Unity netbooks and laptops] to Germany. Not an easy market. And the return rates on Ubuntu were exactly the same as the return rates on Windows. Which is the key indicator for OEMs who are looking to do this." Microsoft's development of Windows 8 and its Metro interface became an additional incentive for Unity development, as Shuttleworth explained, "we [had to move] our desktop because if we didn't we'd end up where Windows 8 is. [In Windows 8] you have this shiny tablet interface, and you sit and you use it, then you press the wrong button, then it slaps you in the face and Windows 7 is back. And then you think OK, this is familiar, so you're kind of getting into it and whack [Windows 8 is back]." In March 2013 the plan to use the Mir display server was announced for future development of Unity, in place of the previously announced Wayland/Weston. In April 2015 it was announced that Unity 8 would ship as part of Ubuntu 16.04 LTS, or possibly later. It was also noted that this version of Unity would not visually differ much from Unity 7. In April 2016 Ubuntu 16.04 was released with Unity 7, not Unity 8, as the default user interface, though Unity 8 could be installed through the Ubuntu software repositories as an optional preview package. During an Ubuntu Online Summit, Canonical employees said that their goal was to ship Unity 8 as the default interface for Ubuntu 16.10, to be released in October 2016. These plans later changed, and Unity 8 came preinstalled with 16.10, but not as the default. On 5 April 2017, Mark Shuttleworth announced that Canonical's work on Unity would end and that Ubuntu would employ the GNOME 3 desktop instead. However, the UBports team forked the Unity 8 repository and continued the development. Currently, the Unity 8 project is maintained and developed by UBports. Reception Early versions of Unity received mixed reviews and generated controversy. Some reviewers found fault with the implementation and limitations, while other reviewers found Unity an improvement over GNOME 2 with the further potential to improve over time.
With Ubuntu 12.04, Unity received good reviews. Jack Wallen described it as an "incredible advancement". Jesse Smith described it as "attractive" and said it had grown to maturity. Ryan Paul said Unity was responsive, robust and had the reliability expected from a mature desktop shell. The Dash feature of Unity in Ubuntu 12.10 generated a privacy controversy. Ubuntu 10.10 In reviewing an alpha version of Unity, shortly after it was unveiled in the summer of 2010, Ryan Paul of Ars Technica noted problems figuring out how to launch additional applications that were not on the dock bar. He also mentioned a number of bugs, including the inability to track which applications were open and other window management difficulties. He remarked that many of these were probably due to the early stage in the development process and expected them to be resolved with time. Paul concluded positively, "Our test of the Unity prototype leads us to believe that the project has considerable potential and could bring a lot of value to the Ubuntu Netbook Edition. Its unique visual style melds beautifully with Ubuntu's new default theme and its underlying interaction model seems compelling and well-suited for small screens." In an extensive review of Ubuntu 10.10 shortly after its release in October 2010, Paul made further observations on Unity, noting that "Unity is highly ambitious and offers a substantially different computing experience than the conventional Ubuntu desktop." He concluded that "The [application] selectors are visually appealing, but they are easily the weakest part of the Unity user experience. The poor performance significantly detracts from their value in day-to-day use and the lack of actual file management functionality largely renders the file selector useless. The underlying concepts behind their design are good, however, and they have the potential to be much more valuable in the future as unity matures." Ubuntu 11.04 In March 2011, writer Benjamin Humphrey of OMG Ubuntu criticized the development version of Unity then being tested for Ubuntu 11.04 on a number of grounds, including a development process that is divorced from user experiences, the lack of response to user feedback, "the seemingly unbelievable lack of communication the design team has," and a user interface he described as "cluttered and inconsistent". Overall, however, he concluded that "Unity is not all bad ... While a number of the concepts in Unity may be flawed from a design point of view, the actual idea itself is not, and Canonical deserve applause for trying to jump start the stagnant open source desktop with Unity when the alternatives do not evoke confidence." On 14 April 2011 Ryan Paul reviewed Unity as implemented in Ubuntu 11.04 beta, just two weeks before its stable release. He reported that Unity was on track for inclusion in Natty Narwhal, despite the ambitious development schedule. He indicated, "close attention to detail shines through in many aspects of Unity. The menubar is clean and highly functional. The sidebar dock is visually appealing and has excellent default behaviors for automatic hiding." He noted that the interface still had some weak points, especially difficulties browsing for applications not on the dock, as well as switching between application categories. He noted that, in particular, "random packages from the repositories, which are presented as applications that are available for installation in the launcher, are distracting and largely superfluous". 
Paul concluded, "There is still a lot of room for improvement, but Unity is arguably a strong improvement over the conventional GNOME 2.x environment for day-to-day use. The breadth of the changes may be disorienting for some users, but most will like what they see when Unity lands on their desktop at the end of the month." Two weeks later he added the lack of configurability to his criticisms. In a very detailed assessment of Ubuntu 11.04 and Unity published on 12 May 2011, Paul further concluded Unity was a positive development for Ubuntu, but that more development had to be invested to make it work right. He wrote, "They have done some incredibly impressive work so far and have delivered a desktop that is suitable for day-to-day use, but it is still very far from fulfilling its full potential." On 25 April 2011, the eve of the release of Ubuntu 11.04, reviewer Matt Hartley of IT Management criticized Unity, saying that the "dumbing down of the Linux desktop environment is bordering on insane". Reviewer Joey Sneddon of OMG Ubuntu was more positive about Unity in his review of Ubuntu 11.04, encouraging users, "Sure it's different—but different doesn't mean bad; the best thing to do is to give it a chance." He concluded that Unity on the desktop makes "better use of screen space, intuitive interface layouts and, most importantly, making a desktop that works for the user and not in spite of them." Following the release of Ubuntu 11.04 Canonical Ltd. founder Mark Shuttleworth indicated that, while he was generally happy with the implementation of Unity, he felt that there was room for improvement. Shuttleworth said, "I recognise there are issues, and I would not be satisfied unless we fixed many of them in 11.10 ... Unity was the best option for the average user upgrading or installing. There are LOTS of people for whom it isn't the best, but we had to choose a default position ... It's by no means perfect, and it would be egotistical to suggest otherwise... I think the bulk of it has worked out fantastically—both at an engineering level (Compiz, Nux) and in the user experience." In reviewing Unity in Ubuntu 11.04 on 9 May 2011, Jesse Smith of DistroWatch criticized its lack of customization, menu handling and Unity hardware requirements, saying, "There's really nothing here which should demand 3D acceleration." He also noted that "The layout doesn't translate well to large screens or multiple-screen systems." Jack M. Germain of Linux Insider reviewed Unity on 11 May 2011, indicating strong dislike for it, saying, "Put me in the Hate It category" and indicating that as development has proceeded he likes it less and less. Ubuntu 11.10 More criticism appeared after the release of Ubuntu 11.10. In November 2011 Robert Storey writing in DistroWatch noted that developer work on Unity is now taking up so much time that little is getting done on outstanding Ubuntu bugs, resulting in a distribution that is not as stable or as fast as it should be. Storey concluded "Perhaps it would be worth putting up with the bugs if Unity was the greatest thing since sliced bread — something wonderful that is going to revolutionize desktop computing. But it's not. I tried Unity, and it's kind of cute, but nothing to write home about." In November 2011 OMG! Ubuntu! conducted a non-scientific poll that asked its readers "which Desktop Environment Are You Using in Ubuntu 11.10?". Of the 15,988 votes cast 46.78% indicated that they were using Unity over GNOME Shell (28.42%), Xfce (7.58%), KDE (6.92%) and LXDE (2.7%). 
Developers of Linux distributions based upon Ubuntu have also weighed in on the introduction of Unity in early 2011, when Unity was in its infancy. Some have been critical, including two distributions who base their criticism on usability testing. Marco Ghirlanda, the lead developer of the audio- and video-centric ArtistX, stated, "When I tried Unity on computer illiterates, they were less productive and took ages to understand the concepts behind it. When I show them how to use it, they said that it is pretty to see but hard to use." Stephen Ewen, the lead developer for UberStudent, an Ubuntu-based Linux distribution for higher education and college-bound high school students, stated, "Unity's design decreases both visual and functional accessibility, which tabulates to decreased productivity." Ewen also criticized Unity's menu scheme as much less accessible than on GNOME 2, which he said, "means that the brain cannot map as quickly to program categories and subcategories, which again means further decreased productivity." Ubuntu 12.04 LTS Jesse Smith of DistroWatch said that many people, like him, had questioned Ubuntu's direction, including Unity. But with Ubuntu 12.04 he felt that the puzzle pieces, which individually may have been underwhelming, had come together to form a whole, clear picture. He said "Unity, though a step away from the traditional desktop, has several features which make it attractive, such as reducing mouse travel. The HUD means that newcomers can find application functionality with a quick search and more advanced users can use the HUD to quickly run menu commands from the keyboard." He wrote that Unity had grown to maturity, but said he was bothered by its lack of flexibility. Jack Wallen of TechRepublic—who had strongly criticized early versions of Unity—said "Since Ubuntu 12.04 was released, and I migrated over from Linux Mint, I’m working much more efficiently. This isn’t really so much a surprise to me, but to many of the detractors who assume Unity a very unproductive desktop... well, I can officially say they are wrong. [...] I realize that many people out there have spurned Unity (I was one of them for a long time), but the more I use it, the more I realize that Canonical really did their homework on how to help end users more efficiently interact with their computers. Change is hard – period. For many, the idea of change is such a painful notion they wind up missing out on some incredible advancements. Unity is one such advancement." Ryan Paul said Unity was responsive, robust and had the reliability expected from a mature desktop shell. He considered the HUD as one of several excellent improvements that had helped to make Unity "even better in Ubuntu 12.04". Yet he also wrote: "Although Unity's quality has grown to the point where it fulfills our expectations, the user experience still falls short in a number of ways. We identified several key weaknesses in our last two Ubuntu reviews, some of which still haven't been addressed yet. These issues still detract from Unity's predictability and ease of use." Ubuntu 16.04 LTS Jack Wallen of TechRepublic, in reviewing the changes scheduled for Unity in Ubuntu 16.04 LTS, concluded, "Ubuntu Unity is not the desktop pariah you once thought it was. This desktop environment has evolved into a beautiful, efficient interface that does not deserve the scorn and derision heaped upon it by so many." 
See also Comparison of X Window System desktop environments Comparison of X window managers Ubuntu Unity References External links Official wiki Official Unity hardware requirements Bugs of Unity in Ubuntu Ubuntu Third Party Privacy Policies page Yunit, a fork of Unity Canonical (company) Discontinued software Graphical user interfaces Software that uses GTK Software that uses QML Touch user interfaces Ubuntu
891719
https://en.wikipedia.org/wiki/Ehud%20Shapiro
Ehud Shapiro
Ehud Shapiro (; born 1955) is a multi-disciplinary scientist, artist, entrepreneur and Professor of Computer Science and Biology at the Weizmann Institute of Science. He has an international reputation for fundamental contributions to several scientific disciplines. Ehud was also an Internet pioneer, a successful Internet entrepreneur, and a pioneer and proponent of E-democracy. Ehud is the founder of the Ba Rock Band and conceived its original artistic program. He is a winner of two ERC (European Research Council) Advanced Grants. Education and Professional background Born in Jerusalem in 1955, Shapiro found the guiding light for his scientific endeavors in the philosophy of science of Karl Popper, with which he became acquainted through a high-school project supervised by Moshe Kroy from the Department of Philosophy, Tel Aviv University. In 1979 Shapiro completed his undergraduate studies at Tel Aviv University in Mathematics and Philosophy with distinction. Shapiro's PhD work with Dana Angluin in Computer Science at Yale University attempted to provide an algorithmic interpretation of Popper's philosophical approach to scientific discovery, resulting in both a computer system for the inference of logical theories from facts and a methodology for program debugging, developed using the programming language Prolog. His thesis, "Algorithmic Program Debugging", was published by MIT Press as a 1982 ACM Distinguished Dissertation, followed in 1986 by "The Art of Prolog", a textbook co-authored with Leon Sterling. Coming to the Department of Computer Science and Applied Mathematics at the Weizmann Institute of Science in 1982 as a post-doctoral fellow, Shapiro was inspired by the Japanese Fifth Generation Computer Systems project to invent a high-level programming language for parallel and distributed computer systems, named Concurrent Prolog. A two-volume book on Concurrent Prolog and related work was published by MIT Press in 1987. Shapiro's work had a decisive influence on the strategic direction of the Japanese national project, and he cooperated closely with the project throughout its 10-year duration. In 1993, Shapiro took leave of absence from his tenured position at Weizmann to found Ubique Ltd. (and serve as its CEO), an Israeli Internet software pioneer. Building on Concurrent Prolog, Ubique developed "Virtual Places", a precursor to today's broadly-used Instant Messaging systems. Ubique was sold to America Online in 1995, and following a management buyout in 1997 was sold again to IBM in 1998, where it continues to develop SameTime, IBM's leading Instant Messaging product based on Ubique's technology. Preparing for his return to academia, Shapiro ventured into self-study of molecular biology. Shapiro attempted to build a computer from biological molecules, guided by a vision of "A Doctor in a Cell": a biomolecular computer that operates inside the living body, programmed with medical knowledge to diagnose diseases and produce the requisite drugs. Lacking experience in molecular biology, Shapiro realized his first design for a molecular computer as a LEGO-like mechanical device built using 3D stereolithography, which was patented upon his return to Weizmann in 1998. During the last decade and a half, Shapiro's lab has designed and successfully implemented various molecular computing devices. In 2004, Prof. Shapiro also designed an effective method of synthesizing error-free DNA molecules from error-prone building blocks. In 2011, Prof.
Shapiro founded the CADMAD consortium: The CADMAD technological platform aims to deliver a revolution in DNA processing analogous to the revolution text editing underwent with the introduction of electronic text editors. In 2005, Prof. Shapiro presented a vision of the next grand challenge in Human biology: To uncover the Human cell lineage tree. Inside all of us is a cell lineage tree: the history of how our body grows from a single cell (the fertilized egg) to 100 trillion cells. The biological and biomedical impact of such a success could be of a magnitude similar to, if not larger than, that of the Human Genome Project. In his TEDxTel-Aviv talk "Uncovering The Human Cell Lineage Tree: The next grand scientific challenge" Prof. Shapiro described the system and results obtained with it so far, and a proposal for a FET Flagship project, the "Human Cell Lineage Flagship initiative", for uncovering the Human cell lineage tree in health and disease. Inductive Logic Programming The philosopher of science Karl Popper suggested that all scientific theories are by nature conjectures and inherently fallible, and that refutation of old theories is the paramount process of scientific discovery. According to Popper's philosophy, the growth of scientific knowledge is based upon conjectures and refutations. Prof. Shapiro's doctoral studies with Prof. Dana Angluin attempted to provide an algorithmic interpretation of Karl Popper's approach to scientific discovery, in particular to automate the "conjectures and refutations" method: making bold conjectures and then performing experiments that seek to refute them. Prof. Shapiro generalized this into the "Contradiction Backtracing Algorithm", an algorithm for backtracking contradictions. This algorithm is applicable whenever a contradiction occurs between some conjectured theory and the facts. By testing a finite number of ground atoms for their truth in the model, the algorithm can trace back a source for this contradiction, namely a false hypothesis, and can demonstrate its falsity by constructing a counterexample to it. The "Contradiction Backtracing Algorithm" is relevant both to the philosophical discussion on the refutability of scientific theories and to the debugging of logic programs. Prof. Shapiro laid the theoretical foundation for inductive logic programming and built its first implementation, the Model Inference System: a Prolog program that inductively inferred logic programs from positive and negative examples. Inductive logic programming has since bloomed into a subfield of artificial intelligence and machine learning which uses logic programming as a uniform representation for examples, background knowledge and hypotheses. Recent work in this area, combining logic programming, learning and probability, has given rise to the new field of statistical relational learning. Algorithmic program debugging Program debugging is an unavoidable part of software development. Until the 1980s the craft of program debugging, practiced by every programmer, was without any theoretical foundation. In the early 1980s, systematic and principled approaches to program debugging were developed. In general, a bug occurs when a programmer has a specific intention regarding what the program should do, yet the program actually written exhibits a different behavior than intended in a particular case. One way of organizing the debugging process is to automate it (at least partially) via an algorithmic debugging technique, as sketched below.
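The oracle-guided bug-localization idea at the heart of algorithmic debugging can be illustrated with a minimal Python sketch. The trace, the toy sorting example and the oracle below are invented purely for illustration; Shapiro's original implementation was a Prolog program, and it also handled incompleteness and non-termination, which this sketch does not.

```python
# Minimal sketch of oracle-guided bug localization (illustrative only).
# Each Node records one step of a computation: the call that was made, the
# result it produced, and the sub-computations it relied on.  The debugger
# asks an oracle (normally the programmer) whether each result matches the
# intended behaviour and returns the topmost node that is wrong while all
# of its children are right -- the procedure behind that node must be buggy.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Node:
    call: str
    result: object
    children: List["Node"] = field(default_factory=list)


def locate_bug(root: Node, oracle: Callable[[Node], bool]) -> Optional[Node]:
    """Return a node the oracle rejects whose children it all accepts,
    or None if the root's result is already as intended."""
    if oracle(root):                      # result is as intended: no bug here
        return None
    for child in root.children:           # a sub-result may already be wrong
        culprit = locate_bug(child, oracle)
        if culprit is not None:
            return culprit
    return root                           # wrong result from correct sub-results


# Toy example: a sorting routine whose 'insert' step loses an element.
trace = Node("sort([2,1,3])", [1, 3], children=[
    Node("sort([1,3])", [1, 3], children=[Node("sort([3])", [3])]),
    Node("insert(2, [1,3])", [1, 3]),     # buggy step: the 2 was dropped
])

def oracle(node: Node) -> bool:
    # Stand-in for the programmer's intended model of each call.
    intended = {"sort([2,1,3])": [1, 2, 3], "sort([1,3])": [1, 3],
                "sort([3])": [3], "insert(2, [1,3])": [1, 2, 3]}
    return node.result == intended[node.call]

print(locate_bug(trace, oracle).call)     # -> insert(2, [1,3])
```

Running the sketch reports insert(2, [1,3]) as the culprit, mirroring how an algorithmic debugger narrows a visible symptom down to a single faulty procedure by asking the programmer a short series of yes/no questions.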
The idea of algorithmic debugging is to have a tool that guides the programmer along the debugging process interactively: it does so by asking the programmer about possible bug sources. Algorithmic debugging was first developed by Ehud Shapiro during his PhD research at Yale University, as introduced in his PhD thesis, selected as a 1982 ACM Distinguished Dissertation. Shapiro implemented the method of algorithmic debugging in Prolog (a general-purpose logic programming language) for the debugging of logic programs. In the case of logic programs, the intended behavior of the program is a model (a set of simple true statements) and bugs are manifested as program incompleteness (inability to prove a true statement) or incorrectness (ability to prove a false statement). The algorithm would either identify a false statement in the program and provide a counter-example to it, or identify a missing true statement that should, either directly or in generalized form, be added to the program. A method to handle non-termination was also developed. The Fifth Generation Computer Systems project The Fifth Generation Computer Systems project (FGCS) was an initiative by Japan's Ministry of International Trade and Industry, begun in 1982, to create a computer using massively parallel computing/processing. It was to be the result of a massive government/industry research project in Japan during the 1980s. It aimed to create an "epoch-making computer" with supercomputer-like performance and to provide a platform for future developments in artificial intelligence. In 1982, during a visit to the ICOT, Ehud Shapiro invented Concurrent Prolog, a novel concurrent programming language that integrated logic programming and concurrent programming. Concurrent Prolog is a logic programming language designed for concurrent programming and parallel execution. It is a process-oriented language, which embodies dataflow synchronization and guarded-command indeterminacy as its basic control mechanisms. Shapiro described the language in a report marked as ICOT Technical Report 003, which presented a Concurrent Prolog interpreter written in Prolog. Shapiro's work on Concurrent Prolog inspired a change in the direction of the FGCS from a focus on the parallel implementation of Prolog to a focus on concurrent logic programming as the software foundation for the project. It also inspired the concurrent logic programming language Guarded Horn Clauses (GHC) by Ueda, which was the basis of KL1, the programming language that was finally designed and implemented by the FGCS project as its core programming language. Ubique Ltd. In 1993, Prof. Shapiro took a leave of absence from the Weizmann Institute to found and serve as CEO of Ubique Ltd., an Israeli Internet software pioneer. Ubique was a software company that developed instant messaging and collaboration products. The company's first product, Virtual Places 1.0, integrated instant messaging, voice-over-IP and browser-based social networking in one product, on top of Unix-based workstations. These ideas and technologies, integrated into one product, were novel and revolutionary, and perhaps ahead of their time. Ubique was sold to America Online in 1995, bought back by its management in 1997, and sold again to IBM in 1998. Molecular programming languages By the beginning of the 21st century, scientific progress had successfully consolidated knowledge of the 'sequence' and 'structure' branches of molecular cell biology in an accessible manner.
For example, the DNA-as-string abstraction captured the primary sequence of nucleotides without including higher- and lower-order biochemical properties. This abstraction allows the application of a battery of string algorithms, as well as enabling the practical development of databases and common repositories. As molecular circuits are the information processing devices of cells and organisms, they have been the subject of research by biologists for many decades. Prior to the advent of computational biology tools, biologists were unable to access and analyze large amounts of data. The mountains of knowledge about the function, activity and interaction of molecular systems in cells remained fragmented. Moreover, these past studies, which identified and connected a few components or interactions at a time, required decades of serial work. In a seminal paper published in 2002 in Nature, "Cellular abstractions: Cells as computation", Prof. Shapiro raised the question: Why can't the study of biomolecular systems make a similar computational leap? Both sequence and structure research have adopted good abstractions: 'DNA-as-string' and 'protein-as-three-dimensional-labelled-graph', respectively. He believed that computer science could provide the much-needed abstraction for biomolecular systems. Together with his Ph.D. student Aviv Regev he used advanced computer science concepts to investigate the 'molecule-as-computation' abstraction, in which a system of interacting molecular entities is described and modelled by a system of interacting computational entities. He developed abstract computer languages for the specification and study of systems of interacting computations, in order to represent biomolecular systems, including regulatory, metabolic and signalling pathways, as well as multicellular processes such as immune responses. These "molecular programming languages" enabled simulation of the behavior of biomolecular systems, as well as development of knowledge bases supporting qualitative and quantitative reasoning on these systems' properties. The groundbreaking work (which initially used the π-calculus, a process calculus) was later taken up by Luca Cardelli's group at Microsoft Research in Cambridge, UK, which developed SPiM (the Stochastic Pi Machine). In the last decade the field has flourished with a vast variety of applications. More recently, the field has even evolved into a synthesis of two different fields: molecular computing and molecular programming. The combination of the two exhibits how different mathematical formalisms (such as chemical reaction networks) can serve as 'programming languages' and various molecular architectures (such as architectures built from DNA molecules) can in principle implement any behavior that can be mathematically expressed by the formalism being used. Doctor in a cell By combining computer science and molecular biology, researchers have been able to work on a programmable biological computer that in the future may navigate within the human body, diagnosing diseases and administering treatments. This is what Professor Ehud Shapiro from the Weizmann Institute termed a "Doctor in a cell". His group designed a tiny computer made entirely of biological molecules which was successfully programmed in a test tube to identify molecular changes in the body that indicate the presence of certain cancers.
The computer was then able to diagnose the specific type of cancer, and to react by producing a drug molecule that interfered with the cancer cells' activities, causing them to self-destruct. For this work he was named to the 2004 "Scientific American 50" as Research Leader in Nanotechnology. In 2009, Shapiro and PhD student Tom Ran presented the prototype of an autonomous programmable molecular system, based on the manipulation of DNA strands, which is capable of performing simple logical deductions. This prototype was the first simple programming language implemented at the molecular scale. Introduced into the body, this system has immense potential to accurately target specific cell types and administer the appropriate treatment, as it can perform millions of calculations at the same time and 'think' logically. Prof. Shapiro's team aims to make these computers perform highly complex actions and answer complicated questions, following a logical model first proposed by Aristotle over 2000 years ago. The biomolecular computers are extremely small: three trillion computers can fit into a single drop of water. If the computers were given the rule 'All men are mortal' and the fact 'Socrates is a man', they would answer 'Socrates is mortal' (a conventional software analogue of this kind of deduction is sketched below). Multiple rules and facts were tested by the team and the biomolecular computers answered them correctly each time. The team has also found a way to make these microscopic computing devices 'user-friendly' by creating a compiler, a program for bridging between a high-level computer programming language and DNA computing code. They sought to develop a hybrid in silico/in vitro system that supports the creation and execution of molecular logic programs in a similar way to electronic computers, enabling anyone who knows how to operate an electronic computer, with absolutely no background in molecular biology, to operate a biomolecular computer. In 2012, Prof. Ehud Shapiro and Dr. Tom Ran succeeded in creating a genetic device that operates independently in bacterial cells. The device was programmed to identify certain parameters and mount an appropriate response. The device searches for transcription factors, proteins that control the expression of genes in the cell. A malfunction of these molecules can disrupt gene expression. In cancer cells, for example, the transcription factors regulating cell growth and division do not function properly, leading to increased cell division and the formation of a tumor. The device, composed of a DNA sequence inserted into a bacterium, performs a "roll call" of transcription factors. If the results match pre-programmed parameters, it responds by creating a protein that emits a green light, supplying a visible sign of a "positive" diagnosis. In follow-up research, the scientists plan to replace the light-emitting protein with one that will affect the cell's fate, for example, a protein that can cause the cell to commit suicide. In this manner, the device will cause only "positively" diagnosed cells to self-destruct. Following the success of the study in bacterial cells, the researchers are planning to test ways of recruiting such bacteria as an efficient system to be conveniently inserted into the human body for medical purposes (which shouldn't be problematic given our natural microbiome; research suggests there are roughly 10 times more bacterial cells in the human body than human cells, which share our body space in a symbiotic fashion).
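For exposition only, the Socrates-style deduction described above can be mimicked by a few lines of conventional code. The Python sketch below is a hypothetical in-silico analogue; it says nothing about how the DNA strands, or the group's actual compiler, encode rules and facts.

```python
# Illustrative forward-chaining deduction over "is-a" rules and facts.
# A fact ("Socrates", "man") reads "Socrates is a man"; a rule ("man", "mortal")
# reads "every man is mortal".  Rules are applied repeatedly until no new
# facts can be derived (a fixed point), so the classic syllogism resolves
# just as the biomolecular computers did in the laboratory experiments.

from typing import List, Set, Tuple

Fact = Tuple[str, str]
Rule = Tuple[str, str]


def deduce(facts: Set[Fact], rules: List[Rule]) -> Set[Fact]:
    """Forward-chain the rules over the facts until nothing new is added."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for subject, category in list(derived):
            for premise, conclusion in rules:
                if category == premise and (subject, conclusion) not in derived:
                    derived.add((subject, conclusion))
                    changed = True
    return derived


facts = {("Socrates", "man")}
rules = [("man", "mortal")]
print(("Socrates", "mortal") in deduce(facts, rules))   # -> True
```

The same fixed-point loop answers any finite collection of such rules and facts, which parallels the multiple rules and facts the team reports having tested in the test tube.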
Yet another research goal is to operate a similar system inside human cells, which are much more complex than bacteria. DNA editing Prof. Shapiro designed an effective method of synthesizing error-free DNA molecules from error-prone building blocks. DNA programming is the DNA counterpart of computer programming. The basic computer programming cycle is to modify an existing program, test the modified program, and iterate until the desired behavior is obtained. Similarly, the DNA programming cycle is to modify a DNA molecule, test its resulting behavior, and iterate until the goal (which is either understanding the behavior or improving it) is achieved. One key difference between the two is that, unlike in computer programming, our understanding of DNA as a programming language is very far from being perfect, and therefore trial and error are the norm rather than the exception in DNA-based research and development. Hence DNA programming is more efficient if multiple variants of a DNA program, also called a DNA library, are created and tested in parallel, rather than creating and testing just one program at a time. The basic DNA programming cycle, when operating at full steam, therefore takes the best DNA programs from the previous cycle, uses them as a basis for creating a new set of DNA programs, tests them, and iterates until the goal is achieved. Furthermore, Polymerase Chain Reaction (PCR) is the DNA equivalent of Gutenberg's movable type printing, both allowing large-scale replication of a piece of text. De novo DNA synthesis is the DNA equivalent of mechanical typesetting; both ease the setting of text for replication. What is the DNA equivalent of the word processor? Word processing was rapidly adopted as a replacement for the typewriter when users discovered its revolutionary advantages in document creation, editing, formatting and saving. While the electronic representation of text in computers allows the processing of text within a simple unified framework, DNA processing (the creation of variations and combinations of existing DNA) is performed by biology labs daily using a plethora of unrelated, manual, labor-intensive methods. As a result, so far no universal method for DNA processing has been proposed and, consequently, no engineering discipline that further utilizes the processed DNA has emerged. Prof. Shapiro founded the CADMAD consortium: The CADMAD technological platform aims to deliver a revolution in DNA processing analogous to the revolution text editing underwent with the introduction of electronic text editors. The biotechnology revolution has, to a large extent, been held back by its notoriously prolonged R&D cycle compared to the computer programming cycle. A CAD/CAM technology for DNA, which would bring word-processor ease to DNA processing and thus support rapid DNA programming, would revolutionize biotechnology by shortening the R&D cycle of DNA-based applications. This can only be accomplished by coordinating the development of complex, multi-layered technologies which integrate expertise from fields as varied as algorithmics, software engineering, biotechnology, robotics and chemistry. These are only now starting to emerge as feasible. Human cell lineage tree In 2005, Prof. Shapiro presented a vision of the next grand challenge in Human biology: To uncover the Human cell lineage tree. Inside all of us is a cell lineage tree: the history of how our body grows from a single cell (the fertilized egg) to 100 trillion cells.
The biological and biomedical impact of such a success could be of a magnitude similar to, if not larger than, that of the Human Genome Project. Every human being starts as a single cell, the fusion of an egg and a sperm, and progresses via cell division and cell death through development, birth, growth, and aging. Human health depends on maintaining a proper process of cell division, renewal and death, and humanity's most severe diseases, notably cancer, auto-immune diseases, diabetes, neuro-degenerative and cardiovascular disorders, and the multitude of inherited rare diseases, are all the result of specific aberrations in this process. The history of a person's cells, from conception until any particular moment in time, can be captured by a mathematical entity called a cell lineage tree. The root of the tree represents the fertilized egg, the leaves of the tree represent the person's extant cells, and branches in the tree capture every single cell division in the person's history. Science knows precisely the cell lineage tree of only one organism, a worm called Caenorhabditis elegans, which reaches its full size of 1 millimeter and 1,000 cells in 36 hours. By comparison, a newborn mouse, weighing only a few grams, has about 1 billion cells. An average person has about 100 trillion cells. Understanding the structure and dynamics of the human cell lineage tree in development, growth, renewal, aging, and disease is a central and urgent quest of biology and medicine. The challenge of uncovering the Human Cell Lineage Tree is reminiscent, both in nature and in scope, of the challenge faced by the Human Genome Project at its inception and, in fact, its results will decisively contribute to the functional translation and ultimate understanding of the genome sequence. A technological leap of a magnitude similar to the one that occurred during the Human Genome Project is required for the success of the human cell lineage project, and the biological and biomedical impact of such a success could be of a magnitude similar to, if not larger than, that of the Human Genome Project. Central open problems in biology and medicine are in effect questions about the human cell lineage tree: its structure and its dynamics in development, growth, renewal, aging, and disease. Consequently, knowing the Human Cell Lineage Tree would resolve these problems and entail a leapfrog advance in human knowledge and health. Many central questions in biology and medicine are actually specific questions about the Human cell lineage tree, in health and disease: Which cancer cells initiate relapse after chemotherapy? Which cancer cells can metastasize? Do insulin-producing beta cells renew in healthy adults? Do eggs renew in adult females? Which cells renew in the healthy and in the unhealthy adult brain? Knowing the Human cell lineage tree would answer all these questions and more. Fortunately, our cell lineage tree is implicitly encoded in our cells' genomes via mutations that accumulate when body cells divide. Theoretically, it could be reconstructed with high precision by sequencing every cell in our body, at a prohibitive cost. Practically, analyzing only highly-mutable fragments of the genome is sufficient for cell lineage reconstruction. Shapiro's lab has developed a proof-of-concept multidisciplinary method and system for cell lineage analysis from somatic mutations. In his TEDxTel-Aviv talk "Uncovering The Human Cell Lineage Tree: The next grand scientific challenge" Prof.
Shapiro described the system and results obtained with it so far, and a proposal for a FET Flagship project, the "Human Cell Lineage Flagship initiative", for uncovering the Human cell lineage tree in health and disease. E-democracy In 2012, Ehud initiated and led the "open party" (later "open community") project within the Public Knowledge Workshop, which aimed to provide foundations for the operation of an e-party espousing direct democracy via the internet. He further extended his concepts of e-democracy in his Davos 2016 WEF lecture and a Financial Times opinion article. In 2020 Ehud founded the political party Democratit - freedom, equality and fraternity. See also Ehud Shapiro's talk at TEDxTel-Aviv: Uncovering The Human Cell Lineage Tree: The next grand scientific challenge Democratit - freedom, equality and fraternity (political party) References 1955 births Living people Israeli bioinformaticians Israeli computer scientists Weizmann Institute of Science faculty People from Jerusalem European Research Council grantees
25753737
https://en.wikipedia.org/wiki/ArchOne
ArchOne
ArchOne is an Arch Linux-based operating system, optimized for Acer Aspire One netbooks, but usable on other PCs with similar hardware. Features ArchOne is preconfigured to support the hardware of the Acer Aspire One. Drivers for network, sound, graphics, special keys and the webcam are active and make the hardware ready to use immediately upon first boot. ArchOne uses a rolling-release model for upgrades. Editions ArchOne has three editions, each with a different desktop manager: the Openbox edition, the GNOME edition and the KDE edition, all with the same application software, including Firefox, Google Chrome, Skype, KeePassX, Hsoconnect, Gparted, GIMP, OpenOffice.org, VLC and MPlayer. External links ArchOne Website Arch Linux Website References Arch-based Linux distributions Pacman-based Linux distributions Linux distributions
2280139
https://en.wikipedia.org/wiki/PC%20Tools%20%28software%29
PC Tools (software)
PC Tools is a collection of software utilities for DOS developed by Central Point Software. History of development The original PC Tools package was first developed as a suite of utilities for DOS, released for retail in 1985 for $39.95. With the introduction of version 4.0, the name was changed to PC Tools Deluxe, and the primary interface became a colorful graphical shell (previously the shell resembled PC BOSS and was monochrome). By version 7.0 of the package in 1991 several Windows programs had been added to it. Though the 7.0 version sold well, it was criticised in computer trade publications for being overcomplicated and riddled with bugs. It was widely considered to have been rushed to publication, despite the objections of many of Central Point Software's employees. PC Tools PRO (Version 9) for DOS was the last stable version released by Central Point Software before acquisition. In June 1994 Central Point was acquired by their top competitor Symantec. The package MORE PC Tools was released by Symantec's Central Point Division in October 1994 and included additional utilities: Backtrack, CrashGuard Pro (ex-Central Point Recuperator), DriveCheck and DriveSpeed. After that the product line was ultimately discontinued. PC Tools was the main competitor to Norton Utilities, which Symantec had acquired in 1990. Symantec now uses the PC Tools brand name—acquired from an Australian security vendor in 2008—for low-cost antivirus and antispyware software. The PC Tools brand was retired on December 4, 2013, and the website now refers visitors to products in Symantec's Norton product line. Utilities included PC Shell — a file manager, capable amongst other things of displaying the contents of data files used by various popular database, word-processor, and spreadsheet packages PC-Cache — a licensed disk cache of HyperCache from the HyperDisk Speed Kit PC-Secure — a file encryption utility Central Point Anti-Virus — an antivirus program. Central Point Backup — a backup utility for archiving and restoring data to and from disc or tape. In earlier releases, this utility was officially named "PC Backup". Innovative features included compression during backup and floppy disk spanning, and optional use of the Central Point Option Board for 33% faster disk writing. DiskFix — a utility for repairing on-disc file system data structures of a disc volume DiskEdit — a disk editor Unformat — a utility that attempts to reverse the effects of a high-level format of a disc volume Undelete — a utility that attempts to recover a deleted file Mirror — a tool for storing the File Allocation Table to permit recovery of high-level formatted disks in combination with Unformat Compress — a disc volume defragmentation utility FileFix — a utility for repairing corruption to the data files used by various popular database, word-processor, and spreadsheet packages Commute — a remote control utility VDefend — a memory-resident computer virus detection utility SysInfo — a system information utility, incorporating diagnostics from 1993 onwards. The diagnostics were licensed from the Eurosoft product Pc-Check Central Point Desktop (CPS) — an alternative Windows desktop shell, supporting nested icon groups, file manager, resource monitoring dashboard, virtual desktops, launch menus and many other features The Mirror, Undelete, and Unformat utilities were licensed by Central Point to Microsoft for inclusion in MS-DOS 5.0.
Central Point Anti-Virus and VDefend were licensed as Microsoft AntiVirus and VSafe, respectively, in MS-DOS 6.0 through 6.22. References Notes Further reading Utility software DOS software 1986 software
56682115
https://en.wikipedia.org/wiki/Aleem%20Khan
Aleem Khan
Abdul Aleem Khan is a Pakistani politician and a businessman who owns Samaa TV and Vision Group. He was also the Senior Minister of Punjab and Minister of Food. He has been a member of the Provincial Assembly of the Punjab since August 2018. Previously, he served as Provincial Minister of Punjab for Local Government and Community Development and Provincial Minister of Punjab for Planning and Development from 27 August 2018 to 6 February 2019. He was a member of the Provincial Assembly of the Punjab from 2003 to 2007 and also served as Provincial Minister of Punjab for Information Technology for the same period. Early life and education Khan was born on 5 October 1972 in Lahore, Pakistan. He hails from a Pashtun family of the Kakazai tribe. He received his early education from Crescent Model Higher Secondary School, Lahore. He graduated from Government College University, Lahore in 1992 and holds a Bachelor of Arts degree. Philanthropy Apart from politics, Aleem Khan has established a foundation named the "Abdul Aleem Khan Foundation", which is currently working in the education, health and community development sectors. The foundation is currently operating more than 60 water filtration plants in underdeveloped areas of Lahore. The foundation runs ten free healthcare dispensaries, which provide free medical services and medicines. The foundation also runs an orphanage for girls named "Apna Ghar", which provides free education and boarding facilities to the girls. The foundation makes regular financial contributions to the health and education sectors through donations to various educational institutions and healthcare institutes, mainly Shaukat Khanum Memorial Cancer Hospital, INMOL, Ghurki Hospital, NAMAL University, and the Rising Sun Institute's DHA and Mughalpura campuses. The foundation constructed a hydrotherapy swimming pool and a speech therapy department at the Rising Sun Institute's DHA Campus for special children. The foundation donated two kanals of land and also paid the construction cost of the state-of-the-art Abdur Raheem Khan Campus, Mughalpura, for the education of special children. This campus consists of three floors and a basement, including a hydrotherapy pool, a management centre and classrooms for the C.P. children. Further, more than 200 children are sponsored by the foundation for their studies at the Abdur Rahim Khan Campus, Mughalpura. All of the campus's expenses, such as utilities, salaries of teachers and allied staff, and maintenance, are borne by the Abdul Aleem Khan Foundation. The foundation also contributed actively to the construction and extension of the Shaukat Khanum Memorial Hospitals in Lahore and Peshawar, donating a large amount for this purpose. In recognition of these donations, Shaukat Khanum Memorial Hospital, Lahore, has named the third floor of its patient ward the Abdur Rahim Khan Floor, and the inpatient floor at Shaukat Khanum Memorial Hospital, Peshawar, is named the Nasim Khan (mother of Abdul Aleem Khan) Floor. Mammography and X-ray machines have also been provided to the SKMT hospitals in Lahore and Peshawar. The foundation is currently running a project at Kot Lakhpat Jail, Lahore, for the rehabilitation of its 60-bed hospital, the construction of washrooms, and the provision of desert coolers and other facilities for prisoners.
The foundation has also deposited a sum of Rs 3.9 million in the government treasury during the last five months in order to secure the release of 44 prisoners who could not afford to pay their fines on completion of their sentences. Business career Abdul Aleem Khan is the owner of Vision Group, a leading real estate development company in Pakistan. Vision Group began when the Park View brand was introduced in Lahore's real estate market, and Park View has since become a major real estate developer at the national level. Vision Group has not only taken its real estate projects to cities nationwide but has also diversified its businesses. Vision Group has entered the education sector with The National School, which aims to equip young people to excel in real life. Political career Khan made his political debut in 2002 and ran for a seat in the National Assembly of Pakistan as a candidate of the Pakistan Muslim League (Q) (PML-Q) from Constituency NA-127 (Lahore-X) in the 2002 Pakistani general election, but was unsuccessful. He received 20,545 votes and lost the seat to Muhammad Tahir-ul-Qadri. He was elected to the Provincial Assembly of the Punjab as a candidate of PML-Q from Constituency PP-147 (Lahore-XI) in a by-election held in January 2003. He received 18,059 votes and defeated Ameer ul Azim, a candidate of Muttahida Majlis-e-Amal (MMA). On 15 November 2003, he was inducted into the provincial Punjab cabinet of Chief Minister Chaudhry Pervaiz Elahi and was appointed as Provincial Minister of Punjab for Information Technology. He remained Provincial Minister of Punjab for Information Technology until 2007. He ran for a seat in the National Assembly as a candidate of PML-Q from Constituency NA-127 (Lahore-X) in the 2008 Pakistani general election but was unsuccessful. He received 13,707 votes and lost the seat to Naseer Ahmed Bhutta. In the same election, he also ran for a seat in the Provincial Assembly of the Punjab as a candidate of PML-Q from Constituency PP-147 (Lahore-XI) but was unsuccessful. He received 9,493 votes and lost the seat to Mohsin Latif. In January 2012, he quit PML-Q and joined Pakistan Tehreek-e-Insaf (PTI). In February 2013, he was elected as Deputy President of PTI Lahore. He was not given a ticket by PTI to contest the 2013 Pakistani general election. He ran for a seat in the National Assembly as a candidate of PTI from Constituency NA-122 (Lahore-V) in a by-election held in October 2015 but was unsuccessful. He received 72,043 votes and lost the seat to Sardar Ayaz Sadiq. He filed a petition with the Election Commission of Pakistan (ECP) to challenge the results of the by-election. He alleged that rigging had taken place in the constituency and claimed to have evidence. In 2016, the ECP said that Khan had submitted fake affidavits to prove the rigging allegation. In July 2016, PTI appointed him as the president of the party's central Punjab chapter. He was elected to the Provincial Assembly of the Punjab as a candidate of PTI from Constituency PP-158 (Lahore-XV) in the 2018 Pakistani general election. On 27 August 2018, he was inducted into the provincial Punjab cabinet of Chief Minister Sardar Usman Buzdar and was appointed as Provincial Minister of Punjab for Local Government and Community Development. He was given the status of a senior minister in the cabinet.
In October 2018, Prime Minister Imran Khan ordered Khan to vacate the Punjab Chief Minister's camp office in Lahore after he was found not to be fully cooperating with Chief Minister Usman Buzdar. In February 2019, he was arrested by the National Accountability Bureau (NAB). The same day, he announced his resignation as Provincial Minister of Punjab for Local Government and Community Development. On 13 April 2020, he was again inducted into the provincial cabinet and appointed as the Senior Minister of Punjab and Provincial Minister of Punjab for Food. He is also the Chairman of the Corona Task Force in Punjab. In November 2021, he resigned as Minister of Food. Assets He made two financial transactions of Rs 198 million and Rs 140 million during his tenure as Provincial Minister of Punjab for Information Technology between 2002 and 2007. According to documents submitted to the Election Commission of Pakistan in 2018, Khan declared his assets worth . References 1972 births Living people People named in the Panama Papers Pakistan Muslim League (Q) MPAs (Punjab) Punjab MPAs 2002–2007 Pakistan Tehreek-e-Insaf MPAs (Punjab) Provincial ministers of Punjab Punjab MPAs 2018–2023 Pakistanis named in the Pandora Papers
10806668
https://en.wikipedia.org/wiki/Ho%20Chi%20Minh%20City%20University%20of%20Information%20Technology
Ho Chi Minh City University of Information Technology
VNUHCM-University of Information Technology (VNUHCM-UIT; ) is a public university located in Ho Chi Minh City, Vietnam, and a member of Vietnam National University, Ho Chi Minh City. Although its name refers to information technology, the university teaches a broad range of computing disciplines. The first course was inaugurated on 6 November 2006. History The predecessor of this university was the Center for IT Development. On 8 June 2006, the Vietnamese prime minister signed Decision no. 134/2006/QĐ-TTg to establish the university. Academic Programs Undergraduate programs Computer Sciences Data Science Information Systems Computer Engineering Electronic Commerce Computer Networks and Communications Software Engineering Information Technology Information Security Graduate programs: Computer Sciences Information Systems Information Security Information Technology Research fields Knowledge Engineering Signal Processing Computer Network Protocols Network Security Multimedia Processing Mobile Network Embedded & VLSI Design IoT & Robotics Geographic Information Systems (GIS) References External links Introduction of the University of IT Introduction of Center for IT Development Universities in Ho Chi Minh City
14631011
https://en.wikipedia.org/wiki/Jane%20Hillston
Jane Hillston
Jane Elizabeth Hillston (born 1963) is a British professor of Quantitative Modelling and Head of School in the School of Informatics, University of Edinburgh, Scotland. Early life and education Hillston received a BA in Mathematics from the University of York in 1985, an MSc in Mathematics from Lehigh University in the United States in 1987 and a PhD in Computer Science from the University of Edinburgh in 1994, where she has spent her subsequent academic career. Her PhD thesis received the BCS/CPHC Distinguished Dissertation Award in 1995 and has been published by Cambridge University Press. Research and career She has been an EPSRC Research Fellow (1994–1995), Lecturer (1995–2001), Reader (2001–2006) and Professor of Quantitative Modelling since 2006. Hillston is a member of the Laboratory for Foundations of Computer Science at Edinburgh. In 2018 she was appointed Head of the School of Informatics at Edinburgh, taking over from Johanna Moore. Jane Hillston is known for her work on stochastic process algebras. In particular, she has developed the PEPA process algebra, and helped develop Bio-PEPA, which is based on the earlier PEPA algebra and is specifically aimed at analyzing biochemical networks. Hillston serves on the editorial board of Logical Methods in Computer Science; on that of Elsevier's Theoretical Computer Science, as one of the editors in the area of Theory of Natural Computing; and as an Associate Editor of ACM Transactions on Modeling and Computer Simulation (TOMACS). Awards In 2004, she received the first Roger Needham Award at the Royal Society in London, awarded yearly for a distinguished research contribution in computer research by a UK-based researcher within ten years of their PhD. In March 2007 she was elected to the fellowship of the Royal Society of Edinburgh. In 2018, Hillston was elected to membership of Academia Europaea. In 2018 she was a recipient of the Suffrage Science Award for Computer Science. In 2021 she was awarded the RSE Lord Kelvin Medal. She led the University of Edinburgh School of Informatics in applying for an Athena SWAN Award, which the school subsequently achieved at Silver level. The award shows that the department provides a "supportive environment" for female students. References External links Jane Hillston's home page Official web page LFCS web page 1963 births Living people British computer scientists Formal methods people Alumni of the University of York Lehigh University alumni Alumni of the University of Edinburgh Academics of the University of Edinburgh Fellows of the Royal Society of Edinburgh Members of Academia Europaea British women computer scientists
12799
https://en.wikipedia.org/wiki/Graphic%20design
Graphic design
Graphic design is the profession and academic discipline whose activity consists in projecting visual communications intended to transmit specific messages to social groups, with specific objectives. As opposed to art, whose aim is merely contemplation, design is based on the principle of "form follows a specific function". Therefore, graphic design is an interdisciplinary branch of design whose foundations and objectives revolve around the definition of problems and the determination of objectives for decision-making, through creativity, innovation and lateral thinking, along with manual or digital tools, transforming them for proper interpretation. This activity helps in the optimization of graphic communications (see also communication design). It is also known as visual communication design, visual design or editorial design. The role of the graphic designer in the communication process is that of encoder or interpreter of the message. They work on the interpretation, ordering, and presentation of visual messages. The design work can be based on a customer's demand, a demand that is ultimately established linguistically, either orally or in writing; that is, graphic design transforms a linguistic message into a graphic manifestation. Graphic design has, as a field of application, different areas of knowledge focused on any visual communication system. For example, it can be applied in advertising strategies, or in fields such as aviation or space exploration. In some countries graphic design is regarded as being associated only with the production of sketches and drawings; this is incorrect, since such work is only a small part of the huge range of areas in which graphic design can be applied. Given the rapid and massive growth in information sharing, the demand for experienced designers is greater than ever, particularly because of the development of new technologies and the need to pay attention to human factors beyond the competence of the engineers who develop them. Terminology The first known use of the term "graphic design" in the way in which we understand it today was in the 4 July 1908 issue (volume 9, number 27) of Organized Labor, a publication of the Labor Unions of San Francisco, in an article about technical education for printers: An Enterprising Trades Union … The admittedly high standard of intelligence which prevails among printers is an assurance that with the elemental principles of design at their finger ends many of them will grow in knowledge and develop into specialists in graphic design and decorating. … A decade later, the 1917–1918 course catalog of the California School of Arts & Crafts advertised a course titled Graphic Design and Lettering, which replaced one called Advanced Design and Lettering. Both classes were taught by Frederick Meyer; it is unclear why he changed the name of the course. History The origins of graphic design can be traced from the origins of human existence, from the caves of Lascaux, to Rome's Trajan's Column, to the illuminated manuscripts of the Middle Ages, to the neon lights of Ginza, Tokyo. In Babylon, "artisans pressed cuneiform inscriptions into clay bricks or tablets which were used for construction. The bricks gave information such as the name of the reigning monarch, the builder, or some other dignitary". This was the first known road sign announcing the name of the governor of a state or mayor of the city.
The Egyptians developed communication by hieroglyphics that used picture symbols, dating as far back as 136 B.C., found on the Rosetta Stone. "The Rosetta stone, found by one of Napoleon's engineers was an advertisement for the Egyptian ruler, Ptolemy as the "true Son of the Sun, the Father of the Moon, and the Keeper of the Happiness of Men". The Egyptians also invented papyrus, paper made from reeds found along the Nile, on which they transcribed advertisements more common among their people at the time. During the "Dark Ages", from 500 AD to 1450 AD, monks created elaborate, illustrated manuscripts. In both its lengthy history and in the relatively recent explosion of visual communication in the 20th and 21st centuries, the distinction between advertising, art, graphic design and fine art has disappeared. They share many elements, theories, principles, practices, languages and sometimes the same benefactor or client. In advertising, the ultimate objective is the sale of goods and services. In graphic design, "the essence is to give order to information, form to ideas, expression, and feeling to artifacts that document the human experience." Graphic design in the United States began with Benjamin Franklin, who used his newspaper The Pennsylvania Gazette to master the art of publicity, to promote his own books, and to influence the masses. "Benjamin Franklin's ingenuity gained in strength as did his cunning and in 1737 he had replaced his counterpart in Pennsylvania, Andrew Bradford as postmaster and printer after a competition he instituted and won. He showed his prowess by running an ad in his General Magazine and the Historical Chronicle of British Plantations in America (the precursor to the Saturday Evening Post) that stressed the benefits offered by a stove he invented, named the Pennsylvania Fireplace. His invention is still sold today and is known as the Franklin stove." American advertising initially imitated British newspapers and magazines. Advertisements were printed in scrambled type and uneven lines, which made them difficult to read. Franklin better organized this by adding 14-point type for the first line of the advertisement, although he later shortened and centered it, making "headlines". Franklin added illustrations, something that London printers had not attempted. Franklin was the first to utilize logos, which were early symbols that announced such services as opticians by displaying golden spectacles. Franklin taught advertisers that the use of detail was important in marketing their products. Some advertisements ran for 10-20 lines, including color, names, varieties, and sizes of the goods that were offered. The advent of printing During the Tang Dynasty (618–907) wood blocks were cut to print on textiles and later to reproduce Buddhist texts. A Buddhist scripture printed in 868 is the earliest known printed book. Beginning in the 11th century, longer scrolls and books were produced using movable type printing, making books widely available during the Song dynasty (960–1279). During the 17th–18th century movable type was used for handbills or trade cards, which were printed from wood or copper engravings. These documents announced a business and its location. English painter William Hogarth used his skill in engraving and was one of the first to design for business trade. In Mainz, Germany, in 1448, Johannes Gutenberg introduced movable type, using a new metal alloy for use in a printing press, and opened a new era of commerce.
This made graphics more readily available since mass printing dropped the price of printing material significantly. Previously, most advertising was word of mouth. In France and England, for example, criers announced products for sale just as ancient Romans had done. The printing press made books more widely available. Aldus Manutius developed the book structure that became the foundation of western publication design. This era of graphic design is called Humanist or Old Style. Additionally, William Caxton, England's first printer produced religious books, but had trouble selling them. He discovered the use of leftover pages and used them to announce the books and post them on church doors. This practice was termed "squis" or "pin up" posters, in approximately 1612, becoming the first form of print advertising in Europe. The term Siquis came from the Roman era when public notices were posted stating "if anybody...", which in Latin is "si quis". These printed announcements were followed by later public registers of wants called want ads and in some areas such as the first periodical in Paris advertising was termed "advices". The "Advices" were what we know today as want ad media or advice columns. In 1638 Harvard University received a printing press from England. More than 52 years passed before London bookseller Benjamin Harris received another printing press in Boston. Harris published a newspaper in serial form, Publick Occurrences Both Foreign and Domestick. It was four pages long and suppressed by the government after its first edition. John Campbell is credited for the first newspaper, the Boston News-Letter, which appeared in 1704. The paper was known during the revolution as "Weeklies". The name came from the 13 hours required for the ink to dry on each side of the paper. The solution was to first, print the ads and then to print the news on the other side the day before publication. The paper was four pages long having ads on at least 20%-30% of the total paper, (pages one and four) the hot news was located on the inside. The initial use of the Boston News-Letter carried Campbell's own solicitations for advertising from his readers. Campbell's first paid advertisement was in his third edition, May 7 or 8th, 1704. Two of the first ads were for stolen anvils. The third was for real estate in Oyster Bay, owned by William Bradford, a pioneer printer in New York, and the first to sell something of value. Bradford published his first newspaper in 1725, New York's first, the New-York Gazette. Bradford's son preceded him in Philadelphia publishing the American Weekly Mercury, 1719. The Mercury and William Brooker's Massachusetts Gazette, first published a day earlier. Nineteenth century In 1849, Henry Cole became one of the major forces in design education in Great Britain, informing the government of the importance of design in his Journal of Design and Manufactures. He organized the Great Exhibition as a celebration of modern industrial technology and Victorian design. From 1891 to 1896, William Morris' Kelmscott Press published some of the most significant of the graphic design products of the Arts and Crafts movement, and made a lucrative business of creating and selling stylish books. Morris created a market for works of graphic design in their own right and a profession for this new type of art. The Kelmscott Press is characterized by an obsession with historical styles. This historicism was the first significant reaction to the state of nineteenth-century graphic design. 
Morris' work, along with the rest of the Private Press movement, directly influenced Art Nouveau. During the first half of the nineteenth century, there were diverse styles that were used by various graphic designers. Several examples are Greek, Roman, Classical, Egyptian, and Gothic. The early part of the century has often been regarded as lackluster for its revival of historic styles. However, the latter part of the century would showcase designers using these existing styles as a conceptual framework to expand their own styles. For instance, designer Augustus W.N. Pugin wrote in the book The True Principles of Pointed or Christian Architecture (1841) that Gothic is "not a style, but a principle." Will H. Bradley became one of the notable graphic designers of the late nineteenth century, creating art pieces in various Art Nouveau styles. Bradley created a number of designs as promotions for a literary magazine titled The Chap-Book. One of them was a Thanksgiving poster that was finished in 1895. The poster is recognized for including a system of curved lines and forms, and it borrows elements from Japanese printing styles in its use of flat colored planes. Bradley's works proved to be an inspiration as the concept of art posters became more commonplace by the early twentieth century. In addition, art posters became a significant aspect of advertising.
Notable names in mid-century modern design include Adrian Frutiger, designer of the typefaces Univers and Frutiger; Paul Rand, who took the principles of the Bauhaus and applied them to popular advertising and logo design, helping to create a uniquely American approach to European minimalism while becoming one of the principal pioneers of corporate identity, a subset of graphic design. Alex Steinweiss is credited with the invention of the album cover; and Josef Müller-Brockmann, who designed posters in a severe yet accessible manner typical of the 1950s and 1970s era. The professional graphic design industry grew in parallel with consumerism. This raised concerns and criticisms, notably from within the graphic design community with the First Things First manifesto. First launched by Ken Garland in 1964, it was re-published as the First Things First 2000 manifesto in 1999 in the magazine Emigre 51 stating "We propose a reversal of priorities in favor of more useful, lasting and democratic forms of communication - a mindshift away from product marketing and toward the exploration and production of a new kind of meaning. The scope of debate is shrinking; it must expand. Consumerism is running uncontested; it must be challenged by other perspectives expressed, in part, through the visual languages and resources of design." Both editions attracted signatures from practitioners and thinkers such as Rudy VanderLans, Erik Spiekermann, Ellen Lupton and Rick Poynor. The 2000 manifesto was also published in Adbusters, known for its strong critiques of visual culture. Applications Graphic design is applied to everything visual, from road signs to technical schematics, from interoffice memorandums to reference manuals. Design can aid in selling a product or idea. It is applied to products and elements of company identity such as logos, colors, packaging and text as part of branding (see also advertising). Branding has become increasingly more important in the range of services offered by graphic designers. Graphic designers often form part of a branding team. Graphic design is applied in the entertainment industry in decoration, scenery and visual story telling. Other examples of design for entertainment purposes include novels, vinyl album covers, comic books, DVD covers, opening credits and closing credits in filmmaking, and programs and props on stage. This could also include artwork used for T-shirts and other items screenprinted for sale. From scientific journals to news reporting, the presentation of opinion and facts is often improved with graphics and thoughtful compositions of visual information - known as information design. Newspapers, magazines, blogs, television and film documentaries may use graphic design. With the advent of the web, information designers with experience in interactive tools are increasingly used to illustrate the background to news stories. Information design can include data visualization, which involves using programs to interpret and form data into a visually compelling presentation, and can be tied in with information graphics. Skills A graphic design project may involve the stylization and presentation of existing text and either preexisting imagery or images developed by the graphic designer. Elements can be incorporated in both traditional and digital form, which involves the use of visual arts, typography, and page layout techniques. Graphic designers organize pages and optionally add graphic elements. 
Graphic designers can commission photographers or illustrators to create original pieces. Designers use digital tools, often referred to as interactive design, or multimedia design. Designers need communication skills to convince an audience and sell their designs. The "process school" is concerned with communication; it highlights the channels and media through which messages are transmitted and by which senders and receivers encode and decode these messages. The semiotic school treats a message as a construction of signs which through interaction with receivers, produces meaning; communication as an agent. Typography Typography includes type design, modifying type glyphs and arranging type. Type glyphs (characters) are created and modified using illustration techniques. Type arrangement is the selection of typefaces, point size, tracking (the space between all characters used), kerning (the space between two specific characters) and leading (line spacing). Typography is performed by typesetters, compositors, typographers, graphic artists, art directors, and clerical workers. Until the digital age, typography was a specialized occupation. Certain fonts communicate or resemble stereotypical notions. For example, 1942 Report is a font which types text akin to a typewriter or a vintage report. Page layout Page layout deals with the arrangement of elements (content) on a page, such as image placement, text layout and style. Page design has always been a consideration in printed material and more recently extended to displays such as web pages. Elements typically consist of type (text), images (pictures), and (with print media) occasionally place-holder graphics such as a dieline for elements that are not printed with ink such as die/laser cutting, foil stamping or blind embossing. Printmaking Printmaking is the process of making artworks by printing on paper and other materials or surfaces. The process is capable of producing multiples of the same work, each called a print. Each print is an original, technically known as an impression. Prints are created from a single original surface, technically a matrix. A matrix is essentially a template, and can be made of wood, metal, or glass. The design is created on the matrix by working its flat surface with either tools or chemical. Common types of matrices include: plates of metal, usually copper or zinc for engraving or etching; stone, used for lithography; blocks of wood for woodcuts, linoleum for lino-cuts and fabric plates for screen-printing. Works printed from a single plate create an edition, in modern times usually each signed and numbered to form a limited edition. Prints may be published in book form, as artist's books. A single print could be the product of one or multiple techniques. Aside from technology, graphic design requires judgment and creativity. Critical, observational, quantitative and analytic thinking are required for design layouts and rendering. If the executor is merely following a solution (e.g. sketch, script or instructions) provided by another designer (such as an art director), then the executor is not usually considered the designer. Strategy Strategy is becoming more and more essential to effective graphic design. The main distinction between graphic design and art is that graphic design solves a problem as well as being aesthetically pleasing. This balance is where strategy comes in. 
It is important for a graphic designer to understand their clients' needs, as well as the needs of the people who will be interacting with the design. It is the designer's job to combine business and creative objectives to elevate the design beyond purely aesthetic means.
Tools
The method of presentation (e.g. arrangement, style, medium) is important to the design. The development and presentation tools can change how an audience perceives a project. The image or layout is produced using traditional media and guides, or digital image editing tools on computers. Tools in computer graphics often take on traditional names such as "scissors" or "pen". Some graphic design tools such as a grid are used in both traditional and digital form. In the mid-1980s, desktop publishing and graphic art software applications introduced computer image manipulation and creation capabilities that had previously been manually executed. Computers enabled designers to instantly see the effects of layout or typographic changes, and to simulate the effects of traditional media. Traditional tools such as pencils can be useful even when computers are used for finalization; a designer or art director may sketch numerous concepts as part of the creative process. Styluses can be used with tablet computers to capture hand drawings digitally.
Computers and software
Designers disagree over whether computers enhance the creative process. Some designers argue that computers allow them to explore multiple ideas quickly and in more detail than can be achieved by hand-rendering or paste-up, while others find that the limitless choices of digital design can lead to paralysis or endless iterations with no clear outcome. Most designers use a hybrid process that combines traditional and computer-based technologies. First, hand-rendered layouts are used to get approval to execute an idea, then the polished visual product is produced on a computer. Graphic designers are expected to be proficient in software programs for image-making, typography and layout. Nearly all of the popular and "industry standard" software programs used by graphic designers since the early 1990s are products of Adobe Systems Incorporated. Adobe Photoshop (a raster-based program for photo editing) and Adobe Illustrator (a vector-based program for drawing) are often used in the final stage. Some designers across the world use CorelDraw, a vector graphics editor developed and marketed by Corel Corporation. Inkscape is an open-source vector graphics editor; its primary file format is Scalable Vector Graphics (SVG), and files can be imported from or exported to other vector formats. Designers often use pre-designed raster images and vector graphics in their work from online design databases. Raster images may be edited in Adobe Photoshop, vector logos and illustrations in Adobe Illustrator and CorelDraw, and the final product assembled in one of the major page layout programs, such as Adobe InDesign, Serif PagePlus and QuarkXPress. Powerful open-source programs (which are free) are also used by both professionals and casual users for graphic design; these include Inkscape (for vector graphics), GIMP (for photo-editing and image manipulation), Krita (for painting), and Scribus (for page layout).
Related design fields
Interface design
Since the advent of personal computers, many graphic designers have become involved in interface design, in an environment commonly referred to as a Graphical User Interface (GUI).
This has included web design and software design when end-user interactivity is a design consideration of the layout or interface. Combining visual communication skills with an understanding of user interaction and online branding, graphic designers often work with software developers and web developers to create the look and feel of a web site or software application. An important aspect of interface design is icon design.
User experience design
User experience design (UX) is the study, analysis, and development of products that provide meaningful and relevant experiences to users. This involves the creation of the entire process of acquiring and integrating the product, including aspects of branding, design, usability, and function.
Experiential graphic design
Experiential graphic design is the application of communication skills to the built environment. This area of graphic design requires practitioners to understand physical installations that have to be manufactured and withstand the same environmental conditions as buildings. As such, it is a cross-disciplinary collaborative process involving designers, fabricators, city planners, architects, manufacturers and construction teams. Experiential graphic designers try to solve problems that people encounter while interacting with buildings and space (also called environmental graphic design). Examples of practice areas for environmental graphic designers are wayfinding, placemaking, branded environments, exhibitions and museum displays, public installations and digital environments.
Occupations
Graphic design career paths cover all parts of the creative spectrum and often overlap. Workers perform specialized tasks, such as design services, publishing, advertising and public relations. As of 2017, median pay was $48,700 per year. The main job titles within the industry are often country specific. They can include graphic designer, art director, creative director, animator and entry-level production artist. Depending on the industry served, the responsibilities may have different titles such as "DTP Associate" or "Graphic Artist". The responsibilities may involve specialized skills such as illustration, photography, animation, visual effects or interactive design. Employment in design of online projects was expected to increase by 35% by 2026, while employment in traditional media, such as newspaper and book design, was expected to decline by 22%. Graphic designers will be expected to constantly learn new techniques, programs, and methods. Graphic designers can work within companies devoted specifically to the industry, such as design consultancies or branding agencies; others work within publishing, marketing or other communications companies. Especially since the introduction of personal computers, many graphic designers work as in-house designers in non-design oriented organizations. Graphic designers may also work freelance, setting their own terms, prices and ideas. A graphic designer typically reports to the art director, creative director or senior media creative. As a designer becomes more senior, they spend less time designing and more time leading and directing other designers on broader creative activities, such as brand development and corporate identity development. They are often expected to interact more directly with clients, for example taking and interpreting briefs.
Crowdsourcing in graphic design
Jeff Howe of Wired Magazine first used the term "crowdsourcing" in his 2006 article, "The Rise of Crowdsourcing." It spans such creative domains as graphic design, architecture, apparel design, writing, illustration, etc. Tasks may be assigned to individuals or a group and may be categorized as convergent or divergent. An example of a divergent task is generating alternative designs for a poster; an example of a convergent task is selecting one poster design. Companies, startups, small businesses and entrepreneurs have all benefited from design crowdsourcing, since it allows them to source graphic designs at a fraction of their previous budgets; obtaining a logo design is one of the most common uses. Major companies that operate in the design crowdsourcing space are generally referred to as design contest sites.
See also
Related areas
Related topics
References
Bibliography
Fiell, Charlotte and Fiell, Peter (editors). Contemporary Graphic Design. Taschen Publishers, 2008.
Wiedemann, Julius and Taborda, Felipe (editors). Latin-American Graphic Design. Taschen Publishers, 2008.
External links
The Universal Arts of Graphic Design – Documentary produced by Off Book
Graphic Designers, entry in the Occupational Outlook Handbook of the Bureau of Labor Statistics of the United States Department of Labor
Communication design
8978893
https://en.wikipedia.org/wiki/NERC%20Tag
NERC Tag
A NERC Tag, also commonly referred to as an E-Tag, represents a transaction on the North American bulk electricity market scheduled to flow within, between or across electric utility company territories. The NERC Tag is named for the North American Electric Reliability Corporation (NERC), which is the entity that was responsible for the implementation of the first energy tagging processes. NERC Tags were first introduced in 1997, in response to the need to track the increasingly complicated energy transactions which were produced as a result of the beginning of electric deregulation in North America. Electric Deregulation in North America The Federal Energy Regulatory Commission (FERC)'s Energy Policy Act of 1992 was the first major step towards electric deregulation in North America, and was followed by a much more definitive action when FERC issued Orders 888 and 889 in 1996, which laid the groundwork for formalized deregulation of the industry and led to the creation of the network of Open Access Same-Time Information System (OASIS) nodes. FERC is an independent agency of the U.S. Government and thus its authority extends only over electric utilities operating in the United States. However, NERC members include all of the FERC footprint as well as all of the electric utilities in lower Canada and a Mexican utility company. In the interest of reciprocity and commonality, all NERC members generally cooperate with FERC rules. The creation of OASIS nodes allowed for energy to be scheduled across multiple power systems, creating complex strings of single "point-to-point" transactions which could be connected end-to-end to literally travel across the continent. This frequently created situations where it was difficult or impossible for transmission system operators to ascertain all of the transactions impacting their local system or take any corrective actions to alleviate situations which could put the power grid at risk of damage or collapse. The NERC Tag was implemented as a result of this new problem introduced by deregulation. NERC Tag Versions NERC Tag 1.x The earliest NERC Tag application was based on a Microsoft Excel spreadsheet, and was introduced in 1997. The form was usually completed by the power marketers or schedulers, by defining the date and time of the transaction, the physical path of the energy schedule from its point of generation to point of consumption, the financial path (buying/selling chain) of the energy schedule, the hourly energy amounts scheduled to flow, and also the OASIS transmission requests for each power system crossed which thereby documented that permission to cross each power system had been properly obtained. Elements of a NERC Tag included Control Areas (CA), Transmission Providers (TP), Purchasing/Selling Entities (PSE), transmission Points of Receipt (POR) and Points of Delivery (POD), as well as product codes for several transmission and generation priorities. The physical path was the most important aspect of the NERC Tag in terms of understanding the impact of a collection of individual transactions after they had been compiled into a single complete transaction. To complete the physical path it was necessary to identify the power system and power plant where the energy was to be generated, any and all power systems that would be utilized to move the energy to its eventual destination, and lastly the power system and location of the delivery point where the energy would be consumed (the "load"). 
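The roles and path elements described above can be pictured as a single structured record. The following sketch, in Python, is purely illustrative: the class, field, and identifier names are invented for this example and do not reproduce the actual NERC Tag or E-Tag data formats or registry codes. It simply shows how one transaction ties together the financial chain of purchasing/selling entities, the physical path of control areas and transmission providers with their points of receipt and delivery and OASIS requests, and the hourly schedule.

# Illustrative only: class, field and identifier names are hypothetical and do
# not reproduce the actual NERC Tag / E-Tag formats or registry identifiers.
from dataclasses import dataclass
from typing import List

@dataclass
class PathSegment:
    control_area: str            # CA whose system the schedule crosses
    transmission_provider: str   # TP granting transmission service
    point_of_receipt: str        # POR where energy enters the TP's system
    point_of_delivery: str       # POD where energy leaves the TP's system
    oasis_request: str           # OASIS reservation documenting permission

@dataclass
class EnergyScheduleTag:
    tag_id: str
    start: str                   # schedule start, e.g. "1999-06-01T00:00"
    stop: str                    # schedule stop
    source: str                  # generating control area and plant
    sink: str                    # consuming control area and load
    pse_chain: List[str]         # financial path: buying/selling chain of PSEs
    physical_path: List[PathSegment]
    hourly_mw: List[int]         # scheduled megawatts for each hour

    def peak_mw(self) -> int:
        return max(self.hourly_mw)

# A hypothetical 24-hour, 100 MW schedule crossing two transmission systems.
tag = EnergyScheduleTag(
    tag_id="EXAMPLE-001",
    start="1999-06-01T00:00", stop="1999-06-02T00:00",
    source="GEN_CA/PLANT_A", sink="LOAD_CA/CITY_LOAD",
    pse_chain=["SELLER_PSE", "MARKETER_PSE", "BUYER_PSE"],
    physical_path=[
        PathSegment("GEN_CA", "TP_WEST", "POR_A", "POD_B", "OASIS-0001"),
        PathSegment("LOAD_CA", "TP_EAST", "POR_B", "POD_LOAD", "OASIS-0002"),
    ],
    hourly_mw=[100] * 24,
)
print(tag.peak_mw())  # 100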
When a NERC Tag was created in the spreadsheet, the information was then distilled into a small CSV formatted data packet which was disseminated via e-mail to all of the participants listed on the NERC tag. In this way, all participants of a transaction were able to determine which other electric utilities and power marketers were involved in the transaction, and what the roles of the other participants were. More importantly, in the event of a contingency such as a transmission line outage or generation failure, all participants could more easily be notified of the schedule change, and could then all act in cooperation to curtail the scheduled transaction. The NERC Tag 1.0 implementation was not capable of collecting schedule flow data in any useful way, but it did serve to familiarize schedulers with the demands of tagging their transactions, a process that would eventually be mandatory. A database of transmission scheduling points maintained by NERC through the Transmission System Information Networks (TSIN) that was originally developed for the OASIS nodes was greatly expanded to include additional information required in the process of creating NERC Tags. The spreadsheet-based NERC Tag application saw minor improvements in functionality and scope with small incremental changes which advanced it to NERC Tag 1.3, although there was not much discernible difference to the participants, and until version 1.4 was implemented, any previous version could still be used. E-Tag 1.4 and 1.5 Not long after NERC introduced the NERC Tag spreadsheet and packet emailer, NERC concluded that it did not want involvement in any future software development or maintenance. A NERC Tag specification document, version 1.4, was drafted as the next level in energy tagging, the NERC Tag would subsequently also be known as an E-Tag. Data transfer would now occur directly over an Internet connection instead of via e-mail. This eliminated the cumbersome process required to receive a data packet via email and port it back into the original spreadsheet-based tagging application. This change made the NERC Tag much easier to use in a real-time application. E-Tag 1.4 went into effect in 1999, but was replaced just nine months later with E-Tag 1.5, followed three months later with E-Tag 1.501. The 1.5 and 1.501 Specs corrected the shortcomings experienced with the initial release of the first E-Tag Spec. Although NERC was responsible for the E-Tag Spec, it opened development of the application to run it to the software market. Initially there were numerous E-Tagging software providers, mainly a mix of small start-ups and new applications developed by existing energy industry software developers. The E-Tag 1.5 Spec was written in such a way that the various applications were permitted to have differing graphical user interfaces (GUIs), but functionally "under the hood" they were required to be able to interact with each other when transmitting, receiving and processing E-Tags. A new feature introduced with E-Tag 1.4/1.5, made possible by the real-time sharing of E-Tags, was the ability for reliability entities (namely the CA's and TP's) in the E-Tag to electronically approve or deny E-Tags based on various criteria. The arrival of real-time tagging also enabled NERC to begin collecting real-time and short-term future data regarding the energy transactions scheduled throughout the North American power grid. 
The data from approved transactions was ported to the Interchange Distribution Calculator (IDC), where the data could be applied to a virtual study model of the Eastern Interconnection. The IDC went online in 1999. E-Tag 1.6 Building on the lessons experienced with E-Tag applications to date, E-Tag 1.6 went into effect in 2000. There were seven variations of E-Tag 1.6, up to E-Tag 1.67 which was in effect until late 2002. Most of the changes in E-Tag 1.6 were of a functional nature and not overly apparent to the users. Under E-Tag 1.6, NERC implemented the "no tag, no flow" rule, where all energy transactions were to be documented with an E-Tag. Accurate system studies of the Eastern Interconnection in order to determine which schedules should be curtailed would only be possible if every transaction was tagged and therefore included in the IDC calculations. Reliability Coordinators in the Eastern Interconnection could access the IDC online and run flow studies based on various operating scenarios with all of the current energy schedules derived from the E-Tags. When an actual contingency occurred, the Reliability Coordinators could identify the constrained transmission line or corridor within the IDC, and the IDC would then identify which E-Tagged schedules should be curtailed in order to ease the loading on the restricted facilities. E-Tag 1.7 NERC's E-Tag 1.7 Specification completely reworked the E-Tag platform from scratch. Some users said that it was so significant that it might have been more appropriate to have called it "E-Tag 2.0". For the first time, Extensible Markup Language (XML) was utilized to format the data transferred between E-Tag applications, finally replacing the base CSV data transfer format based on its ancestral NERC Tag 1.0 spreadsheet/e-mail origins. The TSIN database was expanded to include generation and load points which were matched with PSEs that had rights to schedule them, and also included complex associations that enforced matched sets of PORs and PODs with TPs. E-Tag 1.7 also greatly expanded the time frame flexibility of an E-Tag by allowing extensions and modifications with comprehensive approval processes, layering of multiple OASIS requests for transmission rights, and also fully automated the tag curtailment functions from the IDC so that individual manual tag curtailments were no longer necessary. Shortly after E-Tag 1.7 went online in 2002, the Western Electricity Coordinating Council (WECC) implemented the WECC Unscheduled Flow (USF) Tool, which accomplished a similar automated curtailing capability for the Western Interconnection that the IDC had done for the Eastern Interconnection. The number of software choices for E-Tag software dwindled within the first few years to a handful of major players. The number of E-Tag users was strictly limited by the number of entities involved in E-Tagging, and the cost of complying with NERC E-Tag Specifications became prohibitive for any software company that did not already have significant market share or adequate financial backing. The added complexities of E-Tag 1.7 dealt a severe blow to most of the E-Tagging software providers, and within a year of E-Tag 1.7 going online, there was only one dominant E-Tag software provider remaining, which also provided all IDC and WECC USF services, though a few holdouts and customer-developed "in-house" E-Tag applications remain. Version 1.7.097 of E-Tag was implemented on January 3, 2007. 
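The shift from the CSV packets of the early spreadsheet tool to XML under E-Tag 1.7 can be illustrated with a short serialization sketch. The element and attribute names below are invented for this illustration and are not the actual E-Tag 1.7 message schema, which is defined by the NERC specification and is far more elaborate; the point is only that the same tag content can be expressed as structured XML rather than a flat comma-separated record.

# Illustrative only: these element and attribute names are hypothetical and do
# not follow the real E-Tag 1.7 XML schema defined by the NERC specification.
import xml.etree.ElementTree as ET

def tag_to_xml(tag_id, pse_chain, path_segments, hourly_mw):
    root = ET.Element("EnergyTag", attrib={"id": tag_id})
    financial = ET.SubElement(root, "FinancialPath")
    for pse in pse_chain:
        ET.SubElement(financial, "PSE").text = pse
    physical = ET.SubElement(root, "PhysicalPath")
    for segment in path_segments:
        element = ET.SubElement(physical, "Segment")
        for key, value in segment.items():
            element.set(key, value)  # ca, tp, por, pod, oasis
    profile = ET.SubElement(root, "EnergyProfile")
    for hour, mw in enumerate(hourly_mw, start=1):
        ET.SubElement(profile, "Hour", attrib={"he": str(hour), "mw": str(mw)})
    return ET.tostring(root, encoding="unicode")

example = tag_to_xml(
    "EXAMPLE-001",
    ["SELLER_PSE", "BUYER_PSE"],
    [{"ca": "GEN_CA", "tp": "TP_WEST", "por": "POR_A", "pod": "POD_B", "oasis": "OASIS-0001"}],
    [100] * 24,
)
print(example[:60])  # beginning of the generated XML document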
E-Tag 1.8 Five years following the release of E-Tag 1.7, a major update was developed and implemented on December 4, 2007. E-Tag 1.8 cleaned up some long-standing issues not easily addressed with minor revisions to E-Tag 1.7 and brought the E-Tag applications back up to current industry policy standards. Future of E-Tag OASIS primarily deals with the purchase and availability of transmission from individual transmission providers with a forward-looking time frame, while E-Tag is focused on real-time scheduling and power flow management across multiple systems. Nonetheless, the FERC-derived OASIS applications and NERC-derived E-Tag applications are somewhat duplicative. FERC's plan for the eventual introduction of OASIS Phase 2 envisions a combined platform to post transmission offerings, allow transmission purchases, and facilitate scheduling and flow management, effectively merging the essential functions of E-Tag and OASIS. However, there has been very little activity to move towards OASIS Phase 2 since the introduction of E-Tag 1.7 in 2002, and the future remains unclear. As both systems have increased in complexity over time, the difficulties in merging the two independently evolved systems have likewise also increased. External links North American Electric Reliability Corporation (NERC) NERC/TSIN E-Tag 1.7 Page Transmission System Information Networks (TSIN) TSIN List of OASIS node links Federal Energy Regulatory Commission (FERC) Electric power transmission Electric power distribution Electric power in North America Electricity markets
24885771
https://en.wikipedia.org/wiki/Tianhe-1
Tianhe-1
Tianhe-I, Tianhe-1, or TH-1 (Sky River Number One) is a supercomputer capable of an Rmax (maximal achieved LINPACK performance) of 2.5 petaFLOPS. Located at the National Supercomputing Center of Tianjin, China, it was the fastest computer in the world from October 2010 to June 2011 and was one of the few petascale supercomputers in the world. In October 2010, an upgraded version of the machine (Tianhe-1A) overtook ORNL's Jaguar to become the world's fastest supercomputer, with a peak computing rate of 2.57 petaFLOPS. In June 2011 the Tianhe-1A was overtaken by the K computer (itself subsequently superseded) as the world's fastest supercomputer. Both the original Tianhe-1 and Tianhe-1A use a Linux-based operating system. On 12 August 2015, the 186,368-core Tianhe-1 felt the impact of the powerful Tianjin explosions and went offline for some time. Xinhua reported that "the office building of Chinese supercomputer Tianhe-1, one of the world's fastest supercomputers, suffered damage". Sources at Tianhe-1 told Xinhua the computer was not damaged, but that some of its operations had been shut down as a precaution. Operation resumed on 17 August 2015.
Background
Tianhe-1
Tianhe-1 was developed by the Chinese National University of Defense Technology (NUDT) in Changsha, Hunan. It was first revealed to the public in 2009, and was immediately ranked as the world's fifth fastest supercomputer in the TOP500 list released at the 2009 Supercomputing Conference (SC09) held in Portland, Oregon. Tianhe achieved a speed of 563 teraflops in its first TOP500 test and had a peak performance of 1.2 petaflops; thus at startup, the system had an efficiency of 46%. Originally, Tianhe-1 was powered by 4,096 Intel Xeon E5540 processors and 1,024 Intel Xeon E5450 processors, with 5,120 AMD graphics processing units (GPUs), which were made up of 2,560 dual-GPU ATI Radeon HD 4870 X2 graphics cards.
Tianhe-1A
In October 2010, Tianhe-1A, an upgraded supercomputer, was unveiled at HPC 2010 China. It is now equipped with 14,336 Xeon X5670 processors and 7,168 Nvidia Tesla M2050 general purpose GPUs. 2,048 FeiTeng 1000 SPARC-based processors are also installed in the system, but their computing power was not counted into the machine's official LINPACK statistics as of October 2010. Tianhe-1A has a theoretical peak performance of 4.701 petaflops. NVIDIA suggests that it would have taken "50,000 CPUs and twice as much floor space to deliver the same performance using CPUs alone." The current heterogeneous system consumes 4.04 megawatts, compared to over 12 megawatts had it been built only with CPUs. The Tianhe-1A system is composed of 112 computer cabinets, 12 storage cabinets, 6 communications cabinets, and 8 I/O cabinets. Each computer cabinet is composed of four frames, with each frame containing eight blades, plus a 16-port switching board. Each blade is composed of two computer nodes, with each computer node containing two Xeon X5670 6-core processors and one Nvidia M2050 GPU processor. The system has 3,584 total blades containing 7,168 GPUs and 14,336 CPUs, managed by the SLURM job scheduler. The total disk storage of the system is 2 petabytes, implemented as a Lustre clustered file system, and the total memory size of the system is 262 terabytes. Another significant reason for the increased performance of the upgraded Tianhe-1A system is the Chinese-designed NUDT custom proprietary high-speed interconnect, called Arch, which runs at 160 Gbit/s, twice the bandwidth of InfiniBand.
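The hardware figures quoted above are internally consistent, which a short back-of-the-envelope check confirms. The Python sketch below (illustrative only) recomputes the blade, node, CPU and GPU counts from the cabinet layout described in this section and relates the quoted 2.57-petaFLOPS LINPACK figure to the 4.701-petaFLOPS theoretical peak.

# Sanity check on the Tianhe-1A figures quoted in this section.
cabinets = 112
frames_per_cabinet = 4
blades_per_frame = 8
nodes_per_blade = 2

blades = cabinets * frames_per_cabinet * blades_per_frame
nodes = blades * nodes_per_blade
cpus = nodes * 2   # two Xeon X5670 processors per node
gpus = nodes * 1   # one Nvidia Tesla M2050 per node
print(blades, cpus, gpus)          # 3584 14336 7168, matching the text above

# Ratio of the quoted LINPACK result to the theoretical peak (Rmax / Rpeak).
rmax_pflops = 2.57
rpeak_pflops = 4.701
print(round(rmax_pflops / rpeak_pflops, 2))   # roughly 0.55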
The system also used the Chinese-made FeiTeng-1000 central processing unit. The FeiTeng-1000 processor is used both on service nodes and to enhance the system interconnect. The supercomputer is installed at the National Supercomputing Center, Tianjin, and is used to carry out computations for petroleum exploration and aircraft design. It is an "open access" computer, meaning it provides services for other countries. The supercomputer will be available to international clients. The computer cost $88 million to build. Approximately $20 million is spent annually for electricity and operating expenses. Approximately 200 workers are employed in its operation. Tianhe-IA was ranked as the world's fastest supercomputer in the TOP500 list until July 2011 when the K computer overtook it. In June 2011, scientists at the Institute of Process Engineering (IPE) at the Chinese Academy of Sciences (CAS) announced a record-breaking scientific simulation on the Tianhe-1A supercomputer that furthers their research in solar energy. CAS-IPE scientists ran a complex molecular dynamics simulation on all 7,168 NVIDIA Tesla GPUs to achieve a performance of 1.87 petaflops (about the same performance as 130,000 laptops). The Tianhe-1A supercomputer was shut down after the National Supercomputing Center of Tianjin was damaged by an explosion nearby. The computer was not damaged and still remains operational. See also HPC Challenge Benchmark Supercomputing in China Tianhe-2 References External links National University of Defense Technology Official website 2009 in science Petascale computers Supercomputing in China X86 supercomputers
18938782
https://en.wikipedia.org/wiki/GNU%20Free%20Documentation%20License
GNU Free Documentation License
The GNU Free Documentation License (GNU FDL or simply GFDL) is a copyleft license for free documentation, designed by the Free Software Foundation (FSF) for the GNU Project. It is similar to the GNU General Public License, giving readers the rights to copy, redistribute, and modify (except for "invariant sections") a work and requires all copies and derivatives to be available under the same license. Copies may also be sold commercially, but, if produced in larger quantities (greater than 100), the original document or source code must be made available to the work's recipient. The GFDL was designed for manuals, textbooks, other reference and instructional materials, and documentation which often accompanies GNU software. However, it can be used for any text-based work, regardless of subject matter. For example, the free online encyclopedia Wikipedia uses the GFDL (coupled with the Creative Commons Attribution Share-Alike License) for much of its text, excluding text that was imported from other sources after the 2009 licensing update that is only available under the Creative Commons license. History The GFDL was released in draft form for feedback in September 1999. After revisions, version 1.1 was issued in March 2000, version 1.2 in November 2002, and version 1.3 in November 2008. The current state of the license is version 1.3. The first discussion draft of the GNU Free Documentation License version 2 was released on September 26, 2006, along with a draft of the new GNU Simpler Free Documentation License. On December 1, 2007, Wikipedia founder Jimmy Wales announced that a long period of discussion and negotiation between and amongst the Free Software Foundation, Creative Commons, the Wikimedia Foundation and others had produced a proposal supported by both the FSF and Creative Commons to modify the Free Documentation License in such a fashion as to allow the possibility for the Wikimedia Foundation to migrate the projects to the similar Creative Commons Attribution Share-Alike (CC BY-SA) license. These changes were implemented on version 1.3 of the license, which includes a new provision allowing certain materials released under the license to be used under a Creative Commons Attribution Share-Alike license also. Conditions Material licensed under the current version of the license can be used for any purpose, as long as the use meets certain conditions. All previous authors of the work must be attributed. All changes to the work must be logged. All derivative works must be licensed under the same license. The full text of the license, unmodified invariant sections as defined by the author if any, and any other added warranty disclaimers (such as a general disclaimer alerting readers that the document may not be accurate for example) and copyright notices from previous versions must be maintained. Technical measures such as DRM may not be used to control or obstruct distribution or editing of the document. Secondary sections The license explicitly separates any kind of "Document" from "Secondary Sections", which may not be integrated with the Document, but exist as front-matter materials or appendices. Secondary sections can contain information regarding the author's or publisher's relationship to the subject matter, but not any subject matter itself. 
While the Document itself is wholly editable and is essentially covered by a license equivalent to (but mutually incompatible with) the GNU General Public License, some of the secondary sections have various restrictions designed primarily to deal with proper attribution to previous authors. Specifically, the authors of prior versions have to be acknowledged and certain "invariant sections" specified by the original author and dealing with his or her relationship to the subject matter may not be changed. If the material is modified, its title has to be changed (unless the prior authors give permission to retain the title). The license also has provisions for the handling of front-cover and back-cover texts of books, as well as for "History", "Acknowledgements", "Dedications" and "Endorsements" sections. These features were added in part to make the license more financially attractive to commercial publishers of software documentation, some of whom were consulted during the drafting of the GFDL. "Endorsements" sections are intended to be used in official standard documents, where the distribution of modified versions should only be permitted if they are not labeled as that standard anymore.
Commercial redistribution
The GFDL requires the ability to "copy and distribute the Document in any medium, either commercially or noncommercially" and therefore is incompatible with material that excludes commercial re-use. As mentioned above, the GFDL was designed with commercial publishers in mind, as Stallman explained:
Material that restricts commercial re-use is incompatible with the license and cannot be incorporated into the work. However, incorporating such restricted material may be fair use under United States copyright law (or fair dealing in some other countries) and does not need to be licensed to fall within the GFDL if such fair use is covered by all potential subsequent uses. One example of such liberal and commercial fair use is parody.
Compatibility with Creative Commons licensing terms
Although the two licenses work on similar copyleft principles, the GFDL is not compatible with the Creative Commons Attribution-ShareAlike license. However, at the request of the Wikimedia Foundation, version 1.3 added a time-limited section allowing specific types of websites using the GFDL to additionally offer their work under the CC BY-SA license. These exemptions allow a GFDL-based collaborative project with multiple authors to transition to the CC BY-SA 3.0 license, without first obtaining the permission of every author, if the work satisfies several conditions:
The work must have been produced on a "Massive Multiauthor Collaboration Site" (MMC), such as a public wiki for example.
If external content originally published on an MMC is present on the site, the work must have been licensed under Version 1.3 of the GNU FDL, or an earlier version but with the "or any later version" declaration, with no cover texts or invariant sections.
If it was not originally published on an MMC, it can only be relicensed if it was added to an MMC before November 1, 2008.
To prevent the clause from being used as a general compatibility measure, the license itself only allowed the change to occur before August 1, 2009. At the release of version 1.3, the FSF stated that all content added to Wikipedia (as an example) before November 1, 2008 satisfied the conditions.
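For illustration only, the relicensing window created by these conditions can be summarized as a handful of boolean and date checks. The Python sketch below paraphrases the conditions listed above; the function and parameter names are invented for this example, and in any real case the exact wording of section 11 of the license governs.

# Paraphrase of the GFDL 1.3 relicensing conditions described above; illustrative
# only and not a substitute for the license text itself.
from datetime import date

def eligible_for_cc_by_sa(
    produced_on_mmc: bool,              # work produced on a Massive Multiauthor Collaboration Site
    compatible_gfdl_license: bool,      # GFDL 1.3, or earlier with "or any later version"
    has_invariant_or_cover_texts: bool, # invariant sections or cover texts present
    originally_published_on_mmc: bool,
    date_added_to_mmc: date,
    relicensing_date: date,
) -> bool:
    if not produced_on_mmc or not compatible_gfdl_license:
        return False
    if has_invariant_or_cover_texts:
        return False
    # Material not originally published on an MMC had to be added before November 1, 2008.
    if not originally_published_on_mmc and date_added_to_mmc >= date(2008, 11, 1):
        return False
    # The option itself could only be exercised before August 1, 2009.
    return relicensing_date < date(2009, 8, 1)

# Wikipedia-style example: wiki-native text relicensed in June 2009.
print(eligible_for_cc_by_sa(True, True, False, True, date(2008, 1, 1), date(2009, 6, 15)))  # True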
The Wikimedia Foundation itself, after a public referendum, invoked this process to dual-license content released under the GFDL under the CC BY-SA license in June 2009, and adopted a foundation-wide attribution policy for the use of content from Wikimedia Foundation projects.
Enforcement
There have so far been no cases involving the GFDL in a court of law, although its sister license for software, the GNU General Public License, has been successfully enforced in such a setting. Although the content of Wikipedia has been plagiarized and used in violation of the GFDL by other sites, such as Baidu Baike, no contributors have ever tried to bring an organization to court due to violation of the GFDL. In the case of Baidu, Wikipedia representatives asked the site and its contributors to respect the terms of the licenses and to make proper attributions.
Criticism
Some critics consider the GFDL a non-free license. Some reasons for this are that the GFDL allows "invariant" text which cannot be modified or removed, and that its prohibition against digital rights management (DRM) systems applies to valid usages, such as "private copies made and not distributed". Notably, the Debian project, Thomas Bushnell, Nathanael Nerode, and Bruce Perens have raised objections. Bruce Perens saw the GFDL even outside the "Free Software ethos":
In 2006, Debian developers voted to consider works licensed under the GFDL to comply with their Debian Free Software Guidelines provided the invariant section clauses are not used. The result was that the GFDL without invariant sections is DFSG-compliant. However, their resolution stated that even without invariant sections, GFDL-licensed software documentation "is still not free of trouble", namely because of its incompatibility with the major free software licenses. Those opposed to the GFDL have recommended the use of alternative licenses such as the BSD License or the GNU GPL. The FLOSS Manuals foundation, an organization devoted to creating manuals for free software, decided to eschew the GFDL in favor of the GPL for its texts in 2007, citing the incompatibility between the two, difficulties in implementing the GFDL, and the fact that the GFDL "does not allow for easy duplication and modification", especially for digital documentation.
DRM clause
The GNU FDL contains the statement:
A criticism of this language is that it is too broad, because it applies to private copies made but not distributed. This means that a licensee is not allowed to save document copies "made" in a proprietary file format or using encryption. In 2003, Richard Stallman said about the above sentence on the debian-legal mailing list:
Invariant sections
A GNU FDL work can quickly be encumbered because a new, different title must be given and a list of previous titles must be kept. This could lead to the situation where there are a whole series of title pages, and dedications, in every copy of the book if it has a long lineage. These pages cannot be removed until the work enters the public domain after copyright expires. Richard Stallman said about invariant sections on the debian-legal mailing list:
GPL incompatible in both directions
The GNU FDL is incompatible in both directions with the GPL—material under the GNU FDL cannot be put into GPL code and GPL code cannot be put into a GNU FDL manual.
At the June 22nd and 23rd 2006 international GPLv3 conference in Barcelona, Eben Moglen hinted that a future version of the GPL could be made suitable for documentation: Burdens when printing The GNU FDL requires that licensees, when printing a document covered by the license, must also include "this License, the copyright notices, and the license notice saying this License applies to the Document". This means that if a licensee prints out a copy of an article whose text is covered under the GNU FDL, they must also include a copyright notice and a physical printout of the GNU FDL, which is a significantly large document in itself. Worse, the same is required for the standalone use of just one (for example, Wikipedia) image. Wikivoyage, a web site dedicated to free content travel guides, chose not to use the GFDL because it considers it unsuitable for short printed texts. Other free content licenses Some of these were developed independently of the GNU FDL, while others were developed in response to perceived flaws in the GNU FDL. GNU Simpler Free Documentation License Creative Commons licenses Design Science License Free Art license FreeBSD Documentation License Open Content License Open Game License Open Publication License WTFPL List of projects that use the GFDL Most projects of the Wikimedia Foundation, including Wikipedia (excluding Wikivoyage and Wikinews) - On June 15, 2009, the Section 11 clauses were used to dual-license the content of these wikis under the Creative Commons Attribution Share-Alike license and GFDL. An Anarchist FAQ Citizendium - the project uses GFDL for articles originally from Wikipedia. Free On-line Dictionary of Computing Last.fm - artists descriptions are under GFDL Marxists Internet Archive PlanetMath (now uses CC-BY-SA license) Rosetta Code SourceWatch The specification documents that define TRAK, an enterprise architecture framework, are released under the GFDL. Abstract Algebra by Thomas W. Judson. the Baseball-Reference's BR Bullpen, a free user-contributed baseball wiki See also BSD license Copyleft Copyright Free software license GNU Non-commercial educational Open content Share-alike Software licensing References External links FSF guide to the new drafts of documentation licenses GFDL official text Free Software and Free Manuals, essay by Richard Stallman Apple's Common Documentation License , an alternative license GNU Project Free content licenses Software documentation Copyleft Free Software Foundation Free and open-source software licenses Copyleft software licenses
7354269
https://en.wikipedia.org/wiki/Web%20desktop
Web desktop
A web desktop or webtop is a desktop environment embedded in a web browser or similar client application. A webtop integrates web applications, web services, client–server applications, application servers, and applications on the local client into a desktop environment using the desktop metaphor. Web desktops provide an environment similar to that of Windows, Mac, or a graphical user interface on Unix and Linux systems. It is a virtual desktop running in a web browser. In a webtop the applications, data, files, configuration, settings, and access privileges reside remotely over the network. Much of the computing takes place remotely. The browser is primarily used for display and input purposes. The terms "web desktop" and "webtop" are distinct from web operating system, a network operating system such as TinyOS or distributed operating system such as Inferno. In popular use, web desktops are sometimes referred to incorrectly as web operating systems or simply WebOS.
History
In the context of a web desktop, the term Webtop was first introduced by the Santa Cruz Operation (SCO) in 1994 for a web-based interface to their Unix operating system. This application was based on the provisional application entitled "The Adaptive Internet Protocol System", filed Nov. 13, 1997, serial number 60/065,521, which led to the U.S. patent for the technology used in the Tarantella Webtop. Andy Bovingdon and Ronald Joe Record, who both explored the concepts in different directions, are often credited as the inventors. The initial SCO Webtop, developed by Record, utilized a Netscape Navigator plugin to display applications in a browser window via TightVNC. A trademark application for "SCO Webtop" was filed with the U.S. Patent and Trademark Office on November 8, 1996. In order to avoid confusion with the more complex technology incorporated into the Tarantella Webtop, it was abandoned on December 24, 1997 by The Santa Cruz Operation. Bovingdon's three-tiered architecture (TTA) concept was launched as the Tarantella Webtop. This technology originated from early commercial use of web server technology by SCO, the first OS vendor to include a commercial web server, NCSA HTTPd, and a commercial web browser, NCSA Mosaic. Their X.desktop product line, obtained when they acquired IXI Limited in the UK, was the first to have icons for URLs (controlled via the Deskshell scripting language) and an HTML-based help system, called DeskHelp, which extended the NCSA Mosaic web browser to include APIs and scripting linked to the X.desktop product for interactive control. The IXI Limited scripting language based on Python was later replaced with JavaScript. Tarantella allowed real UNIX and Windows applications to be displayed within a web browser through the use of Java to form a true web-based desktop or Webtop. The first SCO Webtop releases were part of SCO Skunkware before being integrated into SCO OpenServer version 5 and UnixWare 7. Tarantella was subsequently purchased by Sun Microsystems and integrated into their Sun Secure Global Desktop. Byte magazine referred to the Webtop as a NUI (Network User Interface). More recently, Google released Chrome OS, an operating system built around the web browser, and several 11–12" netbooks from Acer and Samsung have implemented the system. It is thought to represent a useful fraction (~10%) of netbook sales as of 2012.
Advantages
Convenience
A personalized desktop on every supported client device
Mobility
Access your desktop anywhere there is a supported client device
Session management
Server-side session management allows roaming users to access restored sessions from anywhere
Software management
Ensures all users are running the same current versions of all applications
Updates and patches need only be applied to the server - no need to update multiple clients
No need for software to distribute software over the network
Security
Less prone to typical attacks, viruses, worms, unpatched clients, vulnerabilities
Sensitive data stored on secure servers rather than scattered across multiple potentially unprotected and vulnerable clients (e.g. smart phones and laptops)
Encrypted transmission of all data between server and clients (e.g. HTTPS)
Software management features (above) accommodate quick and easy application of security advisories on the server side
The webtop administrator can control which applications and data each user is allowed to access
High availability
From a single device, access Windows, UNIX, Linux, and mainframe applications, all at the same time
Minimal hardware requirements for client devices (except for rendered technologies such as Flash/Flex/Silverlight)
Less downtime - a robust server system is more easily protected and less likely to fail than multiple client desktops
Fault tolerance - if a client device fails for any reason, simply replace it with any other supported client device without loss of data, configuration, preferences, or application access
Drawbacks
Security
Because all data is transferred over the internet, it might be possible for a cracker to intercept the connection and read data, although with the use of HTTPS 256-bit encryption and access control lists this can be safeguarded against.
Speed
When using a web desktop, the whole code used for visualization (.js/.css files, Flash player files, etc.) needs to be transferred to the local computer so that it can be displayed. Further, network latency or congestion can intermittently slow webtop activity. Offline application storage can mitigate this issue.
Application features
Some webtop-delivered applications may not contain the full feature set of their traditional desktop counterparts.
Network access
Web desktops require access to a network. If the client device is misconfigured or the network is unreachable, then the web desktop is unavailable.
Controlled access
In some webtop implementations and deployments a user's access to some applications and data can be restricted. This is also considered an advantage of webtops but can be viewed as a drawback from the user's perspective.
Central control
The normal webtop user is not able to install additional applications or update existing applications. Updates typically must be performed by an administrator on the server side. Webtop users are dependent upon the webtop administrator, whereas in the traditional desktop environment the user can fix and/or break the system by installing new software or updates. This can also be seen as an advantage for webtops.
Comparison of web desktops
The following tables compare general and technical information for a number of web desktops.
See also Comparison of remote desktop software Hosted desktop Online office suite Rich Internet application Virtual Network Computing Notes References SCO Tarantella Offers New Twist On an Old Thin-Client Dance, "Network Computing Magazine", Mark Andrew Seltzer, January 24, 2000 Ditch That Desktop for a Webtop, PC World, October 16, 2000 SCO Company History, Operating System Documentation Project SCO revamps UnixWare with Linux features, CNET News.com, February 23, 1999 SCO Showcases Latest In Network Computing for Real-World Environments, Network Computing News, April 29, 1997 Desktop environments
1217123
https://en.wikipedia.org/wiki/Tomboy%20%28software%29
Tomboy (software)
Tomboy is a free and open-source desktop notetaking app written for Windows, macOS, Linux, and BSD operating systems. Tomboy is part of the GNOME desktop environment. As Ubuntu changed over time and its cloud synchronization software Ubuntu One came and went, Tomboy inspired various forks and clones. Its interface is a word processor with a wiki-like linking system to connect notes together. Words in the note body that match existing note titles become hyperlinks automatically, making it simple to construct a personal wiki. For example, repeated references to favorite artists would be automatically highlighted in notes containing their names. As of version 1.6 (2010), it supports text entries and hyperlinks to the World Wide Web, but not graphic image linking or embedding.
Features
Some of the editing features supported:
Text highlighting
Inline spell checking using GtkSpell
Automatic hyperlinking of Web and email addresses
Undo/redo
Font styling and sizing
Bulleted lists
Note synchronization over SSH, WebDAV, Ubuntu One, or the Tomboy REST API that is implemented by several server applications
Plugins
Tomboy supports several plugins, including:
Evolution mail links
Galago/Pidgin presence
Note of the day (disabled by default)
Fixed width text
HTML export
LaTeX math (in a separate package, not installed by default)
Print
Ports
Conboy: a Tomboy port to the Maemo platform written in the C language
Gnote: a conversion of the Tomboy code to the C++ language that is not cross-platform
libktomgirl: a platform-independent library that reads and writes the Tomboy File Format
tomboy-ng: a port to the Pascal programming language that maintains compatibility with Tomboy and Tomdroid
Tomdroid: an effort to produce a Tomboy-compatible notetaking app for the Android operating system
See also
Comparison of notetaking software
References
External links
Software that uses Mono (software)
Free note-taking software
Cross-platform free software
Free software programmed in C Sharp
GNOME Applications
Note-taking software that uses GTK
33989804
https://en.wikipedia.org/wiki/Lorien%20Legacies
Lorien Legacies
Lorien Legacies is a series of young adult science fiction books, written by James Frey, Jobie Hughes, and formerly, Greg Boose, under the collective pseudonym Pittacus Lore.
Lorien Legacies
I Am Number Four
The Loric are a race of extraterrestrials driven nearly to extinction by a deadly battle with another alien race, the Mogadorians. The only survivors are nine teenagers and their corresponding guardians, Cêpans, who escaped to Earth. Now the Mogadorians intend to finish the job. Numbers One, Two, and Three have already been killed, causing three circular brands to appear on the ankles of the remaining Loric. The book is told from the perspective of Number Four, also known as John Smith. John is the next target for the Mogadorians, who must kill the Loric in the order of their numbers. He and his Cêpan, Henri, move from place to place, changing their identities to stay hidden. John begins inheriting Legacies, the ancestral right of special Loric known as Garde, and must learn to keep his powers hidden from his human friends.
The Power of Six
The story is told by two members of the Garde: Number Seven (Marina), who is hiding at a convent in Spain, and Number Four (John Smith), who is on the run with Sam, Number Six, and Bernie Kosar (John Smith's Chimæra). As John, Six and Sam try to stay ahead of the Mogadorians while searching for the other surviving Loric, Marina searches for news of John after his battle at the school that came at the end of I Am Number Four.
The Rise of Nine
The Rise of Nine, the third book of the Lorien Legacies, was released on August 21, 2012. John and Nine escape from the Mogadorian base and head to Chicago, where Nine's penthouse is located. They meet up with Six, Eight, Marina and Ella, the tenth child who escaped on the only other ship. While Six, Eight, Marina and Ella attempt to teleport to New Mexico, Six becomes stranded and is kidnapped by the Mogadorians. The remaining Loric invade Dulce base in order to free Six, Sarah, and Sam, as well as fight Setrákus Ra, leader of the Mogadorians.
The Fall of Five
The Fall of Five, the fourth book of the Lorien Legacies, was released on August 27, 2013. This book is also told in the first person, alternating between Sam Goode, John Smith (Four), and Marina (Seven).
The Revenge of Seven
The book is narrated in the first person, with Number Four (John), Number Six, and Ella as narrators.
The Fate of Ten
The Fate of Ten is the sixth book in the series, and was released on 1 September 2015. Its title was revealed via the Facebook page for the series, where it was also revealed that there would be seven books in the series, rather than the previously believed six. The cover for the book was released on 23 April 2015, via an interview with MTV. The book is primarily narrated in the first person, from the perspectives of Four (John), Six and, towards the end of the book, Ten (Ella). A prologue in the third person briefly narrates the story of an unnamed human-turned-Garde (later known as Daniela) as she survives a Mogadorian attack on New York.
United as One
United as One is the seventh and final book in the series. The title was revealed on October 26, 2015. It was released on June 28, 2016. The fact that John (Number Four) has Ximic may hint that he is Pittacus' descendant, as Ella is Setrákus'. Other clues include physical features, such as blonde hair, and Pittacus' Lumen. United as One is told in the first person from the perspectives of two characters, Number Four (John Smith) and Number Six.
Marina stands on the edge of a cliff in a dream, after she was left unconscious after a fight with Setrákus Ra. She sees Setrákus Ra taking the appearance of Eight, convincing her to let go and be with Eight, but she refuses and chooses to continue to fight. Five is in a padded cell, locked up by Nine and John. Setrákus Ra also comes to him in a dream to convince him to come back on his side. Five refuses. Mark walking the football field of his old school in Paradise, clutching a photo of Sarah. He is met by Setrákus Ra who tries to tell Mark he can bring Sarah back and they can fight together. It is unknown if Mark accepts the offer. John is in Patience Creek with Sam and Daniella when Ella, Marina, Six, Mark, Adam, and Lexa arrive. The Garde regroup and John has a moment with the deceased Sarah, and Mark is disgusted at his lack of emotion and blames him for her death. The Garde talk tactics with Lawson, the acting general. John becomes increasingly cold and distant, no longer content with playing defense. A plan is hatched to hijack one of the warships and to strip the skimmers of their cloaking devices to co-ordinate a united strike against the other Mogadorian warships. Some new Human-Garde post a video on the internet to attract John's attention to have him pick them up as they stand by a Loralite stone in Niagara Falls. The team splits, with half going to rescue the human garde, while John and Nine meet with Five, who is locked away in a cell. John wants Five to teach him how to fly so he can board the warship. However, he struggles to pick up the legacy, so Five attacks him, hoping the battle rush would help him learn the skill, which he does. However, Marina wakes up and levitates an icicle in front of Five's face, threatening to kill him. The other team, en route to Niagara Falls, encounter turbulence when Sam uses his technopathic legacy to accidentally turn off the ship's engines. With some help from Ella, Sam is able to visualize the workings of the ship and turn them back on. When they arrive on the scene they see fresh signs of a battle, with Skimmers destroyed and fire and ash swirling. The human garde managed to hold their own and fought off the Mogadorians. However, there is some disagreement if the teens are ready for war. To avoid any further people teleporting to Niagara Falls, Ella turns off the Loralite stone. As a neighboring warship makes its way there to check in, the team leaves for base. Adam instigates dissension in the Mogadorian ranks. A warship fires on Sydney while another captain in Moscow declares himself the beloved leader, which leads to the captain in Berlin making an assassination attempt on the usurper. In Kazakhstan another 2 warships meet and began to blow each other apart. Sam attempts to use his techno legacy to copy the cloaking device codes into everyday electronics, he builds a prototype that he asks the team to test when they attack the warship. John, Six, and Adam board the warship. John begins killing all the Mogadorians in a cold ruthless manner, while Adam and Six strip the skimmers of their cloaking devices. Before the day is done the Garde have stolen a warship. They meet a Mogadorian named Rex, whom Adam recognizes and saves from John's attack. Sam's tech is found to be successful. John, not wanting to sacrifice any more friends, takes off to finish Setrákus without the others knowing. He realizes he needs a cloaking device and heads back to Patience Creek. 
However, the base has been overrun by Mogadorians, who found the place through a mind-controlled Mark James. Some of the trueborn Mogadorians have augmentations, twisted forms of Legacies. Phiri Dun-Ra, whose arm has been replaced with black ooze-like tendrils, is there; she impales John and begins to drain his Legacies. While John is helpless, the mind-controlled Mark James slips a Voron noose around his neck, a weapon whose damage cannot be healed. The others find out about John's suicide mission and argue among themselves over his chosen course of action. Six calls Sam to tell him of his prototype's success but hears gunfire in the background; Sam tells her he thinks they are under attack. Back at the base, John is dragged helplessly through the corridors while Phiri mocks his failure as a hero and his delusion that he can save everyone. To prove her point she kills Mark. Phiri, John and the mind-controlling Mogadorian run into resistance. The Mogadorian releases a swarm of what appear to be black flies that force half the marines to open fire on the others. The human Garde appear and are shocked to see John crawling on all fours with a noose around his neck. He screams at them to run, but Phiri kills two of them while the other two escape. Sam shouts at the lights to turn off and frees Five, who immediately attacks Phiri; the mind-controlling Mogadorian manages to overtake Five but is quickly killed by Sam. Phiri screams for extraction and loses her grip on John as Five slams into her. A shadow Mogadorian appears and teleports both Phiri and Five away. The shadow Mogadorian tries to finish them off, but Sam tells the lights to turn on while he is mid-teleport, slicing him in half. The others have no time to mourn the dead and set their plan in motion. The Garde use the Loralite stone to teleport across the world, arming governments with the cloaking technology. The Garde aboard their stolen warship make their way to the West Virginia base where Setrákus waits. Six summons the biggest storm she has ever created to damage the Anubis, destroying it. Five flies Six and Adam into the base to deactivate the shield; once that is done, Marina, John, Five, and Nine make their way underground to find Setrákus. They find a mass grave of Mogadorians and human Garde, which repulses them and steels their resolve to end the war. Setrákus emerges from a pool of black ooze, covered in slime that has given him a youthful appearance and effective immortality. They try to fight him, but their Legacies are ineffective against him. Setrákus grabs Five and drowns him in the black puddle. He maims Nine, taking off his arm. While trying to heal Nine, Marina turns her healing Legacy on Setrákus, who recoils in pain. John, picking up on this, continues the healing assault and urges Marina to flee with Nine, who is in no shape to fight; she protests but John insists. Nine transfers his strength and anti-gravity Legacies to her, and she slings him over her shoulder and runs. John is locked in battle with Setrákus, both of them taking heavy damage, as John pours his healing Legacy into his enemy while using it on himself in small bursts to keep going. He establishes telepathic contact with Sam and tells him to blow up the mountain. Meanwhile, Adam and Six run into Phiri, who wounds Six and attacks Adam; Dust attacks Phiri but is quickly overwhelmed.
Six manages to pin Phiri to the wall telekinetically, but a Mogadorian sneaks up behind her and shoots her. Adam promptly kills the Mogadorian, but Phiri impales him and begins to drain his Loric spark, releasing a seismic wave that knocks Six back. Adam plunges his hands into the black, oily tentacles, which tear themselves away from Phiri's stump and bond with him. He uses his seismic Legacy to collapse the floor, and both he and Phiri fall in. Six tries to save him but there is nothing to grab telekinetically, and she realizes Dust has disappeared. Ella urges Six to make her way to the main entrance, where Marina and Nine are headed. Marina tries to get Six out of the base, but she refuses, asking Marina to heal her so she can finish the battle. Six makes her way to the central chamber, where John lies beaten and broken. She sees a frail, withered old man crawling across the cavern floor, picks up the Voron dagger and decapitates him. She goes over to John, whom she thinks is dead, but a gasp proves otherwise. His eyes swollen shut, he grasps Six's arm and whispers "Sarah?"; Six gives him a quick kiss before he falls unconscious. Six grabs John and they climb onto Bernie Kosar's back and fly out. John remembers only snippets after that, such as seeing Adam crying as he clutched Dust, frozen in his final wolf/snake form. John closes his eyes, not wanting to see any more sadness. One year later, John is secluded in the Himalayas, restoring the cave that Eight had found. He scrubs the prophecies from the wall, and Ella, who has kept him company for some months, tells him it is time. Clutching a small box, he flies to the Garde academy under construction in San Francisco to meet the others. John apologizes and gives Nine something from his box. Lexa is also at the academy, hacking into the dark web to protect the human Garde from potential threats. He then goes to New York, where repairs and construction are still under way, and observes Daniela from afar as she uses her stone Legacy to repair building foundations before embracing her mother. John sees Agent Walker in a Montreal car park, where she kills someone. He reveals himself and she is pleased to see him; when he tells her she had better have a good reason for the killing, she shows him what was in the man's briefcase: three vials of black ooze, which surprises John. Walker takes a vial from her own pocket and pours its contents into the black ooze, destroying the substance. Walker says she could use some help tracking the ooze down, and John hands her a blue Loralite pendant from his box, saying they will talk about it soon. He finds Adam in Alaska with Rex, in a prison camp with other Mogs who surrendered. John offers to break them out, but Adam replies that this is the best place for him to rehabilitate his people; it is mentioned that he received a full pardon but did not use it. John gives him a pendant. He goes to Paradise and hugs Malcolm, which brings back memories of Henri. He thinks about tracking down Mark James' father to tell him what really happened, and finally stands on Sarah's doorstep, still unable to tell her parents what happened to her. He tracks Six and Sam, who are on vacation, observes them embracing in the ocean, and leaves two pendants with a note. Lastly he looks for Marina, who he says would have taken ages to track were it not for her sporadic calls to Ella. Ella says she is not the same and has become paranoid and angry. He finds her in a speedboat navigating islands in the South Pacific.
John recognizes the signs of isolation from his own experience. When he appears she is not startled, asking whether he is really there or she has gone crazy. He replies that it is really him, and she smiles. She says he has come at a good time and shows him video footage of the West Virginia base, where an object that looks like a missile shoots up into the sky. Marina says she found Five the previous week on one of the small deserted islands, his body emaciated, with lumps of skin hanging off and dark patches of obsidian blotching it. Marina confides in John that she has tried to get over the war and all the death and destruction, but she just cannot. She reminds John that he told her she could decide what to do with Five, but she does not want to carry that burden with her. John suggests she get away from there; when Marina asks what to do with Five, John says that Five is a ghost and they are not. Marina returns with John to the Himalayas and finally cries when she sees what he has done with the cave. She reaches out to John, thanks him and kisses him; he is not sure what it means, reflecting that it is maybe nothing, maybe something. He shows Marina what he has been working on for the past year: a massive wooden table with a Loralite stone in the middle, reminiscent of the table in the Elders' chamber, with the Loric symbol of unity burned into the table and the pendants. John plans to use it as a meeting place where friends and allies, Loric, Garde and human alike, will gather to solve problems. The only exception is that there are not nine seats around the table, as he is done with numbers. The Lost Files series The Legacies The Legacies was released on July 24, 2012. It is a paperback edition of the three previously released Lost Files novellas: Six's Legacy, Nine's Legacy, and The Fallen Legacies. Originally published as individual e-books, this was the first time they were available in print. Six's Legacy Six's Legacy was released on July 26, 2011. It is told from the first-person perspective of Six. Before she met John in Paradise, Ohio, she lived with her Cêpan, Katarina. They were captured by the Mogadorians in New York, and Katarina was killed after Six gave the Mogs false information. While in prison, Six developed her Legacy of invisibility and used it to escape, taking revenge on the Mogadorian who killed Katarina before leaving. Years later, she hears of Number Four in Ohio and takes a bus there. Nine's Legacy Nine's Legacy was released on February 28, 2012. It is told from the first-person perspective of Nine. Nine and his Cêpan, Sandor, live in a penthouse atop the John Hancock Center in Chicago. Nine starts dating a girl, then realizes she was being used to find him and is captured. The Mogadorians torture Sandor to break Nine, and eventually Nine kills Sandor to put him out of his misery. Nine is held in the West Virginia Mog compound until he is rescued by John. The Fallen Legacies The Fallen Legacies was released on July 24, 2012. It is told from the first-person perspective of Adamus Sutekh, a Mogadorian general's son. The book starts in front of the Washington Monument in Washington, D.C., where Adam and his adopted brother Ivan are doing homework. Adam and Ivan are then called by their father: the Mogadorians have found a lead on Number One, and the boys accompany him to Malaysia to kill her.
A vat-born soldier (a Mogadorian soldier grown through genetic engineering) kills Number One just as she develops the Legacy to create earthquakes. Back at Ashwood Estates, a Mogadorian compound, Adam's father has him hooked up to a special machine connected to One's body, and he falls asleep. In his deep sleep he meets the ghost of One, who takes him through her life on Earth up until her death and tries to sow doubt in him about the Mogadorian cause. When Adam awakes, three years have passed; he cannot believe he has been asleep that long, but he is just in time to accompany his father and Ivan on a mission to find Number Two. Adam gets to the apartment where she is staying first and tells her he is there to help, but just then Ivan arrives; thinking Adam is trying to trick the Garde, Ivan kills her himself. Afterwards, however, Adam deletes a post that Number Two had been using to try to contact the other members of the Garde. Time passes and Number Three is located; the Mogs proceed to Kenya to find him. Adam and Ivan meet a boy named Hannu, who is Number Three. Adam tries to warn him but is too late, and his father kills Three. The book ends when Adam falls down a ravine. Secret Histories Secret Histories was released on July 23, 2013. It is the second paperback compilation of Lost Files, collecting three novellas originally published as e-novellas: The Search for Sam, The Last Days of Lorien, and The Forgotten Ones. The Search for Sam The Search for Sam was released on December 26, 2012. It is told from the first-person perspective of Adamus Sutekh and picks up where The Fallen Legacies left off. The Last Days of Lorien The Last Days of Lorien was released on April 9, 2013. HarperCollins describes it as a "stunning prequel novella to the New York Times bestselling I Am Number Four series". It is set on Lorien before it was attacked by the Mogadorians, and is told from the perspective of Sandor (Nine's Cêpan). The Forgotten Ones The Forgotten Ones was released on July 23, 2013. It is told from the first-person perspective of Adamus Sutekh and is the third and final of Pittacus Lore's novellas told by the son of a Mogadorian general who has turned to the Loric side. Hidden Enemy Hidden Enemy was released on July 22, 2014, in the US and August 14, 2014, in the UK. It is the third compilation of Lost Files, collecting three novellas originally published as separate e-novellas: "Five's Legacy," "Return to Paradise," and "Five's Betrayal." Five's Legacy The seventh Lost Files novella, initially scheduled for release on December 23, 2013, was released as an e-book on February 11, 2014. Five's Legacy centers on the origins of Number Five, including the short time he spent with his Cêpan, Rey, and his eventual "capture" by the Mogadorians. Return to Paradise This one-hundred-page prequel companion novella tells what happened in the aftermath of the Mogadorians' attack on Paradise, Ohio, from the perspective of Mark James, Number Four's bully-turned-ally. After Four leaves town to find the rest of the Garde, Mark is left behind to pick up the pieces: his school has been destroyed, his home burned down and, worst of all, he now knows the horrifying truth that aliens live among humans and some of them seek to destroy them.
Even with the FBI tailing him and Sarah Hart, Mark tries to return to a normal life, but when Sarah goes missing he knows he can no longer sit back and do nothing. His search for her leads him to new allies and a startling revelation about the Mogadorians' plan for invasion. It was released on April 15, 2014, exclusively as an e-book. Five's Betrayal This sequel to Five's Legacy finds Number Five entering the ranks of the Mogadorian army. The Mogs have convinced him that they will be the victors in the war for Earth, and Five decides he would rather be on the winning side, reasoning that the only thing that matters is his survival. Rebel Allies I Am Number Four: The Lost Files: Rebel Allies is a collection of three stories by Pittacus Lore, originally published as the e-novellas The Fugitive, The Navigator, and The Guard and later collected in one print volume. The Fugitive Follows Mark James as he tries to track down Sarah Hart, evade the Mogadorians and the FBI, and discover the identity of the mysterious blogger he knows only as GUARD. The Navigator Reveals the truth about the crews of the two Loric spaceships that escaped to Earth and shows what happened to the pilots after they arrived and parted ways with the Garde. The Guard Tells the story of the hacker who has been aiding the Lorien survivors from the shadows for years; she is determined to defeat the Mogs and has just found her secret weapon. Zero Hour Legacies Reborn Gives a look at the Mogadorian invasion from the perspective of Daniela Morales, a human teen who is shocked to discover that aliens are attacking New York, and that she suddenly has the power to fight back. Last Defense Reveals what happens to Malcolm Goode after the warships descend: to get to the president's secret bunker near Washington, DC, he will have to fight his way through a war zone. Hunt for the Garde Picks up after the events of The Fate of Ten, following the stories of three different Mogadorians. One will do anything for redemption. One has a thirst for blood. One questions everything. Transmissions & Journals These very short works are meant to be read alongside the seven main novels. Pittacus Lore Transmissions are a bonus series of audio transmissions from Pittacus Lore; there are nine in total, and they can be found on SoundCloud under the user PittacusLore. The Lost Files Bonus: The Journals are five journals released for key characters throughout the series. Most were made as e-books, and some were pre-order bonuses or were included in a physical copy of one of the main-series books. They include "Sarah's Journal," "Sam's Journal," "Eight's Origin," "The Scar," and "Malcolm's Journal."
Lorien Legacies Reborn novel series (sequel series): Generation One, Fugitive Six, Return to Zero. The Legacy Chronicles series: Trial by Fire, Out of the Ashes, Into the Fire, Up in Smoke, Out of the Shadows, Chasing Ghosts, Raising Monsters, Killing Giants. Characters Garde Number One: female; Loric Garde; age 20 (1990); deceased, killed by a Mogadorian; Cêpan: Hilde; Legacies: Telekinesis and Enhancement. Number Two (Maggie Hoyle): female; Loric Garde; age 15 (1995); deceased, killed by a Mogadorian; Cêpan: Conrad; Legacies: Telekinesis and Enhancement. Number Three (Hannu): male; Loric Garde; age 14 (1996); deceased, killed by a Mogadorian; Cêpan: an unnamed male; Legacies: Telekinesis, Enhancement, Fortem, Omnilingualism. Number Four (John Smith): male; Loric Garde; age 15 (1995); alive; Cêpan: Henri; Legacies: Telekinesis, Enhancement, Lumen, Anima, Extrasensory Perception, Ximic (Recupero, Novis, Petras, Avex, Dreynen, Telepathy, Glacen, Sturma, Terric, Air Vanishing); aliases: John Smith, Daniel Jones, Jobie Frey, James Hughes, Donald, John Kent; relationships: Sarah (former girlfriend, deceased), Marina (girlfriend; the two went off together after the war ended). Number Five: male; Loric Garde; age 16 (1994); deceased; Cêpan: Albert; cause of death: died protecting New Lorien by carrying Ran, who had absorbed the kinetic force of a nuclear warhead, into the upper atmosphere and out of reach of the Garde; Legacies: Telekinesis, Enhancement, Avex, Externa; aliases: Cody, Zach. Number Six: female; Loric Garde; age 16 (1994); alive; Cêpan: Katarina; Legacies: Telekinesis, Enhancement, Novis, Sturma, Accelerated Hearing; aliases: Maren Elizabeth, Kelly, Veronica, Sheila; relationship: Sam (boyfriend; the two traveled the world together). Number Seven (Marina): female; Loric Garde; age 16 (1994); alive; Cêpan: Adelina; Legacies: Telekinesis, Enhancement, Submari, Noxen, Recupero, Glacen, and an unnamed Legacy; aliases: Marina, Birgitta, Yasmin, Minka, Genevieve, Astrid, Sophie; relationships: Eight (former boyfriend, deceased), John Smith (boyfriend; the two went off together after the war ended). Number Eight: male; Loric Garde; age 18/20 (1990-1992); deceased, stabbed in the heart by Five; Cêpan: Reynolds; Legacies: Telekinesis, Enhancement, Morfen, Teleportation, Pondus, Precognition, Extrasensory Perception; aliases: Naveen, Joseph, Vishnu; relationship: Marina (former girlfriend; the two would probably still be together if Eight were alive). Number Nine: male; Loric Garde; age 15 (1995); alive; Cêpan: Sandor; Legacies: Telekinesis, Enhancement, Anima, Precognition, Super Strength, Accelix, Super Hearing, Miras, Liberum; aliases: Stanley Worthington, Professor Nine, Tony; relationship: Ella (possibly girlfriend; the two seem very close). Number Ten (Ella): female; Loric Garde; age 11 (1999); alive; Cêpan: Crayton; Legacies: Telekinesis, Enhancement, Aeternus, Telepathy, Dreynen, Precognition; alias: Ella; relationship: Nine (possibly boyfriend; the two seem very close). Other Loric Henri (Brandon), John's Cêpan. Katarina (Kater), Six's Cêpan. Adelina (Adel), Marina's Cêpan. Sandor, Nine's Cêpan.
Crayton, Ella's Cêpan. Lexa (GUARD), a Loric hacker hiding on Earth. Pittacus Lore, leader of the nine Elders of Lorien who sent the Garde to Earth. The Chimæra who traveled with the Garde to Earth: Bernie Kosar (a beagle, John), Olivia (a Loric creature, Ella), Dust (a wolf, Adam), Stanley (a cat, Sam), Biscuit (a golden retriever, Daniela), Gamera (a snapping turtle, Ran), Regal (a hawk, Caleb), Bandit (a raccoon, Nigel). Hilde (Hessu), One's Cêpan. Conrad, Maggie's Cêpan. Kentra, Hannu's Cêpan. Humans Sarah Hart, John's girlfriend. Mark James, Sarah's ex-boyfriend and John's rival. Sam Goode, John's friend and Loric ally. Malcolm Goode, Sam's father and longtime Loric ally. Agent Karen Walker, an FBI agent pursuing the Garde. Agent Murray, a Loric sympathizer recruited by Walker. Agent Purdy, a MogPro FBI agent receiving genetic enhancements. Secretary Bud Sanderson, a politician allied with the Mogs. Daniela Morales, a human with a surprising ability. General Grahish Sharma, an ally of Number Eight in India. Héctor Ricardo, Marina's friend in Spain. General Clarence Lawson, leader of the human military resistance during the Mogadorian invasion. Mogadorians Setrákus Ra, the Mogadorian leader, originally a member of the Garde, and the series' primary antagonist. Adamus "Adam" Sutekh, a Mog who allies himself with the Garde; his father allowed him to be used in experiments through which he gained the abilities of Number One. General Andrakkus Sutekh, a trueborn Mog leader and Adam's father. Phiri Dun-Ra, a disgraced trueborn Mog commander. Ivanick Shu-Ra, a trueborn Mog bent on hunting down the Garde. Dr. Lockham Anu, a researcher at Ashwood. Rexicus "Rex" Saturnus, a Mog who begins to doubt the leadership of Setrákus Ra. Critical reception Reception to the series has been mostly positive. The first two books, I Am Number Four and The Power of Six, both reached #1 on The New York Times Best Seller list, collectively spending ten weeks in the top spot. Film adaptation In 2009, DreamWorks Pictures bought the film rights to I Am Number Four; the movie was released on February 18, 2011. It was the first DreamWorks film to be distributed by Disney's Touchstone Pictures and received generally negative reviews from critics, with review aggregator Rotten Tomatoes giving it a score of 32% based on 156 reviews. The movie grossed $145,982,798 worldwide against a budget of $50 million but still fell short of expectations, and plans for future installments have been shelved. Director D. J. Caruso has said that he would like to direct a sequel, but in an interview with MTV's Hollywood Crush, Lore stated that any questions or requests for a sequel should be directed to producer Michael Bay. References External links The Lorien Legacies official site UK The Lorien Legacies official site Science fiction book series Young adult novel series Novels by James Frey
1956397
https://en.wikipedia.org/wiki/Internet%20Explorer%204
Internet Explorer 4
Microsoft Internet Explorer 4 (IE4) is a graphical web browser that Microsoft unveiled in Spring of 1997, and released in September 1997, primarily for Microsoft Windows, but also with versions available for the classic Mac OS, Solaris, and HP-UX and marketed as "The Web the Way You Want It". It was one of the main participants of the first browser war. Its distribution methods and Windows integration were involved in the United States v. Microsoft Corp. case. It was superseded by Microsoft Internet Explorer 5 in March 1999. It was the default browser in Windows 95 OSR 2.5 (later default was Internet Explorer 5) and Windows 98 First Edition (later default was Internet Explorer 6) and can replace previous versions of Internet Explorer on Windows 3.1x, Windows NT 3.x, Windows 95 and Windows NT 4.0; in addition the Internet Explorer layout engine MSHTML (Trident) was introduced. It attained just over 60% market share by March 1999 when IE5 was released. In August 2001 when Internet Explorer 6 was released, IE4.x had dropped to 7% market share and IE5 had increased to 80%. IE4 market share dropped under 1% by 2004. Internet Explorer 4 is no longer available for download from Microsoft. However, archived versions of the software can be found on various websites. Overview The Internet Explorer 4.0 Platform Preview was released in April 1997, and Platform Preview 2.0 in July that year. Internet Explorer 4 was released to the public in September, 1997 and deepened the level of integration between the web browser and the underlying operating system. Installing version 4 on a Windows 95 or Windows NT 4 machine and choosing "Windows Desktop Update" would result in the traditional Windows Explorer being replaced by a version more akin to a web browser interface, as well as the Windows desktop itself being web-enabled via Active Desktop. The integration with Windows, however, was subject to numerous packaging criticisms (see United States v. Microsoft Corp.). This option was no longer available with the installers for later versions of Internet Explorer but was not removed from the system if already installed. Internet Explorer 4 introduced support for Group Policy, allowing companies to configure and lock down many aspects of the browser's configuration. Internet Mail and News was replaced with Outlook Express, and Microsoft Chat and an improved NetMeeting were also included. This version also was included with Windows 98. Version 4.5 (only for Mac) dropped support for 68k Macs, but offered new features such as easier 128-bit encryption. The last non-Mac version was 4.0 Service Pack 2. Uninstalling IE4 became the subject of concern to some users and was a point of contention in later lawsuits (see Removal of Internet Explorer and United States v. Microsoft Corp..) Internet Explorer version 4.0 for Macintosh On January 6, 1998, at the Macworld Expo in San Francisco, Microsoft announced the release of the final version of Internet Explorer version 4.0 for Macintosh. Version 4 includes support for offline browsing, Dynamic HTML, a new faster Java virtual machine and Security Zones that allow users or administrators to limit access to certain types of web content depending on which zone (for example Intranet or Internet) the content is coming from. At the same event, Apple announced the release of Mac OS 8.1, which would be bundled with IE4. At the following year's San Francisco Macworld Expo on January 9, 1999, Microsoft announced the release of Internet Explorer 4.5 Macintosh Edition. 
This new version dropped 68K processor support and introduced Form AutoFill, Print Preview, a Page Holder pane, which let the user keep a page of links on one side of the screen while the linked pages opened on the right-hand side, and support for Mac OS technologies such as Sherlock. Internet Explorer 4 for Unix On November 5, 1997, a beta of IE for Unix 4.0 was released for testing on Solaris. On January 27, 1998, it was reported that IE 4.0 for Solaris was due in March; Tod Nielsen, general manager of Microsoft's developer relations group, joked that "he wanted to launch Internet Explorer 4.0 for Unix at the Ripley's Believe It or Not! museum in San Francisco" because of skepticism from those who suspected IE for Unix was vaporware. It was further reported that versions for "HP-UX, IBM AIX, and Irix" were planned. The software used to enable this, MainWin XDE, was available for Solaris 2.5.1 on SPARC and Intel, SunOS 4.1.4, Irix 5.3, Irix 6.2, HP-UX 10.2, and IBM AIX 4.1.5. On March 4, 1998, IE 4.0 for Unix on Solaris was released. Later that year, a version for HP-UX was released. Features, technology, and integrated software IE4 came with Active Desktop, Windows Desktop Update, Channels, FrontPage Express, NetMeeting, NetShow, Web Publishing Wizard, Microsoft Chat 2.0 and Progressive Networks RealPlayer. Outlook Express 4 replaced Internet Mail and News. Other new features included Dynamic HTML, inline PNG, favicons, a parental rating system, and the ability to "subscribe" to a website in Favorites, which would notify the user of updates. Stephen Reid of PC Pro noted in his review: "But it was the Web-style view that surprised me so much on first using IE 4. This changes the way you look at Windows, with files and folders now acting like hyperlinks on a Web page; you move your cursor over them to select them, then single click to launch. Individual folders are viewed as Web pages, including My Computer and Control Panel, and any folder you wish can be customised with your choice of background." Bundled and/or integrated software Microsoft Chat 2.0 is a simple text chatting program included in the Windows NT line of operating systems, including Windows NT 3.x, Windows XP and Windows Server 2003. It utilizes the NetBIOS session service and NetDDE. Outlook Express 4.0 is the successor of Microsoft Internet Mail and News, an early e-mail client add-on for Internet Explorer 3. Internet Mail and News handled only plain text and rich text (RTF) e-mail and lacked HTML email. Despite being versioned 4.0, Outlook Express was at its first iteration. NetMeeting is a VoIP and multi-point videoconferencing client that uses the H.323 protocol for video and audio conferencing. FrontPage Express 2.0 was a stripped-down version of Microsoft FrontPage. It was bundled with Internet Explorer 4, but was also available for free and could be downloaded from online repositories. RealPlayer was a streaming media player made by Progressive Networks (later called RealNetworks). The first version of RealPlayer was introduced in April 1995 as RealAudio Player and was one of the first media players capable of streaming media over the Internet. Active Desktop Active Desktop is a feature of Microsoft Internet Explorer 4.0's optional Windows Desktop Update that allows the user to add HTML content to the desktop, along with some other features. This functionality was intended to be installed on the then-current Windows 95 operating system, and later Windows 98.
Active Desktop placed a number of "channels" on the user's computer desktop that provided continually updated information, such as news headlines and stock quotes, without requiring the user to open a web browser. Channels Active Channel is a type of website that allows its content to be synchronized and viewed offline. It makes use of the Channel Definition Format, a way of defining a website's content and structure. Each country had different channels, so picking a country during the installation of IE 4 (and therefore Windows 98) was important. Channels could be displayed in a Channel Bar and made heavy use of Dynamic HTML. Windows Desktop Update Windows Desktop Update was an optional feature included with Internet Explorer 4 that brought to older versions of Microsoft Windows several updated shell features later introduced with the Windows 98 operating system. The Windows Desktop Update also added the ability to create deskbands such as the Quick Launch bar, and it updated the Windows file manager, explorer.exe (also a shell), to be more modular and extensible. MSHTML MSHTML (Trident) was a layout engine introduced with IE4. It was designed as a software component to allow software developers to easily add web browsing functionality to their own applications. It presents a COM interface for accessing and editing web pages in any COM-supported environment, such as C++ and .NET. For instance, the WebBrowser control can be added to a C++ program, and MSHTML can then be used to access the page currently displayed in the web browser and retrieve element values. Events from the WebBrowser control can also be captured. MSHTML functionality becomes available by referencing the component in the software project. Browser Helper Object A Browser Helper Object (BHO) is a DLL module designed as a plugin for Internet Explorer 4.0 that provides added functionality. Most BHOs are loaded once by each new instance of Internet Explorer.
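As a rough illustration of the COM-based access route described under MSHTML above, the following Python sketch drives the browser's automation object and reads values out of the MSHTML document object model. It is an assumption-laden example rather than anything documented for IE4 specifically: it assumes a Windows machine on which the "InternetExplorer.Application" COM class is still registered, the third-party pywin32 package is installed, and the URL is only a placeholder.

# Minimal sketch: automate the browser through its COM interface and read
# values from the MSHTML document object. Assumes Windows, a registered
# "InternetExplorer.Application" COM class, and the pywin32 package.
import time
import win32com.client

ie = win32com.client.Dispatch("InternetExplorer.Application")
ie.Visible = True
ie.Navigate("https://example.com")   # placeholder URL

# Wait until the document has finished loading (READYSTATE_COMPLETE == 4).
while ie.Busy or ie.ReadyState != 4:
    time.sleep(0.1)

doc = ie.Document                    # MSHTML document object (IHTMLDocument2)
print(doc.title)                     # page title, as exposed by the DOM
print(doc.body.innerText[:80])       # start of the rendered page text

ie.Quit()

The same document object model is what the embedded WebBrowser control exposes to a C++ or .NET host, which is the scenario the MSHTML section describes.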
System requirements Adoption capability overview Internet Explorer 4.0 had support for Windows 3.1x, Windows 95, Windows 98, Windows NT 3.x, and Windows NT 4.0 (Service Pack 3 or later). Version 4.0 was included in the first release of Windows 98, although the second edition included IE5. HP-UX, Solaris, and Mac OS were also supported. IE4 supported 68k Macs, although this support was dropped in Internet Explorer 4.5. Windows The initial Windows release required Windows 95 or later, 16 MB of RAM and 11 MB of disk space (minimum for installation). Mac The initial release of 4.0 for Mac required a Macintosh with a 68030 or higher processor; System 7.1 or higher; 8 MB of RAM with virtual memory on (12 MB recommended); 12 MB of hard disk space for IE4 and 8.5 MB for the Java VM; and Open Transport 1.1.1 or higher, MacTCP 2.0.6, or Config PPP or similar PPP connection software (Control Panel) with the PPP extension. IE 4.5 did not support 68k Macs. Encryption Internet Explorer 4 was the first version of the browser to support TLS 1.0. Internet Explorer 4 supported 40-bit and later 128-bit encryption through an add-on, using Server Gated Cryptography (SGC). 256-bit encryption would not become available in IE for nearly ten years, until the Windows Vista version of Internet Explorer 7. 128-bit encryption was available or included for these versions: Microsoft Internet Explorer 4.5 for Macintosh Microsoft Internet Explorer 4.5 128-Bit Edition Microsoft Internet Explorer 4.01 Microsoft Internet Explorer 4.0 for Unix Microsoft Internet Explorer 4.01 Service Pack 2 Microsoft Internet Explorer 4.0 for Macintosh Microsoft Internet Explorer 4.0 128-Bit Edition Where upgrading to 128-bit encryption was not possible, 40-bit encryption with SGC remained the standard. Versions Versions overview Mac OS: Version 4.0 – January 6, 1998 Version 4.5 – January 5, 1999 Shdocvw.dll version numbers (major version.minor version.build number.sub-build number) and related notes: 4.71.544 Internet Explorer 4.0 Platform Preview 1.0 (PP1) 4.71.1008.3 Internet Explorer 4.0 Platform Preview 2.0 (PP2) 4.71.1712.6 Internet Explorer 4.0 4.72.2106.8 Internet Explorer 4.01 4.72.3110.8 Internet Explorer 4.01 Service Pack 1 (Windows 98) 4.72.3612.1713 Internet Explorer 4.01 Service Pack 2 Comparison of Features across platforms See also History of the Internet United States v. Microsoft Corp. Comparison of web browsers Timeline of web browsers Further reading References External links Internet Explorer Architecture Internet Explorer Community — The official Microsoft Internet Explorer Community 1997 software Gopher clients Internet Explorer Discontinued internet suites Macintosh web browsers POSIX web browsers Windows 95 Windows 98 Windows components Windows web browsers
19103666
https://en.wikipedia.org/wiki/AXFS
AXFS
AXFS (Advanced XIP Filesystem) is a compressed read-only file system for Linux, initially developed at Intel, and now maintained at Numonyx. It was designed to use execute in place (XIP) alongside compression aiming to reduce boot and program load times, while retaining a small memory footprint for embedded devices. This is achieved by mixing compressed and uncompressed pages in the same executable file. AXFS is free software (licensed under the GPL). Cramfs is another read-only compressed file system that supports XIP (with patches); however, it uses a strategy of decompressing entire files, whereas AXFS supports XIP with page granularity. See also Squashfs is another read-only compressed file system Cloop is a compressed loopback device module for the Linux kernel e2compr provides compression for ext2 List of file systems Comparison of file systems References Further reading Tony Benavides, Justin Treon, Jared Hulbert and Weide Chang, The Enabling of an Execute-In-Place Architecture to Reduce the Embedded System Memory Footprint and Boot Time, Journal of Computers, Vol. 3, No. 1, Jan 2008, pp. 79–89 Jared Hulbert, Introducing the Advanced XIP File System, (talk) Proceedings of the 2008 Linux Symposium External links AXFS website Justin Treon (February 14, 2008) Side by side comparison of launching applications stored in the AXFS, SquashFS, CRAMFS and JFFS2 read-only filing systems. (video) "Application eXecute-In-Place (XIP) with Linux and AXFS" Free special-purpose file systems Compression file systems Read-only file systems supported by the Linux kernel
18599475
https://en.wikipedia.org/wiki/Ultamatix
Ultamatix
Ultamatix was a tool to automate the addition of applications, codecs, fonts and libraries not provided directly by the software repositories of Debian-based distributions like Ubuntu. History Ultamatix was based on Automatix, picking up where its development ended. It has many of the same characteristics, but works on Ubuntu 8.04, and the developer claims to have fixed many of the problems with Automatix. Supported software Ultamatix allowed the installation of 101 different programs/features, including programs such as the Adobe Flash Player plugin, Adobe Reader, multimedia codecs (including MP3, Windows Media Audio and video-DVD support), fonts, programming software (compilers) and games. Reception Ultamatix has received positive reviews, with Softpedia calling it "Ultamatix: The New Automatix", and Linux.com saying it "may be a worthy successor to Automatix for new Ubuntu and Debian users" and that "The real value of Ultamatix is in making the Linux experience easier for new users". As with its detailed criticism of Automatix, many in the Ubuntu community believe that there are better solutions for installing the programs covered with this tool, many of which can be installed either from standard Ubuntu repositories or the third-party Medibuntu repository. Developers and users of Ubuntu have also raised concerns that Ultamatix and Automatix could create longer-term problems, by installing packages in an 'unclean' manner that can prevent the entire Ubuntu system from being upgraded for security and other reasons. The original developer of Automatix has given some positive and negative comments. Other issues are noted in the comments of Softpedia's review and the comments in Linux.com's review. See also Medibuntu Getdeb References External links Official Website (Down) Linux configuration utilities Linux package management-related software
52278978
https://en.wikipedia.org/wiki/Linux.Wifatch
Linux.Wifatch
Linux.Wifatch is an open-source piece of malware which has been noted for not being used for malicious actions, instead attempting to secure devices from other malware. Linux.Wifatch operates in a manner similar to a computer security product: it updates its definitions through its peer-to-peer network and deletes any remnants of malware that remain. Linux.Wifatch has been active since at least November 2014. According to its authors, the idea for Linux.Wifatch came after reading the Carna paper. Linux.Wifatch was later released on GitLab by its authors under the GNU General Public License on October 5, 2015. Operation Linux.Wifatch's primary mode of infection is to log into devices using weak or default telnet credentials. Once a device is infected, Linux.Wifatch removes other malware and disables telnet access, replacing it with the message "Telnet has been closed to avoid further infection of this device. Please disable telnet, change telnet passwords, and/or update the firmware." See also Denial-of-service attack BASHLITE – another notable IoT malware Linux.Darlloz – another notable IoT malware Remaiten – another notable IoT malware Mirai – another notable IoT malware Hajime (malware) - malware which appears to be similar in purpose to Wifatch References External links Linux.Wifatch at GitLab Botnets Free software IoT malware Linux malware Telnet
14171448
https://en.wikipedia.org/wiki/Specification%20%28technical%20standard%29
Specification (technical standard)
A specification often refers to a set of documented requirements to be satisfied by a material, design, product, or service. A specification is often a type of technical standard. There are different types of technical or engineering specifications (specs), and the term is used differently in different technical contexts. They often refer to particular documents, and/or particular information within them. The word specification is broadly defined as "to state explicitly or in detail" or "to be specific". A requirement specification is a documented requirement, or set of documented requirements, to be satisfied by a given material, design, product, service, etc. It is a common early part of engineering design and product development processes in many fields. A functional specification is a kind of requirement specification, and may show functional block diagrams. A design or product specification describes the features of the solutions for the Requirement Specification, referring to either a designed solution or final produced solution. It is often used to guide fabrication/production. Sometimes the term specification is here used in connection with a data sheet (or spec sheet), which may be confusing. A data sheet describes the technical characteristics of an item or product, often published by a manufacturer to help people choose or use the products. A data sheet is not a technical specification in the sense of informing how to produce. An "in-service" or "maintained as" specification, specifies the conditions of a system or object after years of operation, including the effects of wear and maintenance (configuration changes). Specifications are a type of technical standard that may be developed by any of various kinds of organizations, in both the public and private sectors. Example organization types include a corporation, a consortium (a small group of corporations), a trade association (an industry-wide group of corporations), a national government (including its different public entities, regulatory agencies, and national laboratories and institutes), a professional association (society), a purpose-made standards organization such as ISO, or vendor-neutral developed generic requirements. It is common for one organization to refer to (reference, call out, cite) the standards of another. Voluntary standards may become mandatory if adopted by a government or business contract. Use In engineering, manufacturing, and business, it is vital for suppliers, purchasers, and users of materials, products, or services to understand and agree upon all requirements. A specification may refer to a standard which is often referenced by a contract or procurement document, or an otherwise agreed upon set of requirements (though still often used in the singular). In any case, it provides the necessary details about the specific requirements. Standards for specifications may be provided by government agencies, standards organizations (SAE, AWS, NIST, ASTM, ISO / IEC, CEN / CENELEC, DoD, etc.), trade associations, corporations, and others. The following British standards apply to specifications: BS 7373-1:2001 Guide to the preparation of specifications BS 7373-2:2001 Product specifications. Guide to identifying criteria for a product specification and to declaring product conformity BS 7373-3:2005, Product specifications. Guide to identifying criteria for specifying a service offering A design/product specification does not necessarily prove a product to be correct or useful in every context. 
An item might be verified to comply with a specification or stamped with a specification number: this does not, by itself, indicate that the item is fit for other, non-validated uses. The people who use the item (engineers, trade unions, etc.) or specify the item (building codes, government, industry, etc.) have the responsibility to consider the choice of available specifications, specify the correct one, enforce compliance, and use the item correctly. Validation of suitability is necessary. Guidance and content Sometimes a guide or a standard operating procedure is available to help write and format a good specification. A specification might include: Descriptive title, number, identifier, etc. of the specification Date of last effective revision and revision designation A logo or trademark to indicate the document copyright, ownership and origin Table of Contents (TOC), if the document is long Person, office, or agency responsible for questions on the specification, updates, and deviations. The significance, scope or importance of the specification and its intended use. Terminology, definitions and abbreviations to clarify the meanings of the specification Test methods for measuring all specified characteristics Material requirements: physical, mechanical, electrical, chemical, etc. Targets and tolerances. Acceptance testing, including performance testing requirements. Targets and tolerances. Drawings, photographs, or technical illustrations Workmanship Certifications required. Safety considerations and requirements Security considerations and requirements (where appropriate: e.g. for products and services to be provided to government or military agencies, information technology firms, etc.) Environmental considerations and requirements Quality control requirements, acceptance sampling, inspections, acceptance criteria; or, where a quality management system is operating, quality assurance requirements set forth to regulate business processes involved in the delivery of the product/service which is in the scope of the specification. Person, office, or agency responsible for enforcement of the specification (which could include the arrangement and execution of audits for verifying compliance with the requirements set forth in the specification). Completion and delivery conditions (often referring to standardized INCOTERMS). Provisions for rejection, reinspection, rehearing, corrective measures References and citations for which any instructions in the content maybe required to fulfill the traceability and clarity of the document Signatures of approval, if necessary; sometimes specific procedures apply to sign-off / buy-off events. Change record to summarize the chronological development, revision and completion if the document is to be circulated internally Annexes and Appendices that are expand details, add clarification, or offer options. Construction specifications Construction specifications in North America Specifications in North America form part of the contract documents that accompany and govern the construction of building and infrastructure projects. Specifications describe the quality and performance of building materials, using code citations and published standards, whereas the drawings or building information model (BIM) illustrates quantity and location of materials. The guiding master document of names and numbers is the latest edition of MasterFormat. 
This is a consensus document that is jointly sponsored by two professional organizations: Construction Specifications Canada and Construction Specifications Institute based in the United States and updated every two years. While there is a tendency to believe that "specifications overrule drawings" in the event of discrepancies between the text document and the drawings, the actual intent must be made explicit in the contract between the Owner and the Contractor. The standard AIA (American Institute of Architects) and EJCDC (Engineering Joint Contract Documents Committee) states that the drawings and specifications are complementary, together providing the information required for a complete facility. Many public agencies, such as the Naval Facilities Command (NAVFAC) state that the specifications overrule the drawings. This is based on the idea that words are easier for a jury (or mediator) to interpret than drawings in case of a dispute. The standard listing of construction specifications falls into 50 Divisions, or broad categories of work types and work results involved in construction. The divisions are subdivided into sections, each one addressing a specific material type (concrete) or a work product (steel door) of the construction work. A specific material may be covered in several locations, depending on the work result: stainless steel (for example) can be covered as a sheet material used in flashing and sheet Metal in division 07; it can be part of a finished product, such as a handrail, covered in division 05; or it can be a component of building hardware, covered in division 08. The original listing of specification divisions was based on the time sequence of construction, working from exterior to interior, and this logic is still somewhat followed as new materials and systems make their way into the construction process. Each section is subdivided into three distinct parts: "general", "products" and "execution". The MasterFormat and section format system can be successfully applied to residential, commercial, civil, and industrial construction. Although many Architects find the rather voluminous commercial style of specifications too lengthy for most residential projects and therefore either produce more abbreviated specifications of their own or use ArCHspec (which was specifically created for residential projects). Master specification systems are available from multiple vendors such as Arcom, Visispec, BSD, and Spectext. These systems were created to standardize language across the United States and are usually subscription based. Specifications can be either "performance-based", whereby the specifier restricts the text to stating the performance that must be achieved by the completed work, "prescriptive" where the specifier states the specific criteria such as fabrication standards applicable to the item, or "proprietary", whereby the specifier indicates specific products, vendors and even contractors that are acceptable for each workscope. In addition, specifications can be "closed" with a specific list of products, or "open" allowing for substitutions made by the Contractor. Most construction specifications are a combination of performance-based and proprietary types, naming acceptable manufacturers and products while also specifying certain standards and design criteria that must be met. 
While North American specifications are usually restricted to broad descriptions of the work, European ones and Civil work can include actual work quantities, including such things as area of drywall to be built in square meters, like a bill of materials. This type of specification is a collaborative effort between a specwriter and a quantity surveyor. This approach is unusual in North America, where each bidder performs a quantity survey on the basis of both drawings and specifications. In many countries on the European continent, content that might be described as "specifications" in the United States are covered under the building code or municipal code. Civil and infrastructure work in the United States often includes a quantity breakdown of the work to be performed as well. Although specifications are usually issued by the architect's office, specification writing itself is undertaken by the architect and the various engineers or by specialist specification writers. Specification writing is often a distinct professional trade, with professional certifications such as "Certified Construction Specifier" (CCS) available through the Construction Specifications Institute and the Registered Specification Writer (RSW) through Construction Specifications Canada. Specification writers are either employees of or sub-contractors to architects, engineers, or construction management companies. Specification writers frequently meet with manufacturers of building materials who seek to have their products specified on upcoming construction projects so that contractors can include their products in the estimates leading to their proposals. In February 2015, ArCHspec went live, from ArCH (Architects Creating Homes), a nationwide American professional society of Architects whose purpose is to improve residential architecture. ArCHspec was created specifically for use by Licensed Architects while designing SFR (Single Family Residential) architectural projects. Unlike the more commercial CSI (50+ division commercial specifications), ArCHspec utilizes the more recognizable 16 traditional Divisions, plus a Division 0 (Scope & Bid Forms) and Division 17 (low voltage). Many architects, up to this point, did not provide specifications for residential designs, which is one of the reasons ArCHspec was created: to fill a void in the industry with more compact specifications for residential use. Shorter form specifications documents suitable for residential use are also available through Arcom, and follow the 50 division format, which was adopted in both the United States and Canada starting in 2004. The 16 division format is no longer considered standard, and is not supported by either CSI or CSC, or any of the subscription master specification services, data repositories, product lead systems, and the bulk of governmental agencies. The United States' Federal Acquisition Regulation governing procurement for the federal government and its agencies stipulates that a copy of the drawings and specifications must be kept available on a construction site. Construction specifications in Egypt Specifications in Egypt form part of contract documents. The Housing and Building National Research Center (HBRC) is responsible for developing construction specifications and codes. The HBRC has published more than 15 books which cover building activities like earthworks, plastering, etc. 
Construction specifications in the UK Specifications in the UK are part of the contract documents that accompany and govern the construction of a building. They are prepared by construction professionals such as architects, architectural technologists, structural engineers, landscape architects and building services engineers. They are created from previous project specifications, in-house documents or master specifications such as the National Building Specification (NBS). The National Building Specification is owned by the Royal Institute of British Architects (RIBA) through its commercial group RIBA Enterprises (RIBAe). NBS master specifications provide content that is broad and comprehensive, and are delivered using software that enables specifiers to customize the content to suit the needs of the project and to keep it up to date. UK project specification types fall into two main categories: prescriptive and performance. Prescriptive specifications define the requirements using generic or proprietary descriptions of what is required, whereas performance specifications focus on the outcomes rather than the characteristics of the components. Specifications are an integral part of Building Information Modeling and cover the non-geometric requirements. Food and drug specifications Pharmaceutical products can usually be tested and qualified against various pharmacopoeias. Current pharmaceutical standards include: British Pharmacopoeia European Pharmacopoeia Japanese Pharmacopoeia The International Pharmacopoeia United States Pharmacopeia If a pharmaceutical product is not covered by the above standards, it can be evaluated against the pharmacopoeia of another nation, against industrial specifications, or against a standardized formulary such as the British National Formulary for Children, the British National Formulary or the National Formulary. A similar approach is adopted by the food manufacturing industry, in which the Codex Alimentarius ranks highest, followed by regional and national standards. ISO's coverage of food and drug standards is currently less developed and has not been put forward as an urgent agenda, owing to tight regional or national regulatory restrictions. Specifications and other standards can be externally imposed, as discussed above, but there are also internal manufacturing and quality specifications. These exist not only for the food or pharmaceutical product but also for the processing machinery, quality processes, packaging, logistics (cold chain), etc., and are exemplified by ISO 14134 and ISO 15609. The converse of the explicit statement of specifications is a process for dealing with observations that are out of specification. The United States Food and Drug Administration has published a non-binding recommendation that addresses just this point. At present, much of the information and regulation concerning food and food products remains in a form that makes it difficult to apply automated information processing, storage and transmission methods. Data systems that process, store and transfer information about food and food products need formal specifications for the representation of that data in order to operate effectively and efficiently.
Formal specifications for food and drug data, with the necessary and sufficient clarity and precision for use specifically by digital computing systems, have begun to emerge from some government agencies and standards organizations: the United States Food and Drug Administration has published specifications for a "Structured Product Label" which drug manufacturers are mandated to use when submitting drug label information electronically. Recently, the ISO has made some progress in the area of food and drug standards and formal specifications for data about regulated substances through the publication of ISO 11238. Information technology Specification need In many contexts, particularly software, specifications are needed to avoid errors due to lack of compatibility, for instance in interoperability issues. For instance, when two applications share Unicode data but use different normal forms, use them incorrectly or in an incompatible way, or do not share a minimum set of interoperability specifications, errors and data loss can result. For example, Mac OS X has many components that prefer or require only decomposed characters (thus decomposed-only Unicode encoded with UTF-8 is also known as "UTF8-MAC"). In one specific instance, the combination of OS X errors in handling composed characters and the Samba file- and printer-sharing software (which replaces decomposed letters with composed ones when copying file names) has led to confusing and data-destroying interoperability problems. Applications may avoid such errors by preserving input code points and normalizing them only to the application's preferred normal form for internal use. Such errors may also be avoided with algorithms that normalize both strings before any binary comparison. However, errors due to file name encoding incompatibilities have always existed, owing to the lack of a minimum set of common specifications among software expected to interoperate across various file system drivers, operating systems, network protocols, and thousands of software packages. Formal specification A formal specification is a mathematical description of software or hardware that may be used to develop an implementation. It describes what the system should do, not (necessarily) how the system should do it. Given such a specification, it is possible to use formal verification techniques to demonstrate that a candidate system design is correct with respect to that specification. This has the advantage that incorrect candidate system designs can be revised before a major investment has been made in actually implementing the design. An alternative approach is to use provably correct refinement steps to transform a specification into a design, and ultimately into an actual implementation, that is correct by construction. Architectural specification In (hardware, software, or enterprise) systems development, an architectural specification is the set of documentation that describes the structure, behavior, and other views of that system. Program specification A program specification is the definition of what a computer program is expected to do. It can be informal, in which case it can be considered a user manual from a developer's point of view, or formal, in which case it has a definite meaning defined in mathematical or programmatic terms. 
In practice, many successful specifications are written to understand and fine-tune applications that were already well-developed, although safety-critical software systems are often carefully specified prior to application development. Specifications are most important for external interfaces that must remain stable. Functional specification In software development, a functional specification (also, functional spec or specs or functional specifications document (FSD)) is the set of documentation that describes the behavior of a computer program or larger software system. The documentation typically describes various inputs that can be provided to the software system and how the system responds to those inputs. Web service specification Web services specifications are often under the umbrella of a quality management system. Document specification These types of documents define how a specific document should be written, which may include, but is not limited to, the systems of a document naming, version, layout, referencing, structuring, appearance, language, copyright, hierarchy or format, etc. Very often, this kind of specifications is complemented by a designated template. See also Benchmarking Change control Guideline Defense Standard Design specification Diagnostic design specification Documentation Document management system Formal specification Functional specification List of ISO standards List of Air Ministry specifications Open standard Performance testing Process specification Product design specification Publicly Available Specification Revision control Requirements analysis Shop drawing Specification and Description Language Specification tree Standardization Statistical interference Systems engineering Submittals (construction) Technical documentation Tolerance (engineering) Verification and validation References Further reading Pyzdek, T, "Quality Engineering Handbook", 2003, Godfrey, A. B., "Juran's Quality Handbook", 1999, "Specifications for the Chemical And Process Industries", 1996, ASQ Quality Press, ASTM E29-06b Standard Practice for Using Significant Digits in Test Data to Determine Conformance with Specifications Journal of Chemical Information and Modeling Journal of Documentation, Emerald Group Publishing, Product development Construction documents Quality Standards Technical communication Technical specifications
48467656
https://en.wikipedia.org/wiki/International%20Command%20and%20Control%20Research%20and%20Technology%20Symposium
International Command and Control Research and Technology Symposium
The International Command and Control Research and Technology Symposium (ICCRTS) is one of the world's premier scientific conferences on topics related to Command and Control (C2), from both a theoretical and practical point of view. It is the only major conference with C2 as its exclusive focus. The symposium has had papers not only on C2 in military contexts, but also in civilian ones such as disaster relief. ICCRTS has been widely attended by scientists and practitioners from the United States and a number of NATO countries, most commonly Canada, the United Kingdom, the Netherlands, and Turkey, but others as well. Regular participants have also come from Australia, Brazil, Finland, and Singapore. The symposium publishes proceedings with full-length, peer-reviewed scientific papers. Evolution and Current Status ICCRTS began in the 1990s as a program within Command and Control Research Program (CCRP) funded by the United States Department of Defense (DoD). Since 2015, the symposium has been operated by The International Command and Control Institute (IC2I), which also maintains archives of its publications. A number of papers, particularly by U.S. authors, are also available to the public through the U.S. Defense Technical Information Center (DTIC). At its peak in 2009-2011 ICCRTS had several hundred participants. However, the loss of explicit funding support from the United States Department of Defense in 2015 nearly killed the symposium. That year it had to be cancelled, even though people had already written and submitted their papers. The papers were nevertheless published in the Proceedings, and since 2016 under the aegis of the IC2I, the symposium has bounced back and stabilized, although at lower levels of participation than it enjoyed between 2009 and 2011. Details of Past Symposia Note that in earlier years, two symposia were sometimes held: the international one, known as ICCRTS; and a U.S. domestic one, known as CCRTS (Command and Control Research and Technology Symposium). The first ICCRTS in 1995 was in the United States and was modest in size (63 participants), with only a handful of non-U.S. participants. The 1996, 1997, and 1998 meetings were held in the United Kingdom, the United States, and Sweden, respectively. Below are details of the meetings since 1999: 4th ICCRTS: Held at U.S. Naval War College, Rhode Island, USA. June 29 - July 1, 1999 5th ICCRTS: Held at Australia War Memorial, Canberra ACT, Australia. 24–26 October 2000 6th ICCRTS: "Collaboration in the Information Age." Held at United States Naval Academy, Annapolis, MD, USA, 19–21 June 2001. 7th ICCRTS: "Enabling Synchronized Operations." Held in Quebec City, Quebec, Canada, 16–20 September 2002. CCRTS 2002: "Transformation Through Experimentation." Held at U.S. Naval Postgraduate School, Monterey, California, USA, 11–13 June 2002. 8th ICCRTS: "Information Age Transformation." Held at National Defense University, Washington, D.C., USA, 17–19 June 2003. CCRTS 2004: "Power of Information Age Concepts and Technologies." Held in San Diego, California, USA, 15–17 June 2004. 9th ICCRTS:"Coalition Transformation: An Evolution of People, Processes, and Technology to Enhance Interoperability. Held in Copenhagen, Denmark, 14–16 September 2004. 10th ICCRTS: "The Future of Command and Control." Held in McLean, Virginia, USA, 13–16 June 2005. CCRTS 2006: "State of the Art and the State of the Practice." Held in San Diego, California, USA, 20–22 June 2006. 11th ICCRTS: "Coalition Command and Control in the Networked Era." 
Held in Cambridge, United Kingdom, 26–28 September 2006. 12th ICCRTS: "Adapting Command and Control for the 21st Century." 18–21 June 2007. 13th ICCRTS: "C2 for Complex Endeavors." Held at the Meydenbauer Center, Bellevue, Washington, USA, 17–19 June 2008. 14th ICCRTS: "C2 and Agility." Held at the Omni Shoreham Hotel, Washington, D.C., USA, 15–17 June 2009. 15th ICCRTS: "The Evolution Of C2: Where Have We been? Where Are We going?" Held at Fairmont Miramar Hotel & Bungalows, Santa Monica, California, USA, 22–24 June 2010. 16th ICCRTS: "Collective C2 in Multinational Civil-Military Operations." Held at Loews Hôtel Le Concorde, Québec City, Québec, Canada, 21–23 June 2011. 17th ICCRTS: "Operationalizing C2 Agility." Held at George Mason University, Fairfax, Virginia, USA, 19–21 June 2012. 18th ICCRTS: Held at Institute for Defense Analyses, Alexandria, Virginia, USA, 19–21 June 2013. 19th ICCRTS: Held at Institute for Defense Analyses, Alexandria, Virginia, USA, 17–19 June 2014. 20th ICCRTS: "C2, Cyber, and Trust." Papers were not presented, only published in the Proceedings (2015). 21st ICCRTS: "C2 in a Complex Connected Battlespace." Held in London, United Kingdom, 6–18 September 2016. 22nd ICCRTS: "Frontiers of C2." Held at University of Southern California Institute for Creative Technologies, Playa Vista, California, 6–18 November 2017. 23rd ICCRTS: "Multi-Domain C2.” Held at Florida Institute for Human & Machine Cognition (IHMC), Pensacola, Florida, USA, 6-9 November 2018. 24th ICCRTS: "Managing Cyber Risk to Mission." Held at Johns Hopkins University Applied Physics Laboratory, Laurel, Maryland, USA, 29-31 October 2019. References Command and control
50399318
https://en.wikipedia.org/wiki/Keith%20Holland
Keith Holland
Keith Holland (born 6 December 1935) is a British former racing driver from England who competed in various classes of racing in the 1960s and 1970s. He is known for winning the 1969 Madrid Grand Prix in a Formula 5000 car in a field which contained several Formula One entries. He was also a regular competitor in the British Formula 5000 Championship finishing third in the title standings on two occasions. Racing career Early career Holland's career began in 1961, with a Lotus 11, yielding a third-place finish in the Boxing Day meeting at Brands Hatch. He continued in 1962, with a GSM Delta competing in only three national level, or above, races. He achieved a best finish of fourth at a 500km race at the Nürburgring in September. In 1963, Holland entered the Guards Trophy (S2.0 class) at Brands Hatch in a Diva GT but did not finish. Holland next appeared at national level in the 1967 Brands Hatch six-hour race. He competed alongside entrant Terry Drury in a Ford GT40 and finished 14th. Although Holland entered other events in 1967, he did not actually compete in them. In 1968, Holland again entered the Brands Hatch six-hour race, together with the Monza 1000 km race, in Drury's GT40, but did not finish either event. The pairing then entered the Targa Florio with the Ford and were classified 54th, followed by the 1000km of Nürburgring, finishing 35th. Holland then returned to domestic racing with a Lotus 47 before competing in a six-hour race at Surfers Paradise in an Austin Mini Cooper, finishing fifth. He then finished 12th in the Guards Trophy at Brands Hatch with the Lotus 47, and was entered for the 1968 24 Hours of Le Mans alongside Drury in the GT40, but did not attend. 1969 Madrid Grand Prix Holland took part in the 1969 Madrid Grand Prix at Jarama with a Formula 5000 Lola T142-Chevrolet V8 entered by Alan Fraser. The race was originally intended as a full-scale Grand Prix-type event as the Spanish Grand Prix had been moved to Barcelona. However, a clash of dates meant that a relatively small mixed field of F5000 and F1 cars took part. Holland led early in the race, after Tony Dean (BRM P261) spun at the start, but was passed by Peter Gethin in an F5000 McLaren-Chevrolet. Towards the end, Holland had to drop back slightly due to water temperature issues, but on the last lap, Gethin's Chevrolet engine broke a connecting-rod and despite spinning on his own final lap, Holland was able to win the race. 1969 also saw Holland enter the French Formula Three Championship with Fraser's Brabham BT15 and finish eighth in the Oulton Park Gold Cup with the Lola T142. He competed in the British Formula Three Championship in a Brabham BT21 finishing 27th in the title standings with three points. However, his 1969 British F5000 season was more successful, achieving fourth place in the championship, with six podium finishes. In 1970, Holland entered the Australian Grand Prix (at that time run to F5000 rules) in a McLaren M10B-Chevrolet. He qualified in eighth position but retired after nine laps with an oil-pump problem. He also competed in the British F5000 championship in Fraser's Lola, finishing in 12th place in the standings with eight points from five races. Formula 5000 In 1971, Holland competed in three races in the Tasman Series with the Mclaren-Chevrolet entered by Road & Track Auto Services. He was classified 14th in the title standings with two points. He participated in the 1971 British Formula 5000 championship with the McLaren finishing in 16th place with four points from eight races. 
At the 1971 World Championship Victory Race, Holland was classified 15th, in the McLaren, in a race stopped at 15 laps following a fatal accident to Jo Siffert. Holland entered the 1972 International Gold Cup with the McLaren-Chevrolet. He did not finish, retiring after 31 laps with broken engine mountings. He subsequently participated in the Rothmans 50,000 race at Brands Hatch with a Lola T190 (F5000) entered by Chris Featherstone. He qualified in 29th position and thus took part in the main race, for the first 30 qualifiers, where he finished 13th but 12 laps behind. At the John Player Challenge Trophy, Holland finished 11th in a Chevron B24 entered by Sid Taylor Racing. In the 1972 British F5000 championship, Holland competed in eight races and was classified eighth in the championship with 16 points. He achieved two podium finishes and two fastest laps. In the 1972 World Championship Victory Race, he finished 11th in a Chevron B24. In 1973, Holland finished third in the British F5000 Championship, using both a Trojan T101 and a Lola T190 entered by Ian Ward Racing. He achieved a total of 116 points from 15 races, with wins at Mallory Park and Mondello Park, each with the Trojan. He had one further podium finish together with two pole positions and two fastest laps. At the 1973 International Trophy, Holland finished 10th with the Trojan and at the 1973 Race of Champions qualified 11th but did not finish due to a rear wing problem. In the 1974 British F5000 Championship, Holland dropped to 19th in the title standings with 20 points from eight races, including two podium finishes, and used both a Trojan T102 and a Lola T332. Entries were made for the 1974 Race of Champions and the International Trophy but Holland did not start at either event. However, co-driven by Tony Birchenhough and Brian Joscelyne he achieved a tenth-place finish in the Brands Hatch 1,000km race using a Lola T294, but again paired with Birchenhough, did not finish the Kyalami six-hour race. The 1975 British F5000 Championship saw Holland finish in 25th position with 10 points, having only competed in one event. However, in 1976, with the series renamed as Shellsport International and run to Formula Libre rules, Holland finished third in the standings using a Lola T400, with a win at Brands Hatch in October, three other podium finishes and two pole positions. Holland began the 1977 series with a second place at Mallory Park but achieved only one other podium-finish and dropped to eighth in the championship with 63 points. In 1978, Holland competed in the Rothmans International Series which was part of the Australian Formula 1 category and at that time included cars to F5000 specification. He finished 10th in the championship with 3 points from four races. Personal life The winning car from the 1969 Madrid Grand Prix was restored in 2004 as part of the UK television series Salvage Squad. Holland made a brief appearance in the programme. Racing record Complete European F5000 Championship results (key) (Races in bold indicate pole position; races in italics indicate fastest lap.) Complete Formula One non-championship results (key) Complete Shellsport International Series results (key) (Races in bold indicate pole position; races in italics indicate fastest lap.) References External links Profile at chicanef1.com 1935 births Living people English racing drivers French Formula Three Championship drivers Sportspeople from Maidstone
39828290
https://en.wikipedia.org/wiki/Jaikumar%20Radhakrishnan
Jaikumar Radhakrishnan
Jaikumar Radhakrishnan (born 30 May 1964) is an Indian computer scientist specialising in combinatorics and communication complexity. He has served as dean of the School of Technology and Computer Science at the Tata Institute of Fundamental Research, Mumbai, India, where he is currently a senior professor. He obtained his B.Tech. degree in Computer Science and Engineering from the Indian Institute of Technology, Kharagpur in 1985 and his Ph.D. in Theoretical Computer Science from Rutgers University, NJ, USA, in 1991 under the guidance of Endre Szemerédi. His first research paper, titled "Better Bounds for Threshold Formulas", won the Machtey Award for best student paper at the IEEE Symposium on Foundations of Computer Science (FOCS) in 1991. His areas of research include combinatorics, graph theory, probability theory, information theory, communication complexity, computational complexity theory, quantum computation and quantum information science. He was awarded the Shanti Swarup Bhatnagar Prize for Science and Technology in the category of Mathematical Sciences in 2008, India's highest honour for excellence in science, mathematics and technology. Other awards/honours Fellow of the Indian Academy of Sciences, since 2007. References 1964 births Living people 20th-century Indian mathematicians Recipients of the Shanti Swarup Bhatnagar Award in Mathematical Science
53777572
https://en.wikipedia.org/wiki/ForkLift%20%28file%20manager%29
ForkLift (file manager)
ForkLift is a dual-pane file manager and file transfer client for macOS, developed by BinaryNights. Major releases ForkLift 1.0 was released on June 1, 2007. ForkLift 2.0 was released on November 22, 2010. ForkLift 3.0 was released on February 21, 2017. See also File manager Comparison of file managers Comparison of FTP client software References Further reading Adam Pash, LifeHacker Replace Finder with ForkLift May 16, 2007 Clint Ecker, Ars Technica Ars at WWDC: Video interview with Andy and Mudi of BinaryNights July 8, 2007 Brett Terpstra, Engadget ForkLift 2, slick file management, fast file transfers November 25, 2010 David Chartier, Macworld ForkLift 2.0 FTP client gets faster, more powerful December 7, 2010 External links Official website Official blog File managers Orthodox file managers FTP clients SFTP clients Utilities for macOS MacOS-only software
6502190
https://en.wikipedia.org/wiki/RK05
RK05
Digital Equipment Corporation's RK05 was a disk drive whose removable disk pack could hold about 2.5 megabytes of data. Introduced in 1972, it was similar to IBM's 1964-introduced 2310, and used a disk pack similar to IBM's 2315 disk pack, although the latter only held 1 megabyte. An RK04 drive, which had half the capacity of an RK05, was also offered. Systems on which it could be used included DEC's PDP-8, PDP-11, and PDP-15. Overview The RK05 was a moving head magnetic disk drive manufactured by DEC, a Maynard, Massachusetts company. It stored approximately 2.5 MB on a 14", single-platter IBM-2315-style front-loading removable disk cartridge. The cartridge permitted users to have relatively unlimited off-line storage and to have very fast access to such data. The PDP systems to which it could be attached had numerous operating systems for each computer architecture, so by changing the disk pack another operating system could be booted. Although the 14-inch cartridge could not fit in a shirt pocket, unlike DECtape, the RK05 provided personal, portable, and expandable storage. Although a minimal PDP-8/A came with only one drive, most computers were configured with at least one additional storage device; some systems had four drives. Technical details Occupying 10.5 inches (6U) of space in a standard 19-inch rack, the drive was competitive at the time. The cartridge contained a single, 14" aluminum platter coated with iron oxide in an epoxy binder. The two ferrite and ceramic read/write heads were pressed towards the disk by spring arms, floating on an air bearing maintained by the rotation of the disk. They were positioned by a voice coil actuator using a linear optical encoder for feedback. The track density was 100 tracks per inch. The bit density along the track was about 2200 bits per inch. Discrete electronics computed the velocity profile for seeks commanded by the controller. An absolute filter (HEPA filter) provided pressurized air to the cartridge, excluding most contaminants that would otherwise cause head crashes. When used on 16-bit systems such as the PDP-11, the drive stored roughly 1.2 megawords. When used on 12-bit systems such as the PDP-8, the drive stored 1.6 megawords (so roughly the same bit capacity, albeit formatted differently). Multiple drives were daisy-chained from their controller using Unibus cabling; a terminator was installed in the farthest drive. The 16-bit (Unibus) controller was known as the RK11; it allowed the connection of up to eight RK05 drives. Seeks could be overlapped among the drives but only one drive at a time could transfer data. The most common 12-bit (Omnibus) controller was known as the RK8E; it supported up to four RK05 drives. The RK05 disk had more than 4096 sectors and so could not be addressed completely by a single PDP-8 12-bit word. To accommodate this, the OS/8 operating system split each drive into two logical volumes, for example, RKA0 and RKB0, representing the outermost and innermost cylinders of the drive. Predecessors Prior to the introduction of DEC's own drives, DEC resold two drives from Diablo Data Systems (later acquired by Xerox), the 1.22 megabyte RK02, of which very few were shipped, and the 2.45 megabyte RK03 (Diablo Model 31). These were interface compatible with the RK05; RK03 and RK05 disks could be interchanged as well. The RK04 was DEC's counterpart to the RK02, for low-density storage of 600 Kwords, and the RK05 was DEC's counterpart of the high-density RK03, storing 1.2 Mwords. 
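As a rough worked example of the capacity figures quoted above, assuming the commonly cited RK05 geometry of 203 cylinders and 2 surfaces, with 12 sectors of 256 sixteen-bit words per track in PDP-11 format and 16 sectors of 256 twelve-bit words per track in PDP-8 format (geometry figures not stated in this article and offered only as an assumption):
203 cylinders × 2 surfaces × 12 sectors × 256 sixteen-bit words ≈ 1.25 million 16-bit words, or about 2.5 MB;
203 cylinders × 2 surfaces × 16 sectors × 256 twelve-bit words ≈ 1.66 million 12-bit words;
in the 12-bit format this amounts to 203 × 2 × 16 = 6,496 sectors, more than the 4,096 that a single 12-bit word can address, which is why OS/8 splits each drive into two logical volumes of 3,248 sectors each.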
Variants The RK05E was a late version of the standard-density RK05 drive. It contained many reliability enhancements compared to the earlier versions. The RK05J was the final version of the standard-density RK05 removable-pack drive. A non-removable RK05F was also produced. By "fixing" the otherwise removable cartridge in place, it was able to avoid the cartridge-to-cartridge compatibility problems that limited the capacity of the ordinary RK05. As a result, it could operate at twice the normal track density, doubling the capacity of the drive to about 5 MB. Compatibles RK05-compatible drives were also produced and marketed by manufacturers other than DEC. References External links RK05 drive information compiled by David Gesswein PDP-11 RK05 removable hard disk drive RK05 disk drive maintenance manual Further reading Brief history of the RK05 1972 introductions DEC hardware Hard disk drives
12655903
https://en.wikipedia.org/wiki/Vala%20%28programming%20language%29
Vala (programming language)
Vala is an object-oriented programming language with a self-hosting compiler that generates C code and uses the GObject system. Vala is syntactically similar to C# and includes notable features such as anonymous functions, signals, properties, generics, assisted memory management, exception handling, type inference, and foreach statements. Its developers, Jürg Billeter and Raffaele Sandrini, wanted to bring these features to the plain C runtime with little overhead and no special runtime support by targeting the GObject object system. Rather than compiling directly to machine code or assembly language, it compiles to a lower-level intermediate language: it source-to-source compiles to C, which is then compiled with a C compiler for a given platform, such as GCC or Clang. Using functionality from native code libraries requires writing vapi files, which define the library interfaces. Writing these interface definitions is well documented for C libraries, especially when they are based on GObject. Bindings are already available for a large number of libraries, including C libraries that are not based on GObject, such as the multimedia library SDL, OpenGL, etc. Description Vala is a programming language that combines the high-level build-time performance of scripting languages with the run-time performance of low-level programming languages. It aims to bring modern programming language features to GNOME developers without imposing any additional runtime requirements and without using a different ABI, compared to applications and libraries written in C. The syntax of Vala is similar to C#, modified to better fit the GObject type system. History Vala was conceived by Jürg Billeter and was implemented by him and Raffaele Sandrini, who wanted a higher-level alternative to C for developing GNOME applications. They liked the syntax and semantics of C# but did not want to use Mono, so they finished a compiler in May 2006. Initially, it was bootstrapped using C, and one year later (with the release of version 0.1.0 in July 2007), the Vala compiler became self-hosted. As of 2021, the current stable release branch with long-term support is 0.48, and the language is under active development with the goal of releasing a stable version 1.0. Language design Features Vala uses GLib and its submodules (GObject, GModule, GThread, GIO) as the core library, which is available for most operating systems and offers features such as platform-independent threading, input/output, file management, network sockets, plugins, regular expressions, etc. The syntax of Vala currently supports modern language features as follows: Interfaces Properties Signals Foreach Lambda expressions Type inference for local variables Generics Non-null types Assisted memory management Exception handling Graphical user interfaces can be developed with the GTK GUI toolkit and the Glade GUI builder. Memory management For memory management, the GType or GObject system provides reference counting. In C, a programmer must manually manage adding and removing references, but in Vala, managing such reference counts is automated if a programmer uses the language's built-in reference types rather than plain pointers. The only detail that requires attention is avoiding reference cycles, because reference counting cannot reclaim them and the memory management system will not work correctly in that case. Vala also allows manual memory management with pointers as an option. 
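A minimal sketch of how such a cycle is commonly avoided is shown below; the Node class here is purely illustrative and not part of any Vala library. Declaring the back-reference with the weak modifier keeps it out of the reference count, so the pair of objects can still be reclaimed:

class Node : Object {
    public Node? child;          // strong (counted) reference
    public weak Node? parent;    // weak reference: not counted, so it breaks the cycle
}

void main () {
    var root = new Node ();
    root.child = new Node ();
    root.child.parent = root;    // the back-reference does not keep root alive
    // when root goes out of scope, both objects are freed normally
}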
Bindings Vala is intended to provide runtime access to existing C libraries, especially GObject-based libraries, without the need for runtime bindings. To use a library with Vala, all that is needed is an API file (.vapi) containing the class and method declarations in Vala syntax. However, C++ libraries are not supported. At present, vapi files for a large part of the GNU project and GNOME platform are included with each release of Vala, including GTK. There is also a library called Gee, written in Vala, that provides GObject-based interfaces and classes for commonly used data structures. It should also be easily possible to write a bindings generator for access to Vala libraries from applications written in other languages, e.g., C#, as the Vala parser is written as a library, so that all compile-time information is available when generating a binding. Tools Editors Tooling for Vala development has seen significant improvement over recent years. The following is a list of some popular IDEs and text editors with plug-ins that add support for programming in Vala: GNOME Builder Visual Studio Code, with Vala plugin Vim, with arrufat/vala.vim plugin Emacs, with vala-mode Atom Geany Code intelligence Currently, two actively developed language servers offer code intelligence for Vala: one designed for any editor that supports LSP, including VSCode, vim, and GNOME Builder; and one that is currently the default language server for Vala in GNOME Builder and provides support to any editor with LSP support. Build systems Currently, there are a number of build systems supporting Vala, including Automake, CMake, Meson, and others. Debugging Debugging for Vala programs can be done with either GDB or LLDB. For debugging in IDEs, GNOME Builder has built-in debugging support for Vala with GDB. Visual Studio Code has extensions for GDB and LLDB, such as cpptools and CodeLLDB. Examples Hello world A simple "Hello, World!" program in Vala: void main () { print ("Hello World\n"); } As can be noted, unlike C or C++, there are no header files in Vala. The linking to libraries is done by specifying --pkg parameters during compiling. Moreover, the GLib library is always linked and its namespace can be omitted (print is in fact GLib.print). Object-oriented programming Below is a more complex version which defines a subclass HelloWorld inheriting from the base class GLib.Object, also known as the GObject class. It shows some of Vala's object-oriented features: class HelloWorld: Object { private uint year = 0; public HelloWorld () { } public HelloWorld.with_year (int year) { if (year>0) this.year = year; } public void greeting () { if (year == 0) print ("Hello World\n"); else /* Strings prefixed with '@' are string templates. */ print (@"Hello World, $(this.year)\n"); } } void main (string[] args) { var helloworld = new HelloWorld.with_year (2021); helloworld.greeting (); } As in the case of the GObject library, Vala does not support multiple inheritance, but a class in Vala can implement any number of interfaces, which may contain default implementations for their methods. 
Here is a piece of sample code to demonstrate a Vala interface with default implementation (sometimes referred to as a mixin) using GLib; interface Printable { public abstract string print (); public virtual string pretty_print () { return "Please " + print (); } } class NormalPrint: Object, Printable { string print () { return "don't forget about me"; } } class OverridePrint: Object, Printable { string print () { return "Mind the gap"; } public override string pretty_print () { return "Override"; } } void main (string[] args) { var normal = new NormalPrint (); var overridden = new OverridePrint (); print (normal.pretty_print ()); print (overridden.pretty_print ()); } Signals and callbacks Below is a basic example to show how to define a signal in a class that is not compact, which has a signal system built in by Vala through GLib. Then callback functions are registered to the signal of an instance of the class. The instance can emit the signal and each callback function (also referred to as handler) connected to the signal for the instance will get invoked in the order they were connected in: class Foo { public signal void some_event (); // definition of the signal public void method () { some_event (); // emitting the signal (callbacks get invoked) } } void callback_a () { stdout.printf ("Callback A\n"); } void callback_b () { stdout.printf ("Callback B\n"); } void main () { var foo = new Foo (); foo.some_event.connect (callback_a); // connecting the callback functions foo.some_event.connect (callback_b); foo.method (); } Threading A new thread in Vala is a portion of code such as a function that is requested to be executed concurrently at runtime. The creation and synchronization of new threads are done by using the Thread class in GLib, which takes the function as a parameter when creating new threads, as shown in the following (very simplified) example: int question(){ // Some print operations for (var i = 0; i < 3; i++){ print ("."); Thread.usleep (800000); stdout.flush (); } return 42; } void main () { if (!Thread.supported ()) { stderr.printf ("Cannot run without thread support.\n"); return; } print ("The Ultimate Question of Life, the Universe, and Everything"); // Generic parameter is the type of return value var thread = new Thread<int> ("question", question); print(@" $(thread.join ())\n"); } Graphical user interface Below is an example using GTK to create a GUI "Hello, World!" program (see also GTK hello world) in Vala: using Gtk; int main (string[] args) { Gtk.init (ref args); var window = new Window (); window.title = "Hello, World!"; window.border_width = 10; window.window_position = WindowPosition.CENTER; window.set_default_size (350, 70); window.destroy.connect (Gtk.main_quit); var label = new Label ("Hello, World!"); window.add (label); window.show_all (); Gtk.main (); return 0; } The statement Gtk.main () creates and starts a main loop listening for events, which are passed along via signals to the callback functions. As this example uses the GTK package, it needs an extra --pkg parameter (which invokes pkg-config in the C backend) to compile: valac --pkg gtk+-3.0 hellogtk.vala See also Genie, a programming language for the Vala compiler with a syntax closer to Python. Shotwell, an image organiser written in Vala. Geary, an email client written in Vala. elementary OS, a Linux distribution with a desktop environment programmed mostly in Vala. Budgie, a Linux desktop environment programmed mostly in Vala. 
References External links API Documentation Vala repository on GNOME · GitLab LibGee, a utility library for Vala. Vala sample code for beginners List of Vala programs Autovala, a program that automatizes and simplifies creating CMake and Meson files for Vala/C projects The Vala community on GitHub Akira - Linux native designer tool Kangaroo - Cross-platform database client tool for popular databases Comparison with other languages Vala and Java Vala and C# Benchmarks of different languages, including Vala Programming languages Object-oriented programming languages Software using the LGPL license Source-to-source compilers Statically typed programming languages Programming languages created in 2006 2006 software Cross-platform free software
30623709
https://en.wikipedia.org/wiki/South%20African%20hacker%20history
South African hacker history
A brief history of computer hacking in South Africa. Note: A distinction needs to be made between a "white hat" hacker, who hacks out of intellectual curiosity, and a "black hat" hacker, who has ulterior motives. In recent times there has been an attempt to restore the meaning of the term hacker, which is still associated with creating code, and its secondary meaning, which has become the stuff of Hollywood legend. The term "cracker" is a better description for those who break into secured systems by exploiting computer vulnerabilities. 1990 Activists are trapped by BOSS agents who use ATM autotellers to monitor transactions. IBM is now the subject of an ongoing court case for its active support of the apartheid regime. 1991 The Cape Educational Computer Society (CECS) becomes the first body to advocate free software culture in South Africa. Many hackers gain their first experience of the online world via Douglas Reeler's modem. Also in 1991, Kagenna Magazine publishes an article on cyberpunk by Dr Tim Leary, the first time the word is mentioned in print in South Africa. 1994 A right-wing hacker attempts to sabotage the results of South Africa's first democratic election by hacking into the computers processing them. 1998 Police arrest a teenage boy from Rondebosch who hacked through all the security features of the computer system of the South African telecommunications company Telkom but apparently did no damage. The DA party website is defaced by a hacker. 1999 Hackers break into South Africa's official statistics website, replacing economic information with critical comments about the national telephone company, Telkom. 2004 A group of computer hackers calling themselves "Spykids" strikes 45 Cape Town business websites and defaces their home pages. 2005 "Team Evil", a group of Moroccan hackers, defaces 250 South African websites on the afternoon of 8 January with anti-American propaganda. 2006 First National Bank, Standard Bank and Absa are the targets of several successful online attacks. The financial institutions report that no fewer than 10 bank accounts have been hacked. The value of the damages caused by the attacks is estimated at approximately 80,000 dollars. 2008 H.O.Z, currently the largest South African hacker community, goes online and quickly gains a reputation for bypassing local cell network internet restrictions. Although authorities have been unable to pinpoint the masterminds behind the incidents, South African anti-cyber-terrorism units vow to monitor the community closely, hoping one day to put a stop to its elite members, and pay particular attention to its site owner, EVILWez. The South African Minister for Finance and Economic Development announces 32 arrests in connection with more than 80 separate fraud counts related to spyware and the loss of R130m (13m pounds). 2009 Hackers expose corrupt business practices in the banking system: a confidential document detailing information about South African banks is published by Wikileaks. 2010 The second Live Hacking 2010 South Africa ethical hacking workshop is held in Pretoria. Courses in ethical hacking are offered. Gauteng's department of local government's website is hacked by the CeCen Hack Team, who appear to be a radical Islamic group. HackingStats.com, an online resource "monitoring and documenting hacked South African-based websites", goes online. 
2011 The police unit the Hawks announces that it is on the verge of making further arrests in connection with a "multi-million rand cyber raid" on the Land Bank over the Christmas season. 2012 Three government websites are hacked in December by a lone activist apparently angered by South Africa's support for the Saharawi Arab Democratic Republic in Western Sahara. 2013 "South Africa needs to be saved and freed from corruption", says Team GhostShell, claiming to have assembled a "strong force" of hacktivists equal to the task that will break into government information vaults and bring to light evidence of corruption and nefarious doings. Through a series of tweets announcing a 'data dump' from the account @DomainerAnon, the hacker explains that they targeted the website "for the 34 miners killed during clashes with police in Marikana on August 16, 2012". References Hacking (computer security)
45079597
https://en.wikipedia.org/wiki/Certified%20Penetration%20Testing%20Engineer
Certified Penetration Testing Engineer
Certified Penetration Testing Engineer (C)PTE) is an internationally recognized cyber security certification administered by the United States-based information security company Mile2. The accreditation maps to the Committee on National Security Systems' 4013 education certification. The C)PTE certification is considered one of five core cyber security certifications. Accreditations Obtaining the C)PTE certification requires proven proficiency in and knowledge of five key elements of penetration testing: data collection, scanning, enumeration, exploitation and reporting. The C)PTE certification is one of several information assurance accreditations recognized by the U.S. National Security Agency. The certification has also been approved by the U.S. Department of Homeland Security's National Initiative for Cybersecurity Careers and Studies (NICCS) and the U.S. Committee on National Security Systems. Examination The online exam for C)PTE accreditation lasts two hours and consists of 100 multiple-choice questions. References External links Mile2 C)PTE website page Beginners Guide to Penetration Testing Computer security qualifications Data security Information technology qualifications
67757693
https://en.wikipedia.org/wiki/Lambert%20Sonna%20Momo
Lambert Sonna Momo
Lambert Sonna Momo (born 1970 in Yaoundé) is a Swiss computer scientist of Cameroonian origin. He is known for his work in electronic identification and authentication through biometrics. Education After obtaining a bachelor's degree in mathematics at the University of Yaoundé in 1993, he continued his studies with two master's degrees at the EPFL in Lausanne, in software engineering (2001) and in information systems (2003). He obtained his doctorate in information and security systems at the University of Lausanne in 2008 for a thesis on "Elaboration de tableaux de bord SSI dynamiques: une approche à base d'ontologies" (Developing dynamic ISS dashboards: an ontology-based approach) under the supervision of Solange Ghernaouti-Hélie. Career Academic career Until 2014, he taught at the University of Lausanne on topics related to computer security and the protection of private data. In 2016, he assembled a multidisciplinary team composed of biometrics specialist Sébastien Marcel at the Idiap Research Institute; cryptographer Serge Vaudenay, director of the EPFL's Security and Cryptography Laboratory; electronics engineer Pierre Roduit at HES SO Valais Wallis; and microtechnologist Eric Grenet at the Swiss Centre for Electronics and Microtechnology. This team jointly developed BioID and BioLocker, a patented biometric authentication technology based on multi-view vein scanning that combines data security with protection of the private sphere. Entrepreneurship He is the founder of GLOBAL ID SA, a spin-off of EPFL that brings vein-based biometric authentication technology to the market. The biometric technology based on vein recognition is considered ethical because the key is hidden and therefore impossible to steal; the encryption is done end-to-end with a random code that changes constantly. The contactless scanner is under development, and the project has received a grant of 1 million from the Swiss Confederation. Lambert Sonna Momo is the inventor of the VenoScanner. Publications References External links Website of Global-ID Website of Idiap Research Institute Website of LASEC-EPFL 1970 births Living people École Polytechnique Fédérale de Lausanne alumni University of Lausanne Yaounde II Yaoundé Cryptography
32682
https://en.wikipedia.org/wiki/VSE%20%28operating%20system%29
VSE (operating system)
z/VSE (Virtual Storage Extended) is an operating system for IBM mainframe computers, the latest one in the DOS/360 lineage, which originated in 1965. Announced on February 1, 2005 by IBM as the successor to VSE/ESA 2.7, the then-new z/VSE was named to reflect the new "System z" branding for IBM's mainframe product line. DOS/VSE was introduced in 1979 as a successor to DOS/VS; in turn, DOS/VSE was succeeded by VSE/SP version 1 in 1983, and VSE/SP version 2 in 1985. It is less common than the more prominent z/OS and is mostly used on smaller machines. In the late 1980s, there was a widespread perception among VSE customers that IBM was planning to discontinue VSE and migrate its customers to MVS instead, although IBM relented and agreed to continue to produce new versions of VSE. Overview DOS/360 originally used 24-bit addressing. As the underlying hardware evolved, VSE/ESA acquired 31-bit addressing capability. IBM released z/VSE Version 4, which requires 64-bit z/Architecture hardware and can use 64-bit real mode addressing, in 2007. With z/VSE 5.1 (available since 2011), z/VSE introduced 64-bit virtual addressing and memory objects (chunks of virtual storage) that are allocated above 2 GB. The latest shipping release is z/VSE 6.2.0, available since December 2017, which includes the new CICS Transaction Server for z/VSE 2.2. User interfaces Job Control Language (JCL) z/VSE's primary user interface for batch processing is a Job Control Language (JCL) that continues the positional-parameter orientation of earlier DOS systems. There is also a separate, specialized interface for system console operators. Beyond batch, z/VSE, like z/OS systems, has traditionally provided 3270 terminal user interfaces. However, most z/VSE installations have at least begun to add Web browser access to z/VSE applications. z/VSE's TCP/IP is a separately priced option for historic reasons, and is available in two different versions from two vendors. Both vendors provide a full-function TCP/IP stack with applications, such as telnet and FTP. One TCP/IP stack provides IPv4 communication only, the other IPv4 and IPv6 communication. In addition to the commercially available TCP/IP stacks for z/VSE, IBM also provides the Linux Fastpath method, which uses IUCV socket or Hipersockets connections to communicate with a Linux guest also running on the mainframe. Using this method, the z/VSE system is able to fully exploit the native Linux TCP/IP stack. IBM recommends that z/VSE customers run Linux on IBM Z alongside, on the same physical system, to provide another 64-bit application environment that can access and extend z/VSE applications and data via Hipersockets using a wide variety of middleware. CICS, one of the most widely used enterprise transaction processing systems, remains extremely popular among z/VSE users and now implements recent innovations such as Web services. DB2 is also available and popular. Device support z/VSE can use ECKD, FBA and SCSI devices. Fibre Channel access to SCSI storage devices was initially available on z/VSE 3.1 on a limited basis (including on IBM's Enterprise Storage Server (ESS) and IBM System Storage DS8000 and DS6000 series), but the limitations disappeared with 4.2 (thus including IBM Storwize V7000, V5000, V3700 and V9000). Older z/VSE versions The last VSE/ESA release, VSE/ESA 2.7, has not been supported since February 28, 2007. z/VSE 3.1 was the last release that was compatible with 31-bit mainframes, as opposed to z/VSE Versions 4, 5 and 6. z/VSE 3.1 was supported until 2009. 
z/VSE Version 4 has not been supported since October 2014 (end of service for z/VSE 4.3). For VSE/ESA, DOS/VSE, and VSE/SP, see History of IBM mainframe operating systems#DOS/VS See also z/OS z/TPF z/VM History of IBM mainframe operating systems#DOS/VS History of IBM mainframe operating systems References External links IBM z/VSE website IBM mainframe operating systems
295299
https://en.wikipedia.org/wiki/Open%20Mind%20Common%20Sense
Open Mind Common Sense
Open Mind Common Sense (OMCS) is an artificial intelligence project based at the Massachusetts Institute of Technology (MIT) Media Lab whose goal is to build and utilize a large commonsense knowledge base from the contributions of many thousands of people across the Web. It was active from 1999 to 2016. Since its founding, it accumulated more than a million English facts from over 15,000 contributors, in addition to knowledge bases in other languages. Much of OMCS's software is built on three interconnected representations: the natural language corpus that people interact with directly, a semantic network built from this corpus called ConceptNet, and a matrix-based representation of ConceptNet called AnalogySpace that can infer new knowledge using dimensionality reduction. The knowledge collected by Open Mind Common Sense has enabled research projects at MIT and elsewhere. History The project was the brainchild of Marvin Minsky, Push Singh, Catherine Havasi, and others. Development work began in September 1999, and the project was opened to the Internet a year later. Havasi described it in her dissertation as "an attempt to ... harness some of the distributed human computing power of the Internet, an idea which was then only in its early stages." The original OMCS was influenced by the website Everything2 and its predecessor, and presented a minimalist interface inspired by Google. Push Singh was slated to become a professor at the MIT Media Lab and lead the Common Sense Computing group in 2007, but he died by suicide on February 28, 2006. The project is currently run by the Digital Intuition Group at the MIT Media Lab under Catherine Havasi. Database and website There are many different types of knowledge in OMCS. Some statements convey relationships between objects or events, expressed as simple phrases of natural language: some examples include "A coat is used for keeping warm", "The sun is very hot", and "The last thing you do when you cook dinner is wash your dishes". The database also contains information on the emotional content of situations, in such statements as "Spending time with friends causes happiness" and "Getting into a car wreck makes one angry". OMCS contains information on people's desires and goals, both large and small, such as "People want to be respected" and "People want good coffee". Originally, these statements could be entered into the Web site as unconstrained sentences of text, which had to be parsed later. The current version of the Web site collects knowledge only using more structured fill-in-the-blank templates. OMCS also makes use of data collected by the Game With a Purpose "Verbosity". In its native form, the OMCS database is simply a collection of these short sentences that convey some common knowledge. In order to use this knowledge computationally, it has to be transformed into a more structured representation. ConceptNet ConceptNet is a semantic network based on the information in the OMCS database. ConceptNet is expressed as a directed graph whose nodes are concepts, and whose edges are assertions of common sense about these concepts. Concepts represent sets of closely related natural language phrases, which could be noun phrases, verb phrases, adjective phrases, or clauses. ConceptNet is created from the natural-language assertions in OMCS by matching them against patterns using a shallow parser. Assertions are expressed as relations between two concepts, selected from a limited set of possible relations. 
The various relations represent common sentence patterns found in the OMCS corpus, and in particular, every "fill-in-the-blanks" template used on the knowledge-collection Web site is associated with a particular relation. The data structures that make up ConceptNet were significantly reorganized in 2007 and published as ConceptNet 3. The Software Agents group currently distributes a database and API for the new version 4.0. In 2010, OMCS co-founder and director Catherine Havasi, with Robyn Speer, Dennis Clark and Jason Alonso, created Luminoso, a text analytics software company that builds on ConceptNet. It uses ConceptNet as its primary lexical resource in order to help businesses make sense of and derive insight from vast amounts of qualitative data, including surveys, product reviews and social media. Machine learning tools The information in ConceptNet can be used as a basis for machine learning algorithms. One representation, called AnalogySpace, uses singular value decomposition to generalize and represent patterns in the knowledge in ConceptNet, in a way that can be used in AI applications. Its creators distribute a Python machine learning toolkit called Divisi for performing machine learning based on text corpora, structured knowledge bases such as ConceptNet, and combinations of the two. Comparison to other projects Other similar projects include Never-Ending Language Learning, Mindpixel (discontinued), Cyc, Learner, SenticNet, Freebase, YAGO, DBpedia, and Open Mind 1001 Questions, which have explored alternative approaches to collecting knowledge and providing incentive for participation. The Open Mind Common Sense project differs from Cyc because it has focused on representing the common sense knowledge it collected as English sentences, rather than using a formal logical structure. ConceptNet is described by one of its creators, Hugo Liu, as being structured more like WordNet than Cyc, due to its "emphasis on informal conceptual-connectedness over formal linguistic-rigor". There is also a Brazilian initiative, named Open Mind Common Sense in Brazil (OMCS-Br), led by the Advanced Interaction Lab at the Federal University of São Carlos (LIA-UFSCar). This project started in 2005 in collaboration with the Software Agents Group at the MIT Media Lab; its main goal is to collect common sense stated in Brazilian Portuguese and use it to develop culturally sensitive software applications based on extracting knowledge of cultural profiles from ConceptNet. This is intended to help developers and users with culturally contextualized software content, making the final applications more flexible, adaptive, accessible and usable. The main application areas are education and healthcare. See also Attempto controlled English Never-Ending Language Learning Mindpixel ThoughtTreasure Semantic Web DBpedia Freebase (database) YAGO (database) References External links Open Mind Common Sense Open Mind Common Sense meta-repository Github ConceptNet AnalogySpace The Divisi inference toolkit Commonsense Computing Initiative's Webpage (Site doesn't exist) The Open Mind Initiative (Site doesn't exist) OMCSNetCPP - Open source C++ inference engine using the OMCSNet data Open Mind Common Sense in Brazil (Site broken) Advanced Interaction Laboratory Open-source artificial intelligence Knowledge bases Creative Commons-licensed databases
194026
https://en.wikipedia.org/wiki/Rensselaer%20Polytechnic%20Institute
Rensselaer Polytechnic Institute
Rensselaer Polytechnic Institute (RPI) is a private research university in Troy, New York, with additional campuses in Hartford and Groton, Connecticut. It was established in 1824 by Stephen van Rensselaer and Amos Eaton for the "application of science to the common purposes of life" and is the oldest technological university in the English-speaking world and the Western Hemisphere. Numerous American colleges or departments of applied sciences were modeled after Rensselaer. Built on a hillside, RPI's campus overlooks the city of Troy and the Hudson River, and is a blend of traditional and modern architecture. The institute operates an on-campus business incubator and the Rensselaer Technology Park. RPI is organized into six main schools which contain 37 departments, with emphasis on science and technology. It is classified among "R1: Doctoral Universities: Very High Research Activity" and is recognized for its degree programs in engineering, computing, business and management, information technology, the sciences, design, and liberal arts. As of 2017, RPI's faculty and alumni included six members of the National Inventors Hall of Fame, six National Medal of Technology winners, five National Medal of Science winners, eight Fulbright Scholarship recipients, and a Nobel Prize winner in Physics; in addition, 86 faculty or alumni are members of the National Academy of Engineering, 17 of the National Academy of Sciences, 25 of the American Academy of Arts and Sciences, eight of the National Academy of Medicine, one of the National Academy of Public Administration, and nine of the National Academy of Inventors. RPI has been ranked since 1970 among the top 50 American universities. History 1824–1900 Stephen Van Rensselaer established the Rensselaer School on 5 November 1824 with a letter to the Reverend Dr. Samuel Blatchford, in which Van Rensselaer asked Blatchford to serve as the first president. Within the letter he set down several orders of business. He appointed Amos Eaton as the school's first senior professor and appointed the first board of trustees. The school opened on Monday, 3 January 1825 at the Old Bank Place, a building at the north end of Troy. Tuition was around $40 per semester (equivalent to $800 in 2012). The fact that the school attracted students from as far as Ohio and Pennsylvania is attributed to the reputation of Eaton. Fourteen months of successful trial led to the incorporation of the school on 21 March 1826 by the state of New York. In its early years, the Rensselaer School more closely resembled a graduate school than a college, drawing graduates from many older institutions. Under Eaton, the Rensselaer School, renamed the Rensselaer Institute in 1832, was a small but vibrant center for technological research. The first civil engineering degrees in the United States were granted by the school in 1835, and many of the best-remembered civil engineers of that time graduated from the school. Important visiting scholars included Joseph Henry, who had previously studied under Amos Eaton, and Thomas Davenport, who sold the world's first working electric motor to the institute. In 1847 alumnus Benjamin Franklin Greene became the new senior professor. Earlier he had done a thorough study of European technical schools to see how Rensselaer could be improved. In 1850 he reorganized the school into a three-year polytechnic institute with six technical schools. In 1861 the name was changed to Rensselaer Polytechnic Institute. 
A severe conflagration of 10 May 1862, known as "The Great Fire", destroyed more than 507 buildings in Troy and gutted the heart of the city. The "Infant School" building that housed the Institute at the time was destroyed in this fire. Columbia University proposed that Rensselaer leave Troy altogether and merge with its New York City campus. Ultimately, the proposal was rejected and the campus left the crowded downtown for the hillside. Classes were temporarily held at the Vail House and in the Troy University building until 1864, when the Institute moved to a building on Broadway at 8th Street, now the site of the Approach. One of the first Latino student organizations in the United States was founded at RPI in 1890. The Club Hispano Americano was established by the international Latin American students who attended the institute at this time. Since 1900 In 1904 the institute was devastated by fire for the fourth time, when its main building was completely destroyed. However, RPI underwent a period of academic and resource expansion under the leadership of President Palmer Ricketts. Named President in 1901, Ricketts liberalized the curriculum by adding the Department of Arts, Science, and Business Administration, in addition to the Graduate School. He also expanded the university's resources and developed RPI into a true polytechnic institute by increasing the number of degrees offered from two to twelve; these included electrical engineering, mechanical engineering, biology, chemistry, and physics. During Ricketts's tenure, enrollment increased from approximately 200 in 1900 to a high of 1,700 in 1930. Another period of expansion occurred following World War II as returning veterans used their GI Bill education benefits to attend college. The "Freshman Hill" residence complex was opened in 1953, followed by the completion of the Commons Dining Hall in 1954, two more halls in 1958, and three more in 1968. In this same time frame (1966), Herta Regina Leng was appointed as RPI's first female full professor. She is now honored there with an annual lecture series. In 1961, there was major progress in academics at the institute with the construction of the Gaerttner Linear Accelerator, then the most powerful in the world, and the Jonsson-Rowland Science Center. The current Student Union building was opened in 1967. The next three decades brought continued growth with many new buildings (see 'Campus' below), and growing ties to industry. The "H-building", previously used for storage, became the home for the RPI incubator program, the first such program sponsored solely by a university. Shortly after this, RPI decided to invest $3 million in pavement, water and power on land it owned south of campus to create the Rensselaer Technology Park. In 1982 the New York State legislature granted RPI $30 million to build the George M. Low Center for Industrial Innovation, a center for industry-sponsored research and development. In 1999, RPI gained attention when it was one of the first universities to implement a mandatory laptop computer program. This was also the year of the arrival of President Shirley Ann Jackson, a former chairperson of the Nuclear Regulatory Commission under President Bill Clinton. She instituted "The Rensselaer Plan" (discussed below), an ambitious plan to revitalize the institute. Many advances have been made under the plan, and Jackson has enjoyed the ongoing support of the RPI Board of Trustees. 
However, her leadership style did not sit well with many faculty; on 26 April 2006, RPI faculty voted 149 to 155 in a failed vote of no confidence in Jackson. In September 2007, RPI's Faculty Senate was suspended for over four years over conflict with the administration. On 3 October 2008, RPI celebrated the opening of the $220 million Experimental Media and Performing Arts Center. That same year the national economic downturn resulted in the elimination of 98 staff positions across the institute, about five percent of the workforce. Campus construction continued, however, with the completion of the $92 million East Campus Athletic Village and the opening of the new Blitman Commons residence hall in 2009. As of 2015, all staff positions had been reinstated; the institute had grown significantly from pre-recession levels and was contributing over $1 billion annually to the economy of the Capital District. That same year, renovation of the North Hall, E-Complex, and Quadrangle dormitories began and was completed in 2016 to house the largest incoming class in Rensselaer's history. Campus RPI's campus sits upon a hill overlooking Troy and the Hudson River. The surrounding area is mostly residential neighborhoods, with the city of Troy lying at the base of the hill. The campus is bisected by 15th Street, with most of the athletic and housing facilities to the east and the academic buildings to the west. A footbridge spans the street, linking the two halves. Much of the campus features a series of Colonial Revival style structures built in the first three decades of the 20th century. Overall, the campus has enjoyed four periods of expansion. 1824–1905 RPI was originally located in downtown Troy, but gradually moved to the hilltop that overlooks the city. Buildings that remain from this time include Winslow Chemical Laboratory, a building on the National Register of Historic Places. Located at the base of the hill on the western edge of campus, it currently houses the Social and Behavioral Research Laboratory. Ricketts Campus, 1906–1935 President Palmer Ricketts supervised the construction of the school's "Green Rooftop" Colonial Revival buildings that give much of the campus a distinct architectural style. Buildings constructed during this period include the Carnegie Building (1906), Walker Laboratory (1907), Russell Sage Laboratory (1909), Pittsburgh Building (1912), Quadrangle Dormitories (1916–1927), Troy Building (1925), Amos Eaton Hall (1928), Greene Building (1931) and Ricketts Building (1935). Also built during this period was "The Approach" (1907), a massive ornate granite staircase found on the west end of campus. Originally linking RPI to the Troy Union Railroad station, it once again serves as an important link between the city and the university. In 1906 the '86 Field, home field of the football team until 2008, was completed with the support of the Class of 1886. Post-war expansion, 1946–1960 After World War II, the campus again underwent major expansion. Nine dormitories were built at the east edge of campus bordering Burdett Avenue, a location that came to be called "Freshman Hill". The Houston Field House (1949) was reassembled after being moved in pieces from its original Rhode Island location. West Hall, which was originally built in 1869 as a hospital, was acquired by the Institute in 1953. The ornate building is an example of French Second Empire architecture. It was listed on the National Register of Historic Places in 1973. 
Another unique building is the Voorhees Computing Center (VCC). Originally the St. Joseph's Seminary chapel, it was built in 1933 and acquired by Rensselaer in 1958, and after renovation served as the institute's library from 1960 until the completion of the new Folsom Library in 1976. The Folsom Library, located adjacent to the computing center, has a concrete exterior that was designed to harmonize with the light gray brick of the chapel; architecturally, it is an example of the modern brutalist style. Subsequently, the university was unsure of what to do with the chapel, or whether to keep it at all, but in 1979 decided to preserve it and renovate it to house computer labs and facilities to support the institute's computing initiatives. Today the VCC serves as the backbone for the institute's data and telephony infrastructure. Modern campus, since 1961 The modern campus features the Jonsson-Rowland Science Center (J-ROWL) (1961), Materials Research Center (MRC) (1965), Rensselaer Union (1967), Cogswell Laboratory (1971), Darrin Communications Center (DCC) (1973), Jonsson Engineering Center (JEC) (1977), Low Center for Industrial Innovation (CII) (1987), a public school building which was converted into Academy Hall (1990), and the Center for Biotechnology and Interdisciplinary Studies (2004). Tunnels connect the Low Center, DCC, JEC and Science Center. A tenth dormitory named Barton Hall was added to Freshman Hill in August 2000, featuring the largest rooms available for freshmen. On 3 October 2008, the university celebrated the grand opening of the Experimental Media and Performing Arts Center (EMPAC) situated on the west edge of campus. The building was constructed on the precipice of the hill, with the main entrance on top. Inside, elevated walkways lead into a 1,200-seat concert hall. Most of the building is encased in a glass exoskeleton, with an atrium-like space between it and the "inner building". Adjacent to and underneath the main auditorium there is a 400-seat theater, offices, and two black-box studios. Originally budgeted at $50 million, EMPAC's construction costs ballooned to over $200 million due to the difficulty of anchoring the foundation in the soft clay of the hill. In 2008, RPI announced the purchase of the former Rensselaer Best Western Inn, located at the base of the hill, along with plans to transform it into a new residence hall. After extensive renovations, the residence hall was dedicated on 15 May 2009, as the Howard N. Blitman, P.E. '50 Residence Commons. It houses about 300 students in 148 rooms and includes a fitness center, dining hall, and conference area. The new residence hall is part of a growing initiative to involve students in the Troy community and help revitalize the downtown. RPI owns and operates three office buildings in downtown Troy, the Rice and Heley buildings and the historic W. & L.E. Gurley Building. RPI also owns the Proctor's Theater building in Troy, which was purchased in 2004 with the intention of converting it into office space. As of 2011, Rensselaer had signed an agreement with Columbia Development Companies to acquire both Proctor's Theatre and the Chasan Building in Troy and launch a redevelopment. Other campuses The Institute runs a campus in Hartford, Connecticut, and a distance learning center in Groton, Connecticut. These centers are used by graduates and working professionals and are managed by the Hartford branch of RPI, Rensselaer at Hartford. 
At Hartford, graduate degrees are offered in business administration, management, computer science, computer and systems engineering, electrical engineering, engineering science, mechanical engineering and information technology. There are also a number of certificate programs and skills training programs for working professionals. Academics Rensselaer Polytechnic Institute has five schools: the School of Architecture, the School of Engineering, the School of Humanities, Arts, and Social Sciences, the Lally School of Management & Technology, and the School of Science. The School of Engineering is the largest by enrollment, followed by the School of Science, the School of Management, the School of Humanities, Arts, and Social Sciences, and the School of Architecture. There is also an interdisciplinary program in Information Technology that began in the late 1990s, as well as programs in prehealth and prelaw, Reserve Officers' Training Corps (ROTC) for students desiring commissions as officers in the armed forces, a program in cooperative education (Co-Op), and domestic and international exchange programs. Altogether, the university offers over 145 degree programs in nearly 60 fields that lead to bachelor's, master's, and doctoral degrees. In addition to traditional majors, RPI has around a dozen special interdisciplinary programs, such as Games and Simulation Arts and Sciences (GSAS), Design, Innovation, and Society (DIS), Minds & Machines, and Product Design and Innovation (PDI). RPI is a technology-oriented university; all buildings and residence hall rooms have hard-wired and wireless high-speed internet access, and all incoming freshmen have been required to have a laptop computer since 1999. Nationally, RPI is a member of the National Association of Independent Colleges and Universities (NAICU) and the NAICU's University and College Accountability Network (U-CAN). Rensselaer Plan With the arrival of President Shirley Ann Jackson came the "Rensselaer Plan", announced in 1999. Its goal is to achieve greater prominence for Rensselaer as a technological research university. Various aspects of the plan include bringing in a larger graduate student population and new research faculty, and increasing participation in undergraduate research, international exchange programs, and "living and learning communities". So far, there have been a number of changes under the plan: new facilities such as the Center for Biotechnology and Interdisciplinary Studies, the Experimental Media and Performing Arts Center, and the Computational Center for Nanotechnology Innovations (CCNI) have been built to support new programs, and application numbers have increased. In 2018, Rensselaer received a record number of applications: 20,337. According to Jared Cohon, then president of Carnegie Mellon University, in 2006, "Change at Rensselaer in the last five years has occurred with a scope and swiftness that may be without precedent in the recent history of American higher education." The ability to attract greater research funds is needed to meet the goals of the plan, and the university has set a goal of $100 million annually. Fourteen years later, in FY2013, research expenditures reached this goal. To help raise money the university mounted a $1 billion capital campaign, whose public phase began in September 2004 and was expected to finish by 2008. 
In 2001, a major milestone of the campaign was the pledging of an unrestricted gift of $360 million by an anonymous donor, believed to be the largest such gift to a U.S. university at the time. The university had been a relative stranger to such generosity, as the prior largest single gift was $15 million. By September 2006, the $1 billion goal had been exceeded, due in large part to an in-kind contribution of software commercially valued at $513.95 million by the Partners for the Advancement of Collaborative Engineering Education (PACE). In light of this, the board of trustees increased the goal of the $1 billion capital campaign to $1.4 billion by 30 June 2009. The new goal was met by 1 October 2008. In 2016, President Jackson announced during the Fall Town Hall Meeting that the institute was in the final stages of organizing a new capital campaign, which it intended to launch in 2017 to meet the goals of the Rensselaer Plan 2024. The goal of the campaign was cited as being primarily the support of financial aid for undergraduate students and the expansion of on-campus research facilities to accommodate planned increases in doctoral and graduate enrollment. The fundraising goal of the capital campaign is $1 billion, with over $400 million raised prior to the campaign going public. Ambitious spending on the Rensselaer Plan has led the university into financial difficulties, with its credit rating lowered by several agencies. Rankings For 2021, U.S. News & World Report ranked Rensselaer tied for 53rd among national universities in the U.S., 40th out of 180 for "Best Value" in undergraduate education, and tied for 68th out of the top 83 in "Most Innovative Schools". The same rankings placed Rensselaer's undergraduate engineering program tied at 32nd among schools whose highest degree is a doctorate, and its graduate program tied for 43rd out of 218 engineering schools. The Leiden Ranking (2016) placed RPI at 127 among the top 900 world universities and research institutions according to the proportion of the top 1% most frequently cited publications of a university. In 2016, The Economist ranked Rensselaer No. 18 among four-year non-vocational colleges and universities, and Times Higher Education–QS World University Rankings placed Rensselaer among the top 50 universities for technology in the world. In 2016, Rensselaer was listed among the top ten universities for highest median earnings. Civil liberties organization FIRE gave RPI its 2020 "Lifetime Censorship Award" "for its unashamed, years-long record of censoring its critics and utter disinterest in protecting students' rights". Research and development Rensselaer is classified among "R1: Doctoral Universities – Very High Research Activity". Rensselaer has established six areas of research as institute priorities: biotechnology, energy and the environment, nanotechnology, computation and information technology, and media and the arts. Research is organized under the Office of the Vice President for Research, Jonathan Dordick. In 2018, Rensselaer operated 34 research centers and maintained annual sponsored research expenditures of $100.8 million. One of the most recent of Rensselaer's research centers is the Center for Biotechnology and Interdisciplinary Studies, a 218,000-square-foot research facility and a national pacesetter for fundamental and applied research in biotechnology. 
The primary target of the research center is biologics, a research priority based on data-driven understanding of proteomics, protein regulation, and gene regulation. It involves using biocatalysis and synthetic biology tools to block or supplement the actions of specific cells or proteins in the immune system. Over the past decade, CBIS has produced over 2,000 peer-reviewed publications with over 30,000 citations and currently employs over 200 scientists and engineers. The center is also used to train undergraduate and graduate students, with over 1,000 undergraduates and 200 doctoral students trained. The center also has numerous academic and industry partners, including the Icahn School of Medicine at Mount Sinai. These partnerships have produced numerous advances over the last decade, including new commercial developments in diagnostics, therapeutics, medical devices, and regenerative medicine that are a direct result of research at the center. Examples of advancements include the creation of synthetic heparin, antimicrobial coatings, detoxification chemotherapy, on-demand bio-medicine, implantable sensors, and 3D cellular array chips. Rensselaer also hosts the Tetherless World Constellation, a multidisciplinary research institution focused on theories, methods, and applications of the World Wide Web. Research is carried out in three interconnected themes: Future Web, Semantic Foundations, and Xinformatics. At Rensselaer, a constellation is a multidisciplinary team composed of senior and junior faculty members, research scientists, and postdoctoral, graduate, and undergraduate students. The faculty experts for the TWC constellation are James Hendler, Deborah McGuinness and Peter Fox. Faculty alumni of TWC include Heng Ji (Natural Language Processing). In 2016, the Constellation received a $1 million grant from the Bill & Melinda Gates Foundation for continuing work on a novel data visualization platform that will harness and accelerate the analysis of vast amounts of data for the foundation's Healthy Birth, Growth, and Development Knowledge Integration initiative. In conjunction with the constellation, Rensselaer operates the Center for Computational Innovations, which is the result of a $100 million collaboration between Rensselaer, IBM, and New York State to further nanotechnology innovations. The center is home to the most powerful supercomputer based at a private university, consistently ranked among the most powerful in the world and capable of performing over 1.1 petaFLOPS. The center's main focus is on reducing the cost associated with the development of nanoscale materials and devices, such as those used in the semiconductor industry. The university also utilizes the center for interdisciplinary research in biotechnology, medicine, energy, and other fields. Rensselaer additionally operates a nuclear reactor and testing facility – the only university-run reactor in New York State – as well as the Gaerttner Linear Accelerator, which is currently being upgraded under a $9.44 million grant from the US Department of Energy. Students In 2018, Rensselaer's enrollment was 7,442 total resident students, including 6,590 undergraduate and 1,329 graduate students. Over 71% of Rensselaer's students are from out of state. More than 20% of students are international. Rensselaer students represent all 50 U.S. states and over 60 countries. The undergraduate student-to-faculty ratio is 13:1. 
Among the class of 2020, 66% were in the top 5 percent of their high school class, 93% in the top quarter, and 99% in the top half. The average unweighted high school GPA for enrolled students was 3.88 on a 4.0 scale, with 65% having a 3.75 GPA or higher and 99% having at least a 3.0. Rensselaer's yield rate for the Class of 2021 surpassed 20 percent in 2018, with over 20,000 applications received by Rensselaer's Office of Admissions. The middle 50% SAT score range was 1330–1500, with a median SAT score of 1420 on a scale of 1600. The middle 50% ACT score range was 29–33, with a median ACT score of 31. In 2016, Rensselaer's freshman retention rate was 94% and its admissions selectivity rating was 35th in the nation according to US News & World Report. Since 2000, undergraduate enrollment has grown by over 1,700 students, from 4,867 to 6,590 in 2018, while full-time graduate enrollment declined from 1,500 to 1,188. Roughly 12% of students received the Rensselaer Medal, a merit scholarship with a cumulative value of $100,000 for exceptional high school students in science and mathematics. 95% of full-time domestic undergraduate students receive either need-based or merit-based financial aid, with an average of 85% of total financial need met per student. In 2018, Rensselaer invested over $140 million in financial aid and scholarships for students. Gender ratio RPI became coeducational in 1942. In 1966, the male-to-female ratio was 19:1, in the 1980s it reached as low as 8:1, and in the early 1990s the ratio was around 5:1. In 2009, RPI had a ratio of 2.5:1 (72% male / 28% female). In 2016, the ratio for the incoming freshman class had fallen to 2.1:1 (68% male / 32% female), the lowest in the history of the institute. In the fall of 2016, more than 1,000 women were enrolled in Rensselaer Polytechnic Institute's undergraduate engineering programs for the first time in its history. These women represented 30 percent of the student body in engineering at the university, and women made up 32 percent of the university's overall enrollment. Shekhar Garde, Rensselaer's dean of engineering, has said he wants to increase the proportion of women at the institute to 50 percent before 2030. Greek life Rensselaer Polytechnic Institute has an extensive history of Greek community involvement on campus, including past presidents, honorary academic building dedications, and philanthropic achievements. The overall Greek system at Rensselaer stresses Leadership, Fortitude, Innovation, and Evolution. RPI currently has 29 active fraternities as well as 6 sororities, with 32 percent involvement of all males and 18 percent involvement of all females, organized under the Interfraternal Council and Panhellenic Council. Of those Greek organizations, three were founded at Rensselaer: the Theta Xi national engineering fraternity, the Sigma Delta Hispanic-interest local sorority, and the Rensselaer Society of Engineers local engineering fraternity. Theta Xi fraternity was established by RPI students on 29 April 1864, the only national fraternity founded during the Civil War. The Theta Xi Fraternity Chapter House is listed on the National Register of Historic Places. Additionally, Rensselaer is home to the Epsilon Zeta chapter of Alpha Phi Omega ("APO"), the national service fraternity, which operates a test bank and office on the top floor of the Student Union. The organization also hosts a campus lost & found, a universal can-tab collection, and a public 3D printing service. 
In 2017, Chi Phi and Theta Chi at Rensselaer co-hosted an event called "Brave A Shave For Kids With Cancer" along with several other Greek organizations, raising over $22,000 for pediatric cancer research, with dozens of participants shaving their heads to spread awareness of pediatric cancers. Many fraternities and sororities also engage in Adopt-a-Highway and host events in the local community. Since its inception, all members of Greek Life have also participated in Navigating Rensselaer & Beyond, RPI's official continuation of student orientation, by hosting annual events open to all students such as Beach Day/Hike with Greek Life, a day of hiking and team-building activities for incoming freshmen, and the Saratoga Therapeutic Equine Program, a day of service focused on horse rehabilitation programs. Greek Life organizations also operate Greek-affiliated groups including the Alumni Inter-Greek Council; Greek Greeks, a student-run venture that aims to promote sustainability and safe environmental practices in Greek chapter houses; Greek Spectrum, an LGBTQIA support and advocacy group; and the undergraduate Greek leadership society Order of Omega. Athletics The RPI Engineers are the athletic teams for the university. RPI currently sponsors 23 sports, 21 of which compete at the NCAA Division III level in the Liberty League; men's and women's ice hockey compete at the Division I level in ECAC Hockey. The official nickname of some of the school's Division III teams was changed in 1995 from the Engineers to the Red Hawks. However, the hockey, football, cross-country, tennis and track and field teams all chose to retain the Engineers name. The Red Hawks name was, at the time, very unpopular among the student body; a Red Hawk mascot was frequently taunted with thrown concessions and chants of "kill the chicken!". In 2009 the nickname for all teams was changed back to the Engineers. In contrast, the official ice hockey mascot, known as Puckman, has always been very popular. Puckman is an anthropomorphic hockey puck with an engineer's helmet. During the 1970s and 1980s, one RPI cheer was: E to the x, dy/dx, E to the x, dx Cosine, secant, tangent, sine 3.14159 Square root, cube root, log of pi Disintegrate them, RPI! Ice hockey (men's) RPI has a competitive Division I hockey team that won NCAA national titles in 1954 and 1985. Depending on how the rules are interpreted, the RPI men's ice hockey team may have the longest winning streak on record for a Division I team; in the 1984–85 season it was undefeated for 30 games, but one game was against the University of Toronto, a non-NCAA team. The streak continued into the 1985–86 season, reaching 38 games undefeated, including two wins over Toronto. The streak ended at Boston University against the Terriers. Adam Oates and Daren Puppa, two players during that time, both went on to become stars in the NHL. Joé Juneau, who played from 1987 to 1991, and Brian Pothier, who played from 1996 to 2000, also spent many years in the NHL. Graeme Townshend, who also played in the late 1980s, had a brief NHL career. He is the first man of Jamaican ancestry to play in the National Hockey League. The ice hockey team plays a significant role in the campus's culture, drawing thousands of fans each week to the Houston Field House during the season. The team's popularity even sparked the tradition of the hockey line, where students lined up for season tickets months in advance of the on-sale date. 
Today, the line generally begins a week or more before ticket sales. Another tradition since 1978 has been the "Big Red Freakout!" game held close to the first weekend of February. Fans usually dress in the school's colors, red (cherry) and white, and gifts such as T-shirts are distributed en masse. In ice hockey, RPI's biggest rival has always been the upstate engineering school Clarkson University. In recent years RPI has also developed a spirited rivalry with its conference travel partner Union College, with whom it annually plays a nonconference game in Albany for the Mayor's Cup. Ice hockey (women's) The women's ice hockey team moved to the NCAA Division I level in 2005. During the 2008–09 season the team set the record for most wins in one season (19–14–4). On 28 February 2010, Rensselaer made NCAA history. The Engineers beat Quinnipiac, 2–1, but it took five overtimes. It is now the longest game in NCAA Women's Ice Hockey history. Senior defenseman Laura Gersten had the game-winning goal. She registered it at 4:32 of the fifth overtime session to clinch not only the win but also the series victory. Lacrosse (men's) The lacrosse team represented the United States in the 1948 Olympics in London. It won the Wingate Memorial Trophy as national collegiate champions in 1952. Future NHL head coach Ned Harkness coached the lacrosse and ice hockey teams, winning national championships in both sports. Baseball The Engineers baseball squad is perennially atop the Liberty League standings, and has seen 8 players move on to the professional ranks, including 4 players selected in the MLB draft. The team is coached by Keith Glasser. The Engineers play their home games at the historic Robison Field. American football American rugby was played on campus in the late 1870s. Intercollegiate football did not begin until 1886, when an RPI team first played a Union College team on a leased field in West Troy (Watervliet). Since 1903, RPI and nearby Union have been rivals in football, making it the oldest such rivalry in the state. The teams have played for the Dutchman's Shoes since 1950. RPI football had its most successful season in 2003, when the team finished 11–2 and lost to St. John's (Minn.) in the NCAA Division III semifinal game. Athletic facilities The Houston Field House is a 4,780-seat multi-purpose arena located on the RPI campus. It opened in 1949 and is home to the RPI Engineers men's and women's ice hockey teams. The Field House was renovated starting in 2007 as part of the major campus improvement project to build the East Campus Athletic Village. The renovations included locker room upgrades, the addition of a new weight room, and a new special reception room dedicated to Ned Harkness. Additionally, as part of the renovations and through a government grant, solar panels were installed on the roof to supply power to the building. As part of the Rensselaer Plan, the Institute recently completed a major project to improve its athletic facilities with the East Campus Athletic Village. The plan included construction of a new and much larger 4,842-seat football stadium, a basketball arena with seating for 1,200, a new 50-meter pool, an indoor track and field complex, new tennis courts, new weight rooms and a new sports medicine center. The Institute broke ground on 26 August 2007, and construction of the first phase was expected to last two years. The estimated cost of the project is $78 million for phase one and $35–$45 million for phase two. 
Since the completion of the new stadium, the bleachers on the Class of '86 football field on the central campus have been removed and the field has become an open space. In the future the new space could be used for expansions of the academic buildings, but for now members of the campus planning team foresee a "historic landscape with different paths and access ways for students and vehicles alike". Student life The students of RPI have created and participate in a variety of clubs and organizations under the Student Union. About 170 of these organizations are funded by the Student Union, while another thirty, which consist mostly of political and religious organizations, are self-supporting. In 2006 the Princeton Review ranked RPI second for "more to do on campus." The Union was the last entirely student-run union at a private university in the United States until September 2017. Phalanx is RPI's Senior Honor Society. It was founded in 1912, when Edward Dion and the Student Council organized a society to recognize those RPI students who had distinguished themselves among their peers in the areas of leadership, service and devotion to the alma mater. It is a fellowship of the students most active in campus activities and has inducted more than 1,500 members since its founding. RPI has around twenty intramural sports organizations, many of which are broken down into different divisions based on level of play. Greek organizations compete in them as well as independent athletes. There are also thirty-nine club sports. Given the university's proximity to the Berkshires, Green Mountains and Adirondacks, the Ski Club and the Outing Club are some of the largest groups on campus. The Ski Club offers weekly trips to local ski areas during the winter months, while the Outing Club offers trips on a weekly basis for a variety of activities. The Rensselaer Polytechnic is the student-run weekly newspaper. The Poly printed about 7,000 copies each week and distributed them around campus until 2018, when the newspaper switched to online-only distribution due to budget concerns. Although it is the Union club with the largest budget, The Poly receives no subsidy from the Union and obtains all funding through the sale of advertisements. There is also a popular student-run magazine called Statler & Waldorf, which is published each semester. RPI has an improvisational comedy group, Sheer Idiocy, which performs several shows a semester. There are also several music groups ranging from a cappella groups such as the Rusty Pipes, Partial Credit, the Rensselyrics and Duly Noted, to several instrumental groups such as the Orchestra, the Jazz Band and a classical choral group, the Rensselaer Concert Choir. Another notable organization on campus is WRPI, the campus radio station. WRPI differs from most college radio stations in that it serves a broadcast area that includes the greater Albany area. With 10 kW of broadcasting power, WRPI maintains a stronger signal than nearly all college radio stations and some commercial stations. WRPI currently broadcasts on 91.5 FM in the Albany area. The RPI Players is an on-campus theater group that was formed in 1929. The Players resided in the Old Gym until 1965, when they moved to their present location at the 15th Street Lounge. This distinctive red shingled building had been a USO hall for the U.S. Army before being purchased by RPI. The Players have staged over 275 productions in their history. RPI songs There are a number of songs commonly played and sung at RPI events. 
Notable among them are: "The Alma Mater (Here's to Old RPI)" – sung at formal events such as commencement and convocation, also played and sung by the Pep Band at hockey and football games, and played daily at noon by the quadrangle bell tower. It was published in the first book of Songs of Rensselaer, printed in 1913. "Hail, Dear Old Rensselaer" – was the fight song during the 1960s. It is still played today by the Pep Band at athletic events. "All We've Learned at Rensselaer" – sung at the RPI commencement ceremonies by the Rensselyrics. Although the Rensselyrics are an a cappella group, this song is accompanied by piano. Each verse or section has a different musical style, several of which are closely based on Billy Joel songs or other popular songs. First Year Experience and CLASS programs Another notable aspect of student life at RPI is the "First-Year Experience" (FYE) program. Freshmen begin their stay at RPI with a week called "Navigating Rensselaer and Beyond", or NRB week. The Office of the First-Year Experience provides several programs that extend not only to freshmen but to all students. These include family weekend, community service days, the Information and Personal Assistance Center (IPAC), and the Community Advocate Program. The FYE program was awarded the 2006 NASPA Excellence Gold Award in the category of "Enrollment Management, Orientation, Parents, First-Year, Other-Year and related". Since 2008, Jackson's administration has led an effort to form the CLASS Initiative ("Clustered Learning Advocacy and Support for Students"), which requires all sophomores to live on campus with special "residence cluster deans". The transition to this program began in early 2010 amid some resistance from fraternities and students who had planned to live off campus. NROTC RPI NROTC is an officer accession program hosted at RPI with the goal of developing Midshipmen into commissioned officers in the United States Navy and Marine Corps. The unit consists of students from RPI as well as Union College. The program officially started at RPI in September 1941, just a few months before US involvement in WWII. RPI NROTC was part of the V-12 training program that was aimed at increasing the total number of commissioned officers during WWII. It focused on developing officers for the military in technical fields such as engineering, medicine, and foreign languages. Around 70% of the 932 students in the RPI class of 1945 were in the NROTC program. Since 1926, over 75 naval officers with a degree from RPI have attained flag officer rank. Aside from the US Naval Academy, this is the largest number of flag officers produced by a single institution. RPI NROTC is home to several notable alumni, including NASA astronaut CDR Reid Wiseman and RDML Lewis Combs. RDML Combs is the founder of the Navy Construction Battalion, commonly referred to as the "Seabees", which plays a crucial role in creating forward-deployed bases as well as in humanitarian efforts to bring fresh water to underdeveloped communities. Army ROTC In 1947, Rensselaer Polytechnic Institute (RPI) began an Army ROTC program affiliated with the Corps of Engineers, Transportation Corps, and the Signal Corps. Siena College later formed its own ROTC program in September 1950 as a Field Artillery program. 
The intent of focusing programs on specific branches was to align cadets with branches that closely matched their academic backgrounds, a practice that was discontinued nationwide in the 1960s. Cadets today have the opportunity to compete for any Army branch regardless of their academic background. In 1981, the State University at Albany (UAlbany) established an extension center of the RPI program to allow students to enroll in ROTC courses at the UAlbany campus. As part of the Army's downsizing in the early 1990s, the separate RPI Army ROTC program was discontinued, and Army ROTC was reduced to its current configuration of 270 programs nationwide. Siena College became the host institution for Army ROTC in the New York Capital District Region. RPI and UAlbany are now part of Cadet Command's partnership school program, and the Mohawk Battalion includes cross-enrolled students throughout the capital region. Among these colleges and universities are: Rensselaer Polytechnic Institute, Union College, Skidmore College, the College of Saint Rose, Albany College of Pharmacy & Health Sciences, and the University at Albany. Religious Clubs One of the religious clubs at Rensselaer Polytechnic Institute is RPI Sage Hillel, a Jewish club that incorporates both Rensselaer Polytechnic Institute and Russell Sage College. The Hillel club is part of a much larger international organization, Hillel International. Hillel's purpose is "Enriching the lives of Jewish students so that they may enrich the Jewish people and the world," and its vision is "We envision a world where every student is inspired to make an enduring commitment to Jewish life, learning and Israel." Both statements can be found on its website, https://www.hillel.org/about. Hillel is not a very big club at RPI; like all clubs, it was negatively affected by COVID-19 and has been slowly making a comeback. Hillel meets on Fridays at 6:30 for services and has several other activities throughout the week, as decided by the Hillel board. Another religious club at RPI is the Cru club. Cru is a non-denominational Christian club that, similarly to Hillel, holds worship on Fridays from 7 pm. There are other events throughout the week, such as a men's small group, a women's small group, a Saturday morning small group, and a book club. The Cru club at RPI is also part of a much larger organization, with many such programs at schools across the country. Cru was originally founded in 1951 by Bill and Vonette Bright at Fuller Theological Seminary. Its values are faith, growth, and fruitfulness. Cru holds retreats several times a year and has many resources dedicated to helping people who are curious about the Christian faith. Cru is a good deal bigger than Hillel and regularly draws more members to its Friday evening worship, the most popular time for attendance. History of Women Rensselaer Polytechnic Institute has historically been a male-dominated institution. The first woman to apply to RPI was Elizabeth R. Bruswell, in 1873. However, she did not attend: she was the only woman to apply, and it was suggested that she not be admitted because it would not be comfortable for her to be the only woman on campus. For many years afterwards, the school continued to admit only men. It was not until 1942 that women were welcome to enroll in classes at Rensselaer. 
The First Women at Rensselaer Polytechnic Institute Students Camilla (Trent) Cluett (Architecture), Elizabeth English (Biology), Helen Ketchum (Architecture), Lois Graham (Mechanical Engineering), and Mary Ellen Rathbun (Metallurgical Engineering) were the first women to enroll, in 1942. Lois Graham and Mary Ellen Rathbun became the first to graduate, on 22 April 1945. In addition, Antoinette A. Patti was the first woman to receive a master's degree from RPI, in Chemistry, in February 1947. The first doctoral degree received by a woman at RPI went to Reva R. G. Servoss in June 1954, also in Chemistry. Faculty Miss Hazel Brennan became the first woman assistant instructor in Chemistry in 1918; she was officially the first woman to teach at Rensselaer Polytechnic Institute. The following year, 1919, Marie De Pierpont was hired as an instructor in French; she was later named professor and, in 1928, head of the language department. She was the first woman to hold a full professorship at the institute, and she left the position in 1932. It was not until 11 years later, in 1943, that another woman was hired as an instructor. Herta Leng, who worked in the Physics Department, was given the title of assistant professor in 1945 and became a full professor in 1966, the second woman to be a full professor at Rensselaer Polytechnic Institute. To this day, the percentage of women enrolled at RPI is still significantly lower than that of men; the ratio as of 2020 was about 32% women and 68% men. The role of women studying at Rensselaer Polytechnic Institute continues to grow, providing inspiration for young girls who study STEM. Notable alumni According to the Rensselaer Alumni Association, there are nearly 100,000 RPI graduates currently living in the United States, and another 4,378 living abroad. In 1995, the Alumni Association created the Rensselaer Alumni Hall of Fame. Several notable 19th century civil engineers graduated from RPI. These include the visionary of the transcontinental railroad, Theodore Judah; Brooklyn Bridge engineer Washington Roebling; George Washington Gale Ferris Jr., who designed and built the original Ferris Wheel; and Leffert L. Buck, the chief engineer of the Williamsburg Bridge in New York City. Many RPI graduates have made important inventions, including Allen B. DuMont ('24), creator of the first commercial television and radar; Keith D. Millis ('38), inventor of ductile iron; Ted Hoff ('58), father of the microprocessor; Raymond Tomlinson ('63), often credited with the invention of e-mail; Steven Sasson, inventor of the digital camera; and Curtis Priem ('82), designer of the first graphics processor for the PC and co-founder of NVIDIA. RPI Prof. Matthew Hunter invented a process to refine titanium in 1910. H. Joseph Gerber pioneered computer-automated manufacturing systems for industry. In addition to NVIDIA, RPI graduates have also gone on to found or co-found major companies such as John Wiley and Sons, Texas Instruments, Fairchild Semiconductor, PSINet, MapInfo, Adelphia Communications, Level 3 Communications, Garmin, Bugle Boy and Vacasa. Several RPI graduates have played a part in the U.S. space program: George Low (B.Eng. 1948, M.S. 1950) was manager of the Apollo 11 project and served as president of RPI, and astronauts John L. Swigert, Jr., Richard Mastracchio, Gregory R. Wiseman, and space tourist Dennis Tito are alumni. The Electric Power Research Institute (EPRI) was founded by Dr. Chauncey Starr, who graduated from RPI with a PhD in physics in 1935. 
Political figures who graduated from RPI include federal judge Arthur J. Gajarsa (B.S. 1962); Major General Thomas Farrell of the Manhattan Project; Edward Burton Hughes, Acting Commissioner of the New York State Department of Transportation in 1969, Executive Deputy Commissioner of the New York State Department of Transportation from 1967 to 1970, and Deputy Superintendent of the New York State Department of Public Works from 1952 to 1967; Bertram Dalley Tallamy, seventh Federal Highway Administrator under the Federal-Aid Highway Act of 1956, in office from 5 February 1957 to 20 January 1961; DARPA director Tony Tether; Representative John Olver of Massachusetts's 1st congressional district; Senators John Barrasso of Wyoming, Mark Shepard of Vermont, and George R. Dennis of Maryland; and Prime Minister Hani Al-Mulki of Jordan. Notable ice hockey players include NHL Hockey Hall of Famer and five-time NHL All-Star Adam Oates (1985), Stanley Cup winner and former NHL All-Star Mike McPhee (1982), two-time Calder Cup winner Neil Little (1994), former NHL All-Rookie Joé Juneau (1991), and former NHL All-Star Daren Puppa (1985). Other notable alumni include 1973 Nobel Prize in Physics winner Ivar Giaever (Ph.D. 1964); the first African-American woman to become a thoracic surgeon, Rosalyn Scott (B.S. 1970); director of Linux International Jon Hall (M.S. 1977); NCAA president Myles Brand (B.S. 1964); Lois Graham (B.S.ME 1946), who was the first woman to receive a degree in engineering from RPI and went on to become the first woman in the US to receive a PhD in engineering; adult stem cell pioneer James Fallon; Michael D. West, gerontologist and stem cell scientist, founder of Geron and now CEO of BioTime (1976); director Bobby Farrelly (1981); David Ferrucci, lead researcher on IBM's Watson/Jeopardy! project; 66th AIA Gold Medal-winning architect Peter Q. Bohlin; Matt Patricia, former head coach for the Detroit Lions; Garrettina LTS Brown, founder of Garrett's List and King Breeders and inventor of FreeTV; Luis Acuña-Cedeño, Governor of the Venezuelan Sucre State and former Minister of Universities; Andrew Franks, former placekicker for the Miami Dolphins of the National Football League; Sean Conroy, the first openly gay professional baseball player; Prem Jain, the "Father of Green Buildings in India"; and Keith Raniere, an American felon and the founder of NXIVM, a multi-level marketing company and cult. See also Association of Independent Technological Universities References Further reading External links RPI Athletics website Schools in Troy, New York Private universities and colleges in New York (state) Engineering universities and colleges in New York (state) Technological universities in the United States Schools in Rensselaer County, New York Education in New York's Capital District Educational institutions established in 1824 1824 establishments in New York (state) Tourist attractions in Rensselaer County, New York
49962980
https://en.wikipedia.org/wiki/Moscow%20State%20University%2C%20Tashkent
Moscow State University, Tashkent
Moscow State University, Tashkent, or the Branch of Moscow State University Named after M.V. Lomonosov in Tashkent, was established in 2006 by the government of Uzbekistan as a branch of Moscow State University. The university primarily focuses on two areas: psychology and computer science. The campus is at 22 Amir Temur Prospect. Founding The decision to open the branch was signed at a meeting between the presidents of the Russian Federation and Uzbekistan on 14 November 2005 in Moscow. The branch was established in Tashkent on 24 February 2006 by a resolution of the President of Uzbekistan Islam Karimov. The branch was allocated a complex of buildings under construction in which an academic lyceum of the Tashkent Automobile and Road Institute was originally supposed to be housed. Mission and objectives The purpose of the university is to train qualified specialists and professionals. The students are taught to the standards of Moscow State University and of international institutions, while upholding the goals of the national education system of Uzbekistan. Performance at MSU is evaluated according to the legislation of the Republic of Uzbekistan and the Russian Federation. These are the official objectives of the university: Train highly qualified specialists who can solve problems in their branch of science by combining theoretical knowledge and scientific research Integrate scientific disciplines by using the scientific and educational potential of MSU, bringing talented people to do scientific research Work with international institutions to develop professionals in their scientific disciplines, paying attention to world cultural processes Expand mutual cooperation among Uzbek, Russian, and international specialists to train well-rounded employees Develop textbooks and other educational materials Re-train and enhance the knowledge and qualifications of professors and teachers in new academic disciplines and professions Organize the procedures to print scientific works and disseminate them Campus The MSU campus has one of the biggest resource centers among Uzbekistani universities, containing more than 8,000 textbooks, books, magazines, and other materials. It also has an e-library, a multimedia studio, and a video conference hall. The students are able to watch video lectures related to their studies after class. The video conference hall is mainly used by students to attend live video lectures that are broadcast from Moscow. In addition, there is a hall on campus that can hold 220 people and allows larger lectures and events to be held. The campus has facilities that allow students to participate in athletics in their free time. There is a large winter sports center that has seats for spectators, and an outdoor athletics complex with three tennis courts, two volleyball courts, and one basketball court. To assist the students as much as possible in their studies, MSU has opened a printing (polygraphy) center on the campus. This center helps the students complete their assignments faster. There is a medical center on campus that makes up a large portion of the main building. MSU even has its own hotel with 10 rooms, which are mainly used by visiting teachers invited to the university to give lectures. Structure The branch is financed from the state budget of Uzbekistan, in the form of grants allocated through the Ministry of Higher Education of the Republic, and by payments from students studying on a fee-contract basis. 
The branch is currently headed by Professor V. B. Kudryavtsev (Moscow), Doctor of Physical and Mathematical Sciences; the deputy heads are V. A. Nosov (Moscow) and T. Yu. Bazarov (Moscow), the executive director is E. M. Saydamatov, and the deputy executive directors are T. A. O. Karshiev, A. B. Mamanazarov, and Sh. Sh. Mirzaev. Faculties Currently, the branch offers bachelor's and master's degrees in two areas: "Psychology" and "Applied Mathematics and Computer Science". They provide students with the essential theoretical and practical knowledge needed to become professionals in their fields. Faculty of Psychology The main purpose of the Faculty of Psychology is to train professionals who work and assist in fields related to the medical sciences, solve difficult problems, and help citizens. The length of study in this faculty is set at five years. The lessons are given only in Russian. The curriculum involves both general and professional subjects: General psychology Experimental psychology General psychological practicum History of psychology Zoopsychology Psychogenetics Methodical psychology Mathematical methods in psychology Developmental psychology Social psychology Psychology of work Clinical psychology Special psychology Psychophysiology Methods of teaching psychology Faculty of Analytical Mathematics and Computer Science The Faculty of Analytical Mathematics and Computer Science is the main faculty at Moscow State University. It focuses on subjects in analytical mathematics, computer science, and computer programming. The length of study is set at 4 years. All of the lessons are in Russian. The curriculum prioritizes developing students' computer science skills. After graduation, students can work at academic institutions, universities, government agencies, banks, insurance companies, finance companies, consulting companies, and national and international corporations that employ computer technologies. Subjects are listed below: First year: Discrete mathematics Linear algebra and analytic geometry Mathematical analysis Algorithms Assembly language English language Fundamentals of cybernetics History Philosophy Second year: Functional analysis Mathematical analysis English language Ordinary differential equations Operating systems Physics Fundamentals of psychology Programming systems Probability theory and mathematical statistics Third year: Optimization methods Artificial intelligence Compiler construction Databases Physical fundamentals of computer systems Programming languages Equations of mathematical physics Fourth year: Mathematical logic Game theory Concepts of modern informatics Law Modern problems of analytical mathematics Sociology Special course Examinations at the Faculty of Analytical Mathematics and Computer Science are held in three subjects: Algorithms and combinatorics Fundamentals of cryptography Mathematical logic Scientific work On 28 September 2018, the branch held the Republican Scientific and Practical Conference "Actual problems of psychology in Uzbekistan." Admissions Admission procedures and attestations are in accordance with standards set by Moscow State University. The curriculum, educational programs, and educational materials used in Tashkent are also approved by Moscow State University. Until 2011, study at the Faculty of Psychology lasted five years (specialist degree); currently the period of study in both areas is four years (bachelor's degree). The term of study for graduate programs is two years. 
Entrance examinations To complete the admission procedure, students must pass examinations approved by Moscow State University and Uzbekistan's ministry of higher education. Interested applicants must provide the required documents and pass three examinations according to the faculty they choose. The Faculty of Psychology requires passing three written examinations in general subjects: Mathematics Biology Russian language Interested applicants to the Faculty of Analytical Mathematics and Computer Science must also pass written examinations in three general subjects: Mathematics Computer science Russian language Applicants who have completed the examination procedure are offered places at the university as students. Branch training The lessons are taught in Russian by teachers from Uzbekistan and the Russian Federation. The head branch of Moscow State University annually sends specialists to the school to supervise the organization of lessons and to act as scientific advisors for bachelor's and master's degree theses. As of 2017, each faculty has 20 budgeted and 30 contracted places for study in the bachelor's degree program and 4 budgeted and 6 contracted places in the master's program. Graduates are awarded original Moscow State University diplomas. Students In 2018, 434 students were enrolled at the branch, of which 397 were undergraduates and 37 were graduate students. Ninety percent of the graduate students are bachelor's graduates of the branch. A team representing the branch (T. Sytdikov, B. Soliev, A. Bystrygova; coach B. Ashirmatov) qualified for the final stage of the International Student Competition in programming in 2014, where it took 37th place among the 122 participating teams. Among the students there are prize-winners in international sports competitions, such as Farangis Aliyeva (1st place in the Open CIS Cup – ITF World Taekwondo Cup). Branch graduates By 2019, the Tashkent branch of MSU had graduated over 600 specialists. Some of them continued their studies as postgraduate students at Moscow State University and defended theses for Candidate of Science degrees. Graduates of the branch work in various ministries of Uzbekistan (Information Technologies and Communications; Higher and Secondary Special Education; Public Education; Preschool Education), the Institute of Mathematics of the Academy of Sciences, the scientific and practical research center "Oila" ("Family"), the National University of Uzbekistan, the Lukoil company, the Republican Center for Social Adaptation of Children, the public children's fund "Sen yolg`iz emassan" ("You are not alone"), the "Association of Psychologists of Uzbekistan", and the Federation of Gymnastics of Uzbekistan. See also Inha University in Tashkent Management Development Institute of Singapore in Tashkent Tashkent Automobile and Road Construction Institute Tashkent Financial Institute Tashkent Institute of Irrigation and Melioration Tashkent State Agrarian University Tashkent State Technical University Tashkent State University of Economics Tashkent State University of Law Tashkent University of Information Technologies TEAM University Tashkent Turin Polytechnic University in Tashkent University of World Economy and Diplomacy Westminster International University in Tashkent References External links Official web-site of MSU in Moscow (Faculty of psychology) Information in Uzbek language Information about the university in Uzbek Buildings and structures in Tashkent Education in Tashkent Universities in Uzbekistan
7952276
https://en.wikipedia.org/wiki/Joseph%20F.%20Traub
Joseph F. Traub
Joseph Frederick Traub (June 24, 1932 – August 24, 2015) was an American computer scientist. He was the Edwin Howard Armstrong Professor of Computer Science at Columbia University and External Professor at the Santa Fe Institute. He held positions at Bell Laboratories, University of Washington, Carnegie Mellon, and Columbia, as well as sabbatical positions at Stanford, Berkeley, Princeton, California Institute of Technology, and Technical University, Munich. Traub was the author or editor of ten monographs and some 120 papers in computer science, mathematics, physics, finance, and economics. In 1959 he began his work on optimal iteration theory, culminating in his 1964 monograph, which is still in print. Subsequently, he pioneered work with Henryk Woźniakowski on computational complexity applied to continuous scientific problems (information-based complexity). He collaborated in creating significant new algorithms, including the Jenkins-Traub Algorithm for Polynomial Zeros, as well as the Kung-Traub, Shaw-Traub, and Brent-Traub algorithms. One of his research areas was continuous quantum computing. As of November 10, 2015, his works had been cited some 8,500 times, and he had an h-index of 35. From 1971 to 1979 he headed the Computer Science Department at Carnegie Mellon and led it from a critical period to eminence. From 1979 to 1989 he was the founding Chair of the Computer Science Department at Columbia. From 1986 to 1992 he served as founding Chair of the Computer Science and Telecommunications Board of the National Academies and held the post again from 2005 to 2009. Traub was founding editor of the Annual Review of Computer Science (1986–1990) and Editor-in-Chief of the Journal of Complexity (1985–2015). Both his research and his institution-building work have had a major impact on the field of computer science. Early career Traub attended the Bronx High School of Science, where he was captain and first board of the chess team. After graduating from City College of New York, he entered Columbia in 1954 intending to take a PhD in physics. In 1955, on the advice of a fellow student, Traub visited the IBM Watson Research Lab at Columbia. At the time, this was one of the few places in the country where a student could gain access to computers. Traub found that his aptitude for algorithmic thinking was a perfect match for computing. In 1957 he became a Watson Fellow through Columbia. His thesis was on computational quantum mechanics. His 1959 PhD was in applied mathematics, since computer science degrees were not yet available. (Indeed, there was no Computer Science Department at Columbia until Traub was invited there in 1979 to start the Department.) Career In 1959, Traub joined the Research Division of Bell Laboratories in Murray Hill, NJ. One day a colleague asked him how to compute the solution of a certain problem. Traub could think of a number of ways to solve the problem. What was the optimal algorithm, that is, a method which would minimize the required computational resources? To his surprise, there was no theory of optimal algorithms. (The phrase computational complexity, which is the study of the minimal resources required to solve computational problems, was not introduced until 1965.) Traub had the key insight that the optimal algorithm for solving a continuous problem depended on the available information. This eventually led to the field of information-based complexity. The first area to which Traub applied his insight was the solution of nonlinear equations. 
This research led to the 1964 monograph, Iterative Methods for the Solution of Equations, which is still in print. In 1966 he spent a sabbatical year at Stanford University, where he met a student named Michael Jenkins. Together they created the Jenkins-Traub Algorithm for Polynomial Zeros. This algorithm is still one of the most widely used methods for this problem and is included in many textbooks. In 1970 he became a professor at the University of Washington, and in 1971 he became Head of the Carnegie Mellon Computer Science Department. The Department was quite small, its faculty including Gordon Bell, Nico Habermann, Allen Newell, Raj Reddy, Herbert A. Simon, and William Wulf. Just prior to 1971 many faculty had left the Department to take positions elsewhere. Those professors who remained formed a core of world-class scientists recognized as leaders of the discipline. By 1978 the Department had grown to some 50 teaching and research faculty. One of Traub's PhD students was H. T. Kung, now a chaired professor at Harvard. They created the Kung-Traub algorithm for computing the expansion of an algebraic function. They showed that computing the first N terms of the expansion was no harder than multiplying two polynomials of degree N. This problem had been worked on by Isaac Newton, who missed a key point. In 1973 he invited Henryk Woźniakowski to visit CMU. They pioneered the field of information-based complexity, co-authoring three monographs and numerous papers. Woźniakowski is now an emeritus professor at both Columbia and the University of Warsaw, Poland. In 1978, while on sabbatical at Berkeley, he was recruited by Peter Likins to become founding Chairman of the Computer Science Department at Columbia and Edwin Howard Armstrong Professor of Computer Science. He served as chair from 1979 to 1989. In 1980 he co-authored A General Theory of Optimal Algorithms with Woźniakowski. This was the first research monograph on information-based complexity. Greg Wasilkowski joined Traub and Woźniakowski in two more monographs: Information, Uncertainty, Complexity (Addison-Wesley, 1983) and Information-Based Complexity (Academic Press, 1988). In 1985 Traub became founding Editor-in-Chief of the Journal of Complexity. This was probably the first journal to have complexity in the sense of computational complexity in its title. Starting with two issues and 285 pages in 1985, the Journal now publishes six issues and nearly 1000 pages. In 1986, he was asked by the National Academies to form a Computer Science Board. The original name of the Board was the Computer Science and Technology Board (CSTB). Several years later CSTB was asked to also be responsible for telecommunications, so it was renamed the Computer Science and Telecommunications Board, preserving the abbreviation CSTB. The Board deals with critical national issues in computer science and telecommunications. Traub served as founding chair from 1986 to 1992 and held the post again from 2005 to 2009. In 1990 Traub taught in the summer school of the Santa Fe Institute (SFI). He subsequently played a variety of roles at SFI. In the nineties he organized a series of Workshops on Limits to Scientific Knowledge, funded by the Alfred P. Sloan Foundation. The goal was to enrich science in the same way that the work of Gödel and Turing on the limits of mathematics enriched that field. There was a series of workshops on limits in various disciplines: physics, economics, and geophysics. 
Starting in 1991, Traub was co-organizer of an international Seminar on "Continuous Algorithms and Complexity" at Schloss Dagstuhl, Germany. The ninth Seminar was held in September 2006. Many of the Seminar talks were on information-based complexity and, more recently, on continuous quantum computing. Traub was invited by the Accademia Nazionale dei Lincei in Rome, Italy, to present the 1993 Lezione Lincee. He chose to give the cycle of six lectures at the Scuola Normale in Pisa. He invited Arthur Werschulz to join him in publishing the lectures. The lectures appeared in expanded form as Complexity and Information, Cambridge University Press, 1998. In 1994 he asked a PhD student, Spassimir Paskov, to compare the Monte Carlo method (MC) with the quasi-Monte Carlo method (QMC) when calculating a collateralized mortgage obligation (CMO) Traub had obtained from Goldman Sachs. This involved the numerical approximation of a number of integrals in 360 dimensions. To the surprise of the research group, Paskov reported that QMC always beat MC for this problem. People in finance had always used MC for such problems, and the experts in number theory believed QMC should not be used for integrals of dimension greater than 12. Paskov and Traub reported their results to a number of Wall Street firms, to considerable initial skepticism. They first published the results in 1995. The theory and software were greatly improved by Anargyros Papageorgiou. Today QMC is widely used in the financial sector to value financial derivatives. QMC is not a panacea for all high-dimensional integrals; research is continuing on the characterization of problems for which QMC is superior to MC (an illustrative MC-versus-QMC comparison is sketched below). In 1999 Traub received the Mayor's medal for Science and Technology. Decisions regarding this award are made by the New York Academy of Sciences. The medal was awarded by Mayor Rudy Giuliani in a ceremony at Gracie Mansion. Moore's law is an empirical observation that the number of features on a chip doubles roughly every 18 months. This has held since the early 1960s and is responsible for the computer and telecommunications revolution. It is widely believed that Moore's law will cease to hold in 10–15 years using silicon technology. There is therefore interest in creating new technologies. One candidate is quantum computing, that is, building a computer using the principles of quantum mechanics. Traub and his colleagues decided to work on continuous quantum computing. The motivation is that most problems in physical science, engineering, and mathematical finance have continuous mathematical models. In 2005 Traub donated some 100 boxes of archival material to the Carnegie Mellon University Library. This collection is being digitized. Patents on algorithms and software The U.S. patents US5940810 and US0605837 were issued to Traub et al. for the FinDer Software System and were assigned to Columbia University. These patents cover an application of a well-known technique (low-discrepancy sequences) to a well-known problem (valuation of securities). Personal life Traub had two daughters, Claudia Traub-Cooper and Hillary Spector. He lived in Manhattan and Santa Fe with his wife, author Pamela McCorduck. He often opined on current events by writing to the New York Times, which frequently published his comments. 
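The following Python sketch gives a toy illustration of the MC-versus-QMC comparison described above; it is not Paskov and Traub's CMO computation, and the integrand (the Sobol' g-function, whose exact integral is 1) is simply a convenient 360-dimensional stand-in. It assumes NumPy and a recent SciPy (1.7 or later, for the scipy.stats.qmc module) are installed; the dimension, sample size, and weights are arbitrary choices made for the example.

```python
# Toy comparison of plain Monte Carlo vs. quasi-Monte Carlo (scrambled Sobol')
# on a 360-dimensional integral whose exact value is known to be 1.
# This is only a stand-in for the CMO valuation described in the text.
import numpy as np
from scipy.stats import qmc

D = 360                  # dimension, matching the CMO example above
M = 12                   # 2**M sample points (powers of two suit Sobol' sequences)
N = 2 ** M
a = np.arange(1, D + 1)  # weights: later coordinates matter less and less

def g(x):
    """Sobol' g-function: each factor integrates to 1 over [0, 1]."""
    return np.prod((np.abs(4.0 * x - 2.0) + a) / (1.0 + a), axis=1)

# Plain Monte Carlo: independent uniform points.
rng = np.random.default_rng(0)
mc_estimate = g(rng.random((N, D))).mean()

# Quasi-Monte Carlo: a scrambled Sobol' low-discrepancy point set.
sobol = qmc.Sobol(d=D, scramble=True, seed=0)
qmc_estimate = g(sobol.random_base2(m=M)).mean()

print("exact value : 1.0")
print(f"MC  estimate: {mc_estimate:.6f}  (error {abs(mc_estimate - 1.0):.2e})")
print(f"QMC estimate: {qmc_estimate:.6f}  (error {abs(qmc_estimate - 1.0):.2e})")
```

Because the weights shrink the influence of higher coordinates, this integrand has low effective dimension, which is one commonly cited explanation for why QMC performed so well on the finance problems mentioned above.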
Selected honors and distinctions Founding Chair, Computer Science and Telecommunications Board, National Academies, 1986–92, 2005–2009 Distinguished Senior Scientist Award, Alexander von Humboldt Foundation, 1992, 1998 1993 Lezione Lincee, Accademia Nazionale dei Lincei, Rome, Italy Lecture, Presidium, Academy of Sciences, Moscow, USSR, 1990 First Prize, Ministry of Education, Poland, for the research monograph "Information-Based Complexity", 1989 Board of Governors, New York Academy of Sciences, 1986–89 (Executive Committee 1987–89) Fellow: American Association for the Advancement of Science, 1971; ACM, 1994; New York Academy of Sciences, 1999; American Mathematical Society, 2012 1999 New York City Mayor's Award for Excellence in Science and Technology Festschrift for Joseph F. Traub, Academic Press, 1993 Festschrift for Joseph F. Traub, Elsevier, 2004 Honorary Doctorate of Science, University of Central Florida, 2001 Founding Editor-in-Chief, Journal of Complexity, 1985 Selected publications Selected monographs Iterative Methods for the Solution of Equations, Prentice Hall, 1964. Reissued Chelsea Publishing Company, 1982; Russian translation MIR, 1985; reissued American Mathematical Society, 1998. Algorithms and Complexity: New Directions and Recent Results (editor), Academic Press, 1976. Information-Based Complexity, Academic Press, 1988 (with G. Wasilkowski and H. Woźniakowski). Complexity and Information, Cambridge University Press, 1998 (with A. G. Werschulz); Japanese translation, 2000. Selected papers Variational Calculations of the State of Helium, Phys. Rev. 116, 1959, 914–919. The Future of Scientific Journals, Science 158, 1966, 1153–1159 (with W. S. Brown and J. R. Pierce). A Three-Stage Variable-Shift Iteration for Polynomial Zeros and Its Relation to Generalized Rayleigh Iteration, Numerische Mathematik 14, 1970, 252–263 (with M. A. Jenkins). Computational Complexity of Iterative Processes, SIAM Journal on Computing 1, 1972, 167–179. Parallel Algorithms and Parallel Computational Complexity, Proceedings IFIP Congress, 1974, 685–687. Convergence and Complexity of Newton Iteration for Operator Equations, Journal of the ACM 26, 1979, 250–258 (with H. Woźniakowski). All Algebraic Functions Can Be Computed Fast, Journal of the ACM 25, 1978, 245–260 (with H. T. Kung). On the Complexity of Composition and Generalized Composition of Power Series, SIAM Journal on Computing 9, 1980, 54–66 (with R. Brent). Complexity of Linear Programming, Operations Research Letters 1, 1982, 59–62 (with H. Woźniakowski). Information-Based Complexity, Nature 327, July 1987, 29–33 (with E. Packel). The Monte Carlo Algorithm with a Pseudo-Random Number Generator, Mathematics of Computation 58, 1992, 303–339 (with H. Woźniakowski). Breaking Intractability, Scientific American, January 1994, 102–107 (with H. Woźniakowski). Translated into German, Italian, Japanese and Polish. Linear Ill-Posed Problems are Solvable on the Average for All Gaussian Measures, Math Intelligencer 16, 1994, 42–48 (with A. G. Werschulz). Faster Evaluation of Financial Derivatives, Journal of Portfolio Management 22, 1995, 113–120 (with S. Paskov). A Continuous Model of Computation, Physics Today, May 1999, 39–43. No Curse of Dimensionality for Contraction Fixed Points in the Worst Case, Econometrica, Vol. 70, No. 1, January 2002, 285–329 (with J. Rust and H. Woźniakowski). Path Integration on a Quantum Computer, Quantum Information Processing, 2003, 365–388 (with H. Woźniakowski). 
References External links Joseph Traub's Columbia homepage Joseph Traub digital archive at Carnegie Mellon Research monograph Complexity and Information Oral history interviews with Joseph F. Traub in April 1984, Oct. 1984, and March 1985 Charles Babbage Institute, University of Minnesota. SIAM Oral History CMU Distinguished Lecture Video CMU 50th Anniversary Video 1932 births 2015 deaths American computer scientists American people of German-Jewish descent Jewish American scientists Fellows of the Association for Computing Machinery California Institute of Technology faculty Carnegie Mellon University faculty Columbia University faculty Columbia School of Engineering and Applied Science faculty Columbia School of Engineering and Applied Science alumni Stanford University School of Engineering faculty University of California, Berkeley faculty University of Washington faculty Fellows of the American Mathematical Society Fellows of the Society for Industrial and Applied Mathematics Members of the United States National Academy of Engineering Santa Fe Institute people Scientists at Bell Labs The Bronx High School of Science alumni City College of New York alumni 20th-century American mathematicians 21st-century American mathematicians Annual Reviews (publisher) editors
196882
https://en.wikipedia.org/wiki/Burrows%E2%80%93Abadi%E2%80%93Needham%20logic
Burrows–Abadi–Needham logic
Burrows–Abadi–Needham logic (also known as the BAN logic) is a set of rules for defining and analyzing information exchange protocols. Specifically, BAN logic helps its users determine whether exchanged information is trustworthy, secured against eavesdropping, or both. BAN logic starts with the assumption that all information exchanges happen on media vulnerable to tampering and public monitoring. This has evolved into the popular security mantra, "Don't trust the network." A typical BAN logic sequence includes three steps: Verification of message origin Verification of message freshness Verification of the origin's trustworthiness. BAN logic uses postulates and definitions – like all axiomatic systems – to analyze authentication protocols. Use of BAN logic often accompanies a security protocol notation formulation of a protocol and is sometimes given in papers. Language type BAN logic, and logics in the same family, are decidable: there exists an algorithm that takes BAN hypotheses and a purported conclusion and answers whether or not the conclusion is derivable from the hypotheses. The proposed algorithms use a variant of magic sets. Alternatives and criticism BAN logic inspired many other similar formalisms, such as GNY logic. Some of these try to repair one weakness of BAN logic: the lack of a good semantics with a clear meaning in terms of knowledge and possible universes. However, starting in the mid-1990s, crypto protocols were analyzed in operational models (assuming perfect cryptography) using model checkers, and numerous bugs were found in protocols that were "verified" with BAN logic and related formalisms. In some cases a protocol that was reasoned to be secure by the BAN analysis was in fact insecure. This has led to the abandonment of BAN-family logics in favor of proof methods based on standard invariance reasoning. Basic rules The definitions and their implications are below (P and Q are network agents, X is a message, and K is an encryption key): P believes X: P acts as if X is true, and may assert X in other messages. P has jurisdiction over X: P's beliefs about X should be trusted. P said X: At one time, P transmitted (and believed) message X, although P might no longer believe X. P sees X: P receives message X, and can read and repeat X. {X}K: X is encrypted with key K. fresh(X): X has not previously been sent in any message. key(K, P↔Q): P and Q may communicate with the shared key K. The meaning of these definitions is captured in a series of postulates: If P believes key(K, P↔Q), and P sees {X}K, then P believes (Q said X) If P believes (Q said X) and P believes fresh(X), then P believes (Q believes X). P must believe that X is fresh here. If X is not known to be fresh, then it might be an obsolete message, replayed by an attacker. If P believes (Q has jurisdiction over X) and P believes (Q believes X), then P believes X There are several other technical postulates having to do with composition of messages. For example, if P believes that Q said <X, Y>, the concatenation of X and Y, then P also believes that Q said X, and P also believes that Q said Y. Using this notation, the assumptions behind an authentication protocol can be formalized. Using the postulates, one can prove that certain agents believe that they can communicate using certain keys. If the proof fails, the point of failure usually suggests an attack which compromises the protocol. 
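Because the three core postulates are simple syntactic rules, they can be applied mechanically. The Python sketch below is only an illustration of that idea, not an established BAN toolkit: statements are represented as nested tuples, and all of the helper names (believes, said, sees, fresh, jurisdiction, key, enc, ban_closure) are invented for this example.

```python
# A minimal, illustrative encoding of the three core BAN postulates as a
# forward-chaining closure over a set of facts. All names are hypothetical
# helpers for this sketch, not part of any standard library or BAN tool.

def believes(p, x):     return ("believes", p, x)
def said(p, x):         return ("said", p, x)
def sees(p, x):         return ("sees", p, x)
def fresh(x):           return ("fresh", x)
def jurisdiction(p, x): return ("jurisdiction", p, x)         # P has jurisdiction over X
def key(k, p, q):       return ("key", k, frozenset((p, q)))  # key(K, P<->Q)
def enc(x, k):          return ("enc", x, k)                  # {X}K

def ban_closure(facts):
    """Apply the three main postulates repeatedly until no new beliefs appear."""
    facts = set(facts)
    while True:
        new = set()
        for b in [f for f in facts if f[0] == "believes"]:
            p, x = b[1], b[2]
            # 1. Message meaning: P believes key(K, P<->Q), P sees {X}K  =>  P believes (Q said X)
            if x[0] == "key" and p in x[2]:
                k, q = x[1], next(iter(x[2] - {p}), p)
                for f in facts:
                    if f[0] == "sees" and f[1] == p and f[2][0] == "enc" and f[2][2] == k:
                        new.add(believes(p, said(q, f[2][1])))
            # 2. Nonce verification: P believes fresh(X), P believes (Q said X)  =>  P believes (Q believes X)
            if x[0] == "said" and believes(p, fresh(x[2])) in facts:
                new.add(believes(p, believes(x[1], x[2])))
            # 3. Jurisdiction: P believes (Q has jurisdiction over X), P believes (Q believes X)  =>  P believes X
            if x[0] == "jurisdiction" and believes(p, believes(x[1], x[2])) in facts:
                new.add(believes(p, x[2]))
        if new <= facts:
            return facts
        facts |= new
```

The closure simply keeps applying the message-meaning, nonce-verification, and jurisdiction rules until nothing new is derived; the remaining BAN postulates, such as message decomposition, are omitted for brevity.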
BAN logic analysis of the Wide Mouth Frog protocol A very simple protocol — the Wide Mouth Frog protocol — allows two agents, A and B, to establish secure communications, using a trusted authentication server, S, and synchronized clocks all around. Using standard notation the protocol can be specified as follows: Agents A and B are equipped with keys Kas and Kbs, respectively, for communicating securely with S. So we have assumptions: A believes key(Kas, A↔S) S believes key(Kas, A↔S) B believes key(Kbs, B↔S) S believes key(Kbs, B↔S) Agent A wants to initiate a secure conversation with B. It therefore invents a key, Kab, which it will use to communicate with B. A believes that this key is secure, since it made up the key itself: A believes key(Kab, A↔B) B is willing to accept this key, as long as it is sure that it came from A: B believes (A has jurisdiction over key(K, A↔B)) Moreover, B is willing to trust S to accurately relay keys from A: B believes (S has jurisdiction over (A believes key(K, A↔B))) That is, if B believes that S believes that A wants to use a particular key to communicate with B, then B will trust S and believe it also. The goal is to have B believes key(Kab, A↔B) A reads the clock, obtaining the current time t, and sends the following message: 1 A→S: {t, key(Kab, A↔B)}Kas That is, it sends its chosen session key and the current time to S, encrypted with its private authentication server key Kas. Since S believes that key(Kas, A↔S), and S sees {t, key(Kab, A↔B)}Kas, then S concludes that A actually said {t, key(Kab, A↔B)}. (In particular, S believes that the message was not manufactured out of whole cloth by some attacker.) Since the clocks are synchronized, we can assume S believes fresh(t) Since S believes fresh(t) and S believes A said {t, key(Kab, A↔B)}, S believes that A actually believes that key(Kab, A↔B). (In particular, S believes that the message was not replayed by some attacker who captured it at some time in the past.) S then forwards the key to B: 2 S→B: {t, A, A believes key(Kab, A↔B)}Kbs Because message 2 is encrypted with Kbs, and B believes key(Kbs, B↔S), B now believes that S said {t, A, A believes key(Kab, A↔B)}. Because the clocks are synchronized, B believes fresh(t), and so fresh(A believes key(Kab, A↔B)). Because B believes that S's statement is fresh, B believes that S believes that (A believes key(Kab, A↔B)). Because B believes that S is authoritative about what A believes, B believes that (A believes key(Kab, A↔B)). Because B believes that A is authoritative about session keys between A and B, B believes key(Kab, A↔B). B can now contact A directly, using Kab as a secret session key. Now let's suppose that we abandon the assumption that the clocks are synchronized. In that case, S gets message 1 from A with {t, key(Kab, A↔B)}, but it can no longer conclude that t is fresh. It knows that A sent this message at some time in the past (because it is encrypted with Kas) but not that this is a recent message, so S doesn't believe that A necessarily wants to continue to use the key Kab. This points directly at an attack on the protocol: An attacker who can capture messages can guess one of the old session keys Kab. (This might take a long time.) The attacker then replays the old {t, key(Kab, A↔B)} message, sending it to S. If the clocks aren't synchronized (perhaps as part of the same attack), S might believe this old message and request that B use the old, compromised key over again. 
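As a usage example, the idealized assumptions of this Wide Mouth Frog run, seen from B's side and with the timestamp already accepted as evidence of freshness, can be fed to the ban_closure sketch given earlier. Everything below reuses the hypothetical helpers from that sketch; the goal being checked is B believes key(Kab, A↔B).

```python
# Idealized Wide Mouth Frog assumptions from B's point of view, using the
# hypothetical helpers (believes, sees, enc, key, ...) defined in the sketch above.
Kab_stmt = key("Kab", "A", "B")

assumptions = {
    believes("B", key("Kbs", "B", "S")),                # B shares Kbs with S
    sees("B", enc(believes("A", Kab_stmt), "Kbs")),     # idealized message 2
    believes("B", fresh(believes("A", Kab_stmt))),      # timestamp accepted as fresh
    believes("B", jurisdiction("S", believes("A", Kab_stmt))),
    believes("B", jurisdiction("A", Kab_stmt)),
}

derived = ban_closure(assumptions)
print(believes("B", Kab_stmt) in derived)   # True: B believes key(Kab, A<->B)

# Without the freshness assumption (unsynchronized clocks), the chain stops at
# "B believes S said ..." and the goal is no longer derivable, which mirrors
# the replay attack described above.
no_fresh = assumptions - {believes("B", fresh(believes("A", Kab_stmt)))}
print(believes("B", Kab_stmt) in ban_closure(no_fresh))   # False
```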
The original Logic of Authentication paper (linked below) contains this example and many others, including analyses of the Kerberos handshake protocol, and two versions of the Andrew Project RPC handshake (one of which is defective). References Further reading Source: The Burrows–Abadi–Needham logic Theory of cryptography Automated theorem proving
1284405
https://en.wikipedia.org/wiki/TeleVideo
TeleVideo
TeleVideo Corporation was a U.S. company that achieved its peak of success in the early 1980s producing computer terminals. TeleVideo was founded in 1979 by K. Philip Hwang, a South Korean-born graduate of Hanyang University and Utah State University, who had run a business producing CRT monitors for arcade games since 1975. The company was headquartered in San Jose, California. TeleVideo's terminal protocol was popular in the early days of microcomputers and was widely supported by applications as well as terminal emulators (often referred to as "TeleVideo 925 emulation"). TeleVideo also built CP/M-compatible 8-bit desktop and portable personal computers based on the Z80 processor. Up to sixteen of these machines could be connected to proprietary multi-user systems through serial interfaces. In April 1983, TeleVideo introduced an MS-DOS 2.0-compatible personal computer based on the Intel 8088. Introduced as the Model TS-1603, it included 128 KB of RAM (expandable to 256 KB) and an integrated monitor, modem, and keyboard. The Model TS-1603 ran both TeleVideo PC DOS 2.0 and CP/M-86 1.1. The company later turned to manufacturing Windows-compatible thin-client computers, but eventually sold this business line to Neoware in October 2005. The latter was subsequently taken over by Hewlett-Packard in 2007. On March 14, 2006, TeleVideo, Inc. filed a voluntary petition for reorganization under Chapter 11 of the United States Bankruptcy Code. After more than 35 years in business, and with millions of terminals sold worldwide, TeleVideo discontinued the manufacture and sale of all terminal products as of September 30, 2011. Products Terminals: TeleVideo 905, 910, 912, 914, 920, 921, 922, 924, 925, 9320, 935, 950, 955, 965, 970, 990, 995-65, Personal Terminal Graphic boards for Terminals: 914GR, 924GR, 970GR CP/M systems: TeleVideo TS-800, TS-802, TS-803 CP/M Plus and MP/M II: TeleVideo TS-804 (4 users for MP/M II) CP/M-86/MS-DOS systems: TeleVideo TS-1603 TeleVideo TPC-1, a portable CP/M system similar to the Osborne-1 Early multi-user systems: TeleVideo TS-806 (6 users), TS-816 (16 users) References External links Official website (mostly defunct) TS-802 CP/M personal computer Marcus Bennett's TeleVideo Documentation resource History of Televideo founders Background on TS-1603 All In One Computer Terminals Wiki 1975 establishments in California 2011 disestablishments in California American companies established in 1975 American companies disestablished in 2011 Companies based in San Jose, California Companies that filed for Chapter 11 bankruptcy in 2006 Computer companies established in 1975 Computer companies disestablished in 2011 Defunct companies based in the San Francisco Bay Area Defunct computer companies of the United States Defunct computer hardware companies Manufacturing companies established in 1975 Manufacturing companies disestablished in 2011 Technology companies based in the San Francisco Bay Area
2574632
https://en.wikipedia.org/wiki/List%20of%20software%20development%20philosophies
List of software development philosophies
This is a list of approaches, styles, methodologies, and philosophies in software development and engineering. It also contains programming paradigms, software development methodologies, software development processes, and individual practices, principles, and laws. Some of the methods mentioned are more relevant to one field than another, such as automotive or aerospace. The trend toward agile methods in software engineering is noticeable, but the need for improved studies of the subject is also pressing. Note also that some of the listed methods are newer than others, some are still in use while others are outdated, and research on software design methods is neither new nor finished. Software development methodologies, guidelines, strategies Large-scale programming styles Behavior-driven development Design-driven development Domain-driven design Secure by design Test-driven development Acceptance test-driven development Continuous test-driven development Specification by example Data-driven development Data-oriented design Specification-related paradigms Iterative and incremental development Waterfall model Formal methods Comprehensive systems Agile software development Lean software development Lightweight methodology Adaptive software development Extreme programming Feature-driven development ICONIX Kanban (development) Unified Process Rational Unified Process OpenUP Agile Unified Process Rules of thumb, laws, guidelines and principles 300 Rules of Thumb and Nuggets of Wisdom (excerpt from Managing the Unmanageable - Rules, Tools, and Insights for Managing Software People and Teams by Mickey W. Mantle, Ron Lichty) Karpov's 42 Ultimate Question of Programming, Refactoring, and Everything Big ball of mud Brooks's law C++ Core Guidelines (Stroustrup/Sutter) P1 - P13 Philosophy rules Command–query separation (CQS) Cowboy coding Do what I mean (DWIM) Don't repeat yourself (DRY) Egoless programming Fail-fast Gall's law If it ain't broke, don't fix it KISS principle Law of Demeter, also known as the principle of least knowledge Law of conservation of complexity, also known as Tesler's Law Lehman's laws of software evolution Minimalism (computing) Ninety-ninety rule Open–closed principle Pareto Principle Parkinson's law Principle of least astonishment (POLA) Release early, release often Robustness principle, also known as Postel's law Rule of least power Separation of mechanism and policy Service loose coupling principle Single source of truth (SSOT) Single version of the truth (SVOT) SOLID (object-oriented design) There's more than one way to do it Uniform access principle Unix philosophy Worse is better You aren't gonna need it (YAGNI) General Responsibility Assignment Software Patterns (GRASP) Other Davis 201 Principles of Software Development Don't Make Me Think (Principles of intuitive navigation and information design) The Art of Computer Programming (general computer-science masterpiece by Donald E. Knuth) The Cathedral and the Bazaar - book comparing top-down vs. bottom-up open-source software The Philosophy of Computer Science Where's the Theory for Software Engineering? 
Programming paradigms Agent-oriented programming Aspect-oriented programming (AOP) Convention over configuration Component-based software engineering Functional programming (FP) Hierarchical object-oriented design (HOOD) Literate programming Logic programming Modular programming Object-oriented programming (OOP) Procedural programming Reactive programming Software development methodologies Agile Unified Process (AUP) Constructionist design methodology (CDM) Dynamic systems development method (DSDM) Extreme programming (XP) Iterative and incremental development Kanban Lean software development Model-based system engineering (MBSE) Open Unified Process Pair programming Rapid application development (RAD) Rational Unified Process (RUP) Scrum Structured systems analysis and design method (SSADM) Unified Process (UP) Software development processes Active-Admin-driven development (AADD) Behavior-driven development (BDD) Bug-driven development (BgDD) Configuration-driven development (CDD) Design-driven development (D3) Domain-driven design (DDD) Feature-driven development (FDD) Test-driven development (TDD) User-centered design (UCD) (User-Driven Development (UDD)) Value-driven design (VDD) Software review Software quality assurance See also Anti-pattern Coding conventions Design pattern Programming paradigm Software development methodology Software development process Outline of computer science Outline of software engineering Outline of computer engineering Outline of computer programming :Category:Programming principles Further reading ISO/IEC/IEEE 26515:2018(E) - ISO/IEC/IEEE International Standard - Systems and software engineering — Developing information for users in an agile environment Other materials, books, articles, etc. Don't Make Me Think (book by Steve Krug about human computer interaction and web usability) References Software development process Methodology Computer science