23005763
https://en.wikipedia.org/wiki/GNOME%20Activity%20Journal
GNOME Activity Journal
GNOME Activity Journal is a semantic desktop browser-like application for the GNOME desktop environment. Instead of providing direct access to the hierarchical file system like most file managers, GNOME Activity Journal uses the Zeitgeist framework to classify files according to metadata. This includes time and date of previous accesses, location of use (using GPS positioning), file type, tagging and more. In addition to local files, GNOME Activity Journal also organizes web browsing history, email and other data sources. GNOME Activity Journal was ported to GTK3 and Python 3 in version 1.0.0. It is available as part of Debian, Fedora, Arch Linux (AUR) and Ubuntu. History GNOME Activity Journal's inclusion in GNOME 3.0 was initially rejected, the stated reason being that it did not integrate well with the desktop as a whole and looked more like a standalone application; that decision was revisited at the GNOME Boston Summit, and integration with GNOME is once again planned. Ubuntu shipped Zeitgeist as a standard part of its new desktop environment, Unity, in Ubuntu 11.04. GNOME Activity Journal is not shipped by default, but the Unity Dash makes use of Zeitgeist. See also Zeitgeist (software) References External links GNOME Activity Journal on the GNOME wiki Applications using D-Bus Free multilingual software GNU Project software File managers Free software programmed in Python GNOME Applications Office software that uses GTK Software that uses GStreamer
43966823
https://en.wikipedia.org/wiki/Multi-state%20modeling%20of%20biomolecules
Multi-state modeling of biomolecules
Multi-state modeling of biomolecules refers to a series of techniques used to represent and compute the behaviour of biological molecules or complexes that can adopt a large number of possible functional states. Biological signaling systems often rely on complexes of biological macromolecules that can undergo several functionally significant modifications that are mutually compatible. Thus, they can exist in a very large number of functionally different states. Modeling such multi-state systems poses two problems: the problem of how to describe and specify a multi-state system (the "specification problem") and the problem of how to use a computer to simulate the progress of the system over time (the "computation problem"). To address the specification problem, modelers have in recent years moved away from explicit specification of all possible states, and towards rule-based modeling approaches that allow for implicit model specification, including the κ-calculus, BioNetGen, the Allosteric Network Compiler and others. To tackle the computation problem, they have turned to particle-based methods that have in many cases proved more computationally efficient than population-based methods based on ordinary differential equations, partial differential equations, or the Gillespie stochastic simulation algorithm. Given current computing technology, particle-based methods are sometimes the only possible option. Particle-based simulators further fall into two categories: non-spatial simulators such as StochSim, DYNSTOC, RuleMonkey, and NFSim, and spatial simulators, including Meredys, SRSim and MCell. Modelers can thus choose from a variety of tools; the best choice depends on the particular problem. Development of faster and more powerful methods is ongoing, promising the ability to simulate ever more complex signaling processes in the future. Introduction Multi-state biomolecules in signal transduction In living cells, signals are processed by networks of proteins that can act as complex computational devices. These networks rely on the ability of single proteins to exist in a variety of functionally different states achieved through multiple mechanisms, including post-translational modifications, ligand binding, conformational change, or formation of new complexes. Similarly, nucleic acids can undergo a variety of transformations, including protein binding, binding of other nucleic acids, conformational change and DNA methylation. In addition, several types of modifications can co-exist, exerting a combined influence on a biological macromolecule at any given time. Thus, a biomolecule or complex of biomolecules can often adopt a very large number of functionally distinct states. The number of states scales exponentially with the number of possible modifications, a phenomenon known as "combinatorial explosion". This is of concern for computational biologists who model or simulate such biomolecules, because it raises questions about how such large numbers of states can be represented and simulated. Examples of combinatorial explosion Biological signaling networks incorporate a wide array of reversible interactions, post-translational modifications and conformational changes. Furthermore, it is common for a protein to be composed of several (identical or nonidentical) subunits, and for several proteins and/or nucleic acid species to assemble into larger complexes. A molecular species with several of those features can therefore exist in a large number of possible states.
For instance, it has been estimated that the yeast scaffold protein Ste5 can be a part of 25666 unique protein complexes. In E. coli, chemotaxis receptors of four different kinds interact in groups of three, and each individual receptor can exist in at least two possible conformations and has up to eight methylation sites, resulting in billions of potential states. The protein kinase CaMKII is a dodecamer of twelve catalytic subunits, arranged in two hexameric rings. Each subunit can exist in at least two distinct conformations, and each subunit features various phosphorylation and ligand binding sites. A recent model incorporated conformational states, two phosphorylation sites and two modes of binding calcium/calmodulin, for a total of around one billion possible states per hexameric ring. A model of coupling of the EGF receptor to a MAP kinase cascade presented by Danos and colleagues accounts for a vast number of distinct molecular species, yet the authors note several points at which the model could be further extended. A more recent model of ErbB receptor signalling even accounts for more than one googol (10^100) distinct molecular species. The problem of combinatorial explosion is also relevant to synthetic biology, with a recent model of a relatively simple synthetic eukaryotic gene circuit featuring 187 species and 1165 reactions. Of course, not all of the possible states of a multi-state molecule or complex will necessarily be populated. Indeed, in systems where the number of possible states is far greater than the number of molecules in the compartment (e.g. the cell), they cannot be. In some cases, empirical information can be used to rule out certain states if, for instance, some combinations of features are incompatible. In the absence of such information, however, all possible states need to be considered a priori. In such cases, computational modeling can be used to uncover to what extent the different states are populated. The existence (or potential existence) of such large numbers of molecular species is a combinatorial phenomenon: it arises from a relatively small set of features or modifications (such as post-translational modification or complex formation) that combine to dictate the state of the entire molecule or complex, in the same way that the existence of just a few choices in a coffee shop (small, medium or large, with or without milk, decaf or not, extra shot of espresso) quickly leads to a large number of possible beverages (24 in this case; each additional binary choice will double that number). Although it is difficult for us to grasp the total numbers of possible combinations, it is usually not conceptually difficult to understand the (much smaller) set of features or modifications and the effect each of them has on the function of the biomolecule. The rate at which a molecule undergoes a particular reaction will usually depend mainly on a single feature or a small subset of features. It is the presence or absence of those features that dictates the reaction rate: the reaction rate is the same for two molecules that differ only in features which do not affect this reaction. Thus, the number of parameters will be much smaller than the number of reactions. (In the coffee shop example, adding an extra shot of espresso will cost 40 cents, no matter what size the beverage is and whether or not it has milk in it). It is such "local rules" that are usually discovered in laboratory experiments.
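To make the combinatorics concrete, here is a minimal Python sketch (the features and their values are hypothetical, chosen to mirror the coffee-shop analogy above) that counts and enumerates the states generated by a handful of independent features:

```python
from itertools import product

# Each feature contributes its possible values independently, so the number
# of states is the product of the per-feature counts (hence "combinatorial
# explosion": adding one binary feature doubles the count).
features = {
    "size": ["small", "medium", "large"],   # 3 options
    "milk": ["with", "without"],            # 2 options
    "decaf": ["yes", "no"],                 # 2 options
    "extra_shot": ["yes", "no"],            # 2 options
}

n_states = 1
for values in features.values():
    n_states *= len(values)
print(n_states)  # 3 * 2 * 2 * 2 = 24 possible beverages

# Explicit enumeration of all states (what a modeler would like to avoid
# writing out by hand for a biomolecule with many modification sites):
for state in product(*features.values()):
    print(state)
```

The same arithmetic applied to a receptor with two conformations and eight methylation sites per subunit, in trimers of four receptor kinds, is what produces the astronomical state counts quoted above.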
Thus, a multi-state model can be conceptualised in terms of combinations of modular features and local rules. This means that even a model that can account for a vast number of molecular species and reactions is not necessarily conceptually complex. Specification vs computation The combinatorial complexity of signaling systems involving multi-state proteins poses two kinds of problems. The first problem is concerned with how such a system can be specified; i.e. how a modeler can specify all complexes, all changes those complexes undergo and all parameters and conditions governing those changes in a robust and efficient way. This problem is called the "specification problem". The second problem concerns computation. It asks whether a combinatorially complex model, once specified, is computationally tractable, given the large number of states and the even larger number of possible transitions between states, whether it can be stored electronically, and whether it can be evaluated in a reasonable amount of computing time. This problem is called the "computation problem". Among the approaches that have been proposed to tackle combinatorial complexity in multi-state modeling, some are mainly concerned with addressing the specification problem, while others focus on finding effective methods of computation. Some tools address both specification and computation. The sections below discuss rule-based approaches to the specification problem and particle-based approaches to solving the computation problem. A wide range of computational tools exist for multi-state modeling. The specification problem Explicit specification The most naïve way of specifying, e.g., a protein in a biological model is to specify each of its states explicitly and use each of them as a molecular species in a simulation framework that allows transitions from state to state. For instance, if a protein can be ligand-bound or not, exist in two conformational states (e.g. open or closed) and be located in two possible subcellular areas (e.g. cytosolic or membrane-bound), then the eight possible resulting states can be explicitly enumerated as:
bound, open, cytosol
bound, open, membrane
bound, closed, cytosol
bound, closed, membrane
unbound, open, cytosol
unbound, open, membrane
unbound, closed, cytosol
unbound, closed, membrane
Enumerating all possible states is a lengthy and potentially error-prone process. For macromolecular complexes that can adopt multiple states, enumerating each state quickly becomes tedious, if not impossible. Moreover, the addition of a single additional modification or feature to the model of the complex under investigation will double the number of possible states (if the modification is binary), and it will more than double the number of transitions that need to be specified. Rule-based model specification It is clear that an explicit description, which lists all possible molecular species (including all their possible states), all possible reactions or transitions these species can undergo, and all parameters governing these reactions, very quickly becomes unwieldy as the complexity of the biological system increases. Modelers have therefore looked for implicit, rather than explicit, ways of specifying a biological signaling system. An implicit description is one that groups reactions and parameters that apply to many types of molecular species into one reaction template. It might also add a set of conditions that govern reaction parameters, i.e.
the likelihood or rate at which a reaction occurs, or whether it occurs at all. Only properties of the molecule or complex that matter to a given reaction (either affecting the reaction or being affected by it) are explicitly mentioned, and all other properties are ignored in the specification of the reaction. For instance, the rate of ligand dissociation from a protein might depend on the conformational state of the protein, but not on its subcellular localization. An implicit description would therefore list two dissociation processes (with different rates, depending on conformational state), but would ignore attributes referring to subcellular localization, because they do not affect the rate of ligand dissociation, nor are they affected by it. This specification rule has been summarized as "Don't care, don't write". Since it is not written in terms of reactions, but in terms of more general "reaction rules" encompassing sets of reactions, this kind of specification is often called "rule-based". This description of the system in terms of modular rules relies on the assumption that only a subset of features or attributes are relevant for a particular reaction rule. Where this assumption holds, a set of reactions can be coarse-grained into one reaction rule. This coarse-graining preserves the important properties of the underlying reactions. For instance, if the reactions are based on chemical kinetics, so are the rules derived from them. Many rule-based specification methods exist. In general, the specification of a model is a separate task from the execution of the simulation. Therefore, among the existing rule-based model specification systems, some concentrate on model specification only, allowing the user to then export the specified model into a dedicated simulation engine. However, many solutions to the specification problem also contain a method of interpreting the specified model. This is done by providing a method to simulate the model or a method to convert it into a form that can be used for simulations in other programs. An early rule-based specification method is the κ-calculus, a process algebra that can be used to encode macromolecules with internal states and binding sites and to specify rules by which they interact. The κ-calculus is merely concerned with providing a language to encode multi-state models, not with interpreting the models themselves. A simulator compatible with Kappa is KaSim. BioNetGen is a software suite that provides both specification and simulation capacities. Rule-based models can be written down using a specified syntax, the BioNetGen language (BNGL). The underlying concept is to represent biochemical systems as graphs, where molecules are represented as nodes (or collections of nodes) and chemical bonds as edges. A reaction rule, then, corresponds to a graph rewriting rule. BNGL provides a syntax for specifying these graphs and the associated rules as structured strings. BioNetGen can then use these rules to generate ordinary differential equations (ODEs) to describe each biochemical reaction. Alternatively, it can generate a list of all possible species and reactions in SBML, which can then be exported to simulation software packages that can read SBML. One can also make use of BioNetGen's own ODE-based simulation software and its capability to generate reactions on-the-fly during a stochastic simulation. In addition, a model specified in BNGL can be read by other simulation software, such as DYNSTOC, RuleMonkey, and NFSim. 
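As a toy illustration of the "Don't care, don't write" principle (a schematic Python sketch, not the syntax of κ, BNGL or any other actual tool; the attribute names and rate constant are invented), a rule tests only the attributes it explicitly mentions, so a single rule covers every molecular state that agrees on those attributes:

```python
# Two molecules that differ only in subcellular localization.
mol_a = {"ligand": "bound", "conformation": "open", "location": "cytosol"}
mol_b = {"ligand": "bound", "conformation": "open", "location": "membrane"}

# Ligand dissociation depends on conformation but not on localization, so
# the rule simply does not mention "location" ("don't care, don't write").
dissociation_open = {
    "requires": {"ligand": "bound", "conformation": "open"},
    "effects": {"ligand": "unbound"},
    "rate": 0.1,  # invented rate constant
}

def rule_applies(rule, mol):
    # A rule matches any molecule consistent with its "requires" pattern;
    # attributes it does not mention (here: location) are ignored.
    return all(mol.get(k) == v for k, v in rule["requires"].items())

for mol in (mol_a, mol_b):
    if rule_applies(dissociation_open, mol):
        mol.update(dissociation_open["effects"])

print(mol_a["ligand"], mol_b["ligand"])  # both "unbound"
```

One rule with one rate constant thus stands in for the whole set of concrete reactions that differ only in irrelevant attributes, which is exactly the coarse-graining described above.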
Another tool that generates full reaction networks from a set of rules is the Allosteric Network Compiler (ANC). Conceptually, ANC sees molecules as allosteric devices with a Monod-Wyman-Changeux (MWC) type regulation mechanism, whose interactions are governed by their internal state, as well as by external modifications. A very useful feature of ANC is that it automatically computes dependent parameters, thereby imposing thermodynamic correctness. An extension of the κ-calculus is provided by React(C). The authors of React(C) show that it can express the stochastic π-calculus. They also provide a stochastic simulation algorithm based on the Gillespie stochastic algorithm for models specified in React(C). ML-Rules is similar to React(C), but provides the added possibility of nesting: a component species of the model, with all its attributes, can be part of a higher-order component species. This enables ML-Rules to capture multi-level models that can bridge the gap between, for instance, a series of biochemical processes and the macroscopic behaviour of a whole cell or group of cells. For instance, a proof-of-concept model of cell division in fission yeast includes cyclin/cdc2 binding and activation, pheromone secretion and diffusion, cell division and movement of cells. Models specified in ML-Rules can be simulated using the James II simulation framework. A similar nested language to represent multi-level biological systems has been proposed by Oury and Plotkin. A specification formalism based on the molecular finite automata (MFA) framework can be used to generate and simulate a system of ODEs or for stochastic simulation using a kinetic Monte Carlo algorithm. Some rule-based specification systems and their associated network generation and simulation tools have been designed to accommodate spatial heterogeneity, in order to allow for the realistic simulation of interactions within biological compartments. For instance, the Simmune project includes a spatial component: users can specify their multi-state biomolecules and interactions within membranes or compartments of arbitrary shape. The reaction volume is then divided into interfacing voxels, and a separate reaction network is generated for each of these subvolumes. The Stochastic Simulator Compiler (SSC) allows for rule-based, modular specification of interacting biomolecules in regions of arbitrarily complex geometries. Again, the system is represented using graphs, with chemical interactions or diffusion events formalised as graph-rewriting rules. The compiler then generates the entire reaction network before launching a stochastic reaction-diffusion algorithm. A different approach is taken by PySB, where model specification is embedded in the programming language Python. A model (or part of a model) is represented as a Python program. This allows users to store higher-order biochemical processes such as catalysis or polymerisation as macros and re-use them as needed. The models can be simulated and analysed using Python libraries, but PySB models can also be exported into BNGL, Kappa, and SBML. Models involving multi-state and multi-component species can also be specified in Level 3 of the Systems Biology Markup Language (SBML) using the multi package. A draft specification is available.
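To give a flavour of rule-based specification embedded in Python, here is a minimal sketch in the style of PySB's published tutorial examples (assuming a recent PySB installation; the monomer names, rates and initial amounts are invented):

```python
from pysb import Model, Monomer, Parameter, Initial, Rule, Observable
from pysb.simulator import ScipyOdeSimulator
import numpy as np

Model()  # PySB exports 'model' and the components below into this namespace

# A ligand L and receptor R, each with one binding site 's'.
Monomer('L', ['s'])
Monomer('R', ['s'])

Parameter('kf', 1e-3)  # forward rate (invented)
Parameter('kr', 1e-1)  # reverse rate (invented)

# One reversible rule: unbound L and R form a bond (shared bond index 1).
Rule('L_binds_R', L(s=None) + R(s=None) | L(s=1) % R(s=1), kf, kr)

Parameter('L_0', 100)
Parameter('R_0', 200)
Initial(L(s=None), L_0)
Initial(R(s=None), R_0)
Observable('LR', L(s=1) % R(s=1))  # track the bound complex

tspan = np.linspace(0, 100, 101)
result = ScipyOdeSimulator(model, tspan).run()
print(result.observables['LR'][-1])  # amount of complex at the end
```

The same model object can then be handed to PySB's exporters to produce BNGL or SBML, as mentioned above.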
Thus, by only considering states and features important for a particular reaction, rule-based model specification eliminates the need to explicitly enumerate every possible molecular state that can undergo a similar reaction, and thereby allows for efficient specification. The computation problem When running simulations on a biological model, any simulation software evaluates a set of rules, starting from a specified set of initial conditions, and usually iterating through a series of time steps until a specified end time. One way to classify simulation algorithms is by looking at the level of analysis at which the rules are applied: they can be population-based, single-particle-based or hybrid. Population-based rule evaluation In population-based rule evaluation, rules are applied to populations. All molecules of the same species in the same state are pooled together. Application of a specific rule reduces or increases the size of one of the pools, possibly at the expense of another. Some of the best-known classes of simulation approaches in computational biology belong to the population-based family, including those based on the numerical integration of ordinary and partial differential equations and the Gillespie stochastic simulation algorithm. Differential equations describe changes in molecular concentrations over time in a deterministic manner. Simulations based on differential equations usually do not attempt to solve those equations analytically, but employ a suitable numerical solver. The stochastic Gillespie algorithm changes the composition of pools of molecules through a progression of random reaction events, the probability of which is computed from reaction rates and from the numbers of molecules, in accordance with the stochastic master equation. In population-based approaches, one can think of the system being modeled as being in a given state at any given time point, where a state is defined according to the nature and size of the populated pools of molecules. This means that the space of all possible states can become very large. With some simulation methods implementing numerical integration of ordinary and partial differential equations or the Gillespie stochastic algorithm, all possible pools of molecules and the reactions they undergo are defined at the start of the simulation, even if they are empty. Such "generate-first" methods scale poorly with increasing numbers of molecular states. For instance, it has recently been estimated that even for a simple model of CaMKII with just 6 states per subunit and 10 subunits, it would take 290 years to generate the entire reaction network on a 2.54 GHz Intel Xeon processor. In addition, the model generation step in generate-first methods does not necessarily terminate, for instance when the model includes assembly of proteins into complexes of arbitrarily large size, such as actin filaments. In these cases, a termination condition needs to be specified by the user. Even if a large reaction system can be successfully generated, its simulation using population-based rule evaluation can run into computational limits. In a recent study, a powerful computer was shown to be unable to simulate a protein with more than 8 phosphorylation sites (more than 2^8 = 256 phosphorylation states) using ordinary differential equations. Methods have been proposed to reduce the size of the state space. One is to consider only the states adjacent to the present state (i.e. the states that can be reached within the next iteration) at each time point.
This eliminates the need for enumerating all possible states at the beginning. Instead, reactions are generated "on-the-fly" at each iteration. These methods are available both for stochastic and deterministic algorithms. These methods still rely on the definition of an (albeit reduced) reaction network, in contrast to the "network-free" methods discussed below. Even with "on-the-fly" network generation, networks generated for population-based rule evaluation can become quite large, and thus difficult, if not impossible, to handle computationally. An alternative approach is provided by particle-based rule evaluation. Particle-based rule evaluation In particle-based (sometimes called "agent-based") simulations, proteins, nucleic acids, macromolecular complexes or small molecules are represented as individual software objects, and their progress is tracked through the course of the entire simulation. Because particle-based rule evaluation keeps track of individual particles rather than populations, it comes at a higher computational cost when modeling systems with a high total number of particles, but a small number of kinds (or pools) of particles. In cases of combinatorial complexity, however, the modeling of individual particles is an advantage because, at any given point in the simulation, only existing molecules, their states and the reactions they can undergo need to be considered. Particle-based rule evaluation does not require the generation of complete or partial reaction networks at the start of the simulation or at any other point in the simulation and is therefore called "network-free". This method reduces the complexity of the model at the simulation stage, and thereby saves time and computational power. The simulation follows each particle, and at each simulation step, a particle only "sees" the reactions (or rules) that apply to it. This depends on the state of the particle and, in some implementations, on the states of its neighbours in a holoenzyme or complex. As the simulation proceeds, the states of particles are updated according to the rules that are fired. Some particle-based simulation packages use an ad-hoc formalism for specification of reactants, parameters and rules. Others can read files in a recognised rule-based specification format such as BNGL. Non-spatial particle-based methods StochSim is a particle-based stochastic simulator used mainly to model chemical reactions and other molecular transitions. The algorithm used in StochSim is different from the more widely known Gillespie stochastic algorithm in that it operates on individual entities, not entity pools, making it particle-based rather than population-based. In StochSim, each molecular species can be equipped with a number of binary state flags representing a particular modification. Reactions can be made dependent on a set of state flags set to particular values. In addition, the outcome of a reaction can include a state flag being changed. Moreover, entities can be arranged in geometric arrays (for instance, for holoenzymes consisting of several subunits), and reactions can be "neighbour-sensitive", i.e. the probability of a reaction for a given entity is affected by the value of a state flag on a neighbouring entity. These properties make StochSim ideally suited to modeling multi-state molecules arranged in holoenzymes or complexes of specified size. Indeed, StochSim has been used to model clusters of bacterial chemotactic receptors and CaMKII holoenzymes.
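The flavour of such a particle-based simulation can be sketched in a few lines of Python (a schematic illustration only, not StochSim's actual algorithm, data structures or rates; the flag name and per-step probabilities are invented):

```python
import random

random.seed(0)

# Each molecule is an individual software object; here, a dict holding one
# binary state flag, loosely analogous to StochSim's state flags.
particles = [{"phosphorylated": False} for _ in range(1000)]

P_PHOS = 0.01     # per-step phosphorylation probability (invented)
P_DEPHOS = 0.002  # per-step dephosphorylation probability (invented)

for step in range(10_000):
    p = random.choice(particles)  # pick one particle for this time step
    # Only the rules that match the particle's current state can fire:
    if not p["phosphorylated"] and random.random() < P_PHOS:
        p["phosphorylated"] = True
    elif p["phosphorylated"] and random.random() < P_DEPHOS:
        p["phosphorylated"] = False

n_on = sum(p["phosphorylated"] for p in particles)
print(f"{n_on} of {len(particles)} particles phosphorylated")
```

Note that no reaction network is ever built: at every step only the rules applicable to the chosen particle's current state are consulted, which is what makes this style of evaluation "network-free".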
An extension to StochSim is the particle-based simulator DYNSTOC, which uses a StochSim-like algorithm to simulate models specified in the BioNetGen language (BNGL), and improves the handling of molecules within macromolecular complexes. Another particle-based stochastic simulator that can read BNGL input files is RuleMonkey. Its simulation algorithm differs from the algorithms underlying both StochSim and DYNSTOC in that the simulation time step is variable. The Network-Free Stochastic Simulator (NFSim) differs from those described above by allowing for the definition of reaction rates as arbitrary mathematical or conditional expressions, and thereby facilitates selective coarse-graining of models. RuleMonkey and NFSim implement distinct but related simulation algorithms. A detailed review and comparison of both tools is given by Yang and Hlavacek. It is easy to imagine a biological system where some components are complex multi-state molecules, whereas others have few possible states (or even just one) and exist in large numbers. A hybrid approach has been proposed to model such systems: within the Hybrid Particle/Population (HPP) framework, the user can specify a rule-based model, but can designate some species to be treated as populations (rather than particles) in the subsequent simulation. This method combines the computational advantages of particle-based modeling for multi-state systems with relatively low molecule numbers and of population-based modeling for systems with high molecule numbers and a small number of possible states. Specification of HPP models is supported by BioNetGen, and simulations can be performed with NFSim. Spatial particle-based methods Spatial particle-based methods differ from the methods described above by their explicit representation of space. One example of a particle-based simulator that allows for a representation of cellular compartments is SRSim. SRSim is integrated in the LAMMPS molecular dynamics simulator and allows the user to specify the model in BNGL. SRSim allows users to specify the geometry of the particles in the simulation, as well as interaction sites. It is therefore especially good at simulating the assembly and structure of complex biomolecular complexes, as evidenced by a recent model of the inner kinetochore. MCell allows individual molecules to be traced in arbitrarily complex geometric environments which are defined by the user. This allows for simulations of biomolecules in realistic reconstructions of living cells, including cells with complex geometries like those of neurons; the reaction compartment can be, for instance, a reconstruction of a dendritic spine. MCell uses its own ad-hoc formalism to specify a multi-state model: in MCell, it is possible to assign "slots" to any molecular species. Each slot stands for a particular modification, and any number of slots can be assigned to a molecule. Each slot can be occupied by a particular state. The states are not necessarily binary. For instance, a slot describing binding of a particular ligand to a protein of interest could take the states "unbound", "partially bound", and "fully bound". The slot-and-state syntax in MCell can also be used to model multimeric proteins or macromolecular complexes. When used in this way, a slot is a placeholder for a subunit or a molecular component of a complex, and the state of the slot will indicate whether a specific protein component is absent or present in the complex.
A way to think about this is that MCell macromolecules can have several dimensions: a "state dimension" and one or more "spatial dimensions". The "state dimension" is used to describe the multiple possible states making up a multi-state protein, while the spatial dimension(s) describe topological relationships between neighbouring subunits or members of a macromolecular complex. One drawback of this method for representing protein complexes, compared to Meredys, is that MCell does not allow for the diffusion of complexes, and hence of multi-state molecules. This can in some cases be circumvented by adjusting the diffusion constants of ligands that interact with the complex, by using checkpointing functions, or by combining simulations at different levels. Examples of multi-state models in biology A (by no means exhaustive) selection of models of biological systems involving multi-state molecules and using some of the tools discussed here is given in the table below. See also Multiscale modeling Rule-based modeling References Biomolecules Cell signaling Chemical bonding Proteins Enzyme kinetics Modeling and simulation
881423
https://en.wikipedia.org/wiki/Blade%20server
Blade server
A blade server is a stripped-down server computer with a modular design optimized to minimize the use of physical space and energy. Blade servers have many components removed to save space, minimize power consumption and address other considerations, while still having all the functional components to be considered a computer. Unlike a rack-mount server, a blade server fits inside a blade enclosure, which can hold multiple blade servers, providing services such as power, cooling, networking, various interconnects and management. Together, blades and the blade enclosure form a blade system, which may itself be rack-mounted. Different blade providers have differing principles regarding what to include in the blade itself, and in the blade system as a whole. In a standard server-rack configuration, one rack unit or 1U (19 inches wide and 1.75 inches tall) defines the minimum possible size of any equipment. The principal benefit and justification of blade computing relates to lifting this restriction so as to reduce size requirements. The most common computer rack form-factor is 42U high, which limits the number of discrete computer devices directly mountable in a rack to 42 components. Blades do not have this limitation. Densities of up to 180 servers per blade system (or 1440 servers per rack) are achievable with blade systems. Blade enclosure The enclosure (or chassis) performs many of the non-core computing services found in most computers. Non-blade systems typically use bulky, hot and space-inefficient components, and may duplicate these across many computers that may or may not perform at capacity. By locating these services in one place and sharing them among the blade computers, the overall utilization becomes higher. The specifics of which services are provided vary by vendor. Power Computers operate over a range of DC voltages, but utilities deliver power as AC, and at higher voltages than required within computers. Converting this current requires one or more power supply units (or PSUs). To ensure that the failure of one power source does not affect the operation of the computer, even entry-level servers may have redundant power supplies, again adding to the bulk and heat output of the design. The blade enclosure's power supply provides a single power source for all blades within the enclosure. This single power source may come as a power supply in the enclosure or as a dedicated separate PSU supplying DC to multiple enclosures. This setup reduces the number of PSUs required to provide a resilient power supply. The popularity of blade servers, and their own appetite for power, has led to an increase in the number of rack-mountable uninterruptible power supply (or UPS) units, including units targeted specifically towards blade servers (such as the BladeUPS). Cooling During operation, electrical and mechanical components produce heat, which a system must dissipate to ensure the proper functioning of its components. Most blade enclosures, like most computing systems, remove heat by using fans. A frequently underestimated problem when designing high-performance computer systems involves the conflict between the amount of heat a system generates and the ability of its fans to remove the heat. The blade's shared power and cooling means that it does not generate as much heat as traditional servers. Newer blade enclosures feature variable-speed fans and control logic, or even liquid cooling systems, that adjust to meet the system's cooling requirements.
At the same time, the increased density of blade-server configurations can still result in higher overall demand for cooling when racks are more than 50% full. This is especially true with early-generation blades. In absolute terms, a fully populated rack of blade servers is likely to require more cooling capacity than a fully populated rack of standard 1U servers. This is because one can fit up to 128 blade servers in the same rack that will only hold 42 1U rack-mount servers. Networking Blade servers generally include integrated or optional network interface controllers for Ethernet or host adapters for Fibre Channel storage systems, or converged network adapters to combine storage and data via one Fibre Channel over Ethernet interface. In many blades, at least one interface is embedded on the motherboard and extra interfaces can be added using mezzanine cards. A blade enclosure can provide individual external ports to which each network interface on a blade will connect. Alternatively, a blade enclosure can aggregate network interfaces into interconnect devices (such as switches) built into the blade enclosure or in networking blades. Storage While computers typically use hard disks to store operating systems, applications and data, these are not necessarily required locally. Many storage connection methods (e.g. FireWire, SATA, eSATA, SCSI, SAS, DAS, FC and iSCSI) are readily moved outside the server, though not all are used in enterprise-level installations. Implementing these connection interfaces within the computer presents similar challenges to the networking interfaces (indeed iSCSI runs over the network interface), and similarly these can be removed from the blade and presented individually or aggregated either on the chassis or through other blades. The ability to boot the blade from a storage area network (SAN) allows for an entirely disk-free blade, one example of such an implementation being the Intel Modular Server System. Other blades Since blade enclosures provide a standard method for delivering basic services to computer devices, other types of devices can also utilize blade enclosures. Blades providing switching, routing, storage, SAN and fibre-channel access can slot into the enclosure to provide these services to all members of the enclosure. Systems administrators can use storage blades where a requirement exists for additional local storage. Uses Blade servers function well for specific purposes such as web hosting, virtualization, and cluster computing. Individual blades are typically hot-swappable. As users deal with larger and more diverse workloads, they add more processing power, memory and I/O bandwidth to blade servers. Although blade-server technology in theory allows for open, cross-vendor systems, most users buy modules, enclosures, racks and management tools from the same vendor. Eventual standardization of the technology might result in more choices for consumers; increasing numbers of third-party software vendors have started to enter this growing field. Blade servers do not, however, provide the answer to every computing problem. One can view them as a form of productized server farm that borrows from mainframe packaging, cooling, and power-supply technology. Very large computing tasks may still require server farms of blade servers, and because of blade servers' high power density, these can suffer even more acutely from the heating, ventilation, and air conditioning problems that affect large conventional server farms.
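The rack-density arithmetic behind these comparisons is straightforward; the following sketch uses the figures quoted in this article (actual enclosure heights and blade counts vary by vendor and generation):

```python
RACK_UNITS = 42  # height of the most common rack form-factor

# Conventional 1U rack-mount servers: one server per rack unit.
servers_1u = RACK_UNITS * 1  # 42 servers

# Blade example: a 10U enclosure holding 16 half-height blades
# (the size of HP's c7000 or Dell's M1000e, mentioned below).
enclosures_per_rack = RACK_UNITS // 10       # 4 enclosures
servers_blade = enclosures_per_rack * 16     # 64 servers

print(f"1U servers per rack:    {servers_1u}")
print(f"Blade servers per rack: {servers_blade}")
# Denser blades push this further (this article cites up to 128 per rack,
# and up to 1440 for the densest blade systems), which is why a fully
# populated blade rack needs more cooling capacity than a rack of 1U
# servers even though each individual blade draws less power.
```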
History Developers first placed complete microcomputers on cards and packaged them in standard 19-inch racks in the 1970s, soon after the introduction of 8-bit microprocessors. This architecture was used in the industrial process control industry as an alternative to minicomputer-based control systems. Early models stored programs in EPROM and were limited to a single function with a small real-time executive. The VMEbus architecture defined a computer interface that included implementation of a board-level computer installed in a chassis backplane with multiple slots for pluggable boards to provide I/O, memory, or additional computing. In the 1990s, the PCI Industrial Computer Manufacturers Group (PICMG) developed a chassis/blade structure for the then-emerging Peripheral Component Interconnect (PCI) bus, called CompactPCI. CompactPCI was invented by Ziatech Corp of San Luis Obispo, CA, and developed into an industry standard. Common among these chassis-based computers was the fact that the entire chassis was a single system. While a chassis might include multiple computing elements to provide the desired level of performance and redundancy, there was always one master board in charge, or two redundant fail-over masters, coordinating the operation of the entire system. Moreover, this system architecture provided management capabilities not present in typical rack-mount computers, much more like those in ultra-high-reliability systems: managing power supplies and cooling fans as well as monitoring the health of other internal components. The demands of managing hundreds or thousands of servers in the emerging Internet data centers, where the manpower simply didn't exist to keep pace, meant a new server architecture was needed. In 1998 and 1999 this new blade server architecture was developed at Ziatech, based on its CompactPCI platform, to house as many as 14 "blade servers" in a standard 19-inch, 9U-high rack-mounted chassis, allowing in this configuration as many as 84 servers in a standard 84-rack-unit 19-inch rack. What this new architecture brought to the table was a set of new hardware interfaces providing the capability to remotely monitor the health and performance of all major replaceable modules, which could be changed or replaced while the system was in operation. The ability to change, replace or add modules within the system while it is in operation is known as hot-swap. Unlike any other server system at the time, the Ketris blade servers routed Ethernet across the backplane (where server blades would plug in), eliminating more than 160 cables in a single 84-rack-unit-high 19-inch rack. For a large data center, this eliminated tens of thousands of failure-prone Ethernet cables. Further, this architecture made it possible to remotely inventory the modules installed in each system chassis without the blade servers operating, and enabled servers (e.g. web servers) to be provisioned (powered up, with operating systems and application software installed) remotely from a Network Operations Center (NOC). The architecture was announced under the name Ketris, after the ketri sword worn by nomads in such a way as to be drawn very quickly as needed. It was first envisioned by Dave Bottom, developed by an engineering team at Ziatech Corp in 1999, and demonstrated at the Networld+Interop show in May 2000. Patents were awarded for the Ketris blade server architecture.
In October 2000 Ziatech was acquired by Intel Corp, and the Ketris blade server systems became a product of the Intel Network Products Group. PICMG expanded the CompactPCI specification with the use of standard Ethernet connectivity between boards across the backplane. The PICMG 2.16 CompactPCI Packet Switching Backplane specification was adopted in September 2001. This provided the first open architecture for a multi-server chassis. The second generation of Ketris was developed at Intel as an architecture for the telecommunications industry, to support the build-out of IP-based telecom services and in particular the LTE (Long Term Evolution) cellular network build-out. PICMG followed with the larger and more feature-rich AdvancedTCA specification, targeting the telecom industry's need for a high-availability, dense computing platform with extended product life (10+ years). While AdvancedTCA systems and boards typically sell for higher prices than blade servers, their operating costs (the manpower to manage and maintain them) are dramatically lower; for traditional servers, operating costs often dwarf acquisition costs. AdvancedTCA was promoted for telecommunications customers, but in real-world Internet data centers, where thermal and other maintenance and operating costs had become prohibitively expensive, this blade server architecture, with its remote automated provisioning and its health and performance monitoring and management, offered significantly lower operating costs. The first commercialized blade-server architecture was invented by Christopher Hipp and David Kirkeby, and their patent was assigned to Houston-based RLX Technologies. RLX, which consisted primarily of former Compaq Computer Corporation employees, including Hipp and Kirkeby, shipped its first commercial blade server in 2001. RLX was acquired by Hewlett Packard in 2005. The name blade server appeared when a card included the processor, memory, I/O and non-volatile program storage (flash memory or small hard disk(s)). This allowed manufacturers to package a complete server, with its operating system and applications, on a single card/board/blade. These blades could then operate independently within a common chassis, doing the work of multiple separate server boxes more efficiently. In addition to the most obvious benefit of this packaging (less space consumption), additional efficiency benefits have become clear in power, cooling, management, and networking due to the pooling or sharing of common infrastructure to support the entire chassis, rather than providing each of these on a per-box basis. In 2011, research firm IDC identified the major players in the blade market as HP, IBM, Cisco, and Dell. Other companies selling blade servers include Supermicro and Hitachi. Blade models Though independent professional computer manufacturers such as Supermicro offer blade servers, the market is dominated by large public companies such as Cisco Systems, which had a 40% share by revenue in the Americas in the first quarter of 2014. The remaining prominent brands in the blade server market are HPE, Dell and IBM, though the latter sold its x86 server business to Lenovo in 2014 after selling its consumer PC line to Lenovo in 2005. In 2009, Cisco announced blades in its Unified Computing System product line, consisting of 6U-high chassis with up to 8 blade servers in each chassis. It has a heavily modified Nexus 5K switch, rebranded as a fabric interconnect, and management software for the whole system.
HP's line consists of two chassis models, the c3000 which holds up to 8 half-height ProLiant line blades (also available in tower form), and the c7000 (10U) which holds up to 16 half-height ProLiant blades. Dell's product, the M1000e, is a 10U modular enclosure and holds up to 16 half-height PowerEdge blade servers or 32 quarter-height blades. See also Blade PC HP BladeSystem Mobile PCI Express Module (MXM) Modular crate electronics Multibus Server computer References External links BladeCenter blade servers – x86 processor-based servers Server hardware Computer-related introductions in 2001
4614585
https://en.wikipedia.org/wiki/Connect%3ADirect
Connect:Direct
Connect:Direct, originally named Network Data Mover (NDM), is a computer software product that transfers files between mainframe computers and/or midrange computers. It was developed for mainframes, with other platforms being added as the product grew. NDM was renamed Connect:Direct in 1993, following the acquisition of Systems Center, Inc. by Sterling Software. In 1996, Sterling Software executed a public spinoff of a new entity called Sterling Commerce, which consisted of the Communications Software Group (the business unit responsible for marketing the Connect:Direct product and other file transfer products sourced from the pre-1993 Sterling Software, e.g. Connect:Mailbox) and the Sterling EDI Network business. In 2000, SBC Communications acquired Sterling Commerce and held it until 2010. AT&T merged with SBC effective November 2005. In 2010, IBM completed the purchase of Sterling Commerce from AT&T. Technology Traditionally, Sterling Connect:Direct used IBM's Systems Network Architecture (SNA) via dedicated private lines between the parties involved to transfer the data. In the early 1990s TCP/IP support was added. Connect:Direct's primary advantage over FTP was that it made file transfers routine and reliable. IBM Sterling Connect:Direct is used within the financial services industry, government agencies and other large organizations that have multiple computing platforms: mainframes, midrange, Linux or Windows systems. In terms of speed, Connect:Direct typically performs slightly faster than FTP, reaching the maximum that the interconnecting link can support. If CPU cycles are available, Connect:Direct has several compression modes that can greatly enhance the throughput of the transfer, but care must be exercised in multi-processing environments, as Connect:Direct can consume large amounts of processing cycles, impacting other workloads. Connect:Direct originally did not support encrypted and secure data transfers; however, an add-on, Connect:Direct Secure+, provided such support. Encryption can be accomplished using SSL, TLS or the Station-to-Station (STS) protocol. Since being acquired by IBM, the add-on has been folded into the base product, so it always supports the latest encryption and security standards. Connect:Direct file transfers can be done in two formats: binary mode (where no translation occurs) or a mode where translation is used to convert an ASCII file to EBCDIC as it is moved to a mainframe, or vice versa. These conversions are handled automatically based on the local systems, something that is a significant concern with other file transfer software when moving between distributed and mainframe systems. History In the mid-1980s, several employees of UCC (University Computing Company, subsequently renamed Uccel Corporation) left to form "The System Center, Inc." in Dallas, Texas. The new company was going to develop a mainframe systems management tool. While researching the requirements of this new software package, it became clear that a more marketable tool would be a high-speed file transfer product. Thus Network Data Mover ("NDM") was created. Originally developed to support high-speed file transfer between mainframes using IBM's MVS operating system, later support was added for IBM's DOS/VSE (mainframe DOS, not PC) and VM/CMS operating systems. Recognizing the need to span the diversity of hardware environments, midrange and finally PC support was added.
The System Center merged with VM Software of Reston, Virginia to form "Systems Center, Inc.", with the new headquarters in Reston. The combined company was later purchased by Sterling Software of Dallas in 1993. The headquarters then moved back to Dallas. In 1996 the company was split back into two separate divisions, NDM and VM; the two new companies were called Sterling Commerce (the file transfer group) and Sterling Software (all the application software). Sterling Software had its own file transfer product, Synctrac, which was merged with the NDM division to create a single file-transfer-centric entity. As the Internet boom occurred, the need for these file transfer systems grew, and the value of the division grew with it. Eventually, in the final days of the dot-com boom in early 2000, merely weeks before the crash, Sam Wyly and Sterling Williams sold Sterling Commerce to SBC Communications (now AT&T) for $3.9 billion. Days later, the other half of Sterling was sold to Computer Associates International for another $4 billion. IBM announced the closing of its acquisition of Sterling Commerce on August 27, 2010 for approximately $1.4 billion. References External links Systems Network Architecture Internet Protocol based network software File transfer software IBM mainframe software
24727697
https://en.wikipedia.org/wiki/Rattle%20GUI
Rattle GUI
Rattle GUI is a free and open source software (GNU GPL v2) package providing a graphical user interface (GUI) for data mining using the R statistical programming language. Rattle is used in a variety of situations. Currently there are 15 different government departments in Australia, in addition to various other organisations around the world, which use Rattle in their data mining activities and as a statistical package. Rattle provides considerable data mining functionality by exposing the power of the R statistical software through a graphical user interface. Rattle is also used as a teaching facility for learning R: a Log tab replicates the R code for any activity undertaken in the GUI, which can be copied and pasted. Rattle can be used for statistical analysis or model generation. Rattle allows the dataset to be partitioned into training, validation and testing sets. The dataset can be viewed and edited. There is also an option for scoring an external data file. Features File inputs: CSV, TXT, Excel, ARFF, ODBC, R dataset, RData file, library package datasets, corpus, and scripts. Statistics: min, max, quartiles, mean, standard deviation, missing, median, sum, variance, skewness, kurtosis, chi-square. Statistical tests: correlation, Kolmogorov-Smirnov, Wilcoxon rank sum, t-test, F-test, and Wilcoxon signed rank. Clustering: KMeans, Clara, hierarchical, and BiCluster. Modeling: decision trees, random forests, AdaBoost, support vector machines, logistic regression, and neural networks. Evaluation: confusion matrix, risk charts, cost curves, Hand, lift, ROC, precision, sensitivity. Charts: box plot, histogram, correlations, dendrograms, cumulative, principal components, Benford, bar plot, dot plot, and mosaic. Transformations: rescale (recenter, scale 0-1, median/MAD, natural log, and matrix); impute (zero/missing, mean, median, mode and constant); recode (binning, k-means, equal widths, indicator, join categories); cleanup (delete ignored, delete selected, delete missing, delete observations with missing values). Rattle also uses two external graphical investigation / plotting tools: Latticist and GGobi are independent applications which provide highly dynamic and interactive graphic data visualisation for exploratory data analysis. Packages The capabilities of R are extended through user-submitted packages, which allow specialized statistical techniques, graphical devices, as well as import/export capabilities to many external data formats. Rattle uses these packages: RGtk2, pmml, colorspace, ada, amap, arules, biclust, cba, descr, doBy, e1071, ellipse, fEcofin, fBasics, foreign, fpc, gdata, gtools, gplots, gWidgetsRGtk2, Hmisc, kernlab, latticist, Matrix, mice, network, nnet, odfWeave, party, playwith, psych, randomForest, reshape, RGtk2Extras, ROCR, RODBC, rpart, RSvgDevice, survival, timeDate, graph, RBGL, and bitops. See also R interfaces References Graham J Williams (2011). Data Mining with Rattle and R: The Art of Excavating Data for Knowledge Discovery, Springer, Use R!. In 2010, Rattle was listed in the top 10 graphical user interfaces in statistical software by Decision Stats. Rattle is described as an "attractive, easy-to-use front end ... data mining toolkit" in an article published in Teradata Magazine, volume 9, issue 3, page 57 (September 2009). Graham J Williams (2009). Rattle: A Data Mining GUI for R. The R Journal 1(2):45-55. External links Official home page Source code page Free R (programming language) software Data mining and machine learning software
18436261
https://en.wikipedia.org/wiki/You%27re%20the%20Greatest%20LUVer
You're the Greatest LUVer
You're the Greatest LUVer is a 1998 German compilation album by Dutch girl group Luv' which features hit singles and album tracks from the group's heyday (1977–1981). As the female pop trio dominated the charts in a large part of Continental Europe in the late 1970s, Germany was Luv's biggest export market. They had eight entries on the German singles chart: "You're the Greatest Lover" (a #1 hit that reached gold status, selling 600,000 units in Germany alone), "Trojan Horse" (a Top 3 song), "Casanova" (a Top 10 single), "Eeny Meeny Miny Moe", "Ooh, Yes I Do", "Ann-Maria", "One More Little Kissie" and "My Number One". Album history In 1979, Patty Brard, José Hoebee, Marga Scheide, their producers (Hans van Hemert and Piet Souer) and their manager (Pim Ter Linde) received the 'Conamus Export Prize' for being 'Holland's best export act'. Germany was a key territory for the group and helped them score successful hit records outside their homeland. Luv' was a household name thanks to their performances on German TV shows such as Starparade, Tanzparty, Musikladen and Disco. At the height of their German success, "You're the Greatest Lover" was used in the soundtrack of an episode of the Derrick TV series. Moreover, Luv' played a cameo role in the 1979 movie Cola, Candy, Chocolate, in which they performed "Trojan Horse". To celebrate the 20th anniversary of the trio's success in Goethe's country, Mercury Records decided to release the compilation You're the Greatest LUVer. Nowadays, the trio's greatest hits (like "You're the Greatest Lover" and "Trojan Horse") are often included on German 1970s/disco various-artists compilation albums or downloadable as ringtones. Track listing All tracks written by Hans van Hemert and Piet Souer under the pseudonym 'Janschen & Janschens'.
"You're the Greatest Lover" – 2:51 Taken from the album With Luv' (1978) "Casanova" – 3:51 Taken from the album Lots of Luv' (1979a) "U.O.me (Waldolala)" – 2:58 Taken from the album With Luv' (1978) "My Number One" – 3:11 Taken from the album Forever Yours (1980) "Shoes off (Boots on)" – 3:07 Taken from the album Lots of Luv' (1979a) "Don Juanito De Carnaval" – 3:12 Taken from the album With Luv' (1978) "Trojan Horse" – 3:25 Taken from the German edition of the With Luv' (1978) album "Louis Je T'Adore" – 2:36 B-side of "My Man", taken from the album With Luv' (1978) "Don't Let Me Down" – 3:36 B-side of "Eeny Meeny Miny Moe", taken from the album With Luv' (1978) "Eeny Meeny Miny Moe – 2:59 Taken from the album Lots of Luv' (1979a) "DJ" – 3:20 B-side of "Casanova", taken from the album Lots of Luv' (1979a) "Ooh, Yes I Do" – 2:58 Taken from the album True Luv' (1979b) "Ann-Maria – 4:41 Taken from the album 'True Luv' (1979b) "One More Little Kissie" – 3:46 Taken from the album Forever Yours (1980) "Tingalingaling" – 2:31 Taken from the album Forever Yours (1980) "Marcellino – 3:14 Taken from the album Lots of Luv' (1979a) "I Like Sugar Candy Kisses" – 3:36 Taken from the album Lots of Luv' (1979a) "If You Love Me" – 2:34 Taken from the album Lots of Luv' (1979a) "I.M.U.R" – 3:36 B-side of "Eeny Meeny Miny Moe", taken from the album Lots of Luv''' (1979a) "Luv' Hitpack (Medley)" – 4:32 Taken from the single "Luv' Hitpack" (1989) Personnel José Hoebee – vocals Patty Brard – vocals Marga Scheide – vocals Ria Thielsch – vocals Production Hans van Hemert – producer, songwriter Piet Souer – conductor/arranger Peter Slaghuis – remix on track 20 References External links Detailed Luv' discography at Rate Your Music Detailed Luv' discography at Discogs Luv' albums 1998 greatest hits albums
3369633
https://en.wikipedia.org/wiki/EMeta
EMeta
eMeta Corporation was a company that provided access control, subscription management and ecommerce solutions for media, entertainment and software companies. It was founded in 1998 and headquartered in New York City. eMeta was taken over by Macrovision Corporation in 2006, and Macrovision sold the eMeta operations to Atypon in November 2008. The eMeta suite of software products, consisting of RightAccess and RightCommerce, helped customers such as McGraw-Hill, NYTimes.com and Wolters Kluwer license and sell digital assets, including text, audio, video, streaming media, games and software applications (SaaS). Products and services RightAccess RightAccess was the access control module within the eMeta suite, providing authentication, authorization, administration, registration and licensing functionality for all types of digital goods and services. RightAccess allowed companies to manage how customers accessed online offerings. RightCommerce RightCommerce was the ecommerce module within the eMeta suite, used to market and sell digital products. Designed for complex digital-goods billing and subscription management, it allowed companies to target customers with free trials, discounts or gift subscriptions, while increasing incremental or add-on purchases with promotions and targeted pricing. Customer-care features, including self-care, were aimed at improving customer satisfaction. eRightsWEB eRightsWEB was the ASP, or software as a service (SaaS), version of the eMeta suite, giving smaller organizations the opportunity to benefit from an access control and ecommerce system. Notable customers
- Celera Genomics
- CNN.com
- Elsevier
- FT.com
- Hoover's, Inc. (US and UK)
- iVillage.com
- Knight Ridder Digital
- McGraw-Hill Higher Education
- NASCAR.com
- NYTimes.com
- Oxford University Press
- ProQuest Information and Learning
- Reuters
- SmartMoney.com
- Turner
- The Globe and Mail
- Wolters Kluwer
References Software companies based in New York City Software companies established in 1998 Companies disestablished in 2006 Defunct software companies of the United States Defunct companies based in New York (state) 1998 establishments in New York City
30638405
https://en.wikipedia.org/wiki/Lightspeed%20Systems
Lightspeed Systems
Lightspeed Systems is a company based in Austin, Texas that builds and sells SaaS content-control software, mobile device management, and classroom management software to K–12 schools. Overview Founded in 1999 in Bakersfield, California, Lightspeed Systems, Inc., develops content filtering, mobile device management, and device monitoring software targeted at the education market. It is headquartered in Austin, Texas with an office in Brentwood, Essex (United Kingdom). The company has about 200 employees. In 2012, Inc.com ranked the company #1855 in the Inc 5000; Inc.com did not list the company in the Inc 5000 after 2012. In 2019, Lightspeed Systems received an investment from Madison Dearborn Partners. Filter In 2017, Lightspeed announced the Relay Filter, a cloud-based filtering and device monitoring software package for school Chromebooks; it has since been renamed Lightspeed Filter. Support for macOS, Windows and iOS was announced in 2018. Lightspeed Systems advertises the Filter product as blocking "inappropriate" content and as a tool for CIPA compliance. Alert In 2021, Lightspeed announced Alert, a product the company says uses AI and human review to identify threats of school violence or student self-harm. Mobile Manager Lightspeed Systems introduced its mobile device management program, Mobile Manager, in 2013. Classroom In April 2018, Lightspeed released Classroom, classroom management software which the company claims can monitor and control content loaded on devices used by students during class. Analytics In March 2019, Lightspeed Systems released Analytics for reporting on apps, applications, and web sites used in schools. Criticism Lightspeed Systems has been criticized by students for the amount of content blocked by the system. Voice of San Diego reported on how San Diego Unified awarded its contracts, noting that the latitude given to district managers to spend public money on companies they go on to work for has sparked new questions (https://www.voiceofsandiego.org/topics/education/revolving-door-between-sd-unified-contractors-still-open-despite-warnings/). In a discussion on the EduGeek forum, the reseller Schools Broadband (Talk Straight) and Lightspeed Systems traded accusations over the poaching of direct accounts from local authorities (http://www.edugeek.net/forums/internet-related-filtering-firewall/192875-lightspeed-systems-says-schools-broadband-telling-lies.html). See also Windows Live Family Safety iboss SASE Cloud Platform References 1999 establishments in California Companies based in Bakersfield, California Education companies established in 1999 Content-control software Internet safety Software companies based in California Software companies of the United States Software companies established in 1999
34981907
https://en.wikipedia.org/wiki/Darktable
Darktable
Darktable (stylized as darktable) is a free and open-source photography application program and raw developer. Rather than being a raster graphics editor like Adobe Photoshop or GIMP, it comprises a subset of image editing operations specifically aimed at non-destructive raw image post-production. It is primarily focused on improving a photographer's workflow by facilitating the handling of large numbers of images. It is freely available in versions tailored for most major Linux distributions, macOS, Solaris and Windows, and is released under the GPL-3.0-or-later.
Features
Darktable uses non-destructive editing, similar to some other raw-manipulation software. Rather than being immediately applied to the raster data of the image, the user's parameter adjustments are displayed in real time while the original image data is kept untouched until final rendering at the export stage. The program features built-in ICC profiles, GPU acceleration (based on OpenCL), and supports most common image formats.
Main features:
- Non-destructive editing, with changes described in XMP entries
- 32-bit floating-point processing per color channel in CIE LAB space
- Full implementation of color management
- Supports RAW, JPG, RGBE, PFM and more
- Completely modular architecture
- More than 30 modules for transformation, color correction, quality improvement and artistic effects
- Organize images and search by parameters
- Translated into 19 languages
- Tethered shooting (capturing directly through a connected camera)
- Find similar photos
- Support for geographic coordinate labels, with display of photos on a map
- Export to Flickr and Facebook
- An integrated Lua interpreter for executing scripts; scripts can be linked to hotkeys or to specific events, such as the import of new images
Masks
Support for drawn masks was added in Darktable version 1.4, allowing application of effects to manually specified areas of an image. There are five mask types available: brush, circle, ellipse, Bézier path, and gradient; all are resizable, support a fade-out radius for smooth blending, and can have their opacity controlled. An arbitrary number of masks can be created; they are collected in a "mask manager" on the left-hand side of the darkroom UI.
Color
Darktable has built-in ICC profile support for sRGB, Adobe RGB, XYZ and linear RGB color spaces.
Importing and exporting
Raw image formats, JPEG, HDR and PFM images can be imported from disk or camera, and exported to disk, Picasa Web Albums, Flickr, email, and a simple HTML-based web gallery as JPEG, PNG, TIFF, WebP, PPM, PFM and EXR images. Images can be exported to Wikimedia Commons using an external plugin.
Scripting
Darktable can be controlled by scripts written in Lua version 5.2. Lua can be used to define actions that Darktable should perform whenever a specified event is triggered. One example might be calling an external application during file export in order to apply additional processing steps outside of Darktable.
Multi-mode histogram
Multiple histogram types are available, all with individually selectable red, green and blue channels: linear, logarithmic and waveform (new in version 1.4).
User interface
Darktable has two main modes, lighttable and darkroom; each represents a step in the image development process. Two more modes are tethering and a map view. Upon launching, the lighttable opens by default, listing image collections. All panels in all modes can be minimized to save screen space.
Lighttable
The left panel is for importing images, displaying Exif information, and filtering. Rating and categorizing buttons are at the top, while the right-side panel features various modules such as a metadata editor and a tag editor. A module used to export images is located at the bottom right.
Darkroom
The second mode, "darkroom", displays the image at center, with four panels around it; most tools appear on the right side. The left panel displays a pannable preview of the current image, an undo history stack, a color picker, and Exif information. A filmstrip with other images is displayed at the bottom, and can be sorted and filtered using lists from the upper panel. The latter also gives access to the preferences configuration. Darktable's configuration allows custom keyboard shortcuts and personalized defaults.
Tethering
The third mode allows tethering through gPhoto to some of the cameras that support it.
Map
The fourth mode can display maps from different online sources and geotags images by drag-and-drop. It also uses maps to show images already geotagged by a camera.
Plugins
darktable includes 67 image adjustment plugins, which it divides into five groups:
Basic group: plugins for simple, well-known photo adjustment operations, including the contrast/brightness/saturation module; shadows and highlights; color reconstruction; base curve, with presets to automatically improve contrast and colors; crop and rotate; orientation; exposure; demosaic; highlight reconstruction; white balance; invert; and raw black/white point.
Tone group: plugins related to contrast and lighting, including fill light, for modifying the exposure based on pixel lightness; levels, to set black; tone curve; zone system; filmic; local contrast; global tone mapping; and tone mapping.
Color group: plugins related to hue and saturation, including velvia, which mimics Velvia film colors by increasing saturation more on lower-saturated pixels than on highly saturated pixels; channel mixer; output color profile; color contrast; color correction, to modify the global saturation or to give a tint; monochrome; color zones; color balance; vibrance; color look-up table; input color profile; and unbreak input color profile.
Correction group: plugins for repairing visual imperfections, including dithering; sharpen; equalizer; denoise (non-local means); defringe; haze removal; denoise (bilateral filter); scale pixels; rotate pixels; liquify; perspective correction; lens correction, using the LensFun library; retouch; spot removal; denoise (profiled); raw denoise; hot pixels; and chromatic aberrations.
Effect group: artistic postprocessing plugins used for visual effects, including watermark; framing; split-toning; vignetting; soften; grain; highpass; lowpass; lowlight vision; bloom; color mapping; colorize; and graduated density.
Development
Google Summer of Code
In 2011, the Darktable team participated in the Google Summer of Code (GSoC). The main goals were to remove the libglade dependency from Darktable and to make room for more modularity. The input system for handling shortcuts was also rewritten and incorporated into version 0.9.
Distribution
Darktable is released under the GPL-3.0-or-later as free software. The current version of Darktable works on Linux, macOS and Windows. Many Linux distributions include Darktable in their default repositories, including Debian, Fedora, openSUSE, Arch Linux, and Gentoo Linux. Darktable also runs on Solaris 11, with packages in IPS format available from the maintainer.
See also Adobe Photoshop Lightroom Comparison of raster graphics editors Rawstudio RawTherapee UFRaw References Bibliography External links Darktable user manual in English (also available in French, Italian, and German) Darktable Darktable Tutorials Short and easy RAW development and Gimp integration tutorial Free photo software Free software programmed in C Graphics software that uses GTK MacOS graphics software Photo software for Linux Raw image processing software
29185545
https://en.wikipedia.org/wiki/Alpine%20Linux
Alpine Linux
Alpine Linux is a Linux distribution based on musl and BusyBox, designed for security, simplicity, and resource efficiency. It uses OpenRC for its init system and compiles all user-space binaries as position-independent executables with stack-smashing protection. Because of its small size, it is commonly used in containers, providing quick boot-up times. Linux distributions like postmarketOS are based on Alpine Linux. History Originally, Alpine Linux began as a fork of the LEAF Project. The members of LEAF wanted to continue making a Linux distribution that could fit on a single floppy disk, whereas the Alpine Linux developers wished to include some more heavyweight packages: Squid and Samba. They also added security features and a newer kernel. Features Alpine uses its own package-management system, apk-tools, which originally was a collection of shell scripts but was later rewritten in C. Alpine's repositories contain many of the packages commonly found in other Linux distributions, but some packages (for example, the Cinnamon desktop environment) are missing. Alpine Linux can be installed as a run-from-RAM operating system. The LBU (Alpine Local Backup) tool optionally allows all configuration files to be backed up to an APK overlay file (usually shortened to apkovl), a tar.gz file that by default stores a copy of all changed files in /etc (with the option to add more directories). This allows Alpine to work reliably in demanding embedded environments or to (temporarily) survive partial disk failures as sometimes experienced in public cloud environments. A hardened kernel was included in the default distribution up to and including Alpine 3.7, which aids in reducing the impact of exploits and vulnerabilities. All packages are also compiled with stack-smashing protection to help mitigate the effects of userland buffer overflows. By default, it includes patches that allow building efficient meshed VPNs using the DMVPN standard. It has consistently offered up-to-date support for the Xen hypervisor, avoiding issues sometimes experienced with enterprise distributions. (The standard Linux hypervisor, KVM, is also available.) It allows very small Linux containers, around 8 MB in size, while a minimal installation to disk might be around 130 MB. Alpine Configuration Framework (ACF): While optional, ACF is an application for configuring an Alpine Linux machine, with goals similar to Debian's debconf. It is a standard framework based on simple Lua scripts. Alpine previously used uClibc as its C standard library instead of the traditional GNU C Library (glibc). Although uClibc is more lightweight, it has the significant drawback of being binary-incompatible with glibc, so all software had to be compiled against uClibc to work properly. As of 9 April 2014, Alpine Linux switched to musl, which is partially binary-compatible with glibc. Unlike some other Linux distributions, Alpine does not use systemd as its init system; instead, it uses OpenRC. References External links Light-weight Linux distributions Linux distributions without systemd X86-64 Linux distributions Linux distributions
18559817
https://en.wikipedia.org/wiki/Genedata
Genedata
Genedata is a Swiss-headquartered bioinformatics company that provides enterprise software solutions supporting large-scale experimental processes in life science research. The company focuses on automating data-rich, highly complex data workflows in biopharmaceutical R&D. It continuously develops and markets interoperable software solutions that together comprise the Genedata Biopharma Platform. Almost all of the world's top 50 biopharmaceutical companies, including some of the most innovative R&D organizations developing therapies in fields such as cancer immunology, cell and gene therapy, and vaccines, license at least one component of the Genedata Biopharma Platform. The company is headquartered in Basel, Switzerland, with subsidiaries and offices in Germany, the United Kingdom, the United States, Singapore, and Japan.
Products
Genedata software solutions capture, process, and analyze large volumes of experimental data and drive the automation of complex experimental setups. The Genedata Biopharma Platform is a product portfolio comprising two main types of software products: Genedata Data Analysis Systems and Genedata Workflow Systems.
Genedata Data Analysis Systems automate complex and data-rich processes throughout biopharma R&D, bringing built-in expert knowledge into the data analysis process. They support a vendor-agnostic approach; that is, they work with all types of instruments from different instrument vendors.
- Genedata Expressionist: streamlines biopharma mass spectrometry workflows across instruments and organizations.
- Genedata Imagence: automates high-content screening image analysis workflows based on a deep-learning approach.
- Genedata Screener: captures, analyzes, and manages all screening data, automating even complex assays.
- Genedata Selector: streamlines NGS-based workflows from cell line development to biosafety.
Genedata Workflow Systems provide a backbone for managing R&D processes.
- Genedata Biologics: supports discovery of biotherapeutics including mAbs, bispecifics, ADCs, TCRs, and CAR T cells.
- Genedata Bioprocess: supports the design of next-generation manufacturing processes for cell line development, upstream and downstream processing, formulation, and analytics.
- Genedata Profiler: breaks down data silos and fosters data-informed decisions for clinical trials.
Genedata has been developing these platforms since 1997 and continues to develop its products, providing up to four software releases per year for each product to its customers under a software subscription license model. Services offered also include hosting under a Software as a Service (SaaS) model, operational IT services, integration and software customization, support and maintenance, as well as training and process consulting.
Industries
Genedata has a strong presence in research and development laboratories where large quantities of complex experimental data are generated. Within the biopharmaceutical industry, Genedata collaborates with almost all of the top 50 biopharma organizations and many of the most innovative biotechnology companies worldwide. Genedata also works with leading agriscience and other life science organizations that address nutritional and health-related challenges. Contract research organizations (CROs) and academic research institutions also use Genedata software and services.
Sites
Genedata has established offices around the world to address its customers' needs where they are based.
Its offices are located in:
- Switzerland: Basel (headquarters)
- Germany: Munich
- Japan: Tokyo
- Singapore
- United Kingdom: Cambridge
- United States: Boston and San Francisco
See also Bioinformatics companies Laboratory informatics Life sciences Visual analytics References External links Official website Staff listing - LinkedIn Bioinformatics companies Research support companies Software companies of Switzerland Companies established in 1997
2287686
https://en.wikipedia.org/wiki/ESTREAM
ESTREAM
eSTREAM is a project to "identify new stream ciphers suitable for widespread adoption", organised by the EU ECRYPT network. It was set up as a result of the failure of all six stream ciphers submitted to the NESSIE project. The call for primitives was first issued in November 2004, and the project was completed in April 2008. The project was divided into separate phases, and its goal was to find algorithms suitable for different application profiles.
Profiles
The submissions to eSTREAM fall into either or both of two profiles:
- Profile 1: "Stream ciphers for software applications with high throughput requirements"
- Profile 2: "Stream ciphers for hardware applications with restricted resources such as limited storage, gate count, or power consumption."
Both profiles contain an "A" subcategory (1A and 2A) with ciphers that also provide authentication in addition to encryption. In Phase 3 none of the ciphers providing authentication were considered (the NLS cipher had authentication removed from it to improve its performance).
eSTREAM portfolio
The following ciphers make up the eSTREAM portfolio (a sketch of the core operation of one of them appears after the phase overview below):
- Profile 1 (software): HC-128, Rabbit, Salsa20/12, SOSEMANUK
- Profile 2 (hardware): Grain v1, MICKEY 2.0, Trivium
These are all free for any use. Rabbit was the only one that had a patent pending during the eSTREAM competition, but it was released into the public domain in October 2008. The original portfolio, published at the end of Phase 3, consisted of the above ciphers plus F-FCSR, which was in Profile 2. However, cryptanalysis of F-FCSR led to a revision of the portfolio in September 2008 which removed that cipher.
Phases
Phase 1
Phase 1 included a general analysis of all submissions with the purpose of selecting a subset of the submitted designs for further scrutiny. The designs were scrutinized based on criteria of security, performance (with respect to the block cipher AES, a US Government-approved standard, as well as to the other candidates), simplicity and flexibility, justification and supporting analysis, and clarity and completeness of the documentation. Submissions in Profile 1 were only accepted if they demonstrated software performance superior to AES-128 in counter mode. Activities in Phase 1 included a large amount of analysis and presentations of analysis results, as well as discussion. The project also developed a framework for testing the performance of the candidates, which was then used to benchmark the candidates on a wide variety of systems. On 27 March 2006, the eSTREAM project officially announced the end of Phase 1.
Phase 2
On 1 August 2006, Phase 2 officially started. For each of the profiles, a number of algorithms were selected as Focus Phase 2 algorithms: designs that eSTREAM found of particular interest and on which it encouraged further cryptanalysis and performance evaluation. Additionally, a number of algorithms for each profile were accepted as Phase 2 algorithms, meaning that they remained valid eSTREAM candidates. The Focus 2 candidates were re-classified every six months.
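To give a flavour of the primitives involved before turning to Phase 3, the following Python sketch implements the quarter-round at the core of Salsa20, the add-rotate-XOR building block of Salsa20/12, one of the eventual Profile 1 portfolio ciphers. It is an illustration of the round structure only, not a complete or audited implementation of the cipher.

# Illustrative sketch of the Salsa20 quarter-round (add-rotate-XOR),
# the building block of one eSTREAM portfolio cipher.
MASK = 0xFFFFFFFF  # all arithmetic is modulo 2^32

def rotl32(x, n):
    """Rotate a 32-bit word left by n bits."""
    return ((x << n) | (x >> (32 - n))) & MASK

def quarter_round(y0, y1, y2, y3):
    """One Salsa20 quarter-round, per D. J. Bernstein's specification."""
    z1 = y1 ^ rotl32((y0 + y3) & MASK, 7)
    z2 = y2 ^ rotl32((z1 + y0) & MASK, 9)
    z3 = y3 ^ rotl32((z2 + z1) & MASK, 13)
    z0 = y0 ^ rotl32((z3 + z2) & MASK, 18)
    return z0, z1, z2, z3

# Trivial sanity check: the all-zero input maps to the all-zero output.
assert quarter_round(0, 0, 0, 0) == (0, 0, 0, 0)

The full cipher applies such rounds to a 4x4 matrix of 32-bit words derived from the key, nonce, and block counter; Salsa20/12 is the variant with 12 rounds.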
Phase 3
Phase 3 started in April 2007. Candidates for Profile 1 (software) were:
- CryptMT (version 3)
- Dragon
- HC (HC-128 and HC-256)
- LEX (LEX-128, LEX-192 and LEX-256)
- NLS (NLSv2, encryption only, not authentication)
- Rabbit
- Salsa20/12
- SOSEMANUK
Candidates for Profile 2 (hardware) were:
- DECIM (DECIM v2 and DECIM-128)
- F-FCSR (F-FCSR-H v2 and F-FCSR-16)
- Grain (Grain v1 and Grain-128)
- MICKEY (MICKEY 2.0 and MICKEY-128 2.0)
- Moustique
- Pomaranch (version 3)
- Trivium
Phase 3 ended on 15 April 2008 with the announcement of the candidates that had been selected for the final eSTREAM portfolio. The selected Profile 1 algorithms were HC-128, Rabbit, Salsa20/12, and SOSEMANUK; the selected Profile 2 algorithms were F-FCSR-H v2, Grain v1, MICKEY 2.0, and Trivium.
Submissions
In eSTREAM portfolio: as of revision 1 (September 2008), the portfolio ciphers are HC-128, Rabbit, Salsa20/12 and SOSEMANUK in Profile 1, and Grain v1, MICKEY 2.0 and Trivium in Profile 2. Versions of several portfolio ciphers that support extended key lengths also exist. Note that the 128-bit version of Grain v1 is no longer supported by its designers and has been replaced by Grain-128a; Grain-128a is not considered to be part of the eSTREAM portfolio.
No longer in eSTREAM portfolio: F-FCSR-H v2 was in the original portfolio but was removed in revision 1, published in September 2008.
Ciphers that reached Phase 3 but were not selected for the portfolio include CryptMT, Dragon, LEX, NLS, DECIM, Moustique and Pomaranch.
See also AES process CAESAR Competition – Competition to design authenticated encryption schemes NESSIE CRYPTREC References External links Homepage for the project Discussion forum The eSTREAM testing framework Update 1: (PDF) Notes on the ECRYPT Stream Cipher project by Daniel J. Bernstein Cryptography contests Research projects Stream ciphers
238042
https://en.wikipedia.org/wiki/End-to-end%20principle
End-to-end principle
The end-to-end principle is a design framework in computer networking. In networks designed according to this principle, guaranteeing certain application-specific features, such as reliability and security, requires that they reside in the communicating end nodes of the network. Intermediary nodes, such as gateways and routers, that exist to establish the network, may implement these to improve efficiency but cannot guarantee end-to-end correctness. The essence of what would later be called the end-to-end principle was contained in the work of Paul Baran and Donald Davies on packet-switched networks in the 1960s. Louis Pouzin pioneered the use of the end-to-end strategy in the CYCLADES network in the 1970s. The principle was first articulated explicitly in 1981 by Saltzer, Reed, and Clark. The meaning of the end-to-end principle has been continuously reinterpreted ever since its initial articulation. Also, noteworthy formulations of the end-to-end principle can be found before the seminal 1981 Saltzer, Reed, and Clark paper. A basic premise of the principle is that the payoffs from adding certain features required by the end application to the communication subsystem quickly diminish. The end hosts have to implement these functions for correctness. Implementing a specific function incurs some resource penalties regardless of whether the function is used or not, and implementing a specific function in the network adds these penalties to all clients, whether they need the function or not. Concept The fundamental notion behind the end-to-end principle is that for two processes communicating with each other via some communication means, the reliability obtained from that means cannot be expected to be perfectly aligned with the reliability requirements of the processes. In particular, meeting or exceeding very high-reliability requirements of communicating processes separated by networks of nontrivial size is more costly than obtaining the required degree of reliability by positive end-to-end acknowledgments and retransmissions (referred to as PAR or ARQ). Put differently, it is far easier to obtain reliability beyond a certain margin by mechanisms in the end hosts of a network rather than in the intermediary nodes, especially when the latter are beyond the control of, and not accountable to, the former. Positive end-to-end acknowledgments with infinite retries can obtain arbitrarily high reliability from any network with a higher than zero probability of successfully transmitting data from one end to another. The end-to-end principle does not extend to functions beyond end-to-end error control and correction, and security. E.g., no straightforward end-to-end arguments can be made for communication parameters such as latency and throughput. In a 2001 paper, Blumenthal and Clark note: "[F]rom the beginning, the end-to-end arguments revolved around requirements that could be implemented correctly at the endpoints; if implementation inside the network is the only way to accomplish the requirement, then an end-to-end argument isn't appropriate in the first place." The end-to-end principle is closely related, and sometimes seen as a direct precursor, to the principle of net neutrality. History In the 1960s, Paul Baran and Donald Davies, in their pre-ARPANET elaborations of networking, made brief comments about reliability that capture the essence of the later end-to-end principle. To quote from a 1964 Baran paper, "Reliability and raw error rates are secondary. 
The network must be built with the expectation of heavy damage anyway. Powerful error removal methods exist." Similarly, Davies notes on end-to-end error control, "It is thought that all users of the network will provide themselves with some kind of error control and that without difficulty this could be made to show up a missing packet. Because of this, loss of packets, if it is sufficiently rare, can be tolerated." The ARPANET was the first large-scale general-purpose packet-switching network, implementing several of the basic notions previously touched on by Baran and Davies. Davies had worked on simulation of datagram networks. Building on this idea, Louis Pouzin's CYCLADES network was the first to make the hosts responsible for the reliable delivery of data, rather than this being a centralized service of the network itself. Concepts implemented in this network influenced TCP/IP architecture. Applications ARPANET The ARPANET demonstrated several important aspects of the end-to-end principle. Packet switching pushes some logical functions toward the communication endpoints: If the basic premise of a distributed network is packet switching, then functions such as reordering and duplicate detection inevitably have to be implemented at the logical endpoints of such a network. Consequently, the ARPANET featured two distinct levels of functionality: a lower level concerned with transporting data packets between neighboring network nodes (called Interface Message Processors or IMPs), and a higher level concerned with various end-to-end aspects of the data transmission. Dave Clark, one of the authors of the end-to-end principle paper, concludes: "The discovery of packets is not a consequence of the end-to-end argument. It is the success of packets that make the end-to-end argument relevant." No arbitrarily reliable data transfer without end-to-end acknowledgment and retransmission mechanisms: The ARPANET was designed to provide reliable data transport between any two endpoints of the network, much like a simple I/O channel between a computer and a nearby peripheral device. In order to remedy any potential failures of packet transmission, normal ARPANET messages were handed from one node to the next node with a positive acknowledgment and retransmission scheme; after a successful handover they were then discarded, and no source-to-destination retransmission in case of packet loss was catered for. However, in spite of significant efforts, the perfect reliability envisaged in the initial ARPANET specification turned out to be impossible to provide, a reality that became increasingly obvious once the ARPANET grew well beyond its initial four-node topology. The ARPANET thus provided a strong case for the inherent limits of network-based hop-by-hop reliability mechanisms in pursuit of true end-to-end reliability. Trade-off between reliability, latency, and throughput: The pursuit of perfect reliability may hurt other relevant parameters of a data transmission, most importantly latency and throughput. This is particularly important for applications that value predictable throughput and low latency over reliability; the classic example is interactive real-time voice applications. This use case was catered for in the ARPANET by providing a raw message service that dispensed with various reliability measures so as to provide a faster and lower-latency data transmission service to the end hosts.
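The positive acknowledgment and retransmission (PAR/ARQ) idea that runs through this history can be made concrete with a small sketch. The following Python fragment implements a toy stop-and-wait scheme in which the endpoints themselves compute and verify a checksum; the lossy channel model and all names are illustrative inventions, not any real protocol.

import hashlib
import random

def make_packet(data: bytes) -> bytes:
    # Sender prepends an end-to-end checksum computed over the payload.
    return hashlib.sha256(data).digest() + data

def channel(packet: bytes):
    # Toy lossy channel: drops or corrupts packets with some probability.
    r = random.random()
    if r < 0.2:
        return None                                  # lost in transit
    if r < 0.3:
        return packet[:-1] + bytes([packet[-1] ^ 1])  # last byte corrupted
    return packet

def receiver_acknowledges(packet) -> bool:
    # Receiver ACKs only packets whose checksum verifies end to end.
    if packet is None:
        return False
    digest, payload = packet[:32], packet[32:]
    return hashlib.sha256(payload).digest() == digest

def send_with_arq(data: bytes, max_tries: int = 1000) -> int:
    """Stop-and-wait ARQ: retransmit until an acknowledgment arrives."""
    packet = make_packet(data)
    for attempt in range(1, max_tries + 1):
        if receiver_acknowledges(channel(packet)):    # ACK received
            return attempt
    raise RuntimeError("no acknowledgment after max_tries")

print("delivered on attempt", send_with_arq(b"end-to-end reliability"))

Because each attempt succeeds with probability greater than zero, the chance that the loop eventually succeeds approaches one as retries accumulate, which is precisely the "arbitrarily high reliability" argument made above.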
TCP/IP
Internet Protocol (IP) is a connectionless datagram service with no delivery guarantees. On the Internet, IP is used for nearly all communications. End-to-end acknowledgment and retransmission are the responsibility of the connection-oriented Transmission Control Protocol (TCP), which sits on top of IP. The functional split between IP and TCP exemplifies the proper application of the end-to-end principle to transport protocol design. File transfer An example of the end-to-end principle is that of an arbitrarily reliable file transfer between two endpoints in a distributed network of a varying, nontrivial size: the only way two endpoints can obtain a completely reliable transfer is by transmitting and acknowledging a checksum for the entire data stream. In such a setting, lesser checksum and acknowledgment (ACK/NACK) protocols are justified only for the purpose of optimizing performance; they are useful to the vast majority of clients, but are not enough to fulfill the reliability requirement of this particular application. A thorough checksum is hence best done at the endpoints, and the network maintains a relatively low level of complexity and reasonable performance for all clients. Limitations The most important limitation of the end-to-end principle is that its basic premise, placing functions in the application endpoints rather than in the intermediary nodes, is not trivial to implement. An example of the limitations of the end-to-end principle exists in mobile devices, for instance with mobile IPv6. Pushing service-specific complexity to the endpoints can cause issues with mobile devices if the device has unreliable access to network channels. Further problems can be seen with a decrease in network transparency from the addition of network address translation (NAT), which IPv4 relies on to combat address exhaustion. With the introduction of IPv6, users once again have unique identifiers, allowing for true end-to-end connectivity. Unique identifiers may be based on a physical address, or can be generated randomly by the host. See also Peer-to-peer Notes References Internet architecture Net neutrality Network architecture Programming paradigms
2153929
https://en.wikipedia.org/wiki/Interlock%20protocol
Interlock protocol
The interlock protocol, as described by Ron Rivest and Adi Shamir, was designed to frustrate eavesdropper attacks against two parties that use an anonymous key exchange protocol to secure their conversation. A further paper proposed using it as an authentication protocol, which was subsequently broken. Brief history Most cryptographic protocols rely on the prior establishment of secret or public keys or passwords. However, the Diffie–Hellman key exchange protocol introduced the concept of two parties establishing a secure channel (that is, with at least some desirable security properties) without any such prior agreement. Unauthenticated Diffie–Hellman, as an anonymous key agreement protocol, has long been known to be subject to man-in-the-middle attack. However, the dream of a "zipless" mutually authenticated secure channel remained. The Interlock Protocol was described as a method to expose a middle-man who might try to compromise two parties that use anonymous key agreement to secure their conversation. How it works The Interlock protocol works roughly as follows: Alice encrypts her message with Bob's key, then sends half of her encrypted message to Bob. Bob encrypts his message with Alice's key and sends half of his encrypted message to Alice. Alice then sends the other half of her message to Bob, who sends the other half of his. The strength of the protocol lies in the fact that half of an encrypted message cannot be decrypted. Thus, if Mallory begins her attack and intercepts Bob and Alice's keys, Mallory will be unable to decrypt Alice's half-message (encrypted using her key) and re-encrypt it using Bob's key. She must wait until both halves of the message have been received to read it, and can only succeed in duping one of the parties if she composes a completely new message. (A short sketch of this halving appears at the end of this section.) The Bellovin/Merritt Attack Davies and Price proposed the use of the Interlock Protocol for authentication in a book titled Security for Computer Networks, but an attack on this was described by Steven M. Bellovin and Michael Merritt. A subsequent refinement was proposed by Ellison. The Bellovin/Merritt attack entails composing a fake message to send to the first party. Passwords may be sent using the Interlock Protocol between A and B as follows:

A                              B
Ea,b(Pa)<1> ------->
            <------- Ea,b(Pb)<1>
Ea,b(Pa)<2> ------->
            <------- Ea,b(Pb)<2>

where Ea,b(M) is message M encrypted with the key derived from the Diffie–Hellman exchange between A and B, <1>/<2> denote first and second halves, and Pa/Pb are the passwords of A and B. An attacker, Z, could send half of a bogus message, P?, to elicit Pa from A:

A                  Z                  B
Ea,z(Pa)<1> ----->
            <----- Ea,z(P?)<1>
Ea,z(Pa)<2> ----->
                   Ez,b(Pa)<1> ----->
                               <----- Ez,b(Pb)<1>
                   Ez,b(Pa)<2> ----->
                               <----- Ez,b(Pb)<2>

At this point, Z has compromised both Pa and Pb. The attack can be defeated by verifying the passwords in parts, so that when Ea,z(P?)<1> is sent, it is known to be invalid and Ea,z(Pa)<2> is never sent (suggested by Davies). However, this does not work when the passwords are hashed, since half of a hash is useless, according to Bellovin. Several other methods have also been proposed, including using a shared secret in addition to the password. The forced-latency enhancement can also prevent certain attacks.
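The halving mechanism itself is easy to express in code. The following Python sketch assumes the third-party cryptography package and uses an authenticated cipher (AES-GCM) so that half of a ciphertext is genuinely useless on its own; for simplicity, a single shared key stands in for the keys the two parties would exchange, and all names are invented for the example.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # third-party

key = AESGCM.generate_key(bit_length=128)   # stands in for the exchanged keys
alice, bob = AESGCM(key), AESGCM(key)

def encrypt_and_split(aead, message: bytes):
    # Encrypt, then split the ciphertext into the two halves to be
    # sent in the interleaved exchange.
    nonce = os.urandom(12)
    ct = aead.encrypt(nonce, message, None)
    half = len(ct) // 2
    return nonce, ct[:half], ct[half:]

na, a1, a2 = encrypt_and_split(alice, b"Alice's message")
nb, b1, b2 = encrypt_and_split(bob, b"Bob's message")

# Interleaved exchange: a1 -> Bob, b1 -> Alice, a2 -> Bob, b2 -> Alice.
# Only once both halves have arrived can either side decrypt:
assert bob.decrypt(na, a1 + a2, None) == b"Alice's message"

# A man-in-the-middle holding only a1 cannot recover the plaintext;
# authenticated decryption of a half-message simply fails:
try:
    bob.decrypt(na, a1, None)
except Exception:
    print("half a ciphertext is useless on its own")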
Forced-Latency Interlock Protocol A modified Interlock Protocol can require B (the server) to delay all responses for a known duration:

A                              B
Ka ------------->
   <------------- Kb
Ea,b(Ma)<1> ---->
   <---- Ea,b(Mb)<1>   (B delays its response a fixed time, T)
Ea,b(Ma)<2> ---->
   <---- Ea,b(Mb)<2>   (delay again)
   <---------- data

where "data" is the encrypted data that immediately follows the Interlock Protocol exchange (it could be anything), encoded using an all-or-nothing transform to prevent in-transit modification of the message. Ma<1> could contain an encrypted request and a copy of Ka. Ma<2> could contain the decryption key for Ma<1>. Mb<1> could contain an encrypted copy of Kb, and Mb<2> could contain the decryption key for Mb<1> and the response, such as OK or NOT FOUND, and the hash digest of the data. A MITM attack can be attempted using the technique described in the Bellovin paper (Z being the man-in-the-middle):

A                  Z                    B
Ka ------------->  Kz ------------->
   <------------- Kz    <------------ Kb
Ea,z(Ma)<1> ---->
   <---- Ea,z(Mz)<1>   (delayed response)
Ea,z(Ma)<2> ---->
                   Ez,b(Ma)<1> ----->
                        <----- Ez,b(Mb)<1>   (delayed response)
   <---- Ea,z(Mz)<2>
                   Ez,b(Ma)<2> ----->
                        <----- Ez,b(Mb)<2>   (delayed response)
                        <------------ data
   <---------- data

In this case, A receives the data after approximately 3*T, since Z has to perform the interlocking exchange with B. Hence, the attempted MITM attack can be detected and the session aborted. Of course, Z could choose not to perform the Interlock Protocol with B (opting instead to send his own Mb), but then the session would be between A and Z, not A, Z, and B: Z wouldn't be in the middle. For this reason, the interlock protocol cannot be effectively used to provide authentication, although it can ensure that no third party can modify the messages in transit without detection. See also Computer security Cryptanalysis Secure channel Key management Cryptographic protocol Opportunistic encryption References External links Interlock protocol for authentication Full-Duplex-Chess Grandmaster (was: anonymous DH & MITM) Defense Against Middleperson Attacks (Zooko's Forced-Latency Protocol) Cryptographic attacks Key-agreement protocols
54787803
https://en.wikipedia.org/wiki/Journal%20of%20Computing%20Sciences%20in%20Colleges
Journal of Computing Sciences in Colleges
The Journal of Computing Sciences in Colleges is a bimonthly peer-reviewed, open access academic journal published by the Consortium for Computing Sciences in Colleges (CCSC) covering topics associated with computer science and education, and current technologies and methods in these areas. The journal publishes the proceedings from the conferences held annually in each of the CCSC regions. The journal was established in 1985 as the Journal of Computing in Small Colleges. Abstracting and indexing This journal is indexed by: ACM Computing Reviews ACM Digital Library ACM Guide to Computing Literature See also Association for Computing Machinery ACM Technical Symposium on Computer Science Education Computer Science Teachers Association References External links Computer science journals English-language journals Open access journals Publications established in 1985
61029521
https://en.wikipedia.org/wiki/Sri%20Lankan%20cyber%20security%20community
Sri Lankan cyber security community
The cyber security (or information assurance) community in Sri Lanka is diverse, with many stakeholder groups contributing to support the National Information and Cyber Security Strategy. The following is a list of some of these stakeholders. Government Computer Emergency Response Team Established under the Information and Communication Technology Agency in 2006, the Sri Lanka Computer Emergency Response Team (SLCERT) now functions as a government-owned private company under the purview of the Ministry of Digital Infrastructure and Information Technology, providing the services of a computer emergency response team. Finance Sector Computer Security Incident Response Team The Finance Sector Computer Security Incident Response Team (FINCSIRT) specializes in computer security relating to the banking and financial industry in Sri Lanka. It is a joint initiative between the Central Bank of Sri Lanka, SLCERT and the Sri Lankan Bankers Association. National Cyber Security Agency A National Cyber Security Agency has been proposed under the draft Cyber Security Bill. See also Sri Lankan intelligence agencies References Computer security organizations Cybercrime in Sri Lanka Internet in Sri Lanka
38044096
https://en.wikipedia.org/wiki/Unified%20Code%20Count%20%28UCC%29
Unified Code Count (UCC)
The Unified Code Counter (UCC) is a comprehensive software lines of code counter produced by the USC Center for Systems and Software Engineering. It is available to the general public as open-source code and can be compiled with any standard ANSI C++ compiler. Introduction One of the major problems in software estimation is sizing, which is also one of the most important attributes of a software product. It is not only the key indicator of software cost and time but also a base unit from which other metrics for project status and software quality measurement are derived. The size metric is used as an essential input for most cost estimation models, such as COCOMO, SLIM, SEER-SEM, and PRICE TruePlanning for Software. Although source lines of code (SLOC) is a widely accepted sizing metric, in general there is a lack of standards enforcing consistency in what to count and how to count it. The Center for Systems and Software Engineering (CSSE) at the University of Southern California has developed and released a code counting toolset called the Unified CodeCount (UCC), which ensures consistency across independent organizations in the rules used to count software lines of code. Its primary purpose is to support sizing counts and metrics for historical data collection and reporting purposes. It implements a code counting framework published by the Software Engineering Institute (SEI) and adapted by COCOMO. Logical and physical SLOC are among the metrics generated by the toolset. SLOC refers to source lines of code, a unit used to measure the size of a software program based on a set of rules. SLOC is a key input for estimating project effort and is also used to calculate productivity and other measurements. There are two types of SLOC: physical and logical.
- Physical SLOC (PSLOC): one physical SLOC corresponds to one line, starting with the first character and ending with a carriage return or an end-of-file marker on the same line. Blank and comment lines are not counted.
- Logical SLOC (LSLOC): lines of code intended to measure "statements", which normally terminate with a semicolon (C/C++, Java, C#) or a carriage return (VB, Assembly), etc. Logical SLOC is not sensitive to format and style conventions, but it is language-dependent.
The Unified CodeCount (UCC) differencing capability allows the user to count, compare, and collect logical differentials between two versions of the source code of a software product. The differencing capability allows users to count the number of added/new, deleted, modified, and unmodified logical SLOC of the current version in comparison with the previous version. History Many different code counting tools existed in the early 2000s. However, due to the lack of standard counting rules and software accessibility issues, the Cost Analysis Improvement Group (NCAIG) at the National Reconnaissance Office identified the need for a new code counting tool to analyze software program costs. In order to avoid any industry bias, the CodeCount tool was developed at the center under the direction of Dr. Barry Boehm, Merilee Wheaton, and A. Winsor Brown, with IV&V provided by The Aerospace Corporation. Many organizations, including Northrop Grumman and The Boeing Company, donated code counting tools to the USC CSSE. The goal was to develop a public-domain code counting tool that handles multiple languages and produces consistent results for large and small software systems.
Project plans are developed every semester, and graduate students from USC doing directed research are assigned projects to update the code count tool. Vu Nguyen, a PhD student at USC, led several semesters of student projects. All changes are verified and validated by the Aerospace Corporation IV&V team, which works closely with the USC instructor on the projects. The beta versions are tested by industry affiliates and then released to the public as open-source code. In 2006, work was done to develop a differencing tool which would compare two software system baselines to determine the differences between two versions of software. The CodeCount toolset, a precursor of UCC, was released in 2007. It was a collection of standalone programs, each written in a single language, to measure source code written in languages like COBOL, Assembly, PL/1, Pascal, and Jovial. Nguyen produced the Unified CodeCount (UCC) system design as a framework, and the existing code counters and differencing tool were merged into it. Additional features, such as unified counting and differencing capabilities, detection of duplicate files, and support for text and CSV output files, were also added. A presentation on "Unified Code Count with Differencing Functionality" was given at the 24th International Forum on COCOMO in October 2009. The UCC tool has been released to the public with a license enabling users to use and modify the code; if the modifications are to be distributed, the user must send a copy of the modifications to USC CSSE. Importance The Unified CodeCount (UCC) is used to analyze existing projects for physical and logical SLOC counts, which directly relate to work accomplished. The data collected can then be used by software cost estimation models to accurately estimate the time and cost needed for similar projects to reach a successful conclusion. There are many code count tools available in the market; however, most have various drawbacks:
- Some are proprietary, others are public domain
- Inconsistent or unpublished counting rules
- May not be maintained
- Different tools apply different counting rules, giving inconsistent results
CSSE was approached by NCAIG to create a code counting solution developed by a non-biased, industry-respected institution, providing the following features:
- Count software lines of code consistently, with documented standards
- Ability to easily add new languages
- Support and maintenance
- Compare different baselines of software; determine addition, modification, deletion
- Identify duplicate files
- Determine complexity
- Platform independence
- Command-line interface
- Modes: code counting only, or counting plus differencing
- Counting of multiple files and languages in a single pass
- Output reports
- Robust processing
- Options to improve performance
- Error log
The UCC is the result of that effort, and is available as open source to the general public. Features The Unified CodeCount Toolset with Differencing Functionality (UCC) is a collection of tools designed to automate the collection of source code sizing and change information. The UCC handles multiple programming languages and focuses on two possible Source Lines of Code (SLOC) definitions: physical and/or logical. The differencing functionality can be used to compare two baselines of software systems and determine change metrics: SLOC addition, deletion, modification, and non-modification counts.
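To make the physical/logical distinction concrete, the following deliberately simplified Python counter for C-like source illustrates the kind of rules involved. It is not UCC's algorithm; UCC's published counting standards handle strings, comments, and compound statements far more carefully.

def count_sloc(source: str):
    """Toy PSLOC/LSLOC counter for C-like code.
    PSLOC: non-blank, non-comment physical lines.
    LSLOC: statement terminators (';') plus block openers ('{'),
    a crude stand-in for UCC's per-language logical rules."""
    psloc = lsloc = 0
    in_block_comment = False
    for line in source.splitlines():
        code = line.strip()
        if in_block_comment:
            if "*/" in code:
                in_block_comment = False
                code = code.split("*/", 1)[1].strip()
            else:
                continue
        if "/*" in code:  # naive: ignores /* inside strings, drops trailing code
            before, _, rest = code.partition("/*")
            in_block_comment = "*/" not in rest
            code = before.strip()
        code = code.split("//", 1)[0].strip()  # strip line comments
        if not code:
            continue
        psloc += 1
        lsloc += code.count(";") + code.count("{")
    return psloc, lsloc

sample = """
int main(void) {          /* entry point */
    int x = 1; int y = 2; // two logical statements on one physical line
    return x + y;
}
"""
print(count_sloc(sample))   # (4, 4) with these crude rules

Note how the second code line contributes one physical line but two logical statements, which is exactly the kind of discrepancy that makes standardized counting rules necessary.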
The UCC toolset is copyright USC Center for Software Engineering but is made available with a Limited Public License, which allows anyone to make modifications to the code. However, if they distribute that modified code to others, the person or agency has to return a copy to USC so the toolset can be improved for the benefit of all. Uses of CodeCount
Counting capabilities: UCC allows users to measure the size of a baseline of a source program by analyzing it and producing counts of (a) logical SLOC, (b) physical SLOC, (c) comments, (d) executable and data declaration statements, (e) compiler directive SLOC, and (f) keywords.
Differencing capabilities: UCC allows users to compare and measure the differences between two baselines of source programs. These differences are measured in terms of the number of logical SLOC added/new, deleted, modified, and unmodified. The differencing results can be saved to either plain-text .txt or .csv files; the default is .csv, but .txt can be specified by using the -ascii switch.
Counting and differencing directories: UCC allows users to count or compare source files by specifying the directories where the files are located.
Support for various programming languages: the counting and differencing capabilities accept source code written in C/C++, C#, Java, SQL, Ada, Perl, ASP.NET, JSP, CSS, HTML, XML, JavaScript, VB, PHP, VBScript, Bash, C Shell Script, ColdFusion, Fortran, Midas, NeXtMidas, Pascal, Ruby, X-Midas, and Python.
Command arguments: the tool accepts the user's settings via command arguments. UCC is a command-line application compiled in console mode.
Duplication: for each baseline, two files are considered duplicates if they have the same content or if the difference is smaller than the threshold given through the command-line switch -tdup. Two files may be identified as duplicates although they have different filenames. Comments and blank lines are not considered during duplication processing.
Matching: when differencing, files from Baseline A are matched to files in Baseline B. Two files are matched if they have the same filename, regardless of which directories they belong to. Remaining files are matched using a best-fit algorithm.
Complexity count: UCC produces complexity counts for all source code files. These may include the number of math, trig, and logarithm functions, calculations, conditionals, logicals, preprocessors, assignments, and pointers, as well as cyclomatic complexity. When counting, the complexity results are saved to the file "outfile_cplx.csv"; when differencing, the results are saved to the files "Baseline-A-outfile_cplx.csv" and "Baseline-B-outfile_cplx.csv".
File extensions: the tool determines which code counter to use for each file from the file extension.
Functionality of CodeCount
Execution speed: CodeCount is written in C/C++ and utilizes relatively simple algorithms to recognize comments and physical/logical lines. Testing has shown the UCC to process acceptably fast except in extreme situations. A number of switches are available to inhibit certain types of processing if needed. Users may be able to compile using optimization switches for faster execution; refer to the user manual for the compiler being used.
Reliability and correctness: CodeCount has been tested extensively in the laboratory and is being used globally. There is a defect-reporting capability, and any defects reported are corrected promptly.
It is not uncommon for users to add functionality or correct defects and notify the UCC managers, along with providing the code for the changes.
Documentation: the UCC open-source distribution contains release notes, a user's manual, and the code counting standards for the language counters. The source code contains file headers and in-line comments. The UCC Software Development Plan, Software Requirements Specification, and Software Test Plan are available upon request.
Ease of general maintenance: the UCC is a monolithic, object-oriented toolset designed to simplify its maintenance.
Ease of extension: the "CSCI" CodeCount flavor lends itself to ease of extension. Users are able to easily add another language counter on their own. Users may also specify which file extensions will select a particular language counter.
Compatibility: CodeCount is designed to be compatible with COCOMO where an estimation mechanism is required or desired.
Portability: CodeCount has been tested on a wide variety of operating systems and hardware platforms and should be portable to any environment that has an ANSI standard C++ compiler.
Availability of source code: source code for CodeCount is available as a downloadable zip file.
Licensing: source code for CodeCount is provided under the terms of the USC-CSE Limited Public License, which allows anyone to make modifications to the code. However, if they distribute that modified code to others, the person or agency has to return a copy to USC so the toolset can be improved for the benefit of all. The full text of the license can be viewed at UCC License.
Standards for the Language
The main objective of the Unified CodeCount (UCC) is to provide counting methods that define a consistent and repeatable SLOC measurement. There are more than 20 SLOC counting applications, each of which produces different physical and logical SLOC counts, and some 75 commercially available software cost estimating tools exist in today's market. The differences in cost results from the various tools show the deficiencies of current techniques in estimating the size of code; this is particularly true for projects of large magnitude, where cost estimation depends on automatic procedures to generate reasonably accurate predictions. This led to the need for a universal SLOC counting standard that would produce consistent results. SLOC serves as a main factor for cost estimation techniques. Although it is not the sole contributor to software cost estimation, it provides the foundation for a number of metrics that are derived throughout the software development life cycle. The SLOC counting procedure can be automated, requiring less time and effort to produce metrics. A well-defined set of rules identifies what to include and exclude in SLOC counting measures. The two most accepted measures for SLOC are the number of physical and the number of logical lines of code. In the UCC, logical SLOC measures the total number of source statements in a block of code. The three types of statements are executable, declaration, and compiler directive. Executable statements are eventually translated into machine code to cause run-time actions, while declaration and compiler directive statements affect the compiler's actions. The UCC treats source statements as independent units at the source code level, where a programmer constructs a statement and its sub-statements completely. The UCC assumes that the source code will compile; otherwise the results are unreliable.
A big challenge was deciding where each statement ends when counting logical SLOC. The semicolon option may sound appealing, but not all popular languages use the semicolon (e.g. SQL, JavaScript, UNIX scripting languages). The Software Engineering Institute (SEI) at Carnegie Mellon University and COCOMO II SLOC defined a way to count 'how many of what program elements'. Tables 1 and 2 summarize the SLOC counting rules for logical lines of code for the C/C++, Java, and C# programming languages. The UCC code counting rules for each language are distributed with the open-source release.
Software design
The Unified CodeCount (UCC) produces the counting by capturing the LSLOC strings from a file based on a counting specification document created for each language; this specification is proposed as a standard. The differencing feature compares the LSLOC strings from the two files captured during the counting process with the help of a common engine.
UCC Architecture
The main architecture of UCC can be seen as a hierarchical structure of the following components:
MainObject
The MainObject is the top-level class, which performs the command-line parsing to extract the list of files from the command parameters and then reads each file into memory for counting or differencing. The MainObject calls the CodeCounters in order to process the embedded languages. The output of the counting function provides the following sets of files for duplicate and counting/complexity results (where <Lang> stands for the name of the language of the source files, e.g. C_CPP for C/C++ files and Java for Java files; the placeholders are reconstructed here, as they were lost in extraction):
- <Lang>_outfile.txt – the file where Main displays the counting results for source files of language <Lang>.
- outfile_cplx.txt – shows the complexity results for the source files.
- Duplicates-<Lang>_outfile.txt – displays the list of duplicate files for the language <Lang>.
- Duplicates-outfile_cplx.txt – contains the complexity results for the duplicated files.
- DuplicatePairs.txt – a text file listing matches between a source file and its duplicate file.
DiffTool
DiffTool is the derivative of MainObject, which parses the command-line parameters and processes the list of files for each baseline. The DiffTool class provides the following sets of files (.txt, .csv) across baselines:
- Baseline-<A/B>-<Lang>_outfile.txt – count results for source files of language <Lang> for Baseline A and Baseline B.
- Baseline-<A/B>-<Lang>_cplx.txt – complexity results for Baseline A and Baseline B.
- MatchedPairs – a text file listing matches between files in Baseline A and Baseline B.
- outfile_diff_results.txt – main differencing results in plain-text format.
- outfile_diff_results.csv – main differencing results in .csv format that can be opened using MS Excel.
DiffTool performs the comparison between baselines with the help of the CmpMngr class.
CmpMngr
CmpMngr calculates the differences by comparing two lists of LSLOC and determines the variations by calculating the total LSLOC added, deleted, modified, and unmodified across the two lists (a small illustration of this classification follows at the end of this section).
CCodeCounter
The CCodeCounter is used for pre-count processing, where it performs the following operations:
- Counts blank lines and comments
- Filters literal strings
- Counts the complexity of keywords, operators, etc.
- Counts compiler directive SLOC (using the CountDirectiveSLOC method)
- Performs language-specific processing (creates subclasses)
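The classification CmpMngr performs can be approximated with Python's standard difflib module. The sketch below illustrates the idea of deriving added/deleted/modified/unmodified counts from two lists of logical lines; it is not UCC's actual matching algorithm.

import difflib

def diff_counts(old_lsloc, new_lsloc):
    """Classify logical lines between two baselines as added, deleted,
    modified or unmodified (an approximation of a CmpMngr-style result)."""
    counts = {"added": 0, "deleted": 0, "modified": 0, "unmodified": 0}
    sm = difflib.SequenceMatcher(a=old_lsloc, b=new_lsloc)
    for op, i1, i2, j1, j2 in sm.get_opcodes():
        if op == "equal":
            counts["unmodified"] += i2 - i1
        elif op == "replace":
            paired = min(i2 - i1, j2 - j1)  # pair off old/new lines as modified
            counts["modified"] += paired
            counts["deleted"] += (i2 - i1) - paired
            counts["added"] += (j2 - j1) - paired
        elif op == "delete":
            counts["deleted"] += i2 - i1
        elif op == "insert":
            counts["added"] += j2 - j1
    return counts

baseline_a = ["int x = 1;", "int y = 2;", "return x + y;"]
baseline_b = ["int x = 1;", "int y = 3;", "int z = 0;", "return x + y;"]
print(diff_counts(baseline_a, baseline_b))
# {'added': 1, 'deleted': 0, 'modified': 1, 'unmodified': 2}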
Future enhancements and release Future plans for UCC include improving complexity metrics computation, providing support for existing code counters and adding new counters for additional languages, better reporting, and improved performance. Counters for text, assembly, COBOL, JOVIAL, MATLAB, and Pascal are in development. A graphical user interface is also being produced which may be used in place of the current command line interface. System Requirements Hardware RAM: minimum 512 MB; recommended 1024 MB. HDD: minimum 100 MB of disk space available; recommended 200 MB. Software Operating Systems Linux 2.6.9 Unix Mac OS X Windows 9x/Me/XP/Vista Solaris Compilers Supported ANSI C/C++ Compiler See also Comparison of file comparison tools diff tool Software configuration management WinMerge UltraEdit Meld References External links CSSE Tools – USC CSSE (Center for Systems and Software Engineering) Tools page, of which UCC is one UCC User's Manual SEI – Software Engineering Institute, Carnegie Mellon University NCAIG – National Reconnaissance Office Cost Analysis Improvement Group LocMetrics File comparison tools Linux integrated development environments Hex editors C (programming language) software Software metrics
811550
https://en.wikipedia.org/wiki/Over-the-air%20programming
Over-the-air programming
Over-the-air programming (OTA programming) refers to various methods of distributing new software, configuration settings, and even updated encryption keys to devices like mobile phones, set-top boxes, electric cars or secure voice communication equipment (encrypted two-way radios). One important feature of OTA is that one central location can send an update to all the users, who are unable to refuse, defeat, or alter that update, and that the update applies immediately to everyone on the channel. In practice a user could 'refuse' OTA, but the 'channel manager' could then 'kick them off' the channel automatically. In the context of the mobile content world, these methods include firmware-over-the-air (FOTA), over-the-air service provisioning (OTASP), over-the-air provisioning (OTAP) and over-the-air parameter administration (OTAPA), as well as provisioning handsets with the necessary settings with which to access services such as Wireless Application Protocol (WAP) or Multimedia Messaging Service (MMS). As mobile phones accumulate new applications and become more advanced, OTA configuration has become increasingly important as new updates and services come on stream. OTA via Short Message Service (SMS) optimises the configuration data updates in subscriber identity module (SIM) cards and handsets, and enables the distribution of new software updates to mobile phones or the provisioning of handsets with the necessary settings with which to access services such as WAP or MMS. OTA messaging provides remote control of mobile phones for service and subscription activation, personalisation, and programming of a new service for mobile operators and telco third parties. Various standardisation bodies were established to help develop, oversee, and manage OTA. One of them is the Open Mobile Alliance (OMA). More recently, with the new concepts of wireless sensor networks and the Internet of Things (IoT), where networks consist of hundreds or thousands of nodes, OTA has been taken in a new direction: for the first time, OTA is applied using unlicensed frequency bands (868 MHz, 900 MHz, 2400 MHz) with low power consumption and low data rate transmission, using protocols such as 802.15.4 and ZigBee. Sensor nodes are often located in places that are either remote or difficult to access. As an example, Libelium has implemented an OTA programming system for ZigBee WSN devices. This system enables firmware upgrades without the need for physical access, saving time and money if the nodes must be re-programmed. Smartphones On modern mobile devices such as smartphones, an over-the-air update may refer simply to a software update that is distributed over Wi-Fi or mobile broadband using a function built into the operating system, with the 'over-the-air' aspect referring to its use of wireless internet instead of requiring the user to connect the device to a computer via USB to perform the update. Firmware updates are available for download from the OTA service. Mechanism The OTA mechanism requires the existing software and hardware of the target device to support the feature, namely the receipt and installation of new software delivered via the wireless network from the provider. New software is transferred to the phone, installed, and put into use. It is often necessary to turn the phone off and back on for the new programming to take effect, though many phones will automatically perform this action.
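In outline, most OTA update clients follow the same loop: discover that a newer image exists, transfer it, authenticate it, apply it, and restart. The C sketch below illustrates that generic flow only; every ota_-prefixed function, the manifest format, and the file paths are hypothetical placeholders invented for this illustration, since real implementations are device- and carrier-specific.

#include <stdbool.h>
#include <stdio.h>

/* Hypothetical manifest describing the newest available firmware. */
typedef struct {
    unsigned version;
    const char *url;
} ota_manifest;

/* Placeholder helpers; on a real device these would talk to the
   provider's update server and to the flash/recovery subsystem. */
static bool ota_fetch_manifest(ota_manifest *m)
{
    m->version = 2;
    m->url = "https://updates.example.com/fw-2.pkg";
    return true;
}
static bool ota_download(const char *url, const char *path) { (void)url; (void)path; return true; }
static bool ota_verify_signature(const char *path)          { (void)path; return true; }
static bool ota_apply_and_mark_pending(const char *path)    { (void)path; return true; }
static void ota_reboot(void) { puts("rebooting into new firmware..."); }

/* Generic OTA flow: check, download, authenticate, apply, restart. */
static int ota_update_if_available(unsigned current_version)
{
    ota_manifest m;
    if (!ota_fetch_manifest(&m) || m.version <= current_version)
        return 0;                            /* already up to date */
    if (!ota_download(m.url, "/cache/update.pkg"))
        return -1;                           /* transfer failed; retry later */
    if (!ota_verify_signature("/cache/update.pkg"))
        return -1;                           /* reject unauthenticated images */
    if (!ota_apply_and_mark_pending("/cache/update.pkg"))
        return -1;
    ota_reboot();                            /* restart so the update takes effect */
    return 1;
}

int main(void)
{
    return ota_update_if_available(1) < 0;
}

The signature check before applying the image is what prevents a third party from injecting firmware over the air, and the final restart corresponds to the power cycle described above.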
Methods Depending on implementation, OTA software delivery can be initiated upon a user action, such as a call to the provider's customer support system or other dial-able service, or can be performed automatically. Typically it is done via the former method, to avoid service disruption at an inconvenient time, but this requires subscribers to manually call the provider. Often, a carrier will send a broadcast SMS text message to all subscribers (or those using a particular model of phone) asking them to dial a service number to receive a software update. Verizon Wireless in the United States provides a number of OTA functions to its subscribers via the *228 service code: option 1 updates the phone configuration, and option 2 updates the PRL. Similarly, Voitel Wireless and Straight Talk, which both use the Verizon network, use the 22890 service code to program Verizon-based wireless phones. To provision parameters in a mobile device OTA, the device needs to have a provisioning client capable of receiving, processing and setting the parameters. For example, a Device Management client in a device may be capable of receiving and provisioning applications, or connectivity parameters. In general, the term OTA implies the use of wireless mechanisms to send provisioning data or update packages for firmware or software updates to a mobile device; this is so that the user does not have to go to a store or a service centre to have applications provisioned, parameters changed, or firmware or software updated. In this context, two major types of OTA update can be distinguished: OTA updates for product enhancement (e.g., customized product functions and services) and OTA updates for product optimization (e.g., repairs or version updates). The non-OTA options for a user are to go to a store and seek help, or to use a PC and a cable to connect to the device and change settings, add software, and so on. OTA standards There are a number of standards that describe OTA functions. One of the first was the GSM 03.48 series. The ZigBee suite of standards includes the ZigBee Over-the-Air Upgrading Cluster, which is part of the ZigBee Smart Energy Profile and provides an interoperable (vendor-independent) way of updating device firmware. The current standards do not cover the harvesting of client information that is routinely done by the phone manufacturer, the service provider, and the program manager (Google); no restrictions have been developed for these practices. Similarities OTA is similar to firmware distribution methods used by other mass-produced consumer electronics, such as cable modems, which use TFTP as a way to remotely receive new programming, thus reducing the amount of time spent by both the owner and the user of the device on maintenance. Over-the-air provisioning (OTAP) is also available in wireless LAN environments (though it is disabled by default for security reasons). It allows an access point (AP) to discover the IP address of its controller. When enabled, the controller tells the other APs to include additional information in their Radio Resource Management (RRM) packets that assists a new access point in learning of the controller. This information is sent in plain text, however, which makes it vulnerable to sniffing; that is why it is disabled by default. See also Phone-as-Modem (PAM) Access Point Name (APN) References Mobile technology Telecommunication services
532369
https://en.wikipedia.org/wiki/DIGITAL%20Command%20Language
DIGITAL Command Language
DIGITAL Command Language (DCL) is the standard command language adopted by many of the operating systems created by Digital Equipment Corporation. DCL had its roots in IAS, TOPS-20, and RT-11 and was implemented as a standard across most of Digital's operating systems, notably RSX-11 and RSTS/E, but took its most powerful form in VAX/VMS (later OpenVMS). DCL continues to be developed by VSI as part of OpenVMS. Written when the programming language Fortran was in heavy use, DCL is a scripting language supporting several datatypes, including strings, integers, bit arrays, arrays and booleans, but not floating point numbers. Access to OpenVMS system services (kernel API) is through lexical functions, which perform the same as their compiled language counterparts and allow scripts to get information on system state. DCL includes IF-THEN-ELSE and access to all the Record Management Services (RMS) file types, including stream, indexed, and sequential, but lacks a DO-WHILE or other looping construct, requiring users to use IF and GOTO-label statements instead. DCL is available for other operating systems as well, including VCL and VX/DCL for Unix, VCL for MS-DOS, OS/2 and Windows, PC-DCL and Open DCL for Windows/Linux and Accelr8 DCL Lite for Windows. DCL is the basis of the XLNT language, implemented on Windows by an interpreter-IDE-WSH engine combination with CGI capabilities, distributed by Advanced System Concepts Inc. from 1997. Command line parser For the OpenVMS implementation, the command line parser is a runtime library that can be compiled into user applications and therefore gives a consistent command line interface for both OS-supplied commands and user-written commands. The command line must start with a verb and is then followed by up to 8 parameters (arguments) and/or qualifiers (switches in Unix terminology), which begin with a '/' character. Unlike Unix (but similar to DOS), a space is not required before the '/'. Qualifiers can be position-independent (occurring anywhere on the command line) or position-dependent, in which case the qualifier affects the parameter it appears after. Most qualifiers are position-independent. Qualifiers may also be assigned values or a series of values. Only the first, most significant part of the verb and qualifier name is required. Parameters can be integers or alphanumeric text. An example OS command may look like: set audit /alarm /enable=(authorization, breakin=all) show device /files $1$DGA1424: The second show command could also be typed as: sho dev $1$DGA1424:/fil While DCL documentation usually shows all DCL commands in uppercase, DCL commands are case-insensitive and may be typed in upper-, lower-, or mixed case. Some implementations such as OpenVMS use a minimum uniqueness scheme to allow commands to be shortened, while others such as RSX-11 allowed commands to be abbreviated to a minimum of three characters. Unlike other systems which use paths for locating commands, DCL requires commands to be defined explicitly, either via CLD (Command Language Definition) definitions or a foreign symbol. Most OpenVMS-native commands are defined via CLD files; these are compiled by the Command Definition Utility (CDU) and added to a DCL 'table' (the system-wide table by default, although processes are free to use their own tables) and can then be invoked by the user.
For example, defining a command FOO that accepts the option "/BAR" and is implemented by the image SYS$SYSEXE:FOO.EXE could be done with a CLD file similar to: DEFINE VERB FOO IMAGE "SYS$SYSEXE:FOO.EXE" QUALIFIER BAR The user can then type "FOO", or "FOO/BAR", and the FOO program will be invoked. The command definition language supports many types of options, for example dates and file specifications, and allows a qualifier to change the image invoked; for example "CREATE", to create a file, vs. "CREATE/DIRECTORY" to create a directory. The other (simpler, but less flexible) method to define commands is via foreign commands. This is more akin to the Unix method of invoking programs. By giving the command: foo :== $sys$sysexe:foo.exe the command 'FOO' will invoke FOO.EXE, and supply any additional arguments literally to the program, for example, "foo -v". This method is generally used for programs ported from Unix and other non-native systems, such as C programs using the argc and argv command syntax. Versions of OpenVMS DCL starting with V6.2 support the DCL$PATH logical name for establishing Unix-style command paths. This mechanism is known as an Automatic Foreign Command. DCL$PATH allows a list of directories to be specified, and these directories are then searched for DCL command procedures (command.COM) and then for executable images (command.EXE) with filenames that match the command that was input by the user. Like traditional foreign commands, automatic foreign commands also allow Unix-style command input. Scripting DCL scripts look much like any other scripting language, with some exceptions. All DCL verbs in a script are preceded with a $ symbol; other lines are considered to be input to the previous command. For example, to use the TYPE command to print a paragraph onto the screen, one might use a script similar to: $ TYPE SYS$INPUT: This is an example of using the TYPE verb in the DCL language. $ EXIT Indirect variable referencing It is possible to build arrays in DCL that are referenced through translated symbols. This allows the programmer to build arbitrarily sized data structures using the data itself as an indexing function. $ i = 1 $ variable'i' = "blue" $ i = 2 $ variable'i' = "green" $ j = 1 $ color = variable'j' $ rainbow'color' = "red" $ color = variable'i' $ rainbow'color' = "yellow" In this example the variable rainbowblue is assigned the value "red", and rainbowgreen is assigned the value "yellow". Commands The following is a list of DCL commands for common computing tasks that are supported by the OpenVMS command-line interface. COPY COPY/FTP CREATE DELETE DIRECTORY EDIT LOGOUT PRINT RENAME SET SHOW TYPE Lexical functions Lexical functions provide string functions and access to VMS-maintained data. Some lexicals are: F$EXTRACT – extracts a substring F$CVTIME – obtains date/time information F$SEARCH – searches for a file, returning a null string ("") if not found F$PRIVILEGE – returns whether the process holds the specified privileges See also Comparison of command shells References Further reading External links VSI OpenVMS DCL Dictionary: A-M VSI OpenVMS DCL Dictionary: N-Z OpenVMS.org's DCL archive Command shells OpenVMS OpenVMS software Scripting languages
37755751
https://en.wikipedia.org/wiki/Banjo%20%28application%29
Banjo (application)
Banjo is a Utah-based surveillance software company that claimed to use AI to identify events for public safety agencies. It was founded in 2010 by Damien Patton. The company gained notoriety in 2020 when the State of Utah signed a $20 million contract for its "panopticon" software. In May 2020, the company experienced backlash and the suspension of contracts after Patton's membership in the Ku Klux Klan and participation in a drive-by shooting attack on a synagogue were revealed. In 2020, the company had approximately 200 employees in South Jordan, Utah; Park City, Utah; Washington, D.C.; and Menlo Park, California, and had received approximately $126 million in funding. In 2021, an audit requested by the State of Utah that tried to assess algorithmic bias in the AI declared that "Banjo does not use techniques that meet the industry definition of artificial Intelligence". History Social media application After building a "friend-finding" app called Peer Compass for a Las Vegas hackathon in 2010 and then a Google hackathon in 2011 (the app won both events), Damien Patton founded Banjo as a social media application for phones to aggregate and discover live events by scraping public, geotagged content from Instagram, Twitter, Facebook, Foursquare, Path, Google Plus, VKontakte and EyeEm, indexed by location, time and content. In 2016, Banjo was gathering information from more than 1.2 billion public social media accounts. Shortly after the Google event, BlueRun Ventures invested $800,000 in the company. TechCrunch called it "the creepy/awesome cyber-stalking app" and stated it had 500,000 downloads by December 2011, when the company launched a web version. The app had one million users by April 2012. After attending South by Southwest in 2011 and seeing other friend-finding apps like Glancee, Sonar, and Highlight, Patton told his investors in 2012 that he was pivoting Banjo, and fired all but one employee. Banjo's location-enabled social aggregation application launched in November 2012, though it was still described in December 2012 as an app to find friends who are nearby. News application After Patton realized Banjo's potential application to the Boston Marathon bombing manhunt, Banjo moved from social media to brand marketing and to serving news companies, launching the offering, called Banjo Enterprise, in January 2014. He also said the company quickly uncovered the 2014 shooting of Michael Brown, and was alerted about the 2015 East Village gas explosion 58 minutes before the Associated Press reported it. The company's revenue in 2014 was under $1 million. In March 2014 the company completed a $16 million Series B round of financing from Balderton Capital, with BlueRun Ventures investing again and VegasTechFund as a new investor. In May 2015 the company received its Series C financing when SoftBank invested $100 million into the company. At the time it had 60 employees on staff, including its chief data scientist Pedro Alves, a Mensa member. The company was based in Las Vegas. Patton said the company was looking into applications in the financial markets and had no intention of doing business with federal law enforcement agencies, stating "I don't think those agencies could fucking deal with someone like me". A profile of Patton in 2015 called him a "damn good driver" and gave an anecdote about the software uncovering a Florida State University shooting in 2014, explaining that this was why NBC and ESPN were paying customers.
Other customers included Anheuser-Busch, the BBC, and Sinclair Broadcasting. In 2016 Entrepreneur said the software was used by "thousands of news outlets, insurance firms, security contractors and more". Live Time Intelligence application The company pivoted to its Live Time Intelligence application, used to highlight events on surveillance cameras for police and fire departments, in 2019. Social media data is used only with any personally identifiable information (PII) removed. Banjo suggests its service can be used for car crashes, school shootings, and fires. Banjo courted the state of Utah with its services, moving the company from Las Vegas to Park City, Utah, in 2018 and signing a $750,000 contract in November 2018 with the attorney general's office. Park City signed a contract for free access to the company's services as part of a pilot program in February 2019, and the city reported that about half of its private business surveillance cameras were feeding the system. In 2019 a lobbyist for the company told the Salt Lake Valley Emergency Communications Center that the company "essentially do what Palantir does, but they do it live." In 2020 the state of Utah signed a $20.7 million five-year contract with Banjo, leading to increased scrutiny; Vice said they were "Turning Utah Into a Surveillance Panopticon". Senator Mike Lee, who said he had known Patton for several years, complimented Live Time and the company's commitment to privacy. The Utah Attorney General's Office, Utah's Department of Public Safety, and the University of Utah all signed contracts with the company, though the Utah Transit Authority demurred. Utah's Libertas Institute has expressed concern about Live Time, as have the American Civil Liberties Union's Utah branch and the Electronic Frontier Foundation. In March 2020, VICE's Motherboard uncovered a "shadow company" named Pink Unicorn Labs that developed apps for iPhone and Android with no outward connection to Banjo. The apps were designed to harvest social media data by "secretly farming peoples' user tokens", with app names like "One Direction Fan App", "EDM Fan App", and "Formula Racing App", leading to comparisons to Cambridge Analytica. The Salt Lake Tribune discovered that Utah's Intermountain Healthcare had a $60,000 contract with Banjo, signed in April 2020; Intermountain would provide patient counts (room occupancy) and receive access to the Live Time app. The police department of Goshen, Indiana, signed a $20,000 contract with Banjo in early March 2020. Neo-Nazi links In April 2020, Matt Stroud of OneZero uncovered CEO Patton's involvement with the Nashville, Tennessee-based Dixie Knights chapter of the Ku Klux Klan. When Patton was 17 (June 1992), he drove Grand Knight Leonard William Armstrong during a drive-by shooting with a TEC-9 on the West End Synagogue. Police arrested Patton and confiscated an AK-47. He was released to the custody of Christian music producer Jonathan David Brown, and then fled the state with Brown's assistance. Patton confirmed his membership in the KKK and involvement with white power skinheads, whom he called "the foot soldiers for groups like the Ku Klux Klan and the Aryan Nations". OneZero republished a picture, printed in The Tennessean, of Patton at an Aryan Nations meeting where he and other members are giving the Nazi salute. Patton served in the U.S. Navy and admitted to fraternizing with skinheads. In a 2015 Inc. profile, he said he wanted to enlist after seeing the 1990 Gulf War on TV.
He said he rose up the ranks on the aircraft carrier Kitty Hawk, leaving after two tours and ending up in San Diego, California. He then became the chief mechanic on a NASCAR racecar sponsored by Lowe's, after starting on a pit crew in 1993. Contract suspensions Utah Attorney General Sean Reyes suspended the $20 million contract in late April 2020, after the KKK ties came to light, pending a review of its use. The University of Utah suspended its use, stating it has "no tolerance for racism or bias. The university expects this of itself and its business partners." In May 2020 the university terminated its $500,000 contract and demanded all data be returned. The president of the NAACP's Salt Lake Branch stated they were "appalled" and that it was "extremely alarming as to the data that he has acquired". The Indiana contract was also suspended. On April 29, 2020, Banjo stated it would suspend all contracts in the state, though it did not mention Intermountain Healthcare. Rep. Andrew Stoddard stated the investigation into the company was "long overdue", and Rep. Cory Maloy also expressed concern. On May 4, Utah's Attorney General's Office asked Utah State Auditor John Dougall to review its contracts with Banjo, to look for algorithmic bias, and to ensure privacy. The AI Now Institute linked Banjo with Clearview AI, as both have far-right ties, arguing that while Banjo does not have explicit far-right algorithmic goals as Clearview does, it still raises concerns about algorithmic bias, even if unintentional. Other historic Silicon Valley links to far-right ideology mentioned include Jeffrey Epstein, William Shockley, and James Damore. Resignation On May 8, 2020, Patton resigned from the board and as CEO of Banjo, relinquishing all decision-making authority. Justin R. Lindsey, who had been CTO for less than a year, was named CEO; Lindsey had been CTO of the FBI immediately after 9/11. Rebranding as safeXai In February 2021, it was discovered that the company had quietly renamed itself safeXai. The name had been filed in September 2020; Patton remains a minority shareholder. References External links Mobile software Mobile social software
35439977
https://en.wikipedia.org/wiki/NATO%20missile%20defence%20system
NATO missile defence system
The NATO missile defence system is a missile defence system being constructed by the North Atlantic Treaty Organization (NATO) in several member states and around the Mediterranean Sea. Plans for this system have changed several times since it was first studied in 2002, including in response to Russian opposition. Background A missile defence feasibility study was launched after the 2002 Prague Summit. The NATO Consultation, Command and Control Agency (NC3A) and NATO's Conference of National Armaments Directors (CNAD) were also involved in negotiations. The study concluded that missile defence is technically feasible, and it provided a technical basis for ongoing political and military discussions regarding the desirability of a NATO missile defence system. The United States negotiated with Poland and the Czech Republic over the course of several years on the deployment of interceptor missiles and a radar tracking system in the two countries. Both countries' governments indicated that they would allow the deployment. In April 2007, NATO's European allies called for a NATO missile defence system which would complement the American national missile defense system to protect Europe from missile attacks, and NATO's decision-making North Atlantic Council held consultations on missile defence in the first meeting on the topic at such a senior level. In response, Russian President Vladimir Putin claimed that such a deployment could lead to a new arms race and could enhance the likelihood of mutual destruction. He also suggested that his country would freeze its compliance with the 1990 Treaty on Conventional Armed Forces in Europe (CFE), which limits military deployments across the continent, until all NATO countries had ratified the adapted CFE treaty. Secretary General Jaap de Hoop Scheffer claimed the system would not affect strategic balance or threaten Russia, as the plan was to base only ten interceptor missiles in Poland with an associated radar in the Czech Republic. On 14 July 2007, Russia gave notice of its intention to suspend the CFE treaty, effective 150 days later. On 14 August 2008, the United States and Poland came to an agreement to place a base with ten interceptor missiles, with associated MIM-104 Patriot air defence systems, in Poland. This came at a time when tension was high between Russia and most of NATO, and resulted in a Russian threat of a nuclear strike on Poland if the building of the missile defences went ahead. On 20 August 2008 the United States and Poland signed the agreement, while Russia sent word to Norway that it was suspending ties with NATO. During the 2008 Bucharest Summit, the alliance further discussed the technical details as well as the political and military implications of the proposed elements of the US missile defence system in Europe. Allied leaders recognized that the planned deployment of European-based US missile defence assets would help protect many Allies, and agreed that this capability should be an integral part of any future NATO-wide missile defence architecture. In August 2008, Poland and the United States signed a preliminary deal to place part of the missile defence shield in Poland, to be linked to an air-defence radar in the Czech Republic. More than 130,000 Czechs signed a petition for a referendum on the base. On 20 March 2015, Russia's ambassador to Denmark wrote a letter to the editor of Jyllands-Posten warning the Danes that their participation in this merging of assets would make their warships targets of Russian nuclear missiles.
Denmark's former Minister for Foreign Affairs Holger K. Nielsen commented that in the event of a war, Danish warships would be targets in any case. Active Layered Theatre Ballistic Missile Defence On 17 September 2009, U.S. President Barack Obama announced that the planned deployment of long-range missile defence interceptors and equipment in Poland and the Czech Republic would not go forward, and that a defence against short- and medium-range missiles using Aegis warships would be deployed instead. Following the change in plans, Russian President Dmitry Medvedev announced that a proposed Russian Iskander surface-to-surface missile deployment in nearby Kaliningrad would also not go ahead. The two deployment cancellation announcements were later followed by a statement from newly named NATO Secretary General Anders Fogh Rasmussen calling for a strategic partnership between Russia and the Alliance, explicitly involving technological cooperation between the two parties' missile defence systems. A September 2009 White House factsheet entitled "Fact Sheet on U.S. Missile Defense Policy - A 'Phased, Adaptive Approach' for Missile Defense in Europe" describes the following four phases: Phase One (in the 2011 timeframe) – Deploy current and proven missile defense systems available in the next two years, including the sea-based Aegis Weapon System, the Standard Missile-3 (SM-3) interceptor (Block IA), and sensors such as the forward-based Army Navy/Transportable Radar Surveillance system (AN/TPY-2), to address regional ballistic missile threats to Europe and our deployed personnel and their families; Phase Two (in the 2015 timeframe) – After appropriate testing, deploy a more capable version of the SM-3 interceptor (Block IB) in both sea- and land-based configurations, and more advanced sensors, to expand the defended area against short- and medium-range missile threats; Phase Three (in the 2018 timeframe) – After development and testing are complete, deploy the more advanced SM-3 Block IIA variant under development, to counter short-, medium-, and intermediate-range missile threats; and Phase Four (in the 2020 timeframe) – After development and testing are complete, deploy the SM-3 Block IIB to help better cope with medium- and intermediate-range missiles and the potential future ICBM threat to the United States. The deployment of warships equipped with the Aegis RIM-161 SM-3 missile began after Obama's speech in September 2009. These missiles complement the Patriot missile systems already deployed by American units. Though initially supportive of the plan, once one of the ships was actually deployed to the Black Sea, the Russian Foreign Ministry issued a statement voicing concern about the deployment. On February 4, 2010, Romania agreed to host the SM-3 missiles, starting in 2015, at Deveselu. The first element of this revised system, an early warning radar station in Malatya, Turkey, went operational on 16 January 2012. The BMD component in Romania underwent an upgrade beginning in May 2019; in the interim, a THAAD unit, B Battery (THAAD), 62nd Air Defense Artillery Regiment, was emplaced at NSF Deveselu, Romania. The upgrade was completed on August 9, 2019, and the THAAD battery returned to its home station. Other parts of the missile defence system are planned to be built in Portugal, Poland, Romania and Spain. In September 2011, NATO invited India to be a partner in its ballistic missile defence system. V. K.
Saraswat, the architect of the Indian Ballistic Missile Defense Program, subsequently told the press, "We are analysing the report. It is under consideration." Also in September 2011, the White House released a factsheet reporting on the European Phased Adaptive Approach (EPAA). With respect to the EPAA's implementation as part of NATO missile defense in Europe, the factsheet notes the four phases outlined above: Phase One (2011 timeframe) will address short- and medium-range ballistic missile threats by deploying current and proven missile defense systems. It calls for the deployment of Aegis Ballistic Missile Defense (BMD)-capable ships equipped with proven SM-3 Block IA interceptors. In March 2011 the USS Monterey was the first in a sustained rotation of ships to deploy to the Mediterranean Sea in support of EPAA. Phase One also calls for deploying a land-based early warning radar, which Turkey agreed to host as part of the NATO missile defense plan. Phase Two (2015 timeframe) will expand coverage against short- and medium-range threats with the fielding of a land-based SM-3 missile defense interceptor site in Romania and the deployment of a more capable SM-3 interceptor (the Block IB). On September 13, 2011, the United States and Romania signed the U.S.-Romanian Ballistic Missile Defense Agreement. Once ratified, it will allow the United States to build, maintain, and operate the land-based BMD site in Romania. The missile defense system in Deveselu became operational on 18 December 2015. Phase Three (2018 timeframe) will improve coverage against medium- and intermediate-range missile threats with an additional land-based SM-3 site in Poland and the deployment of a more advanced SM-3 interceptor (the Block IIA). Poland agreed to host the interceptor site in October 2009 and, with the Polish ratification process complete, this agreement has entered into force. Phase Four (2020 timeframe) will enhance the ability to counter medium- and intermediate-range missiles and potential future inter-continental ballistic missile (ICBM) threats to the United States from the Middle East, through the deployment of the SM-3 Block IIB interceptor. Each phase will include upgrades to the missile defense command and control system. At its 2012 Chicago Summit, NATO leaders declared that the NATO missile defence system had reached interim capability. Interim capability means that a basic command and control capability has been tested and installed at NATO's Headquarters Allied Air Command in Ramstein, Germany, while NATO Allies provide sensors and interceptors to connect to the system. It also means that US ships with anti-missile interceptors in the Mediterranean Sea and a Turkey-based radar system have been put under NATO command at the German base. "Our system will link together missile defence assets from different Allies – satellites, ships, radars and interceptors – under NATO command and control. It will allow us to defend against threats from outside the Euro-Atlantic area," NATO Secretary General Anders Fogh Rasmussen said. NATO's long-term goal is to merge missile defence assets provided by individual Allies into a coherent defence system, so that full coverage and protection for all NATO European populations, territory and forces against the threats posed by the proliferation of ballistic missiles is ensured. This goal is expected to be reached sometime between the end of the 2010s and the beginning of the 2020s.
To this end, Spain will host four US Aegis warships at its port in Rota, while Poland and Romania have agreed to host US land-based SM-3 missiles in the coming years. According to State Department official Frank A. Rose, the United States has "offered EPAA assets to the Alliance" as an "interim BMD capability", including the AN/TPY-2 radar deployed in Turkey, which is "under NATO operational control". Rose also said that "In addition, U.S. BMD-capable Aegis ships in Europe are also now able to operate under NATO operational control when threat conditions warrant." In 2020, the Aegis Ashore site in Poland had not yet been completed, due to incomplete auxiliary controls for heating, power, and cooling. The Missile Defense Agency's Vice Admiral Jon Hill was to announce in February 2020 whether another contractor would be required. The Aegis SM-3 Block IB missiles for Poland are already on-site; the Romanian site is operational. A 2012 GAO report found that the phase four interceptors might be poorly placed and of the wrong type to defend the United States. This capability was planned to be in place by 2020, but has "been delayed to at least 2022 due to cuts in congressional funding." Some Republicans, including Mitt Romney, Dick Cheney and John McCain, have called Obama's changes from the system Bush proposed a "gift" to Vladimir Putin, but Gates wrote in Duty: Memoirs of a Secretary at War that the change was made to ensure a more effective defense for Europe. National systems Poland has sought cooperation with France and Germany in the establishment of a joint missile defence system. See also Aegis Ashore Aegis Ballistic Missile Defense System Destroyer Squadron 60 Missile defense National missile defense References External links Active Layered Theatre Ballistic Missile Defence NATO topic page Missile defence Missile defense
10940889
https://en.wikipedia.org/wiki/Saigon%20University
Saigon University
Saigon University (SGU) is a public university located in Ho Chi Minh City, Vietnam. The university offers over 30 degree programs through its academic faculties across three campuses, including law, business administration, information technology, applied mathematics, environmental science, biotechnology, electrical engineering, psychology, international studies, English language studies, Vietnam studies, library science and pedagogical subjects. History Saigon University was established on 25 April 2007 under Government Decision No. 478/QĐ-TTg by Vietnamese Prime Minister Nguyễn Tấn Dũng, operating under the People's Committee of Ho Chi Minh City. It was founded on the basis of the Ho Chi Minh City College of Pedagogy. The first enrollments for the university started in July 2007. Campus The headquarters of Saigon University is in Ho Chi Minh City, with an official address of 273 An Duong Vuong in District 5. Other campuses are located at: 04 Ton Duc Thang, District 1 105 Ba Huyen Thanh Quan, Ward 7, District 3 Saigon University Practice Primary School - 20 Ngo Thoi Nhiem, Ward 7, District 3 Saigon Practice High School - 220 Tran Binh Trong, Ward 4, District 5 A new campus is currently under development in the new urban area of southern Ho Chi Minh City. Academics The structure of each "faculty" at SGU is comparable to that of "colleges" at United States institutions, where each faculty is composed of two or more departments. Pedagogy-related faculties Natural Sciences Pedagogy Physics Chemistry Biology Mathematics and Applications Applied Mathematics Math Social Sciences Pedagogy Vietnam Literature History Geography Elementary Pedagogy Childhood Education Kindergarten Pedagogy Early Childhood Education Fine Arts Pedagogical Fine Arts Performing Arts Pedagogical Performing Arts Education Administration Psychology Education Administration Other faculties Culture & Tourism Vietnam Studies (Culture & Tourism) International Studies Tourism Foreign Language English Language Studies Pedagogical English Information Technology Business Administration Library and Information Science Library Science Office Administration Archival Science Executive Secretary Finance and Accounting Environmental Science Political Studies Law Engineering Industrial Engineering Agricultural Engineering Family Economics Electronics and Telecommunication Engineering Academic research centers Information Technology Center Key learning Facilities Center Foreign Language Center Information - Communications & Education Development Center Environment and Natural resources Student Assistance Center International Training Center Partner universities IMC University of Applied Sciences Krems External links Saigon University Universities in Ho Chi Minh City
25135080
https://en.wikipedia.org/wiki/Adam%20DeGraide
Adam DeGraide
Adam DeGraide (born September 20, 1971) is an American business executive involved in the digital marketing and entertainment industries. He is the co-founder of four software and digital marketing companies, an independent record label and an independent film company. He produced the movie short Most, with William Zabka. Business career Anthem Software After a third successful business exit, DeGraide began his fourth software and digital marketing company, Anthem Software, in Orlando, Florida. Anthem is a software and digital marketing company that helps local small businesses grow through business management software and digital marketing. Crystal Clear Digital Marketing In 2013, DeGraide began his third software and digital marketing company, Crystal Clear Digital Marketing, in Orlando, Florida. Crystal Clear is a digital marketing company assisting local healthcare providers with Internet marketing. DeGraide completed the sale of Crystal Clear to Patient Now in November 2020 and immediately went on to start his fourth software and marketing company, Anthem Business Software, LLC, dedicated to helping small businesses. Astonish Results In early 2006, DeGraide co-founded Astonish Results. The company provided digital marketing and training services for independent insurance agencies in the U.S. In 2011, Astonish Results received an undisclosed amount of equity investment from investor Serent Capital to further develop Astonish Results' growth, tools, and services. After being rebranded as Intygral in March 2015, the company was acquired by Zywave in July 2015 and has since been absorbed. Astonish Entertainment In May 2005 DeGraide founded Astonish Entertainment, also known as Astonish Records. He signed four rock artists – No More Kings (with whom he used to play bass), Aranda, Soular, and Dirt Poor Robins – and pop singer David Martin. BZ Results In 1997, DeGraide began his first company, BZ Results, in Providence, Rhode Island. BZ is a digital marketing company assisting car dealers with Internet marketing. The company was acquired in 2006 by Automatic Data Processing for $125 million. Elected positions Disabled Veterans Insurance Careers elected Adam DeGraide to its strategic board. In 2006, DeGraide launched Astonish, which was previously ranked 267th on the Inc. 500 list of fastest-growing private companies in the U.S. and served more than 7,000 insurance industry users. References American chief executives 1971 births Living people
5671899
https://en.wikipedia.org/wiki/Linux%20User%20and%20Developer
Linux User and Developer
Linux User & Developer was a monthly magazine about Linux and related free and open source software published by Future. It was a UK magazine written specifically for Linux professionals and IT decision makers. It was available worldwide in newsagents or via subscription, and could be downloaded via Zinio or Apple's Newsstand. History and profile Linux User & Developer was first published in September 1999. In August 2014 its sister magazine, RasPi, was launched. The last issue of Linux User & Developer (#196) was published on 20 September 2018. All previous subscribers received issues of Linux Format as compensation for the remaining issues of their subscription. Staff Chris Thornett - Editor References External links Official homepage Digital Editions of the Magazine Defunct computer magazines published in the United Kingdom Linux magazines Magazines established in 1999 Magazines disestablished in 2018 Monthly magazines published in the United Kingdom 1999 establishments in the United Kingdom
294229
https://en.wikipedia.org/wiki/DragonFly%20BSD
DragonFly BSD
DragonFly BSD is a free and open-source Unix-like operating system forked from FreeBSD 4.8. Matthew Dillon, an Amiga developer in the late 1980s and early 1990s and a FreeBSD developer between 1994 and 2003, began working on DragonFly BSD in June 2003 and announced it on the FreeBSD mailing lists on 16 July 2003. Dillon started DragonFly in the belief that the techniques adopted for threading and symmetric multiprocessing in FreeBSD 5 would lead to poor performance and maintenance problems. He sought to correct these anticipated problems within the FreeBSD project. Due to conflicts with other FreeBSD developers over the implementation of his ideas, his ability to directly change the codebase was eventually revoked. Despite this, the DragonFly BSD and FreeBSD projects still work together, sharing bug fixes, driver updates, and other improvements. Intended as the logical continuation of the FreeBSD 4.x series, DragonFly has diverged significantly from FreeBSD, implementing lightweight kernel threads (LWKT), an in-kernel message passing system, and the HAMMER file system. Many design concepts were influenced by AmigaOS. System design Kernel The kernel messaging subsystem being developed is similar to those found in microkernels such as Mach, though it is less complex by design. However, DragonFly uses a monolithic kernel system. DragonFly's messaging subsystem has the ability to act in either a synchronous or asynchronous fashion, and attempts to use this capability to achieve the best performance possible in any given situation. According to developer Matthew Dillon, progress is being made to provide both device input/output (I/O) and virtual file system (VFS) messaging capabilities that will enable the remainder of the project goals to be met. The new infrastructure will allow many parts of the kernel to be migrated out into userspace; there they will be more easily debugged, as they will be smaller, isolated programs instead of small parts entwined in a larger chunk of code. Additionally, the migration of select kernel code into userspace has the benefit of making the system more robust; if a userspace driver crashes, it will not crash the kernel. System calls are being split into userland and kernel versions and being encapsulated into messages. This will help reduce the size and complexity of the kernel by moving variants of standard system calls into a userland compatibility layer, and help maintain forwards and backwards compatibility between DragonFly versions. Linux and other Unix-like OS compatibility code is being migrated out similarly. Threading As support for multiple instruction set architectures complicates symmetric multiprocessing (SMP) support, DragonFly BSD now limits its support to the x86-64 platform. DragonFly originally also ran on the x86 architecture; however, as of version 4.0, x86 is no longer supported. Since version 1.10, DragonFly supports 1:1 userland threading (one kernel thread per userland thread), which is regarded as a relatively simple solution that is also easy to maintain. Inherited from FreeBSD, DragonFly also supports multi-threading. In DragonFly, each CPU has its own thread scheduler. Upon creation, threads are assigned to processors and are never preemptively switched from one processor to another; they are only migrated by the passing of an inter-processor interrupt (IPI) message between the CPUs involved. Inter-processor thread scheduling is also accomplished by sending asynchronous IPI messages.
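This per-CPU arrangement can be pictured with a small conceptual sketch in C. This is not DragonFly kernel code: the structures and the ipi_send helper are invented for illustration, and all locking and the actual hardware interrupt are omitted. Each CPU owns a private run queue that only its own scheduler touches, and a thread moves to another CPU only via a message that the target CPU processes itself:

#include <stdio.h>

#define NCPU 4
#define QLEN 16

struct thread { int tid; };

/* Each CPU has a private run queue plus a mailbox of incoming
   migration requests; only the owning CPU dequeues from either. */
struct cpu {
    struct thread *runq[QLEN]; int nrun;
    struct thread *ipiq[QLEN]; int nipi;
} cpus[NCPU];

/* "Send an IPI": post the thread into the target CPU's mailbox.
   A real kernel would use a lock-free ring and raise a hardware
   inter-processor interrupt to wake the target CPU. */
static void ipi_send(int target, struct thread *td)
{
    struct cpu *c = &cpus[target];
    if (c->nipi < QLEN)
        c->ipiq[c->nipi++] = td;
}

/* Each CPU's scheduler drains its own mailbox, then runs its own
   queue; threads are never stolen by another CPU's scheduler. */
static void schedule(int cpu_id)
{
    struct cpu *c = &cpus[cpu_id];
    while (c->nipi > 0)
        c->runq[c->nrun++] = c->ipiq[--c->nipi];
    if (c->nrun > 0)
        printf("cpu%d runs thread %d\n", cpu_id, c->runq[--c->nrun]->tid);
}

int main(void)
{
    struct thread t1 = {1}, t2 = {2};
    ipi_send(0, &t1);          /* migrate t1 to CPU 0 */
    ipi_send(1, &t2);          /* migrate t2 to CPU 1 */
    for (int i = 0; i < NCPU; i++)
        schedule(i);
    return 0;
}

Because no CPU ever dequeues from another CPU's structures, no lock is needed around the run queues themselves; in the real kernel the IPI messaging subsystem is itself lockless, as described below.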
One advantage of this clean compartmentalization of the threading subsystem is that the processors' on-board caches in symmetric multiprocessor systems do not contain duplicated data, allowing for higher performance by giving each processor in the system the ability to use its own cache to store different things to work on. The LWKT subsystem is being employed to partition work among multiple kernel threads (for example, in the networking code there is one thread per protocol per processor), reducing competition by removing the need to share certain resources among various kernel tasks. Shared resources protection In order to run safely on multiprocessor machines, access to shared resources (like files and data structures) must be serialized so that threads or processes do not attempt to modify the same resource at the same time. In order to prevent multiple threads from accessing or modifying a shared resource simultaneously, DragonFly employs critical sections and serializing tokens to prevent concurrent access. While both Linux and FreeBSD 5 employ fine-grained mutex models to achieve higher performance on multiprocessor systems, DragonFly does not. Until recently, DragonFly also employed spls, but these were replaced with critical sections. Much of the system's core, including the LWKT subsystem, the IPI messaging subsystem and the new kernel memory allocator, is lockless, meaning that it works without using mutexes, with each process operating on a single CPU. Critical sections are used to protect against local interrupts, individually for each CPU, guaranteeing that a thread currently being executed will not be preempted. Serializing tokens are used to prevent concurrent accesses from other CPUs and may be held simultaneously by multiple threads, with the assurance that only one of those threads is running at any given time. Blocked or sleeping threads therefore do not prevent other threads from accessing the shared resource, unlike a thread holding a mutex. Among other things, the use of serializing tokens prevents many of the situations that could result in deadlocks and priority inversions when using mutexes, and it greatly simplifies the design and implementation of many-step procedures that require a resource to be shared among multiple threads. The serializing token code is evolving into something quite similar to the "read-copy-update" feature now available in Linux. Unlike Linux's current RCU implementation, DragonFly's is being implemented such that only processors competing for the same token are affected, rather than all processors in the computer. DragonFly switched to a multiprocessor-safe slab allocator, which requires neither mutexes nor blocking operations for memory assignment tasks. It was eventually ported into the standard C library in userland, where it replaced FreeBSD's malloc implementation. Virtual kernel Since release 1.8 DragonFly has had a virtualization mechanism similar to User-mode Linux, allowing a user to run another kernel in userland. The virtual kernel (vkernel) is run in a completely isolated environment with emulated network and storage interfaces, thus simplifying the testing of kernel subsystems and clustering features. The vkernel has two important differences from the real kernel: it lacks many routines for dealing with low-level hardware management, and it uses C standard library (libc) functions in place of in-kernel implementations wherever possible.
As both the real and the virtual kernel are compiled from the same code base, this effectively means that platform-dependent routines and re-implementations of libc functions are clearly separated in the source tree. The vkernel runs on top of hardware abstractions provided by the real kernel. These include the kqueue-based timer, the console (mapped to the virtual terminal where the vkernel is executed), the disk image, and the virtual kernel Ethernet device (VKE), which tunnels all packets to the host's tap interface. Package management Third-party software is available on DragonFly as binary packages via pkgng or from a native ports collection – DPorts. DragonFly originally used the FreeBSD Ports collection as its official package management system, but starting with the 1.4 release switched to NetBSD's pkgsrc system, which was perceived as a way of lessening the amount of work needed for third-party software availability. Eventually, maintaining compatibility with pkgsrc proved to require more effort than was initially anticipated, so the project created DPorts, an overlay on top of the FreeBSD Ports collection. CARP support The initial implementation of the Common Address Redundancy Protocol (commonly referred to as CARP) was finished in March 2007. As of 2011, CARP support is integrated into DragonFly BSD. HAMMER file systems Alongside the Unix File System, which is typically the default file system on BSDs, DragonFly BSD supports the HAMMER and HAMMER2 file systems. HAMMER2 is the default file system as of version 5.2.0. HAMMER was developed specifically for DragonFly BSD to provide a feature-rich yet better-designed analogue of the increasingly popular ZFS. HAMMER supports configurable file system history, snapshots, checksumming, data deduplication and other features typical of file systems of its kind. HAMMER2, the successor of the HAMMER file system, is now considered stable, used by default, and the focus of further development. Plans for its development were initially shared in 2012. In 2017, Dillon announced that the next DragonFly BSD version (5.0.0) would include a usable, though still experimental, version of HAMMER2, and described features of the design. With the release after 5.0.0, version 5.2.0, HAMMER2 became the new default file system. devfs In 2007 DragonFly BSD received a new device file system (devfs), which dynamically adds and removes device nodes, allows accessing devices by connection paths, recognises drives by serial numbers and removes the need for a pre-populated /dev file system hierarchy. It was implemented as a Google Summer of Code 2009 project. Application snapshots DragonFly BSD supports an Amiga-style resident applications feature: it takes a snapshot of a large, dynamically linked program's virtual memory space after loading, allowing future instances of the program to start much more quickly than they otherwise would. This replaces the prelinking capability that was being worked on earlier in the project's history, as the resident support is much more efficient. Large programs with many shared libraries, like those found in the KDE Software Compilation, benefit the most from this support. Development and distribution As with FreeBSD and OpenBSD, the developers of DragonFly BSD are slowly replacing old pre-function-prototype (K&R) style C code with more modern, ANSI equivalents.
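The difference is easy to see side by side; in the old style, parameter types are declared after the parameter list, leaving the compiler without a prototype to check calls against (the add function here is invented for the example):

/* Old, pre-prototype (K&R) style definition: */
int add_kr(a, b)
    int a;
    int b;
{
    return a + b;
}

/* Modern ANSI C equivalent; the prototype lets the compiler
   type-check the arguments at every call site. */
int add(int a, int b)
{
    return a + b;
}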
Similar to other operating systems, DragonFly's version of the GNU Compiler Collection has an enhancement called the Stack-Smashing Protector (ProPolice) enabled by default, providing some additional protection against buffer overflow based attacks. The kernel, however, is no longer built with this protection by default. Being a derivative of FreeBSD, DragonFly has inherited an easy-to-use integrated build system that can rebuild the entire base system from source with only a few commands. The DragonFly developers use the Git version control system to manage changes to the DragonFly source code. Unlike its parent FreeBSD, DragonFly has both stable and unstable releases in a single source tree, due to a smaller developer base. Like the other BSD kernels (and those of most modern operating systems), DragonFly employs a built-in kernel debugger to help the developers find kernel bugs. Furthermore, a debug kernel, which makes bug reports more useful for tracking down kernel-related problems, is installed by default, at the expense of a relatively small quantity of disk space. When a new kernel is installed, the backup copy of the previous kernel and its modules are stripped of their debugging symbols to further minimize disk space usage. Distribution media The operating system is distributed as a Live CD and Live USB (a full X11 flavour is available) that boots into a complete DragonFly system. It includes the base system and a complete set of manual pages, and may include source code and useful packages in future versions. The advantage of this is that with a single CD users can install the software onto a computer, use a full set of tools to repair a damaged installation, or demonstrate the capabilities of the system without installing it. Daily snapshots are available from the master site for those who want to install the most recent versions of DragonFly without building from source. Like the other free and open-source BSDs, DragonFly is distributed under the terms of the modern version of the BSD license. See also Comparison of BSD operating systems Comparison of open-source operating systems Comparison of operating system kernels References External links 2004 software Berkeley Software Distribution Free software operating systems Operating system distributions bootable from read-only media Software forks Software using the BSD license Unix variants X86 operating systems
17741849
https://en.wikipedia.org/wiki/University%20of%20Applied%20Sciences%20Ravensburg-Weingarten
University of Applied Sciences Ravensburg-Weingarten
Ravensburg-Weingarten University of Applied Sciences (RWU) (German: Hochschule Ravensburg-Weingarten) is a public university in the city of Weingarten, in the southern part of the German state of Baden-Württemberg. The university was founded in 1962 and covers advanced fields in Applied Physics and Process Engineering, Electrical Engineering and Computer Science, Automotive and Mechatronics Engineering, Mechanical Engineering and Business & Technology Management, as well as Social Work and Healthcare Management. The university has a very good reputation for engineering and technology studies; according to the CHE ranking, published by the weekly German newspaper Die Zeit, RWU frequently ranks among the top universities in the fields of mechanical and electrical engineering, as well as computer science. History In 1962, the Baden-Württemberg Parliament decided to build a State School of Engineering (SIS) in Ravensburg with two departments, Mechanical Engineering and Physical Engineering. In 1967, the first students graduated as engineers. In 1971, the State Schools of Engineering became Fachhochschulen, known in English as universities of applied sciences. In 1974, the Ministry of Education decided on the establishment of two new faculties: Electronics and Business Administration. The university was further extended with the introduction of the Electronics program as part of the Physical Engineering department in May 1974. Today, Ravensburg-Weingarten University offers 35 undergraduate and graduate degree programs in German and English in four faculties: the Faculty of Electrical Engineering and Computer Science, the Faculty of Mechanical Engineering, the Faculty of Technology and Management and the Faculty of Social Work, Health and Nursing. RWU employs 296 academic and administrative staff members. PLUS studies In cooperation with the nearby Weingarten University of Education, which specializes in teacher education, the university provides some courses leading to a double graduation: a bachelor's degree in Engineering with the additional possibility of obtaining a university degree in teaching at vocational schools. Structure The university is divided into several departments which provide a broad range of Engineering, Social Work and Management courses: Department of Electrical Engineering and Computer Science Department of Mechanical Engineering Department of Social Work and Healthcare Management Department of Technology and Management References External links Website of Ravensburg-Weingarten University of Applied Sciences Universities of Applied Sciences in Germany Universities and colleges in Baden-Württemberg Educational institutions established in 1964 1964 establishments in Germany
2006843
https://en.wikipedia.org/wiki/Pitney%20Bowes
Pitney Bowes
Pitney Bowes Inc. is an American technology company best known for its postage meters and other mailing equipment and services, which has expanded into e-commerce, software, and other technologies. The company was founded by Arthur Pitney, who invented the first commercially available postage meter, and Walter Bowes as the Pitney Bowes Postage Meter Company on April 23, 1920. As of 2016, Pitney Bowes provided customer engagement, customer information management, global e-commerce, location intelligence, and mailing and shipping services to approximately 1 million customers in about 100 countries around the world. The company is a certified "work-share partner" of the United States Postal Service, and helps the agency sort and process 15 billion pieces of mail annually. Pitney Bowes has also commissioned surveys related to international e-commerce. Pitney Bowes is based in Stamford, Connecticut, and operates a 300,000-square-foot Global Technology Center for manufacturing and engineering in Danbury, Connecticut. The company has 33 operating centers throughout the United States, and additional offices in Hatfield (United Kingdom), New Delhi, Tokyo and Bielsko-Biala (Poland). As of December 2016, Pitney Bowes employed approximately 14,000 people worldwide. History In 1902, Arthur Pitney patented his first "double-locking" hand-cranked postage-stamping machine, and with patent attorney Eugene A. Rummler, founded the Pitney Postal Machine Company. In 1908, English emigrant and founder of the Universal Stamping Machine Company Walter Bowes began providing stamp-canceling machines to the United States Post Office Department. Bowes moved his operations to Stamford in 1917. These two companies merged to form the Pitney Bowes Postage Meter Company in 1920 with the invention of the first commercially available postage meter. The company created its first logo, which symbolized "the security of the metered mail system", in 1930. In 1950, Pitney Bowes initiated an advertising campaign in national publications with the message, "Metered mail makes the mailer's life easier". In 1971, the company introduced a new logo, which represented the "intersection of paper-based and electronic communication". Pitney Bowes was valued at around $18 billion in December 1998. In April 2003, Pitney Bowes filed a lawsuit in Seattle's King County Superior Court against Mark Browne and Howard Gray, who founded the competing company Nexxpost in 2002, as well as six other former employees, for engaging "in transgressions ranging from misappropriation of trade secrets to violating confidentiality agreements". The two companies reached a settlement in August 2003. The company reported a profit of $498.1 million in 2003. In 2005, Pitney Bowes' revenue and earnings increased by more than 11 percent, and the company employed 32,500 people. In 2006, the company had $5.7 billion in annual revenue, and more than 35,000 employees. In 2008, in conjunction with other companies, Pitney Bowes donated two of its 3,400 patents to the Eco-Patent Commons, which is operated by the World Business Council for Sustainable Development, in an effort to reduce pollution. One of the patents increases the lifespan of electronic scales, reducing landfill waste, and the other is an inkjet printing technology that reduces ink use. In 2009, Pitney Bowes was named one of the world's largest software companies by Software Magazine.
The company earned $98.6 million during the last three months of 2009, compared to $74 million the year before; during the same time, revenue decreased by 6 percent, from $1.55 billion to $1.45 billion. In December 2009, Pitney Bowes opened its first customer innovation center in Shelton, Connecticut. The company sold its I.M. Pei & Partners-designed headquarters in Stamford for nearly $40 million in 2013, and relocated to a new, smaller headquarters in the city. According to the Hartford Courant, Pitney Bowes was eligible to receive as much as $27 million in subsidies over five years as part of the state's "First Five" program, for keeping 1,600 employees and adding 200 more. In 2014, the company announced plans for a rebrand. Pitney Bowes unveiled its new logo in January 2015, replacing one used since 1971; the rebranding campaign, which included an updated website and marketing, reportedly cost between $40 million and $80 million. In February 2012, the credit rating for Pitney Bowes International Holdings was lowered by Fitch Ratings from BBB+ to BBB. The ratings agency said its main concern was "the downward trajectory" of Pitney Bowes' revenue, and added that it had a "negative outlook". In March 2014, Moody's assigned a long-term rating of Baa2 to the company's proposed $350 million senior unsecured notes (due 2024) and reiterated its stable outlook on PBI. Moody's cited "an improvement in the company's operating margin to around 19%, from about 15% historically, following the sale of its labor intensive management services business" and "an operational restructuring which could yield annual cost savings of up to $170 million by 2016." In 2016, the company launched its first television advertising campaign in nearly twenty years; "Craftsmen of Commerce" cost $20 million and included three advertisements for national news and sports networks. Pitney Bowes announced a six-month startup accelerator program, "Scale-Up", in August 2016. Companies that participated in the program, which was a collaboration with NASSCOM's 10,000 Startups initiative, included: eCourierz, an online automated shipping tool; Infinite Analytics, a data analytics company; the digital health platform Medimojo; Niki, which uses artificial intelligence to make ordering processes simpler; Sponsifyme, a geolocation-integrated marketing platform; and Wedosky. The company employed 15,700 people and earned $3.4 billion in revenue in 2016, which was a 5 percent decrease from 2015. Profits in 2015 totaled $408 million, but were reduced to $95 million in 2016. Pitney Bowes' executives said the declines were caused by "the changeover to a new U.S. enterprise-business platform — a change that disrupted short-term business, but one they have said would significantly improve the company's long-term operations." In March 2017, Pitney Bowes left the S&P 500 Index, having been listed since the stock market index was established in 1957, and joined the S&P 400. Acquisitions and divestitures In 1995, Pitney Bowes sold Dictaphone Corp., which produced communication and dictation recording systems, to an affiliate of the investment group Stonington Partners Inc. for $450 million. Imagistics International was spun off from Pitney Bowes' copier and fax business in 2001. Since 2001, Pitney Bowes has spent $1 billion on acquisitions. In 2001, Bell & Howell sold its international Mail and Messaging Technologies business to Pitney Bowes.
Pitney Bowes also acquired Danka Services International (part of Danka Business Systems PLC) for $290 million in cash, and the French postage meter company Secap. In 2002, Pitney Bowes acquired the Omaha, Nebraska-based mail presorting company PSI Group for $130 million, followed by the Landover, Maryland-based DDD Company, which developed mail and messenger services, for $49.5 million in 2003. In 2004, Pitney Bowes acquired the Lanham, Maryland-based company Group 1 Software, which develops mailing technology, for $380 million, as well as International Mail Express for $29 million. In February 2005, Pitney Bowes completed transactions in Brazil and India, expanding into both markets for the first time. In Brazil, the company partnered with Semco Participacoes to form Pitney Bowes Semco Equipamentos e Servicos, offering mailing equipment, production mail, and software services. Pitney Bowes acquired the mailing division of Kilburn Office Automation Limited, forming the New Delhi-based Pitney Bowes India. Pitney Bowes acquired the litigation support services provider Compulit Inc. one month later, creating Pitney Bowes Legal Solutions. Pitney Bowes purchased the marketing services company Imagitas in 2005 for $230 million in stock; Imagitas was later sold to Red Ventures in 2015. The company spun off Capital Services in 2005 to New York private-equity group Cerberus Capital Management. Pitney Bowes acquired multiple companies in 2006, including Emtex and its output management software for $41 million, and the Providence, Rhode Island-based company Ibis Consulting, Inc., which provides electronic discovery services, for nearly $67 million. The company also acquired Advertising Audit Service, PMH Caramanning, and the Bellevue, Washington-based company Print Inc., which provides print management solutions. In 2007, Pitney Bowes acquired MapInfo Corporation and its location intelligence solutions. The company moved out of MapInfo's building in North Greenbush, New York's Rensselaer Technology Park, and into other offices within the science park. Pitney Bowes also acquired the Toronto-based customer relationship management services company Digital Cement for nearly $40 million in cash. The British software development company Portrait Software was acquired by Pitney Bowes in 2010 for nearly $64.8 million in cash. Pitney Bowes sold its management services division to Apollo Global Management in 2013 for $400 million. In May 2015, Pitney Bowes acquired the online shopping services provider Borderfree for about $395 million. Borderfree was founded in Israel in 1999, initially as a forex conversion site for retailers, and subsequently pivoted its business to providing cross-border e-commerce solutions for US retailers. The company also acquired the cloud-based software developer Enroute Systems Corp. for an undisclosed amount, followed by the presort services provider Zip Mail Services. In mid-2016, Pitney Bowes acquired Maponics, which provides "geospatial boundary and contextual data", for an undisclosed amount. In February 2017, the company acquired the Naperville, Illinois-based mailing solutions company ProSORT for an undisclosed amount. Pitney Bowes merged its Des Plaines operating center into a larger Naperville facility. In September 2017, Pitney Bowes acquired Newgistics, an Austin-based e-commerce and retail logistics company, for $475 million, with the stated aim of "accelerating Pitney Bowes' expansion into the U.S. domestic parcels market."
Following the acquisition, Newgistics CEO Todd Everett (who joined the company in 2005 as Director of Operations and was named CEO in 2015) continued to lead Newgistics within Pitney Bowes' corporate framework. In mid-2018, Pitney Bowes' Document Messaging Technologies (DMT) division was acquired by Platinum Equity in exchange for $361 million, and the newly acquired business was re-branded as BlueCrest. In August 2019, Syncsort announced plans to acquire Pitney Bowes' software solutions business for approximately $700 million. The transaction was completed in December 2019. Leadership Marc Lautenbach has served as Pitney Bowes' president and CEO since December 2012. He has been credited with prioritizing innovation and moving the company into e-commerce and other technology services. In 2016, Stanley Sutula III was named executive vice president and chief financial officer; he succeeded Michael Monahan. Other key personnel include Lila Snyder, executive vice president and president of commerce services; James A. Fairweather, chief innovation officer; and Jason Dies, executive vice president and president of Sending Technology Solutions. Former CEOs include Murray D. Martin, who served from 2007 to 2012, and Michael J. Critelli, who served as chairman and chief executive for ten years. Other previous CEOs included George Harvey, Fred Allen, John Nicklis, Harry Nordberg, and Walter Wheeler. Products and services Pitney Bowes introduced the Model M Postage Meter, which was authorized by the United States Postal Service on September 1, 1920. The company released the first mass-market meter designed for desktops in 1949. The first automatic mail sorters were launched by the company in 1957, and mail inserters were created in 1961 to increase productivity and decrease costs associated with volume mailing. In 1968, Pitney Bowes created the first bar code equipment for retail use. The company launched Postage by Phone in 1978, reducing reliance on post office visits. In 1986, the company began offering fax machines and scales with microprocessors. Pitney Bowes introduced Paragon, which calculates and affixes postage based on size and weight, in 1992. A line of credit for postage was launched in 1996, followed in 1998 by D3 software, which allowed message management via email, fax, hard copy, and web. In the 2000s, Pitney Bowes introduced its DM1000 Mailing System and IntelliLink technology, a new collection of digital postage meters called the DM Infinity Series, four AddressRight printers, and the IntelliJet Printing System. The company enhanced its Internet-based shipping service in 2011 with the introduction of its 'pbSmartPostage' mailing tool, which "[integrates] postage, package routing, shipping management and reporting into a Web app that can be accessed from any PC with a printer". In 2015, Pitney Bowes launched its AcceleJet inkjet system, which targets transactional printers and is intended for companies printing at high volumes. In addition to stuffing envelopes, weighing documents, and printing postage, the 2015 model of the Relay Multi-Channel Communication Suite scans and uploads files and offers email marketing functions. Pitney Bowes launched its EngageOne Video software solution in September 2015, providing interactive and personalized video delivery experiences. In January 2016, Pitney Bowes began using technology by Electric Imp to enable Internet connectivity for postage meters.
The software creates a maintenance program, describes and tracks problems with machines, and enhances data sharing. In March, Pitney Bowes introduced Single Customer View, which uses the company's Spectrum Technology Platform to facilitate data sharing. The customer relationship management aggregator is not specific to the medical industry, but marks a push into the healthcare field. In April 2016, Pitney Bowes launched its "Commerce Cloud" system, allowing customers to calculate payments, print labels, and process international transactions, among other tasks, using the company's applications. In mid-2016, the company introduced its first channel program and partnered with information technology providers, including global systems integrators Accenture and Capgemini, to help companies find and communicate with customers. The company released a digital device, called SmartLink, in July 2016; the product was developed in collaboration with Electric Imp and connects postage meters to Pitney Bowes' cloud-computing technology, enabling other maintenance and monitoring services as well. Pitney Bowes also released a suite of digital services, including: Clarity Advisor, which collects machine data to "combat unplanned downtime"; Clarity Optimizer, which uses analytics to increase productivity; and Clarity Scheduler, which, according to Computer Weekly, "automates placement of the right job on the right machine at the right time". In September 2016, the company partnered with Lighthouse Computer Services to create data solutions designed to help businesses identify and keep customers, improve marketing initiatives, and reduce fraud. Pitney Bowes released its SendPro300 product in October, and announced its Commerce Complete for Retail platform for expanding global e-commerce businesses. One month later, the company released its location intelligence tool, called GeoVision, which uses data provided by PSMA Australia and allows companies to "visualise, analyse, and ultimately make use of that data to inform decisions". Pitney Bowes launched its SendPro C Series in September 2017. Pitney Bowes launched the subsidiary Wheeler Financial to provide equipment financing to small and medium businesses in March 2019. In 2021, Pitney Bowes launched SendPro Online in Australia, a shipping platform that allows its customers to manage parcel sending from their devices. References Further reading External links Pitney Bowes Home Page Companies listed on the New York Stock Exchange American companies established in 1902 Multinational companies headquartered in the United States Companies based in Stamford, Connecticut Manufacturing companies established in 1902 Manufacturing companies based in Connecticut Office supply companies of the United States Customer communications management 1902 establishments in Connecticut Postal systems
47208561
https://en.wikipedia.org/wiki/NotGTAV
NotGTAV
NotTheNameWeWanted (formerly known as NotGTAV) is a casual video game developed and published by NotGames. A parody of Grand Theft Auto V in the style of Snake, the game donates all profits from its sales to the Peer Productions charity. The game was released for Microsoft Windows, iOS and Android in 2014, while ports for OS X and Linux followed in 2015. Gameplay NotGames describes NotTheNameWeWanted as a "ruthless Snake-like parody" of Rockstar Games' 2013 hit title, Grand Theft Auto V, despite not sharing any aspects with that game: NotGTAV uses a top-down view, in contrast to Grand Theft Auto V's third-person view; is set in the United Kingdom rather than the United States; and employs hand-drawn 2D sprites rather than a fully 3D environment. Release NotGTAV was initially released for Microsoft Windows on 4 April 2014. On 2 July 2015, after a successful Steam Greenlight campaign, NotGames released NotGTAV onto Steam, alongside ports for OS X and Linux. However, the game disappeared again just a week later, after Valve allegedly received a DMCA takedown notice from Rockstar Games. The game was restored to Steam within 24 hours, after the notice was judged to have come from a false complainant. As of 21 April 2018, the game was rebranded to NotTheNameWeWanted. References External links 2015 video games Android (operating system) games Casual games Fangames Grand Theft Auto IOS games Indie video games Linux games MacOS games Parody video games Snake video games Video games developed in the United Kingdom Windows games
65867752
https://en.wikipedia.org/wiki/Diffeo%20%28company%29
Diffeo (company)
Diffeo is a software company that developed a collaborative intelligence text mining product for defense, intelligence and financial services customers. The Diffeo product is a recommender engine that analyzes text in a user's working documents, such as draft emails and web pages, identifying named entities and proposing related entities. Diffeo was founded in 2012 and was acquired by Salesforce in 2019. The company grew out of NIST's Text Retrieval Conference, where the founding team organized the Knowledge Base Acceleration (KBA) evaluation to measure the effectiveness of recommender engines. History Founding The company was founded by three Hertz Fellows, Dan Roberts, Max Kleiman-Weiner, and John Frank, a co-founder of MetaCarta. The name Diffeo comes from a shortening of diffeomorphism, which two of the cofounders were learning about in a class about black holes taught by Andrew Strominger. Diffeo was one of the first residents in hack/reduce. Funding In 2016, the company raised a seed round of approximately two million dollars from investors including Basis Technology and Carahsoft. Also in 2016, Diffeo acquired Meta, a search engine company founded by Jason Briggs, Emily Pavlini, and Aaron Taylor through a business plan competition at Williams College. Research Diffeo's research focused on recommender engines and evaluation protocols for measuring the benefits of recommender engines for end users. As part of running the Knowledge Base Acceleration (KBA) track in NIST's Text Retrieval Conference from 2012 to 2014, the co-founders of Diffeo released a public dataset of timestamped news and blogs spanning approximately 12,000 hours. The KBA track aimed to measure approaches to accelerating the assimilation of knowledge into knowledge bases like Wikipedia. The company's researchers published papers and open source code on machine learning techniques including Jacobian regularization, singular spectrum analysis, and hierarchical agglomerative clustering for entity disambiguation. Post-Acquisition In 2021, Salesforce announced an AI-powered assistant that helps B2B salespeople with their deals. Briggs, who was previously CEO at Diffeo, is the Senior Director of Product Management and helped in the creation of this AI assistant. This technology comes from Salesforce's acquisition of Diffeo, which also brought Briggs to the company. Product & technology The Diffeo product, Diffeo Enterprise HierCoref (DEHC), is a recommender engine that allows users to "invite" an agent into their work documents in order to identify named entities and recommend related entities that it identifies by crawling the Web and the user's data repositories. For example, the product has plugins that enable it to analyze a user's emails and web pages open in their web browser. The company's user meetings, called The AI<>Tradecraft Forum, brought together speakers from the information extraction industry and the US Intelligence Community, including NGA, United States Army, AFOSI, and NSA. Awards Diffeo won the 2019 MassChallenge FinTech grand prize, was selected into the 2018 FinTech Innovation Lab and was one of 13 companies in the 2017 Salesforce AI Incubator. Diffeo won the Hertz Foundation's 2015 Newman Entrepreneurial Initiative. The company was also a performer in DARPA's Memex program, and won the grand prize in the NGA Disparate Data Challenge.
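The hierarchical agglomerative clustering mentioned in the Research section above can be illustrated generically with SciPy; the sketch below is not Diffeo's code, and the mention strings, feature vectors and distance threshold are invented for the example:

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Toy 2-D feature vectors for four entity mentions: similar vectors
# stand in for mentions that refer to the same real-world entity.
mentions = ["J. Smith", "John Smith", "Acme Corp", "ACME Corporation"]
features = np.array([[0.90, 0.10],
                     [0.85, 0.15],
                     [0.10, 0.90],
                     [0.12, 0.88]])

# Build the merge tree with average linkage, then cut it at a distance
# threshold to obtain flat clusters (one cluster per disambiguated entity).
tree = linkage(features, method="average")
labels = fcluster(tree, t=0.2, criterion="distance")
print(dict(zip(mentions, labels)))  # the Smiths and the Acmes group separately
```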
See also Collaborative intelligence Text Retrieval Conference Recommender engine Named-entity recognition Salesforce External links Diffeo on Github.com Hierarchical agglomerative clustering library written in Rust: https://github.com/diffeo/kodama https://trec-kba.org/ https://trec-dd.org/ TREC KBA Streamcorpus at http://s3.amazonaws.com/aws-publicdatasets/trec/kba/index.html TREC KBA corpus information at NIST https://trec.nist.gov/data/kba.html References Salesforce Data visualization software Business software companies Data analysis software Business intelligence companies Data companies American companies established in 2012 Software companies established in 2012 2019 mergers and acquisitions Software companies of the United States
15547460
https://en.wikipedia.org/wiki/List%20of%20monochrome%20and%20RGB%20color%20formats
List of monochrome and RGB color formats
This list of monochrome and RGB palettes includes generic repertoires of colors (color palettes) used by a computer's display hardware to produce black-and-white and RGB color pictures. RGB is the most common method of producing colors for displays, so these complete RGB color repertoires have every possible combination of R-G-B triplets within any given maximum number of levels per component. Each palette is represented by a series of color patches. When the number of colors is low, a 1-pixel-size version of the palette appears below it, for easily comparing relative palette sizes. Huge palettes are given directly in one-color-per-pixel color patches. For each unique palette, an image color test chart and sample image (truecolor original follows) rendered with that palette (without dithering) are given. The test chart shows the full 256 levels of the red, green, and blue (RGB) primary colors and cyan, magenta, and yellow complementary colors, along with a full 256-level grayscale. Gradients of RGB intermediate colors (orange, lime green, sea green, sky blue, violet, and fuchsia), and a full hue spectrum are also present. Color charts are not gamma corrected. These elements illustrate the color depth and distribution of the colors of any given palette, and the sample image indicates how the color selection of such palettes could represent real-life images. These images are not necessarily representative of how the image would be displayed on the original graphics hardware, as the hardware may have additional limitations regarding the maximum display resolution, pixel aspect ratio and color placement. Implementation of these formats is specific to each machine; therefore, the number of colors that can be simultaneously displayed in a given text or graphic mode might differ. Also, the actual displayed colors are subject to the output format used - PAL or NTSC, composite or component video, etc. - and might be slightly different. For simulated images on specific hardware, and alternate methods to produce colors other than RGB (e.g. composite), see the List of 8-bit computer hardware palettes, the List of 16-bit computer hardware palettes and the List of video game console palettes. For various software arrangements and sorts of colors, including other possible full RGB arrangements within 8-bit color depth displays, see the List of software palettes. Monochrome palettes These palettes have only shades of gray, from black to white, black and white being considered the darkest and lightest possible "grays", respectively. The general rule is that these palettes have 2ⁿ different shades of gray, where n is the number of bits needed to represent a single pixel. Monochrome (1-bit grayscale) Monochrome graphics displays typically have a black background with a white or light gray image, though green and amber monochrome monitors were also common. Such a palette requires only one bit per pixel. Where photo-realism was desired, these early computer systems relied heavily on dithering to make up for the limits of the technology. In some systems, such as the Hercules and CGA graphics cards for the IBM PC, a bit value of 1 represents white pixels (light on) and a value of 0 the black ones (light off); in others, like the Atari ST and Apple Macintosh with monochrome monitors, a bit value of 0 means a white pixel (no ink) and a value of 1 means a black pixel (dot of ink), which approximates printing logic.
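The 2ⁿ-shades rule above is easy to make concrete in code. The following minimal Python sketch is purely illustrative (the function names are invented for this example and are not taken from any of the systems described here); it quantizes an 8-bit gray level to an n-bit grayscale palette index and maps an index back to a displayable 0-255 level:

```python
def quantize_gray(value, bits):
    """Map an 8-bit gray level (0-255) to the nearest n-bit palette index."""
    levels = 2 ** bits                          # a palette of 2^n shades
    return (value * (levels - 1) + 127) // 255  # integer rounding

def index_to_gray(index, bits):
    """Map an n-bit palette index back to a displayable 0-255 gray level."""
    levels = 2 ** bits
    return index * 255 // (levels - 1)          # black stays 0, white stays 255

# 1-bit monochrome: every pixel becomes black (0) or white (255).
assert index_to_gray(quantize_gray(200, 1), 1) == 255
# 2-bit grayscale: the four shades 0, 85, 170 and 255.
print([index_to_gray(i, 2) for i in range(4)])
```

With bits=8 the mapping is the identity, matching the 256-level grayscale of ordinary monochrome systems.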
2-bit Grayscale In a 2-bit color palette each pixel's value is represented by 2 bits, resulting in a 4-value palette (2² = 4), optionally smoothed with 2-bit dithering. It has black, white and two intermediate levels of gray. A monochrome 2-bit palette is used on:
The Monochrome Display Adapter for the IBM PC
NeXT Computer, NeXTcube and NeXTstation monochrome graphic displays
The original Game Boy portable video game console
Macintosh PowerBook 150 monochrome LC displays
Commodore Amiga with the A2024 monochrome monitor in high-resolution mode
The original Amazon Kindle
The original WonderSwan
The Tiger Electronics Game.com portable video game console
The original Neo Geo Pocket
4-bit Grayscale In a 4-bit color palette each pixel's value is represented by 4 bits, resulting in a 16-value palette (2⁴ = 16). 4-bit grayscale dithering does a fairly good job of reducing visible banding of the level changes. A monochrome 4-bit palette is used on:
MOS Technology VDC (on the Commodore 128 with a monochrome monitor)
Amstrad CPC series with a GT64/GT65 Green Monitor (16 unique green shades)
Amstrad CPC Plus series with the MM12 monochrome monitor (16 shades of grey)
Some Apple PowerBooks equipped with monochrome displays, like the PowerBook 5300
8-bit Grayscale In an 8-bit color palette each pixel's value is represented by 8 bits, resulting in a 256-value palette (2⁸ = 256). This is usually the maximum number of grays in ordinary monochrome systems; each image pixel occupies a single memory byte. Most scanners can capture images in 8-bit grayscale, and image file formats like TIFF and JPEG natively support this monochrome palette size. Alpha channels employed for video overlay also use (conceptually) this palette. The gray level indicates the opacity of the blended image pixel over the background image pixel. Dichrome palettes
16-bit RG palette (additive RG color palette)
16-bit RB palette (additive RB color palette)
16-bit GB palette (additive GB color palette)
Regular RGB palettes Here are grouped those full RGB hardware palettes that have the same number of binary levels (i.e., the same number of bits) for every red, green and blue component, using the full RGB color model. Thus, the total number of colors is always the number of possible levels per component, n, raised to the third power: n×n×n = n³. 3-bit RGB Systems with a 3-bit RGB palette use 1 bit for each of the red, green and blue color components. That is, each component is either "on" or "off" with no intermediate states. This results in an 8-color palette ((2¹)³ = 2³ = 8) containing black, white, the three RGB primary colors red, green and blue, and their corresponding complementary colors cyan, magenta and yellow. The color indices vary between implementations; therefore, index numbers are not given. The 3-bit RGB palette is used by:
The ECMA-48 standard for text terminals (sometimes known as the "ANSI standard", although ANSI X3.128 does not define colors)
Level 1/1.5 Teletext
Videotex
Oric
BBC Micro
The original NEC PC-8801, up to the MkII
The original NEC PC-9801 with the original 8086 CPU, before the VM/VX models
All Sharp X1 models before the X1 Turbo Z
The Sharp MZ-700
Fujitsu FM-7, FM New 7 and FM 77, before the FM77AV
Sinclair QL
The Macintosh SE with a color printer or external monitor
The SECAM version of the Atari 2600
The Color Maximite, a PIC32-based microcomputer
Arcadia 2001
Casio PV-1000
6-bit RGB Systems with a 6-bit RGB palette use 2 bits for each of the red, green, and blue color components. This results in a (2²)³ = 4³ = 64-color palette. 6-bit RGB systems include the following:
Enhanced Graphics Adapter (EGA) for IBM PC/AT (only 16 colors can be displayed simultaneously)
Sega Master System video game console
GIME for TRS-80 Color Computer 3 (only 16 colors can be displayed simultaneously)
Pebble Time smartwatch, which has a 6-bit (64-color) e-paper display
Parallax Propeller using the reference VGA circuit
9-bit RGB Systems with a 9-bit RGB palette use 3 bits for each of the red, green, and blue color components. This results in a (2³)³ = 8³ = 512-color palette. 9-bit RGB systems include the following:
Atari ST (normally 4 to 16 at once without tricks)
MSX2 computers (up to 16 at once)
Sega Genesis video game console (64 at once)
Sega Nomad
TurboGrafx-16 (NEC PC-Engine)
ZX Spectrum Next
The NEC PC-8801 Mk II SR and later models (8 of them at once)
The Mindset computer (16 at once)
12-bit RGB Systems with a 12-bit RGB palette use 4 bits for each of the red, green, and blue color components. This results in a (2⁴)³ = 16³ = 4096-color palette. 12-bit color can be represented with three hexadecimal digits, also known as shorthand hexadecimal form, which is commonly used in web design. 12-bit RGB systems include the following:
Amiga OCS/ECS (32, 64, or 4,096 colors)
Apple IIgs Video Graphics Chip (3,200 colors)
Atari STe (16 colors)
Acorn Archimedes
Sega Game Gear (32 colors)
Hi-Text Level 2.5+ Teletext
Neo Geo Pocket Color (147 colors)
Atari Lynx (16 colors)
NEC PC-9801 VM/VX models, typically equipped with a NEC V30 or better, but before the PC-9821 series
The Sharp X1 Turbo Z series
Fujitsu FM-77AV
The Amstrad CPC 664Plus, 6128Plus and GX4000 (32 colors)
NeXTstation Color and NeXTstation Turbo Color
WonderSwan Color
Thomson TO8
The legacy version 4 of the Allegro library supported an emulated 12-bit color mode (example code "ex12bit.c") using 8-bit indexed color in VGA/SVGA. It used two pixels for each emulated pixel, paired horizontally, and a specifically adapted 256-color palette. One range of the palette held many brightnesses of one primary color (say, green), and another range held the other two primaries mixed together at different amounts and brightnesses (red and blue). This effectively halved the horizontal resolution, but allowed a 12-bit "true color" display in DOS and other 8-bit VGA/SVGA modes. The effect also somewhat reduced the total brightness of the screen. 15-bit RGB Systems with a 15-bit RGB palette use 5 bits for each of the red, green, and blue color components.
This results in a (2⁵)³ = 32³ = 32,768-color palette (commonly known as Highcolor). 15-bit systems include:
Super Nintendo Entertainment System (256 colors)
Truevision TARGA and AT-Vista graphic cards for IBM PC/AT and compatibles, and NU-Vista for Apple Macintosh
Later models of Super VGA (SVGA) IBM PC compatible graphic cards
Nintendo Game Boy Color/Advance/SP/Micro pocket video game consoles
Nintendo DS (2D output)
Neo Geo AES/Neo Geo CD video game consoles (4096 colors)
The Sega 32X add-on for the Mega Drive/Genesis
While the PlayStation utilized a 24-bit color depth for calculations and video, textures applied to 3D objects had a maximum color depth of 15 bits. 18-bit RGB Systems with an 18-bit RGB palette use 6 bits for each of the red, green, and blue color components. This results in a (2⁶)³ = 64³ = 262,144-color palette. 18-bit RGB systems include the following:
IBM 8514 (256 colors out of 262,144)
Video Graphics Array (VGA) for IBM PS/2 and IBM PC compatibles (256 simultaneous colors from a palette of 262,144)
Atari Falcon (256 colors)
Nintendo DS (3D output and 2D blended output)
Used internally by many LCD monitors
24-bit RGB Often known as truecolor and millions of colors, 24-bit color is the highest color depth normally used, and is available on most modern display systems and software. Its color palette contains (2⁸)³ = 256³ = 16,777,216 colors. 24-bit color can be represented with six hexadecimal digits. The complete palette would need a square image 4,096 pixels wide (50.33 MB uncompressed), and there is not enough room on this page to show it in full. It can be imagined as 256 stacked squares, each having the same given value for the red component, from 0 to 255 (for example, Red = 0, Red = 85 (1/3 of 255), Red = 170 (2/3 of 255) and Red = 255). The color transitions in these patches should appear continuous; if color stepping (banding) is visible, the display is probably set to a Highcolor (15- or 16-bit RGB, 32,768 or 65,536 colors) mode or lower. This is also the number of colors used in true color image files, like Truevision TGA, TIFF, JPEG (the last internally encoded as YCbCr) and Windows Bitmap, captured with scanners and digital cameras, as well as those created with 3D computer graphics software. 24-bit RGB systems include:
Amiga Advanced Graphics Architecture (256 or 262,144 colors)
Nintendo 3DS
PlayStation
PlayStation Vita
Later models of Super VGA (SVGA) IBM PC compatible graphic cards
Truevision AT-Vista graphic cards for IBM PC/AT and compatibles, and NU-Vista for Apple Macintosh
The Philips CD-i
Nintendo Switch
30-bit RGB Some newer graphics cards support 30-bit RGB and higher. Its color palette contains (2¹⁰)³ = 1024³ = 1,073,741,824 colors. However, few operating systems or applications support this mode yet. For some people, it may be hard to distinguish color palettes richer than what 24-bit color offers. However, a 30-bit color system offers 1,024 levels of luminance, or gray scale, rather than the 256 of the common standard 24-bit, and the human eye is more sensitive to luminance than to hue. This reduces the banding effect for gradients across large areas.
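The regular palettes above can be reproduced programmatically. This minimal Python sketch is illustrative only (the helper names are invented for the example); it enumerates a regular n-bit-per-channel palette scaled to 24-bit triplets, and unpacks a 15-bit 5-5-5 word into 0-255 components. The bit order shown (red in the high bits) is an assumption; real hardware differed on this point.

```python
from itertools import product

def regular_rgb_palette(bits_per_channel):
    """Every color of a regular RGB palette as 24-bit (R, G, B) triplets."""
    levels = 2 ** bits_per_channel
    channel = [i * 255 // (levels - 1) for i in range(levels)]
    return list(product(channel, repeat=3))    # levels^3 colors in total

def unpack_555(word):
    """Split a 15-bit 5-5-5 RGB word into components scaled to 0-255."""
    r, g, b = (word >> 10) & 0x1F, (word >> 5) & 0x1F, word & 0x1F
    return tuple(c * 255 // 31 for c in (r, g, b))

for bits in (1, 2, 3, 5):                      # 3-, 6-, 9- and 15-bit RGB
    print(3 * bits, "bits:", len(regular_rgb_palette(bits)), "colors")
print(unpack_555(0x7FFF))                      # (255, 255, 255): white
```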
Non-regular RGB palettes These are also full RGB palette repertoires, but either they do not have the same number of levels for every red, green and blue component, or their levels are not based on whole bits. Nevertheless, all of them were used in very popular personal computers. For further details on color palettes for these systems, see the article List of 8-bit computer hardware palettes. 4-bit RGBI The 4-bit RGBI palette is similar to the 3-bit RGB palette but adds one bit for intensity. This allows each of the colors of the 3-bit palette to have a dark and a bright variant, potentially giving a total of 2³×2 = 16 colors. However, some implementations had only 15 effective colors, due to the "dark" and "bright" variations of black being displayed identically. This 4-bit RGBI scheme is used in several platforms with variations, so it is described here as a simple reference for the palette richness, and not as an actual implemented palette. For this reason, no numbers are assigned to each color, and the color order is arbitrary. Note that "dark white" is a lighter gray than "bright black" in this example. IBM PC graphics A common use of 4-bit RGBI was on IBM PCs and compatible computers that used a 9-pin DE-9 connector for color output. These computers used a modified "dark yellow" color that appeared to be brown. On displays designed for the IBM PC, setting a color "bright" added ⅓ of the maximum to all three channels' brightness, so the "bright" colors were whiter shades of their 3-bit counterparts. Each of the other bits increased a channel by ⅔, except that dark yellow had only ⅓ green and was therefore brown instead of ochre (this rule is worked through in the code sketch below). PC graphics standards using this RGBI mode include:
IBM's original Color Graphics Adapter
IBM's Enhanced Graphics Adapter, in CGA modes
"Tandy graphics" on IBM's PCjr and Tandy 1000-series computers
Plantronics Colorplus on a limited number of PC-compatible computers
The CGA palette is also used by default by IBM's later EGA, MCGA, and VGA graphics standards for backward compatibility, but these standards allow the palette to be changed, since they either provide extra video signal lines or use analog RGB output. The MOS Technology 8563 and 8568 Video Display Controller chips used on the Commodore 128 series for its 80-column mode (and the unreleased Commodore 900 workstation) also used the same palette as the IBM PC, since these chips were designed to work with existing CGA PC monitors. Other uses Other systems using a variation of the 4-bit RGBI mode include:
The ZX Spectrum series of computers, which lack distinct "dark" and "light" black colors, resulting in an effective 15-color palette
The Sharp MZ-800 series computers
The Thomson MO5 and TO7, where the intensity bit created a variation of both brightness and saturation
The Mattel Aquarius and AlphaTantel, where the intensity bit created variations of brightness and saturation
3-level RGB The 3-level, or 1-trit (not 3-bit), RGB uses three levels for every red, green and blue color component, resulting in a 3³ = 27-color palette. This palette is used by:
The Amstrad CPC 464 series of personal computers, excluding the Plus models (up to 16 colors simultaneously)
The Toshiba Pasopia 7 (uses hardware dithering to simulate intermediate color intensities, based on a mix of the full-intensity RGB primaries)
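The IBM PC intensity rule described above (each color bit contributes ⅔ of full brightness, the intensity bit adds ⅓, and dark yellow's green contribution is halved to give brown) can be written out directly. The sketch below is an illustration of that rule in Python, not actual vendor code:

```python
def cga_color(r, g, b, i):
    """Displayed color for one 4-bit RGBI combination on a CGA-style monitor."""
    third = 255 // 3                     # one third of full brightness (85)
    red   = r * 2 * third + i * third    # color bit: 2/3, intensity bit: +1/3
    green = g * 2 * third + i * third
    blue  = b * 2 * third + i * third
    # Special case: dark yellow (r=1, g=1, b=0, i=0) has only 1/3 green,
    # turning ochre into the familiar CGA brown.
    if (r, g, b, i) == (1, 1, 0, 0):
        green = third
    return red, green, blue

print(cga_color(1, 1, 0, 0))   # (170, 85, 0)   -> brown
print(cga_color(1, 1, 0, 1))   # (255, 255, 85) -> bright yellow
```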
8-bit RGB (also known as 3-3-2 bit RGB and 8-8-4 bit RGB) The 3-3-2 bit RGB palette uses 3 bits for each of the red and green color components, and 2 bits for the blue component, because the eye is less sensitive to blue. This results in an 8×8×4 = 256-color palette. This palette is used by:
The MSX2 series of personal computers
Palette 4 of the IBM PGC (palette 2 gives 2-3-3 bit RGB and palette 3 gives 3-2-3 bit RGB)
Enterprise Computer
VGA built-in output of the Digilent Inc. NEXYS 2, NEXYS 3 and BASYS2 FPGA boards
The Uzebox gaming console
SGI Indy 8-bit XL graphics
The Tiki 100 personal computer (only 16 colors can be displayed simultaneously)
Wear OS smartwatches with ambient displays (only 16 colors can be displayed simultaneously)
16-bit RGB Most modern systems support 16-bit color. It is sometimes referred to as Highcolor (along with 15-bit RGB), medium color or thousands of colors. It utilizes a color palette of 32×64×32 = 65,536 colors. Usually, 5 bits are allocated for the red and blue color components (32 levels each) and 6 bits for the green component (64 levels), due to the greater sensitivity of the common human eye to this color. This doubles the 15-bit RGB palette; the 16-bit RGB palette uses 6 bits for the green component. The Atari Falcon and the Extended Graphics Array (XGA) for IBM PS/2 use the 16-bit RGB palette. Note that not all systems using 16-bit color depth employ the 16-bit, 32-64-32 level RGB palette. Platforms like the Sharp X68000 home computer or the Neo Geo video game console employ the 15-bit RGB palette (5 bits are used for red, green, and blue), with the last bit specifying a less significant intensity or luminance. The 16-bit mode of the Truevision TARGA/AT-Vista/NU-Vista graphic cards and their associated TGA file format also uses 15-bit RGB, but devotes its remaining bit to a simple alpha channel for video overlay. The Atari Falcon can also be switched into a matching mode by setting an "overlay" bit in the graphics processor mode register when in 16-bit mode, meaning it can actually display in either 15- or 16-bit color depth depending on the application. Color palette comparison side-by-side Basic color palettes 4-bit grayscale 3-bit RGB 4-bit RGBI 3-level RGB Notes Color values in bold exist in the 2-bit (four color) grayscale palette. Color values in very bold exist in the 1-bit monochrome palette. In 4-bit RGBI, dark colors have ⅔ the intensity of the bright colors, not ½. Advanced color palettes 8-bit RGB (VGA) See also Bitmap Color Lookup Table Palette (computing) Grayscale Indexed color List of home computers by video hardware List of 8-bit computer hardware graphics List of 16-bit computer color palettes List of video game console palettes List of software palettes Computer display References Computer graphics Color depths Computing output devices
561920
https://en.wikipedia.org/wiki/Worshipful%20Company%20of%20Information%20Technologists
Worshipful Company of Information Technologists
The Worshipful Company of Information Technologists, also known as the Information Technologists' Company, is one of the livery companies of the City of London. The company was granted livery status by the Court of Aldermen on 7 January 1992, becoming the 100th livery company. It received its Royal Charter on 17 June 2010 from Prince Edward. Overview The company has over 800 members – all currently or formerly senior practitioners in the information technology industry. The Information Technologists' Company is unusual for a 'modern' (post 1926) livery company in that it has its own hall. The hall is located on Bartholomew Close, near to Barbican tube station, and was bought largely thanks to the generosity of Dame Stephanie Shirley and others. Prominent members of the company include Tim Berners-Lee, Vint Cerf, Sherry Coutu, Bill Gates, Tom Ilube, Mike Lynch, Ken Olisa, David Wootton, Dame Stephanie Shirley CH and several past Presidents of BCS, The Chartered Institute for IT, including Dame Stephanie. The company ranks 100th in the order of precedence for the City livery companies. Its motto is Cito, meaning 'swiftly' in Latin, a word which also incorporates the initials of the Company of Information Technologists. The company is a member of the Financial Services Group of Livery Companies, the other twelve members of which are the Chartered Accountants, Chartered Surveyors, Actuaries, Arbitrators, International Bankers, Chartered Secretaries and Administrators, Insurers, Solicitors, Management Consultants, Marketors, Tax Advisers, and World Traders. Activities The company has a significant charitable and educational programme which uses the expertise, resources and networks of its members, and it is also involved in a range of activities to promote the information technology profession. The four pillars of the company are charity, education, fellowship and industry. The company has a number of panels through which activities are organised. It is probably unique amongst Livery Companies in having an Ethical and Spiritual Development Panel, which considers such topics as the ethical and spiritual implications of the Internet – running colloquia on that topic in the House of Lords as far back as 1997. Working with charities Getting the maximum benefit from IT is now a pre-requisite, not just for commercial organisations but also for the charity sector. The company works with a wide range of non-profit organisations with the aim of helping them to gain the maximum benefit from their IT. Members give their time and expertise to provide pro-bono IT advice (usually at a strategic level). In addition, iT4Communities is the national IT volunteering programme, introducing volunteer IT professionals to charities needing IT help and support. iT4C was set up by the Worshipful Company in 2002 and since then has registered over 5,000 volunteers and more than 2,500 charities. iT4C has delivered over £3 million worth of support to the charity sector thanks to the work of the dedicated volunteer IT professionals. Education For hundreds of years, livery companies have supported schools in London and across the United Kingdom. Currently, the Worshipful Company of Information Technologists has a partnership with Lilian Baylis Technology School in Lambeth. Previous projects include HOLNET (the History of London on the Internet), which is now incorporated into the London Grid for Learning. In 2011, together with the Worshipful Company of Mercers (the premier livery company), they opened Hammersmith Academy. 
IT profession With members coming from all sectors of the IT field, the company can provide a neutral meeting ground for discussion of issues that are central to both the profession and the City of London. It also runs a Journeyman Scheme which supports young IT professionals in the early stages of their career. Support to the armed forces The company is affiliated with the Royal Corps of Signals, the Joint Forces Cyber Group and HMS Collingwood. It is also affiliated with 46F (Kensington) Squadron, Air Training Corps, and Beckenham and Penge Sea Cadets. List of recent Masters Company chaplain Father Marcus Walker References External links Worshipful Company of Information Technologists website Facebook page of the Worshipful Company of Information Technologists Twitter account of the Worshipful Company of Information Technologists 1992 establishments in England Computer science education in the United Kingdom Information technology organisations based in the United Kingdom Information Technologists Organizations established in 1992
27169354
https://en.wikipedia.org/wiki/IClone
IClone
iClone is a real-time 3D animation and rendering software program. Real-time playback is enabled by using a 3D videogame engine for instant on-screen rendering. Other functionality includes: full facial and skeletal animation of human and animal figures; lip-syncing; import of standard 3D file types including FBX; a timeline for editing and merging motions; a Python API and a scripting language (Lua) for character interaction; application of standard motion-capture files; the ability to control an animated scene in the same manner as playing a videogame; and the import of models from Google 3D Warehouse, among many other features. iClone is also notable for offering users royalty-free usage of all content that they create with the software, even when using Reallusion's own assets library. iClone is developed and marketed by Reallusion. History Reallusion launched iClone v1.0 in December 2005 as a tool to create 3D animation and render animated videos. It supported real-time 3D animation and creation of avatars from photographs. Reallusion's facial mapping and lip-synch animation technology derived from the 2001 release of CrazyTalk 2D animation software. The face mapping tools and real-time 3D animation environment made iClone popular with the Machinima community; Machinima is a video game-based filmmaking technique that transformed gamers into filmmakers by capturing live video action from within video games and virtual worlds, like Quake and Second Life. The ability of gamers to sell or broadcast their films was challenged by game makers. iClone v1.0 was adopted by many Machinima filmmakers and was showcased at the 2005 Machinima Film Festival held at the Museum of the Moving Image in Queens, New York. Reallusion's Vice President John C Martin II presented the Machinima Festival attendees with a demo of iClone and news that Reallusion would provide a full commercial license for all movies produced with v1.0 and beyond, as a counter-strike to game development companies' policy. iClone v2.0 was released in March 2007 with an emphasis on new G2 character styles and the introduction of Clone Cloth, for creating custom clothing for actors through editing materials and applying them to pre-designed 3D avatar models; it became one of the first ways iClone users could create and sell their own content for iClone. V2.0 also brought particle effects, fog and HD video output. iClone v3.0 was launched in August 2008, adding a revised UI featuring a scene manager for organizing projects and a viewport enabling live, direct object picking and interaction. G3 characters enhanced Clone Cloth options and made character faces more refined with facial normal maps. The Editor Mode and Director Mode were introduced to provide a scene editing mode and a live real-time director control mode where users could pilot characters and vehicles with videogame-like keyboard controls (W, A, S, D). Animation created in Director Mode built a series of live motion data on the iClone timeline and could be tweaked in Editor Mode. Multiple cameras were added in iClone v3.0, with a camera switcher for filming scenes from multiple angles in real time. Character animation was made possible with motion editing for inverse and forward kinematics. Material editing became possible from within iClone, so any prop or actor could be enhanced by exporting and editing material textures and reapplying them in iClone.
The stage was enhanced with Terrain, Sky, Water and the first appearance of the SpeedTree natural tree and foliage designer. Multiple shader modes in Preview, Wireframe and Pixel shading became options for users to balance screen output against machine performance. The Certified Content creators program opened, allowing iClone users to upload and market their custom content to a Reallusion-hosted portal for content sales. iClone v4.0, released in October 2009, added drag-and-drop manipulation and a gizmo for transforming objects within the 3D viewport. Importing any image or video as a 3D object for real-time playback enabled direct compositing of real-time 3D and video in iClone. Videos could be imported with alpha transparency using iClone's PopVideo companion. Visual effects in iClone were further enhanced with the introduction of real-time HDR (high dynamic range) and IBL (image-based lighting). Characters were enhanced with G4 options for enhanced character body styles, an improved body mesh and the ability to import Poser and Second Life generated texture maps. The Jimmy-toon G4 character was introduced as a customizable cartoon body-style avatar for iClone. iClone 4.2, released in May 2010, added stereo 3D support for rendering images and video in anaglyph, side-by-side and top-down formats. iClone 5.0 and 5.5, released over 2011 and 2012, added functions for motion capture, HumanIK and a pipeline for importing and exporting FBX characters and props for use in game engines and other 3D production tools. Reallusion put emphasis on cross-compatibility with Unity, UDK, Autodesk 3D Studio Max, Maya, Z-brush, Allegorithmic, DAZ and Poser. The iClone Animation Pipeline became a trio of products: iClone PRO, 3DXchange for import, export and rigging, and the Mocap Device Plugin, enabling real-time motion capture with the Microsoft Kinect for Windows and OpenNI-supported sensor devices. The addition of HumanIK from Autodesk gave natural human motion to iClone and provided animation editing enhancements to generate better motions, foot and hand locking, and reach targets for prop interaction. Animation generated with 5.0 and 5.5 could be exported for use in external programs. Video game developers benefited from the iClone Animation Pipeline as a way to prepare custom actors for games with face and body animation ready to import into game projects. The iClone Animation Pipeline opened a portal for artists to access the Reallusion marketplace to acquire models, characters and motions for use in games and 3D development. iClone 6.0, released in December 2014, offered a large number of visual and performance improvements, such as improved soft-cloth physics simulation, object-oriented constraints, a new lighting system with the possibility of infinite lights instead of the previous eight-light system, light props, support for Allegorithmic's procedurally generated materials, and ultra-realistic rendering with iClone's Indigo plug-in, allowing users to raytrace their projects in Indigo RT for photorealistic results. This new iClone iteration was designed to allow for easy future plug-in compatibility with other programs and applications. iClone 6.0 was updated for DirectX 11, bringing tessellation effects to add real-time geometric detail to models, and real-time surface smoothing to improve the appearance of objects and characters with more detail and higher quality. Later, iClone 6.02 was offered in a DirectX 9 version for legacy users who could not immediately upgrade to DirectX 11.
iClone 7 was released in 2017, bringing with it a completely new visual direction. Compared to previous versions, iClone 7 included the latest visual technologies, such as PBR (physically based rendering), GI (global illumination), IBL (image-based lighting), and a Real Camera System, to help users produce photorealistic animations. Reallusion also upgraded iClone's data-exchange module, 3DXchange 7, making it smoother and more powerful as well. Compared to previous versions, 3DXchange 7 can import and export more types of items, such as cameras, PBR content, and character morphs. Applications Besides being used as a 3D moviemaking tool, iClone is also a platform for video game development and previsualization, allowing users to import and export content such as characters, props and animation data with external 3D tools like Unreal Engine, Unity, Autodesk Maya, 3ds Max, Blender, ZBrush, Poser and many others, through popular industry file formats like FBX, OBJ and BVH. Other applications include using iClone as a 3D simulator for education, industry and business, since iClone's real-time capabilities allow for direct "WASD" controls through keyboards or other input devices. Motion capture, known as mocap, is another iClone feature; it allows users to connect motion-capture hardware from multiple popular industry sources and combine the captured data onto one or more subjects in real time. This is done through iClone's universal mocap platform, Motion LIVE. With Motion LIVE, users are able to acquire profiles (plug-ins) for their hardware (gear) to seamlessly stream, capture and later edit animation data inside iClone's built-in motion editing suite. These motions can then be exported with iClone's 3DXchange Pipeline software, using the FBX and BVH formats. iClone also includes an Unreal Engine 4 LIVE LINK plug-in, which live-streams full-body, hand and facial animation directly from iClone to UE4 for real-time animation production. A similar plug-in is currently being developed for use with iClone and Unity's Mecanim system. Since its 6.5 update, iClone allows for quick 360-degree video output, allowing users to turn animation projects into 360-degree panorama videos. Another update added Alembic export capability; Alembic is an interchange point-cache format used by visual effects and animation professionals, and it allows iClone animation detail to be exported to game engines or other 3D tools. Version 7.0 introduced a reworked architecture that would finally grant iClone customization options for users who wished to create their own Python-based plug-ins for use with a wide range of motion-capture suits, facial mocap profiles, hardware devices and more. Features Production – Preset Layouts for Directing, In-screen Editing, Drag-n-Drop Creation, Play-to-Create Controls, Animation Path & Transition. Actor – Character Base & Templates, Custom 3D Head from Photo, Facial & Body Deformation, Custom Clothing Design. Animation – MixMoves Motion Graph System, Motion Capture with Depth Cam, Face and Body Puppeteering, Face and Body Motion Key Editing, Audio Lipsyncing, Character Embedded Performances. Prop – Interactive Props with iScript, Soft and Rigid Body Physics Animation, FLEX & Spring simulation, Multi-channel Material Textures, Animated UV Props, Prop Puppet. Stage – Modular Scene Construction, Flexible environment System (Atmosphere, HDR, IBL), Ambient Occlusion, Toon Shader, Fog. Camera – Camera Gizmo & Camera Studio (PIP), Animatable Lenses, Link-to & Look-at, Lighting Systems, Depth of Field, Shadow.
Video Effects – Real-time Particle FX, Material FX, Media Compositing, Post FX. Render & Output – Real-time Render, Image Sequence for Post Editing, Popular Image & Video Format Output, 3D Stereo Output. Content Users can purchase content from the Reallusion Content Store for iClone, CrazyTalk, CrazyTalk Animator, FaceFilter and 3DXchange. The store also hosts content packs from third-party developers such as Daz 3D, 3D Total Materials, 3D Universe, Dexsoft, Quantum Theory Ent. and others. The Reallusion Marketplace provides a trading platform for independent content developers. See also Moviestorm Muvizu Xtranormal Shark 3D References External links 3D animation software 3D graphics software Windows graphics-related software Anatomical simulation Lua (programming language)-scriptable software Machinima Motion capture
500988
https://en.wikipedia.org/wiki/Mac%20OS%20X%2010.1
Mac OS X 10.1
Mac OS X 10.1 (code named Puma) is the second major release of macOS, Apple's desktop and server operating system. It superseded Mac OS X 10.0 and preceded Mac OS X 10.2. Version 10.1 was released on September 25, 2001 as a free update for Mac OS X 10.0 users. The operating system was handed out for no charge by Apple employees after Steve Jobs' keynote speech at the Seybold publishing conference in San Francisco. It was subsequently distributed to Mac users on October 25, 2001 at Apple Stores and other retail stores that carried Apple products. System requirements Supported computers: Power Mac G3 Power Mac G4 Power Mac G4 Cube iMac G3 eMac PowerBook G3, except for the original PowerBook G3 PowerBook G4 iBook RAM: 128 megabytes (MB) (unofficially 64 MB minimum) Hard Drive Space: 1.5 gigabytes (GB) Features Apple introduced many features that were missing from the previous version, as well as improving overall system performance. This system release brought some major new features to the Mac OS X platform: Performance enhancements — Mac OS X 10.1 introduced large performance increases throughout the system. Easier CD and DVD burning — better support in Finder as well as in iTunes DVD playback support — DVDs can be played in Apple DVD Player More printer support (200 printers supported out of the box) — One of the main complaints of version 10.0 users was the lack of printer drivers, and Apple attempted to remedy the situation by including more drivers, although many critics complained that there were still not enough. Faster 3D (OpenGL performs 20% faster) — The OpenGL drivers and handling were vastly improved in this version of Mac OS X, which created a large performance gap for 3D elements in the interface, and 3D applications. Improved AppleScript — The scripting interface now allows scripting access to many more system components, such as the Printer Center, and Terminal, thus improving the customizability of the interface. Apple also introduced AppleScript Studio, which allows a user to create full AppleScript applications in a simple graphical interface. Improved file handling — The Finder was enhanced to optionally hide file extensions on a per-file basis. The Cocoa API was enhanced to allow developers to set traditional Mac type and creator information directly without relying on Carbon to do it. ColorSync 4.0, the color management system and API. Image Capture, for acquiring images from digital cameras and scanners. Menu Extras, a set of items the user can add to the system menu, replacing the supplied Dock Extras from Mac OS X 10.0 Cheetah. Apple switched to using Mac OS X as the default on all then-new Macs with the 10.1.2 release. Applications found on Mac OS X 10.1 Puma Address Book AppleScript Calculator Chess Clock CPU Monitor DVD Player Image Capture iMovie Internet Connect Internet Explorer for Mac iTunes Mail Preview Process Viewer (now Activity Monitor) QuickTime Player Sherlock Stickies System Preferences StuffIt Expander TextEdit Terminal Release history References External links Mac OS X v10.1 review at Ars Technica PowerPC operating systems 2001 software Computer-related introductions in 2001
10141552
https://en.wikipedia.org/wiki/Dunbar%20High%20School%20%28Fort%20Myers%2C%20Florida%29
Dunbar High School (Fort Myers, Florida)
Dunbar High School is a school located in Fort Myers, Florida. It was established in 1926 and re-established in 2000. This secondary school is home to the Dunbar High School Academy of Technology Excellence and the Dunbar High School Center for Math and Science. It is the home of the "Fighting Tigers". The school mascot is a tiger and the school colors are orange and green. The school received an "A" grade for the 2009–2010 school year, along with two other Lee County schools. History In 1926, Dunbar High School was constructed as the third public high school in Lee County, on what is now High Street in Fort Myers. It was named for the poet Paul Laurence Dunbar. The construction of this school, along with the adjacent Williams Primary, provided K-12 educational opportunities for the black children of the area. This Dunbar High School graduated its last class of students in 1962. This original facility, renamed as the Dunbar Community School, continues to provide services to meet the educational needs of community members young and old. In 1962, students moved to a new school on Edison Avenue which was named Dunbar Senior High School. Graduates emerged from the halls of this school from 1962 through 1969. In 1969, this school was closed due to changes required by the federal desegregation order, and students were reassigned to the various traditionally white schools. The school would later reopen as Dunbar Middle School. Notably, the middle school on Edison Avenue changed its name to again honor the poet – it became Paul Laurence Dunbar Middle School. In the fall of 2000, Paul Laurence Dunbar Middle School moved to its new location, on Winkler Avenue Extension, just south of Colonial Boulevard. Also in the fall of 2000, Dunbar High School was opened on East Edison Avenue (where the middle school was located). The Center for Math and Science The Center for Math and Science's title is usually shortened to just "C.M.S." CMS is a program that helps Dunbar students excel in math- and science-related areas. A separate application is required to be admitted into the center. Registered students have the opportunity to hear a number of special guest speakers and to take an optional study period called CMS Research. The Dunbar High School Center for Math and Science has had success in both the Thomas Alva Edison Kiwanis Science and Engineering Fair and the Florida State Science Olympiad. Recently the Science Olympiad team received first place in the regional competition at FAU. Science Olympiad DHS is also home to a Science Olympiad team. 2006: In the 2006 Florida state competition, Dunbar fielded one team which placed sixth. 2007: In the 2007 competition season, Dunbar fielded two teams in both the regional and state competitions. On Dunbar's C-15 team, team members Joseph Scofield and Smit Patel placed third in Forensics, and team members Abigail Bryant and Josh Katine placed third in Rocks & Minerals. Dunbar's C-18 team placed first in Remote Sensing with team members Juan Carlos Quijada and Woody Culp. C-15 received eighth place overall with a total of 240 points. C-18 received tenth place overall with a total of 263 points. Notably, Ernest R. Greer achieved third place by himself in a team competition. 2008: In 2008, the Dunbar team competed in the regional competition at FAU. The team received first place overall in this competition. The Academy for Technology Excellence In the fall of 2005, Dunbar High School began the Academy for Technology Excellence. 
Upon completion of the program, students are prepared to excel in a technologically advanced society. The program offers 9th-12th grade students hands-on experiences taught by highly certified instructors. All ATE students have the opportunity to acquire 12 or more recognized industry-standard computer certifications in areas associated with information technology. Students completing any of the component programs will have far greater fluency with technology, with specific productivity software, and with the critical thinking skills that embody so much of the technical work that they do. The Academy offers honors-weighted credit and dual enrollment credit for many of its courses, and meets the requirements for the Florida Gold Seal Scholarship. Students exiting these programs will be ready to either advance to the next level of formal education or directly enter the workforce and become a technical specialist, systems engineer, PC support technician, office end-user specialist, web designer, software developer, database administrator, security specialist and many more. In addition, students will have an earning potential ranging from $35,000-$50,000 annually upon successful completion of these certifications. Upon graduation, students will have all the prerequisite skills and knowledge to choose a career path within the Information Technology (IT) field. Combined with Dunbar's rigorous Center for Math & Science and AP programs, students who are part of the Academy for Technology Excellence will have a competitive and empowering edge over most college-bound students. Microsoft In November 2007, Microsoft recognized Dunbar High School as the nation's first Microsoft Certified High School. During the week of December 10, Microsoft was also at Dunbar High School filming a documentary. Certifications Offered The Academy for Technology Excellence is dedicated to providing a rigorous and deliberate track as a part of the Information Technology computer science fields. In response, the school offered a complete immersion track of courses that gave students the potential to earn up to 12 industry-standard IT certifications. Academy students have a two-period block in the Academy technology labs. 
Tier One (Year One) CompTIA A+ (Essentials & IT Technician) Microsoft Certified Application Specialist in Word Microsoft Certified Application Specialist in Excel Microsoft Certified Application Specialist in PowerPoint Microsoft Certified Application Specialist in Microsoft Outlook Tier Two (Year Two) Cisco Certified Entry Networking Technician (CCENT) Cisco Certified Network Associate (CCNA) CompTIA Network+ Microsoft Certified Application Specialist in Access Tier Three (Year Three) Microsoft Certified Professional (MCP) Microsoft Certified Technology Specialist: Windows 7 (MCTS: Windows 7) Microsoft Certified Information Technology Professional: Windows 7 (MCITP: Windows 7) Microsoft Certified Solutions Associate: Windows 7 (MCSA: Windows 7) CompTIA Server+ Tier Four (Year Four) Microsoft Certified Technology Specialist for Windows Server 2012 Active Directory Configuration Microsoft Certified Technology Specialist for Windows Server 2012 Network Infrastructure Configuration Microsoft Certified Information Technology Professional: Server Administrator (MCITP: Server Administrator) Microsoft Certified Solutions Associate: Server 2012 (MCSA: Server 2012) CompTIA Security+ Awards and Achievements 208 IT Certification tests passed in 2009-2010 as of 1-5-2010 328 IT Certification tests passed in 2008-2009 313 IT Certification tests passed in 2007-2008 236 IT Certification tests passed in 2006-2007 Selected as one of the top 15 innovative technology programs in the nation by T.H.E. Journal Denise Spence, magnet grant technology lead teacher, was selected as the first-ever Lee County Career and Technical Education Teacher of the Year for 2006 In March 2008, the junior class of 2009 became the first Academy class in which every student was a Microsoft Certified Professional Athletics In 2011, the girls' basketball team won the FHSAA Class 4A state championship. References High schools in Lee County, Florida Public high schools in Florida Education in Fort Myers, Florida Buildings and structures in Fort Myers, Florida 1926 establishments in Florida Educational institutions established in 1926
56916541
https://en.wikipedia.org/wiki/Piposh
Piposh
Piposh () is an Israeli media franchise that started as a series of comedic point-and-click adventure video games developed by Guillotine and published by Hed Arzi Multimedia for Windows. Based on an eponymous actor-turned-detective who embarks on several adventures solving murders, the titles include Piposh, Piposh 2, Halom SheHitgashem (a spinoff), and Piposh 3D: HaMahapecha. An English version of Piposh entitled Piposh: Hollywood was published in 2002. The series, which began in 1999, was created at a time when the Israeli video gaming industry was at its peak, particularly in terms of adventure gaming, and served as a notable example of a work targeted specifically at the young local market, with aspects such as inside jokes relating to Israeli culture. The games became very popular within Israel, although never financially successful, and the developers struggled to live off the proceeds. Piposh evolved into a franchise, with a television series, a comic book, and an album being created. The series continues to have a dedicated fanbase, and Piposh conventions have been regularly held as a way for fans to celebrate the games together. While the titles have often been criticized for their amateurish visual style and clunky game mechanics, they are looked at fondly by critics who see Piposh as a source of Israeli pride and a key milestone in the advancement of the local industry throughout the 21st century. In November 2018, it was announced that a reboot was in development, with the first part of a four-part game to be released in September 2020. Plot and Gameplay The series consists of point-and-click adventures, requiring players to interact with an inventory and with characters in order to complete puzzles and advance the story. The main character is intended to be a caricature of the arrogant Israeli, with a worldview that "the whole world is an idiot – and that talk can not be done quietly, only with shouting". The plot of Piposh follows the quirky adventures of its main character, the flawed actor Hezi Piposh, a "morbidly tactless guy", in his attempts to reach Hollywood and make it big. In the first game, he boards the wrong boat and has numerous antics, including being caught up in a murder. He can interrogate characters, break into their rooms, and accuse them of murder. In Piposh 2, Piposh finds himself trapped on an island populated by dwarves, who help him assemble an aviation device to escape. Halom SheHitgashem is focused on eight strange acquaintances who are invited to the castle of an eccentric man. Piposh 3D revolves around a political revolution that ensues because the entire country decides to become vegetarian, replacing meat with tofu. History Conception and In the Interest of Ratings (1997–98) In the late 1980s, when Ronan Gluzman was 13, he began his computer gaming career by creating games and animations with Macromedia Director software on the Mac. After he left the Israeli army, and while his brother Roy was still a member, they set up a small graphics business in Pardes Hanna (פרדס חנה), where they did routine projects for commission. In 1997, one of their projects reached the desk of the CEO of video game distributor Hed Artzi Multimedia (הד ארצי מולטימדיה), the multimedia arm of Israeli record label Hed Arzi Music, who liked the duo's illustration style and suggested they create a Hebrew-language computer game. He gave them NIS 10,000 in return for 50% of the future profits from the sales. 
Despite having no experience in the video gaming industry, the team went to work and, after nine months of development, released In the Interest of Ratings (בתככי הרייטינג), a title which focused on the incompetent detective Elimelech Egoz (voiced by Moshe Ferster), who goes on holiday at the fictional Nofei Hadera Hotel only to be greeted by a bizarre murder case. The game contained intertextuality, inside jokes, irony, and cynicism. In the Interest of Ratings received negative reviews in the press, including in Internet Captain (קפטן האינטרנט), the technology/multimedia branch of the leading Israeli newspaper Haaretz; however, it received positive reviews in youth-oriented newspapers. Ultimately the game was relatively popular, though it was not financially successful. Nevertheless, it led the duo to develop a second video game, which would become Piposh, under their new development company Guillotine (גיליוטין). Piposh (1998–99) The Gluzman brothers wrote the text and the code, drew the backgrounds and characters, and created the animations; in addition they convinced actors Shai Avivi, Amos Shuv, Anat Magen, Ilan Peled, Dudu Zar, Meni Pe'er, and Ilan Ganani (as Piposh) to voice characters voluntarily. The game was developed by four people in a garage, at a time when the computer gaming industry in Israel was considered practically non-existent. Gluzman recalls that Guillotine was not particularly well managed, lacked business planning and didn't have a financial focus, and instead consisted of a naive team who wanted to simply make something "fun and cool". Ynet asserts the duo was irrational for choosing to set up a development company for Israeli games that only appealed to the local market. Ronan would later describe the contemporary Israeli video gaming industry as an imaginary thing to pursue, for four young developers working from their garage. During development, Ronan used the program Macromedia Director 4, and he created each game screen from scratch. At the time of the game's release, Ronan published his phone number to allow stuck players to phone in for help. After a year and a half of development, Piposh was released; it became a hit among teenagers, sold 6,000 copies in retail sales over a period of one year, and would ultimately sell over 7,000 copies. They achieved this without reducing the original selling price. Haaretz asserts that many thousands more probably illegally pirated it. The game reached 5th place on the sales charts, and for a few weeks was competitive with the best-selling games from abroad. Some of Piposh's royalties were donated to animal rights and welfare organizations. The game originally had text and voice-work in Hebrew, though a few years later it was made available in both English and Russian, and due to its success Guillotine was able to distribute the games to the United States and Russia. Due to the title's low minimum requirements, it could be played on computers with weaker processing power. The game was deemed an adventure gaming success aimed primarily at the local market, having been developed independently, almost underground. While not officially a Piposh title, In the Interest of Ratings has a link to the series; within Piposh, the main character – a flawed actor – claims to have played Egoz in the previous game. 
Rest of the series (2000–04) Roy left the series during production of the second game, believing that, in addition to piracy, the distribution networks weren't as efficient at bringing video games to stores as they were for other products like Israeli music. According to Ronan, his brother did not have the business and marketing prowess needed to succeed in the industry. After leaving, Roy moved away from the sensationalist nature of the entertainment and gaming industry, and instead focused on personal creative projects. On his own, Ronan began to suffer financially, which would continue for years while making the Piposh games. By 2003 he was living on 30 shekels a day. He would later explain that at the time, to be a video game developer in Israel, one was expected not to make money. While the first game took 1.5 years to make, the team spent a year making Piposh 2, and a year making Halom SheHitgashem. Each time the developers made a game, they did it independently and without external funding. Once a title neared completion, Guillotine spoke to their distribution company to print discs and send them to stores. However, as the distribution company was very large, it didn't have time to invest in a small local game like Piposh, so the games ended up being sent out two weeks later than intended, once the advertising and buzz had died down. For each title, Ronan produced advertising materials such as posters, business cards and funny stickers. Piposh 3D was the series' first foray into 3D graphics, after using traditional animation for the previous games; this title was directed by Roy Lazarowitz. By this point the developers had grown tired of their distributor and sought to use Indogram, which represented fewer games and could therefore invest more time into each. It was developed using the A5 engine, which Guillotine had purchased specifically for this title. 2004 saw the English release of Piposh entitled Piposh Hollywood, which was to be published by The Phantom after Phantom Entertainment bought the electronic distribution rights. Ynet noted that if successful, it would have been a notable achievement for the industry, as the first Israeli video game released on a console, though the outcome of the venture was unclear. Guillotine had aimed to translate their first Piposh game to break into the international market; however, the text and dubbing translations became an astronomical task due to the original game having a badly constructed programming interface. The foreign adaptation of the first game is known worldwide by its English title. The games were not intended to meet the technological standards of the global industry, particularly in terms of graphics. They were not targeted at a worldwide audience, instead appealing to Israeli teenagers who were not necessarily gamers, but who wanted immature content with cultural references they would understand. Haaretz contends that despite the development team tapping into the minds of local youth, the rampant piracy phenomenon prevented them from making a living off their games. Piposh saw numerous opportunities for franchise expansion over its history. The team gave up a NIS 20,000 offer by Burger Ranch to distribute the games because they were vegetarians. 
Soon after the release of Piposh, the duo started creating a weekly comic section in the local youth newspapers Rosh 1 and Maariv LaNoar, and also created a comic book published by Hed Artzi under the title "Piposh and Other Vegetables" (פיפוש ושאר ירקות), which could be purchased from the official Guillotine site. They also lectured to youth at an animation and comics festival at the Tel Aviv Cinematheque. In 2000, the developers considered porting Piposh to mobile phones, having already prepared documents such as an installation guide; however, this never came to fruition. A spin-off television series named Batheshet Moav (בטשת מואב) – ten episodes in length and about five minutes per episode – first aired on the Beep channel in either 2001 or February 5, 2002. A previous concept for the show – where viewers could interact by choosing which of two outcomes they wanted to watch – was offered to Fox Kids. While Fox wanted to sign a contract, Guillotine refused, as they did not want to give up the rights and the artistic freedom; for instance, they would have been barred from using the swear word קיבינימט (kibenimat) and from mentioning that Piposh's father drowned in the Yarkon River. An album featuring music, interviews, and other content entitled At the Piposh Tavern was released by Guillotine and the Tag Group in 2004. A fourth game (excluding the spinoff) was planned but eventually cancelled. Running until at least 2004, 'Piposh Congresses' were held as national meetings for all the fans of Piposh and Guillotine. The developers sold an "I'm Pipposi Proud" game package, which included five game discs: the first three titles, plus a demo of Piposh 3D and a disc with rare files and documents. Aftermath (2005–17) The last title developed by Guillotine was the network-strategy nonsense game Vajimon, which saw players fight against each other over the internet using a variety of vegetables. Around 2003–04, the company's operations were terminated and its managers retired. Ronan left the industry soon after due to the financial and mental burden the series had given him, and became a yacht skipper. He earned money writing games for interactive TV and cell phones at a leading high-tech company. In 2005, the brothers lectured for a Gameology course at Beit Berl College as part of the first Israeli curriculum course for video game development. The program was started by Dr. Diana Silverman Keller, who discovered while writing her thesis that there was no formal education in the field of game creation in the country. She noted that "Despite the success of Israeli high-tech, the Israelis do not excel at developing games". Keller contacted Ronan, who at the time was known as the "spiritual father of the Israeli gaming community", and he soon became a central figure in the program. On May 7, 2008, on Israel's 60th Independence Day, all Guillotine games were re-released free to download on the Internet. By this time, it was impossible to get the games anymore, as there were no orderly backups. Ronan Gluzman noted that the games were unplayable unless the player had a Windows 95 computer, and commented "I felt that I had to do this, in order to save the game from falling into the abyss". Tal Dinovich, a fan of the series who had volunteered for years on the site and answered phone calls from stuck players, helped locate all the game files and create updated versions that would work on Windows XP. 
Ronan noted it was strange to revisit these games after having not touched them since the series ended in 2003. While he had since moved on from the series, Ronan did create a Facebook group for the fans who had grown up with the series and wanted to reminisce. According to Calcalist, the series released seven games, four of which are available for free download. As part of the campaign, Ronan aimed to print special nostalgic shirts, as this was a common fan request. In December 2008, a group of Piposh fans, believing that the series had died prematurely, started a campaign to create a new chapter of the series and started looking for volunteers. The game, to be developed through their company Sellotape (סלוטייפ), was entitled Piposh 2.5 and the Stolen Vase. The official website of the project wrote that while the developers were unable to reach the quality level of the official games, they hoped to emulate it as much as possible. Their aim was to "remember, save, and enjoy" the Piposh series. The plot, taking place immediately after Piposh 2, saw the protagonist try to find the magic 'vase of life' that can be used to resurrect characters that died in the prequel. The producer/director was Ben Werchizer, the artist/designer was Daniel Avdo, the music and dubbing producer was Itay Jeroffi, and the writer was Shahar Kraus; they were looking for professional programmers in Director or Flash, as well as voice talent. Their site contained a "build your own character" minigame from the series as a Flash game, and a forum dedicated to the resurrection of Piposh. The project was discontinued around 2010. Video game archivist and founder of the Movement for Preservation of Games in Israel Raphael Ben-Ari got in touch with the series' creators and detailed the story behind its development process in a documentary film. The documentary was uploaded to YouTube on Ben-Ari's Oldschool account. Ben-Ari believes the work boosted the Piposh community in Israel and introduced the series to many new people. In 2015, Ronan (Renard) was given the "Half Life" award at the 2015 GameIS Awards, due to his work on the series having had a "significant impact on the short history of the Israeli gaming industry", thereby helping it to flourish. During this time, Piposh was surrounded by a loyal fanbase who held onto the franchise even while no games were in active development. Writing a game quote in one of the Piposh Facebook groups would be met with relevant written responses and memes, and links to legally download the games. Over the years, the large community of fans who love and reminisce over the series "ran Facebook groups, organized events and constantly asked for a new game". Comeback (2018–Present) In 2018, the Piposh creators announced they would produce a reboot of the game, due for release in September 2019. State of Israeli industry Despite having a weak presence overall, the Israeli video gaming industry was at its peak in the 1990s, with several companies creating games, developing engines, and trying to become successful both at home and abroad. One example is Machshevet (מחשבת), which began as an importer and translator of games to the Israeli market. In the mid-1990s, the company created children's video games based on popular legends and fairytales called 'CD F'doph' (סי.די פדוף). Created during the golden age of point-and-click adventure games, two of Machshevet's popular titles are Dimension Commander: The Adventures of Wonderland (שליט המימדים: מסע ההרפתקאות המופלא) (1996) and Armed & Delirious (1997). 
Compedia (קומפדיה), while primarily a technologies, platforms and systems developer, also created educational computer games which were distributed to over 40 countries, including the Advanced Thinking Skills series (1992–93), the Timmy series (1994–95), the Gordy series (סדרת גורדי), the Itamar series (סדרת איתמר), Gordy in the Movie Adventure (גורדי בהרפתקה מהסרטים), In Search of the Lost Words (בעקבות המילים האבודות 1996/1998), and Julia: Back to the Sweet 60's (2001). Other notable Israeli video games of the time include: Xonix (1984), Intifada (אינתיפאדה 1989), The Sea Dealer (סוחר הים 1989), Bonus (בונוס), Avish (אביש 1993), Duvalle (דובל'ה 1993–1997), Jane's IAF: Israeli Air Force (1998), and You Think You're Smart (אתה חושב שאתה חכם 1998–2001). Throughout the 1990s, Israeli developers also created various "silly little games...to promote children's food products" such as Vered Hagalil and Bamba. Factor believed that while the 80s and 90s saw the serious development of games within Israel, with the country being at the peak of its video gaming industry, it was still defined by a level of inferiority on the world stage, which had only started to improve in the early 2010s with social and casual gaming apps. Makorrishon wrote that Piposh was created at a time when it was generally accepted that to make good games one had to work abroad in a large company. Video game developer Adiel Gur thinks that Guillotine did not have many contemporaries because of a lack of interest from high-tech companies in making games, a belief among producers that the sector is unprofitable, a lack of resources and tools for developers to make professional games, and the difficulty of amassing a team of workers from different fields (graphic artists, programmers, scriptwriters etc.). Ynet wrote that considering the number of Israelis studying computer science and the reputation of the country as a hub for high-tech entrepreneurs, it was surprising "how much an Israeli gaming industry does not exist", describing it as a "local vacuum". Roy explained that in his opinion: "People who have the ability to develop computer games prefer to develop navigation applications and worm software that will sabotage the reactor in Iran". In one case, the developers of Ballerium (2001) had to first present a comprehensive business plan in order to lure potential investors. In 2001, Ynet wrote that it was difficult to name Israeli games that were not children's edutainment, and though able to recall some popular titles, called them, with the exception of Jane's IAF: Israeli Air Force, "negligible in terms of scope and technicality" and non-competitive in the global market. In 2008, the site asserted that until recently, apart from children's edutainment developer Compedia and the Piposh series, there was no video gaming industry in Israel, though it expressed hope that the action game Rising Eagle would change that. Meanwhile, in 2008 Eser wrote: "It's not that I think there is no future for the Israeli computer games industry, and that there is no need to nurture it". Critical reception The games became a cult hit in Israel, known for their unique humor, original characters and a satirical look at Israeli society. Ronan is notable for being one of a few Israelis who have managed to develop a computer game within the country and distribute it in Israel. According to Timeout, the games are hilarious and brave. Eser felt that Piposh was a "semi-decent adventure game". 
Bikorate felt the title had a "fascinating plot and humor ahead of its time". The student newsletter Factor thought the game was "more funny, more ingenious, and generally more universal" than In the Interest of Ratings had been a year earlier, and felt the two games were "exemplary examples of Israeli creativity". Sport5 listed the game in their article There is honor: the greatest Israeli computer games ever, rating it 10/10 and praising its uniquely crazy and sharp sense of creativity and humour, as well as for sparking one of the "most cohesive computer game communities ever in the country". Haaretz deemed it ultimately a failed initiative to establish the Israeli video gaming industry, despite its significance. Vgames deemed the series "the most daring attempt to create computer games for Israeli audiences only". In 2001, Ynet asserted that the few games of Israeli origin such as Piposh were "quite negligible in terms of scope and technicality", and were "not close to being competitive in the global market"; however, in 2003 the site saw the series as a source of Israeli pride. The site would later come to feel that the series offered some of the most "bizarre, funny and hallucinatory quests" to come out of gaming, noting that the first title is widely regarded as a cult game, and that many peculiar moments from the series are etched into the minds of players. Bikorate thought Piposh 2 surprised with its "witty and satirical jokes about Israeli society". The Hebrew site Eser, despite having sympathy for Israeli game developers, who are "forced to work in almost impossible conditions" when compared to their American and European colleagues, gave Piposh 3D a scathing review; while calling its premise amusing, and noting that the patient Piposh fanbase would overlook its inferior graphics, the site felt the game was very bad, likening it to a "singer singing without makeup". Ynet criticised 3D's graphics and interface, though noted its humour and charm might make up for the experience. Ben-Ari feels a sense of patriotism toward the game as a piece of video gaming software that is proudly and unabashedly Israeli. Factor thought the series' attempt to enter the 3D market was an "utter failure". Even though Walla thought Piposh 3D was a bad game, it thought the game looked like a masterpiece in comparison with Dangerous Vaults, a Lara Croft parody that forces players to have sex with animals. When news of a fan-made Piposh 2.5 was brought to Nana 10, the site made an emotional appeal to its readers, hoping to attract anyone who could "help this promising project take shape", while hoping it would be the first of many. Meanwhile, 2all.co.il wished the developers success. Noting a recent campaign to name a road in Israel "Lara Croft street", Nana 10 hoped that soon there would be a "Piposh street" to honour the series. The paper believed Piposh to be possibly the most successful and profitable Israeli computer gaming project, deeming Guillotine a champion of the local industry. Makorrishon deemed Guillotine a "stubborn pioneer" for persisting with making a game in Israel, though noted that despite their bravery, the company did close down a mere six years later. Looking back at the games in 2008, Ronan was impressed by how the humour held up, and noted ultimately that Piposh is "not a game of technology, but of people, behavior and relationships". Comedy Children listed the original title as the only Israeli entry in an article entitled "The Most Funny Games". 
According to Gadgety, the games have "earned cult status among Israeli players". Calcalist deemed it the "most important series of games - well, the only one - that was done in Israel in this millennium". Additionally, it argued that the series' humor combined the nonsense of LucasArts and Sierra with the experimental weirdness of Channel 2. Ronan has said that "What Piposh is good at is being crazy, being delusional and telling a good story with an emphasis on humor and Israeliness". References External links Renan (Renard) and Roi Gluzman are hosted at the Magnet (Channel 6) (Hebrew video) A rare interview with Renard and Roi Gluzman (Hebrew video) Interview with Moshe Ferster in Zumbit (Hebrew video) Renard and Roy in Zumbit (Hebrew video) The funniest game in Hebrew (and how it was developed) – Oldschool (Hebrew video) A search in a balloon (Hebrew video) Miyazaki interview (Hebrew podcast) Google Drive of game documents Piposh 3D review (Hebrew) 1999 notes by Guillotine 1999 video games Adventure games Video games developed in Israel Windows games Windows-only games
26715
https://en.wikipedia.org/wiki/Slashdot
Slashdot
Slashdot (sometimes abbreviated as /.) is a social news website that originally billed itself as "News for Nerds. Stuff that Matters". It features news stories on science, technology, and politics that are submitted and evaluated by site users and editors. Each story has a comments section attached to it where users can add online comments. The website was founded in 1997 by Hope College students Rob Malda, also known as "CmdrTaco", and classmate Jeff Bates, also known as "Hemos". In 2012, then-owner Geeknet sold it to DHI Group, Inc. (i.e., Dice Holdings International, which created the Dice.com website for tech job seekers). In January 2016, BIZX acquired both slashdot.org and SourceForge. In December 2019, BIZX rebranded to Slashdot Media. Summaries of stories and links to news articles are submitted by Slashdot's own users, and each story becomes the topic of a threaded discussion among users. Discussion is moderated by a user-based moderation system. Randomly selected moderators are assigned points (typically 5) which they can use to rate a comment. Moderation applies either −1 or +1 to the current rating, based on whether the comment is perceived as "normal", "offtopic", "insightful", "redundant", "interesting", or "troll" (among others). The site's comment and moderation system is administered by its own open source content management system, Slash, which is available under the GNU General Public License. In 2012, Slashdot had around 3.7 million unique visitors per month and received over 5,300 comments per day. The site has won more than 20 awards, including People's Voice Awards in 2000 for "Best Community Site" and "Best News Site". At its peak use, a news story posted to the site with a link could overwhelm some smaller or independent sites. This phenomenon was known as the "Slashdot effect". History 1990s Slashdot was preceded by Rob Malda's personal website "Chips & Dips", which launched in October 1997 and featured a single "rant" each day about something that interested its author – typically something to do with Linux or open source software. At the time, Malda was a student at Hope College in Holland, Michigan, majoring in computer science. The site became "Slashdot" in September 1997 under the slogan "News for Nerds. Stuff that Matters," and quickly became a hotspot on the Internet for news and information of interest to computer geeks. The name "Slashdot" came from a somewhat "obnoxious parody of a URL" – when Malda registered the domain, he wanted a name that was "silly and unpronounceable" – try pronouncing "h-t-t-p-colon-slash-slash-slashdot-dot-org". By June 1998, the site was seeing as many as 100,000 page views per day and advertisers began to take notice. By December 1998, Slashdot had net revenues of $18,000, yet its Internet profile was higher and revenues were expected to increase. On June 29, 1999, the site was sold to Linux megasite Andover.net for $1.5 million in cash and $7 million in Andover stock at the initial public offering (IPO) price. Part of the deal was contingent upon the continued employment of Malda and Bates and on the achievement of certain "milestones". With the acquisition of Slashdot, Andover.net could now advertise itself as "the leading Linux/Open Source destination on the Internet". Andover.net merged with VA Linux on February 3, 2000, changed its name to SourceForge, Inc. on May 24, 2007, and then became Geeknet, Inc. on November 4, 2009. 
2000s Slashdot's 10,000th article was posted after two and a half years on February 24, 2000, and the 100,000th article was posted on December 11, 2009, after 12 years online. During the first 12 years, the most active story with the most responses posted was the post-2004 US Presidential Election article "Kerry Concedes Election To Bush", with 5,687 posts. This followed the creation of a new article section, politics.slashdot.org, at the start of the 2004 election on September 7, 2004. Many of the most popular stories are political, with "Strike on Iraq" (March 19, 2003) the second-most-active article and "Barack Obama Wins US Presidency" (November 5, 2008) the third-most-active. The rest of the 10 most active articles are an article announcing the 2005 London bombings, and several articles about evolution vs. intelligent design, Saddam Hussein's capture, and Fahrenheit 9/11. Articles about Microsoft and its Windows operating system are popular. A thread posted in 2002 titled "What's Keeping You On Windows?" was the 10th-most-active story, and an article about Windows 2000/NT4 source-code leaks was the most visited article, with more than 680,000 hits. Some controversy erupted on March 9, 2001, after an anonymous user posted the full text of Scientology's "Operating Thetan Level Three" (OT III) document in a comment attached to a Slashdot article. The Church of Scientology demanded that Slashdot remove the document under the Digital Millennium Copyright Act. A week later, in a long article, Slashdot editors explained their decision to remove the page while providing links and information on how to get the document from other sources. Slashdot Japan was launched on May 28, 2001 (although the first article was published April 5, 2001) and is an official offshoot of the US-based Web site. The site was owned by OSDN-Japan, Inc., and carried some of the US-based Slashdot articles as well as localized stories. An external site, New Media Services, reported on the importance of online moderation on December 1, 2011. On Valentine's Day 2002, founder Rob Malda proposed to longtime girlfriend Kathleen Fent using the front page of Slashdot. They were married on December 8, 2002, in Las Vegas, Nevada. Slashdot implemented a paid subscription service on March 1, 2002. Slashdot's subscription model works by allowing users to pay a small fee to be able to view pages without banner ads, starting at a rate of $5 per 1,000 page views – non-subscribers may still view articles and respond to comments, with banner ads in place. On March 6, 2003, subscribers were given the ability to see articles 10 to 20 minutes before they are released to the public. Slashdot altered its threaded discussion forum display software to explicitly show domains for links in articles, as "users made a sport out of tricking unsuspecting readers into visiting [Goatse.cx]." In observance of April Fools' Day in 2006, Slashdot temporarily changed its signature teal color theme to a warm palette of bubblegum pink and changed its masthead from the usual "News for Nerds" motto to "OMG!!! Ponies!!!" Editors joked that this was done to increase female readership. In another supposed April Fools' Day joke, User Achievement tags were introduced on April 1, 2009. This system allowed users to be tagged with various achievements, such as "The Tagger" for tagging a story or "Member of the {1,2,3,4,5} Digit UID Club" for having a Slashdot UID consisting of a certain number of digits. 
While it was posted on April Fools' Day to allow for certain joke achievements, the system is real. Slashdot unveiled its newly redesigned site on June 4, 2006, following a CSS redesign competition. The winner of the competition was Alex Bendiken, who built on the initial CSS framework of the site. The new site looks similar to the old one but is more polished, with more rounded curves, collapsible menus, and updated fonts. On November 9 that same year, Malda wrote that Slashdot attained 16,777,215 (or 2^24 − 1) comments, which broke the database for three hours until the administrators fixed the problem. 2010s On January 25, 2011, the site launched its third major redesign in its 13.5-year history, which gutted the HTML and CSS and updated the graphics. On August 25, 2011, Malda resigned as editor-in-chief with immediate effect. He did not mention any plans for the future, other than spending more time with his family, catching up on some reading, and possibly writing a book. His final farewell message received over 1,400 comments within 24 hours on the site. On December 7, 2011, Slashdot announced that it would start to push what the company described as "sponsored" Ask Slashdot questions. On March 28, 2012, Slashdot launched Slashdot TV. Two months later, in May 2012, Slashdot launched SlashBI, SlashCloud, and SlashDataCenter, three websites dedicated to original journalistic content. The websites proved controversial, with longtime Slashdot users commenting that the original content ran counter to the website's longtime focus on user-generated submissions. Nick Kolakowski, the editor of the three websites, told The Next Web that the websites were "meant to complement Slashdot with an added layer of insight into a very specific area of technology, without interfering with Slashdot's longtime focus on tech-community interaction and discussion." Despite the debate, articles published on SlashCloud and SlashBI attracted attention from io9, NPR, Nieman Lab, Vanity Fair, and other publications. In September 2012, Slashdot, SourceForge, and Freecode were acquired by online job site Dice.com for $20 million, and incorporated into a subsidiary known as Slashdot Media. While Dice initially stated that there were no plans for major changes to Slashdot, in October 2013 the site launched a "beta" for a significant redesign, which featured a simpler appearance and commenting system. While initially an opt-in beta, the site automatically began migrating selected users to the new design in February 2014; the rollout led to a negative response from many longtime users, upset by the added visual complexity and the removal of features, such as comment viewing, that distinguished Slashdot from other news sites. An organized boycott of the site was held from February 10 to 17, 2014. The "beta" site was eventually shelved. In July 2015, Dice announced that it planned to sell Slashdot and SourceForge; in particular, the company stated in a filing that it was unable to "successfully [leverage] the Slashdot user base to further Dice's digital recruitment business". On January 27, 2016, the two sites were sold to the San Diego-based BizX, LLC for an undisclosed amount. Administration Team Slashdot was run by its founder, Rob "CmdrTaco" Malda, from 1998 until 2011. He shared editorial responsibilities with several other editors, including Timothy Lord, Patrick "Scuttlemonkey" McGarry, Jeff "Soulskill" Boehm, Rob "Samzenpus" Rozeboom, and Keith Dawson. 
Jonathan "cowboyneal" Pater is another popular editor of Slashdot, who came to work for Slashdot as a programmer and systems administrator. His online nickname (handle), CowboyNeal, is inspired by a Grateful Dead tribute to Neal Cassady in their song, "That's It for the Other One". He is best known as the target of the usual comic poll option, a tradition started by Chris DiBona. Software Slashdot runs on Slash, a content management system available under the GNU General Public License. Early versions of Slash were written by Rob Malda in the spring of 1998. After Andover.net bought Slashdot in June 1999, Slash remains Free software and anyone can contribute to development. Peer moderation Slashdot's editors are primarily responsible for selecting and editing the primary stories that are posted daily by submitters. The editors provide a one-paragraph summary for each story and a link to an external website where the story originated. Each story becomes the topic for a threaded discussion among the site's users. A user-based moderation system is employed to filter out abusive or offensive comments. Every comment is initially given a score of −1 to +2, with a default score of +1 for registered users, 0 for anonymous users (Anonymous Coward), +2 for users with high "karma", or −1 for users with low "karma". As moderators read comments attached to articles, they click to moderate the comment, either up (+1) or down (−1). Moderators may choose to attach a particular descriptor to the comments as well, such as "normal", "offtopic", "flamebait", "troll", "redundant", "insightful", "interesting", "informative", "funny", "overrated", or "underrated", with each corresponding to a −1 or +1 rating. So a comment may be seen to have a rating of "+1 insightful" or "−1 troll". Comments are very rarely deleted, even if they contain hateful remarks. Starting in August 2019 anonymous comments and postings have been disabled. Moderation points add to a user's rating, which is known as "karma" on Slashdot. Users with high "karma" are eligible to become moderators themselves. The system does not promote regular users as "moderators" and instead assigns five moderation points at a time to users based on the number of comments they have entered in the system – once a user's moderation points are used up, they can no longer moderate articles (though they can be assigned more moderation points at a later date). Paid staff editors have an unlimited number of moderation points. A given comment can have any integer score from −1 to +5, and registered users of Slashdot can set a personal threshold so that no comments with a lesser score are displayed. For instance, a user reading Slashdot at level +5 will only see the highest rated comments, while a user reading at level −1 will see a more "unfiltered, anarchic version". A meta-moderation system was implemented on September 7, 1999, to moderate the moderators and help contain abuses in the moderation system. Meta-moderators are presented with a set of moderations that they may rate as either fair or unfair. For each moderation, the meta-moderator sees the original comment and the reason assigned by the moderator (e.g. troll, funny), and the meta-moderator can click to see the context of comments surrounding the one that was moderated. Features Tags Slashdot uses a system of "tags" where users can categorize a story to group them together and sorting them. Tags are written in all lowercase, with no spaces, and limited to 64 characters. 
Features Tags Slashdot uses a system of "tags" where users can categorize a story, grouping related stories together and making them easier to sort. Tags are written in all lowercase, with no spaces, and limited to 64 characters. For example, articles could be tagged as being about "security" or "mozilla". Some articles are tagged with longer tags, such as "whatcouldpossiblygowrong" (expressing the perception of catastrophic risk), "suddenoutbreakofcommonsense" (used when the community feels that the subject has finally figured out something obvious), "correlationnotcausation" (used when scientific articles lack direct evidence; see correlation does not imply causation), or "getyourasstomars" (commonly seen in articles about Mars or space exploration). Culture As an online community with primarily user-generated content, many in-jokes and internet memes have developed over the course of the site's history. A popular meme (based on an unscientific Slashdot user poll) is "In Soviet Russia, noun verb you!" This type of joke has its roots in the 1960s or earlier, and is known as a "Russian reversal". Other popular memes usually pertain to computing or technology, such as "Imagine a Beowulf cluster of these", "But does it run Linux?", or "Netcraft now confirms: BSD (or some other software package or item) is dying." Users also typically respond to articles about data storage and data capacity by asking how much that is in units of Libraries of Congress. Sometimes bandwidth speeds are referred to in units of Libraries of Congress per second. When numbers are quoted, people will comment that the number happens to be the "combination to their luggage" (a reference to the Mel Brooks film Spaceballs) and express false anger at the person who revealed it. Slashdotters often use the abbreviation TFA, which stands for "the fucking article", or RTFA ("read the fucking article"), which itself is derived from the abbreviation RTFM. Usage of this abbreviation often exposes comments from posters who have not read the article linked to in the main story. Slashdotters typically like to mock then-United States Senator Ted Stevens' 2006 description of the Internet as a "series of tubes" or former Microsoft CEO Steve Ballmer's chair-throwing incident from 2005. Microsoft founder Bill Gates is a popular target of jokes by Slashdotters, and all stories about Microsoft were once identified with a graphic of Gates looking like a Borg from Star Trek: The Next Generation. Many Slashdotters have long talked about the supposed release of Duke Nukem Forever, which was promised in 1997 but was delayed indefinitely (the game was eventually released in 2011). References to the game are commonly brought up in other articles about software packages that are not yet in production even though the announced delivery date has long passed (see vaporware). Having a low Slashdot user identifier (user ID) is highly valued, since IDs are assigned sequentially; having one is a sign that someone has an older account and has contributed to the site longer. For Slashdot's 10-year anniversary in 2007, one of the items auctioned off in the charity auction for the Electronic Frontier Foundation was a 3-digit Slashdot user ID. Traffic and publicity In 2006, Slashdot had approximately 5.5 million users per month. The primary stories on the site consist of a short synopsis paragraph, a link to the original story, and a lengthy discussion section, all contributed by users. At its peak, discussion on stories could get up to 10,000 posts per day. Slashdot has been considered a pioneer in user-driven content, influencing other sites such as Google News and Wikipedia. 
There has been a dip in readership as of 2011, primarily due to the rise of technology-related blogs and Twitter feeds. In 2002, approximately 50% of Slashdot's traffic consisted of people who simply checked out the headlines and clicked through, while others participated in discussion boards and took part in the community. Many links in Slashdot stories caused the linked site to be swamped by heavy traffic and its server to collapse. This was known as the "Slashdot effect", a term first coined on February 15, 1999, in an article about a "new generation of niche Web portals driving unprecedented amounts of traffic to sites of interest". Slashdot has received over twenty awards, including People's Voice Awards in 2000 in both of the categories for which it was nominated (Best Community Site and Best News Site). It was also voted one of Newsweek's favorite technology Web sites and rated in Yahoo!'s Top 100 Web sites as the "Best Geek Hangout" (2001). The main antagonists in the 2004 novel Century Rain, by Alastair Reynolds – the Slashers – are named after Slashdot users. The site was mentioned briefly in the 2000 novel Cosmonaut Keep, written by Ken MacLeod. Several tech celebrities have stated that they either checked the website regularly or participated in its discussion forums using an account. Some of these celebrities include: Apple co-founder Steve Wozniak, writer and actor Wil Wheaton, and id Software technical director John Carmack. See also Digg Fark Hacker News Phoronix Reddit Solidot, a Chinese clone of Slashdot, whose name comes from "solidus" (alternate name of slash) and "dot" References External links Online computer magazines Geeknet Internet properties established in 1997 Internet services supporting OpenID Reputation management Washtenaw County, Michigan
2530467
https://en.wikipedia.org/wiki/Vince%20Young
Vince Young
Vincent Paul Young Jr. (born May 18, 1983) is a former American football quarterback who played in the National Football League (NFL) for six seasons. Young was drafted by the Tennessee Titans with the third overall pick in the 2006 NFL Draft, and he was also selected to be the Madden '08 cover athlete. Young played college football for the University of Texas. As a junior, he won the Davey O'Brien Award, awarded annually to the best college quarterback in the nation. He finished second behind Reggie Bush in Heisman Trophy voting. After the Heisman voting, Young led his team to a BCS National Championship against the defending BCS national champion USC Trojans in the 2006 Rose Bowl, a game lauded as one of the most anticipated and greatest in the history of college football. Texas retired Young's jersey on August 30, 2008. He spent the first five seasons of his career with the Titans, where he compiled a 30–17 starting record. In his rookie season, Young was named the NFL Offensive Rookie of the Year and was named to the AFC Pro Bowl team as a reserve. In 2009, Young earned his second Pro Bowl selection and was named Sporting News NFL Comeback Player of the Year. He later played one year as a backup with the Philadelphia Eagles in 2011 and had offseason stints with the Buffalo Bills, the Green Bay Packers and the Cleveland Browns from 2012–2014. In 2017, he attempted a comeback in the Canadian Football League with the Saskatchewan Roughriders, but was released before the season began. Early life Young grew up in the Hiram Clarke neighborhood of Houston, Texas, where he was primarily raised by his mother and his grandmother. His father, Vincent Young Sr., missed much of Vince's college career due to a 2003 burglary conviction and prison sentence. Young credits his mother and grandmother for keeping him away from the street gangs. At the age of seven, Young was struck by a vehicle while riding his bicycle at the corner of Tidewater and Buxley, streets in his Houston neighborhood. The accident nearly killed him, leaving him hospitalized for months after the bicycle's handlebar went into his stomach. Today, he credits this event with making him into a "tougher" individual. Young wore the number 10 to show love and respect for his mother, Felicia Young, whose birthday is June 10. Young attended Dick Dowling Middle School in Hiram Clarke. Some of Young's friends were a part of the "Hiram Clarke Boys", a local street gang; many of those friends died as a result of their activities. Young's mother confronted him after he had been involved in a fight between gangs, and told him that he needed to change his behavior. High school career Young was coached by Ray Seals at Madison High School in Houston, where he started at quarterback for three years and compiled 12,987 yards of total offense during his high school career. During his senior season, he led his Madison Marlins to a 61–58 victory in the 5A Regionals over the previously undefeated Galena Park North Shore Mustangs, accounting for more than 400 yards of total offense while passing for three touchdowns and rushing for two more before a crowd of 45,000 in the Houston Astrodome. After beating Missouri City Hightower 56–22 in the state quarterfinals, Houston Madison faced Austin Westlake in the state semi-finals. Although Young completed 18-of-30 passes for 400 yards and five touchdowns and rushed for 92 yards (on 18 carries) and a touchdown, Houston Madison lost by a score of 48–42. 
Among the honors Young received in high school were: being named Parade's and Student Sports' National Player of the Year after compiling 3,819 yards and 59 touchdowns as a senior, being named 2001 Texas 5A Offensive Player of the Year, designation as The Sporting News's top high school prospect, and the Pete Dawkins Trophy in the U.S. Army All-American Bowl. Young was also a varsity athlete in numerous other sports. In basketball, he played as a guard/forward and averaged more than 25 points per game over his career; he was a four-year letterman and two-time all-district performer. In track and field, he was a three-year letterman and a member of two district champion 400-meter relay squads. In baseball he played for two seasons, spending time as both an outfielder and a pitcher. He also made the all-state team in football and in track. College career Young enrolled at the University of Texas, where he played for coach Mack Brown's Texas Longhorns football team from 2002 to 2005. He was part of an exceptionally strong Texas recruiting class that included future NFL players Rodrique Wright, Justin Blalock, Brian Robison, Kasey Studdard, Lyle Sendlein, David Thomas, Selvin Young, and Aaron Ross. Young redshirted his freshman year. 2003 season As a redshirt freshman during the 2003 season, Young was initially second on the depth chart behind Chance Mock. However, Mock was benched midway through the season (in the game against Oklahoma) in favor of Young. After that game, Young and Mock alternated playing time, with Young's running ability complementing Mock's drop-back passing. 2004 season As a redshirt sophomore in the 2004 season, Young started every game and led the Longhorns to an 11–1 record (the lone loss a 12–0 shutout by rival Oklahoma), a top-five final ranking, and the school's first-ever appearance in the Rose Bowl, in which they defeated the University of Michigan. He began to earn his reputation as a dual-threat quarterback by passing for 1,849 yards and rushing for 1,189 yards. The Texas coaches helped facilitate this performance by changing the team's offensive scheme from the more traditional I-formation to a shotgun formation with three wide receivers. This change gave the offense more options in play selection and consequently made it more difficult to defend against. Before his junior season, Young appeared on the cover of Dave Campbell's Texas Football alongside Texas A&M quarterback Reggie McNeal. 2005 season: National Championship As a redshirt junior in the 2005 season, Young led the Longhorns to an 11–0 regular-season record. The Longhorns were ranked #2 in the preseason and held that ranking throughout the season, except for one week when they were ranked #1 in the Bowl Championship Series. Texas then won the Big 12 championship game and still held its #2 BCS ranking, which earned the team a berth in the national championship Rose Bowl game against the USC Trojans. Before the game, the USC Trojans were being discussed on ESPN and other media outlets as possibly the greatest college football team of all time. Riding a 34-game winning streak, including the previous national championship, USC featured two Heisman Trophy winners in the backfield: quarterback Matt Leinart (2004 Heisman winner) and running back Reggie Bush (2005 Heisman winner, since vacated).
In the 2006 Rose Bowl, Young accounted for 467 yards of total offense (200 rushing, 267 passing) and three rushing touchdowns (including a 9-yard touchdown scramble on fourth down with 19 seconds left) to lead the Longhorns to a 41–38 victory, a performance that earned him Rose Bowl MVP honors. Young finished the season with 3,036 passing yards and 1,050 rushing yards, earning him the Davey O'Brien Award. He was also named the Longhorns' MVP and an All-American. In recognition of his Rose Bowl accomplishments, Young was inducted into the Rose Bowl Hall of Fame in 2018. Early in his college career, Young had been criticized as a "great rusher...average passer", and his unconventional throwing motion had been criticized as "side-arm", as opposed to the conventional "over the top" throwing motion typically used by college quarterbacks. Young finished with a 30–2 win/loss record as a starter, ranking him first among University of Texas quarterbacks by number of wins at the time, although his successor, Colt McCoy, would far surpass him with 45. His .938 winning percentage as a starting quarterback ranks sixth best in Division I history. Young's career passing completion percentage of 61.8% was the best in Texas history until McCoy surpassed it. During his career at Texas (2003–05), Young passed for 6,040 yards (No. 5 in Texas history) and 44 touchdowns (No. 4 in Texas history) while rushing for 3,127 yards (No. 1 on Texas's all-time QB rushing list and No. 7 on Texas's all-time list) and 37 touchdowns (No. 5 on Texas's all-time rushing touchdowns list and tied for No. 1 among quarterbacks). He was also No. 10 on ESPN/IBM's list of the greatest college football players ever. In 2007, ESPN compiled a list of the top 100 plays in college football history; Young's game-winning touchdown in the 2006 Rose Bowl ranked No. 5. The University of Texas retired Young's #10 jersey during the 2008 season-opening football game on August 30, 2008. Statistics List of accomplishments and records Vince Young was the first player in NCAA Division I-A history to pass for 3,000 yards and rush for 1,000 yards in the same season; the only other player to do so was Dan LeFevour of Central Michigan University. Young owns five of the top seven single-game quarterback rushing performances in UT history: 267 yards vs. Oklahoma State as a junior; 200 yards vs. USC as a junior; 192 yards vs. Michigan as a sophomore; 163 yards vs. Nebraska as a freshman; and 158 yards at Texas Tech as a sophomore. Young has six of the top eight longest runs by a quarterback in UT history. Young became the first player in UT history to pass and rush for 1,000 or more yards in the same season. Young became the first quarterback in UT history to have three 100-yard rushing games (vs. Oklahoma, at Baylor, vs. Nebraska) in the same season and is tied with Ricky Williams (1995) for the third-most 100-yard games by a freshman in school history. Young's 17 wins and 43 touchdowns accounted for in 2003–2004 were the most ever by a UT quarterback in his first two years; however, Colt McCoy surpassed both marks, accounting for 57 touchdowns and 20 wins in 2006–2007. Young is a two-time winner of the Rose Bowl MVP award, joining Ron Dayne, Bob Schloredt, and Charles White as the only two-time winners.
He passed for 44 touchdowns (No. 4 in UT history).
Rose Bowl Record & BCS Championship Game Record – Total yards (467)
Rose Bowl Record – Touchdowns responsible for (5), tied by Mark Sanchez in 2009
Rose Bowl Record & BCS Record – Net rushing yards by a quarterback (200), broke his own record
Rose Bowl Record – Points responsible for (30), tied by Mark Sanchez in 2009
Bowl Record – Net rushing yards by a quarterback (201), surpassed by Dwight Dasher in the 2009 New Orleans Bowl
BCS Record – Total yards (467), surpassed by Tim Tebow in 2009
BCS Record – Touchdowns responsible for (5), tied Matt Leinart, tied by Mark Sanchez in 2009
BCS Record – Rushing touchdowns (4), tied Dominick Davis and Ron Dayne
BCS Record – Points scored (24), tied Dominick Davis and Ron Dayne
BCS Record & BCS Championship Game Record – Most rushing yards per attempt (10.53)
BCS Championship Game Record – Rushing yards (200)
BCS Championship Game Record – Net rushing yards by a quarterback (200)
BCS Championship Game Record – Rushing touchdowns (3), tied LenDale White in the same game
BCS Championship Game Record – Pass completions (30)
BCS Championship Game Record – Passes without an interception (40)
BCS Championship Game Record – Completion percentage (75.0%)
BCS Championship Game Record – Points scored (20), tied Peter Warrick
UT Record – Touchdown passes, season (26), tied with Chris Simms, surpassed by Colt McCoy
UT Record – Pass completion percentage, career (61.8%, minimum 100 attempts), surpassed by McCoy in 2009
UT Record – Total offense, game (506)
UT Record – Total offense, season (4,086), surpassed by McCoy
UT Record – Total offense, career (9,167), surpassed by McCoy
UT Record – Average gain per play, season (8.5 yards)
UT Record – Average gain per play, career (7.8 yards)
UT Record – Pass completion percentage, game (86.2%), against Colorado in 2005, surpassed by McCoy
UT Record – Wins by a quarterback, career (30), surpassed by McCoy
UT Record – Longest run by a quarterback (80 yards)
UT Record – Most rushing yards by a quarterback, game (267), against Oklahoma State, breaking his own record previously set against Michigan
UT Record – Most rushing yards by a quarterback, career (3,127), also fifth best by any UT player
UT Record – Most rushing touchdowns by a quarterback, season (14), surpassed by Sam Ehlinger in 2018
UT Record – Most rushing touchdowns by a quarterback, career (37), also fourth best by any UT player
UT Record – Most games rushing and passing for more than 100 yards, career (5 games)
UT Record – Most 300-yard total offense games, season (6), tied by and then surpassed by McCoy
UT Record – Most 300-yard total offense games, career (10), surpassed by McCoy
UT Record – Most 400-yard total offense games, season (2), tied by and then surpassed by McCoy
UT Record – Most 400-yard total offense games, career (4)
UT Record – Most 500-yard total offense games, season (1)
UT Record – Most 500-yard total offense games, career (1)
UT Record – Most offensive yards, game (506), against Oklahoma State on October 29, 2005, breaking his own record
UT Record – Most 100-yard rushing games by a quarterback, season (3), tying his own record twice
Big 12 & UT Record – Passing efficiency, season (163.9), surpassed by Sam Bradford in 2007 (Big 12) and by McCoy (UT)
Big 12 & UT Record – Win/loss record as a starter (30–2), a .938 winning percentage that also ranks sixth best in NCAA Division I football history
Big 12 & UT Record – Yards per rush, career (6.8)
In the Rose Bowl on January 4, 2006, the BCS National Championship game, Young completed 30 of 40 passes for 267 yards and carried the ball 19 times for 200 yards and three rushing touchdowns. Those 200 rushing yards set a bowl-game rushing record for a quarterback, and his 467 total yards set a new Rose Bowl record. He was named Rose Bowl MVP for the second time in his career. UT beat USC 41–38, with Young running in the winning touchdown, and in doing so ended USC's 34-game winning streak. College awards and honors
2006 – ESPY Award for Best Championship Performance
2006 – ESPY Award for Best Game (2006 Rose Bowl; joint award shared between Texas and USC, accepted along with Matt Leinart)
2006 – Big 12 Male Athlete of the Year (for the 2005–2006 scholastic year)
2006 – Manning Award winner
2006 – Rose Bowl MVP (at the end of the 2005 season)
2005 – BCS National Championship
2005 – The Cingular All-America Player of the Year Award
2005 – All-American offensive player
2005 – The Maxwell Award – College Player of the Year
2005 – Davey O'Brien National Quarterback Award
2005 – First-team All-Big 12 Conference honors (unanimous selection)
2005 – Rose Bowl Most Valuable Player (at the end of the 2004 season)
2003 – Big 12 Conference Offensive Freshman of the Year
Texas Longhorns #10 retired
Professional career Throughout the 2005 season Young had indicated that he planned to return to the University of Texas for his senior year in 2006. The day after Texas won the BCS National Championship, Young accepted an invitation to appear on The Tonight Show with Jay Leno. When Leno asked Young whether he would stay for his senior year of college or declare for the 2006 NFL Draft, Young replied that he would discuss the matter with his pastor, his family, and coach Mack Brown. On January 8, 2006, Young announced he would enter the NFL draft, where he was expected to be taken early in the first round. Young finished third in the voting for the NFL Comeback Player of the Year award behind Tampa Bay Buccaneers running back Carnell Williams and the winner, New England Patriots quarterback Tom Brady. Shortly thereafter, Young was announced as the Sporting News Comeback Player of the Year. Young played in the 2010 Pro Bowl, taking the roster spot of the injured Philip Rivers after Ben Roethlisberger and Carson Palmer declined to replace Rivers due to their own respective injuries. It was the second Pro Bowl appearance of his career, the first coming after his 2006 NFL Offensive Rookie of the Year season. 2010 season Young started nine of the Titans' first ten games in 2010, going 4–5 while throwing for ten touchdowns with a 98.6 passer rating. During a Week 11 loss to the Washington Redskins, Young suffered a torn flexor tendon in his right thumb and was held out of the game after he prepared to reenter. Following the game, Young threw his shoulder pads into the crowd as he left the field, had an altercation with head coach Jeff Fisher in the locker room, and stormed out. Fisher then declared that Rusty Smith would become the Titans' starting quarterback. On January 5, 2011, Titans owner Bud Adams issued a press release stating that Young would no longer be on the team's roster for the 2011–12 season. Young finished his Titans career with a 30–17 record (63.8%) over five years, a 75.4 passer rating, 42 touchdown passes, and 42 interceptions. On July 28, 2011, Young was released by the Titans.
Philadelphia Eagles Young was signed by the Philadelphia Eagles to a one-year contract on July 29, 2011. Upon signing, Young declared that the Eagles would become the "Dream Team", a label that would become highly publicized by media outlets. 2011 season Young's first start as an Eagle came on November 20, 2011, in a Sunday Night Football matchup against the New York Giants. Young played quarterback in the Eagles' 17–10 win, finishing the game with 258 passing yards, two touchdowns, and three interceptions. The Eagles lost Young's second start of the season, 38–20, the following week against the New England Patriots; Young finished with 400 passing yards, one touchdown, and one interception in the losing effort. In his third and final start the following week, Young threw one touchdown and four interceptions as the Eagles lost to the Seattle Seahawks 31–14, dropping the Eagles' record to 4–8 and Young's record as a starter to 1–2 on the season. The loss would be the final regular-season game of Young's career. Final NFL years Young signed a one-year deal with the Buffalo Bills on May 11, 2012. He was released by the team on August 27, 2012. On August 6, 2013, Young signed a one-year contract with the Green Bay Packers. He was released by the team on August 31, 2013. On May 1, 2014, Young signed a one-year contract with the Cleveland Browns. He was released by the team on May 12, 2014. Saskatchewan Roughriders In early February 2017, the Saskatchewan Roughriders of the Canadian Football League (CFL) added Young to their negotiation list. A couple of weeks later, Leigh Steinberg, Young's agent, confirmed he had conducted talks with the Roughriders on behalf of his client. On March 8, 2017, Young was rumored to be signing with the Roughriders imminently, and on March 9, 2017, the Roughriders held a press conference to formally announce the signing. Young entered training camp competing with Bryan Bennett and Brandon Bridge for the backup quarterback position behind CFL veteran Kevin Glenn. On June 6, 2017, partway through training camp, Young suffered a hamstring injury, and on June 12, 2017, following the team's first preseason game, it was announced that he would miss four to six weeks with a torn hamstring. Five days later, he was waived by the Roughriders. NFL career statistics Regular season Awards and honors
2006 NFL Rookie of the Week Awards (four separate weekly awards)
2006 NFL AP Offensive Rookie of the Year
2006 Diet Pepsi NFL Rookie of the Year
2007 Pro Bowl
Cover of Madden NFL 08
2010 Pro Bowl
Retirement, post-NFL career On June 14, 2014, Young announced his retirement, though he said he would come out of retirement for a "guaranteed offer". Young also stated that he planned to work at the University of Texas in some form following his retirement. On August 14, 2014, Young was hired by the University of Texas to work for its Division of Diversity and Community Engagement as a development officer for program alumni relations, raising money for programs that assist first-generation and low-income college students. His employment with the University of Texas ended on March 9, 2019, due to poor performance and absences, having been given job warnings dating back to 2017. In 2021, he was hired by the University of Texas as a special assistant in the athletic department. Personal life As a result of his strong on-field performance and his ties to the Houston area, January 10, 2006, was proclaimed "Vince Young Day" in his hometown.
The Texas Senate passed a resolution on February 20, 2007, to declare the day "Vince Young Day" throughout the state. Young has appeared in a number of television commercials, including spots for Madden NFL 08 (for which he was the cover athlete), Reebok (with Allen Iverson), Vizio, and Campbell's Chunky Soup. He also appears in rapper Mike Jones's video "My 64". Young was interviewed by 60 Minutes for an episode that aired on September 30, 2007. Young re-enrolled at the University of Texas for the 2008 spring semester, and in 2013 he graduated from Texas with a degree in youth and community studies from the College of Education. Young continues to live in Houston's Hiram Clarke neighborhood. Young's grandmother, Betty, lives in the Sunnyside area of Houston. Disappearance On September 9, 2008, a distraught Young left his home without his cell phone. The reasons given were that Young was upset over being booed by fans after throwing a second interception against the visiting Jacksonville Jaguars the previous day, and over the sprained medial collateral ligament in his left knee that he suffered four plays after head coach Jeff Fisher prodded him back into the game. Young postponed a doctor's examination until the following day. After speaking to members of Young's family, Fisher called Nashville police. After a four-hour search, they found Young, who agreed to meet with Fisher and police at the team's training facility. Regarding the incident, Young's mother, Felicia Young, stated that her son was "hurting inside and out." Financial problems In September 2012, the Associated Press reported that Young had spent much of the $34 million in salary he earned in the NFL and was facing financial problems after defaulting on a $1.9 million high-interest payday loan. Young filed a lawsuit seeking to stop the lender, Pro Player Funding LLC, from enforcing a judgment of nearly $1.7 million, claiming that the loan documents were forged and that he did not knowingly execute the loan. However, Young had authorized $1 million in loan payments to Pro Player directly from his Eagles salary prior to defaulting, and Young's signatures on the loan documents were notarized. Young also filed lawsuits against his former agent, Major Adams, and a North Carolina financial planner, Ronnie Peoples, alleging that they misappropriated $5.5 million of his funds. When asked to give a general assessment of Young's finances, Young's attorney, Trey Dolezal, stated, "I would just say that Vince needs a job." Young's financial problems have reportedly been a result of lavish spending and, by his account, the betrayal of trusted advisers. In addition to the $34 million in salary earned during his NFL career, Young had signed $30 million in endorsement deals with Reebok, Campbell's Soup, Madden NFL, Vizio and the National Dairy Council. In January 2014, Young filed for Chapter 11 bankruptcy in a Houston federal bankruptcy court. On January 30, Young petitioned the court to dismiss the bankruptcy filing due to a settlement with Adams and Peoples, and a resulting settlement with Pro Player Funding. Lawsuits In December 2008, Young filed suit against former Major League Baseball player Enos Cabell and two others, who in 2006 had applied for a trademark to use his initials and "Invinceable" nickname to sell products without his permission. The suit claimed that their use of Young's name had damaged endorsement deals for Young; he asked the court to give him the exclusive rights to use the initials and nickname.
Impersonator On September 23, 2011, Stephan Pittman, a registered sex offender in Maryland, was arrested on felony fraud charges for impersonating Young. Legal issues On January 25, 2016, Young was arrested for DWI in Austin, Texas. He pleaded no contest, was fined $300, and was ordered to perform 60 hours of community service. On February 5, 2019, Young was arrested for DWI in Fort Bend County, Texas. He was released on $500 bail the same day. See also Madden Curse References External links Philadelphia Eagles bio Tennessee Titans bio Texas Longhorns bio 1983 births Living people African-American players of American football African-American players of Canadian football All-American college football players American expatriate sportspeople in Canada American football quarterbacks Buffalo Bills players Canadian football quarterbacks Cleveland Browns players Green Bay Packers players Maxwell Award winners National Football League Offensive Rookie of the Year Award winners Philadelphia Eagles players Players of American football from Houston Players of Canadian football from Houston Saskatchewan Roughriders players Tennessee Titans players Texas Longhorns football players
19189459
https://en.wikipedia.org/wiki/Wuala
Wuala
Wuala was a secure online file storage, file synchronization, versioning and backup service, originally developed and run by Caleido Inc. and later part of LaCie, which is in turn owned by Seagate Technology. The service stored files in data centres provided by Wuala in multiple European countries (France, Germany, Switzerland). An earlier version also supported distributed storage on other users' machines, but this feature was later dropped. On 17 August 2015 Wuala announced that it was discontinuing its service and that all stored data would be deleted on 15 November 2015. Wuala recommended a rival cloud storage startup, Tresorit, as an alternative to its remaining customers. History Most research and development occurred at the Swiss Federal Institute of Technology (ETH) in Zürich.
14 August 2008: An "open beta" Java applet, available from the website, could be run from a web browser.
19 September 2008: The Wuala Webstart project was registered on SourceForge.net.
26 October 2008: An alpha-release REST API, at a very early stage of development, supported HTTP GET requests for content that was either public or shared through a keyed hyperlink.
16 December 2008: The Uniform Resource Locator changed from http://wua.la/ to http://www.wuala.com/, and files that were public, or shared through a keyed hyperlink, were made accessible through web browsers.
19 March 2009: LaCie announced a merger with Caleido AG. Wuala described the merger as being between Wuala and LaCie (not Caleido AG and LaCie).
5 January 2010: Post-merger announcement of the first joint products.
23 May 2011: All pro features (backup, sync, file versioning and time travel) became available to everyone at no cost.
28 September 2011: The "trade storage" feature was discontinued.
11 June 2014: The storage plan was shifted to a paid-only service.
31 October 2014: Wuala announced that existing free-only storage would be terminated at the end of 2014 and that customers wishing to save their data should migrate away or purchase a paid plan.
17 August 2015: Wuala announced that it would allow no further renewals or purchases of storage, that the service would transition to read-only on 30 September 2015, and that all stored data would be deleted on 15 November 2015.
Features Any registered user could:
keep files private
share files with other registered users
share files with unregistered users, through a keyed hyperlink
publish files
back up files
synchronize files
use file versioning
Registered and unregistered users could receive streaming media. When a user added a file to Wuala, or saved changes to a file served by Wuala, the user's local copy of the file was first encrypted and then chunked into redundant fragments using Reed-Solomon error-correction codes. The fragments were then uploaded to the data centres.
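The sketch below illustrates this encrypt-then-fragment pipeline. It is a minimal illustration only, not Wuala's implementation: the real client was closed-source Java and spread Reed-Solomon redundancy across fragments so that a file could be rebuilt from a subset of them, whereas this sketch simply adds per-fragment parity. The chunk size, parity count, and the third-party Python packages cryptography and reedsolo are all assumptions of the sketch.

```python
# Minimal sketch of a client-side encrypt-then-fragment pipeline.
# Parameters and libraries are illustrative, not Wuala's actual design.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography
from reedsolo import RSCodec                                    # pip install reedsolo

CHUNK_SIZE = 223   # data bytes per fragment (arbitrary for this sketch)
PARITY = 32        # Reed-Solomon parity bytes added to each fragment

def prepare_fragments(plaintext: bytes, key: bytes) -> list:
    """Encrypt locally, then split into redundant fragments for upload."""
    nonce = os.urandom(12)
    ciphertext = nonce + AESGCM(key).encrypt(nonce, plaintext, None)
    rsc = RSCodec(PARITY)
    # Each fragment carries PARITY parity bytes, so up to PARITY // 2
    # corrupted bytes per fragment can be repaired on download.
    return [bytes(rsc.encode(ciphertext[i:i + CHUNK_SIZE]))
            for i in range(0, len(ciphertext), CHUNK_SIZE)]

key = AESGCM.generate_key(bit_length=256)  # AES-256, per Wuala's FAQ
fragments = prepare_fragments(b"example file contents", key)
# Only the encrypted fragments leave the machine; the key never does.
```

Because only ciphertext fragments are uploaded, the storage provider cannot read file contents, which is consistent with the client-side encryption described under Security below.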
Storage Wuala offered free accounts with 5 GB of storage. On 11 June 2014 the service shifted to paid-only plans, and at the end of 2014 it dropped free-only storage entirely, moving to a payment-based usage model. Users of joint products could start with greater amounts of storage for a limited period: 10 GB for one year with a LaCie external hard disk drive, or 4 GB for two years with a LaCie USB flash drive. Additional storage could be bought; as of June 2014, the referral system was shut down because of the new paid-only policy. Bought storage originally ranged in price from 29 EUR/year for 20 GB to 999 EUR/year for 1 TB. Pricing changed in June 2014: the storage plans started at €0.99 per month (€9 per year) for 5 GB and ended at €159.90 per month (€1,799 per year) for 2 TB of storage. Trading One of the distinguishing features of Wuala, the ability to trade local disk storage space against cloud storage, was discontinued in September 2011. User interfaces Desktop application The user interface offered most of the features normally associated with a file manager; additional features came through integration. A registered user could install the Java-based client application (with an SWT GUI) on any number of Linux, Mac OS X and Microsoft Windows computers, with FUSE, MacFUSE and Dokan respectively providing file system integration. Wuala Webstart and web browsers Through a web browser on a computer with Java installed, the user could start and trust a Java applet, which downloaded and ran a class loader allowing fast start of the latest version of the Wuala application. If the computer was without Java, or if running of the class loader was prevented, any folder that was public, or shared with a weblink, could still be browsed. Non-graphical interfaces Support for the following may have been limited: command-line interface, daemon, and headless operation. Security According to Wuala's FAQ, the software used AES-256 for encryption and RSA-2048 for key exchange and signatures. Keys were organized in a key management scheme called Cryptree. According to the FAQ, Wuala employed full client-side encryption: all files and their metadata (most OS X metadata was not supported) were encrypted before being uploaded. The encryption key was stored such that no one, not even LaCie, which operated the service, could decrypt the stored files. The disadvantage of this was that Wuala offered no password recovery and all data processing had to be done in the client (for example, creating a search index); the advantage was significantly improved privacy. Since the source code to Wuala was never released, it was difficult to verify that the software did what it claimed to do (including proper client-side encryption), and updates were pushed automatically to the client machine. As a consequence, users of Wuala were not safe from possible backdoors in the code. Reviews
2007-10-18: Unlimited online storage for free, almost: Wuala | Webware - CNET
2008-05-22: Online Storage with Wuala | Linux Journal
2008-08-14: Wuala Makes Online Storage Social | News & Opinion | PCMag.com
2008-08-14: Wuala P2P online storage service goes live (Download Squad)
2008-07-18: You Have Three Days To Check Out Wuala's 'Social Grid' Storage (TechCrunch)
See also Comparison of file synchronization software List of online backup services Comparison of online backup services Notes and references External links 2008 software Distributed data storage systems Distributed file systems File hosting File sharing services
8177460
https://en.wikipedia.org/wiki/History%20of%20CP/CMS
History of CP/CMS
This article covers the history of CP/CMS: the historical context in which the IBM time-sharing virtual machine operating system was built. CP/CMS development occurred in a complex political and technical milieu; the section Historical notes, below, provides supporting quotes and citations from first-hand observers. Early 60s: CTSS, early time-sharing, and Project MAC The seminal first-generation time-sharing system was CTSS, first demonstrated at MIT in 1961 and in production use from 1964 to 1974. It paved the way for Multics, CP/CMS, and all other time-sharing environments. Time-sharing concepts were first articulated in the late 50s, particularly as a way to meet the needs of scientific computing. At the time, computers were primarily used for batch processing, where jobs were submitted on punch cards and run in sequence. Time-sharing let users interact directly with a computer, so that calculation and simulation results could be seen immediately. Scientific users quickly embraced the concept of time-sharing and pressured computer vendors such as IBM for improved time-sharing capabilities. MIT researchers spearheaded this effort, launching Project MAC, which was intended to develop the next generation of time-sharing technology and which would ultimately build Multics, an extremely feature-rich time-sharing system that would later inspire the initial development of UNIX. This high-profile team of leading computer scientists formed very specific technical recommendations and requirements, seeking an appropriate hardware platform for their new system. The technical problems were formidable. Most early time-sharing systems sidestepped them by giving users new or modified languages, such as Dartmouth BASIC, which were accessed through interpreters or restricted execution contexts. But the Project MAC vision was for shared, unrestricted access to general-purpose computing. Along with other vendors, IBM submitted a proposal to Project MAC. However, IBM's proposal was not well received; to IBM's surprise, MIT chose General Electric as the Multics system vendor. The fallout from this event led directly to CP/CMS. IBM and the System/360 In the early 60s, IBM was struggling to define its technical direction. The company had identified a problem with its past computer offerings: incompatibility between the many IBM products and product lines. Each new product family, and each new generation of technology, forced customers to wrestle with an entirely new set of technical specifications. IBM products incorporated a wide variety of processor designs, memory architectures, instruction sets, input/output strategies, etc. This was not, of course, unique to IBM; all computer vendors seemed to begin each new system with a "clean sheet" design. IBM saw this as both a problem and an opportunity. The cost of software migration was an increasing barrier to hardware sales. Customers could not afford to upgrade their computers, and IBM wanted to change this. IBM embarked on a very risky undertaking: the System/360. This product line was intended to replace IBM's diverse earlier offerings, including the IBM 7000 series, the canceled IBM 8000 series, the IBM 1400 series, and various other specialized machines used for scientific and other applications. The System/360 would span an unprecedented range of processing power, memory size, device support, and cost; and more important, it was based on a pledge of backward compatibility, such that any customer could move software to a new system without modification.
In today's world of standard interfaces and portable systems, this may not seem such a radical goal; but at the time, it was revolutionary. Before System/360, each computer model often had its own specific devices and programs that could not be used with other systems. Buying a bigger CPU also meant buying new printers, card readers, tape drives, etc. In addition, customers would have to rewrite their programs to run on the new CPU, something they often balked at. With the S/360, IBM wanted to offer a huge range of computer systems, all sharing a single processor architecture, instruction set, I/O interface, and operating system. Customers would be able to "mix and match" to meet current needs, and they could confidently upgrade their systems in the future without the need to rewrite all their software applications. IBM's focus remained on its traditional customer base: large organizations doing administrative and business data processing. At the start of the System/360 project, IBM did not fully appreciate the amount of risk involved. System/360 ultimately gave IBM total dominance over the computer industry, but at first it nearly put IBM out of business. IBM took on one of the largest and most ambitious engineering projects in history, and in the process discovered diseconomies of scale and the mythical man-month. Extensive literature on the period, such as that by Fred Brooks, illustrates the pitfalls. It was during this period of System/360 panic that Project MAC asked IBM to provide computers with extensive time-sharing capabilities. This was not the direction the System/360 project was going. Time-sharing wasn't seen as important to IBM's main customer base; batch processing was key. Moreover, time-sharing was new ground, and many of the concepts involved, such as virtual memory, remained unproven. For example, at the time nobody could explain why the troubled Manchester/Ferranti Atlas virtual memory "didn't work"; this was later explained as due to thrashing, based on CP/CMS and M44/44X research. As a result, IBM's System/360 announcement in April 1964 did not include key elements sought by the time-sharing advocates, particularly virtual memory capabilities. Project MAC researchers were crushed and angered by this decision. The System/360 design team met with Project MAC researchers and listened to their objections, but IBM chose another path. In February 1964, at the height of these events, IBM launched its Cambridge Scientific Center (CSC), headed by Norm Rasmussen. CSC was to serve as the link between MIT researchers and the IBM labs, and was located in the same building as Project MAC. IBM fully expected to win the Project MAC competition and to retain its perceived lead in scientific computing and time-sharing. One of CSC's first projects was to submit IBM's Project MAC proposal. IBM had received intelligence that MIT was leaning toward the GE proposal, which was for a modified 600-series computer with virtual memory hardware and other enhancements; this would eventually become the GE 645. IBM proposed a modified S/360 that would include a virtual memory device called the "Blaauw Box", a component that had been designed for, but not included in, the S/360. The MIT team rejected IBM's proposal. The modified S/360 was seen as too different from the rest of the S/360 line; MIT did not want to use a customized or special-purpose computer for MULTICS, but sought hardware that would be widely available.
GE was prepared to make a large commitment to time-sharing, while IBM was seen as obstructive. Bell Laboratories, another important IBM customer, soon made the same decision and rejected the S/360 for time-sharing. 1964–67: CP-40, S/360-67, and TSS The loss of Project MAC was devastating for CSC, which essentially lost its reason for existence. Rasmussen remained committed to time-sharing and wanted to earn back the confidence of MIT and other researchers. To do this, he made a bold decision: the now-idle CSC team would build a time-sharing operating system for the S/360. Robert Creasy left Project MAC to lead the CSC team, which promptly began work on what was to become CP-40, the first successful virtual machine operating system based on fully virtualized hardware. The loss of Project MAC and Bell Laboratories caused repercussions elsewhere at IBM. The company created a corporate task force to find a satisfactory way to meet customer time-sharing requirements. The team, which included key staff from CSC, designed a new S/360 model that was closer to the goals of the MIT researchers. This was to become the IBM System/360-67, announced in August 1965 and shipped in July 1966. IBM's announcement also included a new time-sharing operating system, TSS/360, optimistically scheduled for June 1967 release. IBM reorganized its development and manufacturing divisions to correct perceived problems, and perhaps to punish those responsible for its loss of face. CSC soon submitted a successful proposal to MIT's Lincoln Laboratory for a S/360-67, marking an improvement in IBM's credibility at MIT. By committing to a "real" time-sharing product, rather than a customized RPQ system, IBM showed the kind of commitment MIT had found with GE. CSC also continued work on CP-40, ostensibly to provide research input to the S/360-67 team, but also because the CSC team had grown skeptical about the TSS project, which faced a very aggressive schedule and lofty goals. Since a S/360-67 would not be available to CSC for some time, the team conceived an ingenious stopgap measure: building their own virtual-memory S/360. They designed a set of custom hardware and microcode changes that could be implemented on a S/360-40, providing a comparable platform capable of supporting CP-40's virtual machine architecture. Actual CP-40 and CMS development began in mid-1965, even before the arrival of their modified S/360-40. First production use of CP-40 was in January 1967. The TSS project, in the meantime, was running late and struggling with problems. CSC personnel became increasingly convinced that CP/CMS was the correct architecture for S/360 time-sharing. 1967–68: CP-67 In September 1966, CSC staff began the conversion of CP-40 and CMS to run on the S/360-67. CP-67 was a significant reimplementation of CP-40; Varian reports that the design was "generalized substantially, to allow a variable number of virtual machines, with larger virtual memories", that new data structures replaced "the control blocks describing the virtual machines [which] had been a hard-coded part of the nucleus", that CP-67 added "the concept of free storage, so that control blocks could be allocated dynamically", and that "the inter-module linkage was also reworked, and the code was made re-entrant." Since CSC's S/360-67 would not arrive for some time, CSC further modified the microcode on its own customized S/360-40 to simulate the S/360-67, particularly its different approach to virtual memory.
CSC repeatedly and successfully used simulation to work around the absence of hardware: when waiting for its modified S/360-40, for its S/360-67, and later for the first S/370 prototypes. This can be seen as a logical outgrowth of "virtual machine" thinking. During this period, early testing of CP-67 was also done at sites where S/360-67 hardware was available, notably IBM's Yorktown Heights lab and MIT's Lincoln Laboratory. Observers at Lincoln Labs, already frustrated with severe TSS problems, were very impressed by CP-67 and insisted that IBM provide them a copy of CP/CMS. According to Varian, this demand "rocked the whole company", which had invested so heavily in TSS. However, because the site was so critical, IBM complied. Varian and others speculate that this chain of events could have been "engineered" by Rasmussen, as a "subterfuge" to motivate IBM's continued funding of CSC's work on the "counter-strategic" CP/CMS, which he "was told several times to kill". By April 1967, just a few months after CP-40 went into production, CP/CMS was already in daily use at Lincoln Labs. Lincoln Lab personnel worked closely with CSC in improving CP/CMS; they "began enhancing CP and CMS as soon as they were delivered. The Lincoln and Cambridge people worked together closely and exchanged code on a regular basis", beginning a tradition of code sharing and mutual support that would continue for years. At around the same time, Union Carbide, another influential IBM customer, followed the same path, deciding to run CP/CMS, to send personnel to work with CSC, and to contribute to the CP/CMS development effort. CP-40, CP-67, and CMS were essentially research systems at the time, built away from IBM's mainstream product organizations, with active involvement of outside researchers. Experimenting was both an important goal and a constant activity. Robert Creasy, the CP-40 project leader, later wrote: "The design of CP/CMS [was] by a small and varied software research and development group for its own use and support… [and] for experimenting with time-sharing system design.… Schedules and budgets, plans and performance goals did not have to be met.… We also expected to redo the system at least once after we got it going. For most of the group, it was meant to be a learning experience. Efficiency was specifically excluded as a software design goal, although it was always considered. We did not know if the system would be of practical use.… In January, 1965, after starting work on the system, it became apparent from presentations to outside groups that the system would be controversial." [Creasy, op. cit., p. 470: CP/CMS as experiment] TSS, in the meantime, described by Varian as an "elegant and very ambitious system," was exhibiting "serious stability and performance problems, for it had been snatched from its nest too young." In February 1968, at the time of SHARE 30, there were eighteen S/360-67 sites attempting to run TSS. During the conference, IBM announced via "blue letter" that TSS was being decommitted, a great blow to the time-sharing community. This decision would be temporarily reversed, and TSS was not finally scrapped until 1971. However, CP/CMS soon began gaining attention as a viable alternative. 1968–72: CP/CMS releases
May 1968: Version 1 of CP/CMS was released to eight installations. It was made available as part of the IBM Type-III Library in June.
Shortly thereafter, two time-sharing businesses were launched based on the resale of CP/CMS: National CSS and IDC. These ventures took key personnel from CSC, Lincoln Labs, and Union Carbide, and they drew attention to the viability of CP/CMS, the S/360-67, and virtual memory.
April 1969: CP/CMS had been installed at fifteen sites.
June 1969: Version 2 of CP/CMS was released.
November 1971: Version 3.1 of CP/CMS was released, capable of supporting sixty CMS users on a S/360-67, impressive performance for the time.
Early 1972: CP/CMS Version 3.2 was released, a maintenance release with no new functions. CP-67 was now running on 44 processors, a quarter of which were inside IBM.
At the time of the S/360-67, software was "bundled" into computer hardware purchases; see "IBM's unbundling of software and services". In particular, IBM operating systems were available without additional charge to IBM customers. CP/CMS was unusual in that it was delivered as unsupported Type-III software in source code form, meaning that CP/CMS sites ran an unsupported operating system. The need for self-support and community support helped lead to the creation of strong S/360-67 and CP/CMS user communities, precursors of the open source movement. In the summer of 1970, the CP/CMS team began work on a System/370 version of CP/CMS; this would become VM/370. CP-370 proved vital to the S/370 project by providing a usable simulation of a S/370 on S/360-67 hardware, a reprise of CSC's earlier hardware simulation strategies. This approach enabled S/370 development and testing before S/370 hardware was available; a shortage of prototype S/370s was causing critical delays for the MVS project in particular. This remarkable technical feat transformed MVS development, won an award for the CP-370 developers, and probably rescued the CP project from extinction, despite aggressive efforts to cancel it. August 1972 marked the end of CP/CMS, with IBM's "System/370 Advanced Function" announcement. This included the new S/370-158 and -168; address relocation hardware on all S/370s; and four new operating systems: DOS/VS (DOS with virtual storage), OS/VS1 (OS/MFT with virtual storage), OS/VS2 (OS/MVT with virtual storage, which would grow into SVS and MVS), and VM/370, the re-implemented CP/CMS. By this time the VM and CP/CMS development team had swelled to 110 people, including documenters. VM/370 was now a real IBM system, no longer part of the IBM Type-III Library; source code distribution continued for several releases, however (see CP/CMS as free software for details). 1968–86?: VP/CSS In 1968, the principals of a small consulting firm in Connecticut called Computer Software Systems had the audacious idea of leasing an IBM System/360-67 to run CP/CMS and reselling computer time. It was audacious because IBM would not typically lease its $50–100K/month systems to a two-person startup. The third and fourth employees were Dick Orenstein, one of the authors of CTSS, and Dick Bayles from CSC, the primary architect of CP-67. Other key hires from the CP/CMS world included Harold Feinleib, Mike Field, and Robert Jesurum (Bob Jay). By late November 1968, having persuaded IBM to accept the order (no small feat) and secured initial funding, CSS took delivery of their first S/360-67 and began selling time in December 1968. CSS quickly discovered that, by selling every available virtual machine minute at published prices, they could barely take in enough revenue to pay the machine lease.
A whirlwind development program followed, ramping up CP/CMS performance to the point where it could be resold profitably. This began a fork of CP/CMS source code that evolved independently for some fifteen years. The operating system was soon renamed VP/CSS; the company went public and was renamed National CSS. Although VP/CSS shared much with its CP/CMS parent and its VM/370 sibling, it diverged from them in many important ways. For business reasons, the system had to run at a profit, and its users, if frustrated, could stop paying at any time simply by hanging up the phone. These forces gave a high priority to factors affecting performance, usability, and customer support. VP/CSS soon became known for routinely supporting two to three times as many interactive users as comparable VM systems. Early NCSS enhancements involved such areas as page migration, dispatching, the file system, device support, and efficient fast-path hypervisor functions accessed via the diagnose (DIAG) instruction. Later features included a packet-switched network, FILEDEF-level (pipe) interprocess/intermachine communication, and database integration. Similar features also appeared in the VM implementation. Ultimately, the NCSS development team rivaled the size of IBM's, implementing a wide array of features. The VP/CSS platform remained in use through at least the mid-80s. NCSS was acquired by Dun & Bradstreet in 1979; renamed DBCS (Dun & Bradstreet Computer Services); increased its focus on the NOMAD product; changed its business strategy to embrace VM and other platforms; and in the process discontinued support and development of VP/CSS, probably the last non-VM fork of CP/CMS. 1964?–72?: IDC's use of CP/CMS Interactive Data Corporation (IDC) followed a plan similar to that of National CSS, selling time-sharing services based on the CP/CMS platform. Its focus at the time was on financial services. Varian reports that IDC had "several" S/360-67 systems running CP/CMS and one of IBM's "first relocating S/370" machines, presumably referring to the S/370-145 of 1971, with the first DAT box, but perhaps to the systems announced in 1972 with the "System/370 Advanced Function" announcement, including the S/370-158 and -168. Historical notes The following notes provide brief quotes, primarily from Pugh and Varian [see references], illustrating the development context of CP/CMS. Direct quotes rather than paraphrases are provided here, because the authors' perspectives color their interpretations. Early time-sharing and CTSS: Christopher Strachey filed a patent application for "time-sharing" in February 1959. He gave a paper "Time Sharing in Large Fast Computers" at the first UNESCO Information Processing Conference in Paris in June that year, where he passed the concept on to J. C. R. Licklider. CTSS was the seminal system that "taught the world how to do time-sharing." It was first demonstrated at MIT in 1961, on an IBM 709, and was in production use from 1964 to 1974. Programmers were Marjorie Merwin-Daggett, Robert Daley, Robert Creasy, Jessica Hellwig, Richard Orenstein (later a cofounder of National CSS), and Lyndalee Korn, all working under Professor Fernando Corbató. MIT developers asked for and received considerable assistance from IBM in making hardware modifications to facilitate CTSS processing. [F.J. Corbató, M.M. Daggett, R.C. Daley, R.J. Creasy, J.D. Hellwig, R.H. Orenstein, and L.K.
Korn, The Compatible Time-Sharing System: A Programmer's Guide, The MIT Press, Cambridge, MA, 1963; cited in Varian, op. cit., p. 3.] Creasy describes the important influence of CTSS on CP/CMS: "CTSS… most strongly influenced the CP/CMS system design.… [It] provided a subset of the machine for use by normal batch programs… run without modification as with a normal system. The time-sharing supervisor would steal and restore the machine without their knowledge. This technique was extended to its fullest in CP/CMS. Many other CTSS design elements and system facilities, like the user interface, terminal control, disk file system, and attachment of other computers, served as operational prototypes.… The necessity of compatibility for evolutionary growth of software was demonstrated by CTSS; for hardware, by the IBM System/360 family." Role of John McCarthy (of LISP fame) in time-sharing: "At about this time [April 1961] John McCarthy… gave a special evening lecture [stressing the future importance of time-sharing that] ended on the speculative note that computation might eventually 'be organized as a public utility, just as the telephone system is a public utility.'" This insightful prediction can be seen to anticipate the role of the Internet; similar sentiments were later expressed forcefully by Alan Kay and others. IBM leadership had a very different view of computation. IBM and MIT: IBM's president T.J. Watson "had very shrewdly given" MIT an IBM 704 for use by MIT and other New England schools, beginning what was to be a very close relationship. IBM established an MIT Liaison Office, housed at the MIT Computation Center, staffing it with skilled technicians. Watson recalls that he "went up to MIT in 1955 and urged them to start training computer scientists.… [with] a very aggressive college discount program… [so that] within five years there was a whole new generation of computer scientists who made it possible for the market to boom." Varian adds the following interesting footnote: "It appears that (without a clear directive from Corporate management) IBM's Cambridge Branch Office decided to interpret Watson's original grant to MIT as authorization for them to upgrade the system at MIT whenever IBM produced a more powerful computer." [T.J. Watson, Jr., Father, Son, and Co.: My Life at IBM and Beyond, Bantam Books, New York, 1990, pp. 244–5; cited in Varian, op. cit., p. 4] Project MAC and MULTICS: MIT's Project MAC was launched in 1963 to build a new time-sharing system following in the footsteps of CTSS. IBM submitted a bid to provide a modified S/360 with address translation (the "Blaauw Box"), which was also bid to Bell Labs around the same time. MIT and Bell Labs both chose another vendor. This had "important consequences for IBM. Seldom after that would IBM processors be the machines of choice for leading-edge academic computer science research." MULTICS and UNIX (plus various other minicomputer platforms) became the de facto research systems. MIT input to S/360: IBM personnel with close ties to MIT became "strong proponents of time-sharing" and kept System/360 designers aware of work at MIT, including the purpose of the CTSS hardware enhancements. System/360 architects visited MIT and talked with Professor Corbató. Nevertheless, the IBM belief was now that "time-sharing would never amount to anything and that what the world needed was faster batch processing." When the System/360 was announced in 1964 without address relocation, MIT and other time-sharing advocates felt betrayed.
Virtual memory and time-sharing: "In June [1964]…[MIT was] adamant that hardware-aided dynamic address translation (DAT) was essential" for time-sharing, a "still-experimental mode of operation whereby users at several consoles could share the facilities of a computer.… The most fundamental problem… was that of reallocating memory areas to user programs dynamically." MIT would not back down on this position, felt betrayed by IBM, and ultimately turned away from IBM to GE for a MULTICS platform. Cambridge Scientific Center (CSC): Established in 1964 by Norm Rasmussen, in the same building with Project MAC (a location with "ten or fifteen time-sharing systems being coded or tested or accessed" in the mid 60s), CSC developed and maintained close ties with MIT researchers. "All of IBM's contractual relationships with MIT were turned over to the new Scientific Center to administer." After losing Project MAC, the team unexpectedly had nothing to do; this was the environment in which CP-40 came to life. Two competing strategies at IBM in 1964: IBM engineers were divided over the right technical path for the company: unifying "the architecture and control programs of business and scientific computers both large and small" (championed by Brooks and Amdahl; this group rejected dynamic address translation, fearing "unevaluated techniques or technologies" as the basis of an entire product line), versus changing "the way computing power was meted out in universities and laboratories" (i.e. time-sharing, championed by MIT researchers working closely with IBM). Divided opinion about the S/360: CSC staff became champions of the System/360 architecture, in the face of deep skepticism in the scientific community. Creasy notes: "The family concept of the IBM System/360… was a most amazing turning point in computer development, one which was not universally greeted with enthusiasm. We believed [at CSC] that the architecture of System/360, combining scientific and commercial instruction sets, would be around for a significant period of time. [Eliminating] the trauma associated with widespread recoding of programs [via the S/360 promise of backward compatibility] also pointed to a long life. In addition, we speculated that many operating systems and a large number of application programs would be produced over the lifetime of that machine design." These proved to be good predictions. IBM's 1965 reorganization: The two IBM product divisions pursuing time-sharing development, the Advanced Systems Development Division (ASDD) and the Data Systems Division (DSD), were "phased out; their… resources were assigned to the new Systems Development Division (SDD)… and the new Systems Manufacturing Division [(SMD)]." The disbanding of large organizations previously responsible for time-sharing efforts suggests the political forces at work. 360/67 and TSS: Rasmussen felt betrayed by IBM's decision to ignore time-sharing, and he decided "that the Cambridge Scientific Center would write a time-sharing system for System/360." The loss of Project MAC had finally attracted attention within IBM, and resources were made available to help "win bids for time-sharing systems." Rough specifications were prepared for the new S/360-67, which would incorporate address translation (via the "DAT Box", which, unlike the "Blaauw Box", supported both segment and page tables) and a new operating system: TSS.
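The two-level segment-plus-page lookup that distinguished the DAT Box can be sketched in a few lines of code. The following is an illustration only, not IBM code: the field widths follow the S/360-67's 24-bit mode (16 segments of 256 pages of 4,096 bytes), and protection bits, fault states, and the associative translation hardware are all omitted.

```python
# Schematic two-level dynamic address translation (DAT): a virtual
# address splits into segment, page, and byte fields; the segment table
# locates a page table, whose entry supplies the real page frame.
PAGE_BITS = 12  # 4,096-byte pages
SEG_BITS = 8    # 256 pages per segment

def translate(vaddr, segment_table):
    offset = vaddr & ((1 << PAGE_BITS) - 1)
    page = (vaddr >> PAGE_BITS) & ((1 << SEG_BITS) - 1)
    segment = vaddr >> (PAGE_BITS + SEG_BITS)
    page_table = segment_table[segment]
    if page_table is None or page_table[page] is None:
        raise MemoryError("translation exception (page fault)")
    return (page_table[page] << PAGE_BITS) | offset  # real address

# Segment 0, page 0 mapped to real page frame 5; all else unmapped.
segtab = [[5] + [None] * 255] + [None] * 15
assert translate(0x000123, segtab) == (5 << 12) | 0x123
```

It is the ability to catch such translation exceptions, bring in the missing page, and retry the access that makes demand paging, and hence the fully virtualized memory of CP-40 and CP-67, possible.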
"A group of six sites… had a non-disclosure agreement" during the system's development (probably Lincoln Lab, University of Michigan, Carnegie University, Bell Labs, General Motors, and Union Carbide). The University of Michigan and MIT's Lincoln Laboratory were two of the first sales, and had a role in the hardware design. TSS was announced in August 1965, an "elegant and very ambitious system" but "snatched from its nest too young" with "serious stability and performance problems." CSC financial resources: Rasmussen used creative accounting to fund the creation of CP-40. Varian: "When IBM gave the 7094 to the MIT Computation Center, it retained the night shift on that machine for its own use. So, because the Scientific Center had inherited IBM’s contracts with MIT, Rasmussen 'owned' eight hours of 7094 time per day. He traded part of that time to the [MIT] Computation Center for CTSS time for his programmers to use in doing their development work. He 'sold' the remainder [of the 7094 time] to IBM hardware developers in Poughkeepsie, who badly needed 7094 time to run a design automation program that was critical for System/360 hardware development. With the internal funds he acquired this way, he paid for the modifications to the Model 40… [and for] part-time employees, mainly MIT students, and to pay the [temporary] salaries of IBMers who came to Cambridge to work on the system… [using] 'unbudgeted revenues'...to keep a very low profile." Rasmussen also sold spare time on another, temporary S/360-40, provided by IBM to CSC while they waited for their modified virtual memory system. If there was a perception at MIT that MIT funds went into CP/CMS, it may have come from these complicated transactions. Of course, regardless of funding issues, researchers from outside IBM, especially from MIT and Union Carbide, clearly made direct and indirect contributions. This also would have clouded perceptions of authorship. Anti-timesharing decisions: "During 1961 and 1962 time-sharing [efforts had involved] close contact with the MIT Computation Center through sales and special engineering personnel." After MIT's criticisms, and their ultimate choice of GE for MULTICS, an IBM task force "made useful suggestions but effectively endorsed the work of the 360 designers by reporting that too little was known about the time-sharing mode of operation to justify [its pursuit].… In 1967, an industry observer counted about forty general-purpose time-sharing installations in the United States – up from ten in 1965 and up from one (the MIT demonstration) in 1961. Some of the cost of development was being offset by a research agency in the Department of Defense, which sponsored six of the first dozen." Informal release of CP/CMS: CP-67/CMS "was announced informally because it was developed outside of the product development organizations in the product and marketing divisions. See IBM Installation Newsletter 68-10, 31 May 1968, 'New Type III Programs,' pp. 13–5." Note that this description does not mention company politics, the possible use of public funds, the role of Lincoln Labs, etc. Relationship of early time-sharing to introduction of HP-35: Before the availability of powerful handheld calculators in 1972, during the "first decade of time-sharing, one use for a terminal was to request minor calculations that needed to be carried to more decimal places than possible with a slide rule." This was one reason time-sharing was so important to scientific and academic users. 
Virtual memory, and IBM's fear of risk: "In the aftermath of the System/360 trauma," a reference to large-project woes such as those described by Fred Brooks, IBM executives "took steps to ensure that the company would never again become committed to such a high-risk program." It was at the height of this loin-girding that time-sharing and virtual memory were shunted aside by IBM, in favor of mainstream commercial batch processing. It is ironic that, by 1970–71, other forces within IBM "set in motion an effort to create the Future System (FS) [project with] technological objectives at least as risky as those of System/360.… Three and a half years later, the project was abandoned." It is well known that Gene Amdahl, a key FS player, continued to pursue FS objectives and technologies after leaving IBM. Intent and use of CP/CMS: Creasy provides this insight: "CMS was developed… to support its own development and maintenance… [and] maintain the other components of VM/370.… In most cases, a subset of features was selected [to implement,] with the expectation of future work. We expected many operating systems to flourish in the virtual machine environment. What better place for experimentation with new system ideas? This has not been the case. Instead, many features were added to CMS to extend its usage into areas better served by new systems." A generation later, as we face a diversity of platforms built in the collaborative open source world, it is easy to understand Creasy's hopes for CP/CMS as a development incubator, and his disappointment in what must have seemed missed opportunities. This has been the fate of many research systems, but few share the 40+ year arc of the concepts launched with CP-40. See also CP-40 – the IBM research system that validated the virtual machine concept VM – IBM's virtual machine operating system family, a reimplementation of CP/CMS Virtualization and hypervisor – concepts pioneered with CP/CMS IBM System/360-67 – the main CP/CMS hardware platform VP/CSS – a proprietary operating system from National CSS that began as a fork of CP/CMS source code Time-sharing – an industry heavily influenced by CP/CMS Time-sharing system evolution Notes References Background information Peter J. Denning, "Performance Modeling: Experimental Computer Science at its Best", Communications of the ACM, President's Letter (November 1981)― Cites the following papers relating to the IBM M44/44X: L. Belady, "A study of replacement algorithms for virtual storage computers," IBM Systems Journal Vol. 5, No. 2 (1966), pp. 78–101. L. Belady and C. J. Kuehner, "Dynamic space sharing in computer systems," Communications of ACM Vol. 12 No. 5 (May 1969), pp. 282–88. L. Belady, R. A. Nelson, and G. S. Shedler, "An anomaly in the space-time characteristics of certain programs running in paging machines," Communications of ACM Vol. 12, No. 6 (June 1969), pp. 349–53. L.W. Comeau, "CP-40, the Origin of VM/370," Proceedings of SEAS AM82 (September 1982)― description of CP-40, cited in Varian [above] Harold Feinleib, "A technical history of National CSS", Computer History Museum (March 2005). PDF. W. O'Neill, "Experience using a time sharing multiprogramming system with dynamic address relocation hardware," Proc. AFIPS Computer Conference 30 (Spring Joint Computer Conference, 1967), pp. 611–21― [Describes the experimental IBM M44/44X, reports performance measurements related to memory and paging, and discusses performance impact of multiprogramming and time-sharing.]
Dick Orenstein, " From the very beginning… from my vantage point", Computer History Museum (January 2005). PDF― early history of National CSS D. Sayre, On Virtual Systems,'' IBM Thomas J. Watson Research Center (April 15, 1966)― An early virtual machine paper, describing multiprogramming with the IBM M44/44X, an experimental paging system. Citations Family tree CP CMS History IBM mainframe operating systems Virtualization software VM (operating system)
4468459
https://en.wikipedia.org/wiki/Computer%20museum
Computer museum
A computer museum is devoted to the study of historic computer hardware and software, where a "museum" is a "permanent institution in the service of society and of its development, open to the public, which acquires, conserves, researches, communicates, and exhibits the tangible and intangible heritage of humanity and its environment, for the purposes of education, study, and enjoyment", as defined by the International Council of Museums. Some computer museums exist within larger institutions, such as the Science Museum in London and the Deutsches Museum in Munich. Others are dedicated specifically to computing, such as: the Computer History Museum in Mountain View, California, the American Computer & Robotics Museum in Bozeman, Montana, The National Museum of Computing at Bletchley Park, The Centre for Computing History in Cambridge, the Nexon Computer Museum in Jeju Province. Some specialize in the early history of computing; others cover the era that started with the first personal computers, such as the Apple I and Altair 8800, Apple IIs, older Apple Macintoshes, Commodore Internationals, Amigas and IBM PCs, as well as rarer computers such as the Osborne 1. Some concentrate more on research and conservation, others more on education and entertainment. There are also private collections, most of which can be visited by appointment. See also List of computer museums List of science museums Computer Conservation Society (UK) History of computer hardware IT History Society KansasFest annual event for Apple II computer enthusiasts. Held every July at Rockhurst University, Kansas City, Missouri. Retrocomputing Vintage Computer Festival held annually in Mountain View, California, and elsewhere Further reading Bell, Gordon (2011). Out of a Closet: The Early Years of the Computer Museums. Microsoft Technical Report MSR-TR-2011-44. Bruemmer, Bruce H. (1987). Resources for the History of Computing: A Guide to U.S. & Canadian Records. Charles Babbage Institute. Cortada, James W. (1990). Archives of Data-Processing History: A Guide to Major U.S. Collections. Greenwood References Types of museums
479801
https://en.wikipedia.org/wiki/Max%20Mathews
Max Mathews
Max Vernon Mathews (November 13, 1926 in Columbus, Nebraska, USA – April 21, 2011 in San Francisco, CA, USA) was a pioneer of computer music. Biography Mathews studied electrical engineering at the California Institute of Technology and the Massachusetts Institute of Technology, receiving a Sc.D. in 1954. Working at Bell Labs, Mathews wrote MUSIC, the first widely used program for sound generation, in 1957. For the rest of the century, he continued as a leader in digital audio research, synthesis, and human-computer interaction as it pertains to music performance. In 1968, Mathews and L. Rosler developed Graphic 1, an interactive graphical sound system on which one could draw figures using a light pen that would be converted into sound, simplifying the process of composing computer-generated music. In 1970, Mathews and F. R. Moore developed the GROOVE (Generated Real-time Output Operations on Voltage-controlled Equipment) system, the first fully developed music synthesis system for interactive composition and real-time performance, using 3C/Honeywell DDP-24 (or DDP-224) minicomputers. It used a CRT display to simplify the management of music synthesis in real time, a 12-bit D/A converter for real-time sound playback, an interface for analog devices, and several controllers, including a musical keyboard, knobs, and rotating joysticks, to capture real-time performance. Although MUSIC was not the first attempt to generate sound with a computer (an Australian CSIRAC computer played tunes as early as 1951), Mathews fathered generations of digital music tools. Speaking at "Horizons in Computer Music" (March 8–9, 1997, Indiana University), he described his work in parental terms. In 1961, Mathews arranged the accompaniment of the song "Daisy Bell" for an uncanny performance by a computer-synthesized human voice, using technology developed by John Kelly, Carol Lochbaum, Joan Miller and Lou Gerstman of Bell Laboratories. Author Arthur C. Clarke was coincidentally visiting friend and colleague John Pierce at the Bell Labs Murray Hill facility at the time of this remarkable speech synthesis demonstration and was so impressed that he later told Stanley Kubrick to use it in 2001: A Space Odyssey, in the climactic scene where the HAL 9000 computer sings while his cognitive functions are disabled. Mathews directed the Acoustical and Behavioral Research Center at Bell Laboratories from 1962 to 1985, which carried out research in speech communication, visual communication, human memory and learning, programmed instruction, analysis of subjective opinions, physical acoustics, and industrial robotics. From 1974 to 1980 he was the Scientific Advisor to the Institut de Recherche et Coordination Acoustique/Musique (IRCAM), Paris, France, and from 1987 he was Professor of Music (Research) at Stanford University. He served as the Master of Ceremonies for the concert program of NIME-01, the inaugural conference on New Interfaces for Musical Expression. Mathews was a member of the National Academy of Sciences, the National Academy of Engineering and a fellow of the American Academy of Arts and Sciences, the Acoustical Society of America, the IEEE, and the Audio Engineering Society. He received a Silver Medal in Musical Acoustics from the Acoustical Society of America, and the Chevalier de l'ordre des Arts et Lettres, République Française. The Max portion of the software package Max/MSP is named after him (the MSP portion is named for Miller Puckette, who teaches at UC San Diego).
Mathews died on the morning of 21 April 2011 in San Francisco, California of complications from pneumonia. He was 84. He was survived by his wife, Marjorie, his three sons and six grandchildren. See also Qwartz Electronic Music Awards Algorithmic composition Graphical sound References External links the GROOVE System on '120 Years Of Electronic Music' The Digital Computer as a Musical Instrument; Science, Volume 142, Issue 3592, pp. 553–557 1963–11 Max Mathews at cSounds.com Max Mathews received the Qwartz d'Honneur – 2008 Max Matthews 1926–2011 on Stretta blog Max Mathews 1926–2011 by Geeta Dayal, Frieze Magazine, May 9, 2011 Max Mathews, Computer Music Pioneer, R.I.P. Max Mathews interview in Computer Music Journal by Tae Hong Park The GROOVE System Max Mathews Interview for the NAMM (National Association of Music Merchants) Oral History Program March 29, 2007 1926 births Members of the United States National Academy of Sciences 2011 deaths American electrical engineers Fellow Members of the IEEE Scientists at Bell Labs Members of the United States National Academy of Engineering Chevaliers of the Ordre des Arts et des Lettres California Institute of Technology alumni People from Columbus, Nebraska Articles containing video clips
36046864
https://en.wikipedia.org/wiki/ERPNext
ERPNext
ERPNext is a free and open-source integrated Enterprise Resource Planning (ERP) software developed by Frappe Technologies Pvt. Ltd. and built on the MariaDB database system using Frappe, a Python-based server-side framework. ERPNext is a generic ERP software used by manufacturers, distributors and services companies. It includes modules such as accounting, CRM, sales, purchasing, website, e-commerce, point of sale, manufacturing, warehouse, project management, inventory, and services. It also has domain-specific modules for schools, healthcare, agriculture, and non-profits. ERPNext is an alternative to NetSuite and QAD, and similar in function to Odoo (formerly OpenERP), Tryton and Openbravo. ERPNext was included in the ERP FrontRunners list by Gartner as a Pacesetter. Core modules ERPNext contains these modules: Accounting Asset management Customer relationship management (CRM) Human resource management (HRM) Payroll Project management Purchasing Sales management Warehouse management system Website Industry modules Manufacturing - Manufacturing Point of sale (POS) - Retail Student Information system - Education Hospital Information system - Healthcare Agriculture Management - Agriculture Nonprofit Organization - Non Profit Software license ERPNext is released under the GPL-3.0-only license. Consequently, ERPNext does not require license fees, as opposed to proprietary ERP vendors. In addition, as long as the terms of the license are adhered to, modification of the program is possible. Architecture ERPNext has a Model-View-Controller architecture with metadata modeling tools that add flexibility for users to adapt the software to unique purposes without the need for programming. Some attributes of the architecture are: All objects in the ERP are DocTypes (not to be confused with HTML doctypes), and the views are generated directly in the browser. The client interacts with the server via JSON data objects on a Representational state transfer (RESTful) supporting server, as illustrated in the sketch below. There is the ability to plug in (event-driven) code on both the client and server side. The underlying web app framework is called "Frappe Framework" and is maintained as a separate open-source project. Frappe started as a web-based metadata framework inspired by Protégé, though it has evolved differently. This architecture allows rapid application development (RAD).
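The JSON/REST exchange described above can be made concrete with a short script. The sketch below is illustrative rather than official client code: it assumes a locally hosted ERPNext site and an API key/secret pair generated for a user (the host URL, credentials, and filter values are placeholders), and it reads a DocType through Frappe's documented /api/resource endpoint.

```python
# Minimal sketch: reading an ERPNext DocType over Frappe's REST interface.
# Assumptions: a local site at http://localhost:8000 and placeholder API
# credentials; substitute real values for an actual deployment.
import requests

BASE_URL = "http://localhost:8000"
HEADERS = {"Authorization": "token <api_key>:<api_secret>"}  # placeholder token

# Every object in the system is a DocType, and each DocType is exposed as a
# REST resource; the server replies with JSON data objects.
response = requests.get(
    f"{BASE_URL}/api/resource/Customer",
    headers=HEADERS,
    params={"filters": '[["customer_group", "=", "Commercial"]]'},
)
response.raise_for_status()
for customer in response.json()["data"]:
    print(customer["name"])
```

The same resource path also supports POST, PUT and DELETE for create, update and delete operations in Frappe's REST API, which is how browser-generated views and plug-in code round-trip data to the server.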
Source code and documentation ERPNext source code is hosted on GitHub, using the Git revision control system, and contributions are also handled using GitHub. A complete user manual is available at the project website. Software as a service ERPNext is available both self-hosted and as software as a service (SaaS) from the project's website. Investment In November 2020, the Rainmatter incubator invested ₹10 crore ($1.3M) in Frappe Technologies PL, to support development of ERPNext, other open-source products, and scaling plans. Release history FOSS United FOSS United (formerly the ERPNext Open Source Software Foundation) is a not-for-profit organization. The goal of the foundation is to provide a platform for the FOSS community of India to come together and build open-source applications. The foundation also organises events such as conferences and code sprints. See also iDempiere List of free and open source software packages References External links ERP software Accounting software 2008 software Free customer relationship management software Free accounting software Point of sale companies Free ERP software Enterprise resource planning software for Linux CRM software companies Software using the GPL license
20571108
https://en.wikipedia.org/wiki/1967%20USC%20Trojans%20football%20team
1967 USC Trojans football team
The 1967 USC Trojans football team represented the University of Southern California (USC) in the 1967 NCAA University Division football season. In their eighth year under head coach John McKay, the Trojans compiled a 10–1 record (6–1 against conference opponents), won the Athletic Association of Western Universities (AAWU or Pac-8) championship, and outscored their opponents by a combined total of 258 to 87. The team was ranked #1 in the final AP and Coaches Polls. Steve Sogge led the team in passing, completing 75 of 151 passes for 1,032 yards with seven touchdowns and seven interceptions. O. J. Simpson led the team in rushing with 291 carries for 1,543 yards and 13 touchdowns. Earl McCullouch led the team in receiving with 30 catches for 540 yards and five touchdowns. Simpson won the Walter Camp Award. Robert Kardashian is said to have met O. J. Simpson while serving as a water boy for the team. The relationship would later culminate in Kardashian being a part of the Dream Team in the O. J. Simpson murder case. Schedule Personnel Game summaries Washington O. J. Simpson: 30 rushes, 235 yards UCLA The University of California at Los Angeles, 7–0–1 and ranked Number 1, with senior quarterback Gary Beban as a Heisman Trophy candidate, played the University of Southern California, 8–1 and ranked Number 4, with junior running back O. J. Simpson as a Heisman candidate. This game is widely regarded as the signature game in the UCLA–USC rivalry, and the Trojans won by a score of 21–20. 1967 Trojans in the NFL O. J. Simpson Ron Yary Awards and honors O. J. Simpson (Junior), Running back, Walter Camp Award Ron Yary (Senior), Tackle, Outland Trophy References USC USC Trojans football seasons College football national champions Pac-12 Conference football champion seasons Rose Bowl champion seasons USC Trojans football
6526
https://en.wikipedia.org/wiki/Cassandra
Cassandra
Cassandra or Kassandra (Ancient Greek: Κασσάνδρα), sometimes referred to as Alexandra, was a Trojan priestess of Apollo in Greek mythology, cursed to utter true prophecies but never to be believed. In modern usage her name is employed as a rhetorical device to indicate someone whose accurate prophecies are not believed. Cassandra was said to be a daughter of King Priam and Queen Hecuba of Troy. Her older brother was Hector, hero of the Trojan War. The older and most common versions state that she was admired by the god Apollo, who sought to win her with the gift to see the future. According to Aeschylus, she promised him her favors, but after receiving the gift, she went back on her word and refused the god. The enraged Apollo could not revoke a divine power, so he added to it the curse that though she would see the future, nobody would believe her prophecies. In other sources, such as Hyginus and Pseudo-Apollodorus, Cassandra broke no promise; the powers were given to her as an enticement. When these failed to make her love him, Apollo cursed Cassandra to always be disbelieved, in spite of the truth of her words. Some later versions have her falling asleep in a temple, where snakes licked (or whispered in) her ears so that she could hear the future. Etymology Hjalmar Frisk (Griechisches Etymologisches Wörterbuch, Heidelberg, 1960–1970) notes "unexplained etymology", citing "various hypotheses" found in Wilhelm Schulze, Edgar Howard Sturtevant, J. Davreux, and Albert Carnoy. R. S. P. Beekes cites García Ramón's derivation of the name from the Proto-Indo-European root *(s)kend- "raise". The Online Etymology Dictionary notes that "the second element looks like a fem. form of Greek andros "of man, male human being." Watkins suggests PIE *(s)kand- "to shine" as source of the second element. The name also has been connected to kekasmai "to surpass, excel." Biography Cassandra was one of the many children born to the king and queen of Troy, Priam and Hecuba. She is the fraternal twin sister of Helenus, as well as the sister to Hector and Paris. One of the oldest and most common versions of her myth states that Cassandra was admired for her beauty by the god Apollo, who sought to win her with the gift to see the future. According to Aeschylus, Cassandra promised Apollo favors, but, after receiving the gift, went back on her word and refused Apollo. Since the enraged Apollo could not revoke a divine power, he added a curse that nobody would believe Cassandra's prophecies. Mythology Cassandra appears in texts written by Homer, Virgil, Aeschylus and Euripides. Each author depicts her prophetic powers differently. In Homer's work, Cassandra is mentioned a total of four times "as a virgin daughter of Priam, as bewailing Hector’s death, as chosen by Agamemnon as his slave mistress after the sack of Troy, and as killed by Clytemnestra over Agamemnon’s corpse after Clytemnestra murders him on his return home." In Virgil's work, Cassandra appears in book two of his epic poem, the Aeneid, with her powers of prophecy restored. Unlike Homer, Virgil presents Cassandra as having fallen into a manic state, and her prophecies reflect it. In book 2, she gives her prophecy of why Agamemnon deserves the death that awaits him: Quid me vocatis sospitem solam e meis, umbrae meorum? te sequor, tota pater Troia sepulte; frater, auxilium Phrygum terrorque Danaum, non ego antiquum decus video aut calentes ratibus ambustis manus, sed lacera membra et saucios vinclo gravi illos lacertos. te sequor… (Ag.
741–747) Why do you call me, the lone survivor of my family, My shades? I follow you, father buried with all of Troy; Brother, bulwark of Trojans, terrorizer of Greeks, I do not see your beauty of old or hands warmed by burnt ships, But your lacerated limbs and those famous shoulders savaged By heavy chains. I follow you… Later on in Virgil's work, this behavior is reflected in acts 4 and 5: "Her mantic vision in act 4 will be supplemented by a further (in)sight into what is going on inside the palace in act 5 when she becomes a quasi-messenger and provides a meticulous account of Agamemnon’s murder in the bath: “I see and I am there and I enjoy it, no false vision deceives my eyes: let’s watch” (video et intersum et fruor, / imago visus dubia non fallit meos: / spectemus)." Gift of prophecy Cassandra was given the gift of prophecy, but was also cursed by the god Apollo so that her true prophecies would not be believed. Many versions of the myth relate that she incurred the god's wrath by refusing him sexual favours after promising herself to him in exchange for the power of prophecy. In Aeschylus' Agamemnon, she bemoans her relationship with Apollo: Apollo, Apollo! God of all ways, but only Death's to me, Once and again, O thou, Destroyer named, Thou hast destroyed me, thou, my love of old! And she acknowledges her fault: I consented [marriage] to Loxias [Apollo] but broke my word. ... Ever since that fault I could persuade no one of anything. The Latin author Hyginus gives his own account of the story in the Fabulae. Louise Bogan, an American poet, describes another way Cassandra and her twin brother Helenus were said to have earned their prophetic powers: "she and her brother Helenus were left overnight in the temple of the Thymbraean Apollo. No reason has been advanced for this night in the temple; perhaps it was a ritual routinely performed by everyone. When their parents looked in on them the next morning, the children were entwined with serpents, which flicked their tongues into the children's ears. This enabled Cassandra and Helenus to divine the future." Not until Cassandra was much older did Apollo appear in the same temple and try to seduce her; when she rejected his advances, he cursed her so that her prophecies would never be believed. Her cursed gift from Apollo became an endless pain and frustration to her. She was seen as a liar and a madwoman by her family and by the Trojan people. Because of this, her father, Priam, locked her away in a chamber and guarded her like the madwoman she was believed to be. Though Cassandra made many predictions that went unbelieved, the one prophecy that was believed was her identification of Paris as her abandoned brother. Cassandra and the Fall of Troy Before the fall of Troy Before the fall of Troy, Cassandra foresaw that if Paris went to Sparta and brought Helen back as his wife, Helen's arrival would spark the city's downfall and destruction in the Trojan War. Ignoring Cassandra's warning, Paris went to Sparta and returned with Helen. While the people of Troy rejoiced, Cassandra, angry at Helen's arrival, furiously snatched away Helen's golden veil and tore at her hair.
In Virgil's epic poem, the Aeneid, Cassandra warned the Trojans about the Greeks hiding inside the Trojan Horse, Agamemnon's death, her own demise at the hands of Aegisthus and Clytemnestra, her mother Hecuba's fate, Odysseus's ten-year wanderings before returning to his home, and the murder of Aegisthus and Clytemnestra by the latter's children Electra and Orestes. Cassandra predicted that her cousin Aeneas would escape during the fall of Troy and found a new nation in Rome. During the fall of Troy Coroebus and Othronus came to the aid of Troy during the Trojan War out of love for Cassandra and in exchange for her hand in marriage, but both were killed. According to one account, Priam offered Cassandra to Telephus's son Eurypylus, in order to induce Eurypylus to fight on the side of the Trojans. Cassandra was also the first to see the body of her brother Hector being brought back to the city. In The Fall of Troy, told by Quintus Smyrnaeus, Cassandra attempted to warn the Trojan people that Greek warriors were hiding in the Trojan Horse while they were celebrating their victory over the Greeks with feasting. Disbelieving Cassandra, the Trojans resorted to calling her names and hurling insults at her. Attempting to prove herself right, Cassandra took an axe in one hand and a burning torch in the other, and ran towards the Trojan Horse, intent on destroying the Greeks herself, but the Trojans stopped her. The Greeks hiding inside the Horse were relieved, but alarmed by how clearly she had divined their plan. At the fall of Troy, Cassandra sought shelter in the temple of Athena. There she embraced the wooden statue of Athena in supplication for her protection, but was abducted and brutally raped by Ajax the Lesser. Cassandra clung so tightly to the statue of the goddess that Ajax knocked it from its stand as he dragged her away. The actions of Ajax were a sacrilege because Cassandra was a supplicant at the sanctuary, and under the protection of the goddess Athena, and Ajax further defiled the temple by raping Cassandra. In Apollodorus, chapter 6, section 6, Ajax's death comes at the hands of both Athena and Poseidon: "Athena threw a thunderbolt at the ship of Ajax; and when the ship went to pieces he made his way safe to a rock, and declared that he was saved in spite of the intention of Athena. But Poseidon smote the rock with his trident and split it, and Ajax fell into the sea and perished; and his body, being washed up, was buried by Thetis in Myconos". In some versions, Cassandra intentionally left a chest behind in Troy, with a curse on whichever Greek opened it first. Inside the chest was an image of Dionysus, made by Hephaestus and presented to the Trojans by Zeus. It was given to the Greek leader Eurypylus as a part of his share of the victory spoils of Troy. When he opened the chest and saw the image of the god, he went mad. The aftermath of Troy and Cassandra's death Once Troy had fallen, Cassandra was taken as a pallake (concubine) by King Agamemnon of Mycenae. While Agamemnon was away at war, and unknown to him, his wife, Clytemnestra, had taken Aegisthus as her lover. Cassandra and Agamemnon were later killed by Clytemnestra and Aegisthus. Various sources state that Cassandra and Agamemnon had twin boys, Teledamus and Pelops, who were murdered by Aegisthus. The final resting place of Cassandra is either in Amyclae or Mycenae. Statues of Cassandra stood both in Amyclae and in Leuctra.
In Mycenae, the German businessman and pioneering archaeologist Heinrich Schliemann discovered in Grave Circle A the graves of Cassandra and Agamemnon, and telegraphed back to King George of Greece: "With great joy I announce to Your Majesty that I have discovered the tombs which the tradition proclaimed by Pausanias indicates to be the graves of Agamemnon, Cassandra, Eurymedon and their companions, all slain at a banquet by Clytemnestra and her lover Aegisthos." However, it was later discovered that the graves predated the Trojan War by at least 300 years. Agamemnon by Aeschylus The play Agamemnon from Aeschylus's trilogy Oresteia depicts the king treading the scarlet cloth laid down for him, and walking offstage to his death. After the chorus's ode of foreboding, time is suspended in Cassandra's "mad scene". She has been onstage, silent and ignored. The madness that is unleashed now is not the physical torment of other characters in Greek tragedy, such as in Euripides' Heracles or Sophocles' Ajax. According to author Seth Schein, two further familiar descriptions of her madness are those of Heracles in The Women of Trachis and Io in Prometheus Bound. She speaks, disconnected and transcendent, in the grip of her psychic possession by Apollo, witnessing past and future events. Schein says, "She evokes the same awe, horror and pity as do schizophrenics". Cassandra is one of those "who often combine deep, true insight with utter helplessness, and who retreat into madness." Eduard Fraenkel remarked on the powerful contrasts between declaimed and sung dialogue in this scene. The frightened and respectful chorus are unable to comprehend her. She goes to her inevitable offstage murder by Clytemnestra with full knowledge of what is to befall her. See also Apollo archetype Novikov self-consistency principle The Boy Who Cried Wolf Tiresias Voice in the Wilderness Notes References Primary sources Homer. Iliad XXIV, 697–706; Odyssey XI, 405–434; Aeschylus. Agamemnon Euripides. The Trojan Women; Electra Bibliotheca III, xii, 5; Epitome V, 17–22; VI, 23 Virgil. Aeneid II, 246–247, 341–346, 403–408 Lycophron. Alexandra Triphiodorus: The Sack of Troy Quintus Smyrnaeus: Posthomerica Further reading Clarke, Lindsay. The Return from Troy. HarperCollins (2005). Marion Zimmer Bradley. The Firebrand. Patacsil, Par. Cassandra. In The Likhaan Book of Plays 1997–2003. Villanueva and Nadera, eds. University of the Philippines Press (2006). Ukrainka, Lesya. "Cassandra". Original Publication: Lesya Ukrainka. Life and work by Constantine Bida. Selected works, translated by Vera Rich. Toronto: Published for the Women's Council of the Ukrainian Canadian Committee by University of Toronto Press (1968). pp. 181–239 Schapira, Laurie L. The Cassandra Complex: Living with Disbelief: A Modern Perspective on Hysteria. Toronto: Inner City Books (1988). External links Classical oracles Mythological Greek seers Greek mythological priestesses Mythological rape victims Princesses in Greek mythology Trojans Children of Priam Female lovers of Apollo Women of the Trojan war Characters in the Aeneid Metamorphoses characters Characters in Greek mythology Women in Greek mythology
64039706
https://en.wikipedia.org/wiki/Comparison%20of%20Gaussian%20process%20software
Comparison of Gaussian process software
This is a comparison of statistical analysis software that allows inference with Gaussian processes, often using approximations. This article is written from the point of view of Bayesian statistics, which may use a terminology different from the one commonly used in kriging. The next section should clarify the mathematical/computational meaning of the information provided in the table independently of contextual terminology. Description of columns This section details the meaning of the columns in the table below. Solvers These columns are about the algorithms used to solve the linear system defined by the prior covariance matrix, i.e., the matrix built by evaluating the kernel. Exact: whether generic exact algorithms are implemented. These algorithms are usually appropriate only up to some thousands of datapoints. Specialized: whether specialized exact algorithms for specific classes of problems are implemented. Supported specialized algorithms may be indicated as: Kronecker: algorithms for separable kernels on grid data. Toeplitz: algorithms for stationary kernels on uniformly spaced data. Semisep.: algorithms for semiseparable covariance matrices. Sparse: algorithms optimized for sparse covariance matrices. Block: algorithms optimized for block diagonal covariance matrices. Markov: algorithms for kernels which represent (or can be formulated as) a Markov process. Approximate: whether generic or specialized approximate algorithms are implemented. Supported approximate algorithms may be indicated as: Sparse: algorithms based on choosing a set of "inducing points" in input space. Hierarchical: algorithms which approximate the covariance matrix with a hierarchical matrix. Input These columns are about the points on which the Gaussian process is evaluated, i.e., the points x at which a process f(x) is evaluated. ND: whether multidimensional input is supported. If it is, multidimensional output is always possible by adding a dimension to the input, even without direct support. Non-real: whether arbitrary non-real input is supported (for example, text or complex numbers). Output These columns are about the values yielded by the process, and how they are connected to the data used in the fit. Likelihood: whether arbitrary non-Gaussian likelihoods are supported. Errors: whether arbitrary non-uniform correlated errors on datapoints are supported for the Gaussian likelihood. Errors may be handled manually by adding a kernel component; this column is about the possibility of manipulating them separately. Partial error support may be indicated as: iid: the datapoints must be independent and identically distributed. Uncorrelated: the datapoints must be independent, but can have different distributions. Stationary: the datapoints can be correlated, but the covariance matrix must be a Toeplitz matrix; in particular, this implies that the variances must be uniform. Hyperparameters These columns are about finding values of variables which enter somehow in the definition of the specific problem but that can not be inferred by the Gaussian process fit, for example parameters in the formula of the kernel. Prior: whether specifying arbitrary hyperpriors on the hyperparameters is supported. Posterior: whether estimating the posterior is supported beyond point estimation, possibly in conjunction with other software. If both the "Prior" and "Posterior" cells contain "Manually", the software provides an interface for computing the marginal likelihood and its gradient with respect to the hyperparameters, which can be fed into an external optimization/sampling algorithm, e.g., gradient descent or Markov chain Monte Carlo; the sketch below shows where these quantities come from.
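To make the table's terminology concrete, the following sketch shows what a generic "exact" solver does and where the log marginal likelihood comes from. It is a minimal, self-contained illustration rather than code from any package in the table: it assumes a squared-exponential kernel on one-dimensional inputs with fixed hyperparameters, and follows the standard Cholesky-based formulation of exact GP regression.

```python
import numpy as np

def rbf_kernel(x1, x2, variance=1.0, lengthscale=1.0):
    """Squared-exponential kernel evaluated on two sets of 1-D inputs."""
    sq_dist = (x1[:, None] - x2[None, :]) ** 2
    return variance * np.exp(-0.5 * sq_dist / lengthscale**2)

def gp_regression(x_train, y_train, x_test, noise=0.1):
    """Exact GP regression via a Cholesky solve of the prior covariance.

    Returns the posterior mean, posterior variance, and the log marginal
    likelihood -- the quantity a hyperparameter optimizer would maximize.
    """
    n = len(x_train)
    K = rbf_kernel(x_train, x_train) + noise**2 * np.eye(n)
    L = np.linalg.cholesky(K)          # O(n^3): why exact solvers stop scaling
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))

    K_s = rbf_kernel(x_train, x_test)
    mean = K_s.T @ alpha               # posterior mean at the test points
    v = np.linalg.solve(L, K_s)
    var = rbf_kernel(x_test, x_test).diagonal() - np.sum(v**2, axis=0)

    log_ml = (-0.5 * y_train @ alpha
              - np.sum(np.log(np.diag(L)))
              - 0.5 * n * np.log(2 * np.pi))
    return mean, var, log_ml

x = np.linspace(0.0, 5.0, 20)
y = np.sin(x) + 0.1 * np.random.default_rng(0).normal(size=x.size)
mean, var, log_ml = gp_regression(x, y, np.linspace(0.0, 5.0, 50))
```

The specialized and approximate solvers in the table replace the cubic-cost Cholesky step above: Kronecker, Toeplitz and semiseparable methods exploit structure in the covariance matrix, while sparse methods build a low-rank approximation from inducing points.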
Linear transformations These columns are about the possibility of fitting datapoints simultaneously to a process and to linear transformations of it. Deriv.: whether it is possible to take an arbitrary number of derivatives, up to the maximum allowed by the smoothness of the kernel, for any differentiable kernel. Partial support may be indicated by, for example, a maximum number of derivatives or implementation only for some kernels. Integrals can be obtained indirectly from derivatives. Finite: whether finite arbitrary linear transformations are allowed on the specified datapoints. Sum: whether it is possible to sum various kernels and access separately the processes corresponding to each addend. It is a particular case of finite linear transformation, but it is listed separately because it is a common feature. Comparison table Notes References External links The website hosting C. E. Rasmussen's book Gaussian processes for machine learning; contains a (partially outdated) list of software. Comparisons of mathematical software Statistical software Statistics-related lists
24396320
https://en.wikipedia.org/wiki/Kymberly%20Pine
Kymberly Pine
Kymberly Marcos Pine (born September 8, 1970) is an American politician and Democrat who served two terms on the Honolulu City Council, representing District 1 from 2013 to 2021. She was the Chair of the Council Committee on Business, Economic Development and Tourism. Prior to being elected to the City Council, she served four terms as a Representative in the State House of Representatives. On October 28, 2019, Pine announced her candidacy for Mayor of Honolulu. Early life and education Pine grew up on the North Shore of Oahu. Her father is a professor of philosophy at the University of Hawaiʻi and Honolulu Community College. Her mother is a retired nurse of Spanish, Filipino, and Chinese descent, born and raised on Oʻahu. Her maternal grandparents were a Filipino immigrant and a Maui-born Filipino plantation worker. Her paternal grandparents are of Irish, English and Scottish descent. She is directly related to Ferdinand Marcos. Her grandfather served in the United States Coast Guard as a chef during the Attack on Pearl Harbor. Pine has worked with American Veterans – Hawaiʻi, a non-profit transitional home for formerly homeless veterans, located in the Ewa District, Hawaii. Pine played shortstop on the Manoa All Star Little League Team, disguising herself as a boy in order to qualify. She attended Waialua and Moanalua High Schools and was a member of the Hawaiʻi Olympic Development Soccer Team. Selected as the Oʻahu Interscholastic Association West All-Star MVP, she also ran cross-country and track, placing second in the OIA in various competitions. Pine graduated from the University of California at Berkeley in 2000 with a degree in English. She was a member of the Alpha Chi Omega sorority. Career Pine served as the Director of the Hawaii House of Representatives Minority Research Office (2002–2004). She was the chief of staff for Representative David Pendleton from 1997 to 2001. Pine was elected to the State House of Representatives in 2004 to represent district 43, defeating an incumbent with about 60% of the vote. District 43 then covered the Ewa Beach, Iroquois Point, and Puʻuloa areas. She served in the state house from 2004 to 2012 and was the first Republican to be elected to this seat since Hawaiʻi's statehood. In 2007, Pine was named one of the nation's 100 most influential half-Filipino women by the Filipina Women's Network. From 2010 to 2012 she served as the House Minority floor leader. One of Pine's 2012 priorities for the legislative session was to keep the community informed regarding the closure of the Hawaii Medical Center West facility in Ewa Beach in December 2011. In 2012, she created the Hire Leeward Job and Career Fair. Originally it was a five-year initiative to help the 1,000 people who lost their jobs from the HMC West and East locations. Organizers expected 600–800 attendees, and 3,000 showed. The events brought thousands of jobs to West Oʻahu residents each year. In August 2019, they completed the 7th Annual Hire Leeward Job and Career Fair. Honolulu City Council In 2012, Pine was elected to the Honolulu City Council, representing District 1, which includes the areas of ʻEwa, ʻEwa Beach, Kapolei, Honokai Hale, Ko Olina, Nanakuli, Maʻili, Waiʻanae, Makaha, Keaau and Makua. She beat incumbent Tom Berg by more than 25 percentage points. In 2016, she won re-election in a landslide to serve her second four-year term.
During her time on the Honolulu City Council, she fought for the improvement of parks throughout District 1 and allocated millions of dollars in Capital Improvement Project funds to improve infrastructure, roads, parks and public facilities throughout the Leeward Coast. In 2017, Councilmember Pine spearheaded efforts to get Kapolei/ʻEwa designated as a "Blue Zone Project" area. The project creates community awareness through events and opportunities for Leeward residents to learn about healthy lifestyle choices in order to reduce the high rate of health risks impacting the district. As Chair of the Zoning and Housing Committee on the City Council, she worked with developers, nonprofits and lawmakers to change long-standing practices that made it difficult to build more affordable housing. This resulted in several affordable housing projects that provide units at all income levels on Oʻahu, which currently experiences a housing crisis. She also worked to pass legislation that protects residentially zoned areas to reduce the impact of illegal construction and uses. Pine resigned from the Republican Party on November 9, 2016, stating that many of the national party's new priorities had diverged from her long-held philosophical beliefs about inclusivity and progress. Leeward Oʻahu The Leeward Oʻahu district of Ewa Beach, Kapolei, Honokai Hale, Ko Olina, Nanakuli, Maʻili, Waiʻanae, Makaha, Keaau and Makua had previously seen millions of dollars in needed infrastructure improvements go unaddressed. Since Pine became a state House member in 2004, the Leeward Coast has obtained over $1 billion in infrastructure improvements. Her efforts have resulted in a crackdown on illegal dumping, improvements to parks, and enhanced safety in the Leeward district. In 2012, she created the Hire Leeward Job and Career Fair. Originally a five-year initiative to help the 1,000 people impacted by the closure of the Hawaiʻi Medical Center East in Liliha and the Hawaiʻi Medical Center West in Ewa, the job fair drew 3,000 attendees. As of 2020, the annual Hire Leeward Job and Career Fair continues to place thousands of West Oʻahu residents in Leeward jobs. Safety initiatives To improve the safety of both tourists and residents on public beaches, Pine introduced Bill 39 in 2019, which became Ordinance 19-26, extending lifeguard hours for the island of Oahu from sunup to sundown, in response to drownings that occurred in the hours before and after lifeguards were on duty. She originally introduced Res. 16-43 to extend lifeguard hours in March 2016, and then introduced a bill to make this law in 2019. The resolution authorized a four-day work week with ten-hour shifts for lifeguards based upon a successful pilot program for lifeguards at Hanauma Bay. Environmental issues Pine promoted several pieces of legislation to protect the environment, including a 2017 bill banning the use of Styrofoam food containers. In 2019, Pine joined the Honolulu City Council to pass Bill 40, which became Ordinance 19-30, thought at the time to be the most comprehensive phase-out of plastics in the nation. Citing case studies that flexible schedules improved workplace performance, reduced sick time and worker's compensation claims, and reduced energy use to mitigate carbon emissions, on January 15, 2020, Pine introduced Resolution 20-8. This Resolution asked the city to adopt a four-day, ten-hour work week for city workers.
As Chair of the Business, Economic Development and Tourism Committee, Pine created several pieces of legislation called the "Keep Hawaiʻi Hawaiʻi" Package. In 2019, Pine introduced Bill 34 to require the visitor industry to provide annual reports on the progress of sustainability efforts to the city. She introduced Bill 51, "Keep Hawaiʻi Hawaiʻi – A Promise to Our Keiki," which became Ordinance 20-002 in 2020 and asks tourists and locals to sign a pledge to respect the environment, wildlife and culture of Oʻahu. Bill 3 (2020) introduced a Keep Hawaiʻi Hawaiʻi Pass to allow tourists and locals to purchase a pass to several city attractions, and Bill 68 (2019) would create a fund for the proceeds to supplement impacts to City emergency services, infrastructure, parks and beaches from tourism. She also introduced resolutions to ask the state legislature to require that educational videos be shown to airline and cruise passengers regarding environmental and cultural issues, and encouraged the state legislature to consider visitor impact fees. Inclusionary housing and homelessness In 2019, Pine supported legislation to limit short-term vacation rentals that negatively impacted many residential neighborhoods and constrained the rental housing market by taking units offline. The hospitality industry also blamed vacation rentals for suppressed growth in tourism-related dollars. The bill became Ordinance 18 (2019). Pine, who chaired the Committee on Zoning and Housing, created legislation (Bill 7, 2019) that changed zoning requirements to allow owners of small lots to develop affordable rental housing, which became Ordinance 19-8 (2019). In response to critical shortages in housing, in 2019, Pine introduced legislation to amend land use regulations for low-rise apartment dwellings and enable low-cost housing construction, and Bill 29 to incentivize development of affordable housing. Bill 58 (2017)/Ordinance 18-10, redrafted in part by Pine, established incentives for developing affordable housing that extend affordability from 10 years to 30 years, depending on the number of units. She supported legislation to end "monster homes" (Ordinance 18-6) on Oʻahu, which were blamed by many groups for violating safety codes, functioning as illegal rentals and filling residential streets with illegal parking. New regulations require parking spaces based on the size of the home, minimum yard setbacks and limits to the number of wet bars and bathrooms. In June 2019, the City Council approved $23 million that Pine had appropriated for her plan to address homelessness in each of the nine council districts. The funding can also be used for facilities including rest stops, shelters, outreach centers and affordable housing in Waianae; it provides $2 million for homeless service zones with a hygiene facility and creates a center where health and human services can be administered. Sweeping change Pine entered the 2020 mayoral race calling for sweeping changes to end corruption in government in the wake of the Katherine (former Honolulu City deputy prosecutor) and Louis Kealoha (former Honolulu police chief) scandal, the Federal investigation of HART, and the Save Sherwood Forest protests in Waimanalo. Pine has said publicly that she considers herself "an outsider to Honolulu politicians." She has publicly challenged the policies of Mayor Kirk Caldwell and the project delays and mismanagement of the city's Honolulu Rail Transit (HART).
Originally estimated to cost taxpayers $5.3 billion, it is now estimated to total $9–13 billion, and has become the subject of an FTA investigation. In 2017, Pine opposed lifting the rail tax cap. Pine opposed Bill 66, a Caldwell-supported bill to raise fares on public transportation, including TheBus and TheHandiVan. Pine opposed the construction of a 17-acre, $32 million sports complex in Waimanalo at Sherwood Forest. Hundreds of Native Hawaiians opposed the complex, and 28 were arrested for blocking access to Waimanalo Bay Beach Park. The protesters cited concerns about archaeological and cultural significance, overdevelopment and traffic. Pine publicly opposed construction of thirteen 260-foot wind turbines in the Palehua Agricultural lands by EE Ewa LLC (Eurus Energy), in a power purchase by Hawaiian Electric. She joined the local neighborhood boards, who opposed building on sacred Hawaiian lands. In addition, the turbines could impact homeowners' views. Council Member Pine also challenged the Police Commission in September 2019, voting with the city council against a $100 million payout for criminal attorneys representing convicted former police chief Louis Kealoha. She has publicly stated that she feels it is "ridiculous" and that the city should not be financially responsible for the willful criminal acts of city employees, opposing the use of taxpayer funds for out-of-state lawyers to defend employees of the rail project, the police department or the prosecutor's office who may have participated in illegal acts while on city time. She also spoke against the Police Commission's decision to grant a $250,000 severance payout to Louis Kealoha. Parks Pine brought millions of dollars to the Leeward Coast to improve infrastructure, enhance security and clean up public parks for her constituents. She has sought to fund improvements and enhancements for public parks and recreation. Pine introduced several measures in support of public parks and public safety throughout Oʻahu. Pine's Resolution 19-333 would enable the city to increase the number of park rangers and expand the program island-wide in order to promote public safety and environmental protection. Pine supports alternative funding through private sponsorship for the historic Honolulu Zoo, which, due to lack of funding, lost its accreditation in 2016. In addition, in 2015, she introduced Bill 78, CD1, FD1 to acknowledge sponsorship of city assets with name recognition to enhance public-private partnership possibilities (Ordinance 15-42, 2015). In 2018, Pine accused Mayor Caldwell of playing favorites with park monies, spending the bulk of funds allocated for park improvements at the popular tourist site Ala Moana Park, while Leeward Coast parks suffered from potholed parking lots, homeless encampments, trash and unaddressed safety problems. Citing inequitable distribution of city resources, Pine introduced Resolution 19-091 to require an audit of the Department of Parks and Recreation to determine whether all Oahu parks were receiving fair treatment. After years of citing criminal activity and homeless encampments at Oneʻula Beach Park in Ewa Beach, nightly closure hours were finally instituted in January 2020, and a master plan for improvements was implemented. Women In 2016, Pine called for a performance audit to examine how the Prosecutor's office and the Honolulu Police Department handle domestic violence, enforce temporary restraining orders and process cases through the courts.
Resolution 16-001 produced a report showing that cases of domestic violence increased 600% from 2013 to 2016 and that only 14% of those cases ever reached court. In 2020, Pine authored Bill 10 to ensure gender equity and fair allocation of permits for the use of park facilities for sports, after female surfers complained that for ten years they had been unable to obtain a permit for women's surf contests on the North Shore of Oahu during prime surf seasons. Additional city legislation Pine supported Res. 18-73, a resolution to facilitate the development of a race track or raceway in Honolulu with the use of private investment and funding. In 2016, Pine and a fellow councilmember introduced Bill 24 to strengthen the enforcement of restrictions on illegal dumping of bulky items. The new law allowed inspectors to fine the individual perpetrators who illegally dump bulky items, not just the nearby residents and managers. In response to complaints of frequently closed and overwhelmed Leeward refuse centers, she introduced Resolution 19-101 to require that the City Department of Environmental Services conduct an evaluation of Leeward sites and provide recommendations on how to improve services. In the report, the department identified staffing shortages and lack of capacity as challenges. Cyber crime Pine was a victim of cybercrime in 2011 and worked to help strengthen the state's cybercrime laws by introducing four groundbreaking bills to curb the growing cyber crime trend in Hawaii. The bills were the result of the cyber crime informational briefing co-chaired by Pine. On July 10, 2012, all four bills became law. Under these laws, prosecutors and law enforcement gained increased ability to investigate, obtain evidence, and bring cyber criminals to justice with new or stiffer penalties: (HB 1777) A measure allowing out-of-state records to be subpoenaed in criminal cases. Authorizes judges in Hawaii's State court system to require that certain records located or held by entities outside Hawaii be released to the prosecution or defense in a criminal case. Prosecutors will now be able to obtain evidence that is often in the hands of mainland corporations, such as cell phone records. The Honolulu Prosecutor's Office advocated for the bill, testifying that it was the most important action Hawaii could take to aid in the prosecution of cybercriminals. (HB 1788) A measure to strengthen Hawaii's existing computer fraud and unauthorized computer access laws. A cybercrime omnibus bill that strengthens existing computer crime laws by making computer fraud laws mirror Hawaii's identity theft laws; the result is that accessing a computer with the intent to commit theft becomes a more serious offense. The law also imposes harsher penalties by reclassifying the severity of computer fraud and unauthorized computer access offenses. Notably, the bill creates the new offense of Computer Fraud in the Third Degree, a class C felony; this particular crime would involve knowingly accessing a computer, computer system, or computer network with intent to commit theft in the third or fourth degree. (HB 2295) A measure to prohibit adults from soliciting minors to electronically transmit nude images of a minor(s).
HB 2295 expands the existing offense of Use of a Computer in the Commission of a Separate Crime to include situations where a perpetrator knowingly uses a computer to pursue, conduct surveillance on, contact, harass, annoy, or alarm the victim or intended victim of the crimes of Harassment under HRS 711-1106 or Harassment by Stalking under HRS 711‑1106.5. This law recognizes that using a computer to commit such crimes is an aggravating factor that justifies an additional penalty. (SB 2222) addresses "sexting". The bill would create two new offenses in HRS chapter 712 that would: Prohibit an adult from intentionally or knowingly soliciting a minor to electronically transmit a nude image (photo or video) of a minor to any person (misdemeanor); prohibit a minor from knowingly electronically transmitting a nude image of him/herself or any other minor to any person, or intentionally or knowingly soliciting another minor to do so (petty misdemeanor); and prohibit a person of any age from knowingly possessing a nude image transmitted by a minor (but a person charged with this crime would have an affirmative defense that he/she made reasonable efforts to destroy the nude image (petty misdemeanor)). 2020 Honolulu mayoral election On October 28, 2019, Pine announced her candidacy for Mayor of Honolulu. She placed fourth in the August nonpartisan blanket primary and did not advance to the November general election. Affiliations Her affiliations include membership in the parish of Our Lady of Perpetual Help Catholic Church, where she serves as a lector and a member of the Filipino Catholic Club. She is a former member of the Ewa Beach Lions Club, former AYSO Soccer coach, and former chairperson of the Ewa Beach Weed and Seed Neighborhood Restoration Project. Personal life Pine is a practicing Catholic. She is married to LCDR Brian Ryglowski, USN. Pine gave birth to their daughter in March 2015, and was the first sitting council member to have a baby while in office. She lives in Ewa Beach with her family, including two dogs and two cats. See also List of American politicians who switched parties in office References External links Kym Pine at City and County of Honolulu 1970 births American politicians of Filipino descent American women of Filipino descent in politics Asian-American people in Hawaii politics Catholics from Hawaii Hawaii Democrats Hawaii Republicans Honolulu City Council members Living people Members of the Hawaii House of Representatives University of California, Berkeley alumni Women city councillors in Hawaii Women state legislators in Hawaii Asian-American city council members 21st-century American politicians 21st-century American women politicians
208594
https://en.wikipedia.org/wiki/John%20Warnock
John Warnock
John Edward Warnock (born October 6, 1940) is an American computer scientist and businessman best known for co-founding Adobe Systems Inc., the graphics and publishing software company, with Charles Geschke. Warnock was President of Adobe for his first two years and chairman and CEO for his remaining sixteen years at the company. Although he retired as CEO in 2000, he still co-chaired the board with Geschke. Warnock has pioneered the development of graphics, publishing, Web and electronic document technologies that have revolutionized the field of publishing and visual communications. Early life and education Warnock was born and raised in Salt Lake City, Utah. Although he failed ninth-grade mathematics, Warnock graduated from Olympus High School in 1958 and went on to earn a Bachelor of Science degree in mathematics and philosophy, a Doctor of Philosophy degree in electrical engineering (computer science), and an honorary degree in science, all from the University of Utah. At the University of Utah he was a member of the Gamma Beta Chapter of the Beta Theta Pi fraternity. He also has an honorary degree from the American Film Institute. He currently lives in the San Francisco Bay Area with his wife Marva E. Warnock, an illustrator. They have three children. Career Warnock's earliest publication, and the subject of his master's thesis, was his 1964 proof of a theorem on the Jacobson radical of row-finite matrices, a problem originally posed by the American mathematician Nathan Jacobson in 1956. In his 1969 doctoral thesis, Warnock invented the Warnock algorithm for hidden surface determination in computer graphics. It works by recursive subdivision of a scene until areas are obtained that are trivial to compute; in effect, it handles the problem of rendering a complicated image by subdividing the problem away. If the scene is simple enough to compute, it is rendered; otherwise it is divided into smaller parts and the process is repeated. Warnock notes that for this work he received "the dubious distinction of having written the shortest doctoral thesis in University of Utah history".
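The subdivision scheme can be sketched in a few lines of code. What follows is a simplified illustration rather than Warnock's original 1969 formulation: it assumes a scene of axis-aligned rectangles at constant depths (smaller depth meaning nearer the viewer) and a caller-supplied paint callback, and it treats an area as trivial when it is empty, when the front-most rectangle covers it entirely, or when it has shrunk to a minimum size.

```python
# Simplified sketch of Warnock-style recursive area subdivision for hidden
# surface determination. Rect, overlaps, covers, and the "trivial" tests are
# illustrative assumptions, not the original algorithm's full generality.
from dataclasses import dataclass

@dataclass
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float
    depth: float = 0.0   # smaller = nearer the viewer (assumption)
    color: int = 0

def overlaps(p, area):
    return not (p.x1 <= area.x0 or p.x0 >= area.x1 or
                p.y1 <= area.y0 or p.y0 >= area.y1)

def covers(p, area):
    return (p.x0 <= area.x0 and p.x1 >= area.x1 and
            p.y0 <= area.y0 and p.y1 >= area.y1)

def warnock(polygons, area, paint, min_size=1.0):
    """Render `area`: handle it if trivial, otherwise split into quadrants."""
    relevant = [p for p in polygons if overlaps(p, area)]
    if not relevant:                 # trivial: nothing here, leave background
        return
    front = min(relevant, key=lambda p: p.depth)
    # Trivial: the nearest rectangle hides everything behind it in this area,
    # or the area is too small to be worth subdividing further.
    if covers(front, area) or (area.x1 - area.x0) <= min_size:
        paint(area, front.color)
        return
    mx = (area.x0 + area.x1) / 2     # otherwise, divide and repeat
    my = (area.y0 + area.y1) / 2
    for quad in (Rect(area.x0, area.y0, mx, my),
                 Rect(mx, area.y0, area.x1, my),
                 Rect(area.x0, my, mx, area.y1),
                 Rect(mx, my, area.x1, area.y1)):
        warnock(relevant, quad, paint, min_size)

# Example: where the two rectangles overlap, only the nearer one is painted.
scene = [Rect(0, 0, 6, 6, depth=2.0, color=1),
         Rect(3, 3, 8, 8, depth=1.0, color=2)]
warnock(scene, Rect(0, 0, 8, 8), lambda a, c: print(a, "->", c))
```

The constant-depth test here is a deliberate simplification: the full algorithm must also handle intersecting and depth-varying polygons, which is one reason the subdivision bottoms out at pixel size.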
Warnock Engineering Building was completed in 2007 and houses the Scientific Computing and Imaging Institute and the Dean of the University of Utah College of Engineering. Dr. Warnock holds seven patents. In addition to Adobe Systems, he serves or has served on the board of directors at ebrary, Knight-Ridder, MongoNet, Netscape Communications and Salon Media Group. Warnock is a past chairman of the Tech Museum of Innovation in San Jose. He also serves on the Board of Trustees of the American Film Institute and the Sundance Institute. His hobbies include photography, skiing, Web development, painting, hiking, and the curation of rare scientific books and historical Native American objects. A strong supporter of higher education, Warnock and his wife, Marva, have supported three presidential endowed chairs in computer science, mathematics and fine arts at the University of Utah, as well as an endowed chair in medical research at Stanford University. Recognition The recipient of numerous scientific and technical awards, Warnock won the Software Systems Award from the Association for Computing Machinery in 1989. In 1995 Warnock received the University of Utah Distinguished Alumnus Award, and in 1999 he was inducted as a Fellow of the Association for Computing Machinery. Warnock was awarded the Edwin H. Land Medal from the Optical Society of America in 2000. In 2002, he was made a Fellow of the Computer History Museum for "his accomplishments in the commercialization of desktop publishing with Chuck Geschke and for innovations in scalable type, computer graphics and printing." Oxford University's Bodleian Library bestowed the Bodley Medal on Warnock in November 2003. In 2004, Warnock received the Lovelace Medal from the British Computer Society in London. In October 2006, Warnock, along with Adobe co-founder Charles Geschke, received the American Electronics Association's Annual Medal of Achievement Award, making them the first software executives to receive this award. In 2008, Warnock and Geschke received the Computer Entrepreneur Award from the IEEE Computer Society "for inventing PostScript and PDF and helping to launch the desktop publishing revolution and change the way people engage with information and entertainment". In September 2009, Warnock and Geschke were chosen to receive the National Medal of Technology and Innovation, one of the nation's highest honors bestowed on scientists, engineers and inventors. In 2010, Warnock and Geschke received the Marconi Prize, considered the highest honor specifically for contributions to information science and communications. Warnock is a member of the National Academy of Engineering, the American Academy of Arts and Sciences, and the American Philosophical Society, the last being America's oldest learned society. He has received honorary degrees from the University of Utah, the American Film Institute, and the University of Nottingham in the UK. See also Warnock algorithm References External links Interview in Knowledge@Wharton published January 20, 2010 Biography at Computer History Museum Biography on Adobe Web site Warnock's Utah bed and breakfast, The Blue Boar Inn Warnock's Rare Book Room educational site, which allows visitors to examine and read some of the great books of the world Warnock's Splendid Heritage website, which documents rare American Indian objects from private collections 1940 births Adobe Inc.
people American computer programmers Computer graphics professionals Computer graphics researchers Fellows of the Association for Computing Machinery Living people University of Utah alumni National Medal of Technology recipients Businesspeople from Salt Lake City Members of the United States National Academy of Engineering American technology company founders Scientists at PARC (company)
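The recursive subdivision at the heart of the Warnock algorithm described in the Career section above can be sketched in a few lines of code. This is a minimal illustration of the general idea rather than Warnock's original formulation; intersects, is_simple, and render_directly are hypothetical stand-ins for a real renderer's clipping test, scene-complexity test, and rasterization step.

def intersects(polygon, viewport):
    # Placeholder: a real implementation tests polygon/rectangle overlap.
    return True

def is_simple(polygons, viewport):
    # Placeholder "trivial to compute" test, e.g. at most one polygon remains.
    return len(polygons) <= 1

def render_directly(polygons, viewport):
    # Placeholder rasterization of a trivially simple region.
    print(f"render {len(polygons)} polygon(s) in {viewport}")

def warnock(viewport, polygons, min_size=1):
    """Recursively subdivide viewport until each region is trivial to render."""
    x, y, w, h = viewport
    visible = [p for p in polygons if intersects(p, viewport)]  # clip to region

    # Base cases: nothing to draw, the region is simple, or it is pixel-sized.
    if not visible or is_simple(visible, viewport) or (w <= min_size and h <= min_size):
        render_directly(visible, viewport)
        return

    # Otherwise split the region into four quadrants and repeat the process.
    hw, hh = w / 2, h / 2
    for quadrant in [(x, y, hw, hh), (x + hw, y, w - hw, hh),
                     (x, y + hh, hw, h - hh), (x + hw, y + hh, w - hw, h - hh)]:
        warnock(quadrant, visible, min_size)

warnock((0, 0, 4, 4), ["triangle-1", "triangle-2"])

Regions containing many overlapping polygons keep subdividing, while empty or simple regions are rendered immediately, which is what the "solves the problem by avoiding it" description refers to.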
2172795
https://en.wikipedia.org/wiki/MediaPortal
MediaPortal
MediaPortal is an open-source media player and digital video recorder software project, often considered an alternative to Windows Media Center. It provides a 10-foot user interface for performing typical PVR/TiVo functionality, including playing, pausing, and recording live TV; playing DVDs, videos, and music; viewing pictures; and other functions. Plugins allow it to perform additional tasks, such as watching online video, listening to music from online services such as Last.fm, and launching other applications such as games. It interfaces with the hardware commonly found in HTPCs, such as TV tuners, infrared receivers, and LCD displays. The MediaPortal source code was initially forked from XBMC (now Kodi), though it has been almost completely re-written since then. MediaPortal is designed specifically for Microsoft Windows, unlike most other open-source media center programs such as MythTV and Kodi, which are usually cross-platform. Features DirectX GUI Video hardware acceleration (VMR/EVR on Windows Vista / 7) TV / Radio (DVB-S, DVB-S2, DVB-T, DVB-C, analog television) with Common Interface, DVB radio, DVB EPG, Teletext, etc. IPTV Recording, pause and time shifting of TV and radio broadcasts Music player Video/DVD player Picture player Internet streams Integrated weather forecasts Built-in RSS reader Metadata web scraping from TheTVDB and The Movie Database Plug-ins Skins Graphical user interfaces Control MediaPortal can be controlled by any input device that is supported by the Windows operating system. PC Remote Keyboard / Mouse Gamepad Kinect Wii Remote Android / iOS / WebOS / S60 handset devices Television MediaPortal uses its own TV-Server, allowing users to set up one central server with one or more TV cards. All TV-related tasks are handled by the server and streamed over the network to one or more clients. Clients can then install the MediaPortal client software and use the TV-Server to watch live or recorded TV, schedule recordings, and view and search EPG data over the network. Since version 1.0.1, the client plugin of the TV-Server has replaced the default built-in TV Engine. Even without a network (i.e. a single-seat installation), the TV-Server treats the PC as both the server and the client. The TV-Server supports watching and recording TV at the same time with only one DVB/ATSC TV card on the same transponder (multiplex). Broadcast Driver Architecture is used to support as many TV cards as possible. The major brands of cards, like Digital Everywhere, Hauppauge, Pinnacle, TechnoTrend and TechniSat, including analog cards, provide BDA drivers for their cards. Video/DVD player The video player of MediaPortal is a DirectShow player, so any codec/filter can be used. MediaPortal uses the codec from LAV Filters by default, but the codec can be changed to any installed one, such as ffdshow, PowerDVD, CoreAVC, Nvidia PureVideo, etc. MediaPortal also supports video post-processing with any installed codec. Due to the DirectShow player implementation, MediaPortal can play all media files that can be played on Windows. Music player The default internal music player uses the BASS engine with the BASS audio library. The alternative player is the internal DirectShow player. With the BASS engine, MediaPortal supports visualizations from Windows Media Visualizations and Winamp Visualizations, including MilkDrop, Sonic and SoundSpectrum G-Force. Picture player/organizer Digital pictures/photos can be browsed, managed and played as slide shows with background music or radio.
The picture player uses different transitions or the Ken Burns effect between each picture. Exif data are used to rotate the pictures automatically, but the rotation can also be done manually. Zooming of pictures is also possible. Online videos OnlineVideos is a plugin for MediaPortal that integrates seamless online video support into MediaPortal. OnlineVideos supports almost 200 sites/channels in a variety of languages and genres, such as YouTube, iTunes Movie Trailers, Discovery Channel, etc. Series MP-TVSeries is a popular TV series plug-in for MediaPortal. It focuses on managing the user's TV series library. The MP-TVSeries plugin scans the hard drive (including network and removable drives) for video files, then analyzes them by their path structures to determine if they are TV shows. If the files are recognized, the plugin goes online and retrieves information about them. You can then browse, manage and play your episodes from inside MediaPortal in a nice graphical layout. The information and fan art it retrieves comes from TheTVDB.com, which allows any user to add and update information. The extension will automatically update any information when new episodes/files are added. Movies Moving Pictures is a plug-in that focuses on ease of use and flexibility. Point it to your movie collection and Moving Pictures will automatically load media-rich details about your movies as quickly as possible with as little user interaction as possible. Once imported, you can browse your collection via an easy-to-use but highly customizable interface. Ambilight AtmoLight is a plug-in that makes it possible to use all sorts of Ambilight solutions, which currently are: AmbiBox AtmoOrb AtmoWin BobLight Hue Hyperion It also allows easy expansion for any future Ambilight solutions. Hardware Hardware, SD single tuner For standard-definition playback and recording with MPEG-2 video compression using a single TV tuner: 1.4 GHz Intel Pentium III or equivalent processor 256 MB of system RAM Hardware, HDTV For HDTV (720p/1080i/1080p) playback/recording, recording from multiple tuners, and playback of MPEG-4 AVC (H.264) video: 2.8 GHz Intel Pentium 4 or equivalent processor 512 MB of system RAM Display and storage, SD and HD DirectX 9.0 hardware-accelerated GPU with at least 128 MB of video memory Graphics chips which support this and are compatible with MediaPortal: ATI Radeon series 9600 (or above) NVIDIA GeForce 6600 (or above), GeForce FX 5200 (or above) and nForce 6100 series (or above) Intel Extreme Graphics 2 (integrated i865G) Matrox Parhelia SiS Xabre series XGI Volari V Series and XP Series 200 MB of free hard-disk space for the MediaPortal software 12 GB or more of free hard-disk space for hardware-encoding or digital-TV-based TV cards, for timeshifting purposes Operating system and software Supported operating systems - version 1.7.1 Windows Media Center Edition 2005 with Service Pack 3 Windows Vista 32 and 64-bit with Service Pack 2 or later Windows 7 32 and 64-bit Windows 8 32 and 64-bit (as of v1.3.0) Windows 8.1 32 and 64-bit (as of v1.5.0) As of version 1.7, MediaPortal is not officially supported on Windows XP. It will install, but warns the user of the unsupported status while doing so.
Software prerequisites - version 1.7.1 Microsoft .NET Framework 4.0 with the .NET 3.5 features enabled (as of v1.6.0) DirectX 9.0c Windows Media Player 11 (only required on XP SP3; Windows Vista comes with WMP11 and Windows 7 with WMP12 already) See also Media PC Windows XP Media Center Edition (MCE) Windows Media Center Extender Windows Media Connect Windows Media Player XBMC – the GPL open-source software that MediaPortal was originally based upon. Comparison of PVR software packages Microsoft PlaysForSure 2Wire MediaPortal List of codecs List of free television software References External links 2004 software Free television software Video recording software Free video software Television technology Television time shifting technology Software forks Internet television software Windows-only free software
5962343
https://en.wikipedia.org/wiki/Amazon%20S3
Amazon S3
Amazon S3 or Amazon Simple Storage Service is a service offered by Amazon Web Services (AWS) that provides object storage through a web service interface. Amazon S3 uses the same scalable storage infrastructure that Amazon.com uses to run its global e-commerce network. Amazon S3 can be employed to store any type of object, which allows for uses like storage for Internet applications, backup and recovery, disaster recovery, data archives, data lakes for analytics, and hybrid cloud storage. AWS launched Amazon S3 in the United States on March 14, 2006, then in Europe in November 2007. Design Although Amazon Web Services (AWS) does not publicly provide the details of S3's technical design, Amazon S3 manages data with an object storage architecture which aims to provide scalability, high availability, and low latency with 99.999999999% durability and between 99.95% and 99.99% availability (though there is no service-level agreement for durability). The basic storage units of Amazon S3 are objects, which are organized into buckets. Each object is identified by a unique, user-assigned key. Buckets can be managed using either the console provided by Amazon S3, programmatically using the AWS SDK, or with the Amazon S3 REST application programming interface (API). Objects can be managed using the AWS SDK or with the Amazon S3 REST API and can be up to five terabytes in size with two kilobytes of metadata. Additionally, objects can be downloaded using the HTTP GET interface and the BitTorrent protocol. Requests are authorized using an access control list associated with each object and bucket, and support versioning, which is disabled by default. Since buckets are typically the size of an entire file system mount in other systems, this access control scheme is very coarse-grained. In other words, unique access controls cannot be associated with individual files. Bucket names and keys are chosen so that objects are addressable using HTTP URLs: http://s3.amazonaws.com/bucket/key (for a bucket created in the US East (N. Virginia) region) https://s3.amazonaws.com/bucket/key http://s3-region.amazonaws.com/bucket/key https://s3-region.amazonaws.com/bucket/key http://s3.region.amazonaws.com/bucket/key https://s3.region.amazonaws.com/bucket/key http://s3.dualstack.region.amazonaws.com/bucket/key (for requests using IPv4 or IPv6) https://s3.dualstack.region.amazonaws.com/bucket/key http://bucket.s3.amazonaws.com/key http://bucket.s3-region.amazonaws.com/key http://bucket.s3.region.amazonaws.com/key http://bucket.s3.dualstack.region.amazonaws.com/key (for requests using IPv4 or IPv6) http://bucket.s3-website.region.amazonaws.com/key (if static website hosting is enabled on the bucket) http://bucket.s3-accelerate.amazonaws.com/key (where the file transfer exits Amazon's network at the last possible moment so as to give the fastest possible transfer speed and lowest latency) http://bucket.s3-accelerate.dualstack.amazonaws.com/key http://bucket/key (where bucket is a DNS CNAME record pointing to bucket.s3.amazonaws.com) https://access_point_name-account_id.s3-accesspoint.region.amazonaws.com (for requests via an access point granting restricted access to a bucket) Amazon S3 can be used to replace significant existing (static) web-hosting infrastructure with HTTP client accessible objects. The Amazon AWS authentication mechanism allows the bucket owner to create an authenticated URL which is valid for a specified amount of time. Every item in a bucket can also be served as a BitTorrent feed.
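As one concrete illustration of the time-limited authenticated URLs just described, the AWS SDK for Python (boto3) exposes a generate_presigned_url call. The sketch below shows the general pattern; the bucket name and object key are hypothetical placeholders.

import boto3

# Create a signed URL granting temporary GET access to a private object.
# "example-bucket" and "reports/q3.pdf" are placeholder names.
s3 = boto3.client("s3")
url = s3.generate_presigned_url(
    ClientMethod="get_object",
    Params={"Bucket": "example-bucket", "Key": "reports/q3.pdf"},
    ExpiresIn=3600,  # the URL stops working after one hour
)
print(url)

Anyone holding the URL can fetch the object over HTTP(S) until it expires, which is how S3-hosted content is commonly shared without making a bucket public.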
The Amazon S3 store can act as a seed host for a torrent and any BitTorrent client can retrieve the file. This can drastically reduce the bandwidth cost for the download of popular objects. While the use of BitTorrent does reduce bandwidth, AWS does not provide native bandwidth limiting and, as such, users have no access to automated cost control. This can lead to users on the free tier of Amazon S3, or small hobby users, amassing dramatic bills. AWS representatives have stated that a bandwidth-limiting feature was on the design table from 2006 to 2010, but as of 2011 the feature was no longer in development. A bucket can be configured to save HTTP log information to a sibling bucket; this can be used in data mining operations. There are various Filesystem in Userspace (FUSE)-based file systems for Unix-like operating systems (Linux, etc.) that can be used to mount an S3 bucket as a file system, such as S3QL. The semantics of the Amazon S3 file system are not those of a POSIX file system, so the file system may not behave entirely as expected. Hosting websites Amazon S3 provides the option to host static HTML websites with index document support and error document support. Websites hosted on S3 may designate a default page to display as well as an error page to display for invalid URLs, such as 404 errors; this provides useful content to visitors who reach the site through a CNAME record hostname rather than a direct Amazon S3 bucket reference when the URL does not contain a valid S3 object key, such as when a casual user initially visits a URL that is a bare non-Amazon hostname. Amazon S3 logs Amazon S3 allows users to enable or disable logging. If enabled, the logs are stored in Amazon S3 buckets which can then be analyzed. These logs contain useful information such as: date and time of access to requested content; protocol used (HTTP, FTP, etc.); HTTP status codes; turnaround time; and the HTTP request message. Amazon S3 tools Amazon S3 provides an API for developers. The AWS console provides tools for managing and uploading files, but it is not capable of managing large buckets or editing files. Some third-party websites and software have the capability to edit files on Amazon S3. Amazon S3 storage classes Amazon S3 offers four storage classes with different levels of durability, availability, and performance. Amazon S3 Standard is the default class. Amazon S3 Standard-Infrequent Access (IA) is designed for less frequently accessed data. Typical use cases are backup and disaster recovery solutions. Amazon S3 One Zone-Infrequent Access is designed for data that is not often needed but, when required, needs to be accessed rapidly. Data is stored in one zone, and if that zone is destroyed, all data is lost. Amazon Glacier is designed for long-term storage of data that is infrequently accessed and where a retrieval latency of minutes or hours is acceptable. "Glacier Deep Archive" is an alternative with a retrieval time of at least 12 hours, but one quarter of the price. It is intended as an alternative to magnetic tape libraries, and is designed for long-term retention of data for 7 to 10 years. Notable users Photo hosting service SmugMug has used Amazon S3 since April 2006. They experienced a number of initial outages and slowdowns, but after one year they described it as being "considerably more reliable than our own internal storage" and claimed to have saved almost $1 million in storage costs. Netflix uses Amazon S3 as their system of record.
Netflix implemented a tool, S3mper, to address the Amazon S3 limitations of eventual consistency. S3mper stores the filesystem metadata (filenames, directory structure, and permissions) in Amazon DynamoDB. reddit is hosted on Amazon S3. Bitcasa and Tahoe-LAFS-on-S3, among others, use Amazon S3 for online backup and synchronization services. In 2016, Dropbox stopped using Amazon S3 services and developed its own cloud server. Mojang hosts Minecraft game updates and player skins on Amazon S3. Tumblr, Formspring, and Pinterest host images on Amazon S3. Swiftype's CEO has mentioned that the company uses Amazon S3. Amazon S3 was used by some enterprises as a long-term archiving solution until Amazon Glacier was released in August 2012. The API has become a popular method to store objects. As a result, many applications have been built to natively support the Amazon S3 API, including applications that write data to Amazon S3 and to Amazon S3-compatible object stores. S3 API and competing services The broad adoption of Amazon S3 and related tooling has given rise to competing services based on the S3 API. These services use the standard programming interface; however, they are differentiated by their underlying technologies and supporting business models. A cloud storage standard (like electrical and networking standards) enables competing service providers to design their services and clients using different parts in different ways yet still communicate, and provides the following benefits: Increase competition by providing a set of rules and a level playing field, encouraging market entry by smaller companies which might otherwise be precluded. Encourage innovation by cloud storage and tool vendors and developers, because they can focus on improving their own products and services instead of focusing on compatibility. Allow economies of scale in implementation (i.e., if a service provider encounters an outage, or as clients outgrow their tools and need faster operating systems or tools, they can easily swap out solutions). Provide timely solutions for delivering functionality in response to demands of the marketplace (i.e., as business growth in new locations increases demand, clients can easily change or add service providers simply by subscribing to the new service). History Amazon Web Services introduced Amazon S3 in 2006. Amazon S3 is reported to store more than 100 trillion objects. This is up from 10 billion objects as of October 2007, 14 billion objects in January 2008, 29 billion objects in October 2008, 52 billion objects in March 2009, 64 billion objects in August 2009, 102 billion objects in March 2010, and 2 trillion objects in April 2013. In November 2017, AWS added default encryption capabilities at the bucket level. See also Amazon Elastic Block Store (EBS) Timeline of Amazon Web Services References Citations Sources S3 Cloud storage File hosting Network file systems
22860272
https://en.wikipedia.org/wiki/Business%20process%20management
Business process management
Business process management (BPM) is the discipline in which people use various methods to discover, model, analyze, measure, improve, optimize, and automate business processes. Any combination of methods used to manage a company's business processes is BPM. Processes can be structured and repeatable or unstructured and variable. Though not required, enabling technologies are often used with BPM. BPM can be differentiated from program management in that program management is concerned with managing a group of inter-dependent projects; from another viewpoint, process management includes program management. In project management, process management is the use of a repeatable process to improve the outcome of the project. Key distinctions between process management and project management are repeatability and predictability. If the structure and sequence of work is unique, then it is a project. In business process management, a sequence of work can vary from instance to instance: there are gateways, conditions, business rules, etc. The key is predictability: no matter how many forks in the road, we know all of them in advance, and we understand the conditions for the process to take one route or another. If this condition is met, we are dealing with a process. As an approach, BPM sees processes as important assets of an organization that must be understood, managed, and developed to deliver value-added products and services to clients or customers. This approach closely resembles other total quality management or continual improvement process methodologies. ISO 9000 promotes the process approach to managing an organization; it "...promotes the adoption of a process approach when developing, implementing and improving the effectiveness of a quality management system, to enhance customer satisfaction by meeting customer requirements." BPM proponents also claim that this approach can be supported, or enabled, through technology. As such, many BPM articles and scholars frequently discuss BPM from one of two viewpoints: people and/or technology. BPM streamlines business processing by automating workflows, while RPA automates tasks by recording a set of repetitive activities performed by humans. Organizations maximize their business automation by leveraging both technologies to achieve better results. Definitions The Workflow Management Coalition, BPM.com and several other sources use the following definition: Business process management (BPM) is a discipline involving any combination of modeling, automation, execution, control, measurement and optimization of business activity flows, in support of enterprise goals, spanning systems, employees, customers and partners within and beyond the enterprise boundaries. The Association of Business Process Management Professionals defines BPM as: Business process management (BPM) is a disciplined approach to identify, design, execute, document, measure, monitor, and control both automated and non-automated business processes to achieve consistent, targeted results aligned with an organization's strategic goals. BPM involves the deliberate, collaborative and increasingly technology-aided definition, improvement, innovation, and management of end-to-end business processes that drive business results, create value, and enable an organization to meet its business objectives with more agility.
BPM enables an enterprise to align its business processes to its business strategy, leading to effective overall company performance through improvements of specific work activities either within a specific department, across the enterprise, or between organizations. Gartner defines business process management as: "the discipline of managing processes (rather than tasks) as the means for improving business performance outcomes and operational agility. Processes span organizational boundaries, linking together people, information flows, systems, and other assets to create and deliver value to customers and constituents." It is common to confuse BPM with a BPM suite (BPMS). BPM is a professional discipline done by people, whereas a BPMS is a technological suite of tools designed to help BPM professionals accomplish their goals. BPM should also not be confused with an application or solution developed to support a particular process. Suites and solutions represent ways of automating business processes, but automation is only one aspect of BPM. Changes The concept of business process may be as traditional as concepts of tasks, department, production, and outputs, arising from job shop scheduling problems in the early 20th century. The management and improvement approach, with formal definitions and technical modeling, has been around since the early 1990s (see business process modeling). Note that the term "business process" is sometimes used by IT practitioners as synonymous with the management of middleware processes or with integrating application software tasks. Although BPM initially focused on the automation of business processes with the use of information technology, it has since been extended to integrate human-driven processes in which human interaction takes place in series or parallel with the use of technology. For example, workflow management systems can assign individual steps requiring human intuition or judgment to relevant humans, and other tasks in a workflow to a relevant automated system. More recent variations such as "human interaction management" are concerned with the interaction between human workers performing a task. More recently, technology has allowed the coupling of BPM with other methodologies, such as Six Sigma. Some BPM tools, such as SIPOCs, process flows, RACIs, CTQs and histograms, allow users to: visualize – functions and processes measure – determine the appropriate measure to determine success analyze – compare the various simulations to determine an optimal improvement improve – select and implement the improvement control – deploy this implementation and, by use of user-defined dashboards, monitor the improvement in real time and feed the performance information back into the simulation model in preparation for the next improvement iteration re-engineer – revamp the processes from scratch for better results This brings with it the benefit of being able to simulate changes to business processes based on real-world data (not just on assumed knowledge). Also, the coupling of BPM to industry methodologies allows users to continually streamline and optimize the process to ensure that it is tuned to its market need. More recent research on BPM has paid increasing attention to the compliance of business processes. Although a key aspect of business processes is flexibility, as business processes continuously need to adapt to changes in the environment, compliance with business strategy, policies, and government regulations should also be ensured.
The compliance aspect in BPM is highly important for governmental organizations. BPM approaches in a governmental context largely focus on operational processes and knowledge representation. There have been many technical studies on operational business processes in the public and private sectors, but researchers rarely take legal compliance activities into account, for instance the legal implementation processes in public-administration bodies. Life-cycle Business process management activities can be arbitrarily grouped into categories such as design, modeling, execution, monitoring, and optimization. Design Process design encompasses both the identification of existing processes and the design of "to-be" processes. Areas of focus include representation of the process flow, the factors within it, alerts and notifications, escalations, standard operating procedures, service-level agreements, and task hand-over mechanisms. Whether or not existing processes are considered, the aim of this step is to ensure a correct and efficient new design. The proposed improvement could be in human-to-human, human-to-system or system-to-system workflows, and might target regulatory, market, or competitive challenges faced by the businesses. Existing processes and the design of a new process for various applications must synchronize and not cause a major outage or process interruption. Modeling Modeling takes the theoretical design and introduces combinations of variables (e.g., changes in rent or materials costs, which determine how the process might operate under different circumstances). It may also involve running "what-if" analysis (conditions: when, if, else) on the processes: "What if I have 75% of resources to do the same task?" "What if I want to do the same job for 80% of the current cost?" Execution Business process execution is broadly about enacting a discovered and modeled business process. Enacting a business process is done manually, automatically, or with a combination of manual and automated business tasks. Manual business processes are human-driven. Automated business processes are software-driven. Business process automation encompasses methods and software deployed for automating business processes. Business process automation is performed and orchestrated at the business process layer or the consumer presentation layer of the SOA Reference Architecture. BPM software suites such as BPMS, iBPMS, or low-code platforms are positioned at the business process layer, while the emerging robotic process automation software performs business process automation at the presentation layer and is therefore considered non-invasive to, and de-coupled from, existing application systems. One of the ways to automate processes is to develop or purchase an application that executes the required steps of the process; however, in practice, these applications rarely execute all the steps of the process accurately or completely. Another approach is to use a combination of software and human intervention; however, this approach is more complex, making the documentation process difficult. In response to these problems, companies have developed software that defines the full business process (as developed in the process design activity) in a computer language that a computer can directly execute.
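To make the notion of a directly executable process definition concrete, the toy sketch below encodes a three-step order flow as data and walks it with a trivial engine. It is purely illustrative and not modeled on any particular BPMS product; production suites execute far richer languages such as BPMN, and the step names here are invented.

# Toy process model and execution engine (illustrative only; real BPM suites
# execute standardized notations such as BPMN rather than ad hoc dictionaries).
PROCESS = {
    "receive": {"task": lambda o: print(f"Received order {o}"),   "next": "check"},
    "check":   {"task": lambda o: print(f"Credit check for {o}"), "next": "approve"},
    "approve": {"task": lambda o: print(f"Approved {o}"),         "next": None},
}

def run_process(definition, start, order):
    """Execute the model step by step, following each step's 'next' pointer."""
    step = start
    while step is not None:
        node = definition[step]
        node["task"](order)   # run the activity for this step
        step = node["next"]   # advance along the modeled flow

run_process(PROCESS, "receive", "A-1001")

Because the flow lives in the model rather than in hand-written control code, changing the process means editing the definition, which is the property the execution-engine approach described in the next paragraph relies on.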
Process models can be run through execution engines that automate the processes directly from the model (e.g., calculating a repayment plan for a loan) or, when a step is too complex to automate, Business Process Modeling Notation (BPMN) provides front-end capability for human input. Compared to either of the previous approaches, directly executing a process definition can be more straightforward and therefore easier to improve. However, automating a process definition requires flexible and comprehensive infrastructure, which typically rules out implementing these systems in a legacy IT environment. Business rules have been used by systems to provide definitions for governing behavior, and a business rule engine can be used to drive process execution and resolution. Monitoring Monitoring encompasses the tracking of individual processes, so that information on their state can be easily seen and statistics on the performance of one or more processes can be provided. An example of this tracking is being able to determine the state of a customer order (e.g. order arrived, awaiting delivery, invoice paid) so that problems in its operation can be identified and corrected. In addition, this information can be used to work with customers and suppliers to improve their connected processes. Examples are the generation of measures on how quickly a customer order is processed or how many orders were processed in the last month. These measures tend to fit into three categories: cycle time, defect rate and productivity. The degree of monitoring depends on what information the business wants to evaluate and analyze and how the business wants it monitored: in real time, near real time or ad hoc. Here, business activity monitoring (BAM) extends and expands the monitoring tools generally provided by BPMS. Process mining is a collection of methods and tools related to process monitoring. The aim of process mining is to analyze event logs extracted through process monitoring and to compare them with an a priori process model. Process mining allows process analysts to detect discrepancies between the actual process execution and the a priori model, as well as to analyze bottlenecks. Predictive business process monitoring concerns the application of data mining, machine learning, and other forecasting techniques to predict what is going to happen with running instances of a business process, allowing forecasts of future cycle time, compliance issues, etc. Techniques for predictive business process monitoring include support vector machines, deep learning approaches, and random forests. Optimization Process optimization includes retrieving process performance information from the modeling or monitoring phase; identifying the potential or actual bottlenecks and the potential opportunities for cost savings or other improvements; and then applying those enhancements in the design of the process. Process mining tools are able to discover critical activities and bottlenecks, creating greater business value. Re-engineering When the process becomes too complex or inefficient, and optimization is not fetching the desired output, it is usually recommended by a company steering committee, chaired by the president/CEO, to re-engineer the entire process cycle. Business process reengineering (BPR) has been used by organizations to attempt to achieve efficiency and productivity at work. Suites A market has developed for enterprise software leveraging business process management concepts to organize and automate processes.
The recent convergence of this software from distinct pieces such as business rules engines, business process modelling, business activity monitoring and human workflow has given birth to integrated business process management suites. Forrester Research, Inc. recognizes the BPM suite space through three different lenses: human-centric BPM, integration-centric BPM (Enterprise Service Bus), and document-centric BPM (Dynamic Case Management). However, standalone integration-centric and document-centric offerings have matured into separate, standalone markets. Rapid application development using no-code/low-code principles is becoming an ever more prevalent feature of BPMS platforms. RAD enables businesses to deploy applications more quickly and more cost-effectively, while also offering improved change and version management. Gartner notes that as businesses embrace these systems, their budgets rely less on the maintenance of existing systems and show more investment in growing and transforming them. Practice While the steps can be viewed as a cycle, economic or time constraints are likely to limit the process to only a few iterations. This is often the case when an organization uses the approach for short- to medium-term objectives rather than trying to transform the organizational culture. True iterations are only possible through the collaborative efforts of process participants. In a majority of organizations, complexity requires enabling technology (see below) to support the process participants in these daily process management challenges. To date, many organizations start a BPM project or program with the objective of optimizing an area that has been identified as an area for improvement. Currently, the international standards for the task have limited BPM to application in the IT sector, and ISO/IEC 15944 covers the operational aspects of the business. However, some corporations with a culture of best practices do use standard operating procedures to regulate their operational process. Other standards are currently being worked on to assist in BPM implementation (BPMN, enterprise architecture, Business Motivation Model). Technology BPM is now considered a critical component of operational intelligence (OI) solutions to deliver real-time, actionable information. This real-time information can be acted upon in a variety of ways: alerts can be sent, or executive decisions can be made using real-time dashboards. OI solutions use real-time information to take automated action based on pre-defined rules so that security measures and/or exception management processes can be initiated. Because "the size and complexity of daily tasks often requires the use of technology to model efficiently", and because technology resources became increasingly widespread and generally available for businesses to provide to their staff, "many thought BPM as the bridge between Information Technology (IT) and Business."
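Returning to the predictive process-monitoring techniques noted in the Monitoring section above, the sketch below fits a random forest to a handful of historical log features to forecast cycle time. The feature choices and numbers are entirely hypothetical; this illustrates the general approach only, not any specific product or study.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical per-instance features extracted from historical event logs:
# [activities completed so far, current queue length, order value].
X = np.array([[3, 5, 120.0], [7, 2, 80.0], [4, 9, 300.0], [6, 1, 50.0],
              [2, 7, 210.0], [8, 3, 95.0], [5, 6, 160.0], [9, 4, 75.0]])
y = np.array([48.0, 24.0, 72.0, 18.0, 60.0, 30.0, 52.0, 26.0])  # cycle times (hours)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# Forecast the cycle time of a running process instance from its current state.
print(model.predict([[5, 4, 130.0]]))

In practice, such predictions feed the real-time dashboards and rule-driven alerts described above, e.g. flagging instances whose forecast cycle time breaches a service-level agreement.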
There are four critical components of a BPM suite: Process engine – a robust platform for modeling and executing process-based applications, including business rules Business analytics – enable managers to identify business issues, trends, and opportunities with reports and dashboards and react accordingly Content management – provides a system for storing and securing electronic documents, images, and other files Collaboration tools – remove intra- and interdepartmental communication barriers through discussion forums, dynamic workspaces, and message boards BPM also addresses many of the critical IT issues underpinning these business drivers, including: managing end-to-end, customer-facing processes; consolidating data and increasing visibility into and access to associated data and information; increasing the flexibility and functionality of current infrastructure and data; integrating with existing systems and leveraging service-oriented architecture (SOA); and establishing a common language for business-IT alignment. Validation of BPMS is another technical issue that vendors and users must be aware of, if regulatory compliance is mandatory. The validation task can be performed either by an authenticated third party or by the users themselves. Either way, validation documentation must be generated. The validation document can usually either be published officially or retained by users. Cloud computing BPM Cloud computing business process management is the use of business process management (BPM) tools that are delivered as software as a service (SaaS) over a network. Cloud BPM business logic is deployed on an application server and the business data resides in cloud storage. Market According to Gartner, 20% of all "shadow business processes" are supported by BPM cloud platforms. Gartner refers to all the hidden organizational processes that are supported by IT departments as part of legacy business processes such as Excel spreadsheets, routing of emails using rules, phone call routing, etc. These can, of course, also be replaced by other technologies such as workflow and smart-form software. Benefits The benefits of using cloud BPM services include removing the need and cost of maintaining specialized technical skill sets in-house and reducing distractions from an enterprise's main focus. It offers controlled IT budgeting and enables geographical mobility. Internet of things The emerging Internet of things poses a significant challenge to control and manage the flow of information through large numbers of devices. To cope with this, a new direction known as BPM Everywhere shows promise as a way of blending traditional process techniques with additional capabilities to automate the handling of all the independent devices. See also Application service provider Business intelligence Business object Business-oriented architecture Business process automation Business process orientation CIFMS Comparison of business integration software Enterprise planning systems Human resource management system Integrated business planning International Conference on Business Process Management ITIL Managed services Manufacturing process management Process architecture Total quality management References Further reading Wil van der Aalst, Kees Max van Hee (2002). "Workflow Management: Models, Methods, and Systems", ISBN 9780262011891 Wil van der Aalst and Christian Stahl (2011). "Modeling Business Processes" Wil van der Aalst (2011). "Process Mining: Discovery, Conformance and Enhancement of Business Processes" Alan P. Brache.
How Organizations Work: Taking a Holistic Approach to Enterprise Health. Roger Burlton (2001). Business Process Management: Profiting From Process. James F. Chang (2006). Business Process Management Systems. Dirk Draheim (2005). Business Process Technology. Jay R. Galbraith (2005). Designing the Customer-Centric Organization: A Guide to Strategy, Structure and Process. Jean-Noël Gillot (2008). The Complete Guide to Business Process Management. Michael Hammer, James A. Champy. Reengineering the Corporation: A Manifesto for Business Revolution. Paul Harmon (2007). Business Process Change: A Guide for Business Managers and BPM and Six Sigma Professionals. Keith Harrison-Broninski (2005). Human Interactions: The Heart and Soul of Business Process Management. Arthur ter Hofstede, Wil van der Aalst, Michael Adams, Nick Russell (2010). Modern Business Process Automation: YAWL and its Support Environment. John Jeston and Johan Nelis (2008). Management by Process: A Roadmap to Sustainable Business Process Management; and Business Process Management: Practical Guidelines to Successful Implementations (2006). Ryan K. L. Ko, Stephen S. G. Lee, Eng Wah Lee (2009). Business Process Management (BPM) Standards: A Survey. In: Business Process Management Journal, Emerald Group Publishing Limited, Volume 15, Issue 5. Christine McKinty and Antoine Mottier (2016). "Designing Efficient BPM Applications – A Process-Based Guide for Beginners" (O'Reilly). Martyn Ould (2005). Business Process Management: A Rigorous Approach. Geary A. Rummler, Alan P. Brache. Improving Performance: How to Manage the White Space in the Organization Chart. Terry Schurter, Steve Towers. Customer Expectation Management: Success Without Exception. Bruce Silver (2011). "BPMN Method and Style: With BPMN Implementer's Guide", ISBN 9781596931930 Alec Sharp, Patrick McDermott (2008). "Workflow Modeling: Tools for Process Improvement and Applications Development" Howard Smith, Peter Fingar (2003). Business Process Management: The Third Wave. Andrew Spanyi (2003). Business Process Management Is a Team Sport: Play It to Win! Mathias Weske (2012). "Business Process Management: Concepts, Languages, Architectures (Second Edition)" External links ABPMP.org The Association of Business Process Management Professionals The NIST Definition of Cloud Computing. Peter Mell and Timothy Grance, NIST Special Publication 800-145 (September 2011). National Institute of Standards and Technology, U.S. Department of Commerce. Cloud Computing Synopsis and Recommendations. Peter Mell and Timothy Grance, NIST Special Publication 800-146 (May 2011). National Institute of Standards and Technology, U.S. Department of Commerce. Platform-as-a-Service Private Cloud with Oracle Fusion Middleware. An Oracle White Paper (October 2009). Oracle.com Process execution through application ontologies. The movie demonstrates how semantic technologies can be applied as an execution engine. Business Process Management is primarily an attitude. A YouTube movie on BPM approaches explaining the use case of BPM. Information technology management
25976475
https://en.wikipedia.org/wiki/DJ%20Spooky%20discography
DJ Spooky discography
This is a discography for electronic and experimental hip hop musician DJ Spooky. It lists studio albums, singles, EPs, collaborations, sideman appearances and albums released under his given name Paul D. Miller. Albums Necropolis (Knitting Factory Works KFW 185), March 1996 Songs of a Dead Dreamer (Asphodel Records 0961), April 1996 Synthetic Fury EP (Asphodel Records 0110), February 1998 Haunted Breaks Volumes I and II (Liquid Sky Music), October and December 1998 Riddim Warfare (Outpost-Geffen CD), September 1998; (Asphodel Records Vinyl), December 2002 Under the Influence (Six Degrees PRCD 1056-2) (DJ mix record), September 2001 Songs of a Dead Dreamer (2002 Edition) (Asphodel Records 2009), January 2002 Modern Mantra (Shadow/Instinct SDW 135-2) (DJ mix record), May 2002 Optometry (Thirsty Ear THI 57121.2), July 2002 DJ Spooky That Subliminal Kid vs. Twilight Circus Dub Sound System - Riddim Clash, 2004 Drums of Death DJ Spooky vs. Dave Lombardo (Thirsty Ear), April 2005 The Secret Song (Thirsty Ear), October 2009 Of Water and Ice (Jamendo), June 2013 Singles and EPs Galactic Funk (Asphodel 101), 1997 Object Unknown (with remixes by DJ Spooky and Kut Masta Kurt) (Outpost/Geffen CD; Asphodel vinyl), August 1998 Peace in Zaire (with remixes by Ambassador Jr., and The Dub Pistols) promotional/White Label Only (Outpost/Geffen), April 1999 Subliminal Minded EP-Peace in Zaire Remixes (Bar None Records), October 1999 Catechism featuring Killah Priest (Synchronic), August 2002, SYC 002 Optometrix 12” (Thirsty Ear), June 2003, THI 57132.1 Labels Outpost-Geffen Records Trojan Records Thirsty Ear Records Asphodel Records Paul D. Miller and miscellaneous albums Death in Light of the Phonograph: Excursions into the Pre-linguistic Asphodel Records, September 1996, (limited edition) Originally accompanied installation at Annina Nosei Gallery. The Viral Sonata Asphodel Records. Originally accompanied installation for The Whitney Biennial 1997 ftp>snd>untitled> Nest Magazine CD, accompanying November 2001 issue Another Forensic Charade, accompanying catalogue to exhibition at Magasin 3, Stockholm, Sweden, September, December 2001 (limited edition) Collaborative releases and mix records Automaton: Dub Terror Exhaust (Strata), 1994 Template 12” DJ Spooky and Totemplow (Manifold Records), 1998 10” DJ Spooky and Alan Licht (Manifold Records), 1998 Kaotik : Transgression DJ Spooky and Totemplow (Manifold Records), June 1999 10” DJ Spooky and Arto Lindsay (Manifold Records), July 1999 10” DJ Spooky and Quoit (Manifold Records), 2000 10” DJ Spooky and Merzbow (Manifold Records), 2000 DJ Spooky vs. The Freight Elevator Quartet: File Under Futurism (Caipirinha Records), September 1999 The Quick and the Dead DJ Spooky and Scanner (Sulphur Records) Meld series, 2000-01-31, Cat No. MELCD001 (UK) BBWULCD004 (US) Anodyne (Main, core and Peripheral mixes) Picture disk w/Sound Secretion (BSI Records) Cat. BSI 014-1, October 2000 Cinemage by Ryuichi Sakamoto w/ David Sylvian, 2000. 
Catechism (DJ Spooky w/Killah Priest) (Blue Juice Records/UK) BJ007, 2001-06-26 DJ Spooky: Under the Influence: A mix with Six Degrees Records, September 2001 Modern Mantra: A label mix for Shadow Records, May 2002 Dubtometry featuring Mad Professor, Lee "Scratch" Perry, and others — a remix of "Optometry" (Thirsty Ear), March 2003, THI 57128.2 Rhythm Science Audio Companion — C-Side: companion to Rhythm Science book, 2003 Riddim Clash with Twilight Circus (PLAY Label), April 2004 Celestial Mechanix: a label mix for Thirsty Ear Records, June 2004 DJ Spooky presents In Fine Style: 50,000 Volts of Trojan Records: a 2CD label mix for Trojan Records, June 2006 DJ Spooky presents Riddim Come Forward: 50,000 Volts of Trojan Records: a 2CD label mix for Trojan Records, UK release, October 2006 Creation Rebel (Trojan/Sanctuary), October 2007 Film scores SLAM (Offline/Tri-Mark) Grand Prize winner, Sundance, 1998; Cannes, Camera D'Or, 1998; October 1998 commercial release. Quattro Noza (Fountainhead Films) Sundance competition finalist, 2003 Multimedia, web and misc. projects Stuzzicadenti DJ Spooky and Diego Cortez, May 2000 Marcel Duchamp remix, LA Museum of Contemp. Art, 2002 Remixes Hooverphonic – "2Wicky" Epic Spookey Ruben – "Incidental Drift Mix" TVT Records DJ Krush – "Ryuki" (Lulu's Peace Mix) TVT Records Ben Neill – "Pentagram: La Mer Mix and Undertow Mix" (Verve Antilles) 1996 Ben Neill – "Sistrum into Grapheme" Astralwerks James Plotkin Sawtooth Swirl "DJ Spooky's Irreducible Gated Momentum Dub Mix" (9:40), 1997 (Rawkus ptv 1136-2) Walter Ruttman's Weekend for Engaged Magazine Vol. 6, London, UK Hovercraft Stereo Specific Polymerization "Mad Psychotic Hyper-Accelerated Lower East Side Mix" (5:57) Earth Crooked Axis for String Quartet "Kool Stereo Arc" Dub Mix (8:42) Arto Lindsay – "Mundo Civilizado Inversion Mix" 7:28, Gramavision, June 1997, GLP 79519 Metallica – For Whom the Bell Tolls "The Irony Of It All" 4:41 for Spawn Soundtrack, Immortal/Epic Records, New Line Cinema; August 1997, EK68494 Sublime – "Doin' Time" (Life Sentence Remix) 5:43 MCA Records; September 1997 Free Kitten – Jam #1 "Spatialized Chinatown Express Mix" (6:24), Kill Rock Stars, November 1997. KRS 286 Nick Cave – "Red Right Hand" for Scream 2 soundtrack Capitol Records, December 1997 The Swirlies – "In Harmony: DJ Spooky's Retrograde Transposition Mix" (6:15) Taang! Records Bally Sagoo – Tum Bin Jiya "Isomorphic Flux Mix" 6:47 Higher Ground/Sony Skeleton Key – "Wide Open" (DJ Spooky's Full Spectrum Mix) Capitol Records KoЯn – "Got the Life" Immortal/Epic, November 1998 (Limited Edition) Cibo Matto – "Swords and Paintbrush" Warner Bros., 1999 Steve Reich – "City Life" Coalition/Nonesuch Recordings, March 1999 Show Lee Netsu: Electro: snd>>cd: zero sum mix BMG Funhouse (Japan) BVCR-11015, October 1999 Hydroponic Sound System Routine Insanity/Evolution Records, September 2000 Kahimi Karie Tilt Polydor KK Records (Japan), 2000 "Rock the Nation" (DJ Spooky sound Unbound instrumental remix) with Michael Franti and Spearhead (Six Degrees Records), 2001 Merzbow – Ikebana Merzbow Important Records, 2003 Meat Beat Manifesto - "Storm The Studio R.M.X.S." Tino Corp., September 2003 Sub Rosa Revisited, a catalog mix. Sub Rosa SR 201, November 2003 Yoko Ono – "Rising" on "Yes, I'm a Witch" Astralwerks 2007 Bob Marley - "Mr.
Brown" Creation Rebel 2007 Sagol 59 - "Leeches Remix" JDub Records 2008 Tracks on compilations, soundtracks and miscellaneous releases "Galactic Funk" on This is Home Entertainment (Home Entertainment Records/Liquid Sky Music) "Hologrammic Dub" on The Night Shift, 1996 (C&S Records) "Surface Noise" (Theme of the Hungry Ghost), Sonic Soul Records 001, 1996 "Fourth Inversion" on The Resonance Found at the Core of the Bubble, 1996 (Bubble Core Records) "Prologue (The Duchamp Effect)" on Mind The Gap 15 Gonzo Circus GC021, 1997 "Muzique Mecanique Dub" and "Muzique Psychotique" on Electric Ladyland Vol. 3 (Force X Records). "Vorticities" on State Of the Union, 1996 (Atavistic Records) "Step In Stand Clear" on Storm of Drones (Sombient) "Temporally Displaced", also on Offbeat Collaboration with Amiri Baraka on Black Dada Nihilismus on Offbeat: A Red Hot Sound Trip, 1996, Red Hot/Whitney Museum/Wax Trax-TVT compilation released in conjunction with The Whitney Museum's "The Beats: A Retrospective" installation. In Visual Ocean on Gilles Deleuze: In Memoriam (Mille Plateaux Records) "The Nasty Data Burst" & "Journey" (Paraspace Mix) on Valis: The Destruction of Syntax (Subharmonic Records) "Machinic Phylum" (Crippled Symmetries Mix) on Future Audio, 1996 (Freeze Records) "Primary Inversion" on This is Home Entertainment 2, 1996 (Liquid Sky Music/Jungle Sky Records HE 008) "Black Djinn Trance" (w/Bill Laswell, Jah Wobble) on War Smash Hits, 1996 (Sub Rosa Records SR 105) "Zero Gravity Dub" on Synthetic Pleasures Vol. 2 Caipirinha Productions, 1997 "Soon Forward, Anansi's Gambit" (DJ Spooky's On the Island of the Lost Souls Mix), and "Why Patterns" on Incursions in Illbient, 1997 (Asphodel 0968) "Island Of Lost Souls" on The Freestyle Files, 1997, Studio K7 (Germany) The Western Lands (A Dangerous Road Mix) w/ William Burroughs, Bill Laswell, etc. on Material; Seven Souls, 1997, (Triloka/Mercury 314 534 905) Iannis Xenakis: Analogiques A + B on Ianissimo! (w/ STX Ensemble), 1997 (Vandenburg Wave VAN 0003) Iannis Xenakis: Kraanerg (w/ STX Ensemble), 1997 (Asphodel 0975) Discord w/ Ryuichi Sakamoto, David Torn and orchestra.
Live in Japan, 1997 (Güt/For Life FLCG 3028) "Object Unknown", "Pandemonium", and "Degree Zero Launch" CD-ROM (The Product), 1998 "Reconstruction and The 6th Degree", 1998 on Electric Ladyland 5 (Mille Plateau 48) "He Who Leaves No Trace" for Invisible Soundtracks (Leaf Records) "Solar Physics" DJ Spooky and Sir Menelik for Rawkus Records "Polymorphia 2000: Ill Konceptual Mix" for Kunsthalle, Vienna "Haunted: Ill Konceptual Mix" for Kunsthaus, Zurich "Reciprocal Presupposition; Seuqigolana" (Suntropic Inversion Mix) for The End of Utopia (Sub Rosa/SR132), 1998 "Stereo Specific Polymerization" (Beneath the Underdog Mix) (Word Sound Records) 10", 1998 "Interlude" DJ Spooky and Vinicius Cantuaria for Onda Sonora (Red Hot and Lisbon) "Soon Forward, Synaptic Dissonance" for Asphodelic (Asphodel), 1999 "Turn Table Eyzd, UMM" for Hi-Fidelity Dub Chapter II (Guidance Recordings GDRC-575), January 2000 "Conduit 23" for "Wreck This Mess Remission 2" Noise Museum "Reciprocal Presupposition and Dance of the Morlocks" on "Condo Painting" soundtrack Gallery Six Records, April 2000 "Rapper's Relight" on one:it's all good, man Saul Goodman Records, February 2001 "Another Forensic Charade" on Electric Ladyland — Clickhop Version 1.0 Mille Plateaux Records, 2001 "If/When" on Scissors (Play label/Japan; Play 002), June 2001 "FTP>Bundle / Conduit 23" on An Anthology of Noise & Electronic Music (Sub Rosa/SR 190), April 2002 "Catechism" (instrumental) (Blue Juice Records/UK) BJCD013, September 2002 "That Subliminal Kid vs. The Last Mohican" on Thirsty Ear Presents The Blue Series Sampler (The Shape of Jazz to Come) (Thirsty Ear), 2003 "Strictly Turtableyzed Hmm.." on Hi-Fidelity Dub Sessions (Guidance Recordings), 2004 Sideman appearances Arto Lindsay – Mundo Civilizado, 1996, For Life/Ryko/Bar None Pilgrimage – 9 Songs of Ecstasy, 1997, Point Music 314 536 201-2/4 Ben Neill – The Gold Bug, 1998, Antilles Records Gary Lucas – "Golgotha" on Improve the Shining Hour, Knitting Factory, 2000 Lost Objects w/Bang on a Can, Concerto Köln, RIAS Kammerchor, Teldec Classics, Germany 8573-84107-2, May 2001 Hip hop discographies
43168962
https://en.wikipedia.org/wiki/Surface%20%282012%20tablet%29
Surface (2012 tablet)
The first-generation Surface (launched as Surface with Windows RT, later marketed as Surface RT) is a hybrid tablet computer developed and manufactured by Microsoft. Announced in June 2012, it was released in October 2012, and was the first personal computer designed in-house by Microsoft. Positioned as a competitor to Apple's iPad line, Surface included several distinctive features, including a folding kickstand which allowed the tablet to stand at an angle, and the availability of optional attachable protective covers incorporating a keyboard. Surface served as the launch device for Windows RT, a limited version of Windows 8 designed for devices based on the ARM architecture, with the ability to run only Metro-style Windows applications developed for it and distributed through the Windows Store. Surface was met with mixed reviews. Although praised for its hardware design, accessories, and aspects of its operating system, criticism was directed towards the performance of the device, as well as the limitations of the Windows RT operating system and its application ecosystem. Sales of the Surface were poor, with Microsoft cutting its price worldwide and taking a US$900 million loss in July 2013 as a result. It was succeeded by the Surface 2 in 2013, with the newer Windows RT 8.1, which was also made available for the original Surface. Support for the OS will end in 2023. History The device was announced at a press-only event in Los Angeles and was the first PC which Microsoft had designed and manufactured in-house. The Surface only supports Wi-Fi for wireless connectivity, with no cellular variant. The tablet went on sale in eight countries: Australia, Canada, China, France, Germany, Hong Kong, the United Kingdom, and the United States. The Surface Pro was launched later. Features Hardware The Surface tablet has a 1366×768-pixel display on a five-point multi-touch touchscreen with Gorilla Glass 2. The device is made from magnesium. The kickstand, USB port, and a magnetic keyboard interface give the Surface the ability to add a wireless mouse, an external keyboard, or a thumb drive. There is also a slot for a microSD card to add up to 200 GB of storage. Software Surface runs Windows RT, which is preloaded with the Windows Mail, Calendar, Contacts, Sports, News, Travel, Finance, Camera, Weather, Reader, SkyDrive, Store, Photos, Skype (no longer supported), Maps, Games, Messaging, Bing, Desktop, Xbox Music and Xbox Video Windows Store applications, and supports Microsoft Office Home and Student 2013 RT, which includes Word, PowerPoint, Excel, and OneNote within the Desktop application. Windows RT only allows installing Windows Store applications. Windows RT is compiled entirely for the ARM instruction set architecture. A major update to Windows RT, called Windows RT 8.1, was launched on October 17, 2013. This update brought many improvements to the Surface, such as an overhauled Mail app, more Bing apps like Reading List, and OneDrive (updated from SkyDrive). It also brought support for larger tiles, a help and tips app, Internet Explorer 11, Outlook 2013 RT, changes to PC Settings, a lock screen photo slideshow, infinitely resizable apps, a Start button, and speed improvements. Later, an update to Windows RT 8.1, dubbed Windows RT 8.1 Update, added a search button to the Start Screen, as well as the taskbar on the Modern UI, and a title bar for Modern UI apps. Accessories Surface launched with two accessories, the Type Cover and the Touch Cover.
The Touch Cover came in white, black, magenta, red, and cyan, while the Type Cover came in black. Limited edition Touch Covers were released featuring laser-etched artwork on the back. The Touch and Type Covers double as keyboards and magnetically attach to the Surface's "accessory spine". Later, adapters for micro-HDMI to HDMI and VGA were released. Reception CNET praised the design of Surface, noting that it "looked practical without being cold, and just feels like a high-quality device that Microsoft cut few corners to make". The kickstand was also praised for its feel and quality, while both the kickstand and the keyboard cover accessories were also noted for having a "satisfying" clicking sound when engaged or attached. The covers were deemed "essential to getting the complete Surface experience", with the Touch Cover praised for having a more "spacious" typing area than other tablet keyboard attachments, and for being usable after getting used to its soft feel. The Type Cover was recommended over the Touch Cover due to its higher quality and more conventional key design. Surface's display was praised for its larger size and widescreen aspect ratio over the iPad line but panned for having "muted" color reproduction. While the touchscreen-oriented aspects of the Windows 8 interface were praised for being "elegant", albeit harder to learn than Android or iOS, the Windows RT operating system was criticized for still requiring the use of the mouse-oriented desktop interface to access some applications and settings not accessible from within the "Metro" shell, and for its poor application ecosystem, with the state of Windows Store at launch compared to "a ghost town after the apocalypse". The poor performance of the Surface, especially in comparison to other Tegra 3-based tablets, also drew criticism. In conclusion, it was felt that "paired with a keyboard cover, the Surface is an excellent Office productivity tool (the best in tablet form) and if your entertainment needs don't go far beyond movies, TV shows, music, and the occasional simple game, you're covered there as well", but that assuming Windows Store would eventually improve its application selection, "both it and the Surface's wonky performance keep a useful productivity device from reaching true tablet greatness." Sales of the Surface and other Windows RT devices were poor; in July 2013, Microsoft reported a loss of US$900 million due to lackluster sales of the Surface, and cut its price by 30% worldwide. Microsoft's price cut did result in a slight increase of market share for the device; by late August 2013, usage data from the advertising network AdDuplex (which provides advertising services within Windows Store apps) revealed that Surface's share had increased from 6.2% to 9.8%. Timeline References External links Microsoft Surface Windows RT devices Tablet computers Tablet computers introduced in 2012
250381
https://en.wikipedia.org/wiki/National%20Physical%20Laboratory%20%28United%20Kingdom%29
National Physical Laboratory (United Kingdom)
The National Physical Laboratory (NPL) is the national measurement standards laboratory of the United Kingdom. It is one of the most extensive government laboratories in the UK and has a prestigious reputation for its role in setting and maintaining physical standards for British industry. Founded in 1900, it is one of the oldest metrology institutes in the world. The former heads of NPL include many individuals who were pillars of the British scientific establishment. Research work at NPL has contributed to the advancement of many disciplines of science, including the development of atomic clocks as well as packet switching, which is today one of the fundamental technologies of the Internet. NPL is based at Bushy Park in Teddington, England. It is under the management of the Department for Business, Energy and Industrial Strategy. History Precursors In the 19th century, the Kew Observatory was run by self-funded devotees of science. In the early 1850s, the observatory began charging fees for testing meteorological instruments and other scientific equipment. As universities in the United Kingdom created and expanded physics departments, the governing committee of the Observatory became increasingly dominated by paid university physicists in the last two decades of the nineteenth century, by which time instrument-testing was the observatory's main role. Physicists sought the establishment of a state-funded scientific institution for testing electrical standards. Founding The National Physical Laboratory was established in 1900 at Bushy House in Teddington, taking over the instrument-testing work previously carried out at the Kew Observatory. Its purpose was "for standardising and verifying instruments, for testing materials, and for the determination of physical constants". The laboratory was run by the UK government, with members of staff being part of the civil service. It grew to fill a large number of buildings on the Teddington site. Late 20th century Administration of NPL was contracted out in 1995 under a Government Owned Contractor Operated model, with Serco winning the bid and all staff transferred to its employment. Under this regime, overhead costs halved, third-party revenues grew by 16% per annum, and the number of peer-reviewed research papers published doubled. NPL procured a large state-of-the-art laboratory under a Private Finance Initiative contract in 1998. The construction was undertaken by John Laing. 21st century The maintenance of the new laboratory building, which was being undertaken by Serco, was transferred back to the DTI in 2004 after the private sector companies involved made losses of over £100m. It was decided in 2012 to change the operating model for NPL from 2014 onwards to include academic partners and to establish a postgraduate teaching institute on site. The date of the changeover was later postponed by a year. The candidates for lead academic partner were the Universities of Edinburgh, Southampton, Strathclyde and Surrey, with an alliance of the Universities of Strathclyde and Surrey chosen as preferred partners. Funding was announced in January 2013 for a new £25m Advanced Metrology Laboratory to be built on the footprint of an existing unused building. The operation of the laboratory was transferred back to Department for Business, Innovation and Skills (now the Department for Business, Energy and Industrial Strategy) ownership on 1 January 2015. Notable researchers Researchers who have worked at NPL include: D. W. 
Dye, who did important work in developing the technology of quartz clocks; the inventor Sir Barnes Wallis, who did early development work on the "Bouncing Bomb" used in the "Dam Busters" wartime raids; H. J. Gough, one of the pioneers of research into metal fatigue, who worked at NPL for 19 years from 1919 to 1938; and Sydney Goldstein and Sir James Lighthill, who worked in NPL's aerodynamics division during World War II researching boundary layer theory and supersonic aerodynamics respectively. Alan Turing, known for his work at the Government Code and Cypher School (GC&CS) at Bletchley Park during the Second World War to decipher German encrypted messages, worked at the National Physical Laboratory from 1945 to 1947. There he designed the ACE (Automatic Computing Engine), one of the first designs for a stored-program computer. Dr Clifford Hodge also worked there and was engaged in research on semiconductors. Others who have spent time at NPL include Robert Watson-Watt, generally considered the inventor of radar; Oswald Kubaschewski, the father of computational materials thermodynamics; and the numerical analyst James Wilkinson. In 1915, the metallurgist Walter Rosenhain appointed NPL's first female scientific staff members, Marie Laura Violet Gayler and Isabel Hadfield. Research NPL research has contributed to physical science, materials science, computing, and bioscience. Applications have been found in ship design, aircraft development, radar, computer networking, and global positioning. Atomic clocks The first accurate atomic clock, a caesium standard based on a hyperfine transition of the caesium-133 atom, was built by Louis Essen and Jack Parry in 1955 at NPL. Calibration of the caesium standard atomic clock was carried out by the use of the astronomical time scale ephemeris time (ET). This led to the internationally agreed definition of the SI second being based on atomic time. Computing Early computers NPL has undertaken computer research since the mid-1940s. From 1945, Alan Turing led the design of the Automatic Computing Engine (ACE) computer. The ACE project was overambitious and floundered, leading to Turing's departure. Donald Davies took the project over and concentrated on delivering the less ambitious Pilot ACE computer, which first worked in May 1950. Among those who worked on the project was the American computer pioneer Harry Huskey. A commercial spin-off, the DEUCE, was manufactured by English Electric Computers and became one of the best-selling machines of the 1950s. Packet switching Beginning in the mid-1960s, Donald Davies and his team at the NPL pioneered packet switching, now the dominant basis for data communications in computer networks worldwide. Davies designed and proposed a national data network based on packet switching in his 1965 Proposal for the Development of a National Communications Service for On-line Data Processing. Subsequently, the NPL team (Davies, Derek Barber, Roger Scantlebury, Peter Wilkinson, Keith Bartlett, and Brian Aldous) developed the concept into a local area network which operated from 1969 to 1986, and carried out work to analyse and simulate the performance of packet-switched networks, including datagram networks. Their research and practice influenced the ARPANET in the United States, the forerunner of the Internet, and other researchers in the UK and Europe, including Louis Pouzin. 
NPL sponsors a gallery, opened in 2009, about the development of packet switching and the "Technology of the Internet" at The National Museum of Computing. Internetworking NPL internetworking research was led by Davies, Barber and Scantlebury, who were members of the International Networking Working Group (INWG). Connecting heterogeneous computer networks creates a "basic dilemma", since a common host protocol would require restructuring the existing networks. NPL connected with the European Informatics Network (Barber directed the project and Scantlebury led the UK technical contribution) by translating between two different host protocols, that is, using a gateway. Concurrently, the NPL connection to the Post Office Experimental Packet Switched Service used a common host protocol in both networks. NPL research confirmed that establishing a common host protocol would be more reliable and efficient. The EIN protocol helped to launch the proposed INWG standard. Bob Kahn and Vint Cerf acknowledged Davies and Scantlebury in their 1974 paper "A Protocol for Packet Network Intercommunication". Scrapbook Scrapbook was an information storage and retrieval system that went live in mid-1971. It included what would now be called word processing, e-mail and hypertext. In this it anticipated many elements of the World Wide Web. The project was managed by David Yates, who said of it: "We had a community of bright people that were interested in new things, they were good fodder for a system like Scrapbook" and "When we had more than one Scrapbook system, hyperlinks could go across the network without the user knowing what was happening". It was decided that any commercial development of Scrapbook should be left to industry, and it was licensed to Triad and then to BT, which marketed it as Milepost and developed a transaction processor as an additional feature. Various implementations were marketed on DEC, IBM and ITL machines. All NPL implementations of Scrapbook were closed down in 1984. Network security In the early 1990s, the NPL developed three formal specifications of the Message Authenticator Algorithm (MAA): one in Z, one in LOTOS, and one in VDM. The VDM specification became part of the 1992 revision of International Standard 8731-2, and three implementations, in C, Miranda, and Modula-2, were derived from it. Electromagnetics NPL provides accurate and repeatable measurements of electromagnetic parameters across the entire spectrum, from DC up to optical frequencies, which can be traced back to the SI system. Many new technologies such as 5G, and the use cases they enable, such as smart cities, Industry 4.0, connected and autonomous vehicles (CAV) and precision farming, rely on accurate and traceable measurements at RF, microwave and millimetre-wave frequencies. NPL's work helps to test and validate new technology innovations and bring them to market. A 2020 study by researchers from Queen Mary University of London and NPL successfully used microwaves to measure blood-based molecules known to be influenced by dehydration. Metrology The National Physical Laboratory is involved with new developments in metrology, such as researching metrology for, and standardising, nanotechnology. It is mainly based at the Teddington site, but also has a site in Huddersfield for dimensional metrology and an underwater acoustics facility at Wraysbury Reservoir near Heathrow Airport. 
Directors of NPL

Directors of NPL include a number of notable individuals:
Sir Richard Tetley Glazebrook, 1900–1919
Sir Joseph Ernest Petavel, 1919–1936
Sir Frank Edward Smith, 1936–1937 (acting)
Sir William Lawrence Bragg, 1937–1938
Sir Charles Galton Darwin, 1938–1949
Sir Edward Victor Appleton, 1941 (acting)
Sir Edward Crisp Bullard, 1948–1955
Dr Reginald Leslie Smith-Rose, 1955–1956 (acting)
Sir Gordon Brims Black McIvor Sutherland, 1956–1964
Dr John Vernon Dunworth, 1964–1977
Dr Paul Dean, 1977–1990
Dr Peter Clapham, 1990–1995

Managing Directors
Dr John Rae, 1995–2000
Dr Bob McGuiness, 2000–2005
Steve McQuillan, 2005–2008
Dr Martyn Sené, 2008–2009, 2015 (acting)
Dr Brian Bowsher, 2009–2015

Chief Executive Officers
Dr Peter Thompson, 2015–present

NPL buildings See also List of UK government scientific research institutes National Institute of Standards and Technology in the United States National Physical Laboratory of India References Further reading External links Official website The birth of the Internet in the UK Google video featuring Roger Scantlebury, Peter Wilkinson, Peter Kirstein and Vint Cerf, 2013 NPL Video Podcast Second Health in Second Life NMS Home Page NPL YouTube channel NPL Sports and Social Club Ethos Journal profile of the National Physical Laboratory The National Physical Laboratory apprentices Benjamin Stone MP & the NPL – UK Parliament Living Heritage 1900 establishments in the United Kingdom Buildings and structures in the London Borough of Richmond upon Thames Bushy Park Department for Business, Energy and Industrial Strategy Laboratories in the United Kingdom Metrology Organisations based in the London Borough of Richmond upon Thames Physics laboratories Research institutes established in 1900 Research institutes in London Science and technology in London Serco Standards organisations in the United Kingdom Teddington Alan Turing
17213
https://en.wikipedia.org/wiki/KornShell
KornShell
KornShell (ksh) is a Unix shell which was developed by David Korn at Bell Labs in the early 1980s and announced at USENIX on July 14, 1983. The initial development was based on Bourne shell source code. Other early contributors were Bell Labs developers Mike Veach and Pat Sullivan, who wrote the Emacs and vi-style line editing modes' code, respectively. KornShell is backward-compatible with the Bourne shell and includes many features of the C shell, inspired by the requests of Bell Labs users. The most recent major version, ksh2020, described as a "major release for several reasons" (such as the removal of EBCDIC support, dropped support for binary plugins written for ksh93u+ and the removal of some broken math functions), was released by AT&T but is not maintained or supported by the company; it was not supported even on its release date.

Features

KornShell complies with POSIX.2, Shell and Utilities, Command Interpreter (IEEE Std 1003.2-1992). Major differences between KornShell and the traditional Bourne shell include:
- job control, command aliasing, and command history designed after the corresponding C shell features; job control was added to the Bourne shell in 1989
- a choice of three command line editing styles based on vi, Emacs, and Gosling Emacs
- associative arrays and built-in floating-point arithmetic operations (only available in the ksh93 version of KornShell)
- dynamic search for functions
- mathematical functions
- process substitution and process redirection
- C-language-like expressions
- enhanced expression-oriented for and while loops
- dynamic extensibility of (dynamically loaded) built-in commands (since ksh93)
- reference variables
- hierarchically nested variables
- variables can have member functions associated with them
- object-oriented programming:
  - variables can be objects with member (sub-)variables and member methods
  - object methods are called with the object variable name followed (after a dot character) by the method name
  - special object methods are called on object initialization or assignment, and on object abandonment
  - composition and aggregation are available, as well as a form of inheritance
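As a concrete illustration of the features above, the following is a minimal sketch in ksh93; the item names and prices are hypothetical, chosen only for demonstration:

```ksh
#!/bin/ksh93
# Associative array (a ksh93 feature): keys are strings, not integers.
typeset -A price
price[apple]=0.40
price[pear]=0.55

# Built-in floating-point arithmetic: -F2 declares a float printed with
# two decimal places; no external 'bc' or 'awk' is needed.
typeset -F2 total=0
for item in "${!price[@]}"; do        # iterate over the array's keys
    (( total += ${price[$item]} ))    # C-language-like arithmetic expression
done
print "total: $total"

# Process substitution: hand two generated streams to a command that
# expects file arguments, with no temporary files.
paste <(print "apple\npear") <(print "0.40\n0.55")
```

Run under ksh93, this prints the summed total followed by a two-column pairing of the generated streams; the same script fails under a traditional Bourne shell, which lacks all three constructs.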
History

KornShell was originally proprietary software. In 2000 the source code was released under a license particular to AT&T, but since the ksh93q release in early 2005 it has been licensed under the Eclipse Public License. KornShell is available as part of the AT&T Software Technology (AST) Open Source Software Collection. As KornShell was initially only available through a proprietary license from AT&T, a number of free and open source alternatives were created; several of them are described under Variants below. The functionality of the original KornShell, ksh88, was used as a basis for the standard POSIX.2, Shell and Utilities, Command Interpreter (IEEE Std 1003.2-1992). Some vendors still ship their own versions of the older ksh88 variant, sometimes with extensions. ksh93 is maintained on GitHub. As "Desktop KornShell" (dtksh), ksh93 is distributed as part of the Common Desktop Environment. This version also provides shell-level mappings for Motif widgets. It was intended as a competitor to Tcl/Tk. The original KornShell, ksh88, became the default shell on AIX in version 4, with ksh93 being available separately. UnixWare 7 includes both ksh88 and ksh93; the default Korn shell is ksh93, and the older version is also supplied. UnixWare also includes dtksh when CDE is installed. The ksh93 distribution underwent a less stable fate after the authors left AT&T around 2012 at stable version ksh93u+. The primary authors continued working on a ksh93v- beta branch until around 2014. That work was eventually taken up primarily by Red Hat in 2017 (due to customer requests) and resulted in the eventual initial release of ksh2020 in the fall of 2019. That initial release (although fixing several prior stability issues) introduced some minor breakage and compatibility issues. In March 2020, AT&T decided to roll back the community changes, stash them in a branch, and restart from ksh93u+, as the changes were too broad and too ksh-focused for the company to absorb into a project in maintenance mode. Bugfix development continues on the ksh93u+m branch, based on the last stable AT&T release (ksh93u+ 2012-08-01).

Primary contributions to the main software branch

For the purposes of the lists below, the main software branch of KSH is defined as the original program, dating from July 1983, up to and through the release of KSH2020 in late 2019. Continuing development of follow-on versions (branches) of KSH split into different groups starting in 2020 and is not elaborated on below.

Primary individual contributors
The following are listed in roughly ascending chronological order of their contributions:
- David G. Korn (AT&T Bell Laboratories, AT&T Laboratories, and Google; creator)
- Glenn S. Fowler (AT&T Bell Laboratories, AT&T Laboratories)
- Kiem-Phong Vo (AT&T Bell Laboratories, AT&T Laboratories)
- Adam Edgar (AT&T Bell Laboratories)
- Michael T. Veach (AT&T Bell Laboratories)
- Patrick D. Sullivan (AT&T Bell Laboratories)
- Matthijs N. Melchior (AT&T Network Systems International)
- Karsten Fleischer (Omnium Software Engineering)
- Boyer-Moore
- Siteshwar Vashisht (Red Hat)
- Kurtis Rader
- Roland Mainz (integration consultant)

Primary corporate contributors
The following are listed in roughly ascending chronological order of their contributions:
- AT&T Bell Laboratories
- AT&T Network Systems International
- AT&T Laboratories (now AT&T Labs)
- Omnium Software Engineering
- Oracle Corporation
- Google
- Red Hat

Donated corporate resources
Besides the primary major contributing corporations (listed above), some companies have contributed free resources to the development of KSH. These are listed below (alphabetically ordered):
- Coverity
- GitHub
- Travis CI

Variants

There are several forks and clones of KornShell:
- dtksh – a fork of ksh93 included as part of CDE.
- tksh – a fork of ksh93 that provides access to the Tk widget toolkit.
- oksh – a port of OpenBSD's variant of KornShell, intended to be maximally portable across operating systems. It was used as the default shell in DeLi Linux 7.2.
- loksh – a Linux port of OpenBSD's variant of KornShell, with minimal changes.
- mksh – a free implementation of the KornShell language, forked from the OpenBSD variant. It was originally developed for MirOS BSD and is licensed under permissive (though not public domain) terms; specifically, the MirOS Licence. In addition to its usage on BSD, this variant has replaced pdksh on Debian, and is the default shell on Android.
- SKsh – an AmigaOS variant that provides several Amiga-specific features, such as ARexx interoperability. In this tradition, MorphOS uses a Korn shell variant in its SDK.
- MKS Inc.'s MKS Korn shell – a proprietary implementation of the KornShell language from Microsoft Windows Services for UNIX (SFU) up to version 2.0; according to David Korn, the MKS Korn shell was not fully compatible with KornShell in 1998. In SFU version 3.0 Microsoft replaced the MKS Korn shell with a new POSIX.2-compliant shell as part of Interix.
KornShell is included in UWIN, a Unix compatibility package by David Korn. 
See also Comparison of computer shells List of Unix commands test (Unix) References Further reading David G. Korn, Charles J. Northrup and Jeffery Korn The New KornShell—ksh93, Linux Journal, Issue 27, July 1996 External links MirBSD Korn Shell (mksh) Cross-platform software Free software programmed in C Scripting languages Software that uses Meson Unix shells
7145862
https://en.wikipedia.org/wiki/Cambridge%20Scientific%20Center
Cambridge Scientific Center
The IBM Cambridge Scientific Center was a company research laboratory established in February 1964 in Cambridge, Massachusetts. Situated at 545 Technology Square (Tech Square), in the same building as MIT's Project MAC, it was later renamed the IBM Scientific Center. It is most notable for creating the CP-40 and the control program portions of CP/CMS, a virtual machine operating system developed for the IBM System/360-67. History The IBM Data Processing Division (DPD) sponsored five Scientific Center research groups in the United States and some others around the world to work with selected universities on a variety of customer-related projects. The IBM Research Division in Yorktown Heights, NY, was a separate laboratory organization at the Thomas J. Watson Research Center that tended more to "pure" research topics. The DPD Scientific Centers in the late 1960s were located in Palo Alto, California; Houston, Texas; Washington, D.C.; Philadelphia, Pennsylvania; Cambridge, Massachusetts; and Grenoble, France. The IBM Time-Life Programming Center in Manhattan, New York, worked with the scientific centers but had a slightly different reporting line. Established by Norm Rasmussen, the Cambridge Scientific Center worked with computing groups at both MIT and Harvard, in the same building as Project MAC and the IBM Boston Programming Center (BPC). Additional joint projects involved the MIT Lincoln Laboratory on the outskirts of Boston and Brown University in Providence, RI. The scientific center in 1969 had three main departments: Computer Graphics under Craig Johnson, Operations Research under John Harmon, and Operating Systems under Richard (Rip) Parmelee. In December 1975 Richard MacKinnon became director of the center, succeeding Dr. William Timlake who, in turn, had succeeded Rasmussen. As the third director, MacKinnon was to serve as its longest-tenured director. During his tenure, the Cambridge Scientific Center was responsible for a number of enhancements to the VM/370 operating system, which became IBM's most popular interactive computing system. These included: an enhanced scheduler for the operating system based on the work of Lynn Wheeler; the VNET networking capability based upon the work of Edson Hendricks and Tim Hartman; multiprocessor support for the IBM asymmetric MPs, led by Howard Holley; IBM's first UNIX system under VM, for the National Security Agency; a remote operations capability for VM led by Love Seawright and David Boloker, done in conjunction with the University of Maine, Orono (and which spread throughout IBM's processor lines); a special controller which allowed ASCII terminals to access VM (done in conjunction with the Yale University Computer Center and its director, Greydon Freeman); and the ASCII software support for the IBM PC which allowed PCs to access IBM and many other non-IBM mainframes (the work of Jim Perchik). The VNET networking software became the basis for IBM's internal corporate data network (which had over 3,000 IBM processor nodes) and the university BITNET network, which was facilitated by Cambridge in conjunction with the Yale Computer Center (Grey Freeman) and the CUNY Computer Center (Ira Fuchs). MacKinnon served at Cambridge for 18 years and in July 1992 had the unenviable responsibility of closing CSC when IBM decided to close all its scientific centers worldwide. IBM closed the center on July 31, 1992. Selected publications
Creasy, "A Virtual Machine System for the 360/40," IBM Corporation, Cambridge Scientific Center, Report No. 320-2007 (May 1966). R. A. Meyer and L. H. Seawright, "A Virtual Machine Timesharing System," IBM Systems Journal 9, No.3, 199-218 (1970). R. P. Parmelee, T. L. Peterson, C. C. Tillman, and D. J. Hatfield, "Virtual Storage and Virtual Machine Concepts," IBM Systems Journal 11, No.2, 99-130 (1972). E. C. Hendricks and T. C. Hartmann, "Evolution of a Virtual Machine Subsystem," IBM Systems Journal 18, No.1, 111-142 (1979). L. H. Holley, R. P. Parmelee, C. A. Salisbury, and D. N. Saul, "VM/370 Asymmetric Multiprocessing," IBM Systems Journal 18, No.1, 47-70 (1979). L. H. Seawright and R. A. MacKinnon, "VM/370 - A Study of Multiplicity and Usefulness," IBM Systems Journal 18, No. 1, 4-17 (1979). R. J. Creasy, "The Origin of the VM/370 Time-Sharing System," IBM Journal of Research and Development 25, No.5, 483-490 (September 1981). F. T. Kozuh, D. L. Livingston, and T. C. Spillman, "System/370 Capability in a Desktop Computer," IBM Systems Journal 23, No.3, 245-254 (1984). Y. Bard, "The VM Performance Planning Facility (VMPPF)," Computer Measurement Group (CMG) Transactions 53, 53- 59 (Summer 1986). See also IBM Research References IBM facilities 1964 establishments in Massachusetts 1992 disestablishments in Massachusetts VM (operating system)
570838
https://en.wikipedia.org/wiki/Gateway%2C%20Inc.
Gateway, Inc.
Gateway, Inc., previously Gateway 2000, was an American computer hardware company. The company developed, manufactured, supported, and marketed a wide range of personal computers, computer monitors, servers, and computer accessories. It was acquired by the hardware and electronics corporation Acer in October 2007. History Gateway was founded on September 5, 1985, on a farm outside Sioux City, Iowa, by Ted Waitt, Norm Waitt (Ted's brother), and Mike Hammond. The origins of the company's name and cow motif can be traced to the meatpacking industry in the Sioux City area in the late 19th century. Before the Big Sioux and Missouri rivers were spanned by bridges, it was common to transport cattle into Sioux City by ferry, and every so often a cow would slip off the ferry deck. The farmers were often left with no choice but to give up the cow for lost and get the rest across the fast-moving river. Ted Waitt's ancestor was an enterprising individual who would round up these cattle before they could drown and sell them to the meatpacking plants once rescued. Also, North Sioux City, SD, is sometimes referred to as the "Gateway to South Dakota" due to its location. Gateway 2000 was also an innovator in low-end computers with the first sub-$1,000 name-brand PC, the all-in-one Astro. Gateway built brand recognition in part by shipping computers in spotted boxes patterned after Holstein cow markings. In 1989, Gateway moved its corporate offices and production facilities to North Sioux City, South Dakota. In line with the Holstein cow mascot, Gateway opened a chain of farm-styled retail stores called Gateway Country Stores, mostly in suburban and rural areas across the United States. It dropped the "2000" from its name on October 31, 1998, in an effort to appeal to non-millennial markets. Gateway had acquired Advanced Logic Research, a maker of high-end personal computers and servers, the year prior. AOL acquired Gateway.net, the online component of Gateway, Inc., in October 1999 for US$800 million. To grow beyond its model of selling high-end PCs by phone, and to attract top management and engineers, Gateway relocated its base of operations to La Jolla, California, in May 1998. In an effort to cut operating costs, Gateway made another move, this time to Poway, California, in October 2001. After acquiring eMachines in 2004, Gateway again relocated its corporate headquarters, to Irvine, California. In 2003, the Securities and Exchange Commission filed fraud charges against three former Gateway executives: CEO Jeff Weitzen, former chief financial officer John Todd, and former controller Robert Manza. The lawsuit alleged that the executives engaged in securities violations and misled investors about the health of the company. Weitzen was cleared of securities fraud in 2006; however, Todd and Manza were found liable for inflating revenue in a jury trial which concluded in March 2007. In 2002, Gateway expanded into the consumer electronics world with products that included plasma-screen TVs, digital cameras, DLP projectors, wireless Internet routers, and MP3 players. While the company enjoyed some success in gaining substantial market share from traditional leaders in the space, particularly with plasma TVs and digital cameras, the limited short-term profit potential of those product lines led then-CEO Wayne Inouye to pull the company out of that segment during 2004. Gateway still acts as a retailer selling third-party electronic goods online. 
Gateway moved build-to-order desktop, laptop, and server manufacturing back to the United States with the opening of its Gateway Configuration Center in Nashville in September 2006, employing 385 people at that location. By April 2007, Gateway notebook computers were produced in China and its desktops had "made in Mexico" stickers. On October 16, 2007, Acer completed its acquisition of Gateway. In September 2020, Acer granted Gateway branding and licensing rights to Bmorn Technology, a Shenzhen-based technology company, to manufacture and sell Gateway-branded laptops and tablets through Walmart. The new line of laptops is a simple rebadging of Acer's existing EVOO-branded laptops. The laptops are tuned in partnership with THX. Current and previous products Previous hardware In September 2002, Gateway entered the consumer electronics market with aggressively priced plasma TVs. At the time, Gateway's US$2,999 price for a 42" plasma TV undercut name-brand competitors by thousands of dollars per unit. In 2003, the company expanded its range of plasma TVs and added digital cameras, MP3 players, and other devices. By early 2004, in terms of volume, Gateway had moved into a leadership position in the plasma TV category in the United States. However, the pressure to achieve profits after the acquisition of eMachines led the company to phase Gateway-branded consumer electronics out of its product line. eMachines eMachines, a brand of low-end personal computers, was founded in 1998. It was acquired by Gateway, Inc. in 2004; Gateway in turn was acquired by Acer Inc. in 2007. The eMachines brand was discontinued in 2013. See also List of computer system manufacturers References External links Acer Inc. acquisitions Companies based in Irvine, California American companies established in 1985 Computer companies established in 1985 1985 establishments in Iowa Computer companies of the United States Amiga companies Display technology companies 2007 mergers and acquisitions American subsidiaries of foreign companies
21311829
https://en.wikipedia.org/wiki/International%20Institute%20of%20Information%20Technology%2C%20Bhubaneswar
International Institute of Information Technology, Bhubaneswar
The International Institute of Information Technology, Bhubaneswar (IIIT-BH or IIIT-BBSR) is a state university located in Bhubaneswar, Odisha, India. It is a University Grants Commission (India) (UGC) recognised unitary technical university. The campus is located in Gothapatna, Bhubaneswar, and houses classrooms, laboratories, a library, a hostel, faculty living quarters, sports facilities, and an auditorium. History IIIT Bhubaneswar owes its origins to the initiative of the Government of Odisha. In 2005, the then President of India, Dr. A. P. J. Abdul Kalam, laid the foundation stone of the institute. The institute was registered as a society in 2006 and started operating in September 2007. The management of the institute is in the hands of a governing body consisting of representatives from the Government of Odisha, leaders from the IT industry, and educationists. Campus The campus of IIIT Bhubaneswar is on the outskirts of Bhubaneswar. Classrooms at IIIT Bhubaneswar are equipped with projectors and computers. The institute has labs such as the Computer Architecture, Database, VLSI, MATLAB, Engineering Drawing, Chemistry, Physics, and Java Development labs. The computers run the Ubuntu operating system, and open-source development is encouraged. Internet access is available around the clock, both in the hostel and on campus. The library at IIIT Bhubaneswar holds a collection of books, journals and magazines. IIIT Bhubaneswar is a member of the IEEE Computer Society, the IEEE Xplore Digital Library and many more. The campus houses classrooms, laboratories, the library, the hostel, faculty living quarters, sports facilities and an auditorium, along with two canteens. The college has two generators which provide power backup during power cuts, and provides 50-60 Mbit/s of dedicated Wi-Fi. Library The textbook library has titles in multiple copies so as to provide the students access to textbooks. The reference collection has books on subjects including technical topics, humanities, fiction, non-fiction, handbooks and encyclopedias. The intranet hosts a large collection of e-books. The library subscribes to journal databases covering IEEE, ACM, Elsevier, Springer, the Wiley collection, ASTM, McGraw Hill, and J-Gate. Hostel IIIT Bhubaneswar is fully residential. The hostel has eight blocks of nine storeys, with separate accommodation for boys and girls, and can accommodate up to 1,350 students. Classrooms The institute has classrooms of different form factors to address students' needs: classrooms with fixed furniture arranged like a gallery, classrooms with flexible furniture to create a desired layout, and classrooms with no furniture so as to host a dynamic audience. The classrooms can accommodate audiences ranging from 30 to 300 and are equipped with audio, video and presentation equipment. Laboratories High Performance Computing (HPC) Lab The High Performance Computing (HPC) lab at IIIT-BH has 12 compute nodes, 1 GPU node, 1 master node, and 1 storage array. CLIA Lab Sandhan (an Indian-language search engine) is a mission-mode project funded by TDIL, Ministry of Communication & Information Technology, Government of India. Its objective is to develop a search system for tourism in nine Indian languages, viz. Bengali, Hindi, Marathi, Tamil, Telugu, Punjabi, Oriya, Assamese and Gujarati. 
It is a consortium-mode project consisting of institutes including IIT Bombay (consortium leader), AU KBC Chennai, AUCE G Chennai, CDAC Pune, CDAC Noida, DAIICT Gandhinagar, Gauhati University, IIIT Bhubaneswar, IIIT Hyderabad, IIT Kharagpur, ISI Kolkata, and Jadavpur University. The Sandhan system for five Indian languages, viz. Hindi, Bengali, Marathi, Tamil and Telugu, is already hosted on the TDIL server for public use, and the other languages will be hosted soon. The system will enable searching Indian-language content and thus address the gap that exists in fulfilling the information needs of the huge Indian population not conversant with English – estimated at 10% of the population. The segments expected to benefit are the tourism sector, students and even business sectors. Virtual Instrumentation Lab The lab covers:
- Real-time speech processing
- Real-time image processing
- LabVIEW 2010 and 2012 (software platform)
- Xilinx
- Mentor Graphics
Mobile Computing Lab The institute has a laboratory for mobile computing which consists of ten Apple iMac desktops. Characterization Lab The proposed characterization lab would be the first of its kind in the eastern region of the nation. The laboratory will provide a facility for chip testing. Although chip designing has reached an appreciable stage in the state, a chip-testing facility is not available; at present, chip designers have to tie up with IT companies having such facilities in the Silicon Valley of the United States of America. The characterization laboratory will also provide a common facility to local IT entrepreneurs, students and research scholars, and such a facility will attract more IT investors to the state. Academics Academic programmes The institute offers undergraduate and postgraduate programmes. Undergraduate programmes award a BTech in the areas of Electrical and Electronics Engineering, Electronics and Communication Engineering, Information Technology and Computer Science. MTech and PhD programmes are available in Computer Science and in Electronics and Communication. The institute is a University Grants Commission (India) recognised unitary technical university. Admission to undergraduate programmes is through the Joint Entrance Examination (JEE Main); 50% of seats form an All India Quota filled by the Joint Seat Allocation Authority for Government Funded Technical Institutes, and the remaining 50% are reserved for Odisha students and filled through the institute's own counselling. University and industry tie-ups The university has a memorandum of understanding (MOU) signed with the University of North Texas for research and education collaboration. Professor Saraju Mohanty was instrumental in making this IIIT-BH and UNT MOU a reality. IIIT-BH also has an MOU with the National University of Singapore concerning academics, research and joint projects. IBM initiated a university programme with the institute to conduct projects as well as training sessions; through the programme the students get an opportunity to create real-life software. IIIT Bhubaneswar and Capgemini have joined hands to create study materials and tools for the students. 
Organisation and administration Governance Student life Student societies The institute's student societies include:
- P-Society (the programming society of the college)
- Association for Computing Machinery Student Chapter
- IEEE Student Branch
- News & Publication Society (NAPS)
- The Robotics and Automation Society
- Music Club (cultural society)
- Sports Society
- Dance Club
- Film and Theatre Society ('Nukkad' and 'Aakansh')
- Art and Design Society (Paracosm)
- Photography Society (PhotoGeeks)
- Entrepreneurship Cell (E-Cell)
Advaita Advaita is the annual cultural and technical festival of the institute and one of the largest and most eagerly awaited college fests in Odisha. Advaita 2018 was held from 8 to 11 February 2018; its theme was "Apocalypse". Advaita 2019 was held from 7 to 10 February 2019; its theme was "Cosmo Carnival". Nebulae "Nebulae" is the welcome party for the BTech freshmen. It is usually held within two weeks of the start of the academic year and is conducted by the sophomores, although third- and fourth-year seniors participate. References Engineering colleges in Odisha Science and technology in Bhubaneswar Educational institutions established in 2006 Universities and colleges in Bhubaneswar Colleges affiliated with Biju Patnaik University of Technology 2006 establishments in Orissa
37128546
https://en.wikipedia.org/wiki/Basis%20of%20estimate
Basis of estimate
A Basis of Estimate is a tool used in the field of project management by which members of the project team, usually estimators, project managers, or cost analysts, calculate the total cost of the project. Through carefully planned equations, hierarchical listing of elements, standard calculations, checklists of project elements and other methods, the project team adds up all the expenses of a project, from labor to materials to administrative costs. These calculations formulate a Basis of Estimate, which is, when completed, a number that can be used to determine the ability of the firm or company to carry out the project, or used as a tool in competing for a contract bid or otherwise proposing the project to another party. Definition A Basis of Estimate (BOE) is an analyzed and carefully calculated number that can be used for proposals, bidding on government contracts, and executing a project with a fully calculated budget. The BOE is a tool, not just a simple calculation: it is created through careful analysis and intricate calculation, producing a specific number on which project execution can be based with confidence, and which can help win a contract. Uses The total number, as well as the smaller numbers for various elements within a project, can be used for managing a project team, determining the team's efficiency, and ensuring that the project is not wasting materials and budget unnecessarily. The BOE can be used to ensure the financial stability of a company. Through accurate budgeting and proper calculations, all projects, regardless of size and scope, can incorporate a BOE; with this essential tool, a company's financial budget can run effectively and smoothly based on finely tuned calculations. Furthermore, BOEs are used within the realm of government contracting and management. While most often used in defense contracting, the BOE can be used in any department of the federal government. Within the United States Department of Defense (DoD), the BOEs presented and accepted are all regulated by the Defense Contract Audit Agency (DCAA) and the Defense Contract Management Agency (DCMA). These agencies monitor all BOEs that are presented to and accepted by the DoD and scrutinize them to ensure maximum efficiency by the companies winning the government contracts. Earned Value Management Earned Value Management is a second tool within project management that allows for the tracking of progress throughout the life cycle of a project. BOEs, when executed properly and with the aid of certain software packages, allow for a seamless transition from project proposal to execution by transferring data from the BOE directly into Earned Value Management and the various software programs associated with it. Experienced program managers are trained to develop and implement EVMS programs for organizations requiring them as a result of their US Government contractual requirements. It is important to note that EVMS is required for projects exceeding a certain dollar threshold per ANSI EIA 748 (see Earned Value Management). Programs below the required threshold may still implement EVMS; however, they must do so prudently to obtain the optimum benefit from its use. Here again, trained program managers will have the requisite skills to assist organizations that wish to track progress via EVMS, even when their contracts or programs do not require it. 
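To make the hierarchical cost rollup described under Definition and Uses concrete, here is a small Python sketch of a BOE calculation. All element names, hours, rates, and the overhead factor are hypothetical illustrations, not figures from DCAA/DCMA guidance or any real estimate:

```python
# Hypothetical BOE rollup; every number and name here is illustrative.

LABOR_RATE = 95.0   # assumed fully burdened labor rate, $/hour
OVERHEAD = 0.15     # assumed 15% administrative overhead

# Hierarchical listing of project elements (a simple two-level breakdown).
wbs = {
    "1.0 Design": {"hours": 320, "materials": 2000.00},
    "2.0 Build":  {"hours": 880, "materials": 46500.00},
    "3.0 Test":   {"hours": 240, "materials": 5250.00},
}

def element_cost(element):
    """Direct cost of one element: labor plus materials."""
    return element["hours"] * LABOR_RATE + element["materials"]

direct = sum(element_cost(e) for e in wbs.values())
total = direct * (1 + OVERHEAD)  # administrative costs added on top

for name, e in wbs.items():
    print(f"{name:<12} ${element_cost(e):>11,.2f}")
print(f"{'Total BOE':<12} ${total:>11,.2f}")
```

In a real estimate each element would decompose further, labor would be split across rate categories, and the overhead structure would come from the organization's accounting system; the point of the sketch is only the rollup pattern itself.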
Tools To create a BOE, companies have, throughout the past few decades, used spreadsheet programs and skilled cost analysts to enter thousands of lines of data and create complex algorithms to calculate the costs. These positions require a high level of skill to ensure accuracy, as well as knowledge of these general-purpose programs. More recently, software companies have begun releasing software specifically designed to create a BOE with much less effort, time and labor expense, which ultimately is the goal of a BOE in the first place. These software programs allow members of the project team to set up the calculations and input data, and to produce a final number with much less effort, as the process is streamlined and much of the work is done by the software itself. The software comes with algorithms and equations that are common to fields of industry and therefore enables users simply to select the equation and run the analysis without needing to recreate the equations every time. These software packages also enable the government agencies, or those that the company is reporting to or bidding with, to analyze the BOE more efficiently through pivot tables and other streamlined features, including standardized reports and consistent reporting visuals. References External links DCMA Official Page Florida State Manual on BOE Program Management, EVMS and BOE support consultants Project management
82738
https://en.wikipedia.org/wiki/Xerox%20Star
Xerox Star
The Xerox Star workstation, officially named the Xerox 8010 Information System, was the first commercial personal computer to incorporate technologies that have since become standard in personal computers, including a bitmapped display, a window-based graphical user interface, icons, folders, a (two-button) mouse, Ethernet networking, file servers, print servers, and e-mail. Introduced by Xerox Corporation on April 27, 1981, the name Star technically refers only to the software sold with the system for the office automation market. The 8010 workstations were also sold with software based on the programming languages Lisp and Smalltalk for the smaller research and software development market. History The Xerox Alto The Xerox Star system's concept owes much to the Xerox Alto, an experimental workstation designed by the Xerox Palo Alto Research Center (PARC). The first Alto became operational in 1972. The Alto had been strongly influenced by what its designers had seen previously with NLS (at SRI) and PLATO (at the University of Illinois). At first, only a few Altos had been built. Although by 1979 nearly 1,000 Ethernet-linked Altos had been put into operation at Xerox and another 500 at collaborating universities and government offices, the Alto was never intended to be a commercial product. In 1977, Xerox started a development project which worked to incorporate the Alto innovations into a commercial product; their concept was an integrated document preparation system, centered around the (then expensive) laser printing technology and oriented towards large corporations and their trading partners. When the resulting Xerox Star system was announced in 1981, the cost was about $75,000 for a basic system, and $16,000 for each added workstation. A base system would have an 8010 Star workstation, a second 8010 dedicated as a server (with RS-232 I/O), and a floor-standing laser printer. The server software included a File Server, a Print Server, and distributed services (Mail Server, Clearinghouse Name Server / Directory, and Authentication Server). Customers could connect Xerox Memorywriter typewriters to this system over Ethernet and send email, using the Memorywriter as a teletype. The Xerox Star development process The Star was developed at Xerox's Systems Development Department (SDD) in El Segundo, California, which had been established in 1977 under the direction of Don Massaro. A section of SDD, SDD North, was located in Palo Alto, California, and included some people borrowed from PARC. SDD's mission was to design the "Office of the future", a new system that would incorporate the best features of the Alto, be easy to use, and automate many office tasks. The development team was headed by David Liddle and would eventually grow to more than 200 developers. A good part of the first year was taken up by meetings and planning, the result of which was an extensive and detailed functional specification, internally termed the "Red Book". This became the bible for all development tasks. It defined the interface and enforced consistency in all modules and tasks. All changes to the functional specification had to be approved by a review team which maintained standards rigorously. One group in Palo Alto worked on the underlying operating system interface to the hardware and programming tools. Teams in El Segundo and Palo Alto collaborated on developing the user interface and user applications. 
The staff relied heavily on the technologies they were working on: file sharing, print servers and e-mail. They were even connected to the ARPANET, the forerunner of the Internet, which helped them communicate between El Segundo and Palo Alto. The Star was implemented in the programming language Mesa, a direct precursor to Modula-2 and Modula-3. Mesa was not object-oriented, but included processes (threads) and monitors (mutexes) in the language. Mesa required creating two files for every module: a definition module specified data structures and procedures for each object, and one or more implementation modules contained the code for the procedures. Traits were a programming convention used to implement object-oriented capabilities and multiple inheritance in the Star/Viewpoint customer environment. The Star team used a sophisticated integrated development environment (IDE), named internally Tajo and externally Xerox Development Environment (XDE). Tajo had many similarities with the Smalltalk-80 environment, but it had many added tools. One example was the version control system DF, which required programmers to check out modules before they could be changed. Any change in a module which would force changes in dependent modules was closely tracked and documented. Changes to lower-level modules required various levels of approval. The software development process was intense. It involved much prototyping and user testing. The software engineers had to develop new network communications protocols and data-encoding schemes when those used in PARC's research environment proved inadequate. Initially, all development was done on Alto workstations. These were not well suited to the extreme burdens placed on them by the software. Even the processor intended for the product proved inadequate and necessitated a last-minute hardware redesign. Many software redesigns, rewrites, and late additions had to be made, variously based on results from user testing, and on marketing and systems considerations. A Japanese-language version of the system, code-named J-Star, was produced in conjunction with Fuji Xerox, together with full support for international customers. In the end, many features from the Star Functional Specification were not implemented. The product had to get to market, and the last several months before release focused on reliability and performance. System features User interface The key philosophy of the user interface was to mimic the office paradigm as much as possible in order to make it intuitive for users. The concept of what you see is what you get (WYSIWYG) was considered paramount. Text would be displayed as black on a white background, just like paper, and the printer would replicate the screen using Interpress, a page description language developed at PARC. One of the main designers of the Star, Dr. David Canfield Smith, invented the concept of computer icons and the desktop metaphor, in which the user would see a desktop that contained documents and folders, with different icons representing different types of documents. Clicking any icon would open a window. Users would not start programs first (e.g., a text editor, graphics program or spreadsheet software); they would simply open the file and the appropriate application would appear. The Star user interface was based on the concept of objects. For example, a word processing document would hold page objects, paragraph objects, sentence objects, word objects, and character objects. 
The user could select objects by clicking on them with the mouse, and press dedicated special keys on the keyboard to invoke standard object functions (open, delete, copy, move) in a uniform way; a schematic sketch of this uniform model appears at the end of this section. There was also a "Show Properties" key used to display settings, called property sheets, for the particular object (e.g., font size for a character object). These general conventions greatly simplified the menu structure of all the programs. Object integration was designed into the system from the start. For example, a chart object created in the graphing module could be inserted into any type of document. This type of ability eventually became available as part of the operating system on the Apple Lisa, and was featured in Mac OS System 7 as Publish and Subscribe. It became available on Microsoft Windows with the introduction of Object Linking and Embedding (OLE) in 1990. This approach was also later used on the OpenDoc software platform in the mid-to-late 1990s, and in the AppleWorks (originally ClarisWorks) package available for the Apple Mac (1991) and Microsoft Windows (1993). Hardware Initially, the Star software was to run on a new series of virtual-memory processors, described in a PARC technical report called "Wildflower: An Architecture for a Personal Computer", by Butler Lampson. The machines had names that always began with the letter D. They were all microprogrammed processors; for the Star software, microcode was loaded that implemented an instruction set designed for Mesa. It was possible to load microcode for the Interlisp or Smalltalk environments instead, but these three environments could not run at the same time. The next generation of these machines, the Dorado, used an emitter-coupled logic (ECL) processor. It was four times faster than the Dandelion on standard benchmarks, and thus competitive with the fastest superminicomputers of the day. It was used for research but was a rack-mounted CPU that was never intended to be an office product. A network router called Dicentra was also based on this design. The Dolphin was built with transistor-transistor logic (TTL) technology, including 74S181 ALUs. It was intended to be the Star workstation, but its cost was deemed too high to meet the project goals. The complexity of the software eventually overwhelmed its limited configuration. At one time in Star's development, it took more than half an hour to reboot the system. The Star workstation hardware actually released was known as the Dandelion (often shortened to "Dlion"). It was based on the AMD Am2900 bit-slice microprocessor technology. An enhanced version of the Dandelion, with more microcode space, was dubbed the "Dandetiger". The base Dandelion system had 384 kB of memory (expandable to 1.5 MB), a 10 MB, 29 MB or 40 MB 8" hard drive, an 8" floppy drive, a mouse and an Ethernet connection. The performance of this machine, which sold for $20,000, was about 850 in the Dhrystone benchmark, comparable to that of a VAX-11/750, which cost five times more. The cathode ray tube (CRT) display (black and white, 1024×809 pixels with 38.7 Hz refresh) was large by the standards of the time. It was meant to be able to display two 8.5×11 in pages side by side in true size. An interesting feature of the display was that the overscan area (borders) could be programmed with a 16×16 repeating pattern. This was used to extend the root window pattern to all the edges of the monitor, a feature that is unavailable even today on most video cards. 
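The uniform object-verb model described under User interface above can be sketched in modern terms as follows. This is a hypothetical illustration in Python; the class and method names are invented for exposition and are not Xerox's actual Mesa interfaces:

```python
# Hypothetical sketch of the Star's uniform object-verb model.
# Class and method names are illustrative, not Xerox's actual API.

class StarObject:
    """Every on-screen object answers the same dedicated-key commands."""
    def open(self): ...
    def copy(self): ...
    def delete(self): ...
    def properties(self):
        """What the 'Show Properties' key would display as a property sheet."""
        return {}

class CharacterObject(StarObject):
    def __init__(self, char, font_size=10):
        self.char, self.font_size = char, font_size
    def properties(self):
        return {"character": self.char, "font size": self.font_size}

class DocumentObject(StarObject):
    """A document is itself an object holding nested part objects."""
    def __init__(self, parts):
        self.parts = parts
    def properties(self):
        return {"number of parts": len(self.parts)}

def show_properties_key(selected):
    """One keystroke handler works uniformly on any selected object."""
    for name, value in selected.properties().items():
        print(f"{name}: {value}")

doc = DocumentObject([CharacterObject("A", font_size=12)])
show_properties_key(doc)           # prints: number of parts: 1
show_properties_key(doc.parts[0])  # prints: character: A / font size: 12
```

The point is that the same small set of verbs, and the same property-sheet key, applies equally to a character, a paragraph, or a whole document, which is what kept the Star's menu structure so small.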
Marketing and commercial reception The Xerox Star was not originally meant to be a stand-alone computer, but to be part of an integrated Xerox "personal office system" that also connected to other workstations and network services via Ethernet. Although a single unit sold for $16,000, a typical office would need to buy at least two or three machines along with a file server and a name server/print server. Spending $50,000 to $100,000 for a complete installation was not an easy sell when a secretary's annual salary was about $12,000 and a Commodore VIC-20 cost around $300. Later incarnations of the Star would allow users to buy one unit with a laser printer, but even so, only about 25,000 units were sold, leading many to consider the Xerox Star a commercial failure. The workstation was originally designed to run the Star software for performing office tasks, but it was also sold with different software for other markets. These other configurations included a workstation for Interlisp or Smalltalk, and a server. Some have said that the Star was ahead of its time, and that few outside a small circle of developers really understood the potential of the system, considering that IBM introduced its 8088-based IBM PC running the comparatively primitive PC DOS the same year that the Star was brought to market. However, comparison with the IBM PC may be irrelevant: well before it was introduced, buyers in the word processing industry were aware of the 8086-based IBM Displaywriter, the full-page portrait black-on-white Xerox 860 page display system and the 120-page-per-minute Xerox 9700 laser printer. Furthermore, the design principles of Smalltalk and modeless working had been extensively discussed in the August 1981 issue of Byte magazine, so Xerox PARC's standing and the potential of the Star can scarcely have been lost on its target (office systems) market, which would never have expected IBM to position a mass-market PC to threaten far more profitable dedicated WP systems. Unfortunately, pioneering players in the influential niche market of electronic publishing, such as Longman, were already aligning their production processes towards generic markup languages such as SGML (forerunner of HTML and XML), whereby authors using inexpensive offline systems could describe document structure, making their manuscripts ready for transfer to computer-to-film systems that offered far higher resolution than the then-maximum 360 dpi of laser printing technologies. Another possible reason given for the lack of success of the Star was Xerox's corporate structure. A longtime copier company, Xerox played to its strengths. It had already had one significant failure in trying to make its acquisition of Scientific Data Systems pay off. It is said that there were internal jealousies between the old-line copier systems divisions that were responsible for the bulk of Xerox's revenues and the new upstart division. Their marketing efforts were seen by some as half-hearted or unfocused. Furthermore, the most technically savvy sales representatives who might have sold office automation equipment were paid large commissions on leases of laser printer equipment costing up to a half-million dollars; no commission structure for decentralized systems could compete. The multilingual technical documentation market was also a major opportunity, but it needed cross-border collaboration for which few sales organisations were ready at the time. Even within Xerox Corporation, in the mid-1980s, there was little understanding of the system. 
Few corporate executives ever saw or used the system, and the sales teams, if they requested a computer to assist with their planning, would instead receive older, CP/M-based Xerox 820 or 820-II systems. There was no effort to seed the 8010/8012 Star systems within Xerox Corporation. Probably most significantly, strategic planners at the Xerox Systems Group (XSG) felt that they could not compete against other workstation makers such as Apollo Computer or Symbolics. The Xerox name alone was considered the group's greatest asset, but it did not produce customers. Finally, by today's standards, the system would be considered very slow, due partly to the limited hardware of the time, and partly to a poorly implemented file system; saving a large file could take minutes. Crashes could be followed by an hours-long process called file scavenging, signaled by the appearance of the diagnostic code 7511 in the top left corner of the screen. In the end, the Star's weak commercial reception probably came down to price, performance in demonstrations, and weakness of sales channels. Even Apple Computer's Lisa, inspired by the Star and introduced two years later, was a market failure, for many of the same reasons as the Star. To Xerox's credit, the company did try many things to improve sales. The next release of Star was on a different, more efficient hardware platform, Daybreak, using a new, faster processor, and accompanied by a significant rewriting of the Star software, renamed ViewPoint, to improve performance. The new system, dubbed the Xerox 6085 PCS, was released in 1985. The new hardware provided 1 MB to 4 MB of memory, a 10 MB to 80 MB hard disk, a 15" or 19" display, a 5.25" floppy drive, a mouse, an Ethernet connection and a price of a little over $6,000. The Xerox 6085 could be sold along with an attached laser printer as a standalone system. Also offered was a PC compatibility mode via an 80186-based expansion board. Users could transfer files between the ViewPoint system and PC-based software, albeit with some difficulty because the file formats were incompatible with those on the PC. But even with a significantly lower price, it was still a Rolls-Royce in a world of lower-cost $2,000 personal computers. In 1989, ViewPoint 2.0 introduced many new applications related to desktop publishing. Eventually, Xerox jettisoned the integrated hardware/software workstation offered by ViewPoint and offered a software-only product called GlobalView, providing the Star interface and technology on an IBM PC compatible platform. The initial release required installing a Mesa CPU add-on board. The final release, GlobalView 2.1, ran as an emulator on Sun Solaris, Microsoft Windows 3.1, Windows 95, Windows 98 and IBM OS/2, and was released in 1996. In the end, Xerox PARC, which prided itself on building hardware ten years ahead of its time and equipping each researcher with it so that work on the software could begin early, led Xerox to bring the product to market five years too early, all throughout the 1980s and into the early 1990s. The custom-hardware platform was always too expensive for the mission for which Star/ViewPoint was intended. Apple, having copied the Xerox Star in the early 1980s with the Lisa, struggled and had the same poor results. Apple's second, cost-reduced effort, the Macintosh, barely succeeded (by ditching virtual memory, implementing functionality in software, and using commodity microprocessors), and was not Apple's most profitable product in the late 1980s.
Apple also struggled to make profits on office system software in the same time period. L Peter Deutsch, one of the pioneers of the PostScript language, finally found a way, using just-in-time compilation for bitmap operations, to achieve Xerox Star-like efficiency, making the last piece of Xerox Star custom hardware, the BitBLT, obsolete by the early 1990s (a schematic sketch of the BitBLT operation appears at the end of this article). Legacy Even though the Star product failed in the market, it raised expectations and laid important groundwork for later computers. Many of the innovations behind the Star, such as WYSIWYG editing, Ethernet, and network services such as directory, print, file, and internetwork routing, have become commonplace in computers of today. Members of the Apple Lisa engineering team saw the Star at its introduction at the National Computer Conference (NCC '81) and returned to Cupertino, where they converted their desktop manager to an icon-based interface modeled on the Star. Larry Tesler, one of the developers of Xerox's Gypsy WYSIWYG editor, left Xerox to join Apple in 1980, where he went on to develop the MacApp framework. Charles Simonyi left Xerox to join Microsoft in 1981, where he developed the first WYSIWYG version of Microsoft Word (3.0). In 1983, Simonyi recommended Scott A. McGregor, who was recruited by Bill Gates to lead the development of Windows 1.0, in part for McGregor's experience in windowing systems at PARC. Later that year, several others left PARC to join Microsoft. Star, ViewPoint and GlobalView were the first commercial computing environments to offer support for most natural languages, including full-featured word processing, leading to their adoption by the Voice of America, other United States foreign affairs agencies, and several multinational corporations. The list of products that were inspired or influenced by the user interface of the Star, and to a lesser extent the Alto, includes the Apple Lisa and Macintosh, Graphics Environment Manager (GEM) from Digital Research (the CP/M company), VisiCorp's Visi On, Microsoft Windows, Atari ST, BTRON from TRON Project, Commodore's Amiga, Elixir Desktop, Metaphor Computer Systems, Interleaf, IBM OS/2, OPEN LOOK (co-developed by Xerox), SunView, KDE, Ventura Publisher and NeXTSTEP. Adobe Systems PostScript was based on Interpress. Ethernet was further refined by 3Com and has become a de facto standard networking protocol. Some people feel that Apple, Microsoft, and others plagiarized the GUI and other innovations from the Xerox Star, and believe that Xerox did not properly protect its intellectual property. The truth is perhaps more complex. Many patent disclosures were submitted for the innovations in the Star. However, at the time, the 1975 Xerox Consent Decree, a Federal Trade Commission (FTC) antitrust action, placed restrictions on what the firm was able to patent. Also, when the Star disclosures were being prepared, the Xerox patent attorneys were busy with several other new technologies such as laser printing. Finally, patents on software, particularly those relating to user interfaces, were then an untested legal area. Xerox did go to trial to protect the Star user interface. In 1989, after Apple sued Microsoft for copyright infringement of its Macintosh user interface in Windows, Xerox filed a similar lawsuit against Apple. However, this suit was thrown out on procedural rather than substantive grounds, because a three-year statute of limitations had passed. In 1994, Apple lost its suit against Microsoft, losing not only on the issues originally contested, but on all claims to the user interface.
On January 15, 2019, a work-in-progress Xerox Star emulator known as Darkstar, created by LCM+L, was released for Windows and Linux. See also Lisp machine Pilot (operating system) References External links The first GUIs - Chapter 2. History: A Brief History of User Interfaces Star graphics: An object-oriented implementation Traits: An approach to multiple-inheritance subclassing The design of Star's records processing: data processing for the noncomputer professional The Xerox "Star": A Retrospective (with full-size screenshots) Dave Curbow's Xerox Star Historical Documents (at the Digibarn) The Digibarn's pages on the Xerox Star 8010 Information System Xerox Star 1981 HCI Review of the Xerox Star GUI of Xerox Star Video: Xerox Star User Interface (1982) Video: Xerox Star User Interface compared to Apple Lisa (2020) Star History of human–computer interaction Computer workstations Products introduced in 1981 Computers using bit-slice designs
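The BitBLT (bit block transfer) primitive mentioned earlier copies a rectangular block of pixels from a source bitmap to a destination, combining the two with a raster operation. Below is a minimal Python sketch of the idea; real implementations work on packed words in hardware or highly tuned code and handle clipping and overlapping regions, all of which are omitted here.

```python
# Minimal BitBLT: copy a src rectangle into dst at (dx, dy), combining
# pixels with a raster operation. Bitmaps are lists of rows of 0/1 pixels.

ROPS = {
    "copy": lambda s, d: s,
    "or":   lambda s, d: s | d,
    "xor":  lambda s, d: s ^ d,
    "and":  lambda s, d: s & d,
}

def bitblt(dst, src, dx, dy, sx, sy, w, h, rop="copy"):
    op = ROPS[rop]
    for row in range(h):
        for col in range(w):
            s = src[sy + row][sx + col]
            d = dst[dy + row][dx + col]
            dst[dy + row][dx + col] = op(s, d)

dst = [[0] * 8 for _ in range(8)]
src = [[1] * 4 for _ in range(4)]
bitblt(dst, src, dx=2, dy=2, sx=0, sy=0, w=4, h=4, rop="xor")
print(*dst, sep="\n")
```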
41660149
https://en.wikipedia.org/wiki/Makers%20Academy
Makers Academy
Makers Academy is a 16-week computer programming bootcamp in London. It was founded by Rob Johnson and Evgeny Shadchnev in December 2012. Programme Makers Academy takes students with varying levels of prior experience in computer programming and teaches them the fundamentals of web development. The stated aim is to help students develop the necessary skills to secure a role as a junior developer upon graduation. The course covers professional web development technologies such as Ruby on Rails, HTML5, CSS, JavaScript, jQuery, SQL and Ajax, as well as softer skills, including object-oriented design, test-driven development, agile methodology and version control with Git. The main course is preceded by a 4-week, part-time, online "pre-course", which students have to complete first. The application process is highly selective, and the average student has several years of prior work experience before applying. Through its fellowship program, Makers Academy offers a limited number of free places to those who cannot afford to pay the usual £8,500 fee. The program adopts a "learn by doing" approach, achieved largely through self-directed, project-based work. Students are encouraged to work in pairs on practical coding challenges, with weekly tests, culminating in a final project which is presented to hiring partners on "Demo Day". The organisation claims to support 100% of its graduates into jobs, though data to verify this claim is not publicly available. Graduates are often put forward for roles by the Academy, which has relationships with employers such as Marks & Spencer, Sky, The Financial Times and Deloitte Digital. Reception Makers Academy has been featured on Sky News, in The Guardian, The Independent, Tech City News, Forbes, MadeInShoreditch, ComputerWeekly, StartupBook, and TechWeekEurope. References External links Makers Academy Site Makers Academy on Twitter Education in the London Borough of Islington Computer programming Training programs
27340479
https://en.wikipedia.org/wiki/Red%20Hat%20Virtualization
Red Hat Virtualization
Red Hat Virtualization (RHV) is an x86 virtualization product produced by Red Hat, based on the KVM hypervisor. Red Hat Virtualization uses the SPICE protocol and VDSM (Virtual Desktop Server Manager) with a RHEL-based centralized management server. It can acquire user and group information from an Active Directory or FreeIPA domain. Some of the technologies of Red Hat Virtualization came from Red Hat's acquisition of Qumranet. Other parts derive from oVirt. Before 2016, up to version 3.x, the product was named Red Hat Enterprise Virtualization (RHEV). RHV supports scheduling policies that define the logic by which virtual machines are distributed among hosts (a toy sketch of such a policy appears at the end of this article). RHV supports up to 400 hosts in a single cluster, with no limit on the maximum number of virtual machines. It also supports hot-plugging virtual CPUs and allows unlimited guest machines, in contrast with RHEL, which is limited to four guest machines. The RHV solution is based on two primary software components: Red Hat Virtualization Manager (RHV-M) Red Hat Virtualization Hypervisors or RHV Host (RHV-H) Versions 2.2 released June 22, 2010, at the Red Hat Summit in Boston. 3.0 announced January 18, 2012, which expanded ISV partnerships alongside new features such as management features, performance and scalability for both Linux and Windows workloads, a power user portal for self-service provisioning, a RESTful API, and local storage. 3.1 released December 5, 2012. 3.2 released June 12, 2013. 3.3 released January 22, 2014. 3.4 released June 16, 2014. Among other improvements, it provides better integration with OpenStack and version 7 of Red Hat Enterprise Linux. 3.5 released February 11, 2015. 3.6 released March 9, 2016. 4.0 released August 24, 2016. Rebranding to Red Hat Virtualization. 4.1 released April 19, 2017. Support for the QCOW3 image format. Hot unplug of CPUs (limited to CPUs that were previously hot-plugged). VMs with SR-IOV can now be live-migrated. Ability to sparsify thinly provisioned disks when a VM is shut down. It is now possible to pass discard commands on to the underlying storage (previously QEMU ignored them). Up to 288 vCPUs per VM. RHV Host images now also include tcpdump and screen. 4.2 released May 15, 2018. Brings ease of use, automation, and continued tighter integration with the Red Hat portfolio. 4.3 released May 15, 2019, which includes Guest Time Synchronization, among other enhancements. 4.4 released August 4, 2020, which includes improved integration with Red Hat OpenShift, among other enhancements. See also Red Hat Distribution of OpenStack References External links Red Hat software Virtualization software Virtualization-related software for Linux
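As promised above, here is a toy Python sketch of an "even distribution"-style placement decision: pick the candidate host running the fewest virtual machines, subject to a memory check. This is not RHV's actual scheduler implementation, and the host attributes used here are assumptions made for the example.

```python
# Toy "even distribution" scheduling policy: place a new VM on the host
# with the fewest running VMs that still has enough free memory.
# Illustrative only; not the actual RHV scheduler.

def pick_host(hosts, vm_mem_mb):
    candidates = [h for h in hosts if h["free_mem_mb"] >= vm_mem_mb]
    if not candidates:
        raise RuntimeError("no host can satisfy the VM's memory request")
    return min(candidates, key=lambda h: h["vm_count"])

hosts = [
    {"name": "host1", "vm_count": 12, "free_mem_mb": 65536},
    {"name": "host2", "vm_count": 7,  "free_mem_mb": 8192},
    {"name": "host3", "vm_count": 9,  "free_mem_mb": 131072},
]
# host2 has the fewest VMs but too little free memory, so host3 is chosen.
print(pick_host(hosts, vm_mem_mb=16384)["name"])
```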
1186980
https://en.wikipedia.org/wiki/Social%20software%20%28social%20procedure%29
Social software (social procedure)
In philosophy and the social sciences, social software is an interdisciplinary research program that borrows mathematical tools and techniques from game theory and computer science in order to analyze and design social procedures. The goals of research in this field are modeling social situations, developing theories of correctness, and designing social procedures. Work under the term social software has been going on since about 1996, and conferences in Copenhagen, London, Utrecht and New York have been partly or wholly devoted to it. Much of the work is carried out at the City University of New York under the leadership of Rohit Jivanlal Parikh, who was influential in the development of the field. Goals and tools Current research in the area of social software includes the analysis of social procedures and the examination of them for fairness, appropriateness, correctness and efficiency. For example, an election procedure could be a simple majority vote, a Borda count, a single transferable vote (STV), or approval voting. All of these procedures can be examined for various properties, such as monotonicity. Monotonicity is the property that voting for a candidate should not harm that candidate. This may seem obviously true under any system, but monotonicity failures can occur under STV. Another question is the ability to elect a Condorcet winner in case there is one (a small computational sketch appears at the end of this article). Other principles considered by researchers in social software include the idea that a procedure for fair division should be Pareto-optimal, equitable and envy-free, and that a procedure for auctions should encourage bidders to bid their actual valuations – a property which holds for the Vickrey auction. What is new in social software compared to older fields is the use of tools from computer science like program logic, analysis of algorithms and epistemic logic. Like programs, social procedures dovetail into each other. For instance, an airport provides runways for planes to land, but it also provides security checks, and it must provide ways in which buses and taxis can take arriving passengers to their local destinations. The entire mechanism can be analyzed in the way in which a complex computer program can be analyzed. The Banach-Knaster procedure for dividing a cake fairly, and the Brams and Taylor procedure for fair division, have been analyzed in this way. To point to the need for epistemic logic: a building not only needs restrooms, for obvious reasons, it also needs signs indicating where they are. Thus epistemic considerations enter in addition to structural ones. For a more urgent example, in addition to medicines, physicians also need tests to indicate what a patient's problem is. See also Dynamic logic Epistemic logic Fair division Game theory Mechanism design No-trade theorem Social procedure Social technology Notes Further reading John Searle, The Construction of Social Reality (1995) New York: Free Press, c1995. Rohit Parikh, "Social Software," Synthese, 132, Sep 2002, 187–211. Eric Pacuit and Rohit Parikh, "Social Interaction, Knowledge, and Social Software", in Interactive Computation: The New Paradigm, ed. Dina Goldin, Scott Smolka, Peter Wegner, Springer 2007, 441–461. Ludwig Wittgenstein, Philosophical Investigations, Macmillan, 1953. Jaakko Hintikka, Knowledge and Belief: an introduction to the logic of the two notions, Cornell University Press, 1962. D. Lewis, Convention, a Philosophical Study, Harvard U. Press, 1969. R.
Aumann, Agreeing to disagree, Annals of Statistics, 4 (1976), 1236–1239. J. Geanakoplos and H. Polemarchakis, We Can't Disagree Forever, J. Economic Theory, 28 (1982), 192–200. R. Parikh and P. Krasucki, Communication, Consensus and Knowledge, J. Economic Theory, 52 (1990), pp. 178–189. W. Brian Arthur, Inductive reasoning and bounded rationality, Complexity in Economic Theory, 84(2):406–411, 1994. Ronald Fagin, Joseph Halpern, Yoram Moses and Moshe Vardi, Reasoning about Knowledge, MIT Press, 1995. Steven Brams and Alan Taylor, The Win-Win Solution: guaranteeing fair shares to everybody, Norton, 1999. David Harel, Dexter Kozen and Jerzy Tiuryn, Dynamic Logic, MIT Press, 2000. Michael Chwe, Rational ritual: culture, coordination, and common knowledge, Princeton University Press, 2001. Marc Pauly, Logic for Social Software, Ph.D. Thesis, University of Amsterdam, ILLC Dissertation Series 2001-10. Rohit Parikh, Language as social software, in Future Pasts: the Analytic Tradition in Twentieth Century Philosophy, ed. J. Floyd and S. Shieh, Oxford U. Press, 2001, 339–350. Parikh, R. and Ramanujam, R., A knowledge based semantics of messages, J. Logic, Language, and Information, 12, pp. 453–467, 2003. Eric Pacuit, Topics in Social Software: Information in Strategic Situations, Doctoral dissertation, City University of New York (2005). Eric Pacuit, Rohit Parikh and Eva Cogan, The Logic of Knowledge Based Obligation, Knowledge, Rationality and Action, a subjournal of Synthese, 149(2), 311–341, 2006. Eric Pacuit and Rohit Parikh, Reasoning about Communication Graphs, in Interactive Logic, edited by Johan van Benthem, Dov Gabbay and Benedikt Löwe (2007). Mike Wooldridge, Thomas Ågotnes, Paul E. Dunne, and Wiebe van der Hoek, Logic for Automated Mechanism Design – A Progress Report, in Proceedings of the Twenty-Second Conference on Artificial Intelligence (AAAI-07), Vancouver, Canada, July 2007. External links Knowledge, Games and Beliefs Group. City University of New York, Graduate Center. Social Software conference. Carlsberg Academy, Copenhagen. May 27–29, 2004. Retrieved on 2009-06-26. Interactive Logic: Games and Social Software workshop. King's College, London. November 4–7, 2005. Retrieved on 2009-06-26. Games, action and social software workshop. Lorentz Center, Leiden University, Netherlands. 30 Oct 2006–3 Nov 2006. Retrieved on 2009-06-26. Social Software Mini-conference. Knowledge, Games and Beliefs Group, City University of New York. May 18–19, 2007. Retrieved on 2009-06-26. Game theory Logic
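To make the procedure-analysis perspective discussed above concrete, the sketch below computes a Condorcet winner, when one exists, from a set of ranked ballots: the candidate who beats every rival in head-to-head majority comparisons. The candidate names and ballots are hypothetical, invented for the example.

```python
# Find the Condorcet winner of an election, if one exists: the candidate
# who beats every other candidate in head-to-head majority comparisons.
from itertools import combinations

def condorcet_winner(candidates, ballots):
    """ballots: list of rankings, each a list of candidates, best first."""
    wins = {c: set() for c in candidates}
    for a, b in combinations(candidates, 2):
        a_over_b = sum(1 for r in ballots if r.index(a) < r.index(b))
        if a_over_b * 2 > len(ballots):
            wins[a].add(b)
        elif a_over_b * 2 < len(ballots):
            wins[b].add(a)
    for c in candidates:
        if len(wins[c]) == len(candidates) - 1:
            return c
    return None  # no Condorcet winner (e.g., a majority cycle)

ballots = [["A", "B", "C"], ["A", "C", "B"], ["B", "A", "C"],
           ["C", "A", "B"], ["B", "C", "A"]]
print(condorcet_winner(["A", "B", "C"], ballots))  # "A" beats both B and C
```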
45363782
https://en.wikipedia.org/wiki/Three-stage%20quantum%20cryptography%20protocol
Three-stage quantum cryptography protocol
The three-stage quantum cryptography protocol, also known as Kak's three-stage protocol, is a method of data encryption proposed by Subhash Kak that uses random polarization rotations by both Alice and Bob, the two authenticated parties. In principle, this method can be used for continuous, unbreakable encryption of data if single photons are used. It is different from methods of QKD (quantum key distribution) in that it can be used for direct encryption of data, although it could also be used for exchanging keys. The basic idea behind this method is that of sending secrets (or valuables) through an unreliable courier by having both Alice and Bob place their locks on the box containing the secret, an approach also called double-lock cryptography. Alice locks the box with the secret in it and it is transported to Bob, who sends it back after affixing his own lock. Alice now removes her lock (after checking that it has not been tampered with) and sends the box back to Bob, who similarly removes his own lock and obtains the secret (a toy simulation of the underlying rotation arithmetic appears at the end of this article). In the braided form, only one pass suffices, but then Alice and Bob share an initial key. This protocol has been proposed as a method for secure communication that is entirely quantum, unlike quantum key distribution, in which the cryptographic transformation uses classical algorithms. The basic polarization rotation scheme has been implemented in hardware by Pramode Verma in the quantum optics laboratory of the University of Oklahoma. In this method, more than one photon can be used in the exchange between Alice and Bob and, therefore, it opens up the possibility of multi-photon quantum cryptography. This works so long as the number of photons siphoned off by the eavesdropper is not sufficient to determine the polarization angles. A version that can deal with the man-in-the-middle attack has also been advanced. Parakh analyzed the three-stage protocol under rotational quantum errors and proposed a modification that would correct these errors. One interesting feature of the modified protocol is that it is invariant to the value of the rotational error and can therefore correct for arbitrary rotations. See also Three-pass protocol References Cryptography Quantum cryptography
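Here is the toy simulation promised above: a bit is encoded as a polarization angle, and each party's secret rotation is an angle added modulo 180 degrees. Because rotations about the same axis commute, each party can later remove its own rotation regardless of the order in which the rotations were applied. This classical sketch ignores the quantum aspects (single photons, measurement disturbance) that actually provide the security.

```python
# Toy simulation of the three-stage protocol's commuting rotations.
# Polarization is modeled as an angle in degrees; bit 0 -> 0°, bit 1 -> 90°.
import random

def rotate(angle, theta):
    return (angle + theta) % 180.0

def three_stage(bit):
    theta_a = random.uniform(0, 180)   # Alice's secret rotation
    theta_b = random.uniform(0, 180)   # Bob's secret rotation
    state = 90.0 if bit else 0.0       # encode the bit as a polarization angle

    s1 = rotate(state, theta_a)        # stage 1: Alice -> Bob (Alice's lock on)
    s2 = rotate(s1, theta_b)           # stage 2: Bob -> Alice (both locks on)
    s3 = rotate(s2, -theta_a)          # stage 3: Alice removes her lock
    final = rotate(s3, -theta_b)       # Bob removes his lock and measures

    return 1 if abs(final - 90.0) < 1e-6 else 0

assert all(three_stage(b) == b for b in (0, 1) for _ in range(100))
print("bit recovered correctly in all trials")
```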
4248773
https://en.wikipedia.org/wiki/Cypress%20PSoC
Cypress PSoC
PSoC (programmable system on a chip) is a family of microcontroller integrated circuits by Cypress Semiconductor. These chips include a CPU core and mixed-signal arrays of configurable integrated analog and digital peripherals. History In 2002, Cypress began shipping commercial quantities of the PSoC 1. To promote the PSoC, Cypress sponsored a "PSoC Design Challenge" in Circuit Cellar magazine in 2002 and 2004. In April 2013, Cypress released the fourth generation, PSoC 4. The PSoC 4 features a 32-bit ARM Cortex-M0 CPU, with programmable analog blocks (operational amplifiers and comparators), programmable digital blocks (PLD-based UDBs), programmable routing and flexible GPIO (route any function to any pin), a serial communication block (for SPI, UART, I²C), a timer/counter/PWM block and more. PSoC is used in devices as simple as Sonicare toothbrushes and Adidas sneakers, and as complex as the TiVo set-top box. One PSoC implements capacitive sensing for the touch-sensitive click wheel on the Apple iPod. In 2014, Cypress extended the PSoC 4 family by integrating a Bluetooth Low Energy radio along with a PSoC 4 Cortex-M0-based SoC in a single, monolithic die. In 2016, Cypress released the PSoC 4 S-Series, featuring an ARM Cortex-M0+ CPU. Overview A PSoC integrated circuit is composed of a core, configurable analog and digital blocks, and programmable routing and interconnect. The configurable blocks in a PSoC are the biggest difference from other microcontrollers. PSoC has three separate memory spaces: paged SRAM for data, Flash memory for instructions and fixed data, and I/O registers for controlling and accessing the configurable logic blocks and functions. The device is created using SONOS technology. PSoC resembles an ASIC: blocks can be assigned a wide range of functions and interconnected on-chip. Unlike an ASIC, there is no special manufacturing process required to create the custom configuration — only startup code that is created by Cypress' PSoC Designer (for PSoC 1) or PSoC Creator (for PSoC 3 / 4 / 5) IDE. PSoC resembles an FPGA in that at power-up it must be configured, but this configuration occurs by loading instructions from the built-in Flash memory. PSoC most closely resembles a microcontroller combined with a PLD and programmable analog. Code is executed to interact with the user-specified peripheral functions (called "Components"), using automatically generated APIs and interrupt routines. PSoC Designer or PSoC Creator generate the startup configuration code. Both generate APIs that initialize the user-selected components according to the user's needs, in a Visual-Studio-like GUI. Configurable analog and digital blocks Using configurable analog and digital blocks, designers can create and change mixed-signal embedded applications. The digital blocks are state machines that are configured using the block's registers. There are two types of digital blocks, Digital Building Blocks (DBBxx) and Digital Communication Blocks (DCBxx). Only the communication blocks can contain serial I/O user modules, such as SPI, UART, etc. Each digital block is considered an 8-bit resource that designers can configure using pre-built digital functions or user modules (UMs), or, by combining blocks, turn into 16-, 24-, or 32-bit resources. Concatenating UMs together is how 16-bit PWMs and timers are created (a toy model of such chaining appears at the end of this article). There are two types of analog blocks. The continuous time (CT) blocks are composed of an op-amp circuit and designated as ACBxx, where xx is 00–03.
The other type is the switched-capacitor (SC) blocks, which allow complex analog signal flows and are designated by ASCxy, where x is the row and y is the column of the analog block. Designers can modify and personalize each module to suit a design. Programmable routing and interconnect PSoC mixed-signal arrays' flexible routing allows designers to route signals to and from I/O pins more freely than with many competing microcontrollers. Global buses allow for signal multiplexing and for performing logic operations. Cypress suggests that this allows designers to configure a design and make improvements more easily and quickly, and with fewer PCB redesigns, than a digital logic gate approach or competing microcontrollers with more fixed-function pins. Series There are five different families of devices, each based around a different microcontroller core: PSoC 1 — CY8C2xxxx series — M8C core. PSoC 3 — CY8C3xxxx series — 8051 core. PSoC 4 — CY8C4xxxx series — ARM Cortex-M0 core. PSoC 5/5LP — CY8C5xxxx series — ARM Cortex-M3 core. PSoC 6 — CY8C6xxxx series — ARM Cortex-M4 core with an added ARM Cortex-M0+ core (in some models). Bluetooth Low Energy Starting in 2014, Cypress began offering PSoC 4 BLE devices with integrated Bluetooth Low Energy (Bluetooth Smart). These can be used to create connected products leveraging the analog and digital blocks. Users can add and configure the BLE module directly in PSoC Creator. Cypress also provides a complete Bluetooth Low Energy stack, licensed from Mindtree, with both Peripheral and Central functionality. The PSoC 6 series includes BLE-equipped versions supporting Bluetooth 5 features such as extended range and higher speed. Summary Development tools PSoC Designer This is the first-generation software IDE used to design, debug and program the PSoC 1 devices. It introduced unique features including a library of pre-characterized analog and digital peripherals in a drag-and-drop design environment, which could then be customized to specific design needs by leveraging the dynamically generated API libraries of code. PSoC Creator PSoC Creator is the second-generation software IDE used to design, debug and program the PSoC 3 / 4 / 5 devices. The development IDE is combined with an easy-to-use graphical design editor to form a powerful hardware/software co-design environment. PSoC Creator consists of two basic building blocks: the program that allows the user to select, configure and connect existing circuits on the chip, and the components, which are the equivalent of peripherals on MCUs. What makes PSoC intriguing is the possibility of creating one's own application-specific peripherals in hardware. Cypress publishes component packs several times a year, so PSoC users get new peripherals for their existing hardware without being charged or having to buy new hardware. PSoC Creator also allows much freedom in the assignment of peripherals to I/O pins. Cortex-M Generic ARM development tools for PSoC 4 and PSoC 5. Documentation PSoC 4 / 5 The amount of documentation for all ARM chips is daunting, especially for newcomers. The documentation for microcontrollers from past decades would easily fit in a single document, but as chips have evolved, so has the documentation grown. The total documentation is especially hard to grasp for all ARM chips, since it consists of documents from the IC manufacturer (Cypress Semiconductor) and documents from the CPU core vendor (ARM Holdings).
A typical top-down documentation tree is: manufacturer website, manufacturer marketing slides, manufacturer datasheet for the exact physical chip, manufacturer detailed reference manual that describes common peripherals and aspects of a physical chip family, ARM core generic user guide, ARM core technical reference manual, ARM architecture reference manual that describes the instruction set(s). PSoC 4 / 5 documentation tree (top to bottom) PSoC website. PSoC marketing slides. PSoC datasheet. PSoC reference manuals. ARM core website. ARM core generic user guide. ARM core technical reference manual. ARM architecture reference manual. Cypress Semiconductor has additional documents, such as evaluation board user manuals, application notes, getting started guides, software library documents, errata, and more. See the External links section for links to official PSoC and ARM documents. See also ARM architecture, List of ARM microprocessor cores, ARM Cortex-M Microcontroller, (List of common microcontrollers) Embedded systems Single-board microcontroller Interrupt, Interrupt handler, Comparison of real-time operating systems Joint Test Action Group Serial Wire Debug Field-programmable analog array Reconfigurable computing References Further reading External links PSoC Official Documents PSoC Designer software for PSoC 1 family PSoC Creator software for PSoC 3 / 4 / 5LP families PSoC Programmer software for PSoC 1 / 3 / 4 / 5LP families ARM Official Documents for PSoC 4 / 5 Other PSoC Developer IoT Expert PSoC Tutorials Psoc-chile, the first Spanish-language website about PSoC microcontrollers Integrated circuits System on a chip
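The sketch below is the toy model of block chaining promised in the digital-blocks section above: two 8-bit counters are concatenated, with the low block's overflow clocking the high block, so the pair behaves as one 16-bit resource. It is a conceptual illustration of the idea only, not a model of actual UDB or digital-block hardware.

```python
# Toy model of chaining two 8-bit counter blocks into one 16-bit resource:
# the low block's overflow (carry) clocks the high block. Conceptual only.

class Counter8:
    def __init__(self):
        self.value = 0
    def tick(self):
        """Advance one count; return True on overflow (carry out)."""
        self.value = (self.value + 1) & 0xFF
        return self.value == 0

class ChainedCounter16:
    def __init__(self):
        self.low, self.high = Counter8(), Counter8()
    def tick(self):
        if self.low.tick():          # carry from the low block...
            self.high.tick()         # ...clocks the high block
    @property
    def value(self):
        return (self.high.value << 8) | self.low.value

c = ChainedCounter16()
for _ in range(300):
    c.tick()
print(hex(c.value))  # 0x12c == 300: the pair behaves as one 16-bit counter
```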
28402948
https://en.wikipedia.org/wiki/Gaslamp%20Games
Gaslamp Games
Gaslamp Games, Inc. was an independent game developer based in Victoria, British Columbia, Canada, which designed video games for the Microsoft Windows, Mac OS X, and Linux operating systems. Their first game, Dungeons of Dredmor, was released in 2011. Their most recent game, Clockwork Empires, was released in 2016. The company never formally announced that it had ceased operating, but after the Christmas holiday in 2016, employees posted on social media that they no longer worked there. There have also been no social media posts by the company and no further development of its games since that time. In May 2019, the company confirmed that it had ceased all operations and officially removed Clockwork Empires from sale on all platforms. Staff Nicholas Vining, Gaslamp's technical director, has been involved in the game industry since the age of sixteen, when he got his Linux gaming start working for Loki Software. Since then, he has contributed to games developed by Piranha Games, 3000AD Inc., Destineer Studios, and TimeGate Studios. He has also written for Game Developer Magazine, and is listed as a contributor to the OpenGL Rendering API specification. He has also worked with prolific coder Ryan C. Gordon on various open-source and Linux-related projects. David Baumgart, Gaslamp's art director, previously worked as a contractor specialising in 2-D artwork for video games. His list of credited titles includes work for Niels Bauer Games, Hexwar Games, Data Spire, and Tactic Studios. He also created the logo for the FatELF project. Daniel Jacobsen, Gaslamp's CEO, co-founded the company while working on his undergraduate degree in physics at the University of Victoria, and the company's first release, Dungeons of Dredmor, was his first commercial video game project. In addition to programming and company management, he has lectured with Nicholas Vining at the University of Victoria on game design, and contributes actively to the Australian National University's SkyMapper project. Releases Their first project, Dungeons of Dredmor, was released on July 13, 2011. Dungeons of Dredmor is a Rogue-inspired dungeon crawler which embraces procedural content generation. Gaslamp's second game, Clockwork Empires, a steampunk city-building game, was released on Steam Early Access on August 15, 2014. Version 1.0 of Clockwork Empires launched on October 26, 2016. References External links Gaslamp Games - Official Website Gaslamp Games at YouTube Gaslamp Games at Facebook Defunct video game companies of Canada Video game companies established in 2010 Video game companies disestablished in 2017 2010 establishments in British Columbia 2017 disestablishments in British Columbia Video game development companies Companies based in Victoria, British Columbia Defunct companies of British Columbia
2315033
https://en.wikipedia.org/wiki/YaCy
YaCy
YaCy (pronounced "ya see") is a free distributed search engine, built on principles of peer-to-peer (P2P) networks. Its core is a computer program written in Java, distributed on several hundred computers, so-called YaCy-peers. Each YaCy-peer independently crawls through the Internet, analyzes and indexes found web pages, and stores the indexing results in a common database (the so-called index), which is shared with other YaCy-peers using principles of P2P networks. It is a search engine that everyone can use to build a search portal for their intranet and to help search the public internet. Compared to semi-distributed search engines, the YaCy network has a fully distributed architecture. All YaCy-peers are equal and no central server exists. It can be run either in a crawling mode or as a local proxy server, indexing web pages visited by the person running YaCy on their computer (several mechanisms are provided to protect the user's privacy). Access to the search functions is made by a locally running web server which provides a search box to enter search terms, and returns search results in a similar format to other popular search engines. YaCy was created in 2003 by Michael Christen. System components The YaCy search engine is based on four elements: Crawler A search robot that traverses between web pages, analyzing their content. Indexer Creates a reverse word index (RWI), i.e., each word in the RWI has a list of relevant URLs and ranking information. Words are saved in the form of word hashes (a toy illustration of this appears at the end of this article). Search and administration interface A web interface provided by a local HTTP servlet with a servlet engine. Data storage Used to store the reverse word index database, utilizing a distributed hash table. Search-engine technology YaCy is a complete search appliance with user interface, index, administration and monitoring. YaCy harvests web pages with a web crawler. Documents are then parsed and indexed, and the search index is stored locally. If a peer is part of a peer network, its local search index is also merged into the shared index for that network. When a search is started, the local index contributes results together with the global search index from peers in the YaCy search network. The YaCy Grid is a second-generation implementation of the YaCy peer-to-peer search. A YaCy Grid installation consists of microservices that communicate using the MCP. The YaCy Parser is a microservice that can be deployed using Docker. When the Parser component is started, it searches for an MCP and connects to it. By default, the local host is searched for an MCP, but another can be configured. YaCy platform architecture YaCy uses a combination of techniques for the networking, administration, and maintenance of the search engine's index, including blacklisting, moderation, and communication with the community. Here is how YaCy organizes these operations: Community components Web forum Statistics XML API Maintenance Web Server Indexing Crawler with Balancer Peer-to-Peer Server Communication Content organization Blacklisting and Filtering Search interface Bookmarks Monitoring search results Distribution YaCy is available in packages for Linux, Windows, and Macintosh, and also as a Docker image. YaCy can also be installed on any other operating system, either by manually compiling it or by using a tarball. YaCy requires Java 8; OpenJDK 8 is recommended. The Debian package can be installed from a repository available at the subdomain of the project's website.
The package is not yet maintained in the official Debian package repository. See also Dooble – an open-source web browser with an integrated YaCy search engine tool widget References Further reading YaCy at LinuxReviews External links Anonymity networks Distributed data storage Free search engine software Free web crawlers Internet properties established in 2003 Internet search engines Java platform software Cross-platform software Software using the GPL license Java (programming language) software Peer-to-peer software
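Here is the toy illustration promised above of the reverse word index and distributed-hash-table ideas: each word is hashed, the hash maps to a posting list of URLs, and a responsible peer is chosen for each word hash by a simple modulo rule. Real YaCy uses its own hash scheme, stores ranking information alongside each URL, and has a far more elaborate DHT assignment; everything here is deliberately simplified.

```python
# Toy reverse word index (RWI) with DHT-style peer assignment, in the
# spirit of YaCy's design: words are stored as hashes, each hash maps to
# a list of URLs, and each hash is assigned to one peer. Simplified.
import hashlib
from collections import defaultdict

PEERS = ["peer-a", "peer-b", "peer-c"]

def word_hash(word):
    return hashlib.sha1(word.lower().encode()).hexdigest()[:12]

def responsible_peer(whash):
    return PEERS[int(whash, 16) % len(PEERS)]  # stand-in for YaCy's DHT rule

rwi = defaultdict(list)  # word hash -> posting list of URLs

def index_page(url, text):
    for word in set(text.split()):
        rwi[word_hash(word)].append(url)

def search(word):
    return rwi.get(word_hash(word), [])

index_page("http://example.org/a", "distributed search engine")
index_page("http://example.org/b", "peer to peer search")
print(search("search"))                       # both URLs
print(responsible_peer(word_hash("search")))  # peer storing this word's index
```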
16142167
https://en.wikipedia.org/wiki/History%20of%20personal%20computers
History of personal computers
The history of the personal computer as a mass-market consumer electronic device began with the microcomputer revolution of the 1970s. A personal computer is one intended for interactive individual use, as opposed to a mainframe computer where the end user's requests are filtered through operating staff, or a time-sharing system in which one large processor is shared by many individuals. After the development of the microprocessor, individual personal computers were low enough in cost that they eventually became affordable consumer goods. Early personal computers – generally called microcomputers – were often sold in electronic kit form and in limited numbers, and were of interest mostly to hobbyists and technicians. Etymology An early use of the term "personal computer" appeared in a 3 November 1962 New York Times article reporting John W. Mauchly's vision of future computing as detailed at a recent meeting of the Institute of Industrial Engineers. Mauchly stated, "There is no reason to suppose the average boy or girl cannot be master of a personal computer". In 1968, a manufacturer took the risk of referring to its product this way, when Hewlett-Packard advertised its "Powerful Computing Genie" as "The New Hewlett-Packard 9100A personal computer". This advertisement was deemed too extreme for the target audience and was replaced with a much drier ad for the HP 9100A programmable calculator. Over the next seven years, the phrase gained enough recognition that Byte magazine referred to its readers in its first edition as "[in] the personal computing field", and Creative Computing defined the personal computer as a "non-(time)shared system containing sufficient processing power and storage capabilities to satisfy the needs of an individual user." In 1977, three new pre-assembled small computers hit the market, which Byte would refer to as the "1977 Trinity" of personal computing. The Apple II and the PET 2001 were advertised as personal computers, while the TRS-80 was described as a microcomputer used for household tasks including "personal financial management". By 1979, over half a million microcomputers had been sold and the youth of the day had a new concept of the personal computer. Overview The history of the personal computer as a mass-market consumer electronic device effectively began in 1977 with the introduction of microcomputers, although some mainframe and minicomputers had been applied as single-user systems much earlier. Mainframes, minicomputers, and microcomputers Computer terminals were used for time-sharing access to central computers. Before the introduction of the microprocessor in the early 1970s, computers were generally large, costly systems owned by large corporations, universities, government agencies, and similar-sized institutions.
End users generally did not directly interact with the machine, but instead would prepare tasks for the computer on off-line equipment, such as card punches. A number of assignments for the computer would be gathered up and processed in batch mode. After the job had completed, users could collect the results. In some cases, it could take hours or days between submitting a job to the computing center and receiving the output. A more interactive form of computer use developed commercially by the middle 1960s. In a time-sharing system, multiple computer terminals let many people share the use of one mainframe computer processor. This was common in business applications and in science and engineering. A different model of computer use was foreshadowed by the way in which early, pre-commercial, experimental computers were used, where one user had exclusive use of a processor. In places such as Carnegie Mellon University and MIT, students with access to some of the first computers experimented with applications that would today be typical of a personal computer; for example, computer-aided drafting was foreshadowed by T-square, a program written in 1961, and an ancestor of today's computer games was found in Spacewar! in 1962. Some of the first computers that might be called "personal" were early minicomputers such as the LINC and PDP-8, and later on the VAX and larger minicomputers from Digital Equipment Corporation (DEC), Data General, Prime Computer, and others. By today's standards, they were very large (about the size of a refrigerator) and cost-prohibitive (typically tens of thousands of US dollars). However, they were much smaller, less expensive, and generally simpler to operate than many of the mainframe computers of the time. Therefore, they were accessible for individual laboratories and research projects. Minicomputers largely freed these organizations from the batch processing and bureaucracy of a commercial or university computing center. In addition, minicomputers were relatively interactive and soon had their own operating systems. The Xerox Alto (1973) was a landmark step in the development of personal computers because of its graphical user interface, high-resolution bit-mapped screen, large internal and external memory storage, mouse, and special software. In 1945, Vannevar Bush published an essay called "As We May Think" in which he outlined a possible solution to the growing problem of information storage and retrieval. In 1968, SRI researcher Douglas Engelbart gave what was later called The Mother of All Demos, in which he offered a preview of things that have become the staples of daily working life in the 21st century: e-mail, hypertext, word processing, video conferencing, and the mouse. The demo was the culmination of research in Engelbart's Augmentation Research Center laboratory, which concentrated on applying computer technology to facilitate creative human thought. Microprocessor and cost reduction The minicomputer ancestors of the modern personal computer used early integrated circuit (microchip) technology, which reduced size and cost, but they contained no microprocessor. This meant that they were still large and difficult to manufacture, just like their mainframe predecessors. After the "computer-on-a-chip" was commercialized, the cost to manufacture a computer system dropped dramatically.
The arithmetic, logic, and control functions that previously occupied several costly circuit boards were now available in one integrated circuit, making it possible to produce them in high volume. Concurrently, advances in the development of solid-state memory eliminated the bulky, costly, and power-hungry magnetic-core memory used in prior generations of computers. The single-chip microprocessor was made possible by an improvement in MOS technology, the silicon-gate MOS chip, developed in 1968 by Federico Faggin, who later used silicon-gate MOS technology to develop the first single-chip microprocessor, the Intel 4004, in 1971. A few researchers at places such as SRI and Xerox PARC were working on computers that a single person could use and that could be connected by fast, versatile networks: not home computers, but personal ones. At RCA, Joseph Weisbecker designed and built a true home computer known as FRED, but this saw mixed interest from management. The CPU design was released as the COSMAC in 1974 and several experimental machines using it were built in 1975, but RCA declined to market any of these until introducing the COSMAC ELF in 1976, in kit form. By this time a number of other machines had entered the market. After the introduction of the Intel 4004 in 1971, microprocessor costs declined rapidly. In 1974 the American electronics magazine Radio-Electronics described the Mark-8 computer kit, based on the Intel 8008 processor. In January of the following year, Popular Electronics magazine published an article describing a kit based on the Intel 8080, a somewhat more powerful and easier-to-use processor. The Altair 8800 sold remarkably well even though initial memory size was limited to a few hundred bytes and there was no software available. However, the Altair kit was much less costly than an Intel development system of the time, and so was purchased by companies interested in developing microprocessor control for their own products. Expansion memory boards and peripherals were soon listed by the original manufacturer, and later by plug-compatible manufacturers. The very first Microsoft product was a 4-kilobyte paper tape BASIC interpreter, which allowed users to develop programs in a higher-level language. The alternative was to hand-assemble machine code that could be directly loaded into the microcomputer's memory using a front panel of toggle switches, pushbuttons and LED displays. While the hardware front panel emulated those used by early mainframe and minicomputers, after a very short time I/O through a terminal was the preferred human/machine interface, and front panels became extinct. The beginnings of the personal computer industry Simon Simon was a project developed by Edmund Berkeley and presented in a thirteen-part series of articles in Radio-Electronics magazine beginning in October 1950. Although there were far more advanced machines at the time of its construction, the Simon represented the first experience of building a simple automatic digital computer for educational purposes. In fact, its ALU had only 2 bits, and the total memory was 12 bits (2 bits × 6). In 1950, it was sold for US$600. IBM 610 The IBM 610 was designed between 1948 and 1957 by John Lentz at the Watson Lab at Columbia University as the Personal Automatic Computer (PAC) and announced by IBM as the 610 Auto-Point in 1957. Although it was faulted for its speed, the IBM 610 handled floating-point arithmetic naturally. With a price tag of $55,000, only 180 units were produced.
Olivetti Elea The Elea 9003 is one of a series of mainframe computers Olivetti developed starting in the late 1950s. The first prototype was created in 1957. The system, made entirely with transistors for high performance, was conceived, designed and developed by a small group of researchers led by Mario Tchou (1924–1961). It was the first solid-state computer to be designed and fully manufactured in Italy. The knowledge obtained was applied a few years later in the development of the successful Programma 101 electronic calculator. LINC Designed in 1962, the LINC was an early laboratory computer especially designed for interactive use with laboratory instruments. Some of the early LINC computers were assembled from kits of parts by the end users. Olivetti Programma 101 First produced in 1965, the Programma 101 was one of the first printing programmable calculators. It was designed and produced by the Italian company Olivetti, with Pier Giorgio Perotto being the lead developer. The Olivetti Programma 101 was presented at the 1965 New York World's Fair after two years of work (1962–1964). Over 44,000 units were sold worldwide; in the US its cost at launch was $3,200. It was targeted at offices and scientific entities for their daily work because of its high computing capabilities in a small space at a relatively low cost; NASA was amongst its first owners. Built without integrated circuits or microprocessors, using only transistors, resistors and capacitors for its processing, the Programma 101 had features found in modern personal computers, such as memory, keyboard, printing unit, magnetic card reader/recorder, and control and arithmetic unit. HP later copied the Programma 101 architecture for its HP 9100 series. Datapoint 2200 Released in June 1970, the programmable terminal called the Datapoint 2200 is among the earliest known devices that bear significant resemblance to the modern personal computer, with a CRT screen, keyboard, programmability, and program storage. It was made by CTC (now known as Datapoint) and was a complete system in a case with the approximate footprint of an IBM Selectric typewriter. The system's CPU was constructed from roughly a hundred (mostly TTL) logic components, which are groups of gates, latches, counters, etc. The company had commissioned Intel, and also Texas Instruments, to develop a single-chip CPU with that same functionality. TI designed a chip rather quickly, based on Intel's early drawings, but its attempt had several bugs and did not work very well. Intel's version was delayed, and both were a little too slow for CTC's needs. A deal was made that, in return for not charging CTC for the development work, Intel could instead sell the processor as its own product, along with the supporting ICs it had developed. The first customer was Seiko, which approached Intel early on with this idea, based on what it had seen Busicom do with the 4004. This became the Intel 8008. Although it required several additional ICs, it is generally known as the first 8-bit microprocessor. The requirements of the Datapoint 2200 determined the 8008 architecture, which was later expanded into the 8080 and the Z80, on which CP/M was designed. These CPUs in turn influenced the 8086, which defined the whole line of "x86" processors used in all IBM-compatible PCs to this day (2020). Although the design of the Datapoint 2200's TTL-based bit-serial CPU and the Intel 8008 were technically very different, they were largely software-compatible.
From a software perspective, the Datapoint 2200 therefore functioned as if it were using an 8008. Kenbak-1 The Kenbak-1, released in early 1971, is considered by the Computer History Museum to be the world's first personal computer. It was designed and invented by John Blankenbaker of Kenbak Corporation in 1970, and was first sold in early 1971. Unlike a modern personal computer, the Kenbak-1 was built of small-scale integrated circuits, and did not use a microprocessor. The system first sold for US$750. Only around 40 machines were ever built and sold. In 1973, production of the Kenbak-1 stopped as Kenbak Corporation folded. With only 256 bytes of memory, an 8-bit word size, input and output restricted to lights and switches, and no apparent way to extend its power, the Kenbak-1 was most useful for learning the principles of programming but not capable of running application programs. Notably, 256 bytes of memory, an 8-bit word size, and I/O limited to switches and lights on the front panel are also the characteristics of the 1975 Altair 8800, whose fate was diametrically opposed to that of the Kenbak. The differentiating factor might have been the extensibility of the Altair, without which it was practically useless. Micral N The French company R2E was formed by two former engineers of the Intertechnique company to sell their Intel 8008-based microcomputer design. The system was developed at the Institut national de la recherche agronomique to automate hygrometric measurements. The system ran at 500 kHz and included 16 kB of memory, and sold for 8500 francs, about US$1,300. A bus, called Pluribus, was introduced that allowed connection of up to 14 boards. Boards for digital I/O, analog I/O, memory, and floppy disks were available from R2E. The Micral operating system was initially called Sysmic, and was later renamed Prologue. R2E was absorbed by Groupe Bull in 1978. Although Groupe Bull continued the production of Micral computers, it was not interested in the personal computer market, and Micral computers were mostly confined to highway toll gates (where they remained in service until 1992) and similar niche markets. Xerox Alto and Star The Xerox Alto, developed at Xerox PARC in 1973, was the first computer to use a mouse, the desktop metaphor, and a graphical user interface (GUI), concepts first introduced by Douglas Engelbart while at SRI International. It was the first example of what would today be recognized as a complete personal computer. The first machines were introduced on 1 March 1973. In 1981, Xerox Corporation introduced the Xerox Star workstation, officially known as the "8010 Star Information System". Drawing upon its predecessor, the Xerox Alto, it was the first commercial system to incorporate various technologies that today have become commonplace in personal computers, including a bit-mapped display, a windows-based graphical user interface, icons, folders, mouse, Ethernet networking, file servers, print servers and e-mail. While its use was limited to the engineers at Xerox PARC, the Alto had features years ahead of its time. Both the Xerox Alto and the Xerox Star would inspire the Apple Lisa and the Apple Macintosh. IBM SCAMP In 1972–1973, a team led by Dr. Paul Friedl at the IBM Los Gatos Scientific Center developed a portable computer prototype called SCAMP (Special Computer APL Machine Portable) based on the IBM PALM processor, with a Philips compact cassette drive, small CRT and full-function keyboard.
SCAMP emulated an IBM 1130 minicomputer in order to run APL\1130. In 1973, APL was generally available only on mainframe computers, and most desktop-sized microcomputers such as the Wang 2200 or HP 9800 offered only BASIC. Because it was the first to emulate APL\1130 performance on a portable, single-user computer, PC Magazine in 1983 designated SCAMP a "revolutionary concept" and "the world's first personal computer". The prototype is in the Smithsonian Institution. IBM 5100 The IBM 5100 was a desktop computer introduced in September 1975, six years before the IBM PC. It was the evolution of SCAMP (Special Computer APL Machine Portable) that IBM demonstrated in 1973. In January 1978, IBM announced the IBM 5110, its larger cousin. The 5100 was withdrawn in March 1982. When the PC was introduced in 1981, it was originally designated as the IBM 5150, putting it in the "5100" series, though its architecture was not directly descended from the IBM 5100. Altair 8800 Development of the single-chip microprocessor was the gateway to the popularization of cheap, easy-to-use, and truly personal computers. It was only a matter of time before one such design was able to hit a sweet spot in terms of pricing and performance, and that machine is generally considered to be the Altair 8800, from MITS, a small company that produced electronics kits for hobbyists. The Altair was introduced in a Popular Electronics magazine article in the January 1975 issue. In keeping with MITS's earlier projects, the Altair was sold in kit form, although a relatively complex one consisting of four circuit boards and many parts. Priced at only $400, the Altair tapped into pent-up demand and surprised its creators when it generated thousands of orders in the first month. Unable to keep up with demand, MITS sold the design after about 10,000 kits had shipped. The introduction of the Altair spawned an entire industry based on its basic layout and internal design. New companies like Cromemco started up to supply add-on kits, while Microsoft was founded to supply a BASIC interpreter for the systems. Soon after, a number of complete "clone" designs, typified by the IMSAI 8080, appeared on the market. This led to a wide variety of systems based on the S-100 bus introduced with the Altair, machines of generally improved performance, quality and ease of use. The Altair and early clones were relatively difficult to use. The machines contained no operating system in ROM, so starting one up required a machine-language program to be entered by hand via front-panel switches, one location at a time (a toy model of this procedure appears below). The program was typically a small driver for an attached cassette tape reader, which would then be used to read in another "real" program. Later systems added bootstrapping code to improve this process, and the machines became almost universally associated with the CP/M operating system, loaded from floppy disk. The Altair created a new industry of microcomputers and computer kits, with many others following, such as a wave of small business computers in the late 1970s based on the Intel 8080, Zilog Z80 and Intel 8085 microprocessor chips. Most ran the CP/M-80 operating system developed by Gary Kildall at Digital Research. CP/M-80 was the first popular microcomputer operating system to be used by many different hardware vendors, and many software packages were written for it, such as WordStar and dBase II.
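Here is the toy model of the front-panel bootstrapping procedure mentioned above: the operator sets an address on the switches, deposits a data byte into memory, and repeats until a small loader program is in place. The "program" bytes below are arbitrary placeholders, not genuine Intel 8080 machine code.

```python
# Toy front-panel machine: memory is deposited one byte at a time via
# "switches", just as early Altair owners keyed in a bootstrap loader.
# The bytes below are placeholders, not real 8080 opcodes.

class FrontPanel:
    def __init__(self, size=256):
        self.memory = bytearray(size)
        self.address = 0

    def examine(self, address):
        """Set the address switches and latch the address."""
        self.address = address

    def deposit(self, byte):
        """Store a byte at the current address, then advance (DEPOSIT NEXT)."""
        self.memory[self.address] = byte & 0xFF
        self.address += 1

panel = FrontPanel()
bootstrap = [0x21, 0x00, 0x80, 0x0E, 0x10]  # placeholder byte values
panel.examine(0x00)
for byte in bootstrap:
    panel.deposit(byte)
print(panel.memory[:len(bootstrap)].hex())  # 2100800e10
```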
Homebrew Computer Club Although the Altair spawned an entire business, another side effect it had was to demonstrate that the microprocessor had so reduced the cost and complexity of building a microcomputer that anyone with an interest could build their own. Many such hobbyists met and traded notes at the meetings of the Homebrew Computer Club (HCC) in Silicon Valley. Although the HCC was relatively short-lived, its influence on the development of the modern PC was enormous. Members of the group complained that microcomputers would never become commonplace if they still had to be built up from parts, like the original Altair, or even assembled from the various add-ons that turned the machine into a useful system. What they felt was needed was an all-in-one system. Out of this desire came the Sol-20 computer, which placed an entire S-100 system – QWERTY keyboard, CPU, display card, memory and ports – into an attractive single box. The systems were packaged with a cassette tape interface for storage and a 12" monochrome monitor. Complete with a copy of BASIC, the system sold for US$2,100. About 10,000 Sol-20 systems were sold. Although the Sol-20 was the first all-in-one system that we would recognize today, the basic concept was already rippling through other members of the group, and interested external companies. Other machines of the era Other 1977 machines that were important within the hobbyist community included the Exidy Sorcerer, the NorthStar Horizon, the Cromemco Z-2, and the Heathkit H8. 1977 and the emergence of the "Trinity" By 1976, there were several firms racing to introduce the first truly successful commercial personal computers. Three machines, the Apple II, PET 2001 and TRS-80, were all released in 1977, becoming the most popular by late 1978. Byte magazine later referred to Commodore, Apple, and Tandy as the "1977 Trinity". Also in 1977, Sord Computer Corporation released the Sord M200 Smart Home Computer in Japan. Apple II Steve Wozniak (known as "Woz"), a regular visitor to Homebrew Computer Club meetings, designed the single-board Apple I computer and first demonstrated it there. With specifications in hand and an order for 100 machines at US$500 each from the Byte Shop, Woz and his friend Steve Jobs founded Apple Computer. About 200 of the machines sold before the company announced the Apple II as a complete computer. It had color graphics, a full QWERTY keyboard, and internal slots for expansion, all mounted in a high-quality, streamlined plastic case. The monitor and I/O devices were sold separately. The original Apple II operating system was only the built-in BASIC interpreter contained in ROM. Apple DOS was added to support the diskette drive; the last version was "Apple DOS 3.3". Its higher price and lack of floating-point BASIC, along with a lack of retail distribution sites, caused it to lag in sales behind the other Trinity machines until 1979, when it surpassed the PET. It was again pushed into fourth place when Atari introduced its popular Atari 8-bit systems. Despite slow initial sales, the Apple II's lifetime was about eight years longer than that of other machines, so it accumulated the highest total sales. By 1985, 2.1 million had been sold, and more than 4 million Apple IIs had shipped by the end of production in 1993. PET Chuck Peddle designed the Commodore PET (short for Personal Electronic Transactor) around his MOS 6502 processor.
It was essentially a single-board computer with a simple TTL-based CRT driver circuit driving a small built-in monochrome monitor with 40×25 character graphics. The processor card, keyboard, monitor and cassette drive were all mounted in a single metal case. In 1982, Byte referred to the PET design as "the world's first personal computer". The PET shipped in two models: the 2001-4 with 4 kB of RAM, and the 2001-8 with 8 kB. The machine also included a built-in Datassette for data storage located on the front of the case, which left little room for the keyboard. The 2001 was announced in June 1977 and the first 100 units were shipped in mid-October 1977. However, they remained back-ordered for months, and to ease deliveries Commodore eventually canceled the 4 kB version early the next year. Although the machine was fairly successful, there were frequent complaints about the tiny calculator-like keyboard, often referred to as a "Chiclet keyboard" due to the keys' resemblance to the popular candy-coated gum. This was addressed in the upgraded "dash N" and "dash B" versions of the 2001, which put the cassette outside the case and included a much larger keyboard with a full-stroke, non-click motion. Internally, a newer and simpler motherboard was used, along with an upgrade in memory to 8, 16, or 32 KB, known as the 2001-N-8, 2001-N-16 or 2001-N-32, respectively. The PET was the least successful of the 1977 Trinity machines, with under 1 million sales. TRS-80 Tandy Corporation (Radio Shack) introduced the TRS-80, retroactively known as the Model I as the company expanded the line with more powerful models. The Model I combined the motherboard and keyboard into one unit with a separate monitor and power supply. Although the Apple II offered color video at higher resolution, Tandy's 3,000+ Radio Shack storefronts ensured the computer would have widespread distribution and support (repair, upgrade, training services) that neither Apple nor Commodore could touch. The Model I used a Zilog Z80 processor clocked at 1.77 MHz (later specimens shipped with the Z80A). The basic model originally shipped with 4 kB of RAM and Level 1 BASIC produced in-house. RAM in the first 4 kB machines was upgradeable to 16 kB and Level 2 Microsoft BASIC, which became the standard configuration. An Expansion Interface provided sockets for further RAM expansion to 48K. Its other strong features were its full-stroke QWERTY keyboard with numeric keypad (lacking in the very first units but upgradeable), small size, well-written Microsoft floating-point BASIC, and inclusion of a 64-column monitor and tape deck—all for approximately half the cost of the Apple II. Eventually, 5.25-inch floppy drives and megabyte-capacity hard disks were made available by Tandy and third parties. The Expansion Interface provided for up to four floppy drives and hard drives to be daisy-chained, a slot for an RS-232 serial port, and a parallel port for printers. With the (later) LDOS operating system, double-sided 80-track floppy drives were supported, along with features such as Disk BASIC with support for overlays and suspended/background programs, device-independent data redirection, a Job Control Language (batch processing), flexible backup and file maintenance, type-ahead and keyboard macros. The Model I could not meet FCC regulations on radio interference due to its plastic case and exterior cables. Apple resolved the issue with an interior metallic foil, but the solution would not work for Tandy with the Model I.
The Model I also suffered from problems with its cabling between its CPU and Expansion Interface (spontaneous reboots) and keyboard bounce (keystrokes would randomly repeat), and the earliest versions of TRSDOS similarly had technical troubles. Though these issues were quickly or eventually resolved, the computer suffered in some quarters from a reputation for poor build quality. Nevertheless, all the early microcomputer manufacturers experienced similar difficulties. Since the Model II and Model III were already in production by 1981, Tandy decided to stop manufacturing the Model I. Radio Shack sold some 1.5 million Model I units. The line continued until late 1991, when the TRS-80 Model 4 was at last retired. The Japanese Trinity About a year after the American trinity, Japan had its own trinity of releases, referred to as the 8-bit Gosanke (8ビット御三家, "the 8-bit Big Three"), consisting of the Hitachi Basic Master (September 1978), the Sharp MZ-80K (December 1978) and the NEC PC-8001 (announced May 1979, shipped September 1979). Each of these was the first of a series of machines from its manufacturer; NEC and Sharp continued these 8-bit lines into the late 1980s, but Hitachi ended its series in 1984, as it was replaced in the Gosanke by Fujitsu (see below). Home computers Byte in January 1980 announced in an editorial that "the era of off-the-shelf personal computers has arrived". The magazine stated that "a desirable contemporary personal computer has 64 K of memory, about 500 K bytes of mass storage on line, any old competently designed computer architecture, upper and lowercase video terminal, printer, and high-level languages". The author reported that when he needed to purchase such a computer quickly, he did so at a local store for $6000 in cash, and cited it as an example of "what the state of the art is at present ... as a mass-produced product". By early that year, Radio Shack, Commodore, and Apple manufactured the vast majority of the half-million microcomputers that existed. As component prices continued to fall, many companies entered the computer business. This led to an explosion of low-cost machines known as home computers that sold millions of units before the market imploded in a price war in the early 1980s. Atari 400/800 Atari, Inc. was a well-known brand in the late 1970s, due both to its hit arcade games like Pong and to the hugely successful Atari VCS game console. Realizing that the VCS would have a limited lifetime in the market before a technically advanced competitor came along, Atari decided it would be that competitor, and started work on a new console design that was much more advanced. While these designs were being developed, the Trinity machines hit the market with considerable fanfare. Atari's management decided to redirect the work toward a home computer system instead. Their knowledge of the home market through the VCS resulted in machines that were almost indestructible and just as easy to use as a games machine—simply plug in a cartridge and go. The new machines were first introduced as the Atari 400 and 800 in 1978, but production problems prevented widespread sales until the next year. With a trio of custom graphics and sound co-processors and a 6502 CPU clocked ~80% faster than most competitors, the Atari machines had capabilities that no other microcomputer could match. In spite of a promising start with about 600,000 sold by 1981, they were unable to compete effectively with Commodore's introduction of the Commodore 64 in 1982, and only about 2 million machines were produced by the end of their production run.
The 400 and 800 were tweaked into superficially improved models—the 1200XL, 600XL, 800XL, 65XE—as well as the 130XE with 128K of bank-switched RAM. Sinclair Sinclair Research Ltd is a British consumer electronics company founded by Sir Clive Sinclair in Cambridge. It was incorporated in 1973 as Ablesdeal Ltd. and renamed "Westminster Mail Order Ltd" and then "Sinclair Instrument Ltd." in 1975. The company remained dormant until 1976, when it was activated with the intention of continuing Sinclair's commercial work from his earlier company Sinclair Radionics; it was renamed "Science of Cambridge Ltd." in 1977 and adopted the name Sinclair Research in 1981. In 1980, Clive Sinclair entered the home computer market with the ZX80 at £99.95, at the time the cheapest personal computer for sale in the UK. In 1982, the ZX Spectrum was released, later becoming Britain's best-selling computer, competing aggressively against Commodore and fellow British firm Amstrad. At the height of its success, and largely inspired by the Japanese Fifth Generation Computer programme, the company established the "MetaLab" research centre at Milton Hall (near Cambridge), in order to pursue artificial intelligence, wafer-scale integration, formal verification and other advanced projects. The combination of the failures of the Sinclair QL computer and the TV80 led to financial difficulties in 1985, and a year later Sinclair sold the rights to its computer products and brand name to Amstrad. Sinclair Research Ltd exists today as a one-man company, continuing to market Sir Clive Sinclair's newest inventions. ZX80 The ZX80 home computer was launched in February 1980 at £79.95 in kit form and £99.95 ready-built. In November of the same year, Science of Cambridge was renamed Sinclair Computers Ltd. ZX81 The ZX81 (known as the TS 1000 in the United States) was priced at £49.95 in kit form and £69.95 ready-built, by mail order. ZX Spectrum The ZX Spectrum was launched on 23 April 1982, priced at £125 for the 16 KB RAM version and £175 for the 48 KB version. Sinclair QL The Sinclair QL was announced in January 1984, priced at £399. Marketed as a more sophisticated 32-bit microcomputer for professional users, it used a Motorola 68008 processor. Production was delayed by several months, due to unfinished development of hardware and software at the time of the QL's launch. ZX Spectrum+ The ZX Spectrum+ was a repackaged ZX Spectrum 48K launched in October 1984. ZX Spectrum 128 The ZX Spectrum 128, with RAM expanded to 128 kB, a sound chip and other enhancements, was launched in Spain in September 1985 and in the UK in January 1986, priced at £179.95. TI-99 Texas Instruments (TI), at the time the world's largest chip manufacturer, decided to enter the home computer market with the Texas Instruments TI-99/4A. The machine was announced long before its arrival, and most industry observers expected it to wipe out all competition – on paper its performance was untouchable, and TI had enormous cash reserves and development capability. When it was released in late 1979, TI took a somewhat slow approach to introducing it, initially focusing on schools. Contrary to earlier predictions, the TI-99's limitations meant it was not the giant-killer everyone expected, and a number of its design features were highly controversial. A total of 2.8 million units were shipped before the TI-99/4A was discontinued in March 1984. VIC-20 and Commodore 64 Realizing that the PET could not easily compete with color machines like the Apple II and Atari, Commodore introduced the VIC-20 in 1980 to address the home market.
The tiny 5 kB memory and its relatively limited display in comparison to those machines were offset by a low and ever-falling price. Millions of VIC-20s were sold. The best-selling personal computer of all time was released by Commodore International in 1982: the Commodore 64 sold over 17 million units before its end. The C64 name derived from its 64 KB of RAM. It used the 6510 microprocessor, a variant of the 6502 made by MOS Technology, Inc., which was then owned by Commodore. BBC Micro The BBC became interested in running a computer literacy series, and sent out a tender for a standardized small computer to be used with the show. After examining several entrants, it selected what was then known as the Acorn Proton and made a number of minor changes to produce the BBC Micro. The Micro was relatively expensive, which limited its commercial appeal, but with widespread marketing, BBC support and a wide variety of programs, the system eventually sold as many as 1.5 million units. Acorn was rescued from obscurity, and went on to develop the ARM processor (Acorn RISC Machine) to power follow-on designs. The ARM is widely used to this day, powering a wide variety of products like the iPhone. ARM processors also power the Fugaku supercomputer, which set the record as the world's fastest supercomputer in June 2020. The Micro is not to be confused with the BBC Micro Bit, another BBC microcomputer, released in March 2016. Commodore/Atari price war and crash In 1982, the TI 99/4A and Atari 400 were both $349, Radio Shack's Color Computer sold at $379, and Commodore had reduced the price of the VIC-20 to $199 and the Commodore 64 to $499. TI had forced Commodore from the calculator market by dropping the price of its own-brand calculators to less than the cost of the chipsets it sold to third parties to make the same design. Commodore's CEO, Jack Tramiel, vowed that this would not happen again, and purchased MOS Technology to ensure a supply of chips. With his supply guaranteed, and good control over the component pricing, Tramiel launched a war against TI soon after the introduction of the Commodore 64. Now vertically integrated, Commodore lowered the retail price of the 64 to $300 at the June 1983 Consumer Electronics Show, and stores sold it for as little as $199. At one point the company was selling as many computers as the rest of the industry combined. Commodore—which even discontinued list prices—could make a profit when selling the 64 for a retail price of $200 because of vertical integration. Competitors also reduced prices; the Atari 800's price in July was $165, and by the time TI was ready in 1983 to introduce the 99/2 computer—designed to sell for $99—the TI-99/4A itself sold for $99 in June. The 99/4A had sold for $400 as recently as the fall of 1982, and the price war caused TI losses of hundreds of millions of dollars. A Service Merchandise executive stated, "I've been in retailing 30 years and I have never seen any category of goods get on a self-destruct pattern like this". Such low prices probably hurt home computers' reputation; one retail executive said of the 99/4A, "When they went to $99, people started asking 'What's wrong with it?'" The founder of Compute! stated in 1986 that "our market dropped from 300 percent growth per year to 20 percent". While Tramiel's target was TI, everyone in the home computer market was hurt by the process; many companies went bankrupt or exited the business.
In the end, even Commodore's own finances were crippled by the demands of financing the massive building expansion needed to deliver the machines, and Tramiel was forced from the company. Japanese computers From the late 1970s to the early 1990s, Japan's personal computer market was largely dominated by domestic computer products. NEC became the market leader following the release of the PC-8001 in 1979, continuing with the 8-bit PC-88 and 16-bit PC-98 series in the 1980s, but had early competition from the Sharp MZ and Hitachi Basic Master series, and later competition from the 8-bit Fujitsu FM-7, Sharp X1, MSX and MSX2 series and the 16-bit FM Towns and Sharp X68000 series. Several of these systems were also released in Europe, MSX in particular gaining some popularity there. A key difference between early Western and Japanese systems was the latter's higher display resolutions (640×200 from 1979, and 640×400 from 1985) in order to accommodate Japanese text. From the early 1980s, Japanese computers also employed Yamaha FM synthesis sound boards, which produced higher-quality sound. Japanese computers were widely used to produce video games, though only a small portion of Japanese PC games were released outside of the country. The most successful Japanese personal computer was NEC's PC-98, which sold more than 18 million units by 1999. The IBM PC IBM responded to the success of the Apple II with the IBM PC, released in August 1981. Like the Apple II and S-100 systems, it was based on an open, card-based architecture, which allowed third parties to develop for it. It used the Intel 8088 CPU running at 4.77 MHz, containing 29,000 transistors. The first model used an audio cassette for external storage, though there was an expensive floppy disk option. The cassette option was never popular and was removed in the PC XT of 1983. The XT added a 10 MB hard drive in place of one of the two floppy disks and increased the number of expansion slots from 5 to 8. While the original PC design could accommodate only up to 64 KB on the main board, the architecture was able to accommodate up to 640 KB of RAM, with the rest on cards. Later revisions of the design increased the limit to 256 KB on the main board. The IBM PC typically came with PC DOS, an operating system modeled on Gary Kildall's CP/M-80 operating system. In 1980, IBM approached Digital Research, Kildall's company, for a version of CP/M for its upcoming IBM PC. Kildall's wife and business partner, Dorothy McEwen, met with the IBM representatives, who were unable to negotiate a standard non-disclosure agreement with her. IBM turned to Bill Gates, who was already providing the ROM BASIC interpreter for the PC. Gates offered to provide 86-DOS, developed by Tim Paterson of Seattle Computer Products. IBM rebranded it as PC DOS, while Microsoft sold variations and upgrades as MS-DOS. The impact of the Apple II and the IBM PC was fully demonstrated when Time named the home computer the "Machine of the Year", in place of its Person of the Year, for 1982 (3 January 1983, "The Computer Moves In"). It was the first time in the history of the magazine that an inanimate object was given this award. IBM PC clones The original PC design was followed up in 1983 by the IBM PC XT, which was an incrementally improved design; it omitted support for the cassette, had more card slots, and was available with a 10 MB hard drive.
Although mandatory at first, the hard drive was later made an option and a two-floppy-disk XT was sold. While the architectural memory limit of 640 KB was the same, later versions were more readily expandable. Although the PC and XT included a version of the BASIC language in read-only memory, most were purchased with disk drives and run with an operating system; three operating systems were initially announced with the PC. One was CP/M-86 from Digital Research, the second was PC DOS from IBM, and the third was the UCSD p-System (from the University of California at San Diego). PC DOS was the IBM-branded version of an operating system from Microsoft, previously best known for supplying BASIC language systems to computer hardware companies. When sold by Microsoft, PC DOS was called MS-DOS. The UCSD p-System OS was built around the Pascal programming language and was not marketed to the same niche as IBM's customers. Neither the p-System nor CP/M-86 was a commercial success. Because MS-DOS was available as a separate product, some companies attempted to make computers available which could run MS-DOS and its programs. These early machines, including the ACT Apricot, the DEC Rainbow 100, the Hewlett-Packard HP-150, the Seequa Chameleon and many others, were not especially successful, as they required a customized version of MS-DOS and could not run programs designed specifically for IBM's hardware. (See List of early non-IBM-PC-compatible PCs.) The first truly IBM PC compatible machines came from Compaq, although others soon followed. Because the IBM PC was based on relatively standard integrated circuits, and the basic card-slot design was not patented, the key portion of that hardware was actually the BIOS software embedded in read-only memory. This critical element was soon reverse-engineered, which opened the floodgates to the market for IBM PC imitators, dubbed "PC clones". At the time IBM decided to enter the personal computer market in response to Apple's early success, it was the giant of the computer industry and was expected to crush Apple's market share. But because of the shortcuts IBM took to enter the market quickly, it ended up releasing a product that was easily copied by other manufacturers using off-the-shelf, non-proprietary parts. In the long run, IBM's biggest role in the evolution of the personal computer was to establish the de facto standard for hardware architecture amongst a wide range of manufacturers. IBM's pricing was undercut to the point where IBM was no longer the significant force in development, leaving only the PC standard it had established. Emerging as the dominant force from this battle among hardware manufacturers vying for market share was the software company Microsoft, which provided the operating system and utilities to all PCs across the board, whether authentic IBM machines or PC clones. In 1984, IBM introduced the IBM Personal Computer/AT (more often called the PC/AT or AT), built around the Intel 80286 microprocessor. This chip was much faster, and could address up to 16 MB of RAM, but only in a mode that largely broke compatibility with the earlier 8086 and 8088. In particular, the MS-DOS operating system was not able to take advantage of this capability. The bus in the PC/AT was given the name Industry Standard Architecture (ISA). Peripheral Component Interconnect (PCI) was released in 1992 and was intended to replace ISA.
VESA Local Bus (VLB) and Extended ISA were also displaced by PCI, but a majority of later (post-1992) 486-based systems featured a VESA Local Bus video card. VLB importantly offered a less costly high-speed interface for consumer systems, as only by 1994 was PCI commonly available outside of the server market. PCI was later replaced by PCI-E (see below). Apple Lisa and Macintosh In 1983, Apple Computer introduced the first mass-marketed microcomputer with a graphical user interface, the Lisa. The Lisa ran on a Motorola 68000 microprocessor and came equipped with 1 megabyte of RAM, a black-and-white monitor, dual 5¼-inch floppy disk drives and a 5 megabyte Profile hard drive. The Lisa's slow operating speed and high price (US$10,000), however, led to its commercial failure. Drawing upon its experience with the Lisa, Apple launched the Macintosh in 1984, with an advertisement during the Super Bowl. The Macintosh was the first successful mass-market mouse-driven computer with a graphical user interface or 'WIMP' (Windows, Icons, Menus, and Pointers). Based on the Motorola 68000 microprocessor, the Macintosh included many of the Lisa's features at a price of US$2,495. The Macintosh was introduced with 128 KB of RAM, and later that year a 512 KB RAM model became available. To reduce costs compared to the Lisa, the year-younger Macintosh had a simplified motherboard design, no internal hard drive, and a single 3.5" floppy drive. Applications that came with the Macintosh included MacPaint, a bit-mapped graphics program, and MacWrite, which demonstrated WYSIWYG word processing. While not a success upon its release, the Macintosh was a successful personal computer for years to come. This was particularly due to the introduction of desktop publishing in 1985 through Apple's partnership with Adobe. This partnership introduced the LaserWriter printer and Aldus PageMaker (later Adobe PageMaker) to users of the personal computer. During Steve Jobs' hiatus from Apple, a number of different models of Macintosh, including the Macintosh Plus and Macintosh II, were released to a great degree of success. The entire Macintosh line of computers was IBM's major competition up until the early 1990s. GUIs spread In the Commodore world, GEOS was available on the Commodore 64 and Commodore 128. Later, a version was available for PCs running DOS. It could be used with a mouse or a joystick as a pointing device, and came with a suite of GUI applications. Commodore's later product line, the Amiga platform, ran a GUI operating system by default. The Amiga laid the blueprint for future development of personal computers with its groundbreaking graphics and sound capabilities. Byte called it "the first multimedia computer... so far ahead of its time that almost nobody could fully articulate what it was all about." In 1985, the Atari ST, also based on the Motorola 68000 microprocessor, was introduced with the first color GUI: Digital Research's GEM. In 1987, Acorn launched the Archimedes range of high-performance home computers in Europe and Australasia. Based on its own 32-bit ARM RISC processor, the systems were shipped with a GUI OS called Arthur. In 1989, Arthur was superseded by a multi-tasking GUI-based operating system called RISC OS. By default, the mice used on these computers had three buttons.
PC clones dominate The transition from a PC-compatible market driven by IBM to one driven primarily by a broader market became clear in 1986 and 1987; in 1986, the 32-bit Intel 80386 microprocessor was released, and the first '386-based PC-compatible was the Compaq Deskpro 386. IBM's response came nearly a year later with the initial release of the IBM Personal System/2 series of computers, which had a closed architecture and were a significant departure from the emerging "standard PC". These models were largely unsuccessful, and PC clone-style machines outpaced sales of all other machines through the rest of this period. Toward the end of the 1980s, PC XT clones began to take over the home computer market segment from the specialty manufacturers, such as Commodore International and Atari, that had previously dominated. These systems typically sold for just under the "magic" $1,000 price point (typically $999) and were sold via mail order rather than through a traditional dealer network. This price was achieved by using older 8/16-bit technology, such as the 8088 CPU, instead of the latest 32-bit Intel designs. These CPUs were usually made by a third party such as Cyrix or AMD. Dell started out as one of these manufacturers, under its original name PC Limited. 1990s onward NeXT In 1990, the NeXTstation workstation computer went on sale, for "interpersonal" computing as Steve Jobs described it. The NeXTstation was meant to be a new computer for the 1990s, and was a cheaper version of the previous NeXT Computer. Despite its pioneering use of object-oriented programming concepts, the NeXTstation was somewhat of a commercial failure, and NeXT shut down hardware operations in 1993. CD-ROM In the early 1990s, the CD-ROM became an industry standard, and by the mid-1990s one was built into almost all desktop computers, and toward the end of the 1990s, into laptops as well. Although introduced in 1982, the CD-ROM was mostly used for audio during the 1980s, and then for computer data such as operating systems and applications into the 1990s. Another popular use of CD-ROMs in the 1990s was multimedia, as many desktop computers started to come with built-in stereo speakers capable of playing CD-quality music and sound, typically via a Sound Blaster sound card on PCs. ThinkPad IBM introduced its successful ThinkPad range at COMDEX 1992 using the series designators 300, 500 and 700 (allegedly analogous to the BMW car range and used to indicate market position), the 300 series being the "budget" line, the 500 series "midrange" and the 700 series "high end". This designation continued until the late 1990s, when IBM introduced the "T" series as the 600/700 series replacement, and the 3, 5 and 7 series model designations were phased out in favor of the A (3 and 7) and X (5) series. The A series was later partially replaced by the R series. Dell By the mid-1990s, Amiga, Commodore and Atari systems were no longer on the market, pushed out by strong IBM PC clone competition and low prices. Other previous competitors such as Sinclair and Amstrad were no longer in the computer market. With less competition than ever before, Dell rose to high profits and success, introducing low-cost systems targeted at consumer and business markets using a direct-sales model. Dell surpassed Compaq as the world's largest computer manufacturer, and held that position until October 2006. Power Macintosh, PowerPC In 1994, Apple introduced the Power Macintosh series of high-end professional desktop computers for desktop publishing and graphic designers.
These new computers made use of new Motorola PowerPC processors as part of the AIM alliance, to replace the previous Motorola 68k architecture used for the Macintosh line. During the 1990s, the Macintosh remained with a low market share, but as the primary choice for creative professionals, particularly those in the graphics and publishing industries. ARM In 1994, Acorn Computers launched its Risc PC series of high-end desktop computers. The Risc PC (codenamed Medusa) was Acorn's next generation ARM-based RISC OS computer, which superseded the Acorn Archimedes. In 2005, the ARM Cortex-A8 was released, the first Cortex design to be adopted on a large scale for use in consumer devices. An ARM-based processor is used in the Raspberry Pi, an inexpensive single-board computer. IBM clones, Apple back into profitability Due to the sales growth of IBM clones in the '90s, they became the industry standard for business and home use. This growth was augmented by the introduction of Microsoft's Windows 3.0 operating environment in 1990, and followed by Windows 3.1 in 1992 and the Windows 95 operating system in 1995. The Macintosh was sent into a period of decline by these developments coupled with Apple's own inability to come up with a successor to the Macintosh operating system, and by 1996 Apple was almost bankrupt. In December 1996 Apple bought NeXT and in what has been described as a "reverse takeover", Steve Jobs returned to Apple in 1997. The NeXT purchase and Jobs' return brought Apple back to profitability, first with the release of Mac OS 8, a major new version of the operating system for Macintosh computers, and then with the PowerMac G3 and iMac computers for the professional and home markets. The iMac was notable for its transparent bondi blue casing in an ergonomic shape, as well as its discarding of legacy devices such as a floppy drive and serial ports in favor of Ethernet and USB connectivity. The iMac sold several million units and a subsequent model using a different form factor remains in production as of August 2017. In 2001 Mac OS X, the long-awaited "next generation" Mac OS based on the NeXT technologies was finally introduced by Apple, cementing its comeback. Writable CDs, MP3, P2P file sharing The ROM in CD-ROM stands for Read Only Memory. In the late 1990s CD-R and later, rewritable CD-RW drives were included instead of standard CD ROM drives. This gave the personal computer user the capability to copy and "burn" standard Audio CDs which were playable in any CD player. As computer hardware grew more powerful and the MP3 format became pervasive, "ripping" CDs into small, compressed files on a computer's hard drive became popular. "Peer to peer" file sharing networks such as Napster, Kazaa and Gnutella arose to be used almost exclusively for sharing music files and became a primary computer activity for many individuals. USB, DVD player Since the late 1990s, many more personal computers started shipping that included USB (Universal Serial Bus) ports for easy plug and play connectivity to devices such as digital cameras, video cameras, personal digital assistants, printers, scanners, USB flash drives and other peripheral devices. By the early 21st century, all shipping computers for the consumer market included at least two USB ports. Also during the late 1990s DVD players started appearing on high-end, usually more expensive, desktop and laptop computers, and eventually on consumer computers into the first decade of the 21st century. 
Hewlett-Packard In 2002, Hewlett-Packard (HP) purchased Compaq. Compaq itself had bought Tandem Computers in 1997 (which had been started by ex-HP employees) and Digital Equipment Corporation in 1998. Following this strategy, HP became a major player in desktops, laptops, and servers for many different markets. The buyout made HP the world's largest manufacturer of personal computers, until Dell later surpassed it. 64 bits In 2003, AMD shipped its 64-bit microprocessor line for desktop computers, the Opteron and Athlon 64. Also in 2003, IBM released the 64-bit PowerPC 970 for Apple's high-end Power Mac G5 systems. Intel, in 2004, reacted to AMD's success with 64-bit processors, releasing updated versions of its Xeon and Pentium 4 lines. 64-bit processors were first common in high-end systems, servers and workstations, and then gradually replaced 32-bit processors in consumer desktop and laptop systems from about 2005. Lenovo In 2004, IBM announced the proposed sale of its PC business to Chinese computer maker Lenovo Group, which is partially owned by the Chinese government, for US$650 million in cash and US$600 million in Lenovo stock. The deal was approved by the Committee on Foreign Investment in the United States in March 2005, and completed in May 2005. IBM received a 19% stake in Lenovo, which moved its headquarters to New York State and appointed an IBM executive as its chief executive officer. The company retained the right to use certain IBM brand names for an initial period of five years. As a result of the purchase, Lenovo inherited a product line that featured the ThinkPad, a line of laptops that had been one of IBM's most successful products. Wi-Fi, LCD monitor, flash memory In the early 21st century, Wi-Fi began to become increasingly popular as many consumers started installing their own wireless home networks. Many of today's laptops and desktop computers are sold pre-installed with wireless cards and antennas. Also in the early 21st century, LCD monitors became the most popular technology for computer monitors, and CRT production slowed down. LCD monitors are typically sharper, brighter, and more economical than CRT monitors. The first decade of the 21st century also saw the rise of multi-core processors (see the following section) and flash memory. Once limited to high-end industrial use due to expense, these technologies are now mainstream and available to consumers. In 2008, the MacBook Air and Asus Eee PC were released, laptops that dispensed with an optical drive and hard drive entirely, relying instead on flash memory for storage. Local area networks The invention in the late 1970s of local area networks (LANs), notably Ethernet, allowed PCs to communicate with each other (peer-to-peer) and with shared printers. As the microcomputer revolution continued, more robust versions of the same technology were used to produce microprocessor-based servers that could also be linked to the LAN. This was facilitated by the development of server operating systems to run on the Intel architecture, including several versions of both Unix and Microsoft Windows. Multiprocessing In May 2005, Intel and AMD released their first dual-core 64-bit processors, the Pentium D and the Athlon 64 X2 respectively. Multi-core processors can be programmed and reasoned about using symmetric multiprocessing (SMP) techniques known since the 1960s (see the SMP article for details). Apple switched to Intel processors in 2006, thereby also gaining multiprocessing.
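As a minimal illustration of the SMP programming model mentioned above (a sketch, not code for any particular processor), the following C fragment splits a summation across two POSIX threads, which an SMP operating system can schedule onto separate cores; the thread count and the workload are arbitrary choices for this example.

```c
#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 2   /* e.g., one thread per core on a dual-core CPU */
#define N 1000000L      /* arbitrary workload size for the example */

static double partial[NUM_THREADS];

/* Each thread sums its own interleaved slice of the range [0, N). */
static void *worker(void *arg) {
    long id = (long)arg;
    double sum = 0.0;
    for (long i = id; i < N; i += NUM_THREADS) {
        sum += (double)i;
    }
    partial[id] = sum;  /* each thread writes a distinct slot: no lock needed */
    return NULL;
}

int main(void) {
    pthread_t threads[NUM_THREADS];
    for (long id = 0; id < NUM_THREADS; ++id) {
        pthread_create(&threads[id], NULL, worker, (void *)id);
    }
    double total = 0.0;
    for (long id = 0; id < NUM_THREADS; ++id) {
        pthread_join(threads[id], NULL);  /* wait, then combine partial sums */
        total += partial[id];
    }
    printf("sum = %.0f\n", total);  /* expect N*(N-1)/2 = 499999500000 */
    return 0;
}
```

Compiled with a POSIX thread library (e.g., cc -pthread), the same source runs unchanged on one core or many; the operating system's scheduler decides the placement, which is the essence of the SMP model.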
In 2013, a Xeon Phi expansion card was released with 57 x86 cores, at a price of $1,695, equating to roughly $30 per core. PCI-E PCI Express was released in 2003 and became the most commonly used expansion bus in PC-compatible desktop computers. Cheap 3D graphics The rise of cheap 3D accelerators displaced the low-end products of Silicon Graphics (SGI), which went bankrupt in 2009. Silicon Graphics was a major 3D business that had grown annual revenues from $5.4 million to $3.7 billion between 1984 and 1997. The addition of 3D graphics capabilities to PCs, and the ability of clusters of Linux- and BSD-based PCs to take on many of the tasks of larger SGI servers, ate into SGI's core markets. Three former SGI employees had founded 3dfx in 1994. Their Voodoo Graphics expansion card relied on PCI to provide cheap 3D graphics for PCs. Towards the end of 1996, the cost of EDO DRAM dropped significantly. A card consisted of a DAC, a frame-buffer processor and a texture-mapping unit, along with 4 MB of EDO DRAM. The RAM and graphics processors operated at 50 MHz. The card provided only 3D acceleration, so the computer also needed a traditional video controller for conventional 2D software. NVIDIA bought 3dfx in 2000; in that year, NVIDIA grew revenues 96%. SGI had created OpenGL; control of the specification passed to the Khronos Group in 2006. SDRAM In 1993, Samsung introduced its KM48SL2000 synchronous DRAM, and by 2000, SDRAM had replaced virtually all other types of DRAM in modern computers because of its greater performance. For more information see Synchronous dynamic random-access memory#SDRAM history. Double data rate synchronous dynamic random-access memory (DDR SDRAM) was introduced in 2000. Compared to its predecessor in PC clones, single data rate (SDR) SDRAM, the DDR SDRAM interface makes higher transfer rates possible through stricter control of the timing of the electrical data and clock signals, transferring data on both the rising and falling edges of the clock. ACPI Released in December 1996, ACPI replaced Advanced Power Management (APM), the MultiProcessor Specification, and the Plug and Play BIOS (PnP) Specification. Internally, ACPI advertises the available components and their functions to the operating system kernel using instruction lists ("methods") provided through the system firmware (Unified Extensible Firmware Interface (UEFI) or BIOS), which the kernel parses. ACPI then executes the desired operations (such as the initialization of hardware components) using an embedded minimal virtual machine. First-generation ACPI hardware had issues; Windows 98 First Edition disabled ACPI by default except on a whitelist of systems. 2010s Semiconductor fabrication In 2011, Intel announced the commercialisation of its tri-gate transistor. The tri-gate design is a variant of the FinFET 3D structure. FinFET was developed in the 1990s by Chenming Hu and his colleagues at UC Berkeley. Through-silicon vias are used in High Bandwidth Memory (HBM), a successor of DDR SDRAM. HBM was released in 2013. In 2016 and 2017, Intel, TSMC and Samsung began releasing 10-nanometer chips. At the ≈10 nm scale, quantum tunneling (especially through gaps) becomes a significant phenomenon. Market size In 2001, 125 million personal computers were shipped, in comparison to 48,000 in 1977. More than 500 million PCs were in use in 2002, and one billion personal computers had been sold worldwide from the mid-1970s up to that time. Of the latter figure, 75 percent were professional or work-related, while the rest were sold for personal or home use.
About 81.5 percent of PCs shipped had been desktop computers, 16.4 percent laptops and 2.1 percent servers. The United States had received 38.8 percent (394 million) of the computers shipped, Europe 25 percent, and 11.7 percent had gone to the Asia-Pacific region, the fastest-growing market as of 2002. Almost half of all households in Western Europe had a personal computer, and a computer could be found in 40 percent of homes in the United Kingdom, compared with only 13 percent in 1985. The third quarter of 2008 marked the first time laptops outsold desktop PCs in the United States. As of June 2008, the number of personal computers in use worldwide hit one billion. Mature markets like the United States, Western Europe and Japan accounted for 58 percent of the worldwide installed base. About 180 million PCs (16 percent of the existing installed base) were expected to be replaced and 35 million to be dumped into landfills in 2008. The whole installed base grew 12 percent annually. See also History of laptops History of mobile phones History of software Timeline of electrical and electronic engineering Computer museum and Personal Computer Museum Expensive Desk Calculator MIT Computer Science and Artificial Intelligence Laboratory Educ-8, a 1974 pre-microprocessor "micro-computer" Mark-8, a 1974 microprocessor-based microcomputer Programma 101, a 1965 programmable calculator with some attributes of a personal computer SCELBI, another 1974 microcomputer Simon (computer), a 1949 demonstration of computing principles List of pioneers in computer science References Further reading External links A history of the personal computer: the people and the technology (PDF) BlinkenLights Archaeological Institute – Personal Computer Milestones Personal Computer Museum – A publicly viewable museum in Brantford, Ontario, Canada Old Computers Museum – Displaying over 100 historic machines. Chronology of Personal Computers – a chronology of computers from 1947 on "Total share: 30 years of personal computer market share figures" Obsolete Technology – Old Computers History of computing hardware Personal computers History of computing History of Silicon Valley
4883003
https://en.wikipedia.org/wiki/Systems%20Modeling%20Language
Systems Modeling Language
The Systems Modeling Language (SysML) is a general-purpose modeling language for systems engineering applications. It supports the specification, analysis, design, verification and validation of a broad range of systems and systems-of-systems. SysML was originally developed by an open source specification project, and includes an open source license for distribution and use. SysML is defined as an extension of a subset of the Unified Modeling Language (UML) using UML's profile mechanism. The language's extensions were designed to support systems engineering activities. Contrast with UML SysML offers systems engineers several noteworthy improvements over UML, which tends to be software-centric. These improvements include the following: SysML's diagrams are more flexible and expressive. SysML reduces UML's software-centric restrictions and adds two new diagram types, requirement and parametric diagrams. The former can be used for requirements engineering; the latter can be used for performance analysis and quantitative analysis. Consequent to these enhancements, SysML is able to model a wide range of systems, which may include hardware, software, information, processes, personnel, and facilities. SysML is a comparatively small language that is easier to learn and apply. Since SysML removes many of UML's software-centric constructs, the overall language is smaller both in diagram types and total constructs. SysML allocation tables support common kinds of allocations. Whereas UML provides only limited support for tabular notations, SysML furnishes flexible allocation tables that support requirements allocation, functional allocation, and structural allocation. This capability facilitates automated verification and validation (V&V) and gap analysis. SysML model management constructs support models, views, and viewpoints. These constructs extend UML's capabilities and are architecturally aligned with IEEE-Std-1471-2000 (IEEE Recommended Practice for Architectural Description of Software Intensive Systems). SysML reuses seven of UML 2's fourteen diagrams, and adds two diagrams (requirement and parametric diagrams) for a total of nine diagram types. SysML also supports allocation tables, a tabular format that can be dynamically derived from SysML allocation relationships. A table which compares SysML and UML 2 diagrams is available in the SysML FAQ. Consider modeling an automotive system: with SysML one can use Requirement diagrams to efficiently capture functional, performance, and interface requirements, whereas with UML one is subject to the limitations of use case diagrams to define high-level functional requirements. Likewise, with SysML one can use Parametric diagrams to precisely define performance and quantitative constraints like maximum acceleration, minimum curb weight, and total air conditioning capacity. UML provides no straightforward mechanism to capture this sort of essential performance and quantitative information. Concerning the rest of the automotive system, enhanced activity diagrams and state machine diagrams can be used to specify the embedded software control logic and information flows for the on-board automotive computers. Other SysML structural and behavioral diagrams can be used to model factories that build the automobiles, as well as the interfaces between the organizations that work in the factories. 
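SysML's parametric diagrams are graphical, but the kind of quantitative constraint they capture can be illustrated in ordinary code. The sketch below is a hypothetical, hand-written analogue of a constraint block {F = m * a} bound to the value properties of the automotive example above; it is not the output of any SysML tool, and all figures are invented.

```c
#include <stdio.h>

/* Hypothetical stand-in for a SysML constraint block {F = m * a},
 * bound to value properties of a vehicle block. */
typedef struct {
    double mass_kg;   /* value property: curb weight           */
    double force_n;   /* value property: available drive force */
} Vehicle;

static double acceleration(const Vehicle *v) {
    return v->force_n / v->mass_kg;   /* the constraint solved for a = F / m */
}

int main(void) {
    const Vehicle car = { 1500.0, 6000.0 };  /* invented figures */
    const double required_accel = 3.5;       /* invented performance requirement, m/s^2 */

    double a = acceleration(&car);
    printf("a = %.2f m/s^2 -> requirement %s\n",
           a, (a >= required_accel) ? "met" : "not met");
    return 0;
}
```

In an actual SysML model, the constraint, the value properties, and the requirement would each be first-class model elements that a tool could evaluate and trace automatically; the point here is only the kind of relationship a parametric diagram expresses.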
History The SysML initiative originated in a January 2001 decision by the International Council on Systems Engineering (INCOSE) Model Driven Systems Design workgroup to customize the UML for systems engineering applications. Following this decision, INCOSE and the Object Management Group (OMG), which maintains the UML specification, jointly chartered the OMG Systems Engineering Domain Special Interest Group (SE DSIG) in July 2001. The SE DSIG, with support from INCOSE and the ISO AP 233 workgroup, developed the requirements for the modeling language, which the OMG subsequently issued as part of the UML for Systems Engineering Request for Proposal (UML for SE RFP; OMG document ad/03-03-41) in March 2003. In 2003, Cris Kobryn and Sanford Friedenthal organized and co-chaired the SysML Partners, an informal association of industry leaders and tool vendors, which initiated an open source specification project to develop SysML in response to the UML for Systems Engineering RFP. The original technical contributors and co-authors of the SysML 1.0a specification were Laurent Balmelli, Conrad Bock, Rick Steiner, Alan Moore and Roger Burkhart. The SysML Partners distributed their first open source SysML specification drafts in 2004, and submitted SysML 1.0a to the OMG for technology adoption in November 2005. OMG SysML After a series of competing SysML specification proposals, a SysML Merge Team was proposed to the OMG in April 2006. This proposal was voted upon and adopted by the OMG in July 2006 as OMG SysML, to differentiate it from the original open source specification from which it was derived. Because OMG SysML is derived from open source SysML, it also includes an open source license for distribution and use. The OMG SysML v1.0 specification was issued by the OMG as an Available Specification in September 2007. The current version of OMG SysML is v1.6, which was issued by the OMG in December 2019. In addition, SysML was published by the International Organization for Standardization (ISO) in 2017 as a full International Standard (IS), ISO/IEC 19514:2017 (Information technology -- Object management group systems modeling language). The OMG has been working on the next generation of SysML and issued a Request for Proposals (RFP) for version 2 on December 8, 2017, following its open standardization process. The resulting specification, which will incorporate language enhancements from experience applying the language, will include a UML profile, a metamodel, and a mapping between the profile and metamodel. A second RFP, for a SysML v2 Application Programming Interface (API) and Services, was issued in June 2018. Its aim is to enhance the interoperability of model-based systems engineering tools. Diagrams SysML includes nine types of diagrams, some of which are taken from UML: Activity diagram Block definition diagram Internal block diagram Package diagram Parametric diagram Requirement diagram Sequence diagram State machine diagram Use case diagram Tools Several modeling tool vendors already offer SysML support, or are in the process of updating their tools to comply with the OMG SysML specification. Lists of tool vendors who support, or have announced support of, SysML or OMG SysML can be found on the SysML Forum or SysML websites, respectively. Model exchange As an OMG UML 2.0 profile, SysML models are designed to be exchanged using the XML Metadata Interchange (XMI) standard.
In addition, architectural alignment work is underway to support the ISO 10303 (also known as STEP, the Standard for the Exchange of Product model data) AP-233 standard for exchanging and sharing information between systems engineering software applications and tools. See also SoaML Energy Systems Language Object Process Methodology Universal Systems Language List of SysML tools References Further reading External links Introduction to Systems Modeling Language (SysML), Part 1 and Part 2. YouTube. SysML Open Source Specification Project Provides information related to SysML open source specifications, FAQ, mailing lists, and open source licenses. OMG SysML Website Furnishes information related to the OMG SysML specification, SysML tutorial, papers, and tool vendor information. Article "EE Times article on SysML (May 8, 2006)" SE^2 MBSE Challenge team: "Telescope Modeling" Paper "System Modelling Language explained" (PDF format) Bruce Douglass: Real-Time Agile Systems and Software Development List of Popular SysML Modeling Tools Unified Modeling Language Systems engineering Modeling languages
43681454
https://en.wikipedia.org/wiki/Macosquin%20Abbey
Macosquin Abbey
Macosquin Abbey, formally known as Clarus Fons, was a Cistercian monastery in County Londonderry, Northern Ireland, in the United Kingdom. The monastery was located on Abbey Lane, Macosquin, Northern Ireland. The abbey may have owned the churches at Burt and Agivey. History There may have been a monastic establishment in Macosquin as early as the 6th century; however, the Cistercian Abbey of Our Lady of the Clear Springs was founded in 1217 by monks from the monastery of Morimond, a daughter house of Citeaux in France. Earlier spellings of the village's name are Moycosquin and Moycoscain. Dissolution By 1539, the abbey had fallen into a state of disrepair. The abbey site was occupied at the beginning of the 17th century by a plantation of the London Guild of Merchant Taylors, while Agivey was granted to the Ironmongers' Company of London. The last remains of the abbey buildings were removed in the 18th century; the Protestant Church of Saint Mary probably occupies the abbey's site and may have reused its foundations. A 13th-century lancet window is also installed in Saint Mary's. List of known Abbots References Monasteries in Northern Ireland
29545609
https://en.wikipedia.org/wiki/QP%20%28framework%29
QP (framework)
QP ("Quantum Platform") is a family of lightweight, open source software frameworks for building responsive and modular real-time embedded applications as systems of cooperating, event-driven active objects (actors). Overview The QP family consists of QP/C, QP/C++, and QP-nano frameworks, which are all quality controlled, documented, and commercially licensable. All QP frameworks can run on "bare-metal" single-chip microcontrollers, completely replacing a traditional Real-Time Operating System (RTOS). Ports and ready-to-use examples are provided for all major CPU families. QP/C and QP/C++ can also work with a traditional OS/RTOS, such as: POSIX (Linux, QNX), Windows, VxWorks, ThreadX, MicroC/OS, FreeRTOS, etc. The behavior of active objects (actors) is specified in QP by means of hierarchical state machines (UML statecharts). The frameworks support manual coding of UML state machines in C or C++ as well as fully automatic code generation by means of the free graphical QM modeling tool. The QP frameworks and the QM modeling tool are used in medical devices, defense & aerospace, robotics, consumer electronics, wired and wireless telecommunication, industrial automation, transportation, and many more. Background Active objects inherently support and automatically enforce the following best practices of concurrent programming: Keep all of the task's data local, bound to the task itself and hidden from the rest of the system. Communicate among tasks asynchronously via intermediary event objects. Using asynchronous event posting keeps the tasks running truly independently without blocking on each other. Tasks should spend their lifetime responding to incoming events, so their mainline should consist of an event loop. Tasks should process events one at a time (to completion), thus avoiding any concurrency hazards within a task itself. Active objects dramatically improve your ability to reason about the concurrent software. In contrast, using raw RTOS tasks directly is trouble for a number of reasons, particularly because raw tasks let you do anything and offer you no help or automation for the best practices. As with all good patterns, active objects raise the level of abstraction above the naked threads and let you express your intent more directly thus improving your productivity. Active objects cannot operate in a vacuum and require a software infrastructure (framework) that provides, at a minimum: an execution thread for each active object, queuing of events, and event-based timing services. In the resource-constrained embedded systems, the biggest concern has always been about scalability and efficiency of such frameworks, especially that the frameworks accompanying various modeling tools have traditionally been built on top of a conventional RTOS, which adds memory footprint and CPU overhead to the final solution. The QP frameworks have been designed for efficiency and minimal footprint from the ground up and do not need an RTOS in the stand-alone configuration. In fact, when compared to conventional RTOSes, QP frameworks provide smaller footprint especially in RAM (data space), but also in ROM (code space). This is possible, because active objects don't need to block, so most blocking mechanisms (e.g., semaphores) of a conventional RTOS are not needed. All these characteristics make event-driven active objects a perfect fit for single-chip microcontrollers (MCUs). 
Not only do you get a productivity boost by working at a higher level of abstraction than raw RTOS tasks, but you get it at lower resource utilization and better power efficiency, because event-driven systems use the CPU only when processing events and can otherwise put the chip into a low-power sleep mode. QP architecture and components QP consists of a universal UML-compliant event processor (QEP), a portable, event-driven, real-time framework (QF), a tiny run-to-completion kernel (QK), and a software tracing system (QS). QEP (Quantum Event Processor) is a universal UML-compliant event processor that enables direct coding of UML state machines (UML statecharts) in highly maintainable C or C++, in which every state machine element is mapped to code precisely, unambiguously, and exactly once (traceability). QEP fully supports hierarchical state nesting, which enables reusing behavior across many states instead of repeating the same actions and transitions over and over again. QF (Quantum Framework) is a highly portable, event-driven, real-time application framework for concurrent execution of state machines, specifically designed for real-time embedded systems. QK (Quantum Kernel) is a tiny preemptive non-blocking kernel designed specifically for executing state machines in a run-to-completion (RTC) fashion. QS (Quantum Spy) is a software tracing system that enables live monitoring of event-driven QP applications with minimal target system resources and without stopping or significantly slowing down the code. Supported processors All types of QP frameworks (QP/C, QP/C++, and QP-nano) can be easily adapted to various microprocessor architectures and compilers. Adapting the QP software is called porting, and all QP frameworks have been designed from the ground up to make porting easy. Currently, bare-metal QP ports exist for the following processor architectures: ARM Cortex-M4F (TI Stellaris) ARM Cortex-M3 (TI Stellaris, ST STM32, NXP LPC) ARM Cortex-M0 (NXP LPC1114) ARM7/9 (Atmel AT91R4x, AT91SAM7, NXP LPC, ST STR912) Atmel AVR Mega Atmel AVR32 UC3-A3 TI MSP430 TI TMS320C28x TI TMS320C55x Renesas Rx600 Renesas R8C Renesas H8 Freescale Coldfire Freescale 68HC08 Altera Nios II 8051 (Silicon Labs) 80251 (Atmel) Microchip PIC24/dsPIC Cypress PSoC1 80x86 real mode Supported operating systems The QP/C and QP/C++ frameworks can also work with traditional operating systems and RTOSes. Currently, QP ports exist for the following OSes/RTOSes: Linux (POSIX) Win32 (all desktop Windows and WindowsCE) VxWorks ThreadX FreeRTOS MicroC/OS-II QNX (POSIX) Integrity (POSIX) Licensing All QP framework types are dual-licensed under the open source GPLv2 and a traditional, closed-source license. Users who want to distribute QP (e.g., embedded inside user-upgradable devices) can retain the proprietary status of their code for a fee. Several types of commercial, royalty-free, closed-source licenses are available. See also Actor model UML state machine Embedded operating system Real-time operating system References External links state-machine.com QP project on SourceForge.net qf4net: Quantum Framework for .Net qfj: Quantum Framework for Java on SourceForge.net Miros: a hierarchical state machine module in Python Miros: a hierarchical state machine module in Lua State-Oriented Programming (Groovy) ACCU Overload Journal #64 "Yet Another Hierarchical State Machine" C/C++ Users Journal "Who Moved My State?"
Supported processors
All types of QP frameworks (QP/C, QP/C++, and QP-nano) can be adapted to various microprocessor architectures and compilers. Adapting the QP software is called porting, and all QP frameworks have been designed from the ground up to make porting easy. Currently, bare-metal QP ports exist for the following processor architectures:
ARM Cortex-M4F (TI Stellaris)
ARM Cortex-M3 (TI Stellaris, ST STM32, NXP LPC)
ARM Cortex-M0 (NXP LPC1114)
ARM7/9 (Atmel AT91R4x, AT91SAM7, NXP LPC, ST STR912)
Atmel AVR Mega
Atmel AVR32 UC3-A3
TI MSP430
TI TMS320C28x
TI TMS320C55x
Renesas Rx600
Renesas R8C
Renesas H8
Freescale Coldfire
Freescale 68HC08
Altera Nios II
8051 (Silicon Labs)
80251 (Atmel)
Microchip PIC24/dsPIC
Cypress PSoC1
80x86 real mode

Supported operating systems
The QP/C and QP/C++ frameworks can also work with traditional operating systems and RTOSes. Currently, QP ports exist for the following OSes/RTOSes:
Linux (POSIX)
Win32 (all desktop Windows and Windows CE)
VxWorks
ThreadX
FreeRTOS
MicroC/OS-II
QNX (POSIX)
Integrity (POSIX)

Licensing
All QP framework types are dual-licensed under the open source GPLv2 and a traditional, closed-source license. Users who want to distribute QP (e.g., embedded inside user-upgradable devices) can retain the proprietary status of their code for a fee. Several types of commercial, royalty-free, closed-source licenses are available.

See also
Actor model
UML state machine
Embedded operating system
Real-time operating system

References

External links
state-machine.com
QP project on SourceForge.net
qf4net: Quantum Framework for .Net
qfj: Quantum Framework for Java on SourceForge.net
Miros: a hierarchical state machine module in Python
Miros: a hierarchical state machine module in Lua
State-Oriented Programming (Groovy)
ACCU Overload Journal #64 "Yet Another Hierarchical State Machine"
C/C++ Users Journal "Who Moved My State?"
C/C++ Users Journal "Deja Vu"
Research on Open CNC System Based on Quantum Framework
Active Objects by Schmidt

Real-time operating systems
Embedded operating systems
Free software operating systems
Embedded Linux
Programming tools for Windows
Cross-platform software
Free software programmed in C++
Software architecture
Object-oriented programming
25223405
https://en.wikipedia.org/wiki/InfoZoom
InfoZoom
InfoZoom is a data analysis, business intelligence and data visualization software product built on in-memory analytics. The software is created and supported by humanIT and the Fraunhofer Institute FIT, part of the same research organization that contributed to MP3 compression technology. The software has over 100,000 licensed users and over 1,000 customers worldwide.

History
InfoZoom is developed by humanIT GmbH near Bonn, Germany. It was created in 1997 as a spin-off from the Fraunhofer Society, the scientific organization that contributed to MP3 compression technology. Since 2003, humanIT has been a wholly owned subsidiary of proALPHA, an ERP (enterprise resource planning) vendor based in Weilerbach, Germany. Today, InfoZoom is co-developed and supported by the Fraunhofer-Institut für Angewandte Informationstechnik (FIT), also known as the Fraunhofer Institute for Applied Information Technology, in St. Augustin, Germany.

InfoZoom v6.0 was released in March 2009. The 64-bit version of InfoZoom was released as v6.4 in November 2009. InfoZoom v7.0 was released in November 2010, followed by v8.0 in November 2011. By the end of 2009, InfoZoom had grown significantly, reaching 36,000 licensed users and 800 client organizations worldwide; by 2011 it had reached 50,000 users and 1,000 clients.

Technology
InfoZoom uses in-memory technology and is written in C++ and C#/.NET.

Products
InfoZoom allows users to extract large amounts of information from multiple data sources, including any ODBC-compliant database, Microsoft Excel, text files and other sources. The software allows entire datasets to be visualized through an easy-to-use graphical interface. The tool has applications in ad hoc data analysis and in the identification of data quality issues, and has been used by major government organizations and universities as well as by corporations. InfoZoom is also integrated into other software products as their reporting and analytics module, and is also deployed for analysis on the internet or an intranet.

InfoZoom is available in four main product lines:
InfoZoom Professional
InfoZoom Business
InfoZoom Explorer
InfoZoom Viewer (available for free)

Data Sources
The following data sources can be loaded:
relational database systems (RDBMS) such as Progress, Oracle, Microsoft SQL Server, Microsoft Access or MySQL
Excel files (*.xls or *.xlsx)
text files or CSV files
XML and JSON files
all data sources that can be addressed via ODBC or OLE DB
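As a rough illustration of the last point, any source reachable through the standard ODBC C API can be read with a handful of calls. The sketch below is generic, not InfoZoom-specific: the data source name SalesDB and the orders table are hypothetical, error checking is omitted for brevity, and a tool like InfoZoom performs the equivalent steps internally when loading data.

#include <stdio.h>
#include <sql.h>
#include <sqlext.h>

int main(void) {
    SQLHENV env;
    SQLHDBC dbc;
    SQLHSTMT stmt;
    SQLCHAR name[128];
    SQLLEN len;

    SQLAllocHandle(SQL_HANDLE_ENV, SQL_NULL_HANDLE, &env);
    SQLSetEnvAttr(env, SQL_ATTR_ODBC_VERSION, (SQLPOINTER)SQL_OV_ODBC3, 0);
    SQLAllocHandle(SQL_HANDLE_DBC, env, &dbc);

    /* "SalesDB" is a hypothetical data source name (DSN). */
    SQLDriverConnect(dbc, NULL, (SQLCHAR *)"DSN=SalesDB;", SQL_NTS,
                     NULL, 0, NULL, SQL_DRIVER_NOPROMPT);

    SQLAllocHandle(SQL_HANDLE_STMT, dbc, &stmt);
    /* "orders" is a hypothetical table. */
    SQLExecDirect(stmt, (SQLCHAR *)"SELECT customer FROM orders", SQL_NTS);

    /* Fetch rows one by one; column 1 is converted to a C string. */
    while (SQLFetch(stmt) == SQL_SUCCESS) {
        SQLGetData(stmt, 1, SQL_C_CHAR, name, sizeof name, &len);
        printf("%s\n", (char *)name);
    }

    SQLFreeHandle(SQL_HANDLE_STMT, stmt);
    SQLDisconnect(dbc);
    SQLFreeHandle(SQL_HANDLE_DBC, dbc);
    SQLFreeHandle(SQL_HANDLE_ENV, env);
    return 0;
}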
Output Options
The results are visualized using an integrated component for displaying tables and graphics. Optionally, the results can be output directly to Microsoft Office, and results can be actively linked so that the displayed data can be updated later. A free add-in for Microsoft Excel allows the analyzed data to be inserted directly into Excel. In addition to its own proprietary format, InfoZoom can export data to HTML, Excel, text files, OpenDocument Text, Word, and CSV files. Other applications can use InfoZoom as a dynamic data source via its .NET data provider, e.g. Crystal Reports 2008.

Areas of application
Above all, companies that process large amounts of granular data under high compliance requirements benefit from the analysis software; the decisive factor is the free combination of data attributes and characteristics. Owing to its novel visualization of data, the software is used in particular in the area of data quality management. InfoZoom is used by police and criminal investigation authorities, in the energy sector, and by financial service providers to fight crime by electronically searching mobile phone data for clues.

Notes

References
Fraunhofer Institute for Applied Information Technology: Visualization of Trees as Highly Compressed Tables with InfoZoom
Artificial Intelligence in Medicine: Visualization and interactive analysis of blood parameters with InfoZoom
Second International Conference on Data Mining: InfoZoom

External links
Fraunhofer Institute
Fraunhofer Institute for Applied Information Technology
humanIT Home Page

Data visualization software
Business intelligence
Software companies of Germany
22444906
https://en.wikipedia.org/wiki/AntiX
AntiX
antiX is a Linux distribution based on Debian Stable. It is comparatively lightweight and suitable for older computers, while also providing a cutting-edge kernel and applications, as well as updates and additions via the apt-get package system and Debian-compatible repositories. Since version 19 it comes in two init-system flavours: SysVinit and runit. antiX ships with a default desktop environment (DE) built around SpaceFM on top of the GTK library, with IceWM as the window manager.

Versions
antiX is available for IA-32 and x86-64 architectures and comes in four versions:
Full, which installs a full range of applications (1.4 GB)
Base, which allows the user to choose their own application suite (800 MB)
Core, with no X and a CLI installer without encryption, which gives the user total control over the install (440 MB)
net, with no X and a CLI installer without encryption, which gives the user total control, with no desktop environment by default (180 MB)

These four antiX versions were joined by MEPIS in 2014, developed in cooperation with the MEPIS community, to form MX Linux. MX Linux uses Xfce as the default desktop environment, is based directly on Debian Stable, is highly stable, and gives solid performance from a medium-sized footprint. As of November 2016, MX Linux is listed as a separate distribution on DistroWatch.

Releases
antiX was originally based on MEPIS, which itself is based on the Debian stable distribution. It initially replaced the MEPIS KDE desktop environment with the Fluxbox and IceWM window managers, making it suitable for older, less powerful x86-based systems. Unlike Debian, antiX does not use systemd. The releases of antiX are named after prominent left-wing figures, groups and revolutionaries.

See also
MX Linux
List of Linux distributions that run from RAM
Lightweight Linux distribution

References

External links
antiX on SourceForge
antiX support group on Facebook

Debian-based distributions
Linux distributions
Linux distributions without systemd
Operating system distributions bootable from read-only media
322801
https://en.wikipedia.org/wiki/List%20of%20colleges%20and%20universities%20in%20the%20Philippines
List of colleges and universities in the Philippines
This is a partial list of universities and colleges in the Philippines. A Abada Colleges - Pinamalayan, Oriental Mindoro ABEC Institute of Business and Technology - Legazpi City (more on TESDA vocational programs) ABE International College of Business and Accountancy Abra Valley Colleges - Bangued, Abra Abuyog Community College (ACC) - Abuyog, Leyte Academia de Davao College (ADDC) - Tagum City Access Computer College Aces Tagum College (ATC) - Tagum City ACQ College of Ministries - Davao City ACMCL College - Victoria, Oriental Mindoro ACLC Colleges - multiple campuses ACSI College - Iloilo City ACTS Computer College - Laguna, Quezon Adamson University - Manila Adventist International Institute of Advanced Studies - Silang, Cavite Adventist University of the Philippines - Silang, Cavite Aemilianum College - Sorsogon City Aeronautical Academy of the Philippines - Canaman, Camarines Sur Ago Medical and Educational Center - Bicol Christian College of Medicine - Legazpi City Agro-Industrial Foundation College of the Philippines - Davao City Agusan del Sur College - Bayugan City AIE College Airlink International Aviation School Aklan Catholic College - Kalibo Aklan Polytechnic College - Kalibo Aklan State University Aldersgate College - Nueva Vizcaya Alfelor Memorial College - Del Gallego, Camarines Sur Alfonso D. Tan College - Tangub City Alpha Centauri Educational System - Lucena City AMA Computer University Amando Cope College - Tabaco City Andres Bonifacio College - Dipolog City Angeles University Foundation Apostle Business College - San Francisco, Agusan del Sur Annunciation College of Bacon Sorsogon Unit, Inc. Araullo University - Cabanatuan City Arellano University – multiple campuses Arreisgado College Foundation Inc. - Tagum City Asia Pacific College Asian Pacific College of Advanced Studies - Balanga City, Bataan Asia School of Arts and Sciences Asian College, Dumaguete Asian College of Technology - Cebu Asian College of Technology Main Campus - P. 
del Rosario St., Cebu City Asian College of Technology Talamban Campus - Talamban, Cebu City Asian College of Technology Bulacao Campus - Bulacao, Talisay City Asian College - Quezon City Asian College Foundation - Butuan City Asian Computer College - Calamba Campuses Asian Development Foundation College - Tacloban City Asian International School of Aeronautics and Technology - Davao City Asian Institute for Distance Education - Makati Asian Institute of Computer Studies – multiple campuses Asian Institute of Journalism and Communication Asian Institute of Management Asian Institute of Maritime Studies - Pasay Asian School of Hospitality Arts Asian Social Institute - Manila Asian Summit College - Pasig Assumption College Assumption Antipolo - Antipolo, Rizal Assumption Iloilo - Iloilo, Iloilo City Assumption College of Davao - Davao City Assumption College of Nabunturan - Nabunturan, Davao de Oro Assumption College San Lorenzo - Makati Ateneo de Cagayan - Xavier University - Cagayan de Oro Ateneo de Davao University Ateneo de Manila University Ateneo de Naga University Ateneo de Zamboanga University Aurora Pioneers Memorial College - Aurora, Zamboanga del Sur Aurora Polytechnic College - Baler, Aurora Aurora State College of Technology - Baler, Aurora B Baao Community College - Baao, Camarines Sur Bacacay Community College - Bacacay, Albay Bacolod City College Bago City College Baguio Central University Baguio College of Technology Balete Community College - Balete, Aklan Baliuag Polytechnic College Baliuag University Baptist Voice Bible College - Lucena, Quezon Basilan State College - Isabela, Basilan Bataan Peninsula State University – multiple campuses Bataan Heroes Memorial College - Balanga, Bataan Bataan State College - Dinalupihan, Bataan Batan Community College - Batan, Aklan Batanes State College - Basco, Batanes Batangas Eastern Colleges - San Juan, Batangas Batangas State University – multiple campuses The Bearer of Light and Wisdom Colleges - Molino, Bacoor City, Cavite Belen B. Francisco Foundation Belen B. Francisco Foundation - Daraga – main campus Belen B. Francisco Foundation - Sorsogon – satellite campus Benguet State University Bestlink College of the Philippines The Bethany Christian Academy - Cabanatuan City, Inc. Bicol Christian College of Medicine (Ago Medical and Educational Center) - Legazpi City Bicol College - Daraga, Albay Bicol Merchant Marine College - Sorsogon City Bicol State College of Applied Sciences and Technology Bicol University – multiple campuses Biliran National Agricultural College - Biliran, Biliran Province Binalbagan Catholic College - Binalbagan, Negros Occidental Binangonan Catholic College - Binangonan, Rizal Bohol Institute of Technology – multiple campuses Bohol Island State University (formerly Central Visayas State College of Agriculture, Forestry and Technology) – multiple campuses Bohol Northeastern Education Foundation Bool City Central University - Bool City, Biliran Brent Hospital and Colleges Inc. - Zamboanga City Brentwood International School - Naga City Brokenshire College – multiple campuses Brookfield College - Dasmariñas City, Cavite Campus BST Grace College - San Fernando, Camarines Sur Bukidnon State University - Malaybalay City, Bukidnon Bulacan Agricultural State College - San Idelfonso, Bulacan Bulacan Polytechnic College Bulacan State University – multiple campuses C Cabucgayan National School of Arts and Trades - Bgy. 
Bunga, Cabucgayan, Biliran Province Cagayan de Oro College - Carmen, Cagayan de Oro City Cagayan State University – multiple campuses Cagayan Technical Institute School of Automotive - Tuguegarao City, Cagayan Cagayan Valley Computer and Information Technology College, Inc. Cainta Catholic College - Cainta, Rizal Calabanga Community College - Calabanga, Camarines Sur Calamba Doctors' College - Calamba City Calauag Central College - Calauag, Quezon Province Calayan Educational Foundation Inc. - Lucena City Camarines Norte State College Camarines Norte State College - Abaño campus Camarines Norte State College - Entienza campus Camarines Norte State College - Daet (main campus) Camarines Norte State College - Labo campus Camarines Norte State College - Mercedes campus Camarines Norte State College - Panganiban campus Camiling Colleges - Camiling, Tarlac Camo College Incorporated - Pili, Camarines Sur Canossa Colleges - San Pablo City CAP College Foundation Capalonga College - Capalonga, Camarines Norte Capitol University - Corrales Ext., Cagayan de Oro City Capiz State University – multiple campuses Caraga State University Carlos Hilado Memorial State College - Talisay City, Cebu Cataingan Polytechnic Institute - Cataingan, Masbate Catanduanes Colleges - Virac Catanduanes Institute of Technology Foundation Inc. - Virac Catanduanes State University Catanduanes State University - Panganiban Catanduanes State University - Virac Cavite State University – multiple campuses Cebu Aeronautical Technical School - Salinas Drive, Lahug, Cebu City Cebu Doctors' University - Mandaue City Cebu Eastern College - Cebu City Cebu Institute of Medicine - Cebu City Cebu Institute of Technology – University - Cebu City Cebu International Distance Education College - Cebu City Cebu Normal University Cebu Normal University - Balamban Cebu Roosevelt Memorial Colleges - Bogo City, Cebu Cebu Technological University (formerly Cebu State College of Science and Technology) Ceguera Technological Colleges (formerly Ceguera Institute of Science and Technology) Center for Industrial Technology and Enterprise (CITE) Technical Institute, Inc. 
Central Bicol State University of Agriculture - multiple campuses Central Colleges of the North - Tuguegarao City, Cagayan Central Colleges of the Philippines Central Luzon College of Science and Technology Central Luzon Doctors' Hospital Educational Institution Central Luzon State University - Munoz City Central Mindanao University Central Panay College of Science and Technology - Kalibo, Aklan Central Philippine Adventist College - Negros Occidental Central Philippine University - Jaro, Iloilo City Central Philippines State University - Kabankalan City, Negros Occidental Centre for International Education (CIE) Centro Escolar University Centro Escolar University Manila Centro Escolar University Makati Centro Escolar University Malolos Centro Escolar Las Piñas Chiang Kai Shek College Chinese General Hospital Colleges Christ the King College - Calbayog City Christ the King College - Gingoog City Christ the King College de Maranding - Maranding Lala, Lanao del Norte Christ the King Mission Seminary Christian Polytechnic Institute of Catanduanes - Virac Chronicles Institute of Isabela - Ilagan City, Isabela CIIT (Cosmopolitan International Institute of Technology) CIIT College of Arts and Technology - Tomas Morato, Quezon City Citi Global College of Cabuyao (Main Campus) Citi Global College of Calamba (Annex Campus) City College of Calamba City College of Lucena City College of Naga - Peñafrancia Avenue, Naga City City of Malabon University City Technological Institute - Tuguegarao City, Cagayan COMTECH International Institute of Technologies Inc.(CIITI Colleges)-Isabela Claret College of Isabela - Isabela City Claret Formation Center - Quezon City Colegio de Capitolio - Tagum City Colegio de Dagupan Colegio de Ilagan - Ilagan City, Isabela Colegio de la Purisima Concepcion Colegio de Medaillè Miraculous, Inc. - Subic, Zambales Colegio de Muntinlupa - Muntinlupa City Colegio de San Clemente - Angono, Rizal Colegio de San Francisco Javier - Rizal, Zamboanga del Norte Colegio de San Gabriel - Arcangel - San Jose del Monte, Bulacan Colegio de San Jose - Jaro, Iloilo City Colegio de San Juan de Letran Colegio de San Juan de Letran - Bataan Colegio de San Juan de Letran - Calamba Colegio de San Juan de Letran - Manaoag Colegio de San Lorenzo Colegio de San Lorenzo Ruiz de Manila - Tacloban City Colegio de San Lorenzo Ruiz de Manila of Northern Samar Inc - Catarman, Northern Samar Colegio de San Pascual Baylon - Obando, Bulacan Colegio de Santa Catalina de Alejandria - Dumaguete City Colegio de Santo Cristo de Burgos - Sariaya, Quezon Colegio de Santo Tomas - Recoletos Colegio de Sta. Teresa de Avila - Novaliches, Quezon City Colegio del Sagrado Corazon de Jesus - Iloilo City Colegio ng Lungsod ng Batangas Colegio San Agustin Colegio San Agustin-Bacolod Colegio San Agustin-Biñan Colegio San Agustin-Makati College of Arts & Sciences of Asia & the Pacific College of Business Education Science and Technology - Cauayan City, Isabela College of Divine Wisdom - Paranaque City College of the Holy Spirit College of the Immaculate Conception, Sumacab Este, Cabanatuan City College of Mary Immaculate - Poblacion, Pandi, Bulacan College of Our Lady of Mercy of Pulilan Foundation, Inc.- Longos, Pulilan, Bulacan College of Mt. Carmel - Lolomboy, Bulacan College of Saint Lawrence - Balagtas, Bulacan College of San Benildo - Rizal College of St. 
John-Roxas, De La Salle Supervised College of Technological Sciences - Cebu City Columban College - Olongapo City Columbus College - Lucena City, Quezon Community Colleges of the Philippines Foundation Manila Computer Arts & Technological (CAT) College - Legazpi, Albay Computer Communication Development Institute (CCDI) College - Naga City COMTECH International Institute of Technologies Inc. (CIITI Colleges) - Zamboanga City Comteq Computer and Business College - Olongapo City Concordia College - Paco Consolatrix College of Toledo City Cor Jesu College - Digos City Core Gateway College - San Jose City, Nueva Ecija Cotabato Foundation College of Science and Technology Cotabato Medical Foundation College Inc. - Midsayap, Cotabato City Cotabato State University Credo Domine College - Tuguegarao City, Cagayan D Pamantasan ng Lungsod ng San Pablo - San Pablo City, Laguna Daniel B. Peña Memorial College Foundation - Tabaco City, Albay Daraga Community College - Daraga, Albay Data Center College of the Philippines - Baguio City Campus Data Center College of the Philippines - Bangued Branch Data Center College of the Philippines - Vigan City Branch Datamex Institute of Computer Technology Davao Central College Inc. - Davao City Davao del Norte State College - Panabo City Davao Doctors' College - Davao City Davao Medical School Foundation - Davao City Davao Oriental State University - Mati City De La Salle Andres Soriano Memorial College - Toledo City, Cebu De La Salle Araneta University - Malabon De La Salle-College of Saint Benilde - Manila De La Salle John Bosco College - Bislig City De La Salle Lipa - Lipa City De La Salle Medical and Health Sciences Institute - Dasmarinas, Cavite De La Salle University-Dasmariñas De La Salle University-Manila (5 campuses and the oldest constituent of De La Salle Philippines) De Ocampo Memorial College - Nagtahan, Sta. Mesa, Manila Dee Hwa Liong College Foundation De Los Santos College De Paul College - Iloilo City Diaz College - Tanjay City, Negros Oriental Dipolog Medical Center College Foundation - Dipolog City Divine Word College of Bangued Divine Word College of Calapan Divine Word College of Laoag Divine Word College of Legazpi Divine Word College of San Jose Divine Word College of Urdaneta Divine Word College of Vigan Divine Word College Seminary - Tagaytay City Divine Word Mission Seminary - Quezon City Divine Word University of Tacloban Dominican College of Iloilo Dominican College of Sta. Rosa, Laguna Dominican College of Tarlac Don Bosco College, Canlubang Don Bosco Technical College Don Bosco Technical Institute Don Bosco Technical Institute, Tarlac Don Bosco Technical Institute, Victorias Don Honorio Ventura Technological State University Don Mariano Marcos Memorial State University Don Mariano Marcos University Dr. Carlos S. Lanting College - Quezon City Dr. Emilio B. Espinosa Sr. Memorial State College of Agriculture and Technology - Mandaon, Masbate Dr. Filemon C. Aguilar Memorial College of Las Piñas Dr. Yanga's Colleges Inc. - Bocaue, Bulacan E East Asia International System College - Cauayan City, Isabela Easter College, Inc. 
- Baguio City Eastern Luzon Colleges - Bambang, Nueva Vizcaya Eastern Mindanao College of Technology (EMCOTECH) - Pagadian City Eastern Mindoro Institute of Technology and Sciences (EMA Emits College Philippines) Eastern Samar State University - Borongan City Main Campus Eastern Samar State University - Can-Avid Eastern Samar State University - Guiuan Eastern Samar State University - Maydolong Eastern Samar State University - Salcedo Eastern Tayabas College - Lopez, Quezon Eastern Visayas State University - Tacloban City Main Campus Eastern Visayas State University - Burauen External Campus Eastern Visayas State University - Carigara External Campus Eastern Visayas State University - Dulag External Campus Eastern Visayas State University - Ormoc City Satellite Campus Eastern Visayas State University - Pinabacdao (External Campus in Samar Island) Eastern Visayas State University - Tanauan External Campus Eastwoods Professional College - Balanga, Bataan Eduardo L. Joson Memorial College - Palayan City Educational Systems Technological Institute (ESTI) - Boac, Marinduque Emilio Aguinaldo College - Manila Emmanuel College of Plaridel - Plaridel, Bulacan Enderun Colleges Entrepreneurs School of Asia - Quezon City Escuela de Nuestra Señora de La Salette - Dagupan City Eulogio "Amang" Rodriguez Institute of Science and Technology (EARIST) Eulogio Amang Rodrigues Institute of Science and Technology - Cavite Campus Eveland Christian College - San Mateo, Isabela F Far East Asia Pacific Institute of Tourism Science and Technology, Inc. (Feapitsat Colleges) - Tanza, Cavite FEAPITSAT College of Dasmarinas Inc. Far Eastern University Far Eastern University – Institute of Technology (FEU Tech) Far Eastern University – FERN College (FEU Diliman) Far Eastern University – Nicanor Reyes Medical Foundation (FEU NRMF) Far Eastern University – Roosevelt College (FEU Roosevelt) Far Eastern University Alabang (FEU Alabang) Far Eastern College – Silang (FEU Cavite) Far Eastern University – Makati (FEU Makati) FEATI University Felix O. Alfelor Sr. Foundation College - Sipocot, Camarines Sur Fellowship Baptist College -Kabankalan City, Negros Occidental Fernandez College of Arts and Technology - Baliuag, Bulacan Filamer Christian University - Roxas Avenue, Roxas City, Capiz First Asia Institute of Technology and Humanities - Tanauan, Batangas First City Providential College - City of San Jose Del Monte, Bulacan Five Star Technical Institute - Tuguegarao City, Cagayan Florencio L. Vargas College (Main Campus, Bagay Road Campus, and Pengue Campus) - Tuguegarao City, Cagayan Forbes College - Legazpi City Foundation University - Dumaguete City Fr. Saturnino Urios University Franciscan College of the Immaculate Conception - Baybay City, Leyte G Gallego Foundation Colleges - Cabanatuan City Garcia College of Technology - Kalibo, Aklan Gateways Institute of Science and Technology - Cogeo Campus Gateways Institute of Science and Technology - Fairview Campus Gateways Institute of Science and Technology - Mandaluyong Campus Gateways Institute of Science and Technology - Pasig Campus Gensantos Foundation College - General Santos City Gen. Santos Doctors' Medical School Foundation Inc. - General Santos City General de Jesus College - San Isidro, Nueva Ecija Global City Innovative College - Bonifacio Global City, Taguig Global Computer INFOTEQ School, Inc. 
- Cainta, Rizal Global IT Colleges Global IT - Goa, Camarines Sur Global IT - Milaor, Camarines Sur Global IT - Naga City (Main Campus) - Abella St., Naga City Global Reciprocal Colleges - Caloocan Gordon College - Olongapo City Grace Christian College - Quezon City Greatways Technical Institute - Makati Green Valley College Foundation, Inc., City of Koronadal Greenville College - Pasig Golden Gate College - Batangas City Golden Link College Foundation, Inc. - Caloocan Goldenstate College of General Santos City Governor Andres Pascual College - Navotas Governor Mariano E. Villafuerte Community Colleges - Camarines Sur Governor Mariano E. Villafuerte Community College - Siruma Governor Mariano E. Villafuerte Community College - Tinambac Guagua National Colleges - Guagua, Pampanga Guimaras State College Guzman College of Science and Technology - Quiapo, Manila H Holy Angel University - Angeles City Holy Name University - (formerly Divine Word College of Tagbilaran) Holy Trinity College - Puerto Princesa City Hua Siong College of Iloilo Holy Cross of Davao College Holy Cross College of Calinan Holy Cross College - Pampanga - Santa Ana, Pampanga I iACADEMY - Makati ICCT Colleges (formerly Institute of Creative Computer Technology) ICCT Colleges - Angono ICCT Colleges - Antipolo ICCT Colleges - Binangonan ICCT Colleges - Cainta ICCT Colleges - Cogeo ICCT Colleges - San Mateo ICCT Colleges - Sumulong ICCT Colleges - Taytay Ifugao State University - Lamut, Ifugao IIH College - Novaliches, Quezon City Iligan Capitol College - Iligan City Iligan Medical Center College - Iligan City Ilocos Polytechnic State College Ilocos Sur Polytechnic State College Candon Campus (College of Business Administration and Tourism) Cervantes Campus Narvacan Campus (College of Fisheries) Santiago Campus (College of Industrial Sciences) Sta. Maria Campus (College of Agriculture) Tagudin Campus (College of Education) Iloilo Doctors' College - Molo, Iloilo City Iloilo Science and Technology University - formerly Western Visayas College of Science and Technology - Iloilo City Iloilo State College of Fisheries Main Campus - Tiwi, Barotac Nuevo, Iloilo Iloilo State College of Fisheries - Barotac Nuevo Campus Iloilo State College of Fisheries - Dingle Campus Iloilo State College of Fisheries - Dumangas Campus Iloilo State College of Fisheries - San Enrique Campus Immaculate Concepcion College - Balayan, Batangas Immaculate Conception International Cabagan Campus San Mariano Campus Immaculate Heart of Mary College Imus Computer College (ICC) - Alabang Muntinlupa Imus Computer College (ICC) - Bacoor Cavite Imus Computer College (ICC) - Carmona Cavite Imus Computer College (ICC) - Dasmariñas Cavite Imus Computer College (ICC) - Gen. 
Mariano Alvarez (GMA) Cavite Imus Computer College (ICC) - Governor's Drive-FCIE Dasmariñas Cavite Imus Computer College (ICC) - Imus Cavite Imus Computer College (ICC) - Las Piñas Imus Computer College (ICC) - Rosario Cavite Imus Computer College (ICC) - Salawag, Dasmarinas Imus Computer College (ICC) - Silang Cavite Imus Computer College (ICC) - Trece Martires Cavite Imus Institute College Infant Jesus Montessori School College Department - Santiago City Informatics International College (multiple campuses) Information and Communications Technology Academy (iACADEMY) Infotech Development Systems Colleges (IDS Colleges) - Ligao City Infotech Institute of Arts and Sciences Infotech Main - Marcos Highway Infotech - Crossing Infotech - Lagro Infotech - Makati Infotech - Pasig Infotech - Sucat Innovative College of Science in Information Technology (ICST) - Bongabong, Oriental Mindoro Institute of Enterprise Solutions (IES) - Phils, San Jose City, Nueva Ecija Interface Computer College - Manila Interface Computer College- Cabanatuan Interface Computer College - Caloocan Interface Computer College - Cebu Interface Computer College - Davao Interface Computer College - Iloilo International Academy of Film and Television (IAFT) - Lapu-Lapu City International Academy of Management and Economics - Makati International Baptist College (IBC) - Mandaluyong International Electronics and Technical Institute Inc. (IETI) (multiple campuses) International School of Asia and the Pacific - Tuguegarao City, Cagayan International Technological Institute of Arts and Tourism - Ilagan City, Isabela Isabela College of Arts and Technology - Cauayan City, Isabela Isabela Colleges - Cauayan City, Isabela Isabela Colleges of Science & Technology - Roxas, Isabela Isabela State University Isabela State University - Echague Main Campus Isabela State University - Cauayan City Campus Isabela State University - Ilagan City Campus Isabela State University - Cabagan Campus Isabela State University - Roxas Campus Isabela State University - San Mateo Campus Isabela State University - San Mariano Campus Isabela State University - Angadanan Campus Isabela State University - Jones Campus Isabela State University - Palanan Campus Isabela State University - Santiago City Extension Campus J J.H. Cerilles State College of Zamboanga del Sur Jake Battaring University Tumauini Campus Jamiatu Muslim Mindanao - Marawi City JE Mondejar Computer College - Tacloban City Jesus Reigns Christian College John B. Lacson Colleges Foundation - Alijis, Bacolod City John B. Lacson Foundation Maritime University - Arevalo, Iloilo City John B. Lacson Foundation Maritime University - Molo, Iloilo City John Paul College - Odiong, Roxas, Oriental Mindoro John Paul College - Davao City John Wesley College - Tuguegarao City, Cagayan Joji Ilagan Career Center Foundation, Inc. - Davao City and General Santos City Jose Abad Santos Memorial School Quezon City Jose C. 
Feliciano College - Dau, Mabalacat, Pampanga Jose Rizal Memorial State University Jose Rizal Memorial State University - Dapitan City Campus Jose Rizal Memorial State University - Dipolog City Campus Jose Rizal Memorial State University - Katipunan Campus Jose Rizal Memorial State University - Siocon Campus Jose Rizal Memorial State University - Tampilisan Campus Jose Rizal University - Mandaluyong Jose Maria College - Davao City JP Sioson General Hospital and Colleges - Quezon City K Kabankalan Catholic College (KCC) - Kabankalan, Negros Occidental Kalayaan College - Quezon City KCI Colleges - Isulan, Sultan Kudarat Kolehiyo ng Subic - Subic, Zambales Kolehiyo Ng Pantukan (KNP) -Pantukan, Davao de Oro L La Carlota City College - La Carlota, Negros Occidental La Concordia College - Paco, Manila La Consolacion College - 10th Avenue, Caloocan La Consolacion College - Bais City, Negros Oriental La Consolacion College - Bacolod La Consolacion College - Biñan La Consolacion College - Daet, Camarines Norte La Consolacion College - Iriga City La Consolacion College - Liloan, Cebu La Consolacion College - Mendiola, Manila La Consolacion College - Novaliches, Caloocan La Consolacion College - Pasig La Consolacion College - Tanauan City, Batangas La Consolacion College - Trento, Agusan del Sur La Consolacion University Philippines (formerly University of Regina Carmeli) La Fortuna College - Cabanatuan City La Union Colleges of Nursing, Arts and Sciences La Verdad Christian College - Apalit, Pampanga La Verdad Christian College - Caloocan Lacson College - Pasay Laguna Business College - Sta. Rosa, Laguna Laguna College - San Pablo City Laguna College of Business and Arts - Calamba City Laguna Northwestern College - San Pedro, Laguna Laguna State Polytechnic University - Santa Cruz (Main Campus) Laguna State Polytechnic University - Lopez, Quezon Laguna State Polytechnic University - Los Baños Laguna State Polytechnic University - San Pablo City Laguna State Polytechnic University - Siniloan (Host) Laguna University - Sta. Cruz, Laguna Lapu-Lapu City College - Lapu-Lapu City La Salle College Antipolo -Antipolo City La Salle University-Ozamiz - Ozamiz City Las Piñas College - Pilar Village, Almanza, Las Piñas Lheny Ganda Santos College - Zaragoza, Nueva Ecija Lemery Colleges - Lemery, Batangas Leyte Institute of Technology - Tacloban City Leyte Normal University - Tacloban City Liceo de Cagayan University - Carmen, Cagayan de Oro City Liceo de Davao - Tagum City Liceo del Verbo Divino - Tacloban City (formerly Divine Word University) LIEMG Language Center - Valenzuela City Ligao Community College - Ligao City Lipa City Colleges - Lipa City Lipa City Public College Loreto Academy Lorma College - San Fernando, La Union Lourdes College - Capistrano St. 
- Cagayan de Oro City Loyola College of Culion - Culion, Palawan Luna Goco Colleges Luna Goco College - Calapan City Luna Goco College - Pinamalayan, Oriental Mindoro Lyceum-Northwestern University Lyceum of Alabang (not affiliated with Lyceum of the Philippines University) Lyceum of Aparri Lyceum of Cebu - Kalunasan, Cebu City Lyceum of the Philippines University System (multiple campuses) Lyceum of the Philippines University - Batangas City Lyceum of the Philippines University - Calamba City, Laguna Lyceum of the Philippines University - General Trias, Cavite Lyceum of the Philippines University - Intramuros, Manila Lyceum of the Philippines University - Makati - Makati Lyceum of Subic Bay - Subic Bay, Zambales Lyceum of Tuao - Cagayan M Maasin City College - Maasin, Southern Leyte Mabalacat College - Rizal St., Dolores, Mabalacat, Pampanga Mach Aviatrix Airhostess Institute - Dumaguete City Mach Aviatrix Flight Institute - Dumaguete City Magsaysay Memorial College - San Narciso, Zambales Ma'had Kutawato College - Campo Muslim, Cotabato City Maila Rosario College - Tuguegarao City, Cagayan Malayan Colleges Laguna - Cabuyao City Mallig Plains Colleges - Mallig, Isabela Mambog Institute of Technology-Mambog - Pinabacdao, Samar Mandaue City College - Mandaue City, Cebu Manila Adventist Medical Center and Colleges - Pasay Manila Business College - Manila Manila Central University - Caloocan Manila Christian Computer Institute for the Deaf College of Technology - San Mateo, Rizal Manila Tytana Colleges - Pasay Manuel L. Quezon University - (Manila, Quezon City and Penarrubia, Abra) Manuel S. Enverga University Foundation - Lucena City, Quezon Manuel S. Enverga University Foundation - Candelaria, Quezon Manuel S. Enverga University Foundation - Catanauan, Quezon Manuel S. Enverga University Foundation - Sampaloc, Quezon Manuel S. Enverga University Foundation - San Antonio, Quezon Manuel S. Enverga University Foundation College of Law - Lucena City, Quezon Mapúa University - Manila Marian College - Ipil, Zamboanga Sibugay Province Mariano Marcos State University Marikina Polytechnic College - Marikina Marinduque Midwest College - Gasan, Marinduque Marinduque State University - Main Campus Tanza, Boac, Marinduque Marinduque State University - Gasan, Marinduque Marinduque State University - Matalaba, Sta. Cruz, Marinduque Marinduque State University - Pag-asa, Sta. Cruz, Marinduque Marinduque State University - Torrijos, Marinduque Mariners' Polytechnic Colleges Foundation Mariners' - Canaman Campus - Canaman, Camarines Sur Mariners' - Legazpi City (Main Campus) Mariners' - Naga Campus - Panganiban Drive, Naga City, Camarines Sur Martinez Colleges, Inc. - Caloocan Mary Chiles College - Manila Mary Johnston College of Nursing - Manila Mary the Queen College (Pampanga) - Guagua, Pampanga Maryhill College - Lucena City, Quezon Masbate Colleges - Masbate City Masters Technological Institute of Mindanao (MTIM) - Iligan City Mater Dei College - Tubigon, Bohol MATS College of Technology - Davao City Maxino College - Dumaguete Medical Colleges of Northern Philippines - Peñablanca, Cagayan Medina College Inc. 
Medina College - Ozamiz Medina College - Ipil Medina College - Pagadian Medina Foundation College - Sapang Dalaga, Misamis Occidental Mendero College - Pagadian City MFI Technological Institute Meridian International Business, Arts & Technology College (MINT College) - (Taguig, Pasig & Quezon City) Messiah College - Mandaluyong Metro Business College (formerly Metro Data Computer College) - Pasay Metro Dumaguete College - Dumaguete City Metro Manila College - Kaligayahan, Novaliches, Quezon City Metropolitan Hospital College of Nursing Metropolitan School of Science and Technology - Santiago City Meycauayan College - Bulacan Microcity Computer College Foundation, Inc. - Balanga City Microspan Software Technology, Inc. - Cotabato City Mind And Integrity College - Cabuyao City Mindanao Aeronautical Technical School College of Technology - Davao City Mindanao Autonomous College - Lamitan City, Basilan Mindanao Kokusai Daigaku (Mindanao International College) - Davao City Mindanao Medical Foundation College - Davao City Mindanao Polytechnic College - General Santos City Mindanao Sanitarium and Hospital College - Iligan City Mindanao State University Mindanao State University-Buug Mindanao State University–General Santos City Mindanao State University - Iligan Institute of Technology Mindanao State University–Maguindanao (Datu Odin Sinsuat, Maguindanao) Mindanao State University - Marawi Mindanao State University - Naawan Mindanao State University - Sulu Mindanao State University–Tawi-Tawi College of Technology and Oceanography Mindanao University of Science and Technology - Lapasan, Cagayan de Oro City Mindoro State College of Agriculture and Technology Mindoro State College of Agriculture and Technology - Bongabong, Oriental Mindoro Mindoro State College of Agriculture and Technology - Calapan City Mindoro State College of Agriculture and Technology - Victoria, Oriental Mindoro Miriam College - (Quezon City & Calamba, Laguna) Misamis Oriental State College of Agriculture and Technology - Claveria, Misamis Oriental Misamis Oriental University Misamis University Mondriaan Aura College - Subic Bay Monkayo College of Arts of Science and Technology (MONCAST) Montessori Professional College International (MPCI) Montessori Professional College International - Antipolo Montessori Professional College International - Bacoor Montessori Professional College International - Caloocan Montessori Professional College International - Dasmariñas Montessori Professional College International - Imus Montessori Professional College International - Largo Montessori Professional College International - Makati Montessori Professional College International - Marikina Montessori Professional College International - Muñoz Montessori Professional College International - Pasay Montessori Professional College International - Pasig Montessori Professional College International - Recto Montessori Professional College International - Calamba Mount Carmel College - Baler, Aurora Mount Carmel College - Escalante City, Negros Occidental Mountain Province State Polytechnic College - Bontoc, Mountain Province Mountain View College - Mt. 
Nebo, Valencia City, Bukidnon Mystical Rose College of Science and Technology - Mangatarem, Pangasinan N Naga College Foundation - Naga City Naga View Adventist College - Naga City NAMEI Polytechnic Institute - Mandaluyong National Christian Life College - (multiple campuses) National College of Business and Arts National College of Business and Arts - Cubao, Quezon City National College of Business and Arts - Fairview, Quezon City National College of Business and Arts - Taytay, Rizal National College of Science and Technology - Dasmariñas City, Cavite National Police College Regional Training School - Cauayan City, Isabela National Teachers College - Quiapo, Manila National University Naval State University Naval State University - Main Campus, Naval Naval State University - Biliran Campus Navotas Polytechnic College Nazarenus College and Hospital Foundation, Inc. - Bulacan Negros College - Ayungon, Negros Oriental Negros Maritime College Foundation, Inc. - Sibulan, Negros Oriental Negros Navigation Oceanlink Institute - Pier 2 North Harbor, Tondo, Manila Negros Oriental State University Negros Oriental State University - Main Campus, Dumaguete City Negros Oriental State University - Bajumpandan Campus, Dumaguete City Negros Oriental State University - Bais Campuses Negros Oriental State University - Bayawan-Sta. Catalina Campus Negros Oriental State University - Guihulngan Campus Negros Oriental State University - Mabinay Campus Negros Oriental State University - Pamplona Farm Negros Oriental State University - Siaton Campus New Era University New Era University - Batangas New Era University - General Santos City New Era University - Pampanga (San Fernando, Pampanga) New Era University - Quezon City Northeast Luzon Adventist College - Alicia, Isabela Northeastern College - Santiago City Northeastern Mindanao Colleges - Surigao City Northlink Technological college - Panabo City Northwest Samar State University Northwest Samar State University - Main Campus, Calbayog City Northwest Samar State University - San Jorge Campus Northwestern Visayan Colleges - Kalibo, Aklan Northwestern University (Philippines) - Laoag City North Davao College Tagum Foundation - Tagum City North Luzon Philippines State College - Candon City, Ilocos Sur Northern Cagayan Colleges Foundation - Ballesteros, Cagayan Northern Christian College - Laoag City Northern Davao Colleges - Panabo City Northern Luzon Adventist College - Sison, Pangasinan Northern Negros State College of Science and Technology Northern Negros State College of Science and Technology - Main Campus, Sagay City Northern Negros State College of Science and Technology - Cadiz Campus Northern Negros State College of Science and Technology - Calatrava Campus Northern Negros State College of Science and Technology - Escalante Campus Northern Philippines College for Maritime Science and Technology, Inc. 
Northern Samar Colleges - Catarman, Northern Samar Northern Zambales College - Masinloc, Zambales Notre Dame of Dadiangas University - General Santos Notre Dame of Isulan Notre Dame of Jolo College - Jolo, Sulu Notre Dame of Kidapawan College Notre Dame of Marbel University - Koronadal Notre Dame of Midsayap College - Midsayap, Cotabato Notre Dame of Tacurong College Notre Dame - Siena College of Polomolok Notre Dame University - Cotabato City Nueva Ecija University of Science and Technology Nueva Vizcaya State University Nueva Vizcaya State University - Main Campus, Bayombong Nueva Vizcaya State University - Bambang Campus Nuevo Zamboanga College - Zamboanga City O Oas Community College - Oas, Albay Occidental Mindoro State College - San Jose, Occidental Mindoro Olivarez College Olivarez College - Parañaque Olivarez College - Tagaytay Operation Brotherhood Montessori Center (OB Montessori Center) (Short name: OBMC) OBMC - Greenhills, San Juan OBMC - Santa Ana, Manila OBMC - Las Piñas, Parañaque OBMC - Angeles, Pampanga OBMC - Fairview, Quezon City Opol Community College - Opol, Misamis Oriental Osmeña Colleges - Masbate City Our Lady of Assumption College - San Pedro, Laguna (Main) Our Lady of Assumption College - Santa Rosa City Our Lady of Assumption College Cabuyao (Main) - Mamatid, Cabuyao City Our Lady of Assumption College Cabuyao (Annex) - Mamatid, Cabuyao City Our Lady of Assumption College - Tanauan, Batangas Our Lady of Fatima University Our Lady of Fatima University - Antipolo City, Rizal Our Lady of Fatima University - City of San Fernando, Pampanga Our Lady of Fatima University - Santa Rosa, Laguna Our Lady of Fatima University - Quezon City Our Lady of Fatima University - Valenzuela City Our Lady of Fatima University - Sta. Rosa City, Laguna Our Lady of Fatima University Nueva Ecija Doctors Colleges, Inc. - Cabanatuan City, Nueva Ecija Our Lady of Guadalupe Colleges - Mandaluyong Our Lady of Lourdes College - Valenzuela City Our Lady of Manaoag College - Manaoag, Pangasinan Our Lady of Mercy College - Borongan City Our Lady of Perpetual Succor College - Marikina Our Lady of the Pillar College - Cauayan City, Isabela Our Lady of the Pillar College - San Manuel, Isabela P Pacific InterContinental College (PIC), Inc. - Las Piñas Palawan State University Palompon Institute of Technology - Palompon, Leyte Pamantasan ng Cabuyao - Cabuyao, Laguna Pamantasan ng Lungsod ng Maynila Main Campus Pamantasan ng Lungsod ng Maynila, District Colleges Pamantasan ng Lungsod ng Marikina Pamantasan ng Lungsod ng Marikina - H. Bautista Campus Pamantasan ng Lungsod ng Marikina - J.P. Rizal Campus Pamantasan ng Lungsod ng Muntinlupa Pamantasan ng Lungsod ng Pasay Pamantasan ng Lungsod ng Pasig Pamantasan ng Lungsod ng Valenzuela Pamantasan ng Montalban - Montalban, Rizal Pambansa-Demokratikong Paaralan Pambayang Kolehiyo ng Mauban - Mauban, Quezon Pampanga Agricultural College - Magalang, Pampanga Pampanga Colleges - Macabebe, Pampanga Pangasinan State University Pangasinan State University - Asingan Pangasinan State University - Bayambang Pangasinan State University - Binmaley Pangasinan State University - Graduate School Pangasinan State University - Infanta Pangasinan State University - Lingayen Pangasinan State University - San Carlos City Pangasinan State University - Sta. 
Maria Pangasinan State University - Urdaneta City Panpacific University North Philippines - Tayug Panpacific University - Urdaneta City Panay Technological College - Kalibo, Aklan Partido State University Partido State University - Goa, Camarines Sur (Main Campus) Partido State University - Lagonoy, Camarines Sur Partido State University - San Jose, Camarines Sur Partido State University - Tinambac, Camarines Sur Pasig Catholic College - Pasig Passi Trade School - Passi City Pateros Technological College - Pateros, Metro Manila Patria Sable Corpus College - Santiago City PATTS College of Aeronautics Peña de Francia College - Naga (condemned campus) Philippine Advent College - Sindangan, Zamboanga del Norte Philippine Best Training System Colleges - Binangonan, Rizal Philippine Cambridge School - Dasmariñas, Cavite Philippine Cambridge School - GMA, Cavite Philippine Cambridge School - Imus, Cavite Philippine Cambridge School - Noveleta, Cavite Philippine Central Islands College (PCIC) Philippine Christian University Philippine Countryville College, Inc. (PCC) - P2B- Panadtalan, Maramag, Bukidnon Philippine College of Criminology (PCCr) - Manila Philippine College of Health Sciences, Inc. - Manila Philippine College of Technology - Davao City Philippine College of Technology - Bajada Philippine College of Technology - Calinan Philippine International College Philippine Law Enforcement College - Tuguegarao City, Cagayan Philippine Merchant Marine Academy - San Narciso, Zambales Philippine Military Academy - Baguio City Philippine National Police Academy - Silang, Cavite Philippine Nautical Technological College Intramuros Manila - Dasmarinas Cavite Philippine Nazarene College - La Trinidad, Benguet Philippine Normal University Philippine Normal University - Manila (Main Campus) Philippine Normal University - North Luzon (Alicia, Isabela) Philippine Normal University - Visayas (Cadiz, Negros Occidental) Philippine Normal University - South Luzon (Lopez, Quezon) Philippine Normal University - Mindanao (Prosperidad, Agusan del Sur) Philippine Rehabilitation Institute - Quezon City Philippine School of Business Administration - Manila (Main Campus) Philippine School of Business Administration - Quezon City Philippine State College of Aeronautics - Villamor Air Base, Pasay (Main Campus) Philippine State College of Aeronautics - Basa Air Base, Floridablanca, Pampanga Philippine State College of Aeronautics - Clark Air Base, Angeles City, Pampanga Philippine State College of Aeronautics - Fernando Air Base, Lipa City, Batangas Philippine State College of Aeronautics - Mactan-Benito Ebuen Air Base, Cebu Philippine Technological Institute of Science Arts and Trade - Central Inc., General Mariano Alvarez, Cavite Philippine Technological Institute of Science Arts and Trade - Central Inc., Sta. Rosa, Laguna Philippine Technological Institute of Science Arts and Trade - Central Inc., Tanay, Rizal Philippine Women's University Philippine Women's College of Davao Philippine Women's University - CDCEC Calamba Campus Philippine Women's University - Surigao City Campus Philtech Institute of Arts and Technology Inc. - Gumaca, Quezon Philtech Institute of Arts and Technology Inc. - Lucena City Pilar College - Zamboanga City Pilgrim University (formerly Pilgrim Christian College) - Cagayan de Oro City Pinabacdao State University - Mambog, Pinabacdao, Samar Pines City Colleges - Baguio City PLT College Inc. 
- Bayombong, Nueva Vizcaya PMI Colleges (Philippine Maritime Institute) PMI Colleges Bohol PMI Colleges Manila PMI Colleges Quezon City PMMS - Las Piñas Polangui Community College - Polangui, Albay Polytechnic College of Davao Del Sur Inc. - Digos City, Davao Del Sur Polytechnic State University of Bicol - Nabua, Camarines Sur Polytechnic University of the Philippines System (11 branches and 11 campuses) Q Queen of Apostles College Seminary (QACS) - Taguum City Quezon City Polytechnic University Quezon City Polytechnic University - Batasan Hills Quezon City Polytechnic University - San Bartolome Quezon City Polytechnic University - San Francisco Quezon Colleges of the North - Ballesteros, Cagayan Quezon Colleges of Southern Philippines - Tacurong City Quezon Memorial Institute of Siquijor, Siquijor Quirino State University - Cabarroguis, Quirino Quirino State University - Diffun, Quirino Quirino State University - Maddela, Quirino R Ramon Magsaysay Memorial Colleges - General Santos City Ramon Magsaysay Technological University – multiple campuses RC Al-Khwarizmi International College - Marawi City Red Aeronautics and Technological Institute Inc. - Silay City Red Link Science Institute & Technology of Laguna - Bay, Laguna (Main Campus) Red Link Science Institute & Technology of Calamba (Annex Campus) Regis Marie College Remedios T. Romualdez Memorial Schools - Makati Medical Center Republic Central Colleges - Angeles City Riverside College - Bacolod City Rizal College Rizal College of Laguna - Calamba (Main Campus) Rizal College of Taal - Taal, Batangas (Annex Campus) Rizal Memorial Colleges - Davao City Rizal Memorial Institute - Dapitan City Rizal Technological University – multiple campuses Rogationist College - Silang, Cavite Romblon State University – multiple campuses Romeo Padilla University - Urdaneta City, Pangasinan Roosevelt College System – multiple campuses Roosevelt College Marikina Roosevelt College Quirino Royal Christian College S Sacred Heart College - Lucena City, Quezon Sacred Heart College - Tacloban City St. Anthony College Calapan City, Inc., (formerly St. Anthony College of Science and Technology, Inc.) - Calapan City, Oriental Mindoro St. Anthony College of Nursing, Inc. - Roxas City St. Anthony College of Technology - Mabalacat City, Pampanga St. Anthony's College - San Jose, Antique St. Anthony's College - Santa Ana, Cagayan Saint Bridget College - Batangas City Saint Clare College - Region 2, Cauayan City, Isabela Saint Columban College - Pagadian City San Jose Community College - Malilipot, Albay St. Dominic College of Asia - Bacoor, Cavite St. Dominic College - Basco, Batanes Saint Ferdinand College - Ilagan City, Isabela Main Campus Saint Ferdinand College - Cabagan, Isabela Saint Francis of Assisi College System - Las Piñas Campuses: Alabang, Dasmariñas, Taguig, Biñan, Pamplona, Bacoor, Los Baños Saint Francis Institute of Computer Studies - San Pedro, Laguna Campuses: Cabagan, Isabela Saint Gabriel College - Kalibo, Aklan St. Ignatius Technical Institute of Business and Arts - Sta. Rosa, Laguna St. James College of Parañaque - Paranaque City St. James College of Quezon City Saint John and Paul Colleges - Calamba City, Laguna Saint John Colleges - Calamba City St. Joseph College Cavite City Saint Joseph College Maasin City St. Joseph College-Olongapo, Inc. - Olongapo City Saint Joseph's College - Montalban, Rizal Saint Joseph's College - Quezon City Saint Joseph College of Sindangan - Sindangan, Zamboanga del Norte Saint Jude College - Manila St. 
Jude Thaddeus Institute of Technology - Surigao City St. Linus University (St. Linus Online Institute) - Paniqui, Tarlac Saint Louis College - San Fernando City, La Union Saint Louis University, Baguio City Main Campus - Bonifacio St., Baguio City Gonzaga Campus - General Luna St., Baguio City Maryheights Campus - Bakakeng, Baguio City Navy Base Campus - Navy Base, Baguio City St. Mary's Angels College of Pampanga - Santa Ana, Pampanga St. Mary's College of Meycauayan - Meycauayan City Saint Mary's College of Quezon City Saint Mary's University of Bayombong, Nueva Vizcaya St. Mary's College of Baganga - Baganga, Davao Oriental St. Mary's College of Baliuag - Bulacan St. Mary's College of Boac - Marinduque St. Mary's College of Borongan - Eastern, Samar St. Mary's College of Catbalogan - Catbalogan, Samar (formerly Sacred Heart College) St. Mary's College of Labason - Labason, Zamboanga del Norte St. Mary's College of Tagum - Tagum City, Davao St. Mary's College of Toledo - Toledo City, Cebu Saint Mary's University of Cebu - Cebu City St. Matthew College - San Mateo, Rizal Saint Michael College of Hindang Leyte - Hindang, Leyte Saint Michael College of Caraga - Nasipit, Agusan del Norte Saint Michael's College - Cantilan, Surigao del Sur Saint Michael's College - Guagua, Pampanga St. Michael's College - Iligan City St. Nicolas College of Business and Technology - San Fernando, Pampanga St. Paul College of Ilocos Sur - Bantay, Ilocos Sur St. Paul College of Makati St. Paul College of Pasig St. Paul College of Parañaque St. Paul College of Technology - Tarlac City St. Paul University System (7 campuses) Saint Pedro Poveda College St. Peter's College - Balingasag St. Peter's College - Iligan St. Peter's College - Ormoc St. Peter's College - Toril (Davao City) St. Peter College of Technology - Capas, Tarlac St. Rita College - Manila St. Rita College - Parañaque City St. Rita's College - Balingasag St. Scholastica's Academy - City of San Fernando, Pampanga St. Scholastica's College Manila St. Scholastica's College - Tacloban Sta. Teresa College - Bauan, Batangas Saint Theresa's College of Cebu City Saint Theresa's College of Quezon City Saint Vincent's College - Dipolog City Samar Island University (formerly Samar College, Samar Junior College) San Agustin Institute of Technology San Beda University System campuses: San Beda College Alabang - Alabang Hills Village, Muntinlupa San Beda University Manila - Mendiola, Manila San Beda University Rizal - Taytay, Rizal San Carlos College - San Carlos City, Pangasinan San Isidro College - Malaybalay City, Bukidnon San Jose Christian Colleges - San Jose City, Nueva Ecija San Jose Community College - Malilipot, Albay Saint Joseph Institute of Technology - Butuan City, Agusan del Norte San Juan de Dios College - Pasay San Lorenzo College - Kalibo, Aklan San Pablo Colleges - San Pablo City San Pedro College - Davao City San Pedro College of Business Administration - San Pedro, Laguna San Sebastian College - Recoletos de Manila - (Manila) San Sebastian College - Recoletos de Cavite - (Cavite City) Sancta Maria, Mater et Regina, Seminarium - Roxas City, Capiz Sangguniang Kabataan University Santa Cruz Institute - Sta. 
Cruz, Marinduque Santa Isabel College Manila - Manila Santiago City Colleges - Santiago City Siargao Island Institute of Technology - Dapa, Siargao Island, Surigao del Norte Sibonga Community College - Sibonga, Cebu Siena College of Taytay Siena College of Quezon City Sierra College - Bayombong, Nueva Vizcaya Silliman University - Dumaguete City Siquijor State College (SSC) - North Poblacion, 6226 Larena, Siquijor Siquijor State College - Lazi Campus - 6228 Lazi, Siquijor SISTECH College - Santiago City Skill Power Institute - Antipolo, Rizal Sorsogon State University South Forbes City College - South Forbes Golf City, Silang, Cavite South Ilocandia College of Arts and Technology - Aringay, La Union South Philippine Adventist College - Davao City South SEED LPDH College - Las Piñas Southeast Asia Interdisciplinary Development Institute - Antipolo Southeastern College of Arts and Trades - Santiago City Southern Baptist College - Mlang, Cotabato Southern Christian College - Midsayap, Cotabato Southern Isabela Colleges Southern Isabela Colleges of Arts and Trades (TESDA) - Santiago City Southern Leyte Business College - Maasin City Southern Leyte State University - Main Campus, Sogod, Southern Leyte Southern Leyte State University - Bontoc Campus, San Ramon, Bontoc, Southern Leyte Southern Leyte State University - Hinunangan Campus, Hinunangan, Southern Leyte Southern Leyte State University - San Juan Campus, San Juan, Southern Leyte Southern Leyte State University - Tomas Oppus Campus, San Isidro, Tomas Oppus, Southern Leyte Southern Luzon State University (multiple campuses) Southern Luzon Technological College Foundation Inc. - Legazpi City, Albay Southern Mindanao Colleges Southern Philippine Academy College Inc. (SPA, College Inc.) - Datu Piang, Maguindanao Southern Philippines Agri-business and Marine and Aquatic School of Technology (SPAMAST) - Digos City Southern Philippines College - Julio Pacana St., Licuan, Cagayan de Oro City Southland College - Kabankalan City, Negros Occidental Southville Foreign University - Las Piñas Southville International School and Colleges - Las Piñas Southway College of Technology - San Francisco, Agusan del Sur Southwestern University - Cebu City SPJ International Technology Institute Inc. SPJ - Calabanga - Calabanga, Camarines Sur SPJ - Tinambac - Tinambac, Camarines Sur STI College (multiple campuses) STI College Balagtas Sto. 
Rosario Sapang Palay College - San Jose Del Monte City, Bulacan Stonyhurst Southville International School - Batangas City Sultan Kudarat State University Sumulong College Of Arts And Sciences Superior Institute of Science and Technology of Santiago City Surigao City Adventist Learning Center - Surigao City Surigao del Sur Polytechnic State College Surigao del Sur Polytechnic State College - Surigao del Sur Institute of Technology - Cantilan Surigao Education Center - Surigao City Surigao del Norte State University - Surigao City Surigao del Norte State University - Del Carmen, Surigao del Norte Campus Surigao del Norte State University - Mainit, Surigao del Norte Campus Surigao del Norte State University- Malimono, Surigao del Norte Campus Systems Plus College Foundation Systems Plus Computer College - Caloocan Systems Plus Computer College - City of San Fernando Systems Plus Computer College - Cubao Systems Plus Computer College - Davao Systems Plus College Foundation - Miranda, Angeles City Systems Plus Computer College - Quezon City San Mateo Municipal College - San Mateo, Rizal T Tabaco College - Tabaco City, Albay Tagoloan Community College - Tagoloan Misamis Oriental Taguig City University - Taguig Tagum Doctors College, Inc. - Tagum City Talisay City College - Talisay City, Cebu Tanchuling College - Legazpi City Tan Ting Bing Memorial Colleges Foundation, Inc. - San Isidro, Northern Samar Tarlac Agricultural University - Camiling, Tarlac Tarlac State University Tarlac State University - Main Campus in San Roque Tarlac State University - Lucinda Campus Tarlac State University - San Isidro Campus Tasashyass College Inc - Camarin, Caloocan Technological Institute of the Philippines Technological University of the Philippines Technological University of the Philippines - Manila (Main Campus) Technological University of the Philippines – Cavite Technological University of the Philippines - Cuenca Technological University of the Philippines - Lopez Technological University of the Philippines – Taguig Technological University of the Philippines – Visayas Thames International/Entrepreneurs School - Quezon City The MARIAM School of Nursing, Inc. - Lamitan City, Basilan Tiwi Community College - Tiwi, Albay Tomas Claudio Memorial College - Morong, Rizal Tomas del Rosario College - Balanga, Bataan Trace College - Los Baños Trace College - Makati Trent Information First Technical Career Institute Treston International College - Bonifacio Global City, Taguig Trinitas College Pantoc Meycauayan, Bulacan Trinity University of Asia (formerly Trinity College of Quezon City) Tubod College - Tubod, Lanao del Norte Tyrone Valera University - Sindangan, Zamboanga del Norte U Unciano Colleges, Inc Unciano Colleges and General Hospital - Manila Unciano Colleges and Medical Center - Antipolo Unida Christian Colleges - Anabu, Imus City, Cavite Union Christian College (Philippines) - San Fernando, La Union Union College of Laguna - Sta. Cruz, Laguna United Doctors Medical Center - Southeast Asian College - Quezon City Universal Colleges of Paranaque Inc. - Sucat Road, Paranaque City Universidad de Manila Universidad de Sta. 
Isabel - Peñafrancia Ave., Naga City, Camarines Sur Universidad de Zamboanga University of Antique - Hamtic, Antique University of Antique - Sibalom, Antique University of Antique - Tibiao, Antique University of Asia and the Pacific - Ortigas Center, Pasig University of the Assumption - (San Fernando, Pampanga) University of Baguio University of Batangas University of Bohol - (Tagbilaran City) University of Cagayan Valley (Cagayan Colleges Tuguegarao) University of Caloocan City (formerly Caloocan City Polytechnic College) University of Camarines Norte (formerly Camarines Norte State College) University of Catbalogan City - Samar University of Cebu University of Cebu - Main Campus, Sanciangko St., Cebu City University of Cebu - Banilad Campus, Gov. M. Cuenco Ave., Cebu City University of Cebu - Lapu-lapu & Mandaue, A.C. Cortes Ave., Mandaue City University of Cebu - Maritime Education and Training Center, Alumnos St., Cebu City University of Cebu School of Medicine, Mandaue City University of Cebu - South Campus (defunct) University of the Cordilleras (formerly Baguio Colleges Foundation) University of the East University of the East - Caloocan University of the East - Manila UERMMMC - Quezon City University of Eastern Philippines - Catubig Campus, Northern Samar University of Frederick Alcantara - Quezon City University of Iloilo University of the Immaculate Conception - Davao City University of James Din - Quezon City University of La Salette University of La Salette - Main, Santiago City University of La Salette - Aurora, Isabela University of La Salette - Cabatuan, Isabela University of La Salette - Cordon, Isabela University of La Salette - Jones, Isabela University of La Salette - Quezon, Isabela University of La Salette - Ramon, Isabela University of La Salette - Roxas, Isabela University of La Salette - San Mateo, Isabela University of Luzon - Dagupan University of Makati University of Manila University of Mindanao University of Mindanao - Main Campus, Bolton Street, Davao City University of Mindanao - Main Campus, Matina, Davao City University of Mindanao - Bangoy Campus University of Mindanao - Bansalan Campus University of Mindanao - Cotabato Campus University of Mindanao - Digos Campus University of Mindanao - Guianga Campus University of Mindanao - Panabo Campus University of Mindanao - Peñaplata Campus University of Mindanao - Tagum Campus University of Mindanao - Tibungco Campus University of Mindanao - Toril Campus University of Negros Occidental - Recoletos University of Northeastern Philippines - Iriga City University of Northeastern Philippines - Catarman, Northern Samar University of Northern Philippines - Vigan, Ilocos Sur University of Nueva Caceres University of Pangasinan - Dagupan City University of Perpetual Help System University of Perpetual Help System DALTA University of Perpetual Help System JONELTA University of the Philippines (8 campuses) University of Rizal System (multiple campuses) University of Saint Anthony - San Miguel, Iriga City University of Saint La Salle - Bacolod City University of Saint Louis Tuguegarao Tuguegarao, Cagayan University of the Samar Island Archipelago (formerly Samar State University) - Catbalogan City, Main Campus University of the Samar Island Archipelago - Basey University of the Samar Island Archipelago - Catarman, Northern Samar University of the Samar Island Archipelago - Guiuan, Eastern Samar University of the Samar Island Archipelago - Mercedes, Catbalogan City University of the Samar Island Archipelago - Paranas 
University of the Samar Island Archipelago - Pinabacdao (South Samar Campus) University of San Agustin - Iloilo City University of San Carlos University of San Carlos - Downtown Campus (main) University of San Carlos - North Campus (formerly Boys' High School) University of San Carlos - South Campus (formerly Girls' High School) University of San Carlos - Talamban Campus University of San Jose - Recoletos - Cebu City University of San Jose-Recoletos - Main Campus, Magallanes St., Cebu City University of San Jose-Recoletos - Balamban Campus, Arpili, Balamban University of San Jose-Recoletos - Basak Campus, N. Bacalso Ave., Cebu City University of Santo Tomas - Manila University of Santo Tomas–Legazpi University of Southeastern Philippines - Davao City University of Southern Mindanao - Kabacan, Cotabato University of Southern Philippines Foundation - Cebu City University of Southern Philippines Foundation - Lahug Campus (main) University of Southern Philippines Foundation - Mabini Campus University of the Visayas University of the Visayas - Main Campus, Colon St., Cebu City University of the Visayas - Mandaue Campus, D.M. Cortes St., Mandaue City University of the Visayas - Pardo Campus, Pardo, Cebu City University of the Visayas - Minglanilla Campus Urdaneta City University - Urdaneta City, Pangasinan UST Angelicum College - Quezon City V Valencia Colleges - Valencia, Bukidnon Velez College - Cebu City Vineyard International Polytechnic College - A. Luna St., Cagayan de Oro Virgen Delos Remedios College - Olongapo City Virgen Milagrosa University Foundation - San Carlos, Pangasinan Visayas State University Main Campus - Baybay City, Leyte Visayas State University - Alang-alang, Leyte Campus Visayas State University - Isabel, Leyte Campus Visayas State University - Tolosa, Leyte Campus Visayas State University - Villaba, Leyte Campus Visca N Roxas College W WCC Aeronautical College - 461 William Shaw St., Grace Park, Caloocan, Philippines World Citi Colleges - Antipolo, Quezon City, Guimba, Nueva Ecija Worldtech Resources Institute (WRI) Colleges WRI Metro Naga - National Highway, Concepcion Grande, Naga City, Camarines Sur WRI Partido - San Juan Evangelista St., Goa, Camarines Sur WRI Rinconada - San Miguel, Iriga City, Camarines Sur Wesleyan University-Philippines - Cabanatuan City Wesleyan University-Philippines (Aurora) - Maria Aurora, Aurora Province West Bay College - Alabang, Muntinlupa Westbridge Institute of Technology, Inc. - Cabuyao, Laguna West Negros University - Bacolod City West Visayas State University - Lapaz, Iloilo City Western Institute of Technology - Lapaz, Iloilo City Western Leyte College (WLC) - Ormoc City Western Mindanao State University – multiple campuses X Xavier University - Ateneo de Cagayan Y Z Zamboanga Peninsula Polytechnic State University - Zamboanga City Zamboanga del Sur Maritime Institute of Technology - Pagadian City Zamboanga State College of Marine Sciences and Technology - Zamboanga City Zamora Memorial College - Bacacay, Albay See also Higher education in the Philippines List of Jesuit educational institutions in the Philippines List of universities and colleges in Metro Manila List of universities and colleges in the Philippines by province References External links List of Higher Education Institutions at Commission on Higher Education (Philippines) List of colleges & universities in the Philippines by region/province at Finduniversity.ph List of universities in the Philippines by region at courses.com.ph Universities Philippines Philippines
24985552
https://en.wikipedia.org/wiki/Computer%20Entrepreneur%20Award
Computer Entrepreneur Award
The Computer Entrepreneur Award was created in 1982 by the IEEE Computer Society to recognize individuals who have made major technical or entrepreneurial contributions to the computer industry. The work must be public, and the award is not given until fifteen years after the developments it recognizes. The physical award is a sterling silver chalice with a gold-plated crown beneath the cup. Recipients The following people have received the Computer Entrepreneur Award: 2011: Diane Greene and Mendel Rosenblum, founders of VMware, for "creating a virtualization platform". 2009: Sandy Lerner and Len Bosack, founders of Cisco Systems, for "pioneering routing technology". 2008: John E. Warnock and Charles M. Geschke, founders of Adobe Systems, PostScript and PDF inventors, for the "desktop publishing revolution". 2008: Edwin E. Catmull, Pixar, for many important contributions in computer graphics. 2004: Bjarne Stroustrup, C++ inventor, for contributions to "object-oriented programming technologies". 2000: Michael Dell, founder of Dell Inc., for "revolutionizing the personal computer industry". 1999: Clive Sinclair, home computers pioneer, for "inspiring the computer industry". 1998: Bill Gates, Paul Allen, Steve Jobs and Steve Wozniak, founders of Microsoft and Apple Inc., for their contributions to the "personal computer industry". 1998: George Schussel, founder of Digital Consulting Institute (DCI), for "leadership in professional development, continuing education, and technology assessment". 1997: Andrew S. Grove, former CEO and Chairman of Intel Corporation, for "contributions to the computing industry and profession". 1996: Daniel S. Bricklin, "the father of the spreadsheet", for pioneering work on the spreadsheet. 1995: William Hewlett and David Packard, founders of Hewlett-Packard, for serving as a "role model for the entire computing industry". 1990: J. Presper Eckert, co-inventor of ENIAC (together with John Mauchly), for "pioneering design work" on the first general-purpose electronic digital computer. 1989: Gene M. Amdahl, for "entrepreneurial efforts" in the "mainframe industry". 1987: Erwin Tomash, for "pioneering work" on computer peripherals. 1986: Gordon Moore and Robert Noyce, for "early contributions to microcomputers and silicon components". 1985: Kenneth Olsen and William Norris, for "pioneering work" on minicomputers. See also List of computer-related awards References Computer-related awards IEEE society and council awards
30500259
https://en.wikipedia.org/wiki/Pankaj%20K.%20Agarwal
Pankaj K. Agarwal
Pankaj Kumar Agarwal is an Indian computer scientist and mathematician researching algorithms in computational geometry and related areas. He is the RJR Nabisco Professor of Computer Science and Mathematics at Duke University, where he has been chair of the computer science department since 2004. He obtained his Doctor of Philosophy (Ph.D.) in computer science in 1989 from the Courant Institute of Mathematical Sciences, New York University, under the supervision of Micha Sharir. Books Agarwal is the author or co-author of: Intersection and Decomposition Algorithms for Planar Arrangements (Cambridge University Press, 1991). The topics of this book are algorithms for, and the combinatorial geometry of, arrangements of lines and arrangements of more general types of curves in the Euclidean plane and the real projective plane. The topics covered in this monograph include Davenport–Schinzel sequences and their application to the complexity of single cells in arrangements, levels in arrangements, algorithms for building arrangements in part or in whole, and ray shooting in arrangements. Davenport–Schinzel Sequences and Their Geometric Applications (with Micha Sharir, Cambridge University Press, 1995). This book concerns Davenport–Schinzel sequences, sequences of symbols drawn from a given alphabet with the property that no subsequence of more than some finite length consists of two alternating symbols. As the book discusses, these sequences and combinatorial bounds on their length have many applications in combinatorial and computational geometry, including bounds on lower envelopes of sets of functions, single cells in arrangements, shortest paths, and dynamically changing geometric structures. Combinatorial Geometry (with János Pach, Wiley, 1995). This book, less specialized than the prior two, is split into two sections. The first, on packing and covering problems, includes topics such as Minkowski's theorem, sphere packing, the representation of planar graphs by tangent circles, and the planar separator theorem. The second section, although mainly concerning arrangements, also includes topics from extremal graph theory, Vapnik–Chervonenkis dimension, and discrepancy theory. Awards and honors Agarwal was elected as a fellow of the Association for Computing Machinery in 2002. He is also a former Duke Bass Fellow and an Alfred P. Sloan Fellow. He was the recipient of a National Young Investigator Award in 1993. Before holding the RJR Nabisco Professorship, he was the Earl D. Mclean Jr. Professor of Computer Science at Duke. References External links , Duke University Department page at Duke University Year of birth missing (living people) Living people Courant Institute of Mathematical Sciences alumni Duke University faculty Researchers in geometric algorithms Fellows of the Association for Computing Machinery 20th-century Indian mathematicians
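To make the defining property concrete, the condition can be checked mechanically. The following Python sketch is an illustration written for this summary (it is not code from the books): it tests whether a sequence is a Davenport–Schinzel sequence of order s, i.e., that it contains no two equal adjacent symbols and no two symbols alternating as a subsequence of length s + 2 or more.

from itertools import combinations

def longest_alternation(seq, a, b):
    # Keep only occurrences of a and b; the longest alternating
    # subsequence of a two-symbol string equals its number of
    # maximal runs of equal symbols.
    filtered = [x for x in seq if x in (a, b)]
    if not filtered:
        return 0
    return 1 + sum(1 for x, y in zip(filtered, filtered[1:]) if x != y)

def is_davenport_schinzel(seq, s):
    # Rule 1: no immediate repetitions.
    if any(x == y for x, y in zip(seq, seq[1:])):
        return False
    # Rule 2: no pair of symbols may alternate s + 2 or more times.
    return all(longest_alternation(seq, a, b) <= s + 1
               for a, b in combinations(set(seq), 2))

print(is_davenport_schinzel("abacaba", 3))  # False: contains a-b-a-b-a
print(is_davenport_schinzel("abacaba", 4))  # True

For example, "abacaba" is rejected at order 3 because the symbols a and b alternate five times (a b a b a), exactly the forbidden pattern for order-3 sequences.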
34433068
https://en.wikipedia.org/wiki/Computer-assisted%20survey%20information%20collection
Computer-assisted survey information collection
Computer-assisted survey information collection (CASIC) refers to a variety of survey modes that were enabled by the introduction of computer technology. The first CASIC modes were interviewer-administered, while later on computerized self-administered questionnaires (CSAQ) appeared. The term was coined in 1990 as a catch-all for survey technologies that have expanded over time. Modes The most common modes of computer-assisted survey information collection, ranked by the extent of interviewer involvement, are: CATI (Computer-assisted telephone interviewing) is the earliest CASIC mode, in which a remotely present interviewer calls respondents by phone and enters the answers into a computerized questionnaire. CAPI (Computer-assisted personal interviewing) was enabled by the introduction of portable computers; a physically present interviewer brings the computer with the questionnaire to the respondent and enters the answers into it. CASI (Computer-assisted self-interviewing) is similar to CAPI, but the respondent enters the answers on the computer of a physically present interviewer. Questions can also be presented in the form of audio (audio-CASI) or video clips (video-CASI). CAVI (Computer-assisted video interviewing) is similar to CATI, but the communication between a remotely present interviewer and the respondent is established via video chat. Disk by mail includes a floppy or optical disk that is sent to the respondent. The interviewer is not present. Touch-tone data entry (TDE) means that the respondent enters the answers by pressing the appropriate numeric keys on a telephone handset. The interviewer is not present. Interactive voice response (IVR) includes a wide range of approaches for voice communication with a computer using the telephone. Modern speech-recognition-enabled IVR systems allow the respondent to provide complex answers through the telephone that are automatically recorded as text. The interviewer is not present. Internet surveys include a variety of survey modes (e.g., email, web), of which the most widely used are web surveys. The interviewer is not present. Virtual interviewer surveys are usually carried out via the Internet, where some kind of virtual interviewer introduces the questions to the respondent. Future technological developments will enable increased virtualization, and interviewers may become completely computerized virtual characters. The interviewer is not present. Benefits Benefits of CASIC include: Reduced time and costs for data input Elimination of errors during data transcription Implementation of advanced features, such as automatic skips and branching, randomization of questions and response options, control of answer validity, and inclusion of multimedia elements Increased sense of privacy for the respondent Reduced cost of research Higher data quality due to the absence of interviewer-related bias. See also Comparison of survey software References Survey methodology
59364603
https://en.wikipedia.org/wiki/Kanjoya
Kanjoya
Kanjoya was an enterprise software-as-a-service (SaaS) company that developed natural language processing (NLP) based artificial intelligence to understand, measure, and improve customer and employee experience. Founded in 2006 by Armen Berjikly, the company was acquired by Ultimate Software in 2016. History Kanjoya was headquartered in San Francisco, and its vision was to deliver "empathy through technology." The company's core intellectual property was a significant advance in sentiment analysis, enabling real-time recognition of over 100 human emotions in written text, with greater accuracy than could be expected of human analysts. In its development phase, preliminary applications of Kanjoya's technology included measuring the emotional reaction of audiences during political debates, understanding how advertisements made consumers feel, and analyzing consumer sentiment to successfully predict future actions of the Federal Reserve Board. Eventually, Kanjoya focused product development on understanding employee sentiment in the workplace, through inputs including open-ended survey questions and performance reviews. Berjikly noted: "The area that we thought we could make the most impact, where people were the least understood, but yet affected the biggest part of their lives was [the] employee world." Products At the time of its acquisition, Kanjoya marketed three software-as-a-service offerings: Perception to conduct employee surveys, analyze the results, and distribute results to managers Perception for Performance to evaluate, develop, and motivate employees Perception for Diversity and Inclusion to diagnose conscious and unconscious bias in the workplace All solutions focused on understanding employees' qualitative feedback in addition to more traditional quantitative insights. Recognitions Kanjoya was awarded Awesome New Startup at the 2015 HR Technology Conference and named Gartner's "Cool Vendor in Human Capital Management, 2016." In a research partnership with the Federal Reserve Board, it was shown that Kanjoya's technology "outperforms the University of Michigan index of consumer sentiment for predicting macroeconomic series such as output and unemployment." References Software companies based in California Companies based in San Francisco Software companies of the United States 2006 establishments in the United States 2006 establishments in California Software companies established in 2006 Companies established in 2006
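Kanjoya's models themselves were proprietary, but the general shape of the text-emotion classification described above can be illustrated with a minimal supervised pipeline. The sketch below is purely illustrative: the training texts and labels are invented for the example, and it assumes the scikit-learn library is available.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set; a production system would learn from
# large volumes of labeled text covering many more emotions.
texts = [
    "I love collaborating with my team",
    "Thrilled about the new project",
    "This deadline is impossible",
    "Nobody acts on my feedback",
    "I'm worried about the reorganization",
    "Unsure what will happen to my role",
]
labels = ["joy", "joy", "frustration", "frustration", "anxiety", "anxiety"]

# Bag-of-words features plus a linear classifier: a standard
# baseline for emotion recognition in short survey responses.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                      LogisticRegression(max_iter=1000))
model.fit(texts, labels)

print(model.predict(["excited to join the new team"]))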
79668
https://en.wikipedia.org/wiki/Uniq
Uniq
uniq is a utility command on Unix, Plan 9, Inferno, and Unix-like operating systems which, when fed a text file or standard input, outputs the text with adjacent identical lines collapsed to a single, unique line of text. Overview The command is a kind of filter program; typically it is used after sort. It can also output only the duplicate lines (with the -d option), or add the number of occurrences of each line (with the -c option). For example, the following command lists the unique lines in a file, sorted by the number of times each occurs: $ sort file | uniq -c | sort -n Using uniq like this is common when building pipelines in shell scripts. History First appearing in Version 3 Unix, uniq is now available for a number of different Unix and Unix-like operating systems. It has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX and the Single Unix Specification. The version bundled in GNU coreutils was written by Richard Stallman and David MacKenzie. A uniq command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command is available as a separate package for Microsoft Windows as part of the GnuWin32 project and the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system. See also List of Unix commands References External links SourceForge UnxUtils – Port of several GNU utilities to Windows Unix text processing utilities Unix SUS2008 utilities Plan 9 commands Inferno (operating system) commands
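Because uniq only collapses adjacent duplicates, sorting the input first usually matters. The short session below illustrates this and the -c and -d options; the file name and contents are invented for the example:

$ cat letters
a
a
b
a
$ uniq letters
a
b
a
$ sort letters | uniq -c
      3 a
      1 b
$ sort letters | uniq -d
a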
21121213
https://en.wikipedia.org/wiki/Desmond%20%28software%29
Desmond (software)
Desmond is a software package developed at D. E. Shaw Research to perform high-speed molecular dynamics simulations of biological systems on conventional computer clusters. The code uses novel parallel algorithms and numerical methods to achieve high performance on platforms containing multiple processors, but may also be executed on a single computer. The core and source code are available at no cost for non-commercial use by universities and other not-for-profit research institutions, and have been used in the Folding@home distributed computing project. Desmond is available as commercial software through Schrödinger, Inc. Molecular dynamics program Desmond supports algorithms typically used to perform fast and accurate molecular dynamics. Long-range electrostatic energy and forces can be calculated using particle mesh Ewald-based methods. Constraints can be enforced using the M-SHAKE algorithm. These methods can be used together with time-scale splitting (RESPA-based) integration schemes. Desmond can compute energies and forces for many standard fixed-charge force fields used in biomolecular simulations, and is also compatible with polarizable force fields based on the Drude formalism. A variety of integrators and support for various ensembles have been implemented in the code, including methods for temperature control (Andersen, Nosé-Hoover, and Langevin) and pressure control (Berendsen, Martyna-Tobias-Klein, and Langevin). The code also supports methods for restraining atomic positions and molecular configurations; allows simulations to be carried out using a variety of periodic cell configurations; and has facilities for accurate checkpointing and restart. Desmond can also be used to perform absolute and relative free energy calculations (e.g., free energy perturbation). Other simulation methods (such as replica exchange) are supported through a plug-in-based infrastructure, which also allows users to develop their own simulation algorithms and models. Desmond is also available in a graphics processing unit (GPU) accelerated version that is about 60–80 times faster than the central processing unit (CPU) version. Related software tools Along with the molecular dynamics program, the Desmond software also includes tools for energy minimization and energy analysis, both of which can be run efficiently in a parallel environment. Force field parameters can be assigned using a template-based parameter assignment tool called Viparr. It currently supports several versions of the CHARMM, Amber and OPLS force fields, and a range of different water models. Desmond is integrated with a molecular modeling environment (Maestro, developed by Schrödinger, Inc.) for setting up simulations of biological and chemical systems, and is compatible with Visual Molecular Dynamics (VMD) for trajectory viewing and analysis. See also D. E. Shaw Research Folding@home Comparison of software for molecular mechanics modeling Metadynamics Molecular design software Molecular dynamics References External links Desmond Users Group Schrödinger Desmond Product Page Molecular dynamics software Force fields
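Desmond's integrators are highly optimized parallel code, but the role of one of the thermostats named above can be shown with a minimal example. The following Python sketch is illustrative only (it is not Desmond code): it advances a single one-dimensional particle under Langevin temperature control with a simple Euler–Maruyama step, and checks that the mean squared velocity approaches the equipartition value kT/m.

import math
import random

def langevin_step(x, v, force, m, gamma, kT, dt):
    # One Euler-Maruyama step of the Langevin equation:
    # dv = (F/m) dt - gamma v dt + sqrt(2 gamma kT / m) dW
    noise = math.sqrt(2.0 * gamma * kT * dt / m) * random.gauss(0.0, 1.0)
    v += (force(x) / m - gamma * v) * dt + noise
    x += v * dt
    return x, v

# Harmonic oscillator F(x) = -k x, equilibrating toward temperature kT.
k, m, gamma, kT, dt = 1.0, 1.0, 1.0, 0.5, 0.01
x, v = 1.0, 0.0
v2 = []
for step in range(200000):
    x, v = langevin_step(x, v, lambda q: -k * q, m, gamma, kT, dt)
    if step >= 50000:  # discard the equilibration phase
        v2.append(v * v)
print("target kT/m =", kT / m, " measured <v^2> =", sum(v2) / len(v2))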
21228779
https://en.wikipedia.org/wiki/Darius%20Crouter
Darius Crouter
Darius Crouter (May 5, 1827 – May 9, 1910) was a Canadian minister, farmer and political figure. He represented Northumberland East in the House of Commons of Canada from 1881 to 1882 as an Independent Liberal member. He was born in Haldimand Township, Upper Canada. Crouter was a minister of the Methodist Episcopal Church, who retired to farming later in life. He was elected to the House of Commons in an 1881 by-election held following the death of Joseph Keeler. Crouter was defeated when he ran for reelection in 1882. He lived near Brighton. References The Canadian parliamentary companion and annual register, 1882 CH Mackintosh 1827 births 1910 deaths Members of the House of Commons of Canada from Ontario Methodist ministers
57760387
https://en.wikipedia.org/wiki/SCION%20%28Internet%20architecture%29
SCION (Internet architecture)
SCION (Scalability, Control, and Isolation On Next-Generation Networks) is a modern Future Internet architecture that aims to offer high availability and efficient point-to-point packet delivery, even in the presence of actively malicious network operators and devices. As of 2018 it is an ongoing research project led by researchers at ETH Zurich and, among other Future Internet proposals, is being explored in the Internet Engineering Task Force research group for path-aware networking. Goals Availability in the presence of distributed adversaries: As long as an attacker-free path between endpoints exists, it should be discovered and utilized with guaranteed bandwidth. Transparency and Control: Separation of control and data planes by encoding paths as packet-carried forwarding state (PCFS) in the packet header, as well as enabling of multipath communication for enhanced availability and defense against network attacks. Efficiency, Scalability, and Extensibility: Packet forwarding is at least as efficient in latency and throughput as current IP in common cases, and more scalable than BGP with respect to the size of routing tables. This is achieved by storing state in packet headers and protecting it cryptographically, using modern block ciphers such as AES that can be computed very efficiently (within 10 ns on a modern CPU). Support for Global but Heterogeneous Trust: Scales the authentication of entities to a global environment and utilizes trust agility so that each end host or user can know the complete set of trust roots used for the validation of a certificate. Deployability: Deployment should only require installation or upgrade of a few border routers, thus requiring minimal added complexity to the existing infrastructure. In addition, it should not disrupt current Internet topology and business models/relationships (e.g., it should still support peering). Isolation domains and autonomous systems SCION introduces the concept of an isolation domain (ISD), which is a logical grouping of autonomous systems (ASes) administered by a smaller subset of the ASes that constitute the ISD core. The ISD is governed by a policy, called the trust root configuration (TRC), which is negotiated by the ISD core and defines the roots of trust that are used to validate bindings between names and public keys or addresses. ASes within an ISD can be connected by core links, customer-provider links, or peering links, representative of the relationship between the ASes. Within an AS there are several services, such as: Beacon Servers - responsible for beaconing, a process to generate, receive, and propagate messages called path-segment construction beacons (PCBs) in order to construct path segments and explore routing paths. Path Servers - store mappings from ASes to paths that were discovered during beaconing. Name Servers - perform name translation similar to DNS, using RAINS to retrieve the (ISD, AS) tuple that can be used to find and construct end-to-end paths. Certificate Servers - cache copies of TRCs retrieved from the ISD core and AS certificates, and manage keys for securing inter-AS communication. Border Routers - forward SCION packets to the next SCION border router or to the destination host within the destination AS. Control plane The control plane is responsible for discovering networking paths and making those paths available to end hosts.
Inter-domain beaconing connects ISDs by enabling core ASes to learn paths to other core ASes while intra-domain beaconing allows non-core ASes to learn path segments to core ASes. The SCION control plane operates at the AS level, while communication within an AS is governed by existing intra-domain communication technologies and protocols (e.g. OSPF, SDN, MPLS). To reach a remote destination, a host performs a path lookup at its local path server to obtain up-segments (from source AS to the core), down segments (from core AS to destination AS), and core segments (between core ASes) in the case these up and down segments end at different core ASes. Paths can be combined as desired, possibly using peering links where available. Data plane A SCION packet minimally contains a path and the data plane ensures packet forwarding using the provided paths. Forwarding utilizes a split of locator (AS-level path) and identifier (the destination address), like in the Locator/Identifier Separation Protocol (LISP). As a result, SCION border routers forward packets based on the AS-level path in the packet header without inspecting the destination address and also without consulting an inter-domain routing table. The destination address can have any format that the destination AS can interpret because only the border router at the destination AS needs to inspect the destination address to forward it to the appropriate local host. The destination can respond to the source by inverting the end-to-end path from the packet header, or it can perform its own path lookup and path-segment construction. Security Similar to BGPsec, each AS signs the PCBs it forwards. This signature enables PCB validation by all entities. To ensure path correctness, the forwarding information within each packet is also cryptographically protected. Each AS uses a secret symmetric key that is shared among beacon servers and border routers and is used to efficiently compute a message authentication code (MAC) over the forwarding information. The per-AS information includes the ingress and egress interfaces, an expiration time, and the MAC computed over these fields, which is (by default) all encoded within an 8-byte field referred to as a hop field (HF). Deployment and commercial operations SCION is running on a number of nodes around the world. "In 2017, several internet service providers and financial institutions in Switzerland wanted to use SCION for their commercial operations. And so Adrian Perrig founded the spin-off Anapaya Systems together with David Basin and Peter Müller, fellow professors at the Department of Computer Science at ETH Zurich." The first ISPs to use SCION are Swisscom and SWITCH. Several corporations have obtained SCION network connections through these ISPs to the corporate SCION network. Among the first customer deployments are SNB, ZKB and SIX from the Swiss financial sector. References Further reading External links Official website IETF Path-Aware Networking Research Group Network layer protocols Routing protocols Internet layer protocols
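The hop-field authentication described in the Security section above can be sketched in a few lines. The example below is illustrative only: it does not reproduce SCION's actual wire format, field widths, or key hierarchy, and it assumes the Python cryptography package for AES-CMAC. Each AS packs its ingress interface, egress interface, and an expiration time, authenticates them with its secret symmetric key, and truncates the tag so the hop field stays compact.

import struct
from cryptography.hazmat.primitives.cmac import CMAC
from cryptography.hazmat.primitives.ciphers.algorithms import AES

def make_hop_field(as_key, ingress, egress, expiry, mac_len=3):
    # Illustrative layout: pack(ingress, egress, expiry) || truncated MAC.
    info = struct.pack("!HHI", ingress, egress, expiry)
    c = CMAC(AES(as_key))
    c.update(info)
    return info + c.finalize()[:mac_len]

def verify_hop_field(as_key, hf, mac_len=3):
    # A border router recomputes the MAC over the forwarding fields;
    # no inter-domain routing table lookup is needed.
    info, tag = hf[:-mac_len], hf[-mac_len:]
    c = CMAC(AES(as_key))
    c.update(info)
    return c.finalize()[:mac_len] == tag

as_key = bytes(16)  # per-AS secret key (all zeros only for this demo)
hf = make_hop_field(as_key, ingress=3, egress=7, expiry=1700000000)
assert verify_hop_field(as_key, hf)

Because only the AS that created a hop field holds the key, tampering with the ingress or egress interfaces or the expiration time en route invalidates the MAC.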
590873
https://en.wikipedia.org/wiki/Tim%20Sweeney%20%28game%20developer%29
Tim Sweeney (game developer)
Timothy Dean Sweeney (born 1970) is an American video game programmer, businessman and conservationist, known as the founder and CEO of Epic Games, and as the creator of the Unreal Engine, one of the most used game development platforms. Early life Sweeney was raised in Potomac, Maryland, the youngest of three brothers. At a young age, he became interested in tinkering with mechanical and electrical devices, and stated he had taken apart a lawnmower as early as age five or six, and later built his own go-kart. He became interested in arcade games when they began to become popular in the late 1970s, knowing that, like the mechanical devices he took apart and repaired, the games inside the machines had been programmed by someone. Though the family got an Atari 2600, Sweeney was not as interested in its games, outside of Adventure, and later said he had not played many video games in his life and very few to completion. At the age of 11, Sweeney visited his older brother's new startup in California, where he had access to early IBM Personal Computers. Sweeney spent the week there, learning BASIC and establishing his interest in programming; while he had had a Commodore 64 before, Sweeney was much more taken by how easy the IBM PC was to use. When his family got an Apple II, Sweeney began in earnest learning how to program on that, trying to make Adventure 2 in the spirit of the Atari 2600 game. Sweeney estimated that between the ages of 11 and 15, he spent over 10,000 hours teaching himself how to program using information on online bulletin boards, and completed several games, though he never shared these with others. He also learned concepts of entrepreneurship from his brothers. As a teenager, he made a good deal of money by offering to mow the lawns of wealthy residents in the area for half the price of professional services. Founding of Epic Games Sweeney attended the University of Maryland starting around 1989, where he studied mechanical engineering, though he was still fascinated by computers. Around this time, his father, who worked for the Defense Mapping Agency, gave him an IBM Personal Computer/AT. Sweeney established a consulting business, Potomac Computer Systems, out of his parents' home to offer help with computers, but it never took off and he shelved the company. Later, Sweeney had the idea of creating games that could be sold, programming them at night or over weekends outside college work. This first required him to create a text editor written in Pascal to be able to program the game, which led to the idea of making a game out of the text editor itself. This became the basis of ZZT. He let college friends and those around his neighborhood provide feedback, and was aware it was something he could sell to other computer users. To distribute the game, Sweeney looked to the shareware model, and wrote to Scott Miller of Apogee Software, Ltd., a leading shareware producer at the time, for ideas on how to distribute ZZT. He revitalized Potomac Computer Systems to sell ZZT, fulfilling mail orders with the help of his father. ZZT sold well enough, a few copies each day, that Sweeney decided to make developing games his career. Recognizing he needed a better name for a video game company, he renamed Potomac Computer Systems to Epic MegaGames. Following ZZT, Sweeney started working on his next title, Jill of the Jungle, but found that he lacked the skills to complete this alone. He formed a team of four people to complete the game by mid-1992.
For continued development, Sweeney sought out a business partner for Epic MegaGames, eventually settling on Mark Rein, who had just been let go from id Software. Rein helped with growing and managing the company; due to the company's growth, Sweeney did not end up completing his degree, falling short by one credit. Sweeney would later start work on the Unreal Engine, developed for the 1998 first-person shooter Unreal and licensed for use in multiple other video games. With the success of Unreal, the company relocated to Cary, North Carolina, in 1999, and changed its name to Epic Games. Sweeney has filed several patents related to computer software. Conservation and philanthropy Since the real estate bubble collapsed in 2008, Sweeney has used his fortune to purchase large tracts of land in North Carolina for conservation, becoming one of the largest private landowners in the state. As of December 2019, he has preserved 50,000 acres of forest land including the Box Creek Wilderness, a 7,000-acre natural area that contains more than 130 rare and threatened plants and wildlife species. Sweeney, who had paid $15 million for Box Creek Wilderness, donated the conservation easement to the United States Fish and Wildlife Service in 2016. One of the motives to put Box Creek Wilderness under conservation easement was a condemnation lawsuit filed by a power company that planned to build a transmission line through the land. The lawsuit was settled following the Fish and Wildlife Service's and Senator Richard Burr's involvement in protecting the site, which prevented it from being fragmented. "I'm grateful for the efforts of Senator Burr to help protect Box Creek Wilderness," Sweeney said. "And for the whole Fish and Wildlife Service team's tireless efforts to preserve vital North Carolina natural areas in partnership with conservation-minded landowners like me." Additionally, he has participated in the expansion of Mount Mitchell State Park by donating 1,500 acres to a conservation project. In April 2021, it was announced that Sweeney would donate 7,500 acres in the Roan Highlands of western North Carolina to the Southern Appalachian Highlands Conservancy. Upon transfer the following year, the conservancy will manage the property as a nature preserve, conducting scientific studies in collaboration with Sweeney and offering guided hikes. This acreage, valued at tens of millions of dollars, is reportedly the largest private conservation land donation in the history of North Carolina. Awards and recognition Wired magazine awarded him a Rave Award in 2007 for his work on Unreal Engine 3, the technology behind the blockbuster Gears of War. In February 2012, Sweeney was inducted into the Academy of Interactive Arts & Sciences (AIAS) Hall of Fame for changing "the face of gaming with the advent of the Unreal Engine and the commitment of Epic, as a studio, to bring both consumer and industry-facing technology to new heights." In recognition of his conservation efforts, he was named Land Conservationist of the Year by the North Carolina Wildlife Federation in 2013, and later in 2014 the land trusts of North Carolina honored him with the Stanback Volunteer Conservationist of the Year Award. In 2017, Sweeney was the recipient of the Lifetime Achievement Award at the Game Developers Choice Awards. In 2019, he was named Person of the Year by British video game industry trade magazine MCV. He was also a finalist for The News & Observer's Tar Heel of the Year award, which recognizes the contributions of North Carolina residents.
At the Forbes Media Awards 2020, Sweeney was chosen as Person of the Year for building Fortnite and turning it into a social network with his company, hosting online events such as Travis Scott's in-game concert, which drew 28 million viewers. Personal Sweeney lives in Cary, North Carolina. According to Forbes, he has a net worth of $7.4 billion. However, Bloomberg estimates his wealth at $9.4 billion. Publications Tim Sweeney (2000). A Critical Look at Programming Languages. GameSpy – via Internet Archive. Tim Sweeney (2006). The Next Mainstream Programming Language: A Game Developer's Perspective. Symposium on Principles of Programming Languages (POPL) – via MIT CSAIL. Tim Sweeney (2008). Wild Speculation on Consumer Workloads: 2012-2020. IEEE International Symposium on Workload Characterization (IISWC). Neal Glew, Tim Sweeney & Leaf Petersen (2013). A Multivalued Language with a Dependent Type System. Proceedings of the 2013 ACM SIGPLAN workshop on Dependently-typed programming. Neal Glew, Tim Sweeney & Leaf Petersen (2013). Formalisation of the λℵ Runtime. arXiv. References Sources Further reading External links Tim Sweeney's profile on the Academy of Interactive Arts & Sciences BAFTA Celebrates: Epic Games – Interview with Tim Sweeney 1970 births American technology chief executives American technology company founders American video game designers Living people Businesspeople from Raleigh, North Carolina People from Potomac, Maryland University of Maryland, College Park alumni Epic Games Academy of Interactive Arts & Sciences Hall of Fame inductees American video game programmers American billionaires Game Developers Conference Lifetime Achievement Award recipients American conservationists
11418059
https://en.wikipedia.org/wiki/Law%20practice%20management
Law practice management
Law practice management (LPM) is the management of a law practice. In the United States, law firms may be composed of a single attorney, of several attorneys, or of many attorneys, plus support staff such as paralegals/legal assistants, secretaries (including legal secretaries), and other personnel. Debate over law as a profession versus a business has occurred for over a century; a number of observers believe that it is both. Law practice management is the study and practice of business administration in the legal context, including such topics as workload and staff management; financial management; office management; and marketing, including legal advertising. Many lawyers have commented on the difficulty of balancing the management functions of a law firm with client matters. History Lawyers have been practicing for centuries, but law firms as an institution date back to the 19th century, and in the United States began appearing in the period before the Civil War, predating the development of modern management theory. Today, the George Washington University College of Professional Studies (CPS) offers a Master of Professional Studies and Graduate Certificate in Law Firm Management. ABA Law Practice Division The leading organization focused on law practice management in the United States is the Law Practice Division of the American Bar Association, which traces its history back to the creation of the ABA Special Committee on Economics of Law Practice by the ABA Board of Governors on July 30, 1957. In August 1957, when Charles S. Rhyne became President of the ABA, he made one of his major objectives the institution of a "comprehensive program to aid members of the ABA in the field of economics of law practice". He appointed the first Committee, which consisted of five members and was increased to seven in May of the following year by action of the Board. The first Chair of the Special Committee was John C. Satterfield of Yazoo City, Mississippi. The Committee was charged with the duty of laying the groundwork for the development of practical suggestions to lawyers, designed to improve their economic status. Combined with this, there was to be an increase in coordination of assistance to lawyers in the business phase of the practice of law, achieved by the ABA through its staff, committees and sections and by the state and local bar associations. An early publication from the Committee was The 1958 Lawyer and His 1938 Dollar. Satterfield was elected President of the American Bar Association in 1961, during which time The Lawyers Handbook was first published and distributed to all attorneys who joined the ABA that year. By action of the Board of Governors at the ABA Annual Meeting in August 1961, the Special Committee was made a standing committee of the Association, and Lewis F. Powell of Richmond, Virginia, was appointed as the first Chair of the ABA Standing Committee on Economics of Law Practice. Shortly after the completion of his term as Chair in 1962, he was elected President of the American Bar Association and subsequently became an Associate Justice of the United States Supreme Court in 1972. The Standing Committee on Economics of Law Practice published a bimonthly newsletter, Legal Economics News, and more than 30 books and pamphlets, three educational films, and an audio cassette program. The Committee continued to publish The Lawyer's Handbook.
The Committee's staff answered over one hundred inquiries a month from attorneys regarding the application of sound management principles to law office operations. In addition, a small group of attorneys led by J. Harris Morgan of Greenville, Texas, Kline Strong of Salt Lake City, Utah, Lee Turner of Great Bend, Kansas, and Jimmy Brill of Houston, Texas, traveled throughout the US, presenting programs on law firm management. Their efforts created the demand that led to the Section. Commencing in 1965, when John D. Connor served as Chair, the Committee presented the first of six National Conferences of Law Office Economics and Management in Chicago, which attracted approximately 500 lawyers from throughout the country and several foreign countries. As activities expanded and lawyer interest in law office management increased, it became apparent that the committee structure could not meet the demonstrated need of American lawyers for assistance in law practice issues and limited the participation and contribution of interested and informed lawyers in the vital economics and efficiency programs of the Association. Accordingly, two members of the Committee, Robert S. Mucklestone of Seattle, Washington, the former Chair of the Young Lawyers Section, and Richard A. Williams of Little Rock, Arkansas, were joined by William J. Fuchs of Haverford, Pennsylvania; John "Buddy" Thomason of Memphis, Tennessee; and Robert P. Wilkins of Columbia, South Carolina, and commenced efforts to form a section to address the subject of law office economics and management. Proponents for a new Section originally proposed that the Board of Governors recommend to the House the creation of a Section of Law Office Practice and Efficiency, but after deliberation it was determined that the new Section should carry the Standing Committee's name. At the ABA Midyear Meeting in Houston, Texas in February 1974, the House of Delegates approved establishing the Section of Economics of Law Practice. This action culminated a two-year effort to expand the Committee's work to a much wider lawyer population. The organizational meeting of the Section was held in April 1974 at the close of the Sixth National Conference of Law Office Economics and Management. Robert S. Mucklestone of Seattle, Washington, who had served as the Chair of the ABA Committee on Economics of Law Practice, was elected the new Section chair, with 1,074 charter members. Those chosen to serve on the initial Section Council were selected from the members of the ABA Committee on Economics of Law Practice and the Committee on Legal Assistants, other ABA contacts, speakers from the National Conferences, and attorneys active at the state level. The current chair of the American Bar Association Law Practice Division is Tom Bolt, a St. Thomas, U.S. Virgin Islands lawyer with BoltNagi PC. Other organizations and consultants The Association of Legal Administrators (ALA), a professional association founded in 1971, is another organization concerned with law practice management. A large number of law practice consulting firms also exist. Many bar associations have a law practice section or division, to which they admit non-attorney members due to the technical, non-legal nature of law office management.
Elements of law-practice management Law practice management includes management of people (clients, staff, vendors), workplace facilities and equipment, internal processes and policies, and financial matters such as collection, budgeting, financial controls, payroll, and client trust accounts. Software and legal research Software applications have become increasingly important in modern law practice. Picking the best software for a law office depends on many variables. Practice management software, a form of customer relationship management software, is among the most important, and features and functions of such management software often include case management (databases, conflict of interest checking, statute of limitations checking), time tracking, billing, document storage, document assembly, task management, contact management, and calendaring and docketing. Other software used includes password security, disk encryption, mindmapping, desktop notes, word processing, and email management. Some firms use modified versions of open source software. Most law firms also subscribe to a computer-assisted legal research database for legal research. Such databases provide case law from case reporters, and often other legal resources. The two largest legal databases are Westlaw (part of West, which is owned by Thomson Reuters) and LexisNexis, but other databases also exist, such as the free Google Scholar, and the newer Bloomberg Law, as well as Loislaw (operated by Wolters Kluwer) and several smaller databases. Document automation tools allow any lawyer to create their own workflows, in the same way that companies like LegalZoom and RocketLawyer offer standardized automated document production for individuals and small businesses. Some bar associations and lawyers' organizations have their own software; for example, the American Academy of Estate Planning Attorneys' CounselPro program is designed for estate planning lawyers and assists in producing wills, trusts, and other legal documents, as well as other documents such as thank-you letters. Firm personnel Human resource management (managing personnel) is an important aspect of law practice management, and many books and other resources offer advice to firms on this topic. Law firms often employ a number of non-legal personnel or support staff; according to one figure, the average attorney to non-attorney ratio is 1 to 1.3. Many firms and other organizations employ a professional non-attorney legal administrator, or law firm administrator, to manage non-attorney personnel and the administrative aspects of the firm. The professional association for legal administrators is the Association of Legal Administrators (ALA), founded in 1971. Over the past two decades, the role of legal administrator has changed as duties have expanded and become more complex, and as more firms have hired administrators; the ALA grew from fewer than a thousand members in 1976 to over 8,000 in 1995. According to the ALA, in 2007 some 76 percent of legal administrators were women in their 40s and 50s. The main duties of legal administrators are the financial, operational, and human resource management of the firm. A legal administrator is similar to an office manager or executive director, but often with some expanded duties.
Depending on its size, needs, and type, a law firm may employ a separate database manager, network administrator, marketing director, computer systems or information technology manager, bookkeeper, accounts payable and accounts receivable clerk, and others. See also Alternative fee arrangements Attorney's fee Best practice Book of business (law) Contingency fee Document review Interest on Lawyer Trust Accounts (IOLTA) Law Practice Manager Legal ethics Retainer agreement Sole practitioner (lawyer) Trial practice References Further reading External links Law Practice Management Section of the American Bar Association - includes articles from the section's publications: Law Practice bimonthly magazine, Law Practice Today monthly webzine, LawPractice.news bimonthly newsletter, and ABA Women Rainmakers monthly newsletter Practice of law Management by type
75433
https://en.wikipedia.org/wiki/Windows%2098
Windows 98
Windows 98 is a consumer-oriented operating system developed by Microsoft as part of its Windows 9x family of Microsoft Windows operating systems. The second operating system in the 9x line, it is the successor to Windows 95, and was released to manufacturing on May 15, 1998, and generally to retail on June 25, 1998. Like its predecessor, it is a hybrid 16-bit and 32-bit monolithic product with the boot stage based on MS-DOS. Windows 98 is a heavily web-integrated operating system that bears numerous similarities to its predecessor and relies heavily on HTML. Most of its improvements were cosmetic or designed to improve the user experience, but there were also a handful of features introduced to enhance system functionality and capabilities, including improved USB support and accessibility, as well as support for hardware advancements such as DVD players. Windows 98 was the first edition of Windows to adopt the Windows Driver Model, and introduced features that would become standard in future generations of Windows, such as Disk Cleanup, Windows Update, multi-monitor support, and Internet Connection Sharing. Microsoft had marketed Windows 98 as a "tune-up" to Windows 95, rather than an entirely improved next generation of Windows. Upon release, it was generally well received for its web-integrated interface and ease of use, as well as its addressing of issues present in Windows 95, although some pointed out that it was not significantly more stable than its predecessor. Windows 98 sold an estimated 58 million licenses, and saw one major update, known as Windows 98 Second Edition (SE), released on May 5, 1999. After the release of its successor, Windows Me in 2000, mainstream support for Windows 98 and 98 SE ended on June 30, 2002, followed by the end of extended support on July 11, 2006. Development Following the success of Windows 95, development of Windows 98 began, initially under the development codename "Memphis." The first test version, Windows Memphis Developer Release, was released in January 1997. Memphis first entered beta as Windows Memphis Beta 1, released on June 30, 1997. It was followed by Windows 98 Beta 2, which dropped the Memphis name and was released in July. Microsoft had planned a full release of Windows 98 for the first quarter of 1998, along with a Windows 98 upgrade pack for Windows 95, but it also had a similar upgrade for Windows 3.x operating systems planned for the second quarter. Stacey Breyfogle, a product manager for Microsoft, explained that the upgrade for Windows 3 was scheduled later because it required more testing than that for Windows 95, owing to more compatibility issues; as users raised no objections, Microsoft merged the two upgrade packs into one and set both release dates to the second quarter. On December 15, Microsoft released Windows 98 Beta 3. It was the first build able to upgrade from Windows 3.1x, and introduced new startup and shutdown sounds. Near its completion, Windows 98 was released as Windows 98 Release Candidate on April 3, 1998, which expired on December 31. This coincided with a notable press demonstration at COMDEX that month. Microsoft CEO Bill Gates was highlighting the operating system's ease of use and enhanced support for Plug and Play (PnP). However, when presentation assistant Chris Capossela plugged a USB scanner in, the operating system crashed, displaying a Blue Screen of Death.
After derisive applause and cheering from the audience, Bill Gates remarked, "That must be why we're not shipping Windows 98 yet." Video footage of this event became a popular Internet phenomenon. Microsoft had quietly marketed the operating system as a "tune-up" to Windows 95. The final build was compiled as Windows 98 on May 11, 1998, before being fully released to manufacturing on May 15. The company was facing pending legal action for allowing free downloads of, and planning to ship Windows licenses with, Internet Explorer 4.0 in an alleged effort to expand its software monopoly. Microsoft's critics believed the lawsuit would further delay Windows 98's public release; it did not, and the operating system was released on June 25, 1998. A second major version of the operating system, called Windows 98 Second Edition, was later unveiled in March 1999. Microsoft compiled its final build on April 23, 1999, before publicly releasing it on May 5, 1999. Windows 98 was to be the final product in the Windows 9x line until Microsoft briefly revived the line to release Windows Me in 2000 as the final Windows 9x product before the introduction of Windows XP in 2001. New and updated features Web integration and shell enhancements The first release of Windows 98 included Internet Explorer 4.01. This was updated to 5.0 in the Second Edition. Besides Internet Explorer, many other Internet companion applications are included, such as Outlook Express, Windows Address Book, FrontPage Express, Microsoft Chat, Personal Web Server and a Web Publishing Wizard, and NetShow. NetMeeting allows multiple users to hold conference calls and work with each other on a document. The Windows 98 shell is web-integrated; it contains deskbands, Active Desktop, Channels, the ability to minimize foreground windows by clicking their button on the taskbar, single-click launching, Back and Forward navigation buttons, favorites, and address bar in Windows Explorer, image thumbnails, folder infotips and Web view in folders, and folder customization through HTML-based templates. The taskbar supports customizable toolbars designed to speed up access to the Web or the user's desktop; these toolbars include an Address Bar and Quick Launch. With the Address Bar, the user accesses the Web by typing in a URL, and Quick Launch contains shortcuts or buttons that perform system functions such as switching between windows and the desktop with the Show Desktop button. Another feature of this new shell is that dialog boxes show up in the Alt-Tab sequence. Windows 98 also integrates shell enhancements, themes and other features from Microsoft Plus! for Windows 95, such as DriveSpace 3, Compression Agent, Dial-Up Networking Server, Dial-Up Scripting Tool and Task Scheduler. 3D Pinball Space Cadet is included on the CD-ROM, but not installed by default. Windows 98 had its own separately purchasable Plus! pack, called Plus! 98. Title bars of windows and dialog boxes support two-color gradients, a feature ported from Microsoft Office 95 and refined. Windows menus and tooltips support slide animation. Windows Explorer in Windows 98, as in Windows 95, converts all-uppercase filenames to sentence case for readability purposes; however, it also provides an option, Allow all uppercase names, to display them in their original case. Windows Explorer includes support for compressed CAB files. The Quick Res and Telephony Location Manager Windows 95 PowerToys are integrated into the core operating system.
Improvements to hardware support Windows Driver Model Windows 98 was the first operating system to use the Windows Driver Model (WDM). This fact was not well publicized when Windows 98 was released, and most hardware producers continued to develop drivers for the older VxD driver standard, which Windows 98 supported for compatibility's sake. The WDM standard only achieved widespread adoption years later, mostly through Windows 2000 and Windows XP, as they were not compatible with the older VxD standard. With the Windows Driver Model, developers could write drivers that were compatible with other versions of Windows. Device driver access in WDM is actually implemented through a VxD device driver, NTKERN.VXD, which implements several Windows NT-specific kernel support functions. Support for WDM audio enables digital mixing, routing and processing of simultaneous audio streams and kernel streaming with high-quality sample rate conversion on Windows 98. WDM Audio allows for software emulation of legacy hardware to support MS-DOS games, DirectSound support and MIDI wavetable synthesis. The Windows 95 11-device limitation for MIDI devices is eliminated. A Microsoft GS Wavetable Synthesizer licensed from Roland shipped with Windows 98 for WDM audio drivers. Windows 98 supports digital playback of audio CDs, and the Second Edition improves WDM audio support by adding DirectSound hardware mixing and DirectSound 3D hardware abstraction, DirectMusic kernel support, KMixer sample-rate conversion for capture streams and multichannel audio support. All audio is resampled by the Kernel Mixer to a fixed sampling rate, which may result in some audio being upsampled or downsampled and incurring high latency, except when using Kernel Streaming or third-party audio paths such as ASIO, which allow unmixed audio streams and lower latency. Windows 98 also includes a WDM streaming class driver (Stream.sys) to address real-time multimedia data stream processing requirements, and a WDM kernel-mode video transport for enhanced video playback and capture. The Windows Driver Model also includes Broadcast Driver Architecture, the backbone for TV technologies support in Windows. WebTV for Windows utilized BDA to allow viewing television on the computer if a compatible TV tuner card was installed. TV listings could be updated from the Internet, and WaveTop Data Broadcasting allowed extra data about broadcasts to be received via regular television signals using an antenna or cable, by embedding data streams into the vertical blanking interval portion of existing broadcast television signals. Other device support improvements Windows 98 had more robust USB support than Windows 95, which offered USB support only in the OEM versions OSR2.1 and later. Windows 98 supports USB hubs, USB scanners and imaging class devices. Windows 98 also introduced built-in support for some USB Human Interface Device class (USB HID) and PID class devices, such as USB mice, keyboards and force-feedback joysticks, including additional keyboard functions through a number of Consumer Page HID controls. Windows 98 introduced ACPI 1.0 support, which enabled Standby and Hibernate states. However, hibernation support was extremely limited and vendor-specific: it was available only if compatible (PnP) hardware and BIOS were present and the hardware manufacturer or OEM supplied compatible WDM (non-VxD) drivers. Furthermore, issues with the FAT32 file system made hibernation problematic and unreliable.
Windows 98, in general, provides improved and broader support for IDE and SCSI drives and drive controllers, floppy drive controllers and all other classes of hardware as compared to Windows 95. There is integrated Accelerated Graphics Port (AGP) support (although the USB Supplement to Windows 95 OSR2 and later releases of Windows 95 did have AGP support). Windows 98 has built-in DVD support and UDF 1.02 read support. The Still imaging architecture (STI) with TWAIN support was introduced for scanners and cameras, and Image Color Management 2.0 allows devices to perform color space transformations. Multiple monitor support allows using up to nine monitors on a single PC, with the feature requiring one PCI graphics adapter per monitor. Windows 98 shipped with DirectX 5.2, which notably included DirectShow. Windows 98 Second Edition would later ship with DirectX 6.1. Networking enhancements Windows 98 networking enhancements to TCP/IP include built-in support for Winsock 2 (see the sketch later in this section), SMB signing, a new IP Helper API, Automatic Private IP Addressing (also known as link-local addressing), IP multicasting, and performance enhancements for high-speed, high-bandwidth networks. Multihoming support with TCP/IP is improved and includes RIP listener support. The DHCP client has been enhanced to include address assignment conflict detection and longer timeout intervals. The NetBT configuration of the WINS client has been improved: if the initial session cannot be established, the client persistently queries the remaining WINS servers until either all of the specified servers have been queried or a connection is established. Network Driver Interface Specification 5 support means Windows 98 can support a wide range of network media, including Ethernet, Fiber Distributed Data Interface (FDDI), Token Ring, Asynchronous Transfer Mode (ATM), ISDN, wide area networks, X.25, and Frame Relay. Additional features include NDIS power management, support for quality of service, Windows Management Instrumentation (WMI) and support for a single INF file format across all Windows versions. Windows 98 Dial-Up Networking supports PPTP tunneling, ISDN adapters, multilink, and connection-time scripting to automate non-standard login connections. Multilink channel aggregation enables users to combine all available dial-up lines to achieve higher transfer speeds. PPP connection logs can show the actual packets being passed, and Windows 98 allows PPP logging per connection. The Dial-Up Networking improvements are also available in Windows 95 OSR2 and are downloadable for earlier Windows 95 releases. For networked computers that have user profiles enabled, Windows 98 introduces Microsoft Family Logon, which lists all users that have been configured for that computer, enabling users to simply select their names from a list rather than having to type them in. Windows 98 supports IrDA 3.0, which specifies both Serial Infrared Devices and Fast Infrared devices; the latter are capable of sending and receiving data at 4 Mbit/s. Infrared Recipient, a new application for transferring files through an infrared connection, is included. The IrDA stack in Windows 98 supports networking profiles over the IrCOMM kernel-mode driver. Windows 98 also has built-in support for browsing Distributed File System trees on Server Message Block shares such as Windows NT servers. UPnP and NAT traversal APIs can be installed on Windows 98 by installing the Windows XP Network Setup Wizard. An L2TP/IPsec VPN client can also be downloaded.
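Winsock 2 version negotiation is visible to ordinary application code: a program asks WSAStartup for the highest version it can use and then checks what the system actually provides. The following is a minimal sketch written against the standard Winsock 2 API, not code taken from Windows 98 itself (link with ws2_32.lib):

```c
#include <winsock2.h>
#include <stdio.h>

int main(void) {
    WSADATA wsa;

    /* Request Winsock 2.2; the stack negotiates down to the highest
       version it supports (a Winsock 1.1-only stack, as on Windows 95
       without the Winsock 2 update, would report 1.1 here). */
    if (WSAStartup(MAKEWORD(2, 2), &wsa) != 0) {
        fprintf(stderr, "Winsock is unavailable\n");
        return 1;
    }
    printf("Negotiated Winsock %d.%d\n",
           LOBYTE(wsa.wVersion), HIBYTE(wsa.wVersion));
    WSACleanup();
    return 0;
}
```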
By installing Active Directory Client Extensions, Windows 98 can take advantage of several Windows 2000 Active Directory features. Improvements to the system and built-in utilities Performance improvements Windows 95 introduced the 32-bit, protected-mode cache driver VCACHE (replacing SMARTDrv) to cache the most recently accessed information from the hard drive in memory, divided into chunks. However, the cache parameters needed manual tuning, as VCACHE could degrade performance by consuming too much memory and not releasing it quickly enough, forcing paging to occur far too early. In Windows 98, VCACHE manages cache size for disk and network access, CD-ROM access and paging more dynamically than in Windows 95, so no tuning of the cache parameters is required. On the FAT32 file system, Windows 98 has a performance feature called MapCache that can run applications from the disk cache itself, instead of copying their code to virtual memory, if the code pages of the executable files are aligned/mapped on 4K boundaries (a sketch that inspects an executable's alignment appears at the end of this section). This results in more memory being available to run applications and less use of the swap file. Windows 98 registry handling is more robust than that of Windows 95, to avoid corruption, and there are several enhancements to eliminate limitations and improve registry performance. The Windows 95 registry key size limitation of 64 KB is gone. The registry uses less memory and has better caching. Disk Defragmenter has been improved to rearrange program files that are frequently used to a hard disk region optimized for program start. The aggravating "Drive contents changed....restarting." message will still frequently appear in this version. If it gets stuck on the same area too many times, it will ask the user whether it should keep trying or give up. However, the Disk Defragmenter from Windows Me does not have this problem and will function on Windows 98 if the user copies it over. Windows 98 also supports a Fast Shutdown feature that initiates shutdown without uninitializing device drivers. However, this can cause Windows 98 to hang instead of shutting down the computer if a buggy driver is active, so Microsoft supplied instructions for disabling the feature. Windows 98 supports write-behind caching for removable disk drives. A utility for converting FAT16 partitions to FAT32 without formatting the partition is also included. Other system tools A number of improvements are made to various other system tools and accessories in Windows 98. Microsoft Backup supports differential backup and SCSI tape devices in Windows 98. Disk Cleanup, a new tool, enables users to clear their disks of unnecessary files. Cleanup locations are extensible through Disk Cleanup handlers. Disk Cleanup can be automated for regular silent cleanups. Scanreg (DOS) and ScanRegW are Registry Checker tools used to back up, restore or optimize the Windows registry. ScanRegW tests the registry's integrity and saves a backup copy each time Windows successfully boots. The maximum number of copies can be customized by the user through the "scanreg.ini" file. Restoring a registry that causes Windows to fail to boot can only be done from DOS mode using ScanReg. System Configuration Utility is a new system utility used to disable programs and services that are not required to run the computer. A Maintenance Wizard is included that schedules and automates ScanDisk, Disk Defragmenter and Disk Cleanup. Windows Script Host, with VBScript and JScript engines, is built in and upgradeable to version 5.6.
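The alignment condition that MapCache depends on can be read from the headers of any Portable Executable file. The following is an illustrative sketch, not a Microsoft tool; the IMAGE_* structures come from the Windows SDK's winnt.h, and error handling is minimal:

```c
#include <windows.h>
#include <stdio.h>

int main(int argc, char **argv) {
    FILE *f;
    IMAGE_DOS_HEADER dos;
    IMAGE_NT_HEADERS nt;

    if (argc != 2) {
        fprintf(stderr, "usage: %s file.exe\n", argv[0]);
        return 1;
    }
    f = fopen(argv[1], "rb");
    if (!f) { perror("fopen"); return 1; }

    /* The DOS stub header stores the offset of the PE headers. */
    if (fread(&dos, sizeof dos, 1, f) != 1 || dos.e_magic != IMAGE_DOS_SIGNATURE ||
        fseek(f, dos.e_lfanew, SEEK_SET) != 0 ||
        fread(&nt, sizeof nt, 1, f) != 1 || nt.Signature != IMAGE_NT_SIGNATURE) {
        fprintf(stderr, "not a PE executable\n");
        fclose(f);
        return 1;
    }
    printf("FileAlignment:    %lu bytes\n", (unsigned long)nt.OptionalHeader.FileAlignment);
    printf("SectionAlignment: %lu bytes\n", (unsigned long)nt.OptionalHeader.SectionAlignment);
    printf("4K-aligned on disk: %s\n",
           nt.OptionalHeader.FileAlignment % 4096 == 0 ? "yes" : "no");
    fclose(f);
    return 0;
}
```

Contemporary tool chains exposed this alignment as a build setting; the Visual C++ 6.0 linker's /OPT:WIN98 option, for example, aligned sections on 4 KB boundaries for this reason.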
System File Checker checks installed versions of system files to ensure they are the same versions as those installed with Windows 98, or newer. Corrupt or older versions are replaced by the correct versions. This tool was introduced to resolve the DLL hell issue and was replaced in Windows Me by System File Protection. Windows 98 Setup simplifies installation, reducing the bulk of user input required. The Windows 98 Startup Disk contains generic, real-mode ATAPI and SCSI CD-ROM drivers that can be used in the event that the specific driver for a CD-ROM drive is unavailable. The system could be updated using Windows Update. A utility to automatically notify the user of critical updates was later released. Windows 98 includes an improved version of the Dr. Watson utility that collects and lists comprehensive information such as running tasks, startup programs with their command-line switches, system patches, kernel drivers, user drivers, DOS drivers and 16-bit modules. With Dr. Watson loaded in the system tray, whenever a software fault occurs (general protection fault, hang, etc.), Dr. Watson intercepts it and indicates what software crashed and its cause. Windows Report Tool takes a snapshot of the system configuration and lets users submit a manual problem report along with system information to technicians. It has e-mail confirmation for submitted reports. Accessories Windows 98 includes Microsoft Magnifier, Accessibility Wizard and the Microsoft Active Accessibility 1.1 API (upgradeable to MSAA 2.0). A new HTML Help system with 15 Troubleshooting Wizards was introduced to replace WinHelp. Users can configure the font in Notepad. Microsoft Paint supports GIF transparency. HyperTerminal supports a TCP/IP connection method, which allows it to be used as a Telnet client. Imaging for Windows is updated. System Monitor, used to track the performance of hardware and software, supports output to a log file. Miscellaneous improvements
Telephony API (TAPI) 2.1
DCOM version 1.2
Ability to list fonts by similarity, determined using PANOSE information
Tools to automate setup, such as Batch 98 and INFInst.exe, which support error-checking, gathering information automatically to create an INF file directly from a machine's registry, customizing IE4, shell and desktop settings, and adding custom drivers; several other Resource Kit tools are included on the Windows 98 CD
New system event sounds for the low battery alarm and critical battery alarm; the new startup sound for Windows 98 was composed by Microsoft sound engineer Ken Kato, who considered it to be a "tough act to follow"
Flash Player and Shockwave Player preinstalled
Windows 98 Second Edition Windows 98 Second Edition (often shortened to Windows 98 SE and sometimes to Win98 SE) is an updated version of Windows 98, released on May 5, 1999, nine months before the release of Windows 2000. It includes many bug fixes, improved WDM audio and modem support, improved USB support, the replacement of Internet Explorer 4.0 with Internet Explorer 5.0, Web Folders (WebDAV namespace extension for Windows Explorer), and related shell updates. Also included is basic OHCI-compliant FireWire DV camcorder support (MSDV class driver) and SBP-2 support for mass storage class devices. Wake-on-LAN support wakes suspended networked computers when network activity is detected, and Internet Connection Sharing allows multiple networked client computers to share an Internet connection via a single host computer (a conceptual sketch of the underlying address translation follows below).
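Internet Connection Sharing is, at bottom, a network address and port translation (NAPT) gateway. The sketch below illustrates the idea only; the mapping logic and addresses (RFC 1918 private range, RFC 5737 documentation range) are hypothetical and not taken from the ICS implementation:

```c
#include <stdio.h>
#include <stdint.h>

/* Conceptual NAPT: outbound packets from private hosts are rewritten
   to the gateway's single public address, and a table entry records
   how to reverse the mapping for replies. */
struct nat_entry {
    uint32_t private_ip;
    uint16_t private_port;
    uint16_t public_port; /* port on the shared public address */
};

#define PUBLIC_IP 0xC0000201u /* 192.0.2.1, the sharing host's address */

static uint16_t next_port = 40000;

/* Allocate a public port for an outbound flow and record the mapping. */
static struct nat_entry translate_out(uint32_t src_ip, uint16_t src_port) {
    struct nat_entry e = { src_ip, src_port, next_port++ };
    printf("out: %08x:%u -> appears as %08x:%u\n",
           (unsigned)src_ip, (unsigned)src_port,
           (unsigned)PUBLIC_IP, (unsigned)e.public_port);
    return e;
}

/* Map a reply arriving on a public port back to the private host. */
static void translate_in(const struct nat_entry *e) {
    printf("in:  %08x:%u -> delivered to %08x:%u\n",
           (unsigned)PUBLIC_IP, (unsigned)e->public_port,
           (unsigned)e->private_ip, (unsigned)e->private_port);
}

int main(void) {
    /* 192.168.0.10 (0xC0A8000A) opens a connection from port 1025. */
    struct nat_entry e = translate_out(0xC0A8000Au, 1025);
    translate_in(&e);
    return 0;
}
```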
Other features in the update include DirectX 6.1, which introduced major improvements to DirectSound and the introduction of DirectMusic; improvements to Asynchronous Transfer Mode support (IP/ATM, PPP/ATM and WinSock 2/ATM support); Windows Media Player 6.1, replacing the older Media Player; Microsoft NetMeeting 3.0; MDAC 2.1; and WMI. A memory overflow issue was resolved which, in the older version of Windows 98, would crash most systems if they were left running for 49.7 days, the point at which an internal millisecond counter overflows (2^32 milliseconds, or 4,294,967,296 ms, is approximately 49.71 days). Windows 98 SE could be obtained as retail upgrade and full version packages, as well as OEM versions and a Second Edition Updates Disc for existing Windows 98 users. USB audio device class support is present from Windows 98 SE onwards. Windows 98 Second Edition improved WDM support in general for all devices, and it introduced WDM support for modems (and therefore USB modems and virtual COM ports). However, Microsoft driver support for both USB printers and the USB mass-storage device class is not available in Windows 98. Removed features Windows 98 Second Edition did not ship with the WinG API or RealPlayer 4.0, unlike the original release of Windows 98, as both had been superseded by DirectX and Windows Media Player, respectively. Upgradeability Several components of both Windows 98 and Windows 98 Second Edition can be updated to newer versions. These include:
Internet Explorer 6 SP1 and Outlook Express 6 SP1
Windows Media Format Runtime and Windows Media Player 9 Series on Windows 98 Second Edition (and Windows Media Player 7.1 on the original release of Windows 98)
Windows Media Encoder 7.1 and Windows Media 8 Encoding Utility
DirectX 9.0c (the last compatible runtime is from October 2007)
MSN Messenger 7.0
Significant features from newer Microsoft operating systems can be installed on Windows 98. Chief among them are .NET Framework versions 1.0, 1.1 and 2.0, the Visual C++ 2005 runtime, Windows Installer 2.0, the GDI+ redistributable library, Remote Desktop Connection client 5.2 and the Text Services Framework. Several other components can also be installed, such as MSXML 3.0 SP7, Microsoft Agent 2.0, NetMeeting 3.01, MSAA 2.0, ActiveSync 3.8, WSH 5.6, Microsoft Data Access Components 2.81 SP1, WMI 1.5 and Speech API 4.0. Office XP is the last version of Microsoft Office that is compatible with Windows 98. Although Windows 98 does not fully support Unicode, certain Unicode applications can run if the Microsoft Layer for Unicode is installed. System requirements Both major versions of Windows 98 have minimum system requirements that must be met for them to run. Users can bypass processor requirement checks with the undocumented /NM setup switch, which allows installation on computers with processors as old as the Intel 80386. Limitations Windows 98 is only designed to handle up to 1 GB of RAM without changes. Both Windows 98 and Windows 98 Second Edition have problems running on hard drives of capacities larger than 32 GB in systems with certain Phoenix BIOS configurations; a software update fixed this shortcoming. In addition, until Windows XP with Service Pack 1, Windows was unable to handle hard drives over 137 GB in size with the default drivers because of missing 48-bit Logical Block Addressing support (with 28-bit LBA, at most 2^28 sectors of 512 bytes each, roughly 137.4 GB, can be addressed). Support lifecycle Support for Windows 98 under Microsoft's consumer product life cycle policy was planned to end on June 30, 2003; however, in December 2002, Microsoft extended the support window to January 16, 2004.
This date was extended again on January 13, 2004, to a final end-of-support date of July 11, 2006, with Microsoft citing support volumes in emerging markets as the reason for the extension. Windows 98 retail availability ended as planned on June 30, 2002, and the operating system later became completely unavailable from Microsoft (through MSDN or otherwise) in any form due to the terms of Java-related settlements Microsoft made with Sun Microsystems. The Windows Update website continued to be available after Windows 98's end-of-support date; however, during 2011, Microsoft retired the Windows Update v4 website and removed the updates for Windows 98 and Windows 98 SE from its servers. Reception Windows 98 was released to generally favorable reviews, with praise directed at its improved graphical user interface and customizability, its ease of use, and the degree to which it addressed complaints that users and critics had with Windows 95. Michael Sweet of Smart Computing characterized it as heavily integrating features of the Internet browser and found file and folder navigation easier. Ed Bott of PC Computing lauded the bug fixes, easier troubleshooting, and support for hardware advances such as DVD players and USB. However, he also found that the operating system crashed only slightly less frequently, and he criticized the high upgrade price and system requirements. He rated it four stars out of five. Sales Windows 98 sold 530,000 licenses in its first four days of availability, overtaking Windows 95's 510,000. It went on to sell 580,000 and 350,000 licenses in the first and second months of availability, respectively. In the first year of its release, Windows 98 sold a total of 15 million licenses, 2 million more than its predecessor. However, International Data Corporation estimated that, of the roughly 89 million computers shipped in the desktop market, the operating system had a market share of 17.2 percent, compared to Windows 95's 57.4 percent. Sales trends nonetheless continued to favor Windows 98, whose performance improved while Windows 95's dwindled. After a legal dispute and subsequent settlement with Sun Microsystems over the former's Java Virtual Machine, Microsoft ceased distributing the operating system on December 15, 2003, and IDC estimated that a total of 58 million copies had been installed worldwide by then. References Further reading External links "Windows 98." – Microsoft (Archive) GUIdebook: Windows 98 Gallery – A website dedicated to preserving and showcasing Graphical User Interfaces 1998 software 1999 software Products and services discontinued in 2006 98 DOS variants IA-32 operating systems
41411
https://en.wikipedia.org/wiki/Network%20operating%20system
Network operating system
A network operating system (NOS) is a specialized operating system for a network device such as a router, switch or firewall. Historically, operating systems with networking capabilities were described as network operating systems because they allowed personal computers (PCs) to participate in computer networks and shared file and printer access within a local area network (LAN). This description of operating systems is now largely historical, as common operating systems include a network stack to support a client–server model. History Early microcomputer operating systems such as CP/M, MS-DOS and classic Mac OS were designed for one user on one computer. Packet switching networks were developed to share hardware resources, such as a mainframe computer, a printer or a large and expensive hard disk. As local area network technology became available, two general approaches to handle sharing of resources on networks arose. Historically, a network operating system was an operating system for a computer which implemented network capabilities. Operating systems with a network stack allowed personal computers to participate in a client–server architecture, in which a server enables multiple clients to share resources, such as printers. Early examples of client–server operating systems that shipped with fully integrated network capabilities are Novell NetWare, using the Internetwork Packet Exchange (IPX) network protocol, and Banyan VINES, which used a variant of the Xerox Network Systems (XNS) protocols. These limited client–server networks were gradually replaced by peer-to-peer networks, which used networking capabilities to share resources and files located on a variety of computers of all sizes. A peer-to-peer network sets all connected computers equal; they all share the same abilities to use resources available on the network. The most popular peer-to-peer networks as of 2020 are Ethernet, Wi-Fi and the Internet protocol suite. Software that allowed users to interact with these networks, despite a lack of networking support in the underlying manufacturer's operating system, was sometimes called a network operating system. Examples of such add-on software include Phil Karn's KA9Q NOS (adding Internet support to CP/M and MS-DOS), PC/TCP Packet Drivers (adding Ethernet and Internet support to MS-DOS), LANtastic (for MS-DOS, Microsoft Windows and OS/2), and Windows for Workgroups (adding NetBIOS to Windows). Examples of early operating systems with peer-to-peer networking capabilities built in include MacOS (using AppleTalk and LocalTalk) and the Berkeley Software Distribution. Today, distributed computing and groupware applications have become the norm, and computer operating systems include a networking stack as a matter of course. During the 1980s, the need to integrate dissimilar computers with network capabilities grew, and the number of networked devices grew rapidly. Partly because it allowed for multi-vendor interoperability and could route packets globally rather than being restricted to a single building, the Internet protocol suite became almost universally adopted in network architectures. Thereafter, computer operating systems and the firmware of network devices tended to support Internet protocols. Network device operating systems Network operating systems can be embedded in a router or hardware firewall that operates the functions in the network layer (layer 3); the core of such a system is the per-packet forwarding decision, sketched below.
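The essential layer-3 operation is choosing a next hop by longest-prefix match against a routing table. The sketch below is illustrative only, with a hard-coded table and addresses drawn from the RFC 5737 documentation range; real network operating systems use far more efficient lookup structures:

```c
#include <stdio.h>
#include <stdint.h>

/* Minimal longest-prefix-match lookup, the core forwarding decision a
   layer-3 network operating system makes for every packet. */
struct route { uint32_t prefix; int len; const char *next_hop; };

static uint32_t mask(int len) { return len == 0 ? 0 : 0xFFFFFFFFu << (32 - len); }

static const char *lookup(uint32_t dst, const struct route *table, int n) {
    const char *best = "drop";
    int best_len = -1;
    for (int i = 0; i < n; i++)
        if ((dst & mask(table[i].len)) == table[i].prefix && table[i].len > best_len) {
            best = table[i].next_hop;
            best_len = table[i].len;
        }
    return best;
}

int main(void) {
    /* 192.0.2.0/24 via eth0; 0.0.0.0/0 (default route) via eth1. */
    struct route table[] = {
        { 0xC0000200u, 24, "eth0" },
        { 0x00000000u,  0, "eth1" },
    };
    uint32_t dst = 0xC0000205u; /* 192.0.2.5: matches the /24, not just the default */
    printf("192.0.2.5 -> %s\n", lookup(dst, table, 2));
    return 0;
}
```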
Notable network operating systems include:
Proprietary network operating systems:
Cisco IOS, a family of network operating systems used on most Cisco Systems routers and current Cisco network switches; earlier switches ran the Catalyst Operating System (CatOS)
RouterOS by MikroTik
ZyNOS, used in network devices made by ZyXEL
LCOS (LX/FX/SX), used in network devices made by LANCOM Systems
NetBSD-, FreeBSD-, or Linux-based operating systems:
DD-WRT, Linux-based firmware for wireless routers and access points as well as low-cost networking device platforms such as the Linksys WRT54G
Dell Networking Operating System: DNOS9 is NetBSD-based, while OS10 uses the Linux kernel
Extensible Operating System, which runs on switches from Arista and uses an unmodified Linux kernel
ExtremeXOS (EXOS), used in network devices made by Extreme Networks
FTOS, or Force10 Operating System, the firmware family used on Force10 Ethernet switches
OpenWrt, used to route IP packets on embedded devices
pfSense, a fork of M0n0wall, which uses PF
OPNsense, a fork of pfSense
SONiC, a Linux-based network operating system developed by Microsoft
Cumulus Linux distribution, which uses the full TCP/IP stack of Linux
VyOS, an open source fork of the Vyatta routing package
ONOS, an open source SDN operating system (hosted by The Linux Foundation) for communications service providers that is designed for scalability, high performance and high availability
See also Distributed operating system FRRouting Network Computer Operating System Network functions virtualization Operating System Projects Interruptible operating system SONiC (operating system) References External links Chapter 6 of Dr. Roy Winkelman's guide to networks The open-source NOS for Disaggregated Cell Site Gateways Operating systems Internet Protocol based network software
11353703
https://en.wikipedia.org/wiki/List%20of%20Jewish%20mathematicians
List of Jewish mathematicians
This list of Jewish mathematicians includes mathematicians and statisticians who are or were verifiably Jewish or of Jewish descent. In 1933, when the Nazis rose to power in Germany, one-third of all mathematics professors in the country were Jewish, while Jews constituted less than one percent of the population. Jewish mathematicians made major contributions throughout the 20th century and into the 21st, as is evidenced by their high representation among the winners of major mathematics awards: 27% for the Fields Medal, 30% for the Abel Prize, and 40% for the Wolf Prize. A Abner of Burgos (c. 1270 – c. 1347), mathematician and philosopher Abraham Abigdor (14th century), logician Milton Abramowitz (1915–1958), mathematician Samson Abramsky (born 1953), game semantics Amir Aczel (1950–2015), history of mathematics Georgy Adelson-Velsky (1922–2014), mathematician and computer scientist Abraham Adelstein (1916–1992), statistics Caleb Afendopolo (c. 1430 – c. 1499), mathematician, astronomer, poet, and rabbi Aaron Afia (16th century), mathematician, physician and philosopher Shmuel Agmon (born 1922), mathematical analysis and partial differential equations Matest Agrest (1915–2005), mathematician and pseudoscientist Ron Aharoni (born 1952), combinatorics Bendich Ahin (14th century), mathematician and physician Michael Aizenman (born 1945), mathematician and physicist Naum Akhiezer (1901–1980), approximation theory Isaac Albalia (1035–1094), mathematician, astronomer, and Talmudist Abraham Adrian Albert (1905–1972), algebra; Cole Prize (1939) Félix Alcan (1841–1925), mathematician Semyon Alesker (born 1972), convex and integral geometry; Erdős Prize (2004) Al-Samawal al-Maghribi (c. 1130 – c. 1180), mathematician, astronomer and physician Noga Alon (born 1956), combinatorics and theoretical computer science; Erdős Prize (1989), Pólya Prize (2000) Franz Alt (1910–2011), mathematician and computer scientist Shimshon Amitsur (1921–1994), mathematician Jacob Anatoli (c. 1194–1256), mathematician, scientist and translator Aldo Andreotti (1924–1980), mathematician Kenneth Appel (1932–2013), proved the four-color theorem Zvi Arad (1942–2018), mathematician Vladimir Arnold (1937–2010), mathematician; Wolf Prize (2001) Siegfried Aronhold (1819–1884), invariant theory Nachman Aronszajn (1907–1980), mathematical analysis and mathematical logic Kenneth Arrow (1921–2017), mathematician and economist; Nobel Prize in Economics (1972) Michael Artin (born 1934), algebraic geometry Emilio Artom (1888–1952), mathematician Giulio Ascoli (1843–1896), mathematician Guido Ascoli (1887–1957), mathematician Herman Auerbach (1901–1942), mathematician Robert Aumann (born 1930), mathematician and game theorist; Nobel Prize in Economics (2005) Louis Auslander (1928–1997), mathematician Maurice Auslander (1926–1994), algebra Hertha Ayrton (1854–1923), mathematician and engineer B Isaak Bacharach (1854–1942), mathematician Reinhold Baer (1902–1979), algebra Egon Balas (1922–2019), applied mathematics Yehoshua Bar-Hillel (1915–1975), mathematician, philosopher and linguist Abraham bar Hiyya (1070–1136 or 1145), mathematician, astronomer and philosopher Dror Bar-Natan (born 1966), knot theory and homology theory Ruth Barcan Marcus (1921–2012), logician Grigory Barenblatt (1927–2018), mathematician Valentine Bargmann (1908–1989), mathematician and theoretical physicist Elijah Bashyazi (c. 1420–1490), mathematician, astronomer, philosopher and rabbi Hyman Bass (born 1932), algebra and mathematics education; Cole Prize (1975) Laurence Baxter (1954–1996),
statistician August Beer (1825–1863), mathematician Alexander Beilinson (born 1957), mathematician; Wolf Prize (2018) Richard Bellman (1920–1984), applied mathematics Kalonymus ben Kalonymus (1286 – c. 1328), philosopher, mathematician and translator Isaac ben Moses Eli (15th century), mathematician Jacob ben Nissim (10th century), philosopher and mathematician Judah ben Solomon (c. 1215 – c. 1274), mathematician, astronomer, and philosopher Paul Benacerraf (born 1931), philosophy of mathematics Lazarus Bendavid (1762–1832), mathematician and philosopher Felix Berezin (1931–1980), mathematician and physicist Boris Berezovsky (1946–2013), mathematician and businessman Toby Berger (born 1940), information theory Stefan Bergman (1895–1977), complex analysis Paul Bernays (1888–1977), foundations of mathematics Benjamin Abram Bernstein (1881–1964), mathematical logic Dorothy Lewis Bernstein (1914–1988), applied mathematics Felix Bernstein (1878–1956), set theory Joseph Bernstein (born 1945), algebraic geometry, representation theory, and number theory Sergei Bernstein (1880–1968), mathematician Lipman Bers (1914–1993), mathematical analysis Ludwig Berwald (1883–1942), differential geometry Abram Besicovitch (1891–1970), mathematician (Karaite) Paul Biran (born 1969), symplectic and algebraic geometry; Erdős Prize (2006) Joan Birman (born 1927), topology Zygmunt Wilhelm Birnbaum (1903–2000), functional analysis and probability Max Black (1909–1988), philosopher of mathematics André Bloch (1893–1948), complex analysis Maurice Block (1816–1901), statistician Lenore Blum (born 1942), mathematician and computer scientist Leonard Blumenthal (1901–1984), mathematician Otto Blumenthal (1876–1944), mathematician Harald Bohr (1887–1951), almost periodic functions Vladimir Boltyansky (1925–2019), mathematician and educator Carl Borchardt (1817–1880), mathematical analysis Max Born (1882–1970), physicist and mathematician Moses Botarel Farissol (15th century), mathematician Salomon Bochner (1899–1982), mathematician; Steele Prize (1979) Hermann Bondi (1919–2005), mathematician Immanuel Bonfils (c. 1300–1377), mathematician and astronomer Valentina Borok (1931–2004), partial differential equations David Borwein (1924–2021), mathematician Jonathan Borwein (1951–2016), mathematician Peter Borwein (1953–2020), mathematician Raoul Bott (1923–2005), geometry; Steele Prize (1990) Victor Brailovsky (born 1935), mathematician and computer scientist Achi Brandt (born 1938), numerical analysis Nikolai Brashman (1796–1866), analytical geometry; Demidov Prize (1836) Alfred Brauer (1894–1985), number theory Richard Brauer (1901–1977), modular representation theory; Cole Prize (1949) Haïm Brezis (born 1944), functional analysis and partial differential equations Selig Brodetsky (1888–1954), mathematician and President of the Board of Deputies of British Jews Jacob Bronowski (1908–1974), mathematician and science educator Robert Brooks (1952–2002), complex analysis and differential geometry Felix Browder (1927–2016), nonlinear functional analysis William Browder (born 1934), topology and differential geometry Leonid Bunimovich (born 1947), dynamical systems Leone Burton (1936–2007), mathematics education Herbert Busemann (1905–1994), convex and differential geometry C Anneli Cahn Lax (1922–1999), mathematician Eugenio Calabi (born 1923), mathematician; Steele Prize (1991) Georg Cantor (1845–1918), set theorist Moritz Cantor (1829–1920), historian of mathematics Sylvain Cappell (born 1946), geometric topology Leonard Carlitz
(1907–1999), number theory and algebra Moshe Carmeli (1933–2007), mathematical physics Emma Castelnuovo (1913–2014), mathematics education Guido Castelnuovo (1865–1952), mathematician Wilhelm Cauer (1900–1945), mathematician Yair Censor (born 1943), computational mathematics and optimization Gregory Chaitin (born 1947), algorithmic information theory and metamathematics Herman Chernoff (born 1923), applied mathematics and statistics Alexey Chervonenkis (1938–2014), mathematician and computer scientist David Chudnovsky (born 1947), mathematician and engineer Gregory Chudnovsky (born 1952), mathematician and engineer Maria Chudnovsky (born 1977), graph theory and combinatorial optimization Henri Cohen (born 1947), number theory Irvin Cohen (1917–1955), mathematician Joel Cohen (born 1944), mathematical biology Marion Cohen (born 1943), poet and mathematician Miriam Cohen (born 1941), algebra Paul Cohen (1934–2007), set theorist; Fields Medal (1966) Ralph Cohen (born 1952), algebraic topology and differential topology Wim Cohen (1923–2000), queueing theory Paul Cohn (1924–2006), algebraist Stephan Cohn-Vossen (1902–1936), differential geometry Ronald Coifman (born 1941), mathematician Mordecai Comtino (died 1485), mathematician Lionel Cooper (1915–1979), mathematician Leo Corry (born 1956), history of mathematics Mischa Cotlar (1913–2007), mathematician Richard Courant (1888–1972), mathematical analysis and applied mathematics Nathan Court (1881–1968), geometer Michael Creizenach (1789–1842), mathematician and theologian Luigi Cremona (1830–1903), mathematician Alexander Crescenzi (17th century), mathematician D Noah Dana-Picard (born 1954), mathematician Henry Daniels (1912–2000), statistician David van Dantzig (1900–1959), topology George Dantzig (1914–2005), mathematical optimization Tobias Dantzig (1884–1956), mathematician Martin Davis (born 1928), mathematician Philip Dawid (born 1946), statistics Max Dehn (1878–1952), topology Percy Deift (born 1945), mathematician; Pólya Prize (1998) Nissan Deliatitz (19th century), mathematician Joseph Delmedigo (1591–1655), rabbi and mathematician Ely Devons (1913–1967), statistics Persi Diaconis (born 1945), mathematician and magician Samuel Dickstein (1851–1939), mathematician and pedagogue Nathan Divinsky (1925–2012), mathematician Roland Dobrushin (1929–1995), probability theory, mathematical physics and information theory Wolfgang Doeblin (1915–1940), probabilist Domninus of Larissa (c. 420 – c. 480 AD), mathematician Jesse Douglas (1897–1965), mathematician; Fields Medal (1936), Bôcher Prize (1943) Vladimir Drinfeld (born 1954), algebraic geometry; Fields Medal (1990), Wolf Prize (2018) Louis Israel Dublin (1882–1969), statistician Aryeh Dvoretzky (1916–2008), functional analysis and probability Bernard Dwork (1923–1998), mathematician; Cole Prize (1962) Harry Dym (born 1938), functional and numerical analysis Eugene Dynkin (1924–2014), probability and algebra; Steele Prize (1993) E Abraham Eberlen (16th century), mathematician Ishak Efendi (c. 1774–1835), mathematician and engineer Bradley Efron (born 1938), statistician Andrew Ehrenberg (1926–2010), statistician Tatyana Ehrenfest (1905–1984), mathematician Leon Ehrenpreis (1930–2010), mathematician Jacob Eichenbaum (1796–1861), poet and mathematician Samuel Eilenberg (1913–1988), category theory; Wolf Prize (1986), Steele Prize (1987) Gotthold Eisenstein (1823–1852), mathematician Yakov Eliashberg (born 1946), symplectic topology and partial differential equations Jordan Ellenberg (born 1971),
arithmetic geometry Emanuel Lodewijk Elte (1881–1943), mathematician David Emmanuel (1854–1941), mathematician Federigo Enriques (1871–1946), algebraic geometry Moses Ensheim (1750–1839), mathematician and poet Bernard Epstein (1920–2005), mathematician and physicist David Epstein (born 1937), hyperbolic geometry, 3-manifolds, and group theory Paul Epstein (1871–1939), number theory Paul S. Epstein (1883–1966), mathematical physics Yechiel Michel Epstein (1829–1908), rabbi and mathematician Arthur Erdélyi (1908–1977), mathematician Paul Erdős (1913–1996), mathematician; Cole Prize (1951), Wolf Prize (1983/84) Alex Eskin (born 1965), dynamical systems and group theory Gregory Eskin (born 1936), partial differential equations Theodor Estermann (1902–1991), analytic number theory F Gino Fano (1871–1952), mathematician Yehuda Farissol (15th century), mathematician and astronomer Gyula Farkas (1847–1930), mathematician and physicist Herbert Federer (1920–2010), geometric measure theory Solomon Feferman (1928–2016), mathematical logic and philosophy of mathematics Charles Fefferman (born 1949), mathematician; Fields Medal (1978), Bôcher Prize (2008) Joan Feigenbaum (born 1958), mathematics and computer science Mitchell Feigenbaum (1944–2019), chaos theory; Wolf Prize (1986) Walter Feit (1930–2004), finite group theory and representation theory; Cole Prize (1965) Leopold Fejér (1880–1959), harmonic analysis Michael Fekete (1886–1957), mathematician Jacques Feldbau (1914–1945), mathematician Joel Feldman (born 1949), mathematical physics William Feller (1906–1970), probability theory Käte Fenchel (1905–1983), group theory Werner Fenchel (1905–1988), geometry and optimization theory Mordechai Finzi (c. 1407–1476), mathematician and astronomer Ernst Sigismund Fischer (1875–1954), mathematical analysis Irene Fischer (1907–2009), mathematician and engineer John Fox (born 1946), statistician Abraham Fraenkel (1891–1965), set theory Aviezri Fraenkel (born 1929), combinatorial game theory Philipp Frank (1884–1966), mathematical physics and philosophy Péter Frankl (born 1953), combinatorics Fabian Franklin (1853–1939), mathematician Michael Freedman (born 1951), mathematician; Fields Medal (1986) Gregory Freiman (born 1926), additive number theory Edward Frenkel (born 1968), representation theory, algebraic geometry, and mathematical physics Hans Freudenthal (1905–1990), algebraic topology Avner Friedman (born 1932), partial differential equations Harvey Friedman (born 1948), reverse mathematics Sy Friedman (born 1953), set theory and recursion theory David Friesenhausen (1756–1828), mathematician Uriel Frisch (born 1940), mathematical physics Albrecht Fröhlich (1916–2001), algebra; De Morgan Medal (1992) Robert Frucht (1906–1997), graph theory Guido Fubini (1879–1943), mathematical analysis László Fuchs (born 1924), group theory Lazarus Fuchs (1833–1902), linear differential equations Paul Funk (1886–1969), mathematical analysis Hillel Furstenberg (born 1935), mathematician; Wolf Prize (2006/07), Abel Prize (2020) G David Gabai (born 1954), low-dimensional topology and hyperbolic geometry Dov Gabbay (born 1945), logician Ofer Gabber (born 1958), algebraic geometry; Erdős Prize (1981) Boris Galerkin (1871–1945), mathematician and engineer Zvi Galil (born 1947), mathematician and computer scientist David Gans (1541–1613), mathematician Hilda Geiringer (1893–1973), mathematician Israel Gelfand (1913–2009), mathematician; Kyoto Prize (1989), Steele Prize (2005) Alexander Gelfond (1906–1968), number theory
Semyon Gershgorin (1901–1933), mathematician Gersonides (1288–1344), mathematician Murray Gerstenhaber (born 1927), algebra and mathematical physics David Gilbarg (1918–2001), mathematician Jekuthiel Ginsburg (1889–1957), mathematician Moti Gitik (born 1955), set theory Samuel Gitler (1933–2014), mathematician Alexander Givental (born 1958), symplectic topology and singularity theory George Glauberman (born 1941), finite simple groups Israel Gohberg (1928–2009), operator theory and functional analysis Anatolii Goldberg (1930–2008), complex analysis Lisa Goldberg (born 1956), statistics and mathematical finance Dorian Goldfeld (born 1947), number theory; Cole Prize (1987) Carl Wolfgang Benjamin Goldschmidt (1807–1851), mathematician Catherine Goldstein (born 1958), number theory Sydney Goldstein (1903–1989), mathematical physics Daniel Goldston (born 1954), number theory; Cole Prize (2014) Michael Golomb (1909–2008), mathematician Solomon Golomb (1932–2016), mathematical games Gene Golub (1932–2007), numerical analysis Marty Golubitsky (born 1945), mathematician Benjamin Gompertz (1779–1865), mathematician I. J. Good (1916–2009), mathematician and cryptologist Paul Gordan (1837–1912), invariant theory Daniel Gorenstein (1923–1992), group theory David Gottlieb (1944–2008), numerical analysis Dovid Gottlieb, rabbi and mathematician Ian Grant (born 1930), mathematical physics Harold Grad (1923–1986), applied mathematics Eugene Grebenik (1919–2001), demographer Leslie Greengard (born 1958), mathematician and computer scientist Kurt Grelling (1886–1942), logician Mikhail Gromov (born 1943), mathematician; Wolf Prize (1993), Kyoto Prize (2002), Abel Prize (2009) Benedict Gross (born 1950), number theory; Cole Prize (1987) Marcel Grossmann (1878–1936), descriptive geometry Emil Grosswald (1912–1989), number theory Alexander Grothendieck (1928–2014), algebraic geometry; Fields Medal (1966) Branko Grünbaum (1929–2018), discrete geometry Géza Grünwald (1910–1943), mathematician Heinrich Guggenheimer (1924–2021), mathematician Paul Guldin (1577–1643), mathematician and astronomer Emil Gumbel (1891–1966), extreme value theory Sigmund Gundelfinger (1846–1910), algebraic geometry Larry Guth (born 1977), mathematician Louis Guttman (1916–1987), mathematician and sociologist H Alfréd Haar (1885–1933), mathematician Steven Haberman (born 1951), statistician and actuarial scientist Jacques Hadamard (1865–1963), mathematician Hans Hahn (1879–1934), mathematical analysis and topology John Hajnal (1924–2008), statistics Heini Halberstam (1926–2014), number theory Paul Halmos (1916–2006), mathematician; Steele Prize (1983) Israel Halperin (1911–2007), mathematician Georges-Henri Halphen (1844–1889), geometer Hans Hamburger (1889–1956), mathematician Haim Hanani (1912–1991), combinatorial design theory Frank Harary (1921–2005), graph theory David Harbater (born 1952), Galois theory, algebraic geometry and arithmetic geometry; Cole Prize (1995) David Harel (born 1950), mathematician and computer scientist Sergiu Hart (born 1949), mathematician and economist Ami Harten (1946–1994), applied mathematics Numa Hartog (1846–1871), mathematician Friedrich Hartogs (1874–1943), set theory and several complex variables Helmut Hasse (1898–1979), algebraic number theory Herbert Hauptman (1917–2011), mathematician; Nobel Prize in Chemistry (1985) Felix Hausdorff (1868–1942), topology Louise Hay (1935–1989), computability theory Walter Hayman (1926–2020), complex analysis Hans Heilbronn (1908–1975), mathematician Ernst
Hellinger (1883–1950), mathematician Eduard Helly (1884–1943), mathematician Dagmar Henney (born 1931), mathematician Kurt Hensel (1861–1941), mathematician Reuben Hersh (1927–2020), mathematician and philosopher of mathematics Daniel Hershkowitz (born 1953), mathematician and politician Israel Herstein (1923–1988), algebra Maximilian Herzberger (1899–1982), mathematician and physicist Emil Hilb (1882–1929), mathematician Peter Hilton (1923–2010), homotopy theory Edith Hirsch Luchins (1921–2002), mathematician Kurt Hirsch (1906–1986), group theory Morris Hirsch (born 1933), mathematician Elias Höchheimer (18th century), mathematician and astronomer Gerhard Hochschild (1915–2010), mathematician; Steele Prize (1980) Melvin Hochster (born 1943), commutative algebra; Cole Prize (1980) Douglas Hofstadter (born 1945), recreational mathematics Chaim Samuel Hönig (1926–2018), functional analysis Heinz Hopf (1894–1971), topology Ludwig Hopf (1884–1939), mathematician and physicist Janina Hosiasson-Lindenbaum (1899–1942), logician and philosopher Isaac Hourwich (1860–1924), statistician Ehud Hrushovski (born 1959), mathematical logic; Erdős Prize (1994) Witold Hurewicz (1904–1956), mathematician Adolf Hurwitz (1859–1919), function theory Wallie Abraham Hurwitz (1886–1958), mathematical analysis I Isaac ibn al-Ahdab (1350–1430), mathematician, astronomer and poet Sind ibn Ali (9th century), mathematician and astronomer Mashallah ibn Athari (c. 740–815), mathematician and astrologer Sahl ibn Bishr (c. 786 – c. 845), mathematician Abraham ibn Ezra (c. 1089 – c. 1167), mathematician and astronomer Abu al-Fadl ibn Hasdai (11th century), mathematician and philosopher Bashar ibn Shu'aib (10th century), mathematician Issachar ibn Susan (c. 1539–1572), mathematician Jacob ibn Tibbon (1236–1305), mathematician and astronomer Moses ibn Tibbon (c. 1240–1283), mathematician and translator Judah ibn Verga (15th century), mathematician, astronomer and kabbalist Arieh Iserles (born 1947), computational mathematics Isaac Israeli (14th century), astronomer and mathematician J Eri Jabotinsky (1910–1969), mathematician, politician and activist Carl Gustav Jacob Jacobi (1804–1851), analysis; first Jewish mathematician to be appointed professor at a German university Nathan Jacobson (1910–1999), algebra; Steele Prize (1998) Ernst Jacobsthal (1882–1965), number theory E.
Morton Jellinek (1890–1963), biostatistics Svetlana Jitomirskaya (born 1966), dynamical systems and mathematical physics Ferdinand Joachimsthal (1818–1861), mathematician Fritz John (1910–1994), partial differential equations; Steele Prize (1982) Joseph of Spain (9th and 10th centuries), mathematician Sir Roger Jowell (1942–2011), social statistics K Mark Kac (1914–1984), probability theory Victor Kac (born 1943), representation theory; Steele Prize (2015) Mikhail Kadets (1923–2011), mathematical analysis Richard Kadison (1925–2018), mathematician; Steele Prize (1999) Veniamin Kagan (1869–1953), mathematician William Kahan (born 1933), mathematician and computer scientist; Turing Award (1989) Jean-Pierre Kahane (1926–2017), harmonic analysis Franz Kahn (1926–1998), mathematician and astrophysicist Margarete Kahn (1880–1942?), topology Gil Kalai (born 1955), mathematician; Pólya Prize (1992), Erdős Prize (1992) László Kalmár (1905–1976), mathematical logic Shoshana Kamin (born 1930), partial differential equations Daniel Kan (1927–2013), homotopy theory Leonid Kantorovich (1912–1986), mathematician and economist; Nobel Prize in Economics (1975) Irving Kaplansky (1917–2006), mathematician Samuel Karlin (1924–2007), mathematician Theodore von Kármán (1881–1963), mathematical physics Edward Kasner (1878–1955), differential geometry Svetlana Katok (born 1947), mathematician Eric Katz (born 1977), combinatorial algebraic geometry and arithmetic geometry Mikhail Katz (born 1958), differential geometry and geometric topology Nets Katz (born 1972), combinatorics and harmonic analysis Nick Katz (born 1943), algebraic geometry Sheldon Katz (born 1956), algebraic geometry Victor Katz (born 1942), algebra and history of mathematics Yitzhak Katznelson (born 1934), mathematician Bruria Kaufman (1918–2010), mathematician and physicist David Kazhdan (born 1946), representation theory Herbert Keller (1925–2008), applied mathematics and numerical analysis Joseph Keller (1923–2016), applied mathematician; National Medal of Science (1988), Wolf Prize (1997) John Kemeny (1926–1992), mathematician and computer scientist Carlos Kenig (born 1953), harmonic analysis and partial differential equations; Bôcher Prize (2008) Harry Kesten (1931–2019), probability; Pólya Prize (1994), Steele Prize (2001) Aleksandr Khinchin (1894–1959), probability theory David Khorol (1920–1990), mathematician Mojżesz Kirszbraun (1903–1942), mathematical analysis Sergiu Klainerman (born 1950), hyperbolic differential equations; Bôcher Prize (1999) Boáz Klartag (born 1978), asymptotic geometric analysis; Erdős Prize (2010) Morris Kline (1908–1992), mathematician Lipót Klug (1854–1945), mathematician Hermann Kober (1888–1973), mathematical analysis Simon Kochen (born 1934), model theory and number theory; Cole Prize (1967) Joseph Kohn (born 1932), partial differential operators and complex analysis Ernst Kolman (1892–1972), philosophy of mathematics Dénes Kőnig (1884–1944), graph theorist Gyula Kőnig (1849–1913), mathematician Leo Königsberger (1837–1921), historian of mathematics Arthur Korn (1870–1945), mathematician and inventor Thomas Körner (born 1946), mathematician Stephan Körner (1913–2000), philosophy of mathematics Bertram Kostant (1928–2017), mathematician Edna Kramer (1902–1984), mathematician Mark Krasnosel'skii (1920–1997), nonlinear functional analysis Mark Krein (1907–1989), functional analysis; Wolf Prize (1982) Cecilia Krieger (1894–1974), mathematician Georg Kreisel (1923–2015), mathematical logic Maurice Kraitchik 
(1882–1957), number theory and recreational mathematics Leopold Kronecker (1823–1891), number theory Joseph Kruskal (1928–2010), graph theory and statistics Martin Kruskal (1925–2006), mathematician and physicist William Kruskal (1919–2005), non-parametric statistics Kazimierz Kuratowski (1896–1980), mathematics and logic Simon Kuznets (1901–1985), statistician and economist; Nobel Prize in Economics (1971) L Imre Lakatos (1922–1974), philosopher of mathematics Dan Laksov (1940–2013), algebraic geometry Cornelius Lanczos (1893–1974), mathematician and physicist Edmund Landau (1877–1938), number theory and complex analysis Georg Landsberg (1865–1912), complex analysis and algebraic geometry Serge Lang (1927–2005), number theory; Cole Prize (1960) Emanuel Lasker (1868–1941), mathematician and chess player Albert Lautman (1908–1944), philosophy of mathematics Ruth Lawrence (born 1971), knot theory and algebraic topology Peter Lax (born 1926), mathematician; Wolf Prize (1987), Steele Prize (1993), Abel Prize (2005) Joel Lebowitz (born 1930), mathematical physics Gilah Leder (born 1941), mathematics education Walter Ledermann (1911–2009), algebra Solomon Lefschetz (1884–1972), algebraic topology and ordinary differential equations; Bôcher Prize (1924) Emma Lehmer (1906–2007), algebraic number theory Moses Lemans (1785–1832), mathematician Alexander Lerner (1913–2004), applied mathematics Arthur Levenson (1914–2007), mathematician and cryptographer Beppo Levi (1875–1961), mathematician Eugenio Levi (1883–1917), mathematician Friedrich Levi (1888–1966), algebra Leone Levi (1821–1888), statistician Raphael Levi Hannover (1685–1779), mathematician and astronomer Tullio Levi-Civita (1873–1941), tensor calculus Dany Leviatan (born 1942), approximation theory Boris Levin (1906–1993), function theory Leonid Levin (born 1948), foundations of mathematics and computer science Norman Levinson (1912–1975), mathematician; Bôcher Prize (1953) Boris Levitan (1914–2004), almost periodic functions Jacob Levitzki (1904–1956), mathematician Armand Lévy (1795–1841), mathematician Azriel Lévy (born 1934), mathematical logic Hyman Levy (1889–1975), mathematician Paul Lévy (1886–1971), probability theory Tony Lévy (born 1943), history of mathematics Hans Lewy (1904–1988), mathematician; Wolf Prize (1986) Gabriel Judah Lichtenfeld (1811–1887), mathematician Leon Lichtenstein (1878–1933), differential equations, conformal mapping, and potential theory Paulette Libermann (1919–2007), differential geometry Elliott Lieb (born 1932), mathematical physics Lillian Lieber (1886–1986), mathematician and popular author Heinrich Liebmann (1874–1939), differential geometry Michael Lin (born 1942), Markov chains and ergodic theory Baruch Lindau (1759–1849), mathematician and science writer Adolf Lindenbaum (1904–1942), mathematical logic Elon Lindenstrauss (born 1970), mathematician; Erdős Prize (2009), Fields Medal (2010) Joram Lindenstrauss (1936–2012), mathematician Yom Tov Lipman Lipkin (1846–1876), mathematician Rudolf Lipschitz (1832–1903), mathematical analysis and differential geometry Rehuel Lobatto (1797–1866), mathematician Michel Loève (1907–1979), probability theory Charles Loewner (1893–1968), mathematician Alfred Loewy (1873–1935), representation theory Gino Loria (1862–1954), mathematician and historian of mathematics Leopold Löwenheim (1878–1957), mathematical logic Baruch Solomon Löwenstein (19th century), mathematician Alexander Lubotzky (born 1956), mathematician and politician; Erdős Prize (1990) Eugene Lukacs 
(1906–1987), statistician Yudell Luke (1918–1983), function theory Jacob Lurie (born 1977), mathematician; Breakthrough Prize (2014) George Lusztig (born 1946), mathematician; Cole Prize (1985), Steele Prize (2008) Israel Lyons (1739–1775), mathematician Lazar Lyusternik (1899–1981), topology and differential geometry M Myrtil Maas (1792–1865), mathematician Moshé Machover (born 1936), mathematician, philosopher and activist Menachem Magidor (born 1946), set theory Ludwig Immanuel Magnus (1790–1861), geometer Kurt Mahler (1903–1988), mathematician; De Morgan Medal (1971) Yuri Manin (born 1937), algebraic geometry and diophantine geometry Henry Mann (1905–2000), number theory and statistics; Cole Prize (1946) Amédée Mannheim (1831–1906), mathematician and designer of the modern slide rule Eli Maor (born 1937), history of mathematics Solomon Marcus (1925–2016), mathematical analysis, mathematical linguistics and computer science Szolem Mandelbrojt (1899–1983), mathematical analysis Benoit Mandelbrot (1924–2010), mathematician; Wolf Prize (1993) Grigory Margulis (born 1946), mathematician; Fields Medal (1978), Wolf Prize (2005), Abel Prize (2020) Edward Marczewski (1907–1976), mathematician Michael Maschler (1927–2008), game theory Walther Mayer (1887–1948), mathematician Barry Mazur (born 1937), mathematician; Cole Prize (1982) Vladimir Mazya (born 1937), mathematical analysis and partial differential equations Naum Meiman (1912–2001), complex analysis, partial differential equations, and mathematical physics Nathan Mendelsohn (1917–2006), discrete mathematics Karl Menger (1902–1985), mathematician Abraham Joseph Menz (18th century), mathematician and rabbi Yves Meyer (born 1939), mathematician; Abel Prize (2017) Ernest Michael (1925–2013), general topology Solomon Mikhlin (1908–1990), mathematician David Milman (1912–1982), functional analysis Pierre Milman (born 1945), mathematician Vitali Milman (born 1939), mathematical analysis Hermann Minkowski (1864–1909), number theory Richard von Mises (1883–1953), mathematician and engineer Elijah Mizrachi (c. 1455 – c. 1525), mathematician and rabbi Boris Moishezon (1937–1993), mathematician Louis Mordell (1888–1972), number theory Claus Moser (1922–2015), statistics George Mostow (1923–2017), mathematician; Wolf Prize (2013) Andrzej Mostowski (1913–1975), set theory Simon Motot (15th century), algebra Theodore Motzkin (1908–1970), mathematician José Enrique Moyal (1910–1998), mathematical physics Herman Müntz (1884–1956), mathematician N Leopoldo Nachbin (1922–1993), topology and harmonic analysis Assaf Naor (born 1975), metric spaces; Bôcher Prize (2011) Isidor Natanson (1906–1964), real analysis and constructive function theory Melvyn Nathanson (born 1944), number theory Caryn Navy (born 1953), set-theoretic topology Mark Naimark (1909–1978), functional analysis and mathematical physics Zeev Nehari (1915–1978), mathematical analysis Rabbi Nehemiah (c. 150), mathematician Leonard Nelson (1882–1927), mathematician and philosopher Paul Nemenyi (1895–1952), mathematician and physicist Peter Nemenyi (1927–2002), mathematician Abraham Nemeth (1918–2013), mathematician and creator of Nemeth Braille Arkadi Nemirovski (born 1947), optimization Elisha Netanyahu (1912–1986), complex analysis Bernhard Neumann (1909–2003), group theory John von Neumann (1903–1957), set theory, physics and computer science; Bôcher Prize (1938) Hanna Neumann (1914–1971), group theory Klára Dán von Neumann (1911–1963), mathematician and computer scientist Nelli Neumann (1886–1942), synthetic
geometry Max Newman (1897–1984), mathematician and codebreaker; De Morgan Medal (1962) Abraham Niederländer (16th century), mathematician and scribe Louis Nirenberg (1925–2020), mathematical analysis; Bôcher Prize (1959), Steele Prize (1994), Chern Medal (2010), Abel Prize (2015) Emmy Noether (1882–1935), algebra and theoretical physics Fritz Noether (1884–1941), mathematician Max Noether (1844–1921), algebraic geometry and algebraic functions Simon Norton (1952–2019), group theory Pedro Nunes (1502–1578), mathematician and cosmographer A. Edward Nussbaum (1925–2009), mathematician and theoretical physicist O David Oppenheim (1664–1736), rabbi and mathematician Menachem Oren (1903–1962), mathematician and chess master Donald Ornstein (born 1934), ergodic theory; Bôcher Prize (1974) Mollie Orshansky (1915–2006), statistics Steven Orszag (1943–2011), applied mathematics Stanley Osher (born 1942), applied mathematics Robert Osserman (1926–2011), geometry Alexander Ostrowski (1893–1986), mathematician Jacques Ozanam (1640–1718), mathematician P–Q Alessandro Padoa (1868–1937), mathematician and logician Emanuel Parzen (1929–2016), statistician Seymour Papert (1928–2016), mathematician and computer scientist Moritz Pasch (1843–1930), foundations of geometry Chaim Pekeris (1908–1992), mathematician and physicist Daniel Pedoe (1910–1998), geometry Rudolf Peierls (1907–1995), physics and applied mathematics; Copley Medal (1986) Rose Peltesohn (1913–1998), combinatorics Grigori Perelman (born 1966), mathematician; Fields Medal (2006, declined), Millennium Prize (2010, declined) Yakov Perelman (1882–1942), recreational mathematics Micha Perles (born 1936), graph theory and discrete geometry Leo Perutz (1882–1957), mathematician and novelist Rózsa Péter (1905–1977), recursion theory Ralph Phillips (1913–1998), functional analysis; Steele Prize (1997) Ilya Piatetski-Shapiro (1929–2009), mathematician; Wolf Prize (1990) Georg Pick (1859–1942), mathematician Salvatore Pincherle (1853–1936), functional analysis Abraham Plessner (1900–1961), functional analysis Felix Pollaczek (1892–1981), number theory, mathematical analysis, mathematical physics and probability theory Harriet Pollatsek (born 1942), mathematician Leonid Polterovich (born 1963), symplectic geometry and dynamical systems; Erdős Prize (1998) George Pólya (1887–1985), combinatorics, number theory, numerical analysis and probability Carl Pomerance (born 1944), number theory Alfred van der Poorten (1942–2010), number theory Emil Post (1897–1954), mathematician and logician Mojżesz Presburger (1904 – c. 1943), mathematician and logician Vera Pless (1931–2020), combinatorics Ilya Prigogine (1917–2003), physical chemistry and statistical mechanics; Nobel Prize in Chemistry (1977) Alfred Pringsheim (1850–1941), analysis, theory of functions Moshe Provençal (1503–1576), mathematician, posek and grammarian Heinz Prüfer (1896–1934), mathematician Hilary Putnam (1926–2016), philosophy of mathematics R Michael Rabin (born 1931), mathematical logic and computer science; Turing Award (1976) Philip Rabinowitz (1926–2006), numerical analysis Giulio Racah (1909–1965), mathematician and physicist Richard Rado (1906–1989), mathematician Aleksander Rajchman (1890–1940), measure theory Rose Rand (1903–1980), logician and philosopher Joseph Raphson (c. 1648 – c. 1715), mathematician Anatol Rapoport (1911–2007), applied mathematics Marina Ratner (1938–2017), ergodic theory Yitzchak Ratner (1857–?), mathematician Amitai Regev (born 1940), ring theory Isaac Samuel Reggio (1784–1855), mathematician and
rabbi Hans Reissner (1874–1967), mathematical physics Robert Remak (1888–1942), algebra and mathematical economics Evgeny Remez (1895–1975), constructive function theory Alfréd Rényi (1921–1970), combinatorics, number theory and probability Ida Rhodes (1900–1986), mathematician Paulo Ribenboim (born 1928), number theory Ken Ribet (born 1948), algebraic number theory and algebraic geometry Frigyes Riesz (1880–1956), functional analysis Marcel Riesz (1886–1969), mathematician Eliyahu Rips (born 1948), geometric group theory; Erdős Prize (1979) Joseph Ritt (1893–1951), differential algebra Igor Rivin (born 1961), hyperbolic geometry, topology, group theory, experimental mathematics Abraham Robinson (1918–1974), nonstandard analysis Olinde Rodrigues (1795–1851), mathematician and social reformer Werner Rogosinski (1894–1964), mathematician Vladimir Rokhlin (1919–1984), mathematician Werner Romberg (1909–2003), mathematician and physicist Jakob Rosanes (1842–1922), algebraic geometry and invariant theory Johann Rosenhain (1816–1887), mathematician Louis Rosenhead (1906–1984), applied mathematics Maxwell Rosenlicht (1924–1999), algebra; Cole Prize (1960) Arthur Rosenthal (1887–1959), mathematician Klaus Roth (1925–2015), diophantine approximation; Fields Medal (1958) Leonard Roth (1904–1968), algebraic geometry Uriel Rothblum (1947–2012), mathematician and operations researcher Bruce Rothschild (born 1941), combinatorics; Pólya Prize (1971) Linda Preiss Rothschild (born 1945), mathematician Arthur Rubin (born 1956), mathematician and aerospace engineer Karl Rubin (born 1956), elliptic curves; Cole Prize (1992) Reuven Rubinstein (1938–2012), probability theory and statistics Walter Rudin (1921–2010), mathematical analysis Zeev Rudnick (born 1961), number theory and mathematical physics; Erdős Prize (2001) S Saadia Gaon (882 or 892–942), rabbi, philosopher and mathematician Louis Saalschütz (1835–1913), number theory and mathematical analysis Cora Sadosky (1940–2010), mathematical analysis Manuel Sadosky (1914–2005), mathematician and computer scientist Philip Saffman (1931–2008), applied mathematics Stanisław Saks (1897–1942), measure theory Raphaël Salem (1898–1963), mathematician Hans Samelson (1916–2005), differential geometry, topology, Lie groups and Lie algebras Ester Samuel-Cahn (1933–2015), statistician Peter Sarnak (born 1953), analytic number theory; Pólya Prize (1998), Cole Prize (2005), Wolf Prize (2014) Leonard Jimmie Savage (1917–1971), mathematician and statistician Shlomo Sawilowsky (1954–2021), statistician Hermann Schapira (1840–1898), mathematician Malka Schaps (born 1948), mathematician Michelle Schatzman (1949–2010), applied mathematics Robert Schatten (1911–1977), functional analysis Juliusz Schauder (1899–1943), functional analysis and partial differential equations Menahem Max Schiffer (1911–1997), complex analysis, partial differential equations, and mathematical physics Ludwig Schlesinger (1864–1933), mathematician Lev Schnirelmann (1905–1938), calculus of variations, topology and number theory Isaac Schoenberg (1903–1990), mathematician Arthur Schoenflies (1853–1928), mathematician Moses Schönfinkel (1889–1942), combinatory logic Oded Schramm (1961–2008), conformal field theory and probability theory; Erdős Prize (1996), Pólya Prize (2006) Józef Schreier (1909–1943), functional analysis, group theory and combinatorics Otto Schreier (1901–1929), group theory Issai Schur (1875–1941), group representations, combinatorics and number theory Arthur Schuster (1851–1934),
applied mathematics; Copley Medal (1931) Albert Schwarz (born 1934), differential topology Karl Schwarzschild (1873–1916), mathematical physics Jacob Schwartz (1930–2009), mathematician Laurent Schwartz (1915–2002), mathematician; Fields Medal (1950) Marie-Hélène Schwartz (1913–2013), mathematician Richard Schwartz (born 1934), mathematician and activist Irving Segal (1918–1998), functional and harmonic analysis Lee Segel (1932–2005), applied mathematics Beniamino Segre (1903–1977), algebraic geometry Corrado Segre (1863–1924), algebraic geometry Wladimir Seidel (1907–1981), mathematician Esther Seiden (1908–2014), statistics Abraham Seidenberg (1916–1988), algebra Gary Seitz (born 1943), group theory Zlil Sela (born 1962), geometric group theory; Erdős Prize (2003) Reinhard Selten (1930–2016), mathematician and game theorist; Nobel Prize in Economics (1994) Valery Senderov (1945–2014), mathematician Aner Shalev (born 1958), group theory Jeffrey Shallit (born 1957), number theory and computer science Adi Shamir (born 1952), mathematician and cryptographer; Erdős Prize (1983) Eli Shamir (born 1934), mathematician and computer scientist Harold Shapiro (1928–2021), approximation theory and functional analysis Samuil Shatunovsky (1859–1929), mathematical analysis and algebra Henry Sheffer (1882–1964), logician Saharon Shelah (born 1945), mathematician; Erdős Prize (1977), Pólya Prize (1992), Wolf Prize (2001) James Shohat (1886–1944), mathematical analysis Naum Shor (1937–2006), optimization William Sidis (1898–1944), mathematician and child prodigy Barry Simon (born 1946), mathematical physicist; Steele Prize (2016) Leon Simon (born 1945), mathematician; Bôcher Prize (1994) Max Simon (1844–1918), history of mathematics James Simons (born 1938), mathematician and hedge fund manager Yakov Sinai (born 1935), dynamical systems; Wolf Prize (1997), Steele Prize (2013), Abel Prize (2014) Isadore Singer (1924–2021), mathematician; Bôcher Prize (1969), Steele Prize (2000), Abel Prize (2004) Abraham Sinkov (1907–1998), mathematician and cryptanalyst Hayyim Selig Slonimski (1810–1904), mathematician and astronomer; Demidov Prize (1844) Raymond Smullyan (1919–2017), mathematician and philosopher Alan Sokal (born 1955), combinatorics and mathematical physics Robert Solovay (born 1938), set theory David Spiegelhalter (born 1953), statistician Daniel Spielman (born 1970), applied mathematics and computer science; Pólya Prize (2014) Frank Spitzer (1926–1992), probability theory Guido Stampacchia (1922–1978), mathematician Elias Stein (1931–2018), harmonic analysis; Wolf Prize (1999), Steele Prize (2002) Robert Steinberg (1922–2014), mathematician Mark Steiner (1942–2020), philosophy of mathematics Hugo Steinhaus (1887–1972), mathematician Ernst Steinitz (1871–1928), algebra Moritz Steinschneider (1816–1907), history of mathematics Abraham Stern (c. 1762–1842), mathematician and inventor Moritz Abraham Stern (1807–1894), first Jewish full professor at a German university Shlomo Sternberg (born 1936), mathematician Reinhold Strassmann (1893–1944), mathematician Ernst Straus (1922–1983), analytic number theory, graph theory and combinatorics Steven Strogatz (born 1959), nonlinear systems and applied mathematics Daniel Stroock (born 1940), probability theory Eduard Study (1862–1930), invariant theory and geometry Bella Subbotovskaya (1938–1982), mathematician and founder of the Jewish People's University Benny Sudakov (born 1969), combinatorics James Joseph Sylvester (1814–1897), mathematician; Copley Medal (1880),
De Morgan Medal (1887) Otto Szász (1884–1952), real analysis Gábor Szegő (1895–1985), mathematical analysis Esther Szekeres (1910–2005), mathematician George Szekeres (1911–2005), mathematician T–U Dov Tamari (1911–2006), logic and combinatorics Jacob Tamarkin (1888–1945), mathematical analysis Éva Tardos (born 1957), mathematician and computer scientist Alfred Tarski (1901–1983), logician, mathematician, and philosopher Alfred Tauber (1866–1942), mathematical analysis Olga Taussky (1906–1995), algebraic number theory and algebra Olry Terquem (1782–1862), mathematician Otto Toeplitz (1881–1940), linear algebra and functional analysis Jakow Trachtenberg (1888–1953), mathematician and mental calculator Avraham Trahtman (born 1944), combinatorics Boris Trakhtenbrot (1921–2016), mathematical logic Boaz Tsaban (born 1973), set theory and nonabelian cryptology Jacob Tsimerman (born 1988), number theory Boris Tsirelson (1950–2020), probability theory and functional analysis Pál Turán (1910–1976), number theory Eli Turkel (born 1944), applied mathematics Stanislaw Ulam (1909–1984), mathematician Fritz Ursell (1923–2012), mathematician Pavel Urysohn (1898–1924), dimension theory and topology V Vladimir Vapnik (born 1936), mathematician and computer scientist Moshe Vardi (born 1954), mathematical logic and theoretical computer science Andrew Vázsonyi (1916–2003), mathematician and operations researcher Anatoly Vershik (born 1933), mathematician Naum Vilenkin (1920–1991), combinatorics Vilna Gaon (1720–1797), Talmudist and mathematician Giulio Vivanti (1859–1949), mathematician Aizik Volpert (1923–2006), mathematician and chemical engineer Vito Volterra (1860–1940), functional analysis Vladimir Vranić (1896–1976), probability and statistics W Friedrich Waismann (1896–1950), mathematician and philosopher Abraham Wald (1902–1950), decision theory, geometry and econometrics Henri Wald (1920–2002), logician Arnold Walfisz (1892–1962), analytic number theory Stefan Warschawski (1904–1989), mathematician Wolfgang Wasow (1909–1993), singular perturbation theory André Weil (1906–1998), number theory and algebraic geometry; Wolf Prize (1979), Steele Prize (1980), Kyoto Prize (1994) Shmuel Weinberger (born 1963), topology Alexander Weinstein (1897–1979), applied mathematics Eric Weinstein (born 1965), mathematical physics Boris Weisfeiler (1942–1985?), algebraic geometry Benjamin Weiss (born 1941), mathematician Wendelin Werner (born 1968), probability theory and mathematical physics; Pólya Prize (2006), Fields Medal (2006) Eléna Wexler-Kreindler (1931–1992), algebra Harold Widom (1932–2021), operator theory and random matrices; Pólya Prize (2002) Norbert Wiener (1894–1964), mathematician; Bôcher Prize (1933) Avi Wigderson (born 1956), mathematician and computer scientist; Abel Prize (2021) Eugene Wigner (1902–1995), mathematician and theoretical physicist; Nobel Prize in Physics (1963) Ernest Julius Wilczynski (1876–1932), geometer Herbert Wilf (1931–2012), combinatorics and graph theory Aurel Wintner (1903–1958), mathematician Daniel Wise (born 1971), geometric group theory and 3-manifolds Edward Witten (born 1951), mathematical physics; Fields Medal (1990), Kyoto Prize (2014) Ludwig Wittgenstein (1889–1951), logic and philosophy of mathematics Julius Wolff (1882–1945), mathematician Jacob Wolfowitz (1910–1981), statistics Paul Wolfskehl (1856–1906), mathematician Mario Wschebor (1939–2011), probability and statistics X–Z Mordecai Yoffe (c. 1530–1612), rabbi and mathematician Akiva Yaglom (1921–2007),
probability and statistics Isaak Yaglom (1921–1988), mathematician Sofya Yanovskaya (1896–1966), logic and history of mathematics Adolph Yushkevich (1906–1993), history of mathematics Abraham Zacuto (1452 – c. 1515), mathematician and astronomer Lotfi Zadeh (1921–2017), fuzzy mathematics Pedro Zadunaisky (1917–2009), mathematician and astronomer Don Zagier (born 1951), number theory; Cole Prize (1987) Elijah Zahalon (18th century), mathematician and Talmudist Zygmunt Zalcwasser (1898–1943), mathematician Victor Zalgaller (1920–2020), geometry and optimization Israel Zamosz (c. 1700–1772), Talmudist and mathematician Oscar Zariski (1899–1986), algebraic geometer; Cole Prize (1944), Wolf Prize (1981), Steele Prize (1981) Edouard Zeckendorf (1901–1983), number theory Doron Zeilberger (born 1950), combinatorics Efim Zelmanov (born 1955), mathematician; Fields Medal (1994) Tamar Ziegler (born 1971), ergodic theory and arithmetic combinatorics; Erdős Prize (2011) Leo Zippin (1905–1995), solved Hilbert's fifth problem Abraham Ziv (1940–2013), number theory Benedict Zuckermann (1818–1891), mathematician and historian Moses Zuriel (16th century), mathematician See also Lists of Jews List of Jewish American mathematicians List of Israeli mathematicians Mishnat ha-Middot References Sources Mathematicians Jewish
1356223
https://en.wikipedia.org/wiki/Bryan%20Clough
Bryan Clough
Bryan Clough (born 1932, Oldham, Lancashire) is an English writer. Clough has written several books and articles dealing with phreakers, hackers and computer virus writers; credit card fraud; banking; and the activities of MI5 during World War II, specifically the Tyler Kent–Anna Wolkoff Affair (2005). Works In 1990, Clough and Paul Mungo, a journalist, began writing Approaching Zero (1992), a book that covered the activities of phreakers, hackers and computer virus writers. It was later published in North America, and translations appeared in French, Spanish, Turkish and Japanese. Three incidents of credit card fraud described in the book attracted much interest in the press. Further research resulted in articles on computer viruses and investigations into 'phantom withdrawals' from ATMs and credit card fraud. These investigations culminated in the publication of Cheating at Cards (1994), which revealed 40 ways of fraudulently obtaining money from ATMs, and Beware of Your Bank (1995), in which he examined mistakes made by banks and explained how to detect errors and how to obtain compensation. Sparked by a close interest in cryptology, he then turned to the strange case of Tyler Kent, an American national employed as a code and cipher clerk at the American Embassy in London, at a time when Great Britain was at war with Germany and America claimed to be strictly neutral. In May 1940, Kent was arrested, tried in camera and sentenced to seven years' penal servitude. Clough's book State Secrets: The Kent–Wolkoff Affair (2005) took advantage of privileged access to Government files and also the release of others under the Freedom of Information Act. Sixty-five years after the event, Clough finally revealed the 'real reason' for Kent's arrest and imprisonment, which was very different from the earlier versions in officially inspired publications. Clough appeared in the documentary Churchill and the Fascist Plot, broadcast on Channel Four on 16 March 2013. Personal life Clough was educated at the Hulme Grammar School, Oldham, and did his national service with the 10th Royal Hussars in Germany. He then worked in a variety of industries, mainly in engineering, before becoming chief executive of a major international company, which allowed him to travel widely. He set up his own computer supply and maintenance company in 1983, which he sold in 1990 in order to concentrate on research and writing. Clough married in 1971 and the couple had two daughters. He now lives in Hove, Sussex. Bibliography Clough, Bryan. Mungo, Paul. Approaching Zero: Data Crime & the Computer Underworld Faber & Faber, London 1992. Clough, Bryan. Mungo, Paul. Approaching Zero: The Extraordinary World of Hackers, Phreakers, Virus Writers and Keyboard Criminals Random House, New York 1992. Clough, Bryan. Mungo, Paul. Los Piratas del Chip: La Mafia Informatica al Desnudo Ediciones B, Barcelona 1992. Clough, Bryan. Mungo, Paul. Delinquance Assistée par Ordinateur: La Saga des “Hackers” Nouveaux Flibustiers “High Tech”! Dunod, Paris 1993. Clough, Bryan. Mungo, Paul. Approaching Zero Hayakawa Publishing, Tokyo 1994. Clough, Bryan. Mungo, Paul. Sıfıra Doğru İletişim Yayınları, Istanbul 1994. Clough, Bryan. Cheating at Cards – Plastic Fraud: Sharp Practices and Naïve Systems Hideaway Publications, Hove 1994. Clough, Bryan. Beware of Your Bank Hideaway Publications, Hove 1995. Clough, Bryan. State Secrets: The Kent-Wolkoff Affair. East Sussex: Hideaway Publications Ltd., 2005. 
References External links 1932 births Historians of the British Isles British investigative journalists Living people People from Hove People educated at Hulme Grammar School
2640550
https://en.wikipedia.org/wiki/Circular%20dependency
Circular dependency
In software engineering, a circular dependency is a relation between two or more modules which either directly or indirectly depend on each other to function properly. Such modules are also known as mutually recursive. Overview Circular dependencies are natural in many domain models where certain objects of the same domain depend on each other. However, in software design, circular dependencies between larger software modules are considered an anti-pattern because of their negative effects. Despite this, such circular (or cyclic) dependencies have been found to be widespread among the source files of real-world software. Mutually recursive modules are, however, somewhat common in functional programming, where inductive and recursive definitions are often encouraged. Problems Circular dependencies can cause many unwanted effects in software programs. Most problematic from a software design point of view is the tight coupling of the mutually dependent modules, which reduces or prevents the separate re-use of a single module. Circular dependencies can cause a domino effect when a small local change in one module spreads into other modules and has unwanted global effects (program errors, compile errors). Circular dependencies can also result in infinite recursions or other unexpected failures. Circular dependencies may also cause memory leaks by preventing certain very primitive automatic garbage collectors (those that use reference counting) from deallocating unused objects. Causes and solutions In very large software designs, software engineers may lose sight of the overall context and inadvertently introduce circular dependencies. There are tools to analyze software and find unwanted circular dependencies. Circular dependencies can be introduced when implementing callback functionality. This can be avoided by applying design patterns like the observer pattern, as in the sketch below. See also Acyclic dependencies principle Dependency hell References Programming language topics C++ Articles with example C++ code
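The article's categories indicate it originally carried example C++ code. A minimal sketch of the callback case and its observer-pattern fix (the class names Button, Dialog, and Listener are invented for illustration, not taken from the article):

```cpp
#include <iostream>
#include <vector>

// Button would normally need to know about Dialog to report clicks, while
// Dialog needs Button -- a circular dependency. Introducing the abstract
// Listener interface breaks the cycle: Button depends only on Listener,
// and Dialog depends on Button and Listener.
class Listener {
public:
    virtual ~Listener() = default;
    virtual void onClick() = 0;
};

class Button {
public:
    void subscribe(Listener* l) { listeners.push_back(l); }
    void click() {
        for (Listener* l : listeners) l->onClick();  // notify observers
    }
private:
    std::vector<Listener*> listeners;
};

class Dialog : public Listener {
public:
    explicit Dialog(Button& b) { b.subscribe(this); }
    void onClick() override { std::cout << "Dialog closed\n"; }
};

int main() {
    Button ok;
    Dialog dialog(ok);
    ok.click();  // prints "Dialog closed" without Button ever naming Dialog
}
```

The dependency graph is now acyclic (Dialog depends on Button, Button on Listener), so Button can be reused in modules that have never heard of Dialog.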
891085
https://en.wikipedia.org/wiki/Sc%C3%A8nes%20%C3%A0%20faire
Scènes à faire
A scène à faire (French for "scene to be made" or "scene that must be done"; plural: scènes à faire) is a scene in a book or film which is almost obligatory for a book or film in that genre. In the U.S. it also refers to a principle in copyright law in which certain elements of a creative work are held to be not protected when they are mandated by or customary to the genre. Examples in different genres For example, a spy novel is expected to contain elements such as numbered Swiss bank accounts, a femme fatale, and various spy gadgets hidden in wristwatches, belts, shoes, and other personal effects. The United States Court of Appeals for the Second Circuit interpreted the scènes à faire doctrine expansively to hold that a motion picture about the South Bronx would need to feature drunks, prostitutes, vermin, and derelict cars to be perceived as realistic, and therefore a later film that duplicated these features of an earlier film did not infringe. These elements are not protected by copyright, though specific sequences and compositions of them can be. As another example, in computer programming, it is often customary to list variables at the beginning of the source code of a program. In some programming languages, it is required to also declare the type of variable at the same time. Depending on the function of a program, certain types of variables are to be expected. If a program deals with files, variable types that deal with files are often listed and declared. As a result, variable declarations are generally not considered protected elements of a program. The United States Court of Appeals for the Second Circuit made this part of the analysis for infringement of non-literal elements of computer code in Computer Associates International, Inc. v. Altai, Inc. In that case, the court added it into its Abstraction-Filtration-Comparison test. Policy The policy rationale of the doctrine of scènes à faire is that granting a first comer exclusivity over scènes à faire would greatly hinder others in the subsequent creation of other expressive works. That would be against the constitutionally mandated policy of the copyright law to promote progress in the creation of works, and it would be an impediment to the public's enjoyment of such further creative expressions. By the same token, little benefit to society would flow from grants of copyright exclusivity over scènes à faire. In a business and computer program context, the doctrine of scènes à faire is interpreted to apply to the practices and demands of the businesses and industries that the given computer program serves. Hence, the concepts of idea vs. expression (merger doctrine) and scènes à faire relate directly to promoting availability of business functionality. In CMM Cable Rep., Inc. v. Ocean Coast Properties, Inc., 97 F.3d 1504 (1st Cir. 1996). the court compared the merger and scènes à faire doctrines. The court said that the two doctrines were similar in policy, in that they both sought to prevent monopolization of ideas. However, merger applied when idea and expression were inseparable, but scènes à faire applied despite separability where an external common setting caused use of common elements and thus similarity of expression. Limits of doctrine The doctrine must be a matter of degree—that is, operate on a continuum. Consider the Second Circuit's ruling that the scène à faire for a movie about the South Bronx would need to feature drunks, prostitutes, vermin (rats, in the accused and copyrighted works), and derelict cars. 
The principle must have a limit, however, so that something is outside the scènes à faire doctrine for South Bronx movies. Perhaps cockroaches, gangs, and muggings are also part of the South Bronx scène à faire, but further similarity, such as the film having as characters "a slumlord with a heart of gold and a policeman who is a Zen Buddhist and lives in a garage", surely goes beyond the South Bronx scène à faire. There must be some expression possible even in a cliché-ridden genre. Cases Cain v. Universal Pictures, 47 F.Supp. 1013 (United States District Court for the Southern District of California 1942) This was the case where the term was introduced, when the writer James M. Cain sued Universal Pictures, the scriptwriter and the director for copyright infringement in connection with the film When Tomorrow Comes. Cain claimed a scene in his book where two protagonists take refuge from a storm in a church had been copied in a scene depicting the same situation in the movie. Judge Leon René Yankwich ruled that there was no resemblance between the scenes in the book and the film other than incidental "scènes à faire", or natural similarities due to the situation. Walker v. Time Life Films, Inc., 784 F.2d 44 (2d Cir. 1986) After the release of the film Fort Apache, The Bronx, author Thomas Walker filed a lawsuit against one of the production companies, Time-Life Television Films (legal owner of the script), claiming that the producers infringed on his book Fort Apache (New York: Crowell, 1976). Among other things, Walker, the plaintiff, argued that: "both the book and the film begin with the murder of a black and a white policeman with a handgun at close range; both depict cockfights, drunks, stripped cars, prostitutes and rats; both feature as central characters third- or fourth-generation Irish policemen who live in Queens and frequently drink; both show disgruntled, demoralized police officers and unsuccessful foot chases of fleeing criminals." But the United States Court of Appeals for the Second Circuit ruled that these are stereotypical ideas, and that United States copyright law does not protect concepts or ideas. The court ruling stated: "the book Fort Apache and the film Fort Apache: The Bronx were not substantially similar beyond [the] level of generalized or otherwise nonprotectible ideas, and thus [the] latter did not infringe copyright of [the] former." Joshua Ets-Hokin v. Skyy Spirits Inc., 225 F.3d 1068 (9th Cir. 2000) Another significant case in United States law was Ets-Hokin v. Skyy Spirits, in which scènes à faire was upheld as an affirmative defense by the United States Court of Appeals for the Ninth Circuit. The case involved a commercial photographer, Joshua Ets-Hokin, who sued SKYY vodka when another photographer created advertisements with a substantially similar appearance to work he had done for them in the past. It was established that the similarity between his work and the later works of the photographer was largely mandated by the limited range of expression possible; within the constraints of a photo shoot for a commercial product there are only so many ways one may photograph a vodka bottle. In light of this, to establish copyright infringement, the two photos would have been required to be virtually identical. 
The originality of the later work was established by such minor differences as different shadows and angles. Gates Rubber Co. v. Bando Chemical Industries, Ltd., 9 F.3d 823 (10th Cir. 1993) A significant scènes à faire case in the computer program context is Gates v. Bando. The court explained the policy and application of the doctrine to computer program copyright infringement cases in these terms: Under the scènes à faire doctrine, we deny protection to those expressions that are standard, stock, or common to a particular topic or that necessarily follow from a common theme or setting. Granting copyright protection to the necessary incidents of an idea would effectively afford a monopoly to the first programmer to express those ideas. Furthermore, where a particular expression is common to the treatment of a particular idea, process, or discovery, it is lacking in the originality that is the sine qua non for copyright protection. The scènes à faire doctrine also excludes from protection those elements of a program that have been dictated by external factors. In the area of computer programs these external factors may include: hardware standards and mechanical specifications, software standards and compatibility requirements, computer manufacturer design standards, target industry practices and demands, and computer industry programming practices. RG Anand v. M/s Deluxe Films, AIR 1978 SC 1613 The plaintiff was the writer and producer of a play called "Hum Hindustani" that was produced in the period of 1953–1955. The play was based on the evils of provincialism. The defendant in 1956 produced a film called "New Delhi". One of the themes of the film was provincialism, too. While evaluating whether or not the defendant had infringed the plaintiff's copyright, the Supreme Court of India held: There can be no copyright in an idea, subject matter, themes, plots or historical or legendary facts and violation of the copyright in such cases is confined to the form, manner and arrangement and expression of the idea by the author of the copyright work. (emphasis supplied) Therefore, the court held that there is a standard way of dealing with the theme of provincialism, and there can be no copyright over that theme. Consequently, a question of infringement does not even arise. See also Fair use Idea-expression divide References Further reading Frequently Asked Questions (and Answers) about Fan Fiction, Chilling Effects. Legal doctrines and principles French legal terminology Copyright law
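To make the article's earlier variable-declaration point concrete, here is a hypothetical file-handling program (the names and structure are invented for this sketch, not drawn from any cited case). Under the doctrine, declarations like these are so strongly dictated by the task that their recurrence in two independently written programs would not, by itself, indicate copying:

```cpp
#include <cstdio>

int main() {
    // Stock declarations for any program that copies one file to another:
    FILE* input = std::fopen("in.txt", "r");    // source file handle
    FILE* output = std::fopen("out.txt", "w");  // destination file handle
    long bytesCopied = 0;                       // running byte count
    int c;                                      // current character (int, to hold EOF)

    if (!input || !output) return 1;
    while ((c = std::fgetc(input)) != EOF) {
        std::fputc(c, output);
        ++bytesCopied;
    }
    std::printf("%ld bytes copied\n", bytesCopied);
    std::fclose(input);
    std::fclose(output);
    return 0;
}
```

Only more original structure beyond such dictated elements could support an infringement claim.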
34164583
https://en.wikipedia.org/wiki/History%20of%20computer%20clusters
History of computer clusters
The history of computer clusters is best captured by a footnote in Greg Pfister's In Search of Clusters: “Virtually every press release from DEC mentioning clusters says ‘DEC, who invented clusters...’. IBM did not invent them either. Customers invented clusters, as soon as they could not fit all their work on one computer, or needed a backup. The date of the first is unknown, but it would be surprising if it was not in the 1960s, or even late 1950s.” The formal engineering basis of cluster computing as a means of doing parallel work of any sort was arguably invented by Gene Amdahl of IBM, who in 1967 published what has come to be regarded as the seminal paper on parallel processing: Amdahl's Law. Amdahl's Law describes mathematically the speedup one can expect from parallelizing any given otherwise serially performed task on a parallel architecture. This article defined the engineering basis for both multiprocessor computing and cluster computing, where the primary differentiator is whether or not the interprocessor communications are supported "inside" the computer (on for example a customized internal communications bus or network) or "outside" the computer on a commodity network. Consequently, the history of early computer clusters is more or less directly tied into the history of early networks, as one of the primary motivations for the development of a network was to link computing resources, creating a de facto computer cluster. Packet switching networks were conceptually invented by the RAND corporation in 1962. Using the concept of a packet switched network, the ARPANET project succeeded in creating in 1969 what was arguably the world's first commodity-network based computer cluster by linking four different computer centers (each of which was something of a "cluster" in its own right, but probably not a commodity cluster). The ARPANET project grew into the Internet—which can be thought of as "the mother of all computer clusters" (as the union of nearly all of the compute resources, including clusters, that happen to be connected). It also established the paradigm in use by all computer clusters in the world today—the use of packet-switched networks to perform interprocessor communications between processor (sets) located in otherwise disconnected frames. The development of customer-built and research clusters proceeded hand in hand with that of both networks and the Unix operating system from the early 1970s, as both TCP/IP and the Xerox PARC project created and formalized protocols for network-based communications. The Hydra operating system was built for a cluster of DEC PDP-11 minicomputers called C.mmp at Carnegie Mellon University in 1971. However, it was not until circa 1983 that the protocols and tools for easily doing remote job distribution and file sharing were defined (largely within the context of BSD Unix, as implemented by Sun Microsystems) and hence became generally available commercially, along with a shared filesystem. The first commercial clustering product was ARCnet, developed by Datapoint in 1977. ARCnet was not a commercial success and clustering per se did not really take off until Digital Equipment Corporation released their VAXcluster product in 1984 for the VAX/VMS operating system. The ARCnet and VAXcluster products not only supported parallel computing, but also shared file systems and peripheral devices. The idea was to provide the advantages of parallel processing, while maintaining data reliability and uniqueness. 
VAXcluster, now VMScluster, is still available on OpenVMS running on Alpha, Itanium and x86-64 systems. Two other noteworthy early commercial clusters were the Tandem Himalaya (a circa 1994 high-availability product) and the IBM S/390 Parallel Sysplex (also circa 1994, primarily for business use). No history of commodity computer clusters would be complete without noting the pivotal role played by the development of Parallel Virtual Machine (PVM) software in 1989. This open-source software, based on TCP/IP communications, enabled the instant creation of a virtual supercomputer—a high performance compute cluster—made out of any TCP/IP connected systems. Free-form heterogeneous clusters built on top of this model rapidly achieved total throughput in FLOPS that greatly exceeded that available even with the most expensive "big iron" supercomputers. PVM and the advent of inexpensive networked PCs led, in 1993, to a NASA project to build supercomputers out of commodity clusters. In 1995 the Beowulf cluster—a cluster built on top of a commodity network for the specific purpose of "being a supercomputer" capable of performing tightly coupled parallel HPC computations—was invented, which spurred the independent development of grid computing as a named entity, although Grid-style clustering had been around at least as long as the Unix operating system and the ARPANET, whether or not it, or the clusters that used it, were named. See also History of supercomputing References Concurrent computing History of computing hardware Parallel computing
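For reference, the speedup relation from Amdahl's 1967 paper discussed above is conventionally written as follows (a standard formulation, not a quotation from the paper), where p is the fraction of the work that can be parallelized and s is the speedup achieved on that fraction:

$$ S(s) = \frac{1}{(1 - p) + \dfrac{p}{s}} $$

For example, with p = 0.9 and s = 16 processors, the overall speedup is 1/(0.1 + 0.9/16) ≈ 6.4, far below 16; this is why cluster efficiency depends on keeping the serial fraction of a workload small.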
9285884
https://en.wikipedia.org/wiki/Delta%20ISO
Delta ISO
A Delta ISO is used to update an ISO image that contains RPM Package Manager (RPM) files. It makes use of DeltaRPMs (a form of delta compression) for RPMs that have changed between the old and new versions of the ISO. Delta ISOs can save disk space and download time, as a Delta ISO contains only what was updated in the new version of the ISO. After downloading the Delta ISO, a user can apply it to the outdated ISO to produce the new one (see the sketch below). Some RPM-based Linux distributions, such as Fedora and openSUSE, make use of this technique. External links Fedora openSUSE Linux package management-related software SUSE Linux
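A sketch of the workflow, assuming the makedeltaiso and applydeltaiso tools shipped with the deltarpm package (the ISO file names are placeholders; check the package's documentation for the exact argument order):

```
# On the distributor's side: create a delta from the old release ISO to the new one
makedeltaiso old.iso new.iso delta.iso

# On the user's side: reconstruct the new ISO from the old ISO plus the downloaded delta
applydeltaiso old.iso delta.iso new.iso
```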
44704090
https://en.wikipedia.org/wiki/Digimon%20Story%3A%20Cyber%20Sleuth
Digimon Story: Cyber Sleuth
is a role-playing video game developed by Media.Vision and published by Bandai Namco Entertainment that was released in Japan on March 12, 2015 for PlayStation Vita and PlayStation 4. Part of the Digimon franchise, the game is the fifth installment in the Digimon Story series, following 2011's Super Xros Wars, and the first to be released on home consoles. The game would be released in North America on February 2, 2016, also becoming the first installment of the Digimon Story series to be released in North America since 2007's Digimon World Dawn and Dusk, and the first to be released under its original title. A sequel, titled Digimon Story: Cyber Sleuth – Hacker's Memory, was released in Japan in 2017 and in Western territories in 2018. In July 2019, a port of the game and its sequel for Nintendo Switch and Windows was announced for release on October 18, 2019, as Digimon Story Cyber Sleuth: Complete Edition, although the PC version was released a day early. Gameplay Digimon Story: Cyber Sleuth is a role-playing game, played from a third-person perspective, in which players control a human character with the ability to command Digimon, digital creatures with their own unique abilities who do battle against other Digimon. Players can choose Palmon, Terriermon, or Hagurumon as their starting partner at the beginning of the game, with more able to be obtained as they make their way into new areas. A total of 249 unique Digimon are featured, including seven that were available as DLC throughout the life of the game, and two which were exclusive to the Western release. The title features a New Game Plus mode where players retain all of their Digimon, non-key items, money, memory, sleuth rank, scan percentages, and Digifarm progress. The Complete Edition includes the 92 new Digimon from Hacker's Memory, for a total of 341 Digimon. Plot Players assume the role of Takumi Aiba (male) or Ami Aiba (female), a young Japanese student living in Tokyo while their mother, a news reporter, is working abroad. After receiving a message from a hacker, Aiba investigates the physical-interaction cyberspace network EDEN, where they meet Nokia Shiramine and Arata Sanada. The hacker gives them "Digimon Capture" programs and locks them in EDEN. While searching for an exit, Aiba meets Yuugo, leader of the hacker team "Zaxon"; Yuugo teaches Aiba how to use their Digimon Capture and tells them that Arata is a skilled hacker himself. Aiba meets up with Nokia and Arata, who unlocks a way out, but the three are then attacked by a mysterious creature that grabs Aiba and corrupts their logout process. Aiba emerges in the real world as a half-digitized entity and is rescued by detective Kyoko Kuremi, head of the Kuremi Detective Agency, which specializes in cyber-crimes. Aiba manifests an ability, Connect Jump, which allows them to travel into and through networks. Recognizing their utility, Kyoko helps Aiba stabilize their digital body and recruits them as her assistant. They investigate a hospital ward overseen by Kamishiro Enterprises, which owns and manages EDEN, and find it filled with patients of a phenomenon called "EDEN Syndrome," where users logged onto EDEN fall into a seemingly permanent coma. Aiba discovers their own physical body in the ward, before being confronted by a mysterious girl. The girl admits to knowing one of the other victims, and helps Aiba avoid Rie Kishibe, the current president of Kamishiro. 
The mysterious girl approaches Kyoko and Aiba and reveals herself as Yuuko Kamishiro, the daughter of Kamishiro Enterprises' former president, and requests they investigate her father's purported suicide. With the assistance of Goro Mayatoshi, a detective in the Tokyo Police Department and an old friend of Kyoko's father, Kyoko and Aiba gather evidence regarding illegal activity within Kamishiro. Kyoko's plans are thwarted when Kishibe holds a sudden press conference, admitting to the activity and terminating several non-essential employees as scapegoats, which causes Mayatoshi's superiors to call off the accusations. Aiba, Arata, Yuuko, and Kyoko take advantage of an EDEN preview event to hack into the Kamishiro servers, discovering a "Paradise Lost Plan," and that Yuuko's older brother is a victim of EDEN Syndrome, a casualty of a failed beta test eight years ago apparently covered up by Kamishiro. Nokia, with Aiba's help, reunites with an Agumon and Gabumon she met and bonded with in Kowloon; she learns from them that Digimon are not hacker programs, but living creatures from a "Digital World", and that Agumon and Gabumon came to EDEN for a purpose they can't remember. Nokia vows to help them recover their memories, but is hampered by her lack of fighting experience; after being soundly defeated by Yuugo's lieutenant Fei, she resolves to become stronger and forms her own group, the Rebels, to improve relations between humans and Digimon. This allows Agumon and Gabumon to digivolve into WarGreymon and MetalGarurumon, and gains her a large following, but Yuugo worries that she might interfere with his goal of protecting EDEN. Meanwhile, Aiba assists Arata in investigating "Digital Shift" phenomena occurring around Tokyo. They meet Akemi Suedou, who identifies the creature behind the Digital Shifts as an Eater: a mass of corrupted data that consumes users' mental data, making it responsible for the EDEN Syndrome and Aiba's half-digital state. Eaters have links to a "white boy ghost" that keeps appearing around them, and by "eating" data can evolve into different forms. Arata, discouraged after witnessing many friends become victims of EDEN Syndrome, decides to help Aiba upon learning the truth about their condition. As Aiba continues their investigations, Jimiken "Jimmy KEN," a Japanese rock idol and disgruntled Zaxon hacker, breaks away from Zaxon and forms a group called the "Demons." Jimiken hijacks Tokyo's television signals, broadcasting a music video overlaid with subliminal messaging to hypnotize users into logging onto EDEN and entering the Demons' stronghold. Aiba defeats Jimiken, who reveals the signal hijacking equipment was given to him by Rie Kishibe in exchange for his loyalty, but his account is destroyed by Fei before he can be further interrogated. Yuugo mobilizes hackers around EDEN to attempt a large-scale attack on Kamishiro Enterprises' high-security servers, codenamed "Valhalla." Arata intervenes, revealing he is the former leader of a hacker group that failed to hack the Valhalla server in the past, and initiates a battle between Yuugo's Zaxon hackers, his own group of veteran hackers, and Nokia's Rebels, supported by Aiba. The battle is interrupted when Rie unleashes Eaters in the server, revealing the entire event was a trap to get Yuugo to accumulate Eater prey, and forcibly logs out Yuugo, who is actually Yuuko using a false EDEN avatar modeled and named after her older brother. 
Rie informs Yuuko that she was using the avatar to manipulate her actions, and begins extracting Yuuko's memories. Nokia's determination during the battle causes WarGreymon and MetalGarurumon to DNA Digivolve into Omnimon, who rescues the survivors and remembers that his real purpose in coming to EDEN was to save the Digital World from the Eaters, which were created from negative human emotions taking form in EDEN; the Digital World's ruler, King Drasil, determined that humans were the cause, and ordered the Royal Knights to investigate and put a stop to the attacks, but the Royal Knights became split, with some Knights advocating destroying humanity to wipe out the Eaters at their source, and others favoring a peaceful solution. Deducing that Rie is allied with the Knights who advocate destroying humanity, Aiba and their friends chase after her. When they confront Rie, she reveals her true identity as the Royal Knight Crusadermon and reveals that the "Paradise Lost Plan" is meant to gather energy via Tokyo's digital network with which to open the gate between the physical and digital world as a preface to a full-scale invasion. Arata closes the dimensional gate by causing a citywide blackout and encounters Suedou, who reveals that he was the hacker who developed and distributed the Digimon Capture program. The group attempts to save Yuuko, who has been absorbed and held captive by an Eater, "Eve." Aiba rescues Yuuko but is pulled into a digital void, where they meet the real Yuugo. Yuugo wishes Aiba and their friends well, but asks them not to search for him, before a mysterious Digimon rescues Aiba from the digital void. Aiba returns to the real world to discover a week has passed and that Tokyo is besieged by a massive Digital Shift as a result of Crusadermon's actions, allowing Digimon to run amok in the real world, and discovers that their half-digital body is beginning to destabilize as their mental data disperses. The group begins to search for the other Royal Knights in the hopes of convincing them to join their side instead of trying to destroy humanity. In the process, Aiba and Yuuko encounter a former colleague of Suedou, who gives them details regarding the EDEN Beta test eight years ago; Kamishiro sent five children, one of whom was Yuugo, into the Beta as a demonstration to investors, but something went wrong and the test was aborted. Four of the children logged out successfully, but were heavily traumatized, and Yuugo never regained consciousness, becoming the first EDEN Syndrome victim. To cover up the disaster, Suedou had the memories of the remaining four children erased. Shortly after, Aiba and Arata encounter Suedou inside another Digital Shift; Arata is attacked and consumed by an Eater, but Suedou rescues him and sparks a specific memory in his mind. Arata, now suddenly obsessed with becoming stronger, leaves with Suedou and cuts all contact. Aiba and their friends manage to recruit most of the Royal Knights to their cause, and Aiba tracks down and confronts Crusadermon, seemingly defeating them, but, upon trying to return to Kyoko, falls into a Digital Shift containing what appears to be a recreation of the EDEN Beta test from eight years ago. Crusadermon, still alive, reveals the recreation to be a trap to capture Aiba and tells them the truth of the Beta test incident: the four other children who entered the beta with Yuugo were none other than Arata, Nokia, Yuuko, and Aiba themselves. 
When the children first entered the beta, they found a portal leading from EDEN to the Digital World. However, after they opened it, an Eater followed them and consumed Yuugo; the other children, frightened, fled the Digital World back to EDEN, leaving the portal open and allowing more Eaters to enter the Digital World. Overcome by despair at the revelation and already suffering from deterioration, Aiba allows their data to be absorbed into the simulation. Meanwhile, Nokia and Yuuko, worried about Aiba after they failed to return, discover the entrance to the Beta test recreation while searching for them. After learning the events of the Beta test incident, they locate what is left of Aiba, but are ambushed by Crusadermon. To save them, Kyoko enters the Digital Shift and reveals her true form as "Alphamon", the 13th Royal Knight, and assists them in defeating Crusadermon and restoring Aiba. Alphamon then explains that the "real" Kyoko and Rie were humans who were attacked by Eaters and afflicted with EDEN Syndrome, and that Alphamon and Crusadermon possessed their comatose bodies to hide in the human world, each unaware of the other. Despite Crusadermon's defeat, however, Alphamon informs Aiba that Leopardmon, Crusadermon's leader, is collecting power in order to evolve into an even more dangerous form, intending to destroy humanity themselves, and that they must be stopped before the evolution is complete. Aiba and Alphamon head to the Tokyo Metropolitan Office to stop Leopardmon but are confronted by Arata, who reveals Suedou had sparked his memory of the beta test incident and his despair at being unable to save Yuugo. Arata transforms completely into an Eater, "Adam," and attempts to assimilate Aiba, but Aiba defeats the Eater and saves Arata as they did with Yuuko. After stopping Leopardmon, Suedou appears and tells the group that they can stop the Eaters by traveling to the Digital World and extracting Yuugo from the core of the "Mother Eater," which will make it so that Eaters, and their effects on both worlds, never existed. The group arrives to find that the Mother Eater has completely taken over King Drasil; after defeating it, Aiba rescues Yuugo, but Yuugo reveals that he had been acting as a limiter on the Mother Eater's actions, and that without him as a central conscience it will simply eat everything indiscriminately, without restraint. Suedou takes the opportunity to merge with the Mother Eater himself and becomes its new conscience, hoping to merge the Digital and Physical Worlds as one and recreate a world without sadness or misery. 
Alphamon and the other Digimon bid farewell with a promise to meet again, and Aiba accompanies their friends back to the human world, but on the way back, Aiba's deteriorated half-cyber body dissolves before their friends' eyes, leaving behind only their Digivice. In the real world, only Nokia, Arata, Yuugo, and Yuuko remember the events while Aiba is still comatose; Yuuko's father is alive again, Rie is an ordinary human woman, Suedou was never born, and Kyoko, despite there being evidence of her existence, cannot be found by the remaining four friends, who are still anxiously waiting for Aiba to wake up. Eventually, Aiba's scrap data is found by Alphamon, who has Aiba's Digimon team gather data from their memories to recreate Aiba's mind and restore them to their body. After being restored, Aiba meets Kyoko, who has no memory of them but, sensing a familiarity, invites them to work as her assistant. Development Digimon Story: Cyber Sleuth was first announced for the PlayStation Vita in a December 2013 issue of Japanese V Jump magazine, although its projected release date was still more than a year away. A teaser trailer was revealed near the end of the month on the official website, with a release window of Spring 2015 slated in a later September 2014 issue of V Jump. The game was developed by Media.Vision, and features character designs by Suzuhito Yasuda, known for his work on Shin Megami Tensei: Devil Survivor and Durarara!!. In June 2015, Amazon Canada listed a North American version of Digimon Story: Cyber Sleuth under the title "Digimon World: Cyber Sleuth" for the PlayStation 4, hinting at a release in the region. Bandai Namco Games later confirmed English-language releases in North America and Europe for 2016, as a retail title for the PlayStation 4 and a digital release for the PlayStation Vita. An English trailer was showcased at the 2015 Tokyo Game Show, with a final North American release date of February 2, 2016 announced the following month. Pre-order DLC bonuses for the North American physical PlayStation 4 version include two Digimon exclusive to the Western release - making for a total of 11 DLC Digimon - in-game items, and costumes for Agumon, whilst the digital Vita version included the same pre-order items with two PlayStation Vita themes. Seven new Digimon were added as free DLC on March 10. The game's music was composed by Masafumi Takada, with sound design by Jun Fukuda. Purchasers of the Japanese version of the game received a code for a free digital download of 13 tracks from the game, grouped together as the Digimon Story: Cyber Sleuth Bonus Original Soundtrack. An official commercial soundtrack containing 60 tracks from the game was released in Japan on March 29, 2015 by Sound Prestige Records. Cyber Sleuth was removed from the US PSN store on both PS4 and PS Vita on December 20, 2018; it remained available in Europe and Southeast Asia, however, until it was delisted there at the end of January 2019. The Nintendo Switch and PC versions were developed by h.a.n.d. Cyber Sleuth is considered to be a reboot of the Digimon Story series and was developed with player feedback in mind. Kazumasa Habu decided to stick to the base concepts of the Story series, which has simple turn-based battles with a levelling system, as that would allow players to play without having to read instructions. 
As the focus of the Story series was to collect and train Digimon, it was felt to be important to make sure Cyber Sleuth had at least the same number of trainable Digimon as the original Digimon Story game, Digimon World DS. With Cyber Sleuth having 3D models instead of sprites, this was tough, but the team was able to achieve this goal thanks to the work of the developers, Media.Vision. Feedback received from players was that they wanted to be able to see their Digimon during battle, which earlier games didn't allow; this is why 3D models were chosen for Cyber Sleuth. Due to the experience of creating models for the Digimon Adventure video game, Habu was confident that knowledge could be carried over into making models for Cyber Sleuth as well. The attack and victory animations in Cyber Sleuth were very popular and highly admired, their quality owing to a member of the development staff who was a big Digimon fan and put a lot of effort into studying even minor Digimon. When Cyber Sleuth was in development, overseas distributors were not open to the idea of localising Digimon games because, according to them, the games were aimed at children and the anime wasn't popular; Cyber Sleuth was eventually localised because of the petitions signed by fans calling for Digimon games to be localised again. Reception The game holds a score of 75/100 on the review aggregator Metacritic, indicating generally favorable reviews. Digimon Story: Cyber Sleuth received a 34 out of 40 total score from Japanese magazine Weekly Famitsu, based on individual scores of 8, 9, 9, and 8. Destructoid felt that the game wasn't much of a departure from older role-playing games, stating "The battle system is basically everything you've seen before from the past few decades of JRPGs," which includes random encounters that are "either deliciously or inexcusably old-school, depending on your tastes." While PlayStation LifeStyle felt that the game "isn’t a perfect video game interpretation of Bandai Namco’s long-running franchise," criticizing its linear dungeon design and "cheap" interface, its gameplay improvements were a step in the right direction "for fans who have been waiting to see the series get on Pokémon’s level." The website also commended the colorful art and character design of Suzuhito Yasuda, declaring that "Yasuda’s art brings crucial style and life to Digimon’s game series, which had spent previous years sort of fighting to establish its identity." Hardcore Gamer thought that the game was an important step forward for the franchise, stating "It isn’t perfect; its story and script could use some fine-tuning, and the world needs to be more interesting, but overall, this is a solid first step." Sales The PlayStation Vita version of Digimon Story: Cyber Sleuth sold 76,760 copies in its debut week in Japan, becoming the third best-selling title for the week. Although initial sales were lower than those of its predecessor, Digimon World Re:Digitize, Cyber Sleuth managed to sell approximately 91.41% of all physical copies shipped to the region, and would go on to sell a total of 115,880 copies by the end of 2015, becoming the 58th best-selling software title that year. In the UK, Digimon Story: Cyber Sleuth was the 11th best-selling game in the week of release. The PlayStation Vita version was the best-selling digital title in North America and Europe. 
The game also performed well in Latin American countries (#2 in Brazil, #3 in Mexico, #3 in Argentina, #3 in Chile, #3 in Costa Rica, #4 in Guatemala, #6 in Peru, #9 in Colombia), and the PlayStation 4 version was the 20th best-selling digital title in North America and the 19th in Europe on the PlayStation Store in the month of its release. By May 2019, Cyber Sleuth had sold over 800,000 copies worldwide. The Switch port of Complete Edition sold 4,536 copies in its first week in Japan. By October 2020, Cyber Sleuth and Hacker's Memory had shipped more than 1.5 million units worldwide combined. Notes References External links 2015 video games Bandai Namco Entertainment franchises Detective video games Digimon video games Multiplayer and single-player video games Nintendo Switch games PlayStation Vita games PlayStation 4 games Postcyberpunk Role-playing video games Video games developed in Japan Video games featuring parallel universes Video games featuring protagonists of selectable gender Video games scored by Masafumi Takada Video games with cross-platform play Windows games Video games set in Tokyo
48720560
https://en.wikipedia.org/wiki/DLT%20Solutions
DLT Solutions
DLT Solutions is an American company in Herndon, Virginia. It is a value-added reseller of software and hardware and a professional services provider to the federal, state, and local governments as well as educational institutions. DLT Solutions was founded in 1991 by Thomas Marrelli, who died in 2002. In 2005, his heirs sold DLT Solutions to then–DLT President and Chief Executive Officer Rick Marcotte and then-Chief Financial Officer Craig Adler. DLT Solutions had revenue of roughly $800 million in 2012, $857 million in 2013 and over $900 million in 2014. In 2015, DLT was sold by majority owner MTZP Capital Group to private equity firm Millstein & Co. In 2015, the company had 230 employees in the Washington metropolitan area. The company primarily resells software and hardware from Autodesk, Dell Software, Red Hat, Oracle Corporation, and Symantec. In 2014, it released two products, CODEvolved and Cloud Navigator, to help public-sector employees learn and implement cloud computing products. History DLT Solutions was founded in 1991 by Thomas J. Marrelli. DLT Solutions is a value-added reseller of software and hardware and professional services provider in Herndon, Virginia. In July 2004, Rick Marcotte was appointed DLT's president and CEO after having been on its board of directors for a year. After Marrelli died in 2002, his heirs in 2005 sold the company to DLT President and Chief Executive Officer Rick Marcotte and Chief Financial Officer Craig Adler. During the acquisition, SunTrust Banks handled the financing, while BB&T Capital Markets and Windsor Group advised DLT Solutions. Marcotte had worked with the founder, Thomas Marrelli, in the mid-1990s when Marrelli led Exide Electronics' federal systems branch and Marcotte was the new business development director. In 2005, the company resold products from Oracle Corporation, Autodesk, Quest Solutions, Red Hat and Adobe Systems. The company mostly works with groups in the public sector such as the United States Department of Defense and local and state administrations. In 2013, after federal spending shrank, DLT's work with local and state governments increased more than its work with the federal government. DLT won a contract with Colorado to install Oracle software for its health insurance marketplace. In 2014, DLT's main vendors were Autodesk, Dell Software, Red Hat, Oracle Corporation, and Symantec, which together accounted for 80% of its business. Millstein & Co. acquisition On January 30, 2015, DLT Solutions announced that private equity firm Millstein & Co. had bought it. Prior to Millstein's purchase, MTZP Capital Group had been the majority owner. In the acquisition agreement, Millstein & Co. operating partner Alan Marc Smith became the President and CEO of DLT Solutions, taking over from Rick Marcotte, who had been DLT Solutions President and CEO for over a decade. After the acquisition, Marcotte became vice chairman of DLT Solutions' board and a consultant. In the three years prior to the Millstein acquisition, DLT Solutions had revenue of roughly $800 million in 2012, $857 million in 2013 and over $900 million in 2014. In the several months before the acquisition, DLT won multimillion-dollar deals with the Social Security Administration, the United States Department of the Navy, NASA SEWP, the United States Department of Defense, and Internet2. In 2015, the company had 230 employees in the Washington metropolitan area. 
Products In April 2014, DLT released CODEvolved, a platform-as-a-service product that integrates Red Hat's OpenShift, Amazon Web Services, and DLT's domain knowledge. The technology is intended to help DLT customers quickly learn cloud computing products. In June 2014, DLT released DLT Cloud Navigator, a suite of tools that helps public-sector entities implement cloud technology. Cloud Navigator contains products from Oracle Corporation, Red Hat, and Amazon.com. References External links Official website Companies based in Fairfax County, Virginia Software companies established in 1991 Herndon, Virginia 1991 establishments in Virginia American companies established in 1991
343230
https://en.wikipedia.org/wiki/Autopilot
Autopilot
An autopilot is a system used to control the path of an aircraft, marine craft or spacecraft without requiring constant manual control by a human operator. Autopilots do not replace human operators. Instead, the autopilot assists the operator's control of the vehicle, allowing the operator to focus on broader aspects of operations (for example, monitoring the trajectory, weather and on-board systems). When present, an autopilot is often used in conjunction with an autothrottle, a system for controlling the power delivered by the engines. An autopilot system is sometimes colloquially referred to as "George" (e.g. "we'll let George fly for a while"). The etymology of the nickname is unclear: some claim it is a reference to inventor George De Beeson, who patented an autopilot in the 1930s, while others claim that Royal Air Force pilots coined the term during World War II to symbolize that their aircraft technically belonged to King George VI. First autopilots In the early days of aviation, aircraft required the continuous attention of a pilot to fly safely. As aircraft range increased, allowing flights of many hours, the constant attention led to serious fatigue. An autopilot is designed to perform some of the tasks of the pilot. The first aircraft autopilot was developed by Sperry Corporation in 1912. The autopilot connected a gyroscopic heading indicator and attitude indicator to hydraulically operated elevators and rudder. (Ailerons were not connected as wing dihedral was counted upon to produce the necessary roll stability.) It permitted the aircraft to fly straight and level on a compass course without a pilot's attention, greatly reducing the pilot's workload. Lawrence Sperry (the son of famous inventor Elmer Sperry) demonstrated it in 1914 at an aviation safety contest held in Paris. Sperry demonstrated the credibility of the invention by flying the aircraft with his hands away from the controls and visible to onlookers. Elmer Sperry Jr., the brother of Lawrence Sperry, and Capt Shiras continued work on the same autopilot after the war, and in 1930 they tested a more compact and reliable autopilot which kept a US Army Air Corps aircraft on a true heading and altitude for three hours. In 1930, the Royal Aircraft Establishment in the United Kingdom developed an autopilot called a pilots' assister that used a pneumatically-spun gyroscope to move the flight controls. The autopilot was further developed to include, for example, improved control algorithms and hydraulic servomechanisms. Adding more instruments such as radio-navigation aids made it possible to fly at night and in bad weather. In 1947 a US Air Force C-54 made a transatlantic flight, including takeoff and landing, completely under the control of an autopilot. Bill Lear developed his F-5 automatic pilot and automatic approach control system, and was awarded the Collier Trophy for 1949. In the early 1920s, the Standard Oil tanker J.A. Moffet became the first ship to use an autopilot. The Piasecki HUP-2 Retriever was the first production helicopter with an autopilot. The lunar module digital autopilot of the Apollo program was an early example of a fully digital autopilot system in spacecraft. Modern autopilots Not all of the passenger aircraft flying today have an autopilot system. Older and smaller general aviation aircraft especially are still hand-flown, and even small airliners with fewer than twenty seats may also be without an autopilot as they are used on short-duration flights with two pilots. 
The installation of autopilots in aircraft with more than twenty seats is generally made mandatory by international aviation regulations. There are three levels of control in autopilots for smaller aircraft. A single-axis autopilot controls an aircraft in the roll axis only; such autopilots are also known colloquially as "wing levellers," reflecting their single capability. A two-axis autopilot controls an aircraft in the pitch axis as well as roll, and may be little more than a wing leveller with limited pitch oscillation-correcting ability; or it may receive inputs from on-board radio navigation systems to provide true automatic flight guidance once the aircraft has taken off until shortly before landing; or its capabilities may lie somewhere between these two extremes. A three-axis autopilot adds control in the yaw axis and is not required in many small aircraft. Autopilots in modern complex aircraft are three-axis and generally divide a flight into taxi, takeoff, climb, cruise (level flight), descent, approach, and landing phases. Autopilots that automate all of these flight phases except taxi and takeoff exist. An autopilot-controlled landing on a runway and controlling the aircraft on rollout (i.e. keeping it on the centre of the runway) is known as a CAT IIIb landing or Autoland, available on many major airports' runways today, especially at airports subject to adverse weather phenomena such as fog. Landing, rollout, and taxi control to the aircraft parking position is known as CAT IIIc. This is not used to date, but may be used in the future. An autopilot is often an integral component of a Flight Management System. Modern autopilots use computer software to control the aircraft. The software reads the aircraft's current position, and then controls a flight control system to guide the aircraft. In such a system, besides classic flight controls, many autopilots incorporate thrust control capabilities that can control throttles to optimize the airspeed. The autopilot in a modern large aircraft typically reads its position and the aircraft's attitude from an inertial guidance system. Inertial guidance systems accumulate errors over time. They will incorporate error reduction systems such as the carousel system that rotates once a minute so that any errors are dissipated in different directions and have an overall nulling effect. Error in gyroscopes is known as drift. This is due to physical properties within the system, be it mechanical or laser guided, that corrupt positional data. Disagreements between the drifting inertial estimate and other position sources are resolved with digital signal processing, most often a six-dimensional Kalman filter. The six dimensions are usually roll, pitch, yaw, altitude, latitude, and longitude. Aircraft may fly routes that have a required performance factor; therefore, the amount of error or actual performance factor must be monitored in order to fly those particular routes. The longer the flight, the more error accumulates within the system. Radio aids such as DME, DME updates, and GPS may be used to correct the aircraft position. 
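The fusion step can be illustrated with a deliberately simplified sketch: a one-dimensional Kalman filter blending a drifting inertial position track with noisy external fixes (such as GPS or DME). Real systems estimate six states, as noted above; the function name, data layout and noise variances here are illustrative assumptions, not any avionics vendor's implementation.

def fuse_positions(ins_track, external_fixes, q=4.0, r=100.0):
    """One-dimensional Kalman filter (a sketch, not flight software).
    ins_track: positions in metres obtained by integrating inertial sensors
    external_fixes: noisy position fixes in metres (e.g. from GPS or DME)
    q: assumed process-noise variance (how fast inertial drift grows per step)
    r: assumed measurement-noise variance of an external fix
    """
    x = external_fixes[0]        # initial state estimate
    p = r                        # initial estimate variance
    prev_ins = ins_track[0]
    estimates = []
    for ins, fix in zip(ins_track, external_fixes):
        # Predict: propagate the state with the inertially measured
        # displacement; uncertainty grows because drift accumulates.
        x += ins - prev_ins
        p += q
        prev_ins = ins
        # Update: correct with the external fix, weighted by the Kalman gain.
        k = p / (p + r)          # gain near 1 trusts the fix, near 0 trusts inertia
        x += k * (fix - x)
        p *= 1.0 - k
        estimates.append(x)
    return estimates

The gain k settles at a value that balances the two error sources, which is exactly the disagreement-resolution role the paragraph above assigns to the Kalman filter.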
Control Wheel Steering An option midway between fully automated flight and manual flying is Control Wheel Steering (CWS). Although it is becoming less used as a stand-alone option in modern airliners, CWS is still a function on many aircraft today. Generally, an autopilot that is CWS equipped has three positions: off, CWS, and CMD. In CMD (Command) mode the autopilot has full control of the aircraft, and receives its input from either the heading/altitude setting, radio and navaids, or the FMS (Flight Management System). In CWS mode, the pilot controls the autopilot through inputs on the yoke or the stick. These inputs are translated to a specific heading and attitude, which the autopilot will then hold until instructed to do otherwise. This provides stability in pitch and roll. Some aircraft employ a form of CWS even in manual mode, such as the MD-11 which uses a constant CWS in roll. In many ways, a modern Airbus fly-by-wire aircraft in Normal Law is always in CWS mode. The major difference is that in this system the limitations of the aircraft are guarded by the flight computer, and the pilot cannot steer the aircraft past these limits. Computer system details The hardware of an autopilot varies between implementations, but is generally designed with redundancy and reliability as foremost considerations. For example, the Rockwell Collins AFDS-770 Autopilot Flight Director System used on the Boeing 777 uses triplicated FCP-2002 microprocessors which have been formally verified and are fabricated in a radiation-resistant process. Software and hardware in an autopilot are tightly controlled, and extensive test procedures are put in place. Some autopilots also use design diversity. In this safety feature, critical software processes will not only run on separate computers and possibly even using different architectures, but each computer will run software created by different engineering teams, often being programmed in different programming languages. It is generally considered unlikely that different engineering teams will make the same mistakes. As the software becomes more expensive and complex, design diversity is becoming less common because fewer engineering companies can afford it. The flight control computers on the Space Shuttle used this design: there were five computers, four of which redundantly ran identical software, and a fifth backup running software that was developed independently. The software on the fifth system provided only the basic functions needed to fly the Shuttle, further reducing any possible commonality with the software running on the four primary systems. Stability augmentation systems A stability augmentation system (SAS) is another type of automatic flight control system; however, instead of maintaining the aircraft's required attitude or flight path, the SAS will move the aircraft control surfaces to damp unacceptable motions. SAS automatically stabilizes the aircraft in one or more axes. The most common type of SAS is the yaw damper which is used to reduce the Dutch roll tendency of swept-wing aircraft. Some yaw dampers are part of the autopilot system while others are stand-alone systems. Yaw dampers use a sensor to detect how fast the aircraft is rotating (either a gyroscope or a pair of accelerometers), a computer/amplifier and an actuator. The sensor detects when the aircraft begins the yawing part of Dutch roll. A computer processes the signal from the sensor to determine the rudder deflection required to damp the motion. The computer tells the actuator to move the rudder in the opposite direction to the motion since the rudder has to oppose the motion to reduce it. The Dutch roll is damped and the aircraft becomes stable about the yaw axis. Because Dutch roll is an instability that is inherent in all swept-wing aircraft, most swept-wing aircraft need some sort of yaw damper. 
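The computation the damper's computer performs can be sketched as a simple rate-feedback law. The gain, washout time constant, deflection limit and update rate below are illustrative assumptions rather than values from any real aircraft; the washout (high-pass) filter is a common refinement, assumed here, that lets the damper oppose the Dutch-roll oscillation without fighting a deliberate, steady turn.

def yaw_damper_step(yaw_rate, washout_state, dt=0.02,
                    gain=-1.5, tau=4.0, max_deflection=10.0):
    """Return (rudder command in degrees, updated washout state).
    yaw_rate: sensed yaw rate in deg/s, from the gyroscope or accelerometer pair
    washout_state: slowly tracking average of yaw_rate, persisted between calls
    """
    # First-order washout: subtract the slowly varying component so only
    # the oscillatory (Dutch-roll) part of the yaw rate is fed back.
    washout_state += dt * (yaw_rate - washout_state) / tau
    filtered_rate = yaw_rate - washout_state
    # A negative gain commands the rudder against the sensed motion,
    # which is what damps the oscillation.
    command = gain * filtered_rate
    # Respect the actuator's physical deflection limit.
    command = max(-max_deflection, min(max_deflection, command))
    return command, washout_state

Called every 20 milliseconds with the latest sensed rate and the state returned by the previous call, the loop drives the rudder against each yaw excursion, which is the damping behaviour described above.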
There are two types of yaw damper: the series yaw damper and the parallel yaw damper. The actuator of a parallel yaw damper will move the rudder independently of the pilot's rudder pedals while the actuator of a series yaw damper is clutched to the rudder control quadrant and will result in pedal movement when the rudder moves. Some aircraft have stability augmentation systems that will stabilize the aircraft in more than a single axis. The Boeing B-52, for example, requires both pitch and yaw SAS in order to provide a stable bombing platform. Many helicopters have pitch, roll and yaw SAS systems. Pitch and roll SAS systems operate much the same way as the yaw damper described above; however, instead of damping Dutch roll, they will damp pitch and roll oscillations to improve the overall stability of the aircraft. Autopilot for ILS landings Instrument-aided landings are defined in categories by the International Civil Aviation Organization, or ICAO. These are dependent upon the required visibility level and the degree to which the landing can be conducted automatically without input by the pilot. CAT I – This category permits pilots to land with a decision height of 200 feet (61 m) and a forward visibility or Runway Visual Range (RVR) of 550 metres (1,800 ft). Autopilots are not required. CAT II – This category permits pilots to land with a decision height between 200 feet and 100 feet (61 m and 30 m) and a RVR of 300 metres (980 ft). Autopilots have a fail passive requirement. CAT IIIa – This category permits pilots to land with a decision height as low as 50 feet (15 m) and a RVR of 200 metres (660 ft). It needs a fail-passive autopilot. There must be only a 10⁻⁶ probability of landing outside the prescribed area. CAT IIIb – As IIIa but with the addition of automatic roll out after touchdown incorporated with the pilot taking control some distance along the runway. This category permits pilots to land with a decision height less than 50 feet or no decision height and a forward visibility of 250 feet in Europe (76 metres; compare this to aircraft size, some of which are now over 70 metres long) or 300 feet in the United States. For a landing-without-decision aid, a fail-operational autopilot is needed. For this category some form of runway guidance system is needed: at least fail-passive but it needs to be fail-operational for landing without decision height or for RVR below 100 metres. CAT IIIc – As IIIb but without decision height or visibility minimums, also known as "zero-zero". Not yet implemented as it would require the pilots to taxi in zero-zero visibility. An aircraft capable of a CAT IIIb landing that is equipped with autobrake would be able to stop fully on the runway but would have no ability to taxi. Fail-passive autopilot: in case of failure, the aircraft stays in a controllable position and the pilot can take control of it to go around or finish the landing. It is usually a dual-channel system. Fail-operational autopilot: in case of a failure below alert height, the approach, flare and landing can still be completed automatically. It is usually a triple-channel system or dual-dual system. Radio-controlled models In radio-controlled modelling, and especially RC aircraft and helicopters, an autopilot is usually a set of extra hardware and software that deals with pre-programming the model's flight. 
See also Acronyms and abbreviations in avionics Gyrocompass Self-driving car References External links "How Fast Can You Fly Safely", June 1933, Popular Mechanics page 858 photo of Sperry Automatic Pilot and drawing of its basic functions in flight when set Avionics Aircraft instruments Uncrewed vehicles American inventions 1912 introductions
17945376
https://en.wikipedia.org/wiki/DHCPD
DHCPD
dhcpd (an abbreviation for "DHCP daemon") is a DHCP server program that operates as a daemon on a server to provide Dynamic Host Configuration Protocol (DHCP) service to a network. This implementation, also known as ISC DHCP, is one of the first and best known, but there are now a number of other DHCP server software implementations available. Clients may solicit an IP address from a DHCP server when they need one. The DHCP server then offers the "lease" of an IP address to the client, which the client is free to request or ignore. If the client requests it and the server acknowledges it, then the client is permitted to use that IP address for the "lease time" specified by the server. At some point before the lease expires, the client must re-request the same IP address if it wishes to continue to use it. Issued IP addresses are tracked by dhcpd through a record in the dhcpd.leases file. This allows the server to maintain state over restarts of the DHCP service, which could otherwise lead to duplicate IP addresses being issued if the server handed out the same IP address again while another client still had the right to use it. 
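The subnets the server manages, the address pools it may lease from, and the lease times are declared in the dhcpd.conf file. A minimal configuration for one subnet might look like the following sketch; the addresses and times are illustrative values, not recommendations.

subnet 192.168.1.0 netmask 255.255.255.0 {
  # pool of addresses the server may lease to clients on this subnet
  range 192.168.1.100 192.168.1.200;
  # options handed to clients along with their lease
  option routers 192.168.1.1;
  option domain-name-servers 192.168.1.1;
  # lease duration in seconds: the default when a client does not ask
  # for a specific time, and the maximum the server will ever grant
  default-lease-time 600;
  max-lease-time 7200;
}

With a configuration like this, each lease handed out from the 192.168.1.100–200 pool is recorded in dhcpd.leases with its expiry time, which is how the duplicate-address problem described above is avoided across restarts.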
This reference implementation of DHCP is developed by the Internet Systems Consortium and is supported on Linux, Mac OS X, FreeBSD, and Solaris. Remote access to a running instance of dhcpd is provided by the Object Management Application Programming Interface (OMAPI). This API allows manipulation of the internal state of a running instance of the dhcpd server or client. On the server side, this interface allows editing of registration information for managed nodes. Uses on the client include fetching configuration information, releasing and renewing leases, and changing which interfaces are managed by the DHCP client. ISC DHCP is in wide distribution; however, it is very mature software, and ISC is developing a new DHCP software system, Kea, which is intended to eventually replace it. Kea includes only a DHCP server (no client or relay yet) and is supported on the same platforms as ISC DHCP. It is distributed under the Mozilla Public License (MPL2.0). ISC DHCP adopted the Mozilla Public License (MPL2.0) with the release of 4.4.1. See also Comparison of DHCP server software References External links Configuring dhcpd on a wireless access point dhcpd section in the ISC website Official FTP repository Open Gitlab repository Knowledgebase articles on dhcpd Servers (computing) Software using the ISC license Unix network-related software Free network-related software
1036106
https://en.wikipedia.org/wiki/Flash%20mob%20computing
Flash mob computing
Flash mob computing or a flash mob computer is a temporary ad hoc computer cluster running specific software to coordinate the individual computers into one single supercomputer. A flash mob computer is distinct from other types of computer clusters in that it is set up and broken down on the same day or during a similarly brief amount of time, and involves many independent owners of computers coming together at a central physical location to work on a specific problem and/or social event. The flash mob computer derives its name from the more general term flash mob, which can mean any activity involving many people co-ordinated through virtual communities coming together for brief periods of time for a specific task or event. Flash mob computing is a more specific type of flash mob for the purpose of bringing people and their computers together to work on a single task or event. History The first flash mob computer was created on April 3, 2004 at the University of San Francisco using software written at USF called FlashMob (not to be confused with the more general term flash mob). The event, called FlashMob I, was a success. There was a call for computers on the computer news website Slashdot. An article in The New York Times, "Hey, Gang, Let’s Make Our Own Supercomputer", brought a lot of attention to the effort. More than 700 computers were brought to the gym at the University of San Francisco, and were wired to a network donated by Foundry Networks. At FlashMob I the participants were able to run a benchmark on 256 of the computers, and achieved a peak rate of 180 Gflops (billions of calculations per second), though this computation stopped three-quarters of the way through due to a node failure. The best complete run used 150 computers and resulted in 77 Gflops. FlashMob I was run off a bootable CD-ROM that ran a copy of Morphix Linux, which was only available for the x86 platform. Despite these efforts, the project was unable to achieve its original goal of momentarily running a cluster fast enough to enter the (November 2003) Top 500 list of supercomputers. The system would have had to provide at least 402.5 Gflops to match a Chinese cluster of 256 Intel Xeon nodes. For comparison, the fastest supercomputer at the time, Earth Simulator, provided 35,860 Gflops. Creators of flash mob computing Pat Miller was a research scientist at a national lab and adjunct professor at USF. His class on Do-It-Yourself Supercomputers evolved into FlashMob I from the original idea of every student bringing a commodity CPU or an Xbox to class to make an evanescent cluster at each meeting. Pat worked on all aspects of the FlashMob software. Greg Benson, USF Associate Professor of Computer Science, invented the name "flash mob computing", and first proposed the idea of wireless flash mob computers. Greg worked on the core infrastructure of the FlashMob run time environment. John Witchel (Stuyvesant High School '86) was a USF graduate student in Computer Science during the Spring of 2004. After talking to Greg about the issues of networking a stadium of wireless computers and listening to Pat lecture on what it takes to break the Top 500, John asked the simple question: "Couldn't we just invite people off the street and get enough power to break the Top 500?" And flash mob supercomputing was born. FlashMob I and the FlashMob software were John's master's thesis. See also Flash mob Supercomputer References Markoff, J. (2004). "Hey, Gang, Let’s Make Our Own Supercomputer", The New York Times, February 23, 2004. 
External links San Francisco Flashmob Attempts Supercomputer Original Slashdot article FlashMobComputing.org Supercomputers University of San Francisco
43804789
https://en.wikipedia.org/wiki/Numbuster
Numbuster
NumBuster! is a phone community that users can access via a mobile phone client and a Web application. Developed by NumBuster Ltd, it allows users to find the contact details behind any phone number, exchange information about numbers with other users, and block calls and messages. The client is available for Android and Apple iOS. History NumBuster! was developed by NumBuster Ltd, a privately held company founded by Evgeny Gnutikov, Ilya Osipov and others. The project was launched on Android in May 2013 and as a Web site in February 2014. As of September 2014, it had more than 100,000 users in Russia and the CIS, where it was first launched. In July–August 2014, NumBuster! took part in NUMA Paris, the biggest startup accelerator in France. Features and functionality The service is a global telephone directory with social features and call and SMS blocking functionality. It allows its users not only to find out the name of an unfamiliar caller or SMS sender, whether it is a person or a business, but also to rate and comment on numbers, thus adding more value to the service and helping other users in the community to get more information. Apart from that, users can block any phone number. The application has been available on Android since May 2013 and on Apple iPhone since September 2014. Languages and localization NumBuster! is available in 12 languages: English, Russian, Turkish, Arabic, French, Chinese, Italian, Portuguese, Hindi, Spanish, Ukrainian and Korean. Reception Upon its release, NumBuster! gathered positive feedback from Russian bloggers and media, including coverage on a leading news website. See also List of most downloaded Android applications Truecaller Truth in Caller ID Act of 2009 References External links NumBuster! on Apple AppStore Android (operating system) software Cross-platform software Communication software 2014 software Mobile software IOS software Social networking services Windows Phone software Caller ID
40201915
https://en.wikipedia.org/wiki/Outlast
Outlast
Outlast is a first-person survival horror video game developed and published by Red Barrels. The game revolves around a freelance investigative journalist, Miles Upshur, who decides to investigate a remote psychiatric hospital named Mount Massive Asylum, located deep in the mountains of Lake County, Colorado. The downloadable content Outlast: Whistleblower centers on Waylon Park, the man who led Miles there in the first place. Outlast was released for Microsoft Windows on September 4, 2013, PlayStation 4 on February 4, 2014 and for Xbox One on June 19, 2014. Linux and OS X versions were later released on March 31, 2015. A Nintendo Switch version titled Outlast: Bundle of Terror was released in February 2018. Outlast received generally positive reviews, with praise for its atmosphere, horror elements, and gameplay. As of October 2016, the game has sold 4 million copies. As of May 2018, the whole series has sold 15 million copies. A sequel, Outlast 2, was released on April 25, 2017, while a third installment, The Outlast Trials, is set to be released in 2022. The Murkoff Account, a comic book series set between Outlast and Outlast 2, was released from July 2016 to November 2017. Gameplay In Outlast, the player assumes the role of investigative journalist Miles Upshur, as he navigates a dilapidated psychiatric hospital in Leadville, Colorado that is overrun by homicidal patients. The game is played from a first-person perspective and features some stealth gameplay mechanics. The player can walk, run, crouch, jump, climb ladders and vault over objects. Unlike most games, however, the player does not have a visible health bar on the screen and is unable to attack enemies. The player must instead rely on stealth tactics such as hiding in lockers, sneaking past enemies, staying in the shadows and hiding behind or under things in order to survive. Alternatively, the player can attempt to outrun their pursuer. If the player dies, the game will reset to the most recent checkpoint. Most of the hospital is unlit, and the only way for the player to see while in the dark is through the lens of a camcorder equipped with night vision. Using the night vision mode slowly consumes batteries, which are in limited supply, forcing the player to scavenge for additional batteries throughout the asylum. Outlast makes heavy use of traditional jump scares and audio cues, which alert the player if an enemy has seen them. If the player records specific events with their camcorder, Miles will write a note about it, providing further insight into his thoughts. Documents can be collected, which offer backstory and other expository information about the facility, including pages taken from the diaries of patients and reports from the hospital staff. Developer Red Barrels have pointed to the survival-focused gameplay in Amnesia: The Dark Descent as a primary influence on the combat-free narrative style of Outlast. Found-footage horror films like Quarantine and REC also served as influences. Plot Freelance investigative journalist Miles Upshur receives an anonymous e-mail claiming that inhumane experiments are being conducted at Mount Massive Asylum, a private psychiatric hospital owned by the notoriously unethical Murkoff Corporation. Upon entering, Miles is shocked to discover its halls ransacked and littered with the mutilated corpses of the staff. 
He is informed by a dying officer of Murkoff's private military unit that Mount Massive's deranged inmates, known as "variants", have escaped and are freely roaming the grounds, butchering Murkoff's employees. The officer implores Miles to escape and reveals that the main doors can be unlocked from security control. Moving on, Miles is suddenly ambushed by a hulking variant named Chris Walker, who knocks him unconscious. While incapacitated, Miles encounters Father Martin Archimbaud, a self-anointed priest with schizotypal personality disorder, who claims Miles is his "apostle" and sabotages his escape by cutting off power to the front doors. Miles restores power, but Father Martin injects him with anesthetic. He shows Miles footage of "the Walrider", a ghostly entity killing patients and personnel alike, which he claims is responsible for the asylum's ransacking. Regaining consciousness, Miles finds himself trapped in a decaying cell block filled with catatonic and demented patients. He escapes through the sewers to the main wards, pursued by Walker and two cannibalistic twins, only to be captured by Richard Trager, a former Murkoff executive driven insane. Trager amputates two of Miles' fingers with a pair of bone shears, preparing to do the same to his tongue and genitals. However, Miles escapes to an elevator, inadvertently crushing Trager to death between floors when Trager attacks him. Miles reconvenes with Father Martin, who tells him to go to the asylum's chapel. Reaching an auditorium, Miles learns that the Walrider was created by Dr. Rudolf Gustav Wernicke, a German scientist brought to the U.S. during Operation Paperclip. Wernicke believed that intensive dream therapy conducted on traumatized patients could connect swarms of nanites into a single malevolent being. In the chapel, Miles finds a crucified Father Martin, who gives Miles a key to the atrium elevator that he insists will take him to freedom before immolating himself. Miles takes the elevator, which descends into a subterranean laboratory. Walker attacks him, only to be eviscerated by the Walrider. Miles locates an aged Wernicke, who confirms that the Walrider is a biotechnological nanite entity controlled by Billy Hope, a comatose subject of Murkoff's experiments. He orders Miles to terminate Billy's life support in the hopes that this will destroy the Walrider. Miles accomplishes this task; however, just before Billy dies, the Walrider attacks Miles and possesses his body. On his way out of the laboratory, Miles encounters a Murkoff military team led by Wernicke, which guns him down. A horrified Wernicke realizes that Miles is the Walrider's new host. Panicked screams and gunfire are heard as the screen fades to black. Whistleblower Waylon Park is a software engineer working at Mount Massive Asylum for Murkoff. His job entails maintaining the Morphogenic Engine, which controls lucid dreaming in comatose individuals. After several experiences working directly with the Engine and witnessing its effects on the facility's patients, he desperately sends an anonymous e-mail to reporter Miles Upshur to expose the corporation. Shortly afterwards, Park is summoned to the underground laboratory's operations center to debug a monitoring system. When he returns to his laptop, his supervisor, Jeremy Blaire, has him detained and subjected to the Morphogenic Engine after discovering his e-mail. However, Park escapes his restraints when the Walrider is unleashed. 
He roams the increasingly decrepit facility as surviving guards and medical personnel flee from the newly freed patients, searching for a shortwave radio that he can use to contact the authorities, all the while eluding a cannibal named Frank Manera, who wields an electric bone saw. Just as Park manages to find a working radio transmitter, Blaire appears and destroys it. Park finds his way into the asylum's vocational block, where he is captured by Eddie Gluskin, a serial killer obsessed with finding the "perfect bride" by killing other patients and mutilating their genitalia. Gluskin tries to hang Park in a gymnasium with his other victims, but during the struggle, he is entangled by his own pulley system and fatally impaled by a loose section of rebar. At daybreak, Murkoff's paramilitary division arrives at the asylum, intent on eliminating the variants. Park slips past them and escapes into the main lobby. There, he finds a gravely wounded Blaire, who stabs him suddenly, insisting that no one can know the truth about Mount Massive, but the Walrider kills him before he can kill Park. Park then stumbles out the open front door and towards Miles Upshur's jeep, which is still idling near the main gates. He takes the jeep and drives away as Miles, now the Walrider's host, also emerges from the asylum. In the epilogue, Park is sitting at a laptop with his camcorder footage ready for upload in order to expose the Murkoff Corporation. An associate informs him that it will be more than enough to ruin Murkoff, but warns that the company will then seek to eliminate him and his family. Despite some initial hesitation, Park decides to upload the file. Release Outlast was released on September 4, 2013, for download through Steam, and it was released on February 4, 2014, for the PlayStation 4 as the free monthly title for PlayStation Plus users. The downloadable content, Outlast: Whistleblower, serves as an overlapping prequel to the original game. The plot follows Waylon Park, the anonymous tipster to Miles Upshur, and shows the events both before and after the main plotline. The Microsoft Windows version of Whistleblower was released on May 6, 2014, worldwide, the PlayStation 4 version was launched on May 6, 2014, in North America and on May 7, 2014, in Europe, and the Xbox One version launched on June 18 in North America and Europe. Linux and OS X versions were later released on March 31, 2015. In December 2017, Red Barrels announced that Outlast, including Whistleblower and the sequel Outlast 2, would be coming to the Nintendo Switch in early 2018. The title was released by surprise on February 27, 2018 under the title Outlast: Bundle of Terror via Nintendo eShop. Reception As of October 19, 2016, Outlast has sold over 4 million copies. Outlast received positive reviews. Review aggregation website Metacritic gave the Xbox One version 80/100 based on 6 reviews, the Microsoft Windows version 80/100 based on 59 reviews, and the PlayStation 4 version 78/100 based on 33 reviews. It received a number of accolades and awards at E3 2013, including the "Most Likely to Make you Faint" honor and recognition as one of the "Best of E3". The PC gaming website Rock, Paper, Shotgun gave Outlast a very positive review, noting that "Outlast is not an experiment in how games can be scary, it’s an exemplification." Marty Sliva of IGN rated the game with a score of 7.8, praising the horror elements and gameplay while criticizing the environments and character modeling. 
GameSpot gave the game a positive review as well, stating that "Outlast isn't really a game of skill, and as it turns out, that makes sense. You're not a cop or a soldier or a genetically enhanced superhero. You're just a reporter. And as a reporter, you don't possess many skills with which you can fend off the hulking brutes, knife-wielding stalkers, and other homicidal maniacs who lurk in the halls of the dilapidated Mount Massive Asylum. You can't shoot them, or punch them, or rip pipes from the walls to clobber them with. You can only run and hide". Sequels On October 23, 2014, in an interview with Bloody Disgusting, Red Barrels revealed that due to the success of Outlast, a sequel was in development. It was initially intended to be released in late 2016, but was delayed to early 2017 due to complications during development; the release date was subsequently pushed further, from the intended Q1 2017 to Q2 2017. On March 6, 2017, Red Barrels announced that a physical bundle called Outlast Trinity would be released for Xbox One and PlayStation 4 on April 25. The sequel, titled Outlast 2, was made digitally available for Microsoft Windows, PlayStation 4, and Xbox One on April 25, 2017, and came to the Nintendo Switch, alongside Outlast, in February 2018. It takes place in the same universe as the first game, but features a new storyline with different characters, set in the Arizona desert. Outlast 3 was announced in December 2017, though no time frame or target platforms were confirmed. During this announcement, Red Barrels said that because they could not easily add downloadable content for Outlast 2, they had a smaller separate project related to Outlast that would be released before Outlast 3. The project, teased in October 2019 and called The Outlast Trials, is a prequel to Outlast 2 and a horror game set during the Cold War. The game is in its early development stages, with a release date set for 2022. References External links 2013 video games Horror video games Linux games MacOS games Video games about mental health Amputees in fiction Nanopunk Nintendo Switch games PlayStation 4 games PlayStation Network games Project MKUltra Psychological thriller video games Psychological horror games Splatterpunk Survival video games Works about whistleblowing Red Barrels games Single-player video games Unreal Engine games Video games adapted into comics Video games developed in Canada Video games set in 2013 Video games set in Colorado Video games set in psychiatric hospitals Video games set in the United States Video games with downloadable content Video games with expansion packs Windows games Xbox One games Nanotechnology in fiction Human experimentation in fiction
40013505
https://en.wikipedia.org/wiki/Information%20technology%20in%20Bangladesh
Information technology in Bangladesh
The information technology sector in Bangladesh had its beginnings in nuclear research during the 1960s. Over the next few decades, computer use increased at large Bangladeshi organizations, mostly with IBM mainframe computers. However, the sector only started to receive substantial attention during the 1990s. Today the sector is still in a nascent stage, though it is showing potential for advancement. Nonetheless, the Bangladeshi IT/ITES industry has fared comparatively well, achieving US$1.3 billion in export earnings in FY 2020-21 and holding a US$1.4 billion share of the local market, contributing 0.76 percent to GDP and creating more than 1 million employment opportunities so far, despite the havoc that Covid-19 suddenly wrought on businesses the previous year. Consequently, riding on the successes of export-led industries supported by the IT/ITES sector, as well as the pro-private-sector and conducive policies pursued by the Bangladesh government, the country is now poised to graduate to developing country status by 2026, as recommended by the United Nations Committee for Development Policy (UNCDP); Bangladesh also seeks to transform itself into a knowledge-based, 4IR-driven cashless economy, aiming to become a developed country by 2041. The Bangladesh government has formulated a draft 'Made in Bangladesh – ICT Industry Strategy' aimed at turning Bangladesh into an ICT manufacturing hub, enhancing export of local products, attracting foreign investment and creating employment; it is proposed to be implemented in three phases, short term from 2021 to 2023, medium term from 2021 to 2028 and long term from 2021 to 2031, covering 65 action plans. History The first computer in East Pakistan was an IBM mainframe 1620 series, installed in 1964 at the Dhaka center of the Pakistan Atomic Energy Commission (later the Bangladesh Atomic Energy Commission). Computer use increased in the following years, especially after the independence of Bangladesh in 1971; more-advanced IT equipment began to be set up in different educational, research and financial institutions. In 1979, a computer centre, later renamed Department of Computer Science & Engineering, was established at Bangladesh University of Engineering and Technology (BUET); the centre has been playing a pivotal role in Bangladeshi IT education since its inception. Through the introduction of personal computers, the use of computers witnessed a rapid increase in the late 1980s. In 1985, following several individual initiatives, the first Bengali script for computers was developed, paving the way for much wider computer use. In 1995, use of the Internet began and locally made software started to be exported. In 1983, the Ministry of Science and Technology established a National Computer Committee to create the required policies. The committee was also responsible for carrying out programs to expand the sector and promote its effective use. In 1988, the committee was replaced by the National Computer Board. In 1990, the ministry reformed the board and reconstituted it as the Bangladesh Computer Council to monitor computer- and IT-related work in the country. ICT industry The ICT industry is a relatively new sector in the country's economy. Though it is yet to make tangible contributions in the national economy, it is an important growth industry. The Bangladesh Association of Software and Information Services (BASIS) was established in 1997 as the national trade body for the software and IT service industry. 
Starting with only 17 member companies, by 2009 membership had grown to 326. In a study of Asian countries by the Japan International Cooperation Agency in 2007-08, Bangladesh was ranked first in software and IT services competitiveness and third in competencies, after India and China. The World Bank, in a study conducted in 2008, projected triple-digit growth for Bangladesh in IT services and software exports. Bangladesh was also listed as one of the Top 30 Countries for Offshore Services in 2010-2011 by Gartner. Internet penetration also grew to 21.27 percent in 2012, up from 3.2 percent three years prior. The information and communications technology (ICT) sector of the country maintained export growth averaging 57.21 percent over the nine years from 2009. In the fiscal year (FY) 2016-17, the Bangladesh ICT sector registered export earnings worth US$0.8 billion from the global market and US$1.54 billion from the domestic market, thereby contributing around one percent to the gross domestic product (GDP). The ICT sector has created around three hundred thousand job opportunities so far. As Internet usage increases, the government expects the IT sector to add 7.28 percent to GDP growth by 2021. References
30668670
https://en.wikipedia.org/wiki/Different%20Recordings
Different Recordings
Different Recordings is an electronic music label owned by [PIAS], an international independent music company. After a three-year hiatus, the label returned in 2015 with a family of new signings including Claptone, Anna of the North, Vessels, Kllo, Infinity Ink and Denney. The label looks to champion a fresh crop of forward-thinking electronic artists. It was launched during the emergence of the French house scene and over the years has represented a broad range of electronic music, with releases by artists such as Etienne de Crecy, Alex Gopher, Alan Braxe, Tiga, The Hacker, Felix Da Housecat, MSTRKRFT, Crystal Castles and Vitalic. History The label's first release in 1996 was the album Pansoul by Motorbass, a project of producers Etienne de Crecy and Philippe Zdar of Cassius. De Crecy’s next project, Superdiscount, was released a year later and is still considered by many to be the first album of the French house genre. With the signing of acts like The Hacker, Tiga and Vitalic the label started to develop a broad electronic artist roster. In 2008 Different Recordings licensed and released the eponymous debut album by Crystal Castles, which was named by the NME as one of the best albums of the decade, with the band featuring on the magazine’s cover twice in one year. Artists Different currently has Anna Of The North, Claptone, DBFC, DNKL, Henry Krinkle, Infinity Ink, Innerspace Orchestra, Kllo, Rina Sawayama, Shadow Child, TÂCHES and Vessels on its roster. Previous artists include Abstraxion, Agoria, Alex Gopher, Alex Martin Ensemble, Alexander Kowalski, Bloody Beetroots, Crystal Castles, Crystal Fighters, D/R/U/G/S, Etienne de Crecy, Felix Da Housecat, Harissa, Kiko, Kris Menace and DJ Pierre, Lifelike, Matt & Kim, Max Bitt, Motorbass, Mr. No, M.S.K., MSTRKRFT, Mustang, Neven, Padded Cell, Terence Fixmer, T.D.R., The Hacker, Super Discount, Tiga, Villeneuve, Vitalic and Xenia Beliayeva. See also [PIAS] References External links Different Recordings 1996 establishments in Belgium Belgian record labels