https://en.wikipedia.org/wiki/Kent%20Beck
Kent Beck
Kent Beck (born 1961) is an American software engineer and the creator of extreme programming, a software development methodology that eschews rigid formal specification in favor of a collaborative and iterative design process. Beck was one of the 17 original signatories of the Agile Manifesto, the founding document for agile software development. Extreme and agile methods are closely associated with test-driven development (TDD), of which Beck is perhaps the leading proponent. Beck pioneered software design patterns, as well as the commercial application of Smalltalk. He wrote the SUnit unit testing framework for Smalltalk, which spawned the xUnit series of frameworks, notably JUnit for Java, which Beck wrote with Erich Gamma. Beck popularized CRC cards with Ward Cunningham, the inventor of the wiki. He lives in San Francisco, California, and formerly worked at the social media company Facebook. In 2019, Beck joined Gusto as a software fellow and coach, where he coaches engineering teams as they build out payroll systems for small businesses.

History

Beck attended the University of Oregon between 1979 and 1987, receiving B.S. and M.S. degrees in computer and information science. In 1996 Beck was hired to work on the Chrysler Comprehensive Compensation System; he in turn brought in Ron Jeffries. In March 1996 the development team estimated the system would be ready to go into production around one year later. In 1997 the development team adopted a way of working which is now formalized as extreme programming. The one-year delivery target was nearly achieved, with actual delivery being only a couple of months late.

Publications

Books

1996. Kent Beck's Guide to Better Smalltalk: A Sorted Collection. Cambridge University Press.
1997. Smalltalk Best Practice Patterns. Prentice Hall.
2000. Extreme Programming Explained: Embrace Change. Addison-Wesley. Winner of the Jolt Productivity Award.
2000. Planning Extreme Programming. With Martin Fowler. Addison-Wesley.
2002. Test-Driven Development: By Example. Addison-Wesley. Winner of the Jolt Productivity Award.

Beck's concept of test-driven development centers on two basic rules: never write a single line of code unless you have a failing automated test, and eliminate duplication. The book illustrates the use of unit testing as part of the methodology, including examples in Java and Python; one section uses test-driven development to develop a unit testing framework. (A minimal Python illustration of this rhythm appears at the end of this article.)

2003. Contributing to Eclipse: Principles, Patterns, and Plug-Ins. With Erich Gamma. Addison-Wesley.
2004. JUnit Pocket Guide. O'Reilly.
2004. Extreme Programming Explained: Embrace Change, 2nd Edition. With Cynthia Andres. Addison-Wesley. Completely rewritten.
2008. Implementation Patterns. Addison-Wesley.

Selected papers

1987. "Using Pattern Languages for Object-Oriented Programs". With Ward Cunningham. OOPSLA'87.
1989. "A Laboratory For Teaching Object-Oriented Thinking". With Ward Cunningham. OOPSLA'89.
1989. "Simple Smalltalk Testing: With Patterns". SUnit framework, origin of the xUnit frameworks.
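As referenced above, here is a minimal, hypothetical Python illustration of the two TDD rules (loosely in the spirit of the book's money example, not copied from it): the test is written first and fails because Dollar does not yet exist; just enough code is then added to make it pass, after which duplication would be removed by refactoring.

import unittest

# Red: the test comes first and fails until Dollar exists.
class TestDollar(unittest.TestCase):
    def test_multiplication(self):
        five = Dollar(5)
        self.assertEqual(Dollar(15), five.times(3))

# Green: just enough production code to make the test pass.
class Dollar:
    def __init__(self, amount):
        self.amount = amount

    def times(self, multiplier):
        return Dollar(self.amount * multiplier)

    def __eq__(self, other):
        # Value equality, so the assertion can compare two Dollar objects.
        return self.amount == other.amount

if __name__ == "__main__":
    unittest.main()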
https://en.wikipedia.org/wiki/Protein%20structure%20prediction
Protein structure prediction
Protein structure prediction is the inference of the three-dimensional structure of a protein from its amino acid sequence—that is, the prediction of its secondary and tertiary structure from primary structure. Structure prediction is different from the inverse problem of protein design. Protein structure prediction is one of the most important goals pursued by computational biology, and it is important in medicine (for example, in drug design) and biotechnology (for example, in the design of novel enzymes). Since 1994, the performance of current methods has been assessed every two years in the CASP experiment (Critical Assessment of Techniques for Protein Structure Prediction). A continuous evaluation of protein structure prediction web servers is performed by the community project CAMEO3D.

Protein structure and terminology

Proteins are chains of amino acids joined together by peptide bonds. Many conformations of this chain are possible due to the rotation of the chain about each alpha-carbon (Cα) atom. It is these conformational changes that are responsible for differences in the three-dimensional structure of proteins. Each amino acid in the chain is polar, i.e., it has separated positively and negatively charged regions with a free carbonyl group, which can act as a hydrogen bond acceptor, and an NH group, which can act as a hydrogen bond donor. These groups can therefore interact in the protein structure. The 20 amino acids can be classified according to the chemistry of the side chain, which also plays an important structural role. Glycine takes on a special position, as it has the smallest side chain, only one hydrogen atom, and therefore can increase the local flexibility in the protein structure. Cysteine, on the other hand, can react with another cysteine residue to form a cystine and thereby form a cross-link stabilizing the whole structure. The protein structure can be considered as a sequence of secondary structure elements, such as α helices and β sheets, which together constitute the overall three-dimensional configuration of the protein chain. In these secondary structures regular patterns of H bonds are formed between neighboring amino acids, and the amino acids have similar φ and ψ angles. The formation of these structures neutralizes the polar groups on each amino acid. The secondary structures are tightly packed in the protein core in a hydrophobic environment. Each amino acid side group has a limited volume to occupy and a limited number of possible interactions with other nearby side chains, a situation that must be taken into account in molecular modeling and alignments.

α helix

The α helix is the most abundant type of secondary structure in proteins. The α helix has 3.6 amino acids per turn, with an H bond formed between every fourth residue; the average length is 10 amino acids (3 turns) or 10 Å, but varies from 5 to 40 residues (1.5 to 11 turns). The alignment of the H bonds creates a dipole moment for the helix, with a resulting partial positive charge at the amino end of the helix. Because this region has free NH2 groups, it will interact with negatively charged groups such as phosphates. The most common location of α helices is at the surface of protein cores, where they provide an interface with the aqueous environment. The inner-facing side of the helix tends to have hydrophobic amino acids and the outer-facing side hydrophilic amino acids. Thus, every third or fourth amino acid along the chain will tend to be hydrophobic, a pattern that can be quite readily detected.
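This periodicity can be made quantitative. The following minimal Python sketch (an illustration under simplifying assumptions, not a standard tool) scores a stretch of sequence for amphipathicity by computing an Eisenberg-style hydrophobic moment using the ~100° per-residue rotation of an α helix and the Kyte–Doolittle hydropathy scale; a high moment suggests that hydrophobic residues cluster on one face of the helix.

import math

# Kyte-Doolittle hydropathy values for the 20 standard amino acids.
KD = {
    'I': 4.5, 'V': 4.2, 'L': 3.8, 'F': 2.8, 'C': 2.5, 'M': 1.9, 'A': 1.8,
    'G': -0.4, 'T': -0.7, 'S': -0.8, 'W': -0.9, 'Y': -1.3, 'P': -1.6,
    'H': -3.2, 'E': -3.5, 'Q': -3.5, 'D': -3.5, 'N': -3.5, 'K': -3.9, 'R': -4.5,
}

def hydrophobic_moment(window, degrees_per_residue=100.0):
    """Per-residue magnitude of the vector sum of hydropathies on a helical wheel."""
    delta = math.radians(degrees_per_residue)  # ~100 deg/residue for 3.6 residues/turn
    sin_sum = sum(KD[aa] * math.sin(i * delta) for i, aa in enumerate(window))
    cos_sum = sum(KD[aa] * math.cos(i * delta) for i, aa in enumerate(window))
    return math.hypot(sin_sum, cos_sum) / len(window)

# An amphipathic stretch scores much higher than a uniform one:
print(hydrophobic_moment("LKKLLKLLKKLLKL"))  # hydrophobics every 3-4 residues: high moment
print(hydrophobic_moment("LLLLLLLLLLLLLL"))  # uniformly hydrophobic: low moment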
In the leucine zipper motif, a repeating pattern of leucines on the facing sides of two adjacent helices is highly predictive of the motif. A helical-wheel plot can be used to show this repeated pattern. Other α helices buried in the protein core or in cellular membranes have a higher and more regular distribution of hydrophobic amino acids, and are highly predictive of such structures. Helices exposed on the surface have a lower proportion of hydrophobic amino acids. Amino acid content can be predictive of an α-helical region. Regions richer in alanine (A), glutamic acid (E), leucine (L), and methionine (M) and poorer in proline (P), glycine (G), tyrosine (Y), and serine (S) tend to form an α helix. Proline destabilizes or breaks an α helix but can be present in longer helices, forming a bend.

β sheet

β sheets are formed by H bonds between an average of 5–10 consecutive amino acids in one portion of the chain and another 5–10 farther down the chain. The interacting regions may be adjacent, with a short loop in between, or far apart, with other structures in between. Every chain may run in the same direction to form a parallel sheet, every other chain may run in the reverse chemical direction to form an antiparallel sheet, or the chains may be parallel and antiparallel to form a mixed sheet. The pattern of H bonding is different in the parallel and antiparallel configurations. Each amino acid in the interior strands of the sheet forms two H bonds with neighboring amino acids, whereas each amino acid on the outside strands forms only one bond with an interior strand. Looking across the sheet at right angles to the strands, more distant strands are rotated slightly counterclockwise to form a left-handed twist. The Cα atoms alternate above and below the sheet in a pleated structure, and the R side groups of the amino acids alternate above and below the pleats. The φ and ψ angles of the amino acids in sheets vary considerably within one region of the Ramachandran plot. It is more difficult to predict the location of β sheets than of α helices. The situation improves somewhat when the amino acid variation in multiple sequence alignments is taken into account.

Loops

Some parts of the protein have fixed three-dimensional structure but do not form any regular structures. They should not be confused with disordered or unfolded segments of proteins or random coil, an unfolded polypeptide chain lacking any fixed three-dimensional structure. These parts are frequently called "loops" because they connect β sheets and α helices. Loops are usually located at the protein surface, and therefore mutations of their residues are more easily tolerated. Having more substitutions, insertions, and deletions in a certain region of a sequence alignment may be an indication of a loop. The positions of introns in genomic DNA may correlate with the locations of loops in the encoded protein. Loops also tend to have charged and polar amino acids and are frequently a component of active sites.

Protein classification

Proteins may be classified according to both structural and sequence similarity. For structural classification, the sizes and spatial arrangements of secondary structures described in the above paragraph are compared in known three-dimensional structures. Classification based on sequence similarity was historically the first to be used. Initially, similarity based on alignments of whole sequences was performed. Later, proteins were classified on the basis of the occurrence of conserved amino acid patterns.
Databases that classify proteins by one or more of these schemes are available. In considering protein classification schemes, it is important to keep several observations in mind. First, two entirely different protein sequences from different evolutionary origins may fold into a similar structure. Conversely, the sequence of an ancient gene for a given structure may have diverged considerably in different species while at the same time maintaining the same basic structural features. Recognizing any remaining sequence similarity in such cases may be a very difficult task. Second, two proteins that share a significant degree of sequence similarity either with each other or with a third sequence also share an evolutionary origin and should share some structural features as well. However, gene duplication and genetic rearrangements during evolution may give rise to new gene copies, which can then evolve into proteins with new function and structure.

Terms used for classifying protein structures and sequences

The more commonly used terms for evolutionary and structural relationships among proteins are listed below. Many additional terms are used for various kinds of structural features found in proteins. Descriptions of such terms may be found at the CATH Web site, the Structural Classification of Proteins (SCOP) Web site, and a Glaxo Wellcome tutorial on the Swiss bioinformatics ExPASy Web site.

Active site: a localized combination of amino acid side groups within the tertiary (three-dimensional) or quaternary (protein subunit) structure that can interact with a chemically specific substrate and that provides the protein with biological activity. Proteins of very different amino acid sequences may fold into a structure that produces the same active site.

Architecture: the relative orientations of secondary structures in a three-dimensional structure without regard to whether or not they share a similar loop structure.

Fold (topology): a type of architecture that also has a conserved loop structure.

Blocks: a conserved amino acid sequence pattern in a family of proteins. The pattern includes a series of possible matches at each position in the represented sequences, but there are no inserted or deleted positions in the pattern or in the sequences. By way of contrast, sequence profiles are a type of scoring matrix that represents a similar set of patterns that includes insertions and deletions.

Class: a term used to classify protein domains according to their secondary structural content and organization. Four classes were originally recognized by Levitt and Chothia (1976), and several others have been added in the SCOP database. Three classes are given in the CATH database: mainly-α, mainly-β, and α–β, with the α–β class including both alternating α/β and α+β structures.

Core: the portion of a folded protein molecule that comprises the hydrophobic interior of α helices and β sheets. The compact structure brings together side groups of amino acids into close enough proximity so that they can interact. When comparing protein structures, as in the SCOP database, the core is the region common to most of the structures that share a common fold or that are in the same superfamily. In structure prediction, the core is sometimes defined as the arrangement of secondary structures that is likely to be conserved during evolutionary change.

Domain (sequence context): a segment of a polypeptide chain that can fold into a three-dimensional structure irrespective of the presence of other segments of the chain.
The separate domains of a given protein may interact extensively or may be joined only by a length of polypeptide chain. A protein with several domains may use these domains for functional interactions with different molecules.

Family (sequence context): a group of proteins of similar biochemical function that are more than 50% identical when aligned. This same cutoff is still used by the Protein Information Resource (PIR). A protein family comprises proteins with the same function in different organisms (orthologous sequences) but may also include proteins in the same organism (paralogous sequences) derived from gene duplication and rearrangements. If a multiple sequence alignment of a protein family reveals a common level of similarity throughout the lengths of the proteins, PIR refers to the family as a homeomorphic family. The aligned region is referred to as a homeomorphic domain, and this region may comprise several smaller homology domains that are shared with other families. Families may be further subdivided into subfamilies or grouped into superfamilies based on respective higher or lower levels of sequence similarity. The SCOP database reports 1296 families and the CATH database (version 1.7 beta) reports 1846 families. When the sequences of proteins with the same function are examined in greater detail, some are found to share high sequence similarity. They are obviously members of the same family by the above criteria. However, others are found that have very little, or even insignificant, sequence similarity with other family members. In such cases, the family relationship between two distant family members A and C can often be demonstrated by finding an additional family member B that shares significant similarity with both A and C. Thus, B provides a connecting link between A and C. Another approach is to examine distant alignments for highly conserved matches. At a level of identity of 50%, proteins are likely to have the same three-dimensional structure, and the identical atoms in the sequence alignment will also superimpose within approximately 1 Å in the structural model. Thus, if the structure of one member of a family is known, a reliable prediction may be made for a second member of the family, and the higher the identity level, the more reliable the prediction. Protein structural modeling can be performed by examining how well the amino acid substitutions fit into the core of the three-dimensional structure.

Family (structural context): as used in the FSSP database (Families of Structurally Similar Proteins) and the DALI/FSSP Web site, two structures that have a significant level of structural similarity but not necessarily significant sequence similarity.

Fold: similar to structural motif, but includes a larger combination of secondary structural units in the same configuration. Thus, proteins sharing the same fold have the same combination of secondary structures that are connected by similar loops. An example is the Rossmann fold, comprising several alternating α helices and parallel β strands. In the SCOP, CATH, and FSSP databases, the known protein structures have been classified into hierarchical levels of structural complexity with the fold as a basic level of classification.

Homologous domain (sequence context): an extended sequence pattern, generally found by sequence alignment methods, that indicates a common evolutionary origin among the aligned sequences. A homology domain is generally longer than a motif.
The domain may include all of a given protein sequence or only a portion of the sequence. Some domains are complex and made up of several smaller homology domains that became joined to form a larger one during evolution. A domain that covers an entire sequence is called the homeomorphic domain by PIR (Protein Information Resource).

Module: a region of conserved amino acid patterns comprising one or more motifs and considered to be a fundamental unit of structure or function. The presence of a module has also been used to classify proteins into families.

Motif (sequence context): a conserved pattern of amino acids that is found in two or more proteins. In the Prosite catalog, a motif is an amino acid pattern that is found in a group of proteins that have a similar biochemical activity, and that often is near the active site of the protein. Examples of sequence motif databases are the Prosite catalog and the Stanford Motifs Database.

Motif (structural context): a combination of several secondary structural elements produced by the folding of adjacent sections of the polypeptide chain into a specific three-dimensional configuration. An example is the helix-loop-helix motif. Structural motifs are also referred to as supersecondary structures and folds.

Position-specific scoring matrix (sequence context; also known as a weight or scoring matrix): represents a conserved region in a multiple sequence alignment with no gaps. Each matrix column represents the variation found in one column of the multiple sequence alignment.

Position-specific scoring matrix—3D (structural context): represents the amino acid variation found in an alignment of proteins that fall into the same structural class. Matrix columns represent the amino acid variation found at one amino acid position in the aligned structures.

Primary structure: the linear amino acid sequence of a protein, which chemically is a polypeptide chain composed of amino acids joined by peptide bonds.

Profile (sequence context): a scoring matrix that represents a multiple sequence alignment of a protein family. The profile is usually obtained from a well-conserved region in a multiple sequence alignment. The profile is in the form of a matrix with each column representing a position in the alignment and each row one of the amino acids. Matrix values give the likelihood of each amino acid at the corresponding position in the alignment. The profile is moved along the target sequence to locate the best-scoring regions by a dynamic programming algorithm. Gaps are allowed during matching, and a gap penalty is included in this case as a negative score when no amino acid is matched. A sequence profile may also be represented by a hidden Markov model, referred to as a profile HMM.

Profile (structural context): a scoring matrix that represents which amino acids should fit well and which should fit poorly at sequential positions in a known protein structure. Profile columns represent sequential positions in the structure, and profile rows represent the 20 amino acids. As with a sequence profile, the structural profile is moved along a target sequence to find the highest possible alignment score by a dynamic programming algorithm. Gaps may be included and receive a penalty. The resulting score provides an indication as to whether or not the target protein might adopt such a structure.

Quaternary structure: the three-dimensional configuration of a protein molecule comprising several independent polypeptide chains.
Secondary structure: the interactions that occur between the C, O, and NH groups on amino acids in a polypeptide chain to form α helices, β sheets, turns, loops, and other forms, and that facilitate the folding into a three-dimensional structure.

Superfamily: a group of protein families of the same or different lengths that are related by distant yet detectable sequence similarity. Members of a given superfamily thus have a common evolutionary origin. Originally, Dayhoff defined the cutoff for superfamily status as a probability of 10⁻⁶ that the sequences are not related, on the basis of an alignment score (Dayhoff et al. 1978). Proteins with few identities in an alignment of the sequences but with a convincingly common number of structural and functional features are placed in the same superfamily. At the level of three-dimensional structure, superfamily proteins will share common structural features such as a common fold, but there may also be differences in the number and arrangement of secondary structures. The PIR resource uses the term homeomorphic superfamilies to refer to superfamilies that are composed of sequences that can be aligned from end to end, representing a sharing of a single sequence homology domain, a region of similarity that extends throughout the alignment. This domain may also comprise smaller homology domains that are shared with other protein families and superfamilies. Although a given protein sequence may contain domains found in several superfamilies, thus indicating a complex evolutionary history, sequences will be assigned to only one homeomorphic superfamily based on the presence of similarity throughout a multiple sequence alignment. The superfamily alignment may also include regions that do not align either within or at the ends of the alignment. In contrast, sequences in the same family align well throughout the alignment.

Supersecondary structure: a term with a similar meaning to structural motif.

Tertiary structure: the three-dimensional or globular structure formed by the packing together or folding of secondary structures of a polypeptide chain.

Secondary structure

Secondary structure prediction is a set of techniques in bioinformatics that aim to predict the local secondary structures of proteins based only on knowledge of their amino acid sequence. For proteins, a prediction consists of assigning regions of the amino acid sequence as likely alpha helices, beta strands (often noted as "extended" conformations), or turns. The success of a prediction is determined by comparing it to the results of the DSSP algorithm (or a similar algorithm, e.g., STRIDE) applied to the crystal structure of the protein. Specialized algorithms have been developed for the detection of specific well-defined patterns such as transmembrane helices and coiled coils in proteins. The best modern methods of secondary structure prediction in proteins were claimed to reach 80% accuracy using machine learning and sequence alignments; this high accuracy allows the use of the predictions as a feature improving fold recognition and ab initio protein structure prediction, classification of structural motifs, and refinement of sequence alignments. The accuracy of current protein secondary structure prediction methods is assessed in weekly benchmarks such as LiveBench and EVA.

Background

Early methods of secondary structure prediction, introduced in the 1960s and early 1970s, focused on identifying likely alpha helices and were based mainly on helix-coil transition models.
Significantly more accurate predictions that included beta sheets were introduced in the 1970s and relied on statistical assessments based on probability parameters derived from known solved structures. These methods, applied to a single sequence, are typically at most about 60–65% accurate, and often underpredict beta sheets. The evolutionary conservation of secondary structures can be exploited by simultaneously assessing many homologous sequences in a multiple sequence alignment, by calculating the net secondary structure propensity of an aligned column of amino acids. In concert with larger databases of known protein structures and modern machine learning methods such as neural nets and support vector machines, these methods can achieve up to 80% overall accuracy in globular proteins. The theoretical upper limit of accuracy is around 90%, partly due to idiosyncrasies in DSSP assignment near the ends of secondary structures, where local conformations vary under native conditions but may be forced to assume a single conformation in crystals due to packing constraints. Moreover, the typical secondary structure prediction methods do not account for the influence of tertiary structure on formation of secondary structure; for example, a sequence predicted as a likely helix may still be able to adopt a beta-strand conformation if it is located within a beta-sheet region of the protein and its side chains pack well with their neighbors. Dramatic conformational changes related to the protein's function or environment can also alter local secondary structure.

Historical perspective

To date, over 20 different secondary structure prediction methods have been developed. One of the first algorithms was the Chou–Fasman method, which relies predominantly on probability parameters determined from relative frequencies of each amino acid's appearance in each type of secondary structure. The original Chou–Fasman parameters, determined from the small sample of structures solved in the mid-1970s, produce poor results compared to modern methods, though the parameterization has been updated since it was first published. The Chou–Fasman method is roughly 50–60% accurate in predicting secondary structures. The next notable program, the GOR method, is an information theory-based method. It uses the more powerful probabilistic technique of Bayesian inference. The GOR method takes into account not only the probability of each amino acid having a particular secondary structure, but also the conditional probability of the amino acid assuming each structure given the contributions of its neighbors (it does not assume that the neighbors have that same structure). The approach is both more sensitive and more accurate than that of Chou and Fasman because amino acid structural propensities are only strong for a small number of amino acids such as proline and glycine. Weak contributions from each of many neighbors can add up to strong effects overall. The original GOR method was roughly 65% accurate and is dramatically more successful in predicting alpha helices than beta sheets, which it frequently mispredicted as loops or disorganized regions. A minimal sketch of the propensity-based approach follows.
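The Python sketch below illustrates the propensity idea in a hedged, simplified form: each residue's helix and strand propensities are averaged over a sliding window, and the higher-scoring state is called, defaulting to coil when neither is favored. The propensity values are a commonly quoted subset in the spirit of the Chou–Fasman tables (approximate; consult the published tables for real work), the window size and the 1.03 coil threshold are arbitrary toy choices, and unknown residues default to a neutral 1.0.

# Approximate helix/strand propensities for a few residues (>1.0 favors the state).
HELIX = {'A': 1.42, 'E': 1.51, 'L': 1.21, 'M': 1.45, 'K': 1.16,
         'V': 1.06, 'I': 1.08, 'G': 0.57, 'P': 0.57, 'S': 0.77, 'Y': 0.69}
STRAND = {'V': 1.70, 'I': 1.60, 'Y': 1.47, 'L': 1.30, 'M': 1.05,
          'A': 0.83, 'E': 0.37, 'G': 0.75, 'P': 0.55, 'S': 0.75, 'K': 0.74}

def predict_ss(seq, window=5, coil_threshold=1.03):
    """Assign H (helix), E (strand), or C (coil) per residue by windowed propensity."""
    half = window // 2
    out = []
    for i in range(len(seq)):
        win = seq[max(0, i - half): i + half + 1]
        h = sum(HELIX.get(aa, 1.0) for aa in win) / len(win)
        e = sum(STRAND.get(aa, 1.0) for aa in win) / len(win)
        if max(h, e) < coil_threshold:   # neither state clearly favored
            out.append('C')
        else:
            out.append('H' if h >= e else 'E')
    return ''.join(out)

print(predict_ss("AEEAAMLKAEVVIYVVSGPG"))  # helix-like start, strand-like middle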
Another big step forward was the use of machine learning methods: first, artificial neural network methods were applied. As training sets, they use solved structures to identify common sequence motifs associated with particular arrangements of secondary structures. These methods are over 70% accurate in their predictions, although beta strands are still often underpredicted due to the lack of three-dimensional structural information that would allow assessment of the hydrogen bonding patterns that can promote formation of the extended conformation required for the presence of a complete beta sheet. PSIPRED and JPRED are some of the best-known programs based on neural networks for protein secondary structure prediction. Next, support vector machines have proven particularly useful for predicting the locations of turns, which are difficult to identify with statistical methods. Extensions of machine learning techniques attempt to predict more fine-grained local properties of proteins, such as backbone dihedral angles in unassigned regions. Both SVMs and neural networks have been applied to this problem. More recently, real-valued torsion angles can be accurately predicted by SPINE-X and successfully employed for ab initio structure prediction.

Other improvements

It has been reported that, in addition to the protein sequence, secondary structure formation depends on other factors. For example, secondary structure tendencies have been reported to depend also on the local environment, solvent accessibility of residues, protein structural class, and even the organism from which the proteins are obtained. Based on such observations, some studies have shown that secondary structure prediction can be improved by the addition of information about protein structural class, residue accessible surface area, and contact number.

Tertiary structure

The practical role of protein structure prediction is now more important than ever. Massive amounts of protein sequence data are produced by modern large-scale DNA sequencing efforts such as the Human Genome Project. Despite community-wide efforts in structural genomics, the output of experimentally determined protein structures—typically by time-consuming and relatively expensive X-ray crystallography or NMR spectroscopy—is lagging far behind the output of protein sequences. Protein structure prediction remains an extremely difficult and unresolved undertaking. The two main problems are the calculation of protein free energy and finding the global minimum of this energy. A protein structure prediction method must explore the space of possible protein structures, which is astronomically large. These problems can be partially bypassed in "comparative" or homology modeling and fold recognition methods, in which the search space is pruned by the assumption that the protein in question adopts a structure that is close to the experimentally determined structure of another homologous protein. On the other hand, de novo protein structure prediction methods must explicitly resolve these problems. The progress and challenges in protein structure prediction have been reviewed by Zhang.

Before modelling

Most tertiary structure modelling methods, such as Rosetta, are optimized for modelling the tertiary structure of single protein domains. A step called domain parsing, or domain boundary prediction, is usually done first to split a protein into potential structural domains. As with the rest of tertiary structure prediction, this can be done comparatively from known structures or ab initio with the sequence only (usually by machine learning, assisted by covariation). The structures for individual domains are docked together in a process called domain assembly to form the final tertiary structure.
Ab initio protein modelling

Energy- and fragment-based methods

Ab initio (or de novo) protein modelling methods seek to build three-dimensional protein models "from scratch", i.e., based on physical principles rather than (directly) on previously solved structures. There are many possible procedures that either attempt to mimic protein folding or apply some stochastic method to search possible solutions (i.e., global optimization of a suitable energy function). These procedures tend to require vast computational resources, and have thus only been carried out for tiny proteins. To predict protein structure de novo for larger proteins will require better algorithms and larger computational resources like those afforded by either powerful supercomputers (such as Blue Gene or MDGRAPE-3) or distributed computing (such as Folding@home, the Human Proteome Folding Project, and Rosetta@home). Although these computational barriers are vast, the potential benefits of structural genomics (by predicted or experimental methods) make ab initio structure prediction an active research field. As of 2009, a 50-residue protein could be simulated atom-by-atom on a supercomputer for 1 millisecond. As of 2012, comparable stable-state sampling could be done on a standard desktop with a new graphics card and more sophisticated algorithms. Much larger simulation timescales can be achieved using coarse-grained modeling.

Evolutionary covariation to predict 3D contacts

As sequencing became more commonplace in the 1990s, several groups used protein sequence alignments to predict correlated mutations, and it was hoped that these coevolved residues could be used to predict tertiary structure (using the analogy to distance constraints from experimental procedures such as NMR). The assumption is that when single-residue mutations are slightly deleterious, compensatory mutations may occur to restabilize residue-residue interactions. This early work used what are known as local methods to calculate correlated mutations from protein sequences, but suffered from indirect false correlations that result from treating each pair of residues as independent of all other pairs. In 2011, a different, and this time global, statistical approach demonstrated that predicted coevolved residues were sufficient to predict the 3D fold of a protein, provided there are enough sequences available (>1,000 homologous sequences are needed). The method, EVfold, uses no homology modeling, threading, or 3D structure fragments and can be run on a standard personal computer even for proteins with hundreds of residues. The accuracy of the contacts predicted using this and related approaches has now been demonstrated on many known structures and contact maps, including the prediction of experimentally unsolved transmembrane proteins. A simple local measure of covariation between two alignment columns is their mutual information, sketched below.
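A minimal Python sketch of that local measure on a toy alignment (an illustration only; global methods such as EVfold exist precisely because this pairwise statistic suffers from the indirect correlations described above):

import math
from collections import Counter

def mutual_information(col_a, col_b):
    """Mutual information (bits) between two alignment columns of equal length."""
    n = len(col_a)
    pa = Counter(col_a)                 # residue counts in column A
    pb = Counter(col_b)                 # residue counts in column B
    pab = Counter(zip(col_a, col_b))    # joint residue-pair counts
    mi = 0.0
    for (a, b), c in pab.items():
        # p(a,b) * log2( p(a,b) / (p(a) * p(b)) ), with counts converted to frequencies
        mi += (c / n) * math.log2(c * n / (pa[a] * pb[b]))
    return mi

# Toy alignment: columns 0 and 2 covary (A pairs with E, S pairs with K).
seqs = ["ACE", "ADE", "SCK", "SDK", "ACE", "SDK"]
cols = ["".join(s[i] for s in seqs) for i in range(3)]
print(mutual_information(cols[0], cols[2]))  # high: covarying pair
print(mutual_information(cols[0], cols[1]))  # lower: nearly independent pair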
Comparative protein modeling

Comparative protein modeling uses previously solved structures as starting points, or templates. This is effective because it appears that although the number of actual proteins is vast, there is a limited set of tertiary structural motifs to which most proteins belong. It has been suggested that there are only around 2,000 distinct protein folds in nature, though there are many millions of different proteins. Comparative protein modeling can be combined with evolutionary covariation in structure prediction. These methods may also be split into two groups, homology modeling and protein threading.

Homology modeling is based on the reasonable assumption that two homologous proteins will share very similar structures. Because a protein's fold is more evolutionarily conserved than its amino acid sequence, a target sequence can be modeled with reasonable accuracy on a very distantly related template, provided that the relationship between target and template can be discerned through sequence alignment. It has been suggested that the primary bottleneck in comparative modelling arises from difficulties in alignment rather than from errors in structure prediction given a known-good alignment. Unsurprisingly, homology modelling is most accurate when the target and template have similar sequences.

Protein threading scans the amino acid sequence of an unknown structure against a database of solved structures. In each case, a scoring function is used to assess the compatibility of the sequence to the structure, thus yielding possible three-dimensional models. This type of method is also known as 3D-1D fold recognition due to its compatibility analysis between three-dimensional structures and linear protein sequences. This method has also given rise to methods performing an inverse folding search by evaluating the compatibility of a given structure with a large database of sequences, thus predicting which sequences have the potential to produce a given fold.

Modeling of side-chain conformations

Accurate packing of the amino acid side chains represents a separate problem in protein structure prediction. Methods that specifically address the problem of predicting side-chain geometry include dead-end elimination and the self-consistent mean field methods. The side chain conformations with low energy are usually determined on the rigid polypeptide backbone and using a set of discrete side chain conformations known as "rotamers". The methods attempt to identify the set of rotamers that minimize the model's overall energy. These methods use rotamer libraries, which are collections of favorable conformations for each residue type in proteins. Rotamer libraries may contain information about the conformation, its frequency, and the standard deviations about mean dihedral angles, which can be used in sampling. Rotamer libraries are derived from structural bioinformatics or other statistical analysis of side-chain conformations in known experimental structures of proteins, such as by clustering the observed conformations for tetrahedral carbons near the staggered (60°, 180°, −60°) values. Rotamer libraries can be backbone-independent, secondary-structure-dependent, or backbone-dependent. Backbone-independent rotamer libraries make no reference to backbone conformation, and are calculated from all available side chains of a certain type (for instance, the first example of a rotamer library, created by Ponder and Richards at Yale in 1987). Secondary-structure-dependent libraries present different dihedral angles and/or rotamer frequencies for α-helix, β-sheet, or coil secondary structures. Backbone-dependent rotamer libraries present conformations and/or frequencies dependent on the local backbone conformation as defined by the backbone dihedral angles φ and ψ, regardless of secondary structure. The modern versions of these libraries as used in most software are presented as multidimensional distributions of probability or frequency, where the peaks correspond to the dihedral-angle conformations considered as individual rotamers in the lists. A toy example of a backbone-dependent lookup is sketched below.
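A minimal, hypothetical Python sketch of such a lookup (the library contents here are invented placeholders with plausible serine χ1 values near the staggered positions; real libraries such as Dunbrack's tabulate thousands of entries over a fine φ/ψ grid):

# Hypothetical backbone-dependent rotamer library: for a residue type,
# map a coarse (phi, psi) bin to rotamers given as (chi angles, probability).
TOY_LIBRARY = {
    ("SER", (-60, -45)): [((62.0,), 0.55), ((-177.0,), 0.30), ((-65.0,), 0.15)],
    ("SER", (-120, 120)): [((-65.0,), 0.50), ((62.0,), 0.35), ((-177.0,), 0.15)],
}

def nearest_bin(phi, psi, bins):
    """Snap continuous backbone dihedrals to the closest library bin."""
    return min(bins, key=lambda b: (b[0] - phi) ** 2 + (b[1] - psi) ** 2)

def most_probable_rotamer(res_type, phi, psi):
    bins = [b for (t, b) in TOY_LIBRARY if t == res_type]
    rotamers = TOY_LIBRARY[(res_type, nearest_bin(phi, psi, bins))]
    return max(rotamers, key=lambda entry: entry[1])  # highest-frequency rotamer

# A serine on a helix-like backbone conformation:
print(most_probable_rotamer("SER", -57.0, -47.0))  # -> ((62.0,), 0.55)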
Some versions of these libraries are based on very carefully curated data and are used primarily for structure validation, while others emphasize relative frequencies in much larger data sets and are the form used primarily for structure prediction, such as the Dunbrack rotamer libraries. Side-chain packing methods are most useful for analyzing the protein's hydrophobic core, where side chains are more closely packed; they have more difficulty addressing the looser constraints and higher flexibility of surface residues, which often occupy multiple rotamer conformations rather than just one.

Quaternary structure

In the case of complexes of two or more proteins, where the structures of the proteins are known or can be predicted with high accuracy, protein–protein docking methods can be used to predict the structure of the complex. Information on the effect of mutations at specific sites on the affinity of the complex helps to understand the complex structure and to guide docking methods.

Software

A great number of software tools for protein structure prediction exist. Approaches include homology modeling, protein threading, ab initio methods, secondary structure prediction, and transmembrane helix and signal peptide prediction. Some recent successful methods based on the CASP experiments include I-TASSER, HHpred, and AlphaFold; AlphaFold was reported as currently having the best performance. Knowing the structure of a protein often allows functional prediction as well. For instance, collagen folds into a long, extended, fiber-like chain, which makes it a fibrous protein. AlphaFold is the first computational approach capable of predicting protein structures to near-experimental accuracy in many cases, and it has predicted accurate structures compared with other competing methods. The method also produces accurate side chains and domain packing, and it scales to very long proteins. Its precise per-residue confidence estimates allow its predictions to be used with appropriate confidence. The 3D coordinates of all heavy atoms for a given protein are predicted directly by the AlphaFold network using the amino acid sequence and aligned sequences of homologues. The AlphaFold network consists of a trunk, which processes the inputs through repeated layers, and a structure module, which introduces an explicit 3D structure. Since AlphaFold outputs protein coordinates directly, it produces predictions in graphics processing unit (GPU) minutes to GPU hours, depending on the length of the protein sequence.

Evaluation of automatic structure prediction servers

CASP, which stands for Critical Assessment of Techniques for Protein Structure Prediction, is a community-wide experiment for protein structure prediction that has taken place every two years since 1994. CASP provides an opportunity to assess the quality of available human, non-automated methodology (human category) and of automatic servers for protein structure prediction (server category, introduced in CASP7). The CAMEO3D (Continuous Automated Model EvaluatiOn) server evaluates automated protein structure prediction servers on a weekly basis, using blind predictions for newly released protein structures, and publishes the results on its website. A minimal example of the per-residue scoring that underlies such comparisons is sketched below.
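In the spirit of these benchmarks, the simplest per-residue score for a secondary structure prediction is Q3: the fraction of residues whose predicted state (H/E/C) matches the DSSP-derived reference. A minimal Python sketch with hypothetical strings (real evaluations also handle the mapping of DSSP's eight states down to three):

def q3(predicted, reference):
    """Fraction of positions where the 3-state assignments agree."""
    if len(predicted) != len(reference):
        raise ValueError("prediction and reference must have equal length")
    matches = sum(p == r for p, r in zip(predicted, reference))
    return matches / len(reference)

pred = "CCHHHHHHCCEEEECC"
ref  = "CCHHHHHCCCEEEECC"
print(f"Q3 = {q3(pred, ref):.2f}")  # 0.94 for this toy pair (one mismatch in 16)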
See also

Protein design
Protein function prediction
Protein–protein interaction prediction
Gene prediction
Protein structure prediction software
De novo protein structure prediction
Molecular design software
Molecular modeling software
Modelling biological systems
Fragment libraries
Lattice proteins
Statistical potential
Structure atlas of human genome
Protein circular dichroism data bank
https://en.wikipedia.org/wiki/West%20Coast%20Trojans
West Coast Trojans
The West Coast Trojans were an amateur American football team based at the Pro-Life Gym in Paisley, Scotland. In their final season, the Trojans competed in the BAFANL Division 2 North. The Trojans played their home games at Meadow Park, Irvine, with previous home venues including Portland Park, Troon, and Scotstoun Stadium, Glasgow.

Team history

2005 season

The Trojans' home games were played at St Stephens High School in Port Glasgow, Inverclyde. The team competed in the BAFL Division 2 Scottish Conference, which they won convincingly, but were defeated in the semi-final of the playoffs by the eventual Division 2 champions, the Coventry Cassidy Jets.

2006 season

The Trojans competed in the BAFL Division 2 Scottish Conference, which they won convincingly, beating the Redditch Arrows in the Northern Conference championship game, but ended up losing the Britbowl Division 2 Bowl 29-28 to the Oxford Saints.

2007 season

The Trojans competed in the BAFL Division 1 Northern Conference, winning nine games and losing only one, to the Redditch Arrows at home, and winning their second consecutive title. The team lost their semi-final to the Ipswich Cardinals, going down 28-13 at home.

2008 season

The team posted a 4–4–2 record but missed the playoffs for the first time in their history after an end-of-season defeat at the hands of the Redditch Arrows. One of the tied games was awarded by the BAFL after an away fixture against the Dundee Hurricanes was postponed late in the season and never rescheduled.

2009 season

The offseason was promising, with the Trojans bringing in a large number of rookies through their coaching connections with the nearby university side, the Paisley Pyros. Filling the gaps in the roster which had plagued them in 2008, West Coast were expected to be back to their best as they opened their campaign against the team whose split in 2004 had caused their formation, the East Kilbride Pirates. Two early scores gave the Trojans a 12-0 lead; however, reality bit hard as the Pirates scored 62 points without reply, handing the Trojans the worst defeat in their history. Game two saw the Trojans visit Leeds to take on the Yorkshire Rams. Injuries from the season opener had taken their toll on the Trojans, and the home team recorded a comfortable 21-point victory. The third game of their campaign saw the Trojans hosting a team who were yet to beat them in seven meetings, the Dundee Hurricanes. In yet another blow to the home side, the Hurricanes' strong running game overpowered the Trojans, leading to Dundee recording their first ever victory over West Coast. In game four, the Trojans finally broke their duck, recording an away victory over former Division Two club the Merseyside Nighthawks. However, this success didn't last, and the Trojans lost again in game five. The return of long-time running back Jon Sutherland for game six meant that they were able to record their second victory of the year. Their return fixture against the Dundee Hurricanes had to be postponed due to a lack of referees and, due to a lack of available players, the Trojans were forced to forfeit the return fixture against the East Kilbride Pirates. In their next played game, the Trojans fell to a crushing home loss to the Merseyside Nighthawks, dropping them to second bottom in Division One North. Their penultimate game of the season saw the Trojans crushed by the Doncaster Mustangs, leaving only the rescheduled game against the Dundee Hurricanes to save them from finishing bottom of Division One North.
2010 season

The Trojans' home games were played at Scotstoun Stadium in Glasgow. The 2010 season was classed as a rebuilding year and nothing much was expected of the Trojans, but they finished strongly and secured second spot behind the Wolves. The Trojans went on to lose to the eventual finalists, the Titans.

2011 season

The Trojans' home games were played at Portland Park in Troon. On the back of the return of running back Jordan Falconer, the Trojans returned to form emphatically in 2011, tearing through their conference with a 10-0 record and making it all the way to the British finals at Crystal Palace, London. Despite being heavy favourites, the Trojans suffered a crushing defeat at the hands of the South Wales Warriors.

2012 season

The Trojans' home games were again played at Portland Park in Troon, where the club hoped to set up a permanent base, including a training and development academy for American football. Having made it to the 2011 Division 2 final, the Trojans were promoted to Division One as part of a restructure of the top segment of the game. They started poorly, losing their opener to the Coventry Jets in awful weather conditions, but then went on a 10-game winning streak, taking the top seed in the North and beating the Berkshire Renegades convincingly in the national semi-final. This gave them their third shot at a British final as they took on the Sussex Thunder at Don Valley Stadium, Sheffield. The Trojans had the better record, a significantly more potent offense, and a defense which was almost twice as stingy as the Thunder's, giving them the tag of favourites for the game. But this again was largely down to the Trojans playing ball in the North, and everyone connected to the game knew the former Premier Division Thunder would be a hard nut to crack, more so with the return of many of their players. In horrendous conditions the Trojans outran and outpassed, but failed to outscore, the Thunder, who took the Bowl with a 10-7 win.

2013 season

The Trojans were promoted to the Premier Division in a restructure which saw the sport move to a two-tier format after the 2012 season. It was to prove the hardest season for the team since it first entered the league in 2005. The team lost a lot of players during pre-season but continued to battle on in games many would have forfeited: they played with 17 players against the Caesars, 22 against the Rams, and, in their last game, faced the #3 team in the UK with 23 players. The season was a sore one for the club's supporters, but they remained fully behind the coaches and players in the hope that the team would build for the 2014 season.

Fixtures and results:
27/04 vs Birmingham Bulls - L 0-3
05/05 @ Sheffield Predators - L 46-24
12/05 vs East Kilbride Pirates - L 0-34
09/06 @ Tamworth Phoenix - CANCELLED
16/06 @ Doncaster Mustangs - CANCELLED
14/07 vs Lancashire Wolverines - L 32-54
21/07 @ Nottingham Caesars - L 42-14
04/08 vs Yorkshire Rams - L 36-6
10/08 vs Coventry Jets - CANCELLED
17/08 @ East Kilbride Pirates - L 63-0

2014 season

With the leagues being restructured, the Trojans were placed in an all-Scotland league with the Aberdeen Roughnecks, Clyde Valley Blackhawks, Dundee Hurricanes, Edinburgh Wolves, and Glasgow Tigers. The team were optimistic about their chances, with a high number of rookies joining and especially with the return of influential players such as running back Jordan Falconer and defensive back Fabio Maturano.
In the end the team finished 7-0-3, with the losses coming twice to Clyde Valley and once to the Edinburgh Wolves; nonetheless, it was a vast improvement on the season before.
https://en.wikipedia.org/wiki/Basename
Basename
basename is a standard computer program on Unix and Unix-like operating systems. When basename is given a pathname, it deletes any prefix up to the last slash ('/') character and returns the result. basename is described in the Single UNIX Specification and is primarily used in shell scripts.

History

basename was introduced in issue 2 of the X/Open Portability Guide in 1987. It was inherited into the first version of POSIX and the Single UNIX Specification. It first appeared in 4.4BSD. The version of basename bundled in GNU coreutils was written by David MacKenzie. The command is available as a separate package for Microsoft Windows as part of the GnuWin32 project and the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities.

Usage

The Single UNIX Specification synopsis for basename is:

basename string [suffix]

string: a pathname.
suffix: if specified, basename will also delete the suffix.

Examples

basename retrieves the last name from a pathname, ignoring any trailing slashes:

$ basename /home/jsmith/base.wiki
base.wiki
$ basename /home/jsmith/
jsmith
$ basename /
/

basename can also be used to remove the end of the base name, but not the complete base name:

$ basename /home/jsmith/base.wiki .wiki
base
$ basename /home/jsmith/base.wiki ki
base.wi
$ basename /home/jsmith/base.wiki base.wiki
base.wiki

See also

List of Unix commands
dirname
Path (computing)
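The rules shown in the examples above are easy to replicate. A minimal Python sketch of the same semantics (an illustration, not the actual coreutils implementation, and ignoring locale and error handling; the empty-string case is handled as "." here, though POSIX leaves it partly unspecified):

def basename(string, suffix=None):
    """Mimic POSIX basename: strip trailing slashes, the directory prefix,
    and optionally a suffix (unless that would remove the entire name)."""
    if string == "":
        return "."
    stripped = string.rstrip("/")
    if stripped == "":
        return "/"          # a path consisting only of slashes reduces to "/"
    # Delete any prefix up to the last remaining slash.
    name = stripped.rsplit("/", 1)[-1]
    # Delete the suffix, but never the complete base name.
    if suffix and name != suffix and name.endswith(suffix):
        name = name[:-len(suffix)]
    return name

assert basename("/home/jsmith/base.wiki") == "base.wiki"
assert basename("/home/jsmith/") == "jsmith"
assert basename("/") == "/"
assert basename("/home/jsmith/base.wiki", ".wiki") == "base"
assert basename("/home/jsmith/base.wiki", "base.wiki") == "base.wiki"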
https://en.wikipedia.org/wiki/Wc%20%28Unix%29
Wc (Unix)
wc (short for word count) is a command in Unix, Plan 9, Inferno, and Unix-like operating systems. The program reads either standard input or a list of computer files and generates one or more of the following statistics: newline count, word count, and byte count. If a list of files is provided, both per-file and total statistics follow.

Example

Sample execution of wc:

$ wc foo bar
      40     149     947 foo
    2294   16638   97724 bar
    2334   16787   98671 total

The first column is the count of newlines: the text file foo has 40 newlines and bar has 2294, for a total of 2334 newlines. The second column indicates the number of words in each text file: there are 149 words in foo and 16638 words in bar, giving a total of 16787 words. The last column indicates the number of characters: foo has 947 characters and bar has 97724, for 98671 characters in all. Newer versions of wc can differentiate between byte and character count. This difference arises with Unicode, which includes multi-byte characters. The desired behaviour is selected with the -c or -m option.

History

wc has been part of the X/Open Portability Guide since issue 2 of 1987. It was inherited into the first version of POSIX.1 and the Single UNIX Specification. It appeared in Version 1 Unix. GNU wc used to be part of the GNU textutils package; it is now part of GNU coreutils. The version of wc bundled in GNU coreutils was written by Paul Rubin and David MacKenzie. A wc command is also part of ASCII's MSX-DOS2 Tools for MSX-DOS version 2. The command is available as a separate package for Microsoft Windows as part of the GnuWin32 project and the UnxUtils collection of native Win32 ports of common GNU Unix-like utilities. The command has also been ported to the IBM i operating system.

Usage

wc -c <filename>  prints the byte count
wc -l <filename>  prints the line count
wc -m <filename>  prints the character count
wc -w <filename>  prints the word count
wc -L <filename>  prints the length of the longest line (GNU extension)

See also

List of Unix commands
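The core counting logic described above is simple to sketch. A minimal Python approximation of wc's default output (an illustration, not the coreutils implementation; it counts newline characters, whitespace-delimited words, and bytes, and prints a total line when more than one file is given):

import sys

def wc(path):
    """Return (newline count, word count, byte count) for one file."""
    with open(path, "rb") as f:
        data = f.read()
    lines = data.count(b"\n")   # wc counts newline characters, not logical lines
    words = len(data.split())   # runs of non-whitespace bytes
    return lines, words, len(data)

if __name__ == "__main__":
    totals = [0, 0, 0]
    for name in sys.argv[1:]:
        counts = wc(name)
        totals = [t + c for t, c in zip(totals, counts)]
        print(f"{counts[0]:8d}{counts[1]:8d}{counts[2]:8d} {name}")
    if len(sys.argv) > 2:
        print(f"{totals[0]:8d}{totals[1]:8d}{totals[2]:8d} total")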
https://en.wikipedia.org/wiki/Pentium%20Pro
Pentium Pro
The Pentium Pro is a sixth-generation x86 microprocessor developed and manufactured by Intel and introduced on November 1, 1995. It introduced the P6 microarchitecture (sometimes termed i686) and was originally intended to replace the original Pentium in a full range of applications. While the Pentium and Pentium MMX had 3.1 and 4.5 million transistors, respectively, the Pentium Pro contained 5.5 million transistors. Later, it was reduced to a narrower role as a server and high-end desktop processor and was used in supercomputers like ASCI Red, the first computer to reach the trillion floating point operations per second (teraFLOPS) performance mark. The Pentium Pro was capable of both dual- and quad-processor configurations. It only came in one form factor, the relatively large rectangular Socket 8. The Pentium Pro was succeeded by the Pentium II Xeon in 1998.

Microarchitecture

The lead architect of the Pentium Pro was Fred Pollack, who specialized in superscalar architectures and had also worked as the lead engineer of the Intel iAPX 432.

Summary

The Pentium Pro incorporated a new microarchitecture, different from the Pentium's P5 microarchitecture. It has a decoupled, 14-stage superpipelined architecture which used an instruction pool. The Pentium Pro (P6) implemented many radical architectural differences mirroring other contemporary x86 designs such as the NexGen Nx586 and Cyrix 6x86. The Pentium Pro pipeline had extra decode stages to dynamically translate IA-32 instructions into buffered micro-operation sequences which could then be analysed, reordered, and renamed in order to detect parallelizable operations that may be issued to more than one execution unit at once. The Pentium Pro thus featured out-of-order execution, including speculative execution via register renaming. It also had a wider 36-bit address bus, usable by Physical Address Extension (PAE), allowing it to access up to 64 GB of memory. The Pentium Pro has an 8 KB instruction cache, from which up to 16 bytes are fetched on each cycle and sent to the instruction decoders. There are three instruction decoders. The decoders are unequal in ability: only one can decode any x86 instruction, while the other two can only decode simple x86 instructions. This restricts the Pentium Pro's ability to decode multiple instructions simultaneously, limiting superscalar execution. x86 instructions are decoded into 118-bit micro-operations (micro-ops). The micro-ops are reduced instruction set computer (RISC)-like; that is, they encode an operation, two sources, and a destination. The general decoder can generate up to four micro-ops per cycle, whereas the simple decoders can generate one micro-op each per cycle. Thus, x86 instructions that operate on memory (e.g., add this register to this location in memory) can only be processed by the general decoder, as this operation requires a minimum of three micro-ops. Likewise, the simple decoders are limited to instructions that can be translated into one micro-op. Instructions that require more than four micro-ops are translated with the assistance of a sequencer, which generates the required micro-ops over multiple clock cycles. The Pentium Pro was the first processor in the x86 family to support upgradeable microcode under BIOS and/or operating system (OS) control. Micro-ops exit the re-order buffer (ROB) and enter a reservation station (RS), where they await dispatch to the execution units. In each clock cycle, up to five micro-ops can be dispatched to five execution units.
The Pentium Pro has a total of six execution units: two integer units, one floating-point unit (FPU), a load unit, a store address unit, and a store data unit. One of the integer units shares the same ports as the FPU, and therefore the Pentium Pro can only dispatch one integer micro-op and one floating-point micro-op, or two integer micro-ops, per cycle, in addition to micro-ops for the other three execution units. Of the two integer units, only the one that shares the path with the FPU on port 0 has the full complement of functions such as a barrel shifter, multiplier, divider, and support for LEA instructions. The second integer unit, which is connected to port 1, does not have these facilities and is limited to simple operations such as add, subtract, and the calculation of branch target addresses.
The FPU executes floating-point operations. Addition and multiplication are pipelined and have a latency of three and five cycles, respectively. Division and square root are not pipelined and are executed in separate units that share the FPU's ports. Division and square root have a latency of 18-36 and 29-69 cycles, respectively; the smallest number applies to single precision (32-bit) floating-point numbers and the largest to extended precision (80-bit) numbers. Division and square root can operate simultaneously with adds and multiplies, being prevented from executing only when the result has to be stored in the ROB.
After the microprocessor was released, a bug was discovered in the floating-point unit, commonly called the "Pentium Pro and Pentium II FPU bug" and known by Intel as the "flag erratum". The bug occurs under some circumstances during floating point-to-integer conversion when the floating-point number will not fit into the smaller integer format, causing the FPU to deviate from its documented behaviour. The bug is considered to be minor and occurs under such special circumstances that very few, if any, software programs are affected.
The Pentium Pro P6 microarchitecture was used in one form or another by Intel for more than a decade. The pipeline would scale from its initial 150 MHz start all the way up to 1.4 GHz with the "Tualatin" Pentium III. The design's various traits would continue after that in the derivative core called "Banias" in the Pentium M and Intel Core (Yonah), which itself would evolve into the Core microarchitecture (Core 2 processor) in 2006 and onward.
Instruction set The Pentium Pro (P6) introduced new instructions to the Intel range. The CMOVxx ('conditional move') instructions move a value, either the contents of a register or a memory location, into another register only if a predicate condition xx on the flags register holds, xx being a flags predicate code as used in the conditional jump instructions. For example, CMOVNE moves a specified value into a register only if the NE (not-equal) condition is true in the flags register, i.e. the Z flag is 0. This allows the evaluation of if-then-else operations, such as the ? : operator in C, and gives a performance boost by avoiding costly jump and branch instructions. In CMOVxx destreg1, source_operand2, the first operand is the destination register and the second is the source register or memory location. The second operand cannot be an immediate (inline constant) value; such a constant must first be placed in a register. The predicate code xx can take the full range of values allowed in conditional branches.
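The semantics can be modelled in a few lines of software. This is a minimal sketch of the behaviour described above, not hardware; the flags value is supplied by hand rather than produced by a real compare instruction:

    def cmovne(zf, dest, src):
        """Software model of CMOVNE: return the destination's new value.
        The move happens only when the NE condition holds (zero flag clear);
        otherwise the destination keeps its old value."""
        return src if zf == 0 else dest

    # Equivalent of C's "r = (x != y) ? v : r;" without a conditional branch:
    x, y, r, v = 1, 2, 0, 99
    zf = 1 if (x - y) == 0 else 0   # the Z flag as a compare of x and y would set it
    r = cmovne(zf, r, v)            # r becomes 99, because x != y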
A second development was the documentation of the UD2 illegal instruction. This opcode is reserved and guaranteed to cause an illegal instruction exception on the P6 and all later processors. This allows developers to easily crash the current program in a future-proof fashion when a bug is detected by software.
Performance Despite being advanced for the time, the Pentium Pro's out-of-order, register-renaming architecture had trouble running 16-bit code and mixed code (8-bit with 16-bit, or 16-bit with 32-bit), because the use of partial registers caused frequent pipeline flushes. Specific use of partial registers was then a common performance optimization, as it incurred no performance penalty on pre-P6 Intel processors; moreover, the dominant operating systems at the time of the Pentium Pro's release were 16-bit DOS and the mixed 16/32-bit Windows 3.1x and Windows 95 (although the latter requires a 32-bit 80386 CPU, much of its code was still 16-bit for performance reasons, such as USER.exe). This, together with the high cost of Pentium Pro systems, led to tepid sales among PC buyers at the time. To make full use of the Pentium Pro's P6 microarchitecture, a fully 32-bit operating system is needed, such as Windows NT, Linux, Unix, or OS/2. The performance issues on legacy code were later partly mitigated by Intel with the Pentium II.
Compared to RISC microprocessors, the Pentium Pro, when introduced, slightly outperformed the fastest RISC microprocessors on integer performance when running the SPECint95 benchmark, but floating-point performance was significantly lower, half that of some RISC microprocessors. The Pentium Pro's integer performance lead disappeared rapidly, first overtaken by the MIPS Technologies R10000 in January 1996, and then by Digital Equipment Corporation's EV56 variant of the Alpha 21164.
Reviewers quickly noted the very slow writes to video memory as the weak spot of the P6 platform, with performance here being as low as 10% of that of an identically clocked Pentium system in benchmarks such as VIDSPEED. Methods to circumvent this included setting VESA drawing to system memory instead of video memory in games such as Quake; later, utilities such as FASTVID emerged, which could double performance in certain games by enabling the write-combining features of the CPU. Memory type range registers (MTRRs) were set automatically by Windows video drivers starting from around 1997; with write combining thus enabled, the improved cache/memory subsystem and FPU performance allowed the Pentium Pro to outclass the Pentium clock for clock in the emerging 3D games of the mid-to-late 1990s, particularly under NT4. However, its lack of MMX support reduced performance in multimedia applications that made use of those instructions.
Caching Perhaps the Pentium Pro's most noticeable addition was its on-package L2 cache, which ranged from 256 KB at introduction to 1 MB in 1997. At the time, manufacturing technology did not feasibly allow a large L2 cache to be integrated into the processor core. Intel instead placed the L2 die(s) separately in the package, which still allowed the cache to run at the same clock speed as the CPU core. Additionally, unlike most motherboard-based cache schemes that shared the main system bus with the CPU, the Pentium Pro's cache had its own back-side bus (called the dual independent bus by Intel). Because of this, the CPU could read main memory and cache concurrently, greatly reducing a traditional bottleneck.
The cache was also "non-blocking", meaning that the processor could issue more than one cache request at a time (up to four), reducing cache-miss penalties. (This is an example of memory-level parallelism, MLP.) These properties combined to produce an L2 cache that was immensely faster than the motherboard-based caches of older processors. This cache alone gave the CPU an advantage in input/output performance over older x86 CPUs. In multiprocessor configurations, the Pentium Pro's integrated cache greatly improved performance in comparison to architectures in which each CPU shared a central cache.
However, this far faster L2 cache did come with some complications. The Pentium Pro's "on-package cache" arrangement was unique. The processor and the cache were on separate dies in the same package, connected closely by a full-speed bus. The two or three dies had to be bonded together early in the production process, before testing was possible. This meant that a single, tiny flaw in either die made it necessary to discard the entire assembly, which was one of the reasons for the Pentium Pro's relatively low production yield and high cost. All versions of the chip were expensive, those with 1024 KB particularly so, since they required two 512 KB cache dies as well as the processor die.
Available models Pentium Pro clock speeds were 150, 166, 180 or 200 MHz with a 60 or 66 MHz external bus clock. Some users chose to overclock their Pentium Pro chips, with the 200 MHz version often being run at 233 MHz, the 180 MHz version at 200 MHz, and the 150 MHz version at 166 MHz. The chip was popular in symmetric multiprocessing configurations, with dual and quad SMP server and workstation setups being commonplace. In Intel's "Family/Model/Stepping" scheme, the Pentium Pro is family 6, model 1, and its Intel product code is 80521.
Fabrication The process used to fabricate the Pentium Pro processor die and its separate cache memory die changed over time, leading to a combination of processes used in the same package:
The 133 MHz Pentium Pro prototype processor die was fabricated in a 0.6 μm BiCMOS process.
The 150 MHz Pentium Pro processor die was fabricated in a 0.50 μm BiCMOS process.
The 166, 180, and 200 MHz Pentium Pro processor dies were fabricated in a 0.35 μm BiCMOS process.
The 256 KB L2 cache die was fabricated in a 0.50 μm BiCMOS process.
The 512 and 1024 KB L2 cache dies were fabricated in a 0.35 μm BiCMOS process.
Packaging The Pentium Pro (up to 512 KB cache) is packaged in a ceramic multi-chip module (MCM). The MCM contains two underside cavities in which the microprocessor die and its companion cache die reside. The dies are bonded to a heat slug, whose exposed top helps transfer heat from the dies more directly to the cooling apparatus, such as a heat sink. The dies are connected to the package using conventional wire bonding. The cavities are capped with a ceramic plate. The Pentium Pro with 1 MB of cache uses a plastic MCM. Instead of two cavities, there is only one, in which the three dies reside, bonded to the package instead of to a heat slug. The cavities are filled in with epoxy. The MCM has 387 pins, of which approximately half are arranged in a pin grid array (PGA) and half in an interstitial pin grid array (IPGA). The packaging was designed for Socket 8.
Upgrade paths In 1998, the 300/333 MHz Pentium II Overdrive processor for Socket 8 was released.
Featuring 512 KB of full-speed cache, it was produced by Intel as a drop-in upgrade option for owners of Pentium Pro systems. However, it supported only two-way glueless multiprocessing, not four-way or higher, which made it unusable as an upgrade for quad-processor systems. These specially packaged Pentium II Xeon processors were used to upgrade ASCI Red, which had become the first computer to reach the teraFLOPS performance mark with Pentium Pro processors and then became the first to exceed 2 teraFLOPS after the upgrade to Pentium II Xeon processors.
As Slot 1 motherboards became prevalent, several manufacturers released slocket adapters, such as the Tyan M2020, Asus C-P6S1, Tekram P6SL1, and the Abit KP6. The slockets allowed Pentium Pro processors to be used with Slot 1 motherboards. The Intel 440FX chipset explicitly supported both Pentium Pro and Pentium II processors, but the Intel 440BX and later Slot 1 chipsets did not explicitly support the Pentium Pro, so the Socket 8 slockets did not see wide use. Slockets, in the form of Socket 370 to Slot 1 adapters, saw renewed popularity when Intel introduced Socket 370 Celeron and Pentium III processors.
Core specifications
Pentium Pro
L1 cache: 8 + 8 KB (data + instructions)
L2 cache: 256 or 512 KB (one die) or 1024 KB (two 512 KB dies) in a multi-chip module, clocked at CPU speed
Socket: Socket 8
Front-side bus: 60 and 66 MHz
VCore: 3.1–3.3 V
Fabrication: 0.50 μm or 0.35 μm BiCMOS
Clock rate: 150, 166, 180, 200 MHz (capable of 233 MHz on some motherboards)
First release: November 1995
Pentium II Overdrive
L1 cache: 16 + 16 KB (data + instructions)
L2 cache: 512 KB external chip on CPU module, clocked at CPU speed
Socket: Socket 8
Multiplier: Locked at 5×
Front-side bus: 60 and 66 MHz
VCore: 3.1–3.3 V (has on-board voltage regulator)
Fabrication: 0.25 μm
Clock rate: Based on the Deschutes-generation Pentium II
First release: 1998
Supports MMX technology
Bus and multiprocessor capabilities The Pentium Pro used GTL+ signaling on its front-side bus. The Pentium Pro could be used by itself in up to four-way designs; eight-way Pentium Pro computers were also built, but these used multiple buses. The design of the Pentium Pro bus was influenced by Futurebus, the Intel iAPX 432 bus, and elements of the Intel i960 bus. Futurebus had been intended as an advanced bus to replace the VMEbus used with the Motorola 68000 from the late 1970s, but it stagnated in standardization committees for more than a decade. Intel's iAPX 432 initiative was also a commercial failure, but in the process Intel learned how to build a split-transaction bus to support a cacheless multiprocessor system. The i960 further developed the split-transaction iAPX 432 bus to include a cache coherency protocol, ending up with a feature set highly reminiscent of the original Futurebus ambitions. The lead architect of the i960 was Fred Pollack, who had also been the lead engineer of the Intel iAPX 432 and was the lead architect of the i686 chip, the Pentium Pro; he was intimately familiar with this history. The Pentium Pro was designed to include the four-way SMP split-transaction cache-coherent bus as a mandatory feature of every chip produced. This also served to deny competitors access to the socket to produce cloned processors.
While the Pentium Pro was not successful as a machine for the masses, due in part to its poor 16-bit performance under Windows 95, it became highly successful in the file server space thanks to its advanced, integrated bus design, bringing many features formerly available only in the pricey workstation segment to the commodity marketplace.
Pentium Pro/6th-generation competitors
AMD K5 and K6
Cyrix 6x86 and MII
IDT WinChip
Intel P5 Pentium, which co-existed with the Pentium Pro for several years
See also
List of Intel Pentium II microprocessors
List of Intel Pentium Pro microprocessors
References
External links
Backside Bus, searchstorage.techtarget.com
Intel Pentium Pro images and descriptions, cpu-collection.de
CPU-INFO: Intel Pentium Pro, in-depth processor history, web.archive.org
Computer-related introductions in 1995
Intel x86 microprocessors
Superscalar microprocessors
32-bit microprocessors
21139526
https://en.wikipedia.org/wiki/Sara%20Vietnam
Sara Vietnam
Sara Vietnam Joint Stock Company (SRA:VN, SRB:VN; also transliterated Sara Viet Nam; also known as Sara Group) is an information technology company of Vietnam. It has its origins in the 2002 founding of the SARA Center, a learning center that teaches information technology and foreign-language skills and graduates 2,000 people per year. The company is based in Hanoi and is listed on the Hanoi Securities Trading Center, one of Vietnam's two major stock markets.
Operations The company's primary activities include information technology research and software development, including business software for e-commerce, accounting, procurement, human resources, customer service, industrial production technology and hospital management. Sara Vietnam also works in the media, education and real estate fields. According to its website, the overall work of the company is divided into four "fields of action":
Software
Media, including television programming
Instruction; the company offers a bachelor's degree program in information technology, as well as foreign-language training
Real estate development, management, and building construction, including a 'Commercial Center' in Vinh City, a 'Software Park' in Hà Tây, a 'North Central Trade Centre' in Ha Noi, the 'Boris Smirnov Wine Factory', and a plastic factory
In addition to the four fields, the company is involved in Sara Window, a construction materials manufacturing company that makes windows, doors and room partitions. Sara Vietnam conducted a feasibility study on the possibilities of mobile marketing in Vietnam, supported by the Danish B2B Programme.
Professional associations Sara Vietnam is a member of the Vietnam Software Association (VINASA), the South East Asia Scientific Association, and the Vietnam Chamber of Commerce and Industry (VCCI).
University The company's management and education departments are working with the Ministry of Education towards the opening of SIBT University (Sara International Business Technology University) in Lai Châu Province in Vietnam's northwest.
Shareholders Sara Vietnam is a publicly traded company. 15% of its shares are owned by Japan's CPR International; this was the first Japanese investment in a Vietnamese private company.
References
External links
Sara Vietnam JSC
Companies established in 2002
Companies listed on the Hanoi Stock Exchange
Companies based in Hanoi
Information technology companies of Vietnam
1449
https://en.wikipedia.org/wiki/Alan%20Kay
Alan Kay
Alan Curtis Kay (born May 17, 1940) is an American computer scientist. He has been elected a Fellow of the American Academy of Arts and Sciences, the National Academy of Engineering, and the Royal Society of Arts. He is best known for his pioneering work on object-oriented programming and windowing graphical user interface (GUI) design, for which he was awarded the Turing Award in 2003. He was the president of the Viewpoints Research Institute before its closure in 2018, and an adjunct professor of computer science at the University of California, Los Angeles. He is also on the advisory board of TTI/Vanguard. Until mid-2005, he was a senior fellow at HP Labs, a visiting professor at Kyoto University, and an adjunct professor at the Massachusetts Institute of Technology (MIT). Kay is also a former professional jazz guitarist, composer, and theatrical designer, and an amateur classical pipe organist.
Early life and work Kay was born in Springfield, Massachusetts. His family relocated several times due to his father's career in physiology before ultimately settling in the New York metropolitan area when he was nine. He attended Brooklyn Technical High School. Having accumulated enough credits to graduate, Kay then attended Bethany College in Bethany, West Virginia, where he majored in biology and minored in mathematics. Thereafter, Kay taught guitar in Denver, Colorado for a year and hastily enlisted in the United States Air Force when the local draft board inquired about his non-student status. Assigned as a computer programmer (a rare billet usually filled by women, owing to the secretarial connotations of the field in that era) after passing an aptitude test, he devised an early cross-platform file transfer system. Following his discharge, Kay enrolled at the University of Colorado Boulder, earning a Bachelor of Science (B.S.) in mathematics and molecular biology in 1966. In the autumn of 1966, he began graduate school at the University of Utah College of Engineering, earning a Master of Science (M.S.) in electrical engineering in 1968 and a Doctor of Philosophy (Ph.D.) in computer science in 1969. His doctoral dissertation, FLEX: A Flexible Extendable Language, described the invention of a computer language named FLEX. While there, he worked with the "fathers of computer graphics" David C. Evans (who had recently been recruited from the University of California, Berkeley to start Utah's computer science department) and Ivan Sutherland (best known for writing such pioneering programs as Sketchpad). Their mentorship greatly inspired Kay's evolving views on objects and programming. As he grew busier with research for the Defense Advanced Research Projects Agency (DARPA), he ended his musical career. In 1968, he met Seymour Papert and learned of the programming language Logo, a dialect of Lisp optimized for educational purposes. This led him to the work of Jean Piaget, Jerome Bruner, Lev Vygotsky, and constructionist learning, further influencing his professional orientation. Leaving Utah as an associate professor of computer science in 1969, Kay became a visiting researcher at the Stanford Artificial Intelligence Laboratory in anticipation of accepting a professorship at Carnegie Mellon University. Instead, in 1970, he joined the Xerox PARC research staff in Palo Alto, California. Throughout the decade, he developed prototypes of networked workstations using the programming language Smalltalk.
These inventions were later commercialized by Apple in their Lisa and Macintosh computers. Along with some colleagues at PARC, Kay is one of the fathers of the idea of object-oriented programming (OOP), which he named. Some of the original object-oriented concepts, including the use of the words 'object' and 'class', had been developed for Simula 67 at the Norwegian Computing Center. Later he said: I'm sorry that I long ago coined the term "objects" for this topic because it gets many people to focus on the lesser idea. The big idea is "messaging". While at PARC, Kay conceived the Dynabook concept, a key progenitor of laptop and tablet computers and the e-book. He is also the architect of the modern overlapping windowing graphical user interface (GUI). Because the Dynabook was conceived as an educational platform, Kay is considered to be one of the first researchers into mobile learning; many features of the Dynabook concept have been adopted in the design of the One Laptop Per Child educational platform, with which Kay is actively involved. Recognition and recent work From 1981 to 1984, Kay was the chief scientist at Atari. In 1984, he became an Apple Fellow. Following the closure of the Apple Advanced Technology Group in 1997, he was recruited by his friend Bran Ferren, head of research and development at Disney, to join Walt Disney Imagineering as a Disney Fellow. He remained there until Ferren left to start Applied Minds Inc with Imagineer Danny Hillis, leading to the cessation of the Fellows program. In 2001, he founded Viewpoints Research Institute, a nonprofit organization dedicated to children, learning, and advanced software development. For its first ten years, Kay and his Viewpoints group were based at Applied Minds in Glendale, California, where he and Ferren continued to work together on various projects. Kay was also a senior fellow at Hewlett-Packard until HP disbanded the Advanced Software Research Team on July 20, 2005. Squeak, Etoys, and Croquet In December 1995, while still at Apple, Kay collaborated with many others to start the open source Squeak version of Smalltalk, and he continues to work on it. As part of this effort, in November 1996, his team began research on what became the Etoys system. More recently he started, along with David A. Smith, David P. Reed, Andreas Raab, Rick McGeer, Julian Lombardi, and Mark McCahill, the Croquet Project, an open source networked 2-D and 3-D environment for collaborative work. Tweak In 2001, it became clear that the Etoy architecture in Squeak had reached its limits in what the Morphic interface infrastructure could do. Andreas Raab was a researcher working in Kay's group, then at Hewlett-Packard. He proposed defining a "script process" and providing a default scheduling mechanism that avoids several more general problems. The result was a new user interface, proposed to replace the Squeak Morphic user interface in the future. Tweak added mechanisms of islands, asynchronous messaging, players and costumes, language extensions, projects, and tile scripting. Its underlying object system is class-based, but to users (during programming) it acts as if it were prototype-based. Tweak objects are created and run in Tweak project windows. Children's machine In November 2005, at the World Summit on the Information Society, the MIT research laboratories unveiled a new laptop computer, for educational use around the world. It has many names: the $100 Laptop, the One Laptop per Child program, the Children's Machine, and the XO-1. 
The program was begun and is sustained by Kay's friend Nicholas Negroponte, and is based on Kay's Dynabook ideal. Kay is a prominent co-developer of the computer, focusing on its educational software using Squeak and Etoys.
Reinventing programming Kay has lectured extensively on the idea that the computer revolution is very new and that all of the good ideas have not been implemented universally. His lectures at the OOPSLA 1997 conference, and his ACM Turing Award talk entitled "The Computer Revolution Hasn't Happened Yet", were informed by his experiences with Sketchpad, Simula, Smalltalk, and the bloated code of commercial software. On August 31, 2006, Kay's proposal to the United States National Science Foundation (NSF) was granted, funding Viewpoints Research Institute for several years. The proposal title was: STEPS Toward the Reinvention of Programming: A Compact and Practical Model of Personal Computing as a Self-exploratorium. A sense of what Kay is trying to do comes from this quote, from the abstract of a seminar on the topic given at Intel Research Labs, Berkeley: "The conglomeration of commercial and most open source software consumes in the neighborhood of several hundreds of millions of lines of code these days. We wonder: how small could be an understandable practical "Model T" design that covers this functionality? 1M lines of code? 200K LOC? 100K LOC? 20K LOC?"
Awards and honors Alan Kay has received many awards and honors, among them:
2001: UdK 01-Award in Berlin, Germany, for pioneering the GUI; J-D Warnier Prix d'Informatique; NEC C&C Prize
2002: Telluride Tech Festival Award of Technology in Telluride, Colorado
2003: ACM Turing Award, "For pioneering many of the ideas at the root of contemporary object-oriented programming languages, leading the team that developed Smalltalk, and for fundamental contributions to personal computing."
2004: Kyoto Prize; Charles Stark Draper Prize with Butler W. Lampson, Robert W. Taylor and Charles P. Thacker
2012: UPE Abacus Award, given to individuals who have provided extensive support and leadership for student-related activities in the computing and information disciplines
Honorary doctorates:
2002: Kungliga Tekniska Högskolan (Royal Institute of Technology) in Stockholm
2005: Georgia Institute of Technology
2005: Columbia College Chicago, Doctor of Humane Letters, honoris causa
2007: Laurea Honoris Causa in Informatica, Università di Pisa, Italy
2008: University of Waterloo
2009: Kyoto University
2010: Universidad de Murcia
2017: University of Edinburgh
Honorary Professor, Berlin University of the Arts
Elected Fellow of:
American Academy of Arts and Sciences
1997: National Academy of Engineering, for inventing the concept of portable personal computing
Royal Society of Arts
1999: Computer History Museum, "for his fundamental contributions to personal computing and human-computer interface development"
2008: Association for Computing Machinery, "For fundamental contributions to personal computing and object-oriented programming."
2011: Hasso Plattner Institute
His other honors include the J-D Warnier Prix d'Informatique, the ACM Systems Software Award, the NEC Computers & Communication Foundation Prize, the Funai Foundation Prize, the Lewis Branscomb Technology Award, and the ACM SIGCSE Award for Outstanding Contributions to Computer Science Education.
See also List of pioneers in computer science References External links Viewpoints Research Institute "There is no information content in Alan Kay" 2012 1940 births American computer programmers American computer scientists Apple Inc. employees Apple Fellows Atari people Computer science educators Draper Prize winners Fellows of the American Association for the Advancement of Science Fellows of the Association for Computing Machinery Hewlett-Packard people Human–computer interaction researchers Living people Massachusetts Institute of Technology faculty Open source advocates People from Springfield, Massachusetts Programming language designers Scientists at PARC (company) Turing Award laureates University of California, Los Angeles faculty University of Colorado Boulder alumni University of Utah alumni
17487236
https://en.wikipedia.org/wiki/Promise%20theory
Promise theory
Promise Theory, in the context of information science, is a model of voluntary cooperation between individual, autonomous actors or agents who publish their intentions to one another in the form of promises. It is a form of labelled graph theory, describing discrete networks of agents joined by the unilateral promises they make. A 'promise' is a declaration of intent whose purpose is to increase the recipient's certainty about a claim of past, present or future behaviour. For a promise to increase certainty, the recipient needs to trust the promiser, but trust can also be built on the verification (or 'assessment') that previous promises have been kept; trust and promises thus have a symbiotic relationship. Each agent assesses its belief in the promise's outcome or intent, so Promise Theory is about the relativity of autonomous agents. One of the goals of Promise Theory is to offer a model that unifies the physical (or dynamical) description of an information system with its intended meaning, i.e. its semantics. This has been used to describe the configuration management of resources in information systems, among other things.
History Promise Theory was proposed by Mark Burgess in 2004, in the context of computer science, in order to solve problems present in obligation-based schemes for policy-based computer management. However, its usefulness was quickly seen to go far beyond computing: the simple model of a promise used in Promise Theory (now called 'micro-promises') can easily address matters of economics and organization. Promise Theory has since been developed by Burgess in collaboration with Dutch computer scientist Jan Bergstra, resulting in a book, Promise Theory: Principles and Applications, published in 2013. Interest in promise theory has grown in the IT industry, with several products citing it.
Autonomy Obligations, rather than promises, have been the traditional way of guiding behaviour. Promise Theory's point of departure from obligation logics is the idea that all agents in a system should have autonomy of control, i.e. that they cannot be coerced or forced into a specific behaviour. Obligation theories in computer science often view an obligation as a deterministic command that causes its proposed outcome. In Promise Theory, an agent may only make promises about its own behaviour; for autonomous agents, it is meaningless to make promises about another's behaviour. Although this assumption could be interpreted morally or ethically, in Promise Theory it is simply a pragmatic 'engineering' principle, which leads to a more complete documentation of the intended roles of the actors or agents within the whole. The reason for this is that, when one is not allowed to make assumptions about others' behaviour, one is forced to document every promise more completely in order to make predictions; this more complete documentation in turn exposes the possible failure modes by which cooperative behaviour could fail. Command-and-control systems, like those that motivate obligation theories, can easily be reproduced by having agents voluntarily promise to follow the instructions of another agent (this is also viewed as a more realistic model of behaviour). Since a promise can always be withdrawn, there is no contradiction between voluntary cooperation and command and control. In philosophy and law, a promise is often viewed as something that leads to an obligation; Promise Theory rejects that point of view.
Bergstra and Burgess have shown that the concept of a promise is quite independent of that of obligation, and indeed is simpler. The role of obligations in increasing certainty is unclear, since obligations can come from anywhere, and an aggregation of non-local constraints cannot be resolved by a local agent: this means that obligations can actually increase uncertainty. In a world of promises, all constraints on an agent are self-imposed and local (even if they are suggested by outside agents), so all contradictions can be resolved locally.
Multi-agent systems and commitments The theory of commitments in multi-agent systems has some similarities with aspects of promise theory, but there are key differences. In Promise Theory, a commitment is a subset of intentions. Since a promise is a published intention, a commitment may or may not be a promise. A detailed comparison of promises and commitments, in the senses intended in their respective fields, is not a trivial matter.
Economics Promises can be valuable to the promisee or even to the promiser; they might also lead to costs. There is thus an economic story to tell about promises. The economics of promises naturally motivate 'selfish agent' behaviour, and Promise Theory can be seen as a motivation for game-theoretical decision making, in which multiple promises play the role of strategies in a game. The theory of promises as applied to organizations bears some resemblance to the theory of institutional diversity by Elinor Ostrom. Several of the same themes and considerations appear; the main difference is that Ostrom focuses, like many authors, on the role of external rules and obligations. Promise Theory takes the opposite viewpoint: obeying rules is a voluntary act, and hence it makes sense to focus on those voluntary promises. An attempt to force obedience without a promise is considered to constitute an attack. One benefit of a Promise Theory approach is that it does not require special structural elements (e.g. Ostrom's institutional "Positions") to describe different roles in a collaborative network, since these may also be viewed as promises in Promise Theory; there is thus a parsimony that helps to avoid an explosion of concepts and, perhaps more importantly, admits mathematical formalization. The algebra and calculus of promises allow simple reasoning in a mathematical framework.
CFEngine In spite of the generality of Promise Theory, it was originally proposed by Burgess as a way of modelling the computer management software CFEngine and its autonomous behaviour. Existing theories based on obligation were unsuitable. CFEngine uses a model of autonomy both as a way of avoiding distributed inconsistency in policy and as a security principle against external attack: no agent can be forced to receive information or instructions from another agent, so all cooperation is voluntary. For many users of the software, this property has been instrumental in both keeping their systems safe and adapting to local requirements.
Emergent behaviour In computer science, Promise Theory describes policy-governed services in a framework of completely autonomous agents which assist one another by voluntary cooperation alone. It is a framework for analyzing realistic models of modern networking, and a formal model for swarm intelligence.
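The labelled-graph view of agents and promises can be made concrete in a few lines. This is a minimal sketch with illustrative names and a deliberately crude consistency check; it is not part of the published formalism:

    from collections import defaultdict

    class PromiseGraph:
        """Agents as nodes, promises as labelled directed edges."""
        def __init__(self):
            self.promises = []   # (promiser, promisee, body) triples

        def promise(self, promiser, promisee, body):
            # By the autonomy principle, an agent only promises its own
            # behaviour, so every edge originates at the constrained agent.
            self.promises.append((promiser, promisee, body))

        def review(self):
            """Group promise bodies by (promiser, promisee) edge; an edge
            carrying more than one distinct body is a candidate
            contradiction for local inspection."""
            bodies = defaultdict(set)
            for promiser, promisee, body in self.promises:
                bodies[(promiser, promisee)].add(body)
            return {edge: b for edge, b in bodies.items() if len(b) > 1}

    g = PromiseGraph()
    g.promise("server", "client", "respond within 100 ms")
    g.promise("client", "server", "send well-formed requests")
    g.promise("server", "client", "respond within 500 ms")
    print(g.review())   # flags the two differing server -> client promises

Because every promise is self-imposed and local, the check only ever needs to examine the edges leaving a single agent, which mirrors the claim above that contradictions can be resolved locally.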
Promise theory may be viewed as a logical and graph-theoretical framework for understanding complex relationships in networks, where many constraints have to be met. It was developed at Oslo University College, drawing on ideas from several different lines of research conducted there, including policy-based management, graph theory, logic and configuration management. It uses a constructivist approach that builds conventional management structures from graphs of interacting, autonomous agents. Promises can be asserted either from an agent to itself or from one agent to another, and each promise implies a constraint on the behavior of the promising agent. The atomicity of promises makes them a tool for finding contradictions and inconsistencies.
Agency as a model of systems in space and time The promises made by autonomous agents lead to a mutually approved graph structure, which in turn leads to spatial structures in which the agents represent point-like locations. This allows models of smart spaces, i.e. semantically labeled or even functional spaces such as databases, knowledge maps, warehouses and hotels, to be unified with other, more conventional descriptions of space and time. The model of semantic spacetime uses promise theory to discuss these spacetime concepts. Promises are more mathematically primitive than graph adjacencies, since a link requires the mutual consent of two autonomous agents; the concept of a connected space therefore requires more work to build structure. This makes promises mathematically interesting as a notion of space, and offers a useful way of modeling physical and virtual information systems.
References
Theoretical computer science
294813
https://en.wikipedia.org/wiki/Internet%20forum
Internet forum
An Internet forum, or message board, is an online discussion site where people can hold conversations in the form of posted messages. They differ from chat rooms in that messages are often longer than one line of text, and are at least temporarily archived. Also, depending on the access level of a user or the forum's set-up, a posted message might need to be approved by a moderator before it becomes publicly visible. Forums have a specific jargon associated with them; for example, a single conversation is called a "thread", or topic. A discussion forum is hierarchical or tree-like in structure: a forum can contain a number of subforums, each of which may have several topics. Within a forum's topic, each new discussion started is called a thread and can be replied to by as many people as wish to. Depending on the forum's settings, users can be anonymous or must register with the forum and then subsequently log in to post messages. On most forums, users do not have to log in to read existing messages.
History The modern forum originated from bulletin boards and so-called computer conferencing systems, and is a technological evolution of the dial-up bulletin board system. From a technological standpoint, forums or boards are web applications managing user-generated content. Early Internet forums could be described as a web version of an electronic mailing list or newsgroup (such as exist on Usenet), allowing people to post messages and comment on other messages. Later developments emulated the different newsgroups or individual lists, providing more than one forum dedicated to a particular topic. Internet forums are prevalent in several developed countries: Japan posts the most, with over two million posts per day on its largest forum, 2channel, while China also has many millions of posts on forums such as Tianya Club. Some of the first forum systems were the Planet-Forum system, developed at the beginning of the 1970s, the EIES system, first operational in 1976, and the KOM system, first operational in 1977. One of the first forum sites, which is still active today, is Delphi Forums, once called Delphi; the service, with four million members, dates to 1983. Forums perform a function similar to that of the dial-up bulletin board systems and Usenet networks first created starting in the late 1970s. Early web-based forums date back as far as 1994, with the WIT project from the W3 Consortium, and from this time many alternatives were created. A sense of virtual community often develops around forums that have regular users. Technology, video games, sports, music, fashion, religion, and politics are popular areas for forum themes, but there are forums for a huge number of topics. Internet slang and image macros popular across the Internet are abundant and widely used in Internet forums.
Forum software packages are widely available on the Internet and are written in a variety of programming languages, such as PHP, Perl, Java and ASP. The configuration and records of posts can be stored in text files or in a database. Each package offers different features, from the most basic, providing text-only postings, to more advanced packages offering multimedia support and formatting code (usually known as BBCode). Many packages can be integrated easily into an existing website to allow visitors to post comments on articles. Several other web applications, such as blog software, also incorporate forum features.
WordPress comments at the bottom of a blog post allow for a single-threaded discussion of any given blog post. Slashcode, on the other hand, is far more complicated, allowing fully threaded discussions and incorporating a robust moderation and meta-moderation system, as well as many of the profile features available to forum users. Some stand-alone threads on forums have reached fame and notability, such as the "I am lonely will anyone speak to me" thread on MovieCodec.com's forums, which was described as the "web's top hangout for lonely folk" by Wired magazine.
Structure A forum consists of a tree-like directory structure. The top end is "Categories": a forum can be divided into categories for the relevant discussions. Under the categories are sub-forums, and these sub-forums can further have more sub-forums. The topics (commonly called threads) come under the lowest level of sub-forums, and these are the places under which members can start their discussions or posts. Logically, forums are organized into a finite set of generic topics (usually with one main topic), driven and updated by a group known as members and governed by a group known as moderators. A forum can also have a graph structure. All message boards use one of three possible display formats (non-threaded, semi-threaded, or fully threaded), each with its own advantages and disadvantages. If messages are not related to one another at all, a non-threaded format is best. If a user has a message topic and multiple replies to that message topic, a semi-threaded format is best. If a user has a message topic, replies to that message topic, and responses to replies, then a fully threaded format is best.
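The difference between the first and last of these formats can be sketched in a few lines. This is a hypothetical minimal model, not the code of any particular forum package; posts are represented as (id, parent id, text) tuples in posting order:

    posts = [
        (1, None, "Original post"),
        (2, 1, "Reply to the original post"),
        (3, 1, "Another reply to the original post"),
        (4, 2, "Response to the first reply"),
    ]

    def flat_view(posts):
        # Non-threaded: posts in chronological order only.
        return [text for _, _, text in posts]

    def threaded_view(posts, parent=None, depth=0):
        # Fully threaded: logical reply structure first, chronology second.
        lines = []
        for post_id, parent_id, text in posts:
            if parent_id == parent:
                lines.append("  " * depth + text)
                lines.extend(threaded_view(posts, post_id, depth + 1))
        return lines

    print("\n".join(threaded_view(posts)))
    # Original post
    #   Reply to the original post
    #     Response to the first reply
    #   Another reply to the original post

A semi-threaded view corresponds to stopping the recursion at depth 1, grouping direct replies under the topic without nesting responses to replies.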
Common privileges of moderators include: deleting, merging, moving, and splitting of posts and threads, locking, renaming, stickying of threads, banning, unbanning, suspending, unsuspending, warning the members, or adding, editing, and removing the polls of threads. "Junior modding", "backseat modding", or "forum copping" can refer negatively to the behavior of ordinary users who take a moderator-like tone in criticizing other members. Essentially, it is the duty of the moderator to manage the day-to-day affairs of a forum or board as it applies to the stream of user contributions and interactions. The relative effectiveness of this user management directly impacts the quality of a forum in general, its appeal, and its usefulness as a community of interrelated users. Moderators act as unpaid volunteers on many websites, which has sparked controversies and community tensions. On Reddit, some moderators have prominently expressed dissatisfaction with their unpaid labor being underappreciated, while other site users have accused moderators of abusing special access privileges to act as a "cabal" of "petty tyrants". On 4chan, moderators are subject to notable levels of mockery and contempt. There, they are often referred to as janitors (or, more pejoratively, "jannies") given their job of what is tantamount to cleaning up the imageboards' infamous shitposting. Administrators The administrators (short form: "admin") manage the technical details required for running the site. As such, they may promote (and demote) members to/from moderators, manage the rules, create sections and sub-sections, as well as perform any database operations (database backup etc.). Administrators often also act as moderators. Administrators may also make forum-wide announcements, or change the appearance (known as the skin) of a forum. There are also many forums where administrators share their knowledge. Post A post is a user-submitted message enclosed into a block containing the user's details and the date and time it was submitted. Members are usually allowed to edit or delete their own posts. Posts are contained in threads, where they appear as blocks one after another. The first post starts the thread; this may be called the TS (thread starter) or OP (original post). Posts that follow in the thread are meant to continue discussion about that post, or respond to other replies; it is not uncommon for discussions to be derailed. On Western forums, the classic way to show a member's own details (such as name and avatar) has been on the left side of the post, in a narrow column of fixed width, with the post controls located on the right, at the bottom of the main body, above the signature block. In more recent forum software implementations, the Asian style of displaying the members' details above the post has been copied. Posts have an internal limit usually measured in characters. Often one is required to have a message with a minimum length of 10 characters. There is always an upper limit but it is rarely reached – most boards have it at either 10,000, 20,000, 30,000, or 50,000 characters. Most forums keep track of a user's postcount. The postcount is a measurement of how many posts a certain user has made. Users with higher postcounts are often considered more reputable than users with lower postcounts, but not always. For instance, some forums have disabled postcounts with the hopes that doing so will emphasize the quality of information over quantity. 
Thread A thread (sometimes called a topic) is a collection of posts, usually displayed from oldest to latest, although this is typically configurable: options for newest-to-oldest ordering and for a threaded view (a tree-like view applying logical reply structure before chronological order) can be available. A thread is defined by a title, an additional description that may summarize the intended discussion, and an opening or original post (common abbreviation OP, which can also be used to refer to the original poster), which opens whatever dialogue or makes whatever announcement the poster wished. A thread can contain any number of posts, including multiple posts from the same members, even if they are one after the other.
Bumping A thread is contained in a forum and may have an associated date that is taken as the date of the last post (options to order threads by other criteria are generally available). When a member posts in a thread, it will jump to the top, since it is the most recently updated thread. Similarly, other threads will jump in front of it when they receive posts. When a member posts in a thread for no reason but to have it go to the top, it is referred to as a bump or bumping. It has been suggested that "bump" is an acronym of "bring up my post"; however, this is almost certainly a backronym, and the usage is entirely consistent with the verb "bump" meaning "to knock to a new position". On some message boards, users can choose to sage a post if they wish to reply to a thread without bumping it. The word "sage" derives from the 2channel term 下げる sageru, meaning "to lower".
Stickying Threads that are important but rarely receive posts are stickied (or, in some software, "pinned"). A sticky thread will always appear in front of normal threads, often in its own section. A "threaded discussion group" is simply any group of individuals who use a forum for threaded, or asynchronous, discussion purposes; the group may or may not be the only users of the forum. A thread's popularity is measured on forums in reply counts (total posts minus one, the opening post, in most default forum settings). Some forums also track page views. Threads meeting a set number of posts or a set number of views may receive a designation such as "hot thread" and be displayed with a different icon compared to other threads. This icon may stand out more to emphasize the thread. If the forum's users have lost interest in a particular thread, it becomes a dead thread.
Discussion Forums prefer a premise of open and free discussion and often adopt de facto standards. The most common topics on forums include questions, comparisons, polls of opinion, and debates. It is not uncommon for nonsense or antisocial behavior to sprout as people lose their tempers, especially if the topic is controversial. Poor understanding of differences in the values of the participants is a common problem on forums. Because replies to a topic are often aimed at someone's point of view, discussion will usually drift off in several directions as people question each other's validity, sources and so on. Circular discussion and ambiguity in replies can extend for several tens of posts of a thread, eventually ending when everyone gives up or attention spans waver and a more interesting subject takes over. It is not uncommon for debate to end in ad hominem attacks.
Liabilities of owners and moderators Several lawsuits have been brought against forums and moderators claiming libel and damages.
One such case is the ScubaBoard lawsuit, in which a business in the Maldives filed a suit against ScubaBoard for libel and defamation in January 2010. For the most part, though, forum owners and moderators in the United States are protected by Section 230 of the Communications Decency Act, which states that "[n]o provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider." In 2019, Facebook faced a class-action lawsuit brought by moderators diagnosed with post-traumatic stress disorder; it was settled for $52 million the following year.
Common features To qualify as an Internet forum, a web application needs the ability to submit threads and replies. Typically, threads are listed from newest to oldest, while replies within a thread are shown from oldest to newest.
Tripcodes and capcodes Most imageboards and 2channel-style discussion boards allow (and encourage) anonymous posting and use a system of tripcodes instead of registration. A tripcode is the hashed result of a password that allows one's identity to be recognized without storing any data about users. In a tripcode system, a secret password is added to the user's name following a separator character (often a number sign). This password, or tripcode, is hashed into a special key, or trip, distinguishable from the name by HTML styles. Tripcodes cannot be faked, but on some types of forum software they are insecure and can be guessed; on other types, they can be brute-forced with software designed to search for tripcodes, such as Tripcode Explorer. Moderators and administrators will frequently assign themselves capcodes: tripcodes where the guessable trip is replaced with a special notice (such as "# Administrator"), or cap.
Private message A private message, or PM for short, is a message sent in private from a member to one or more other members. The ability to send so-called blind carbon copies is sometimes available; when sending a blind carbon copy (bcc), the users to whom the message is sent directly will not be aware of the recipients of the blind carbon copy, or even that one was sent in the first place. Private messages are generally used for personal conversations. They can also be used with tripcodes: a message is addressed to a public trip and can be picked up by typing in the tripcode.
Attachment An attachment can be almost any file. When someone attaches a file to a post, they are uploading that particular file to the forum's server. Forums usually have very strict limits on what can be attached and what cannot (among them the size of the files in question). Attachments can be part of a thread, social group, etc.
BBCode and HTML HyperText Markup Language (HTML) is sometimes allowed, but usually its use is discouraged or, when allowed, extensively filtered. Modern bulletin board systems often have it disabled altogether, or allow only administrators to use it, as allowing it at any normal user level is considered a security risk due to a high rate of XSS vulnerabilities. When HTML is disabled, Bulletin Board Code (BBCode) is the most common preferred alternative. BBCode usually consists of tags similar to HTML, except that instead of < and > the tag name is enclosed within square brackets ([ and ]). Commonly, [i] is used for italic type, [b] for bold, [u] for underline, [color="value"] for color and [list] for lists, as well as [img] for images and [url] for links.
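A minimal sketch of how a package might render such tags to HTML follows. This is a hypothetical converter handling only a few tags; real implementations must also HTML-escape the user's input before substitution (to avoid XSS) and validate tag nesting:

    import re

    # Map a handful of BBCode tag names to their HTML equivalents.
    SIMPLE_TAGS = {"b": "strong", "i": "em", "u": "u"}

    def render_bbcode(text):
        """Render a few BBCode tags to HTML via regex substitution."""
        for bb, html in SIMPLE_TAGS.items():
            text = re.sub(rf"\[{bb}\](.*?)\[/{bb}\]",
                          rf"<{html}>\1</{html}>", text, flags=re.S)
        # [url]...[/url] becomes a link whose target is also its text.
        text = re.sub(r"\[url\](.*?)\[/url\]", r'<a href="\1">\1</a>', text)
        return text

    print(render_bbcode("[b]This[/b] is [i]clever[/i] [b][i]text[/i][/b]"))
    # <strong>This</strong> is <em>clever</em> <strong><em>text</em></strong>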
For example, the BBCode [b]This[/b] is [i]clever[/i] [b][i]text[/i][/b] is rendered to HTML when the post is viewed, appearing as: This is clever text. Many forum packages offer a way to create custom BBCodes (BBCodes that are not built into the package), where the administrator of the board can create complex BBCodes allowing the use of JavaScript or iframe functions in posts, for example embedding a YouTube or Google Video clip, complete with viewer, directly into a post.
Emoticon An emoticon or smiley is a symbol or combination of symbols used to convey emotional content in written or message form. Forums implement a system through which some of the text representations of emoticons (e.g. xD, :p) are rendered as small images. Depending on the part of the world the forum's topic originates from (since most forums are international), smilies can be replaced by other forms of similar graphics; an example would be kaoani (e.g. *(^O^)*, (^-^)b), or even text between special symbols (e.g. :blink:, :idea:).
Poll Most forums implement an opinion poll system for threads. Most implementations allow for single-choice or multiple-choice polls (sometimes limited to a certain number of options), with private or public display of voters. Polls can be set to expire after a certain date or, in some cases, after a number of days from their creation. Members vote in a poll, and the results are displayed graphically.
RSS and ATOM RSS and ATOM feeds allow a minimalistic means of subscribing to a forum. Common implementations allow RSS feeds to list only the last few threads updated for the forum index, and the last posts in a thread.
Other features An ignore list allows members to hide the posts of other members that they do not want to see or have a problem with. In most implementations, this is referred to as a foe list or ignore list. The posts are usually not hidden, but minimized, with only a small bar indicating that a post from a user on the ignore list is there. Almost all Internet forums include a member list, which allows display of all forum members, with an integrated search feature. Some forums will not list members with 0 posts, even if they have activated their accounts. Many forums allow users to give themselves an avatar: an image that appears beside all of a user's posts, in order to make the user more recognizable. The user may upload the image to the forum database or may provide a link to an image on a separate website. Each forum has limits on the height, width, and data size of avatars that may be used; if the user tries to use an avatar that is too big, it may be scaled down or rejected. Similarly, most forums allow users to define a signature (sometimes called a sig), which is a block of text, possibly with BBCode, that appears at the bottom of all of the user's posts. There is a character limit on signatures, though it may be so high that it is rarely hit. Often the forum's moderators impose manual rules on signatures to prevent them from being obnoxious (for example, being extremely long or having flashing images), and issue warnings or bans to users who break these rules. Like avatars, signatures may improve the recognizability of a poster. They may also allow the user to attach information to all of their posts, such as proclaiming support for a cause, noting facts about themselves, or quoting humorous things that have previously been said on the forum.
Common on forums, a subscription is a form of automated notification integrated into the software of most forums. It usually notifies the member either by email or on the site when the member returns. The option to subscribe is available for every thread while logged in. Subscriptions work with read marking: the software flags as unread any content it has never served to the user.

Recent development in some popular implementations of forum software has brought social-network features and functionality. Such features include personal galleries, profile pages, and chat systems resembling those of social networks. Most forum software is now fully customizable, with "hacks" or "modifications" readily available to tailor a forum to its owner's and members' needs.

Often forums use "cookies", information about the user's behavior on the site that is sent to the user's browser and used upon re-entry into the site. This is done to facilitate automatic login and to show a user whether a thread or forum has received new posts since his or her last visit. These may be disabled or cleared at any time.

Rules and policies

Forums are governed by a set of individuals, collectively referred to as staff and made up of administrators and moderators, who are responsible for the forums' conception, technical maintenance, and policies (their creation and enforcement). Most forums have a list of rules detailing the wishes, aims, and guidelines of the forums' creators. There is usually also a FAQ section containing basic information for new members and people not yet familiar with the use and principles of a forum (generally tailored for specific forum software).

Rules on forums usually apply to the entire user body and often have preset exceptions, most commonly designating a particular section as exempt. For example, in an IT forum any discussion regarding anything but computer programming languages may be against the rules, with the exception of a general chat section.

Forum rules are maintained and enforced by the moderation team, but users are allowed to help out via what is known as a report system. Most Western forum platforms provide such a system automatically. It consists of a small function applicable to each post (including one's own); using it notifies all currently available moderators of the post's location, so that subsequent action or judgment can be carried out immediately, which is particularly desirable in large or very active boards. Generally, moderators encourage members to also use the private message system if they wish to report behavior. Moderators will generally frown upon attempts at moderation by non-moderators, especially when the would-be moderators do not even file a report. Messages from non-moderators acting as moderators generally declare a post to be against the rules or predict punishment; while not harmful, such statements attempting to enforce the rules are discouraged.

When rules are broken, several steps are commonly taken. First, a warning is usually given; this is commonly in the form of a private message, but recent development has made it possible to integrate warnings into the software. Subsequently, if the act is ignored and warnings do not work, the member is usually first excluded from the forum for a number of days. Denying someone access to the site is called a ban. Bans can mean the person can no longer log in or even view the site. If the offender repeats the offense after the ban expires, another ban is given, usually a longer one this time. Continuous harassment of the site eventually leads to a permanent ban. In most cases, this means simply that the account is locked. In extreme cases, where the offender creates another account after being permanently banned and continues to harass the site, administrators will apply an IP address ban or block (this can also be applied at the server level): if the IP address is static, the machine of the offender is prevented from accessing the site. In some extreme circumstances, IP address range bans or country bans can be applied, usually for political, licensing, or other reasons (see also Block (Internet), IP address blocking, and Internet censorship).
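An IP ban of the kind just described often amounts to checking each incoming address against a stored list of single addresses and CIDR ranges. A minimal Python sketch, in which the ban list and function name are illustrative rather than taken from any particular forum package:

```python
import ipaddress

# Illustrative ban list: single addresses and CIDR ranges, as a forum
# might store them; a country ban would simply be a larger range.
BANNED_NETWORKS = [
    ipaddress.ip_network("203.0.113.42/32"),   # one static address
    ipaddress.ip_network("198.51.100.0/24"),   # a whole range
]

def is_banned(client_ip: str) -> bool:
    """Return True if the client address falls in any banned network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in BANNED_NETWORKS)

print(is_banned("198.51.100.7"))   # True: inside the banned /24
print(is_banned("192.0.2.1"))      # False
```

Because addresses are often dynamic or shared, real implementations usually pair a check like this with account-level and cookie-level measures.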
Offending content is usually deleted. Sometimes, if the topic is considered the source of the problem, it is locked; often a poster may request that a topic expected to draw problems be locked, although the moderators decide whether to grant the request. In a locked thread, members cannot post anymore. In cases where the topic is considered a breach of rules, it may be deleted along with all of its posts.

Troll

Forum trolls are users who repeatedly and deliberately breach the netiquette of an established online community, posting inflammatory, extraneous, or off-topic messages to bait or excite users into responding or to test the forum's rules and policies, and with that the patience of the forum staff. Their provocative behavior may potentially start flame wars (see below) or other disturbances. Responding to a troll's provocations is commonly known as 'feeding the troll' and is generally discouraged, as it can encourage their disruptive behavior.

Sock puppet

The term sock puppet refers to multiple pseudonyms in use by the same person on a particular message board or forum. The analogy of a sock puppet is of a puppeteer holding up both hands and supplying dialogue to both puppets simultaneously. A typical use of a sockpuppet account is to agree with or debate another sockpuppet account belonging to the same person, for the purpose of reinforcing the puppeteer's position in an argument. Sock puppets are usually discovered when an IP address check is done on the accounts in forums.

Spamming

Forum spamming is a breach of netiquette in which users repeat the same word or phrase over and over; it differs from multiple posting in that spamming is usually a willful act that sometimes has malicious intent. This is a common trolling technique. It can also be traditional spam: unpaid advertisements that are in breach of the forum's rules. Spammers utilize a number of illicit techniques to post their spam, including the use of botnets. Some forums consider concise, comment-oriented posts such as "Thank you", "Cool" or "I love it" to be spam.

Double posting

One common faux pas on Internet forums is to post the same message twice. Users sometimes post versions of a message that are only slightly different, especially in forums where they are not allowed to edit their earlier posts. Multiple posting instead of editing prior posts can artificially inflate a user's post count. Multiple posting can be unintentional: a user's browser might display an error message even though the post has been transmitted, or a user of a slow forum might become impatient and repeatedly hit the submit button; an offline editor may likewise post the same message twice. Multiple posting can also be used as a method of trolling or spreading forum spam. A user may also send the same post to several forums, which is termed crossposting. The term derives from Usenet, where crossposting was an accepted practice; it causes problems in web forums, which lack the ability to link such posts, so replies in one forum are not visible to people reading the post in other forums.
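Forum software commonly guards against the accidental double posts described above by remembering a fingerprint of each user's most recent submission for a short window. A minimal sketch, assuming a simple in-memory store (a real package would use its database or session layer, and the names below are hypothetical):

```python
import hashlib
import time

# Hypothetical in-memory store: user id -> (digest of last post, timestamp).
_last_post: dict[int, tuple[str, float]] = {}
DUPLICATE_WINDOW_SECONDS = 60

def accept_post(user_id: int, body: str) -> bool:
    """Reject a post identical to the user's previous one within the window."""
    digest = hashlib.sha256(body.encode("utf-8")).hexdigest()
    now = time.time()
    previous = _last_post.get(user_id)
    if previous is not None:
        prev_digest, prev_time = previous
        if prev_digest == digest and now - prev_time < DUPLICATE_WINDOW_SECONDS:
            return False  # same text resubmitted too quickly: likely a double post
    _last_post[user_id] = (digest, now)
    return True

print(accept_post(1, "Hello, forum!"))  # True: first submission
print(accept_post(1, "Hello, forum!"))  # False: immediate resubmission
```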
Necroposting

A necropost is a message that revives (as in necromancy) an arbitrarily old thread, causing it to appear above newer and more active threads. This practice is generally seen as a breach of netiquette on most forums. Because old threads are not usually locked from further posting, necroposting is common among newer users and in cases where the date of previous posts is not apparent.

Word censor

A word censoring system is commonly included in the forum software package. The system will pick up words in the body of the post or some other user-editable forum element (like user titles), and if they partially match a certain keyword (commonly with no case sensitivity), they will be censored. The most common censoring is letter replacement with an asterisk character. For example, it may be deemed inappropriate for users to include words such as "admin", "moderator", or "leader" in their user title; if the censoring system is in place, a title such as "forum leader" may be filtered to "forum ******". Rude or vulgar words are common targets for the censoring system. But such auto-censors can make mistakes, for example censoring "wristwatch" to "wris****ch" and "Scunthorpe" to "S****horpe."
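A naive substring filter of this kind is easy to sketch, and the sketch also reproduces the false positives just mentioned (the so-called Scunthorpe problem). The word list and function name below are illustrative only:

```python
import re

# Illustrative blocklist; a real forum's list would be configurable.
BLOCKED_WORDS = ["twat", "cunt", "admin"]

def censor(text: str) -> str:
    """Replace any case-insensitive substring match with asterisks.

    Matching bare substrings (rather than whole words) is what produces
    false positives such as "wris****ch" and "S****horpe".
    """
    for word in BLOCKED_WORDS:
        text = re.sub(re.escape(word), "*" * len(word), text,
                      flags=re.IGNORECASE)
    return text

print(censor("wristwatch"))   # wris****ch
print(censor("Scunthorpe"))   # S****horpe
```

Whole-word matching (with \b anchors) avoids those false positives at the cost of missing deliberate obfuscations, which is why the trade-off is usually left to the administrator.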
Flame wars

When a thread, or in some cases an entire forum, becomes unstable, the result is usually uncontrolled spam in the form of one-line complaints, image macros, or abuse of the report system. When the discussion becomes heated and sides do nothing more than complain and refuse to accept each other's differences in point of view, the discussion degenerates into what is called a flame war. To flame someone means to go off-topic and attack the person rather than their opinion. Likely candidates for flame wars are usually religious and socio-political topics, or topics that concern pre-existing rivalries outside the forum (e.g., rivalries between games, console systems, car manufacturers, nationalities, etc.). When a topic that has degenerated into a flame war is close to the theme of the forum (be it a section or the entire board), spam and flames have a chance of spreading outside the topic and causing trouble, usually in the form of vandalism. Some forums (commonly game forums) have suffered from forum-wide flame wars almost immediately after their conception, because of a pre-existing flame war element in the online community. Many forums have created devoted areas strictly for discussion of potential flame war topics, moderated like normal sections.

Registration or anonymity

Many Internet forums require registration to post. Registered users of the site are referred to as members and are allowed to submit or send electronic messages through the web application. The process of registration involves verification of one's age (typically age 13 and over is required, so as to meet the COPPA requirements of American forum software), followed by a declaration of the terms of service (other documents may also be present) and a request for agreement to said terms. Subsequently, if all goes well, the candidate is presented with a web form to fill in, requesting at the very least a username (an alias), password, email address, and validation of a CAPTCHA code. While simply completing the registration web form is in general enough to generate an account, the status label Inactive is commonly applied by default until the registered user confirms that the email address given during registration indeed belongs to the user. Until that time, the registered user can log in to the new account but may not post, reply, or send private messages in the forum.

Sometimes a referrer system is implemented. A referrer is someone who introduced, or otherwise "helped", another person with the decision to join the site (much as an HTTP referrer is the site that linked one to another site). Usually, referrers are other forum members, and members are usually rewarded for referrals. The referrer system is also sometimes implemented so that, if a visitor visits the forum through a link such as referrerid=300, the user with the ID number 300 receives referral credit if the visitor registers. The purpose is commonly just to give credit (sometimes rewards are implied) to those who help the community grow.

In areas such as Japan, registration is frequently optional and anonymity is sometimes even encouraged. On these forums, a tripcode system may be used to allow verification of an identity without the need for formal registration. People who regularly read the forum discussions but do not register or do not post are often referred to as "lurkers".
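A tripcode scheme of the sort used on such boards derives a short, stable tag from a secret typed after the name separator, so the server never has to store anything about the user. Classic 2channel-style boards use a crypt(3) DES variant; the Python sketch below substitutes an ordinary salted hash purely to illustrate the idea, so its output will not match any real board's trips, and the salt constant is hypothetical:

```python
import hashlib

# Illustrative only: real 2channel-style trips use a crypt(3) DES variant,
# not SHA-256, and a different salting rule.
SITE_SALT = b"example-board-salt"  # hypothetical per-site constant

def parse_name_field(name_field: str) -> str:
    """Turn 'Alice#secret' into 'Alice !a1b2c3d4e5'.

    Whoever knows the secret can reproduce the same trip from any
    computer; the server stores no account data at all.
    """
    if "#" not in name_field:
        return name_field  # fully anonymous post, no trip
    name, _, secret = name_field.partition("#")
    digest = hashlib.sha256(SITE_SALT + secret.encode("utf-8")).hexdigest()
    return f"{name} !{digest[:10]}"

print(parse_name_field("Alice#hunter2"))  # same secret -> same trip every time
print(parse_name_field("Anonymous"))      # no '#', so no trip
```

Because the secret's entropy is compressed into a short trip, short or dictionary passwords can be recovered by brute-force search, which is exactly the weakness tools such as Tripcode Explorer exploit.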
Comparison with other web applications

Electronic mailing lists: The main difference between forums and electronic mailing lists is that mailing lists automatically deliver new messages to the subscriber, while forums require the reader to visit the website and check for new posts. Because members may miss replies in threads they are interested in, many modern forums offer an "e-mail notification" feature, whereby members can choose to be notified of new posts in a thread, and web feeds that allow members to see a summary of the new posts using aggregator software. There are also software products that combine forum and mailing list features, i.e. posting and reading via email as well as the browser, depending on the member's choice.

Newsreader: The main difference between newsgroups and forums is that additional software, a news client, is required to participate in newsgroups, whereas using a forum requires no additional software beyond the web browser.

Shoutboxes: Unlike Internet forums, most shoutboxes do not require registration, only an email address from the user. Additionally, shoutboxes are not heavily moderated, unlike most message boards.

Wiki: Unlike conventional forums, the original wikis allowed all users to edit all content (including each other's messages). This level of content manipulation is reserved for moderators or administrators on most forums. Wikis also allow the creation of other content outside the talk pages. On the other hand, weblogs and generic content management systems tend to be locked down to the point where only a few select users can post blog entries, although many allow other users to comment upon them. The wiki hosting site Wikia has two features in operation, known as the Forum and the Message Wall: the Forum is used solely for discussion and works through editing, while the Message Wall works through posted messages, more like a traditional forum.

Chat rooms and instant messaging: Forums differ from chats and instant messaging in that forum participants do not have to be online simultaneously to receive or send messages. Messages posted to a forum are publicly available for some time even if the forum or thread is closed, which is uncommon in chat rooms that maintain frequent activity.

One rarity among forums is the ability to create a picture album. Forum participants may upload personal pictures onto the site and add descriptions to the pictures. Pictures may be presented in the same format as posting threads, and contain the same options, such as "Report Post" and "Reply to Post".

See also

Comparison of Internet forum software
Godwin's Law
Internet social network
List of Internet forums
Warnock's dilemma

External links

Xenforo Forums
Delphi Forums
Adobe forums posting guidelines
The Purple Martin Forum Posting Policy
Ubuntu Forums: The BUMP Thread
Threadbombing: Bump Images
Why people like to use Internet forums
Matthews family
The Matthews family is a prominent family in American football. One of only five third-generation families to play in the National Football League (NFL), it is often called the "NFL's First Family". Its seven members who have played in the NFL have combined for 25 Pro Bowl invitations, 11 first-team All-Pro selections, and three Super Bowl appearances.

History

The family patriarch, H. L. Matthews, was born in Jeffersonville, Ohio, in 1889. After serving in World War I, he coached boxing, baseball, and track at The Citadel, The Military College of South Carolina, from 1926 to 1953.

H. L. Matthews' son, Clay Matthews Sr., began the family's legacy in football. After playing college football for the Georgia Tech Yellow Jackets, he was selected by the Los Angeles Rams in the 1949 NFL Draft, but never played for the team. He instead joined the San Francisco 49ers in 1950 and played offensive tackle, defensive tackle, and defensive end. His career was interrupted by the Korean War, in which he served as a paratrooper. He rejoined the 49ers in 1953 and played for three more seasons before retiring. Matthews Sr. died on March 24, 2017, aged 88.

Two of Matthews Sr.'s sons played in the NFL: Clay Matthews Jr. and Bruce Matthews. Each played college football for the USC Trojans, and both were selected in the first round of their respective drafts, Clay Jr. in 1978 and Bruce in 1983. Clay Jr. was a linebacker for the Cleveland Browns from 1978 to 1993 and the Atlanta Falcons from 1994 to 1996. With the Browns, he played in four Pro Bowls and in 1984 was a first-team All-Pro selection. Bruce, a highly versatile offensive lineman, played guard, tackle, center, and snapper for the Houston / Tennessee Oilers / Titans franchise from 1983 to 2001. He was invited to a record-tying 14 Pro Bowls and was a nine-time first-team All-Pro selection. Bruce was inducted into the Pro Football Hall of Fame in 2007. Clay Jr. and Bruce are the only brothers to have played on the same Pro Bowl team, doing so for the American Football Conference in both 1988 and 1989.

Three sons of Clay Jr. have played football past the high school level: Kyle, Clay III, and Casey. The oldest, Kyle, was a safety for the USC Trojans. Clay III, a linebacker, also played for the Trojans. Clay III was drafted by the Green Bay Packers in 2009; with the Packers he has earned six Pro Bowl selections and a victory in Super Bowl XLV, appeared in three NFC Championship Games (most recently in the 2016–17 NFL playoffs, where he faced off against his cousin Jake), and become the franchise's all-time sacks leader. Casey Matthews played linebacker for the Oregon Ducks, after which he played in the NFL for the Philadelphia Eagles from 2011 to 2014 and the Minnesota Vikings in 2015.

Bruce has seven children, including five sons, two of whom have played in the NFL: Kevin and Jake. Kevin was a center for the Titans, Washington Redskins, and Carolina Panthers from 2010 to 2014. Jake has been an offensive tackle for the Falcons since being drafted by the team sixth overall in 2014. In the NFC Championship Game of the 2016–17 NFL playoffs, he and the Falcons defeated his cousin Clay III and the Packers to advance to Super Bowl LI. A third brother, Mike, has spent time on off-season rosters of the Cleveland Browns and Pittsburgh Steelers. A fourth brother, Luke, is a sophomore for the Texas A&M Aggies, where each of his brothers played.
Troy Niklas, the nephew of Bruce Matthews by way of Bruce's wife's sister, is a former tight end for the Arizona Cardinals and a current free agent. Professional soccer player Ashley Nick is a granddaughter of Clay Matthews Sr.

Matthews family tree

H. L. "Matty" Matthews (1889–1975)
  Clay Matthews Sr. (1928–2017)
    Clay Matthews Jr. (born 1956)
      Kyle Matthews (born 1982)
      Clay Matthews III (born 1986)
      Casey Matthews (born 1989)
    Bruce Matthews (born 1961)
      Kevin Matthews (born 1987)
      Jake Matthews (born 1992)
      Mike Matthews (born 1994)
      Luke Matthews (born 2000)

In media

In 2017, Clay Jr., Clay III, and Casey were featured in a commercial for PlayStation Vue entitled "Football VUEing Family", which was filmed at Clay Jr.'s home in Southern California. Also in the commercial were Clay Jr.'s wife Leslie and daughter Jennifer.

See also

Poe brothers, a family of American football players in the late 19th century
Nesser brothers, a family of American football players in the early 20th century
Zavrh pri Trojanah
Zavrh pri Trojanah (in older sources also Za Vrhom) is a small dispersed settlement above Trojane in the Municipality of Lukovica in the eastern part of the Upper Carniola region of Slovenia.

Name

The name of the settlement was changed from Zavrh to Zavrh pri Trojanah in 1953. In the past the German name was Sawerch.

External links

Zavrh pri Trojanah on Geopedia
Consciousness
Consciousness, at its simplest, is sentience or awareness of internal and external existence. Despite millennia of analyses, definitions, explanations and debates by philosophers and scientists, consciousness remains puzzling and controversial, being "at once the most familiar and [also the] most mysterious aspect of our lives". Perhaps the only widely agreed notion about the topic is the intuition that consciousness exists. Opinions differ about what exactly needs to be studied and explained as consciousness. Sometimes it is synonymous with the mind, and at other times an aspect of mind. In the past, it was one's "inner life", the world of introspection, of private thought, imagination and volition. Today, it often includes any kind of cognition, experience, feeling or perception. It may be awareness, awareness of awareness, or self-awareness, whether continuously changing or not. There might be different levels or orders of consciousness, or different kinds of consciousness, or just one kind with different features. Other questions include whether only humans are conscious, all animals, or even the whole universe. The disparate range of research, notions and speculations raises doubts about whether the right questions are being asked. Examples of the range of descriptions, definitions or explanations are: simple wakefulness; one's sense of selfhood or soul explored by "looking within"; being a metaphorical "stream" of contents, or being a mental state, mental event or mental process of the brain; having phanera or qualia and subjectivity; being the "something that it is like" to "have" or "be" it; being the "inner theatre" or the executive control system of the mind.

Inter-disciplinary perspectives

Western philosophers since the time of Descartes and Locke have struggled to comprehend the nature of consciousness and how it fits into a larger picture of the world. These issues remain central to both continental and analytic philosophy, in phenomenology and the philosophy of mind, respectively. Some basic questions include: whether consciousness is the same kind of thing as matter; whether it may ever be possible for computing machines like computers or robots to be conscious; how consciousness relates to language; how consciousness as Being relates to the world of experience; the role of the self in experience; whether individual thought is possible at all; and whether the concept is fundamentally coherent.

Recently, consciousness has also become a significant topic of interdisciplinary research in cognitive science, involving fields such as psychology, linguistics, anthropology, neuropsychology and neuroscience. The primary focus is on understanding what it means biologically and psychologically for information to be present in consciousness—that is, on determining the neural and psychological correlates of consciousness. The majority of experimental studies assess consciousness in humans by asking subjects for a verbal report of their experiences (e.g., "tell me if you notice anything when I do this"). Issues of interest include phenomena such as subliminal perception, blindsight, denial of impairment, and altered states of consciousness produced by alcohol and other drugs or by spiritual or meditative techniques.
In medicine, consciousness is assessed by observing a patient's arousal and responsiveness, and can be seen as a continuum of states ranging from full alertness and comprehension, through disorientation, delirium, and loss of meaningful communication, to loss of movement in response to painful stimuli. Issues of practical concern include how the presence of consciousness can be assessed in severely ill, comatose, or anesthetized people, and how to treat conditions in which consciousness is impaired or disrupted. The degree of consciousness is measured by standardized behavior observation scales such as the Glasgow Coma Scale.

Etymology

In the late 20th century, philosophers like Hamlyn, Rorty, and Wilkes disagreed with Kahn, Hardie and Modrak as to whether Aristotle even had a concept of consciousness. Aristotle does not use any single word or terminology to name the phenomenon; such terminology appears only much later, especially in the work of John Locke. Caston contends that for Aristotle, perceptual awareness was somewhat the same as what modern philosophers call consciousness.

The origin of the modern concept of consciousness is often attributed to Locke's Essay Concerning Human Understanding, published in 1690. Locke defined consciousness as "the perception of what passes in a man's own mind". His essay influenced the 18th-century view of consciousness, and his definition appeared in Samuel Johnson's celebrated Dictionary (1755). "Consciousness" (French: conscience) is also defined in the 1753 volume of Diderot and d'Alembert's Encyclopédie as "the opinion or internal feeling that we ourselves have from what we do".

The earliest English language uses of "conscious" and "consciousness" date back, however, to the 1500s. The English word "conscious" originally derived from the Latin conscius (con- "together" and scio "to know"), but the Latin word did not have the same meaning as the English word—it meant "knowing with", in other words, "having joint or common knowledge with another". There were, however, many occurrences in Latin writings of the phrase conscius sibi, which translates literally as "knowing with oneself", or in other words "sharing knowledge with oneself about something". This phrase had the figurative meaning of "knowing that one knows", as the modern English word "conscious" does. In its earliest uses in the 1500s, the English word "conscious" retained the meaning of the Latin conscius. For example, Thomas Hobbes in Leviathan wrote: "Where two, or more men, know of one and the same fact, they are said to be Conscious of it one to another." The Latin phrase conscius sibi, whose meaning was more closely related to the current concept of consciousness, was rendered in English as "conscious to oneself" or "conscious unto oneself". For example, Archbishop Ussher wrote in 1613 of "being so conscious unto myself of my great weakness". Locke's definition from 1690 illustrates that a gradual shift in meaning had taken place.

A related word was conscientia, which primarily means moral conscience. In the literal sense, "conscientia" means knowledge-with, that is, shared knowledge. The word first appears in Latin juridical texts by writers such as Cicero. Here, conscientia is the knowledge that a witness has of the deed of someone else. René Descartes (1596–1650) is generally taken to be the first philosopher to use conscientia in a way that does not fit this traditional meaning. Descartes used conscientia the way modern speakers would use "conscience".
In Search after Truth (Amsterdam, 1701) he says "conscience or internal testimony" (conscientiâ, vel interno testimonio).

The problem of definition

The dictionary definitions of the word consciousness extend through several centuries and reflect a range of seemingly related meanings, with some differences that have been controversial, such as the distinction between 'inward awareness' and 'perception' of the physical world, or the distinction between 'conscious' and 'unconscious', or the notion of a "mental entity" or "mental activity" that is not physical.

The common usage definitions of consciousness in Webster's Third New International Dictionary (1966 edition, Volume 1, page 482) are as follows:

awareness or perception of an inward psychological or spiritual fact; intuitively perceived knowledge of something in one's inner self
inward awareness of an external object, state, or fact
concerned awareness; INTEREST, CONCERN—often used with an attributive noun [e.g. class consciousness]
the state or activity that is characterized by sensation, emotion, volition, or thought; mind in the broadest possible sense; something in nature that is distinguished from the physical
the totality in psychology of sensations, perceptions, ideas, attitudes, and feelings of which an individual or a group is aware at any given time or within a particular time span—compare STREAM OF CONSCIOUSNESS
waking life (as that to which one returns after sleep, trance, or fever) wherein all one's mental powers have returned
the part of mental life or psychic content in psychoanalysis that is immediately available to the ego—compare PRECONSCIOUS, UNCONSCIOUS

The Cambridge Dictionary defines consciousness as "the state of understanding and realizing something." The Oxford Living Dictionary defines consciousness as "the state of being aware of and responsive to one's surroundings", "a person's awareness or perception of something", and "the fact of awareness by the mind of itself and the world."

Philosophers have attempted to clarify technical distinctions by using a jargon of their own; the Routledge Encyclopedia of Philosophy (1998), for example, offers a technical definition of its own. Many philosophers and scientists have been unhappy about the difficulty of producing a definition that does not involve circularity or fuzziness. In The Macmillan Dictionary of Psychology (1989 edition), Stuart Sutherland expressed a skeptical attitude more than a definition, writing that consciousness is impossible to define except in terms unintelligible without a grasp of what it means, and that nothing worth reading has been written on it.

A partisan definition such as Sutherland's can hugely affect researchers' assumptions and the direction of their work. Many philosophers have argued that consciousness is a unitary concept that is understood intuitively by the majority of people in spite of the difficulty in defining it. Others, though, have argued that the level of disagreement about the meaning of the word indicates that it either means different things to different people (for instance, the objective versus subjective aspects of consciousness), or else it encompasses a variety of distinct meanings with no simple element in common.

Philosophy of mind

Most writers on the philosophy of consciousness have been concerned with defending a particular point of view, and have organized their material accordingly. For surveys, the most common approach is to follow a historical path by associating stances with the philosophers who are most strongly associated with them, for example Descartes, Locke, Kant, etc. An alternative is to organize philosophical stances according to basic issues.
The coherence of the concept

Philosophers differ from non-philosophers in their intuitions about what consciousness is. While most people have a strong intuition for the existence of what they refer to as consciousness, skeptics argue that this intuition is false, either because the concept of consciousness is intrinsically incoherent, or because our intuitions about it are based in illusions. Gilbert Ryle, for example, argued that the traditional understanding of consciousness depends on a Cartesian dualist outlook that improperly distinguishes between mind and body, or between mind and world. He proposed that we speak not of minds, bodies, and the world, but of individuals, or persons, acting in the world. Thus, by speaking of "consciousness" we end up misleading ourselves by thinking that there is any sort of thing as consciousness separated from behavioral and linguistic understandings.

Types of consciousness

Ned Block argued that discussions on consciousness often failed to properly distinguish phenomenal consciousness (P-consciousness) from access consciousness (A-consciousness), though these terms had been used before Block. P-consciousness, according to Block, is simply raw experience: it is moving, colored forms, sounds, sensations, emotions and feelings, with our bodies and responses at the center. These experiences, considered independently of any impact on behavior, are called qualia. A-consciousness, on the other hand, is the phenomenon whereby information in our minds is accessible for verbal report, reasoning, and the control of behavior. So, when we perceive, information about what we perceive is access conscious; when we introspect, information about our thoughts is access conscious; when we remember, information about the past is access conscious; and so on. Although some philosophers, such as Daniel Dennett, have disputed the validity of this distinction, others have broadly accepted it. David Chalmers has argued that A-consciousness can in principle be understood in mechanistic terms, but that understanding P-consciousness is much more challenging: he calls this the hard problem of consciousness. Kong Derick has also stated that there are two types of consciousness: high-level consciousness, which he attributes to the mind, and low-level consciousness, which he attributes to the submind.

Some philosophers believe that Block's two types of consciousness are not the end of the story. William Lycan, for example, argued in his book Consciousness and Experience that at least eight clearly distinct types of consciousness can be identified (organism consciousness; control consciousness; consciousness of; state/event consciousness; reportability; introspective consciousness; subjective consciousness; self-consciousness)—and that even this list omits several more obscure forms.

There is also debate over whether A-consciousness and P-consciousness always coexist or whether they can exist separately. Although P-consciousness without A-consciousness is more widely accepted, there have been some hypothetical examples of A without P. Block, for instance, suggests the case of a "zombie" that is computationally identical to a person but without any subjectivity. However, he remains somewhat skeptical, concluding: "I don't know whether there are any actual cases of A-consciousness without P-consciousness, but I hope I have illustrated their conceptual possibility."

Consciousness in children

Of the eight types of consciousness in the Lycan classification, some are detectable in utero and others develop years after birth.
Psychologist and educator David Foulkes studied children's dreams and concluded that prior to the shift in cognitive maturation that humans experience during ages five to seven, children lack the Lockean consciousness that Lycan had labeled "introspective consciousness" and that Foulkes labels "self-reflection." In a 2020 paper, Katherine Nelson and Robyn Fivush use "autobiographical consciousness" to label essentially the same faculty, and agree with Foulkes on the timing of this faculty's acquisition. Nelson and Fivush contend that "language is the tool by which humans create a new, uniquely human form of consciousness, namely, autobiographical consciousness." Julian Jaynes had staked out these positions decades earlier. Citing the developmental steps that lead the infant to autobiographical consciousness, Nelson and Fivush point to the acquisition of "theory of mind," calling theory of mind "necessary for autobiographical consciousness" and defining it as "understanding differences between one's own mind and others' minds in terms of beliefs, desires, emotions and thoughts." They write, "The hallmark of theory of mind, the understanding of false belief, occurs ... at five to six years of age."

Mind–body problem

Mental processes (such as consciousness) and physical processes (such as brain events) seem to be correlated; however, the specific nature of the connection is unknown. The first influential philosopher to discuss this question specifically was Descartes, and the answer he gave is known as Cartesian dualism. Descartes proposed that consciousness resides within an immaterial domain he called res cogitans (the realm of thought), in contrast to the domain of material things, which he called res extensa (the realm of extension). He suggested that the interaction between these two domains occurs inside the brain, perhaps in a small midline structure called the pineal gland.

Although it is widely accepted that Descartes explained the problem cogently, few later philosophers have been happy with his solution, and his ideas about the pineal gland have especially been ridiculed. However, no alternative solution has gained general acceptance. Proposed solutions can be divided broadly into two categories: dualist solutions that maintain Descartes' rigid distinction between the realm of consciousness and the realm of matter but give different answers for how the two realms relate to each other; and monist solutions that maintain that there is really only one realm of being, of which consciousness and matter are both aspects. Each of these categories itself contains numerous variants. The two main types of dualism are substance dualism (which holds that the mind is formed of a distinct type of substance not governed by the laws of physics) and property dualism (which holds that the laws of physics are universally valid but cannot be used to explain the mind). The three main types of monism are physicalism (which holds that the mind consists of matter organized in a particular way), idealism (which holds that only thought or experience truly exists, and matter is merely an illusion), and neutral monism (which holds that both mind and matter are aspects of a distinct essence that is itself identical to neither of them). There are also, however, a large number of idiosyncratic theories that cannot cleanly be assigned to any of these schools of thought.
Since the dawn of Newtonian science, with its vision of simple mechanical principles governing the entire universe, some philosophers have been tempted by the idea that consciousness could be explained in purely physical terms. The first influential writer to propose such an idea explicitly was Julien Offray de La Mettrie, in his book Man a Machine (L'homme machine); his arguments, however, were very abstract. The most influential modern physical theories of consciousness are based on psychology and neuroscience. Theories proposed by neuroscientists such as Gerald Edelman and Antonio Damasio, and by philosophers such as Daniel Dennett, seek to explain consciousness in terms of neural events occurring within the brain. Many other neuroscientists, such as Christof Koch, have explored the neural basis of consciousness without attempting to frame all-encompassing global theories. At the same time, computer scientists working in the field of artificial intelligence have pursued the goal of creating digital computer programs that can simulate or embody consciousness.

A few theoretical physicists have argued that classical physics is intrinsically incapable of explaining the holistic aspects of consciousness, but that quantum theory may provide the missing ingredients. Several theorists have therefore proposed quantum mind (QM) theories of consciousness. Notable theories falling into this category include the holonomic brain theory of Karl Pribram and David Bohm, and the Orch-OR theory formulated by Stuart Hameroff and Roger Penrose. Some of these QM theories offer descriptions of phenomenal consciousness, as well as QM interpretations of access consciousness. None of the quantum mechanical theories have been confirmed by experiment. Recent publications by G. Guerreshi, J. Cai, S. Popescu, and H. Briegel could falsify proposals such as those of Hameroff, which rely on quantum entanglement in proteins. At the present time, many scientists and philosophers consider the arguments for an important role of quantum phenomena to be unconvincing.

Apart from the general question of the "hard problem" of consciousness (which is, roughly speaking, the question of how mental experience can arise from a physical basis), a more specialized question is how to square the subjective notion that we are in control of our decisions (at least in some small measure) with the customary view of causality that subsequent events are caused by prior events. The topic of free will is the philosophical and scientific examination of this conundrum.

Problem of other minds

Many philosophers consider experience to be the essence of consciousness, and believe that experience can only fully be known from the inside, subjectively. But if consciousness is subjective and not visible from the outside, why do the vast majority of people believe that other people are conscious, but rocks and trees are not? This is called the problem of other minds. It is particularly acute for people who believe in the possibility of philosophical zombies, that is, people who think it is possible in principle to have an entity that is physically indistinguishable from a human being and behaves like a human being in every way but nevertheless lacks consciousness. Related issues have also been studied extensively by Greg Littmann of the University of Illinois, and by Colin Allen (a professor at Indiana University), regarding the literature and research studying artificial intelligence in androids.
The most commonly given answer is that we attribute consciousness to other people because we see that they resemble us in appearance and behavior; we reason that if they look like us and act like us, they must be like us in other ways, including having experiences of the sort that we do. There are, however, a variety of problems with that explanation. For one thing, it seems to violate the principle of parsimony, by postulating an invisible entity that is not necessary to explain what we observe. Some philosophers, such as Daniel Dennett in an essay titled "The Unimagined Preposterousness of Zombies", argue that people who give this explanation do not really understand what they are saying. More broadly, philosophers who do not accept the possibility of zombies generally believe that consciousness is reflected in behavior (including verbal behavior), and that we attribute consciousness on the basis of behavior. A more straightforward way of saying this is that we attribute experiences to people because of what they can do, including the fact that they can tell us about their experiences.

Animal consciousness

The topic of animal consciousness is beset by a number of difficulties. It poses the problem of other minds in an especially severe form, because non-human animals, lacking the ability to express human language, cannot tell humans about their experiences. Also, it is difficult to reason objectively about the question, because a denial that an animal is conscious is often taken to imply that it does not feel, that its life has no value, and that harming it is not morally wrong. Descartes, for example, has sometimes been blamed for mistreatment of animals because he believed that only humans have a non-physical mind. Most people have a strong intuition that some animals, such as cats and dogs, are conscious, while others, such as insects, are not; but the sources of this intuition are not obvious, and are often based on personal interactions with pets and other animals they have observed. Philosophers who consider subjective experience the essence of consciousness also generally believe, as a correlate, that the existence and nature of animal consciousness can never rigorously be known. Thomas Nagel spelled out this point of view in an influential essay titled "What Is It Like to Be a Bat?". He said that an organism is conscious "if and only if there is something that it is like to be that organism—something it is like for the organism"; and he argued that no matter how much we know about an animal's brain and behavior, we can never really put ourselves into the mind of the animal and experience its world in the way it does itself. Other thinkers, such as Douglas Hofstadter, dismiss this argument as incoherent. Several psychologists and ethologists have argued for the existence of animal consciousness by describing a range of behaviors that appear to show animals holding beliefs about things they cannot directly perceive—Donald Griffin's 2001 book Animal Minds reviews a substantial portion of the evidence. On July 7, 2012, eminent scientists from different branches of neuroscience gathered at the University of Cambridge for the Francis Crick Memorial Conference, which dealt with consciousness in humans and pre-linguistic consciousness in nonhuman animals.
After the conference, they signed, in the presence of Stephen Hawking, the "Cambridge Declaration on Consciousness", which summarizes the conference's most important findings: "We decided to reach a consensus and make a statement directed to the public that is not scientific. It's obvious to everyone in this room that animals have consciousness, but it is not obvious to the rest of the world. It is not obvious to the rest of the Western world or the Far East. It is not obvious to the society." "Convergent evidence indicates that non-human animals ..., including all mammals and birds, and other creatures, ... have the necessary neural substrates of consciousness and the capacity to exhibit intentional behaviors."

Artifact consciousness

The idea of an artifact made conscious is an ancient theme of mythology, appearing for example in the Greek myth of Pygmalion, who carved a statue that was magically brought to life, and in medieval Jewish stories of the Golem, a magically animated homunculus built of clay. However, the possibility of actually constructing a conscious machine was probably first discussed by Ada Lovelace, in a set of notes written in 1842 about the Analytical Engine invented by Charles Babbage, a precursor (never built) to modern electronic computers. Lovelace was essentially dismissive of the idea that a machine such as the Analytical Engine could think in a humanlike way. She wrote: "The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform."

One of the most influential contributions to this question was an essay written in 1950 by pioneering computer scientist Alan Turing, titled "Computing Machinery and Intelligence". Turing disavowed any interest in terminology, saying that even "Can machines think?" is too loaded with spurious connotations to be meaningful; but he proposed to replace all such questions with a specific operational test, which has become known as the Turing test. To pass the test, a computer must be able to imitate a human well enough to fool interrogators. In his essay Turing discussed a variety of possible objections, and presented a counterargument to each of them. The Turing test is commonly cited in discussions of artificial intelligence as a proposed criterion for machine consciousness; it has provoked a great deal of philosophical debate. For example, Daniel Dennett and Douglas Hofstadter argue that anything capable of passing the Turing test is necessarily conscious, while David Chalmers argues that a philosophical zombie could pass the test, yet fail to be conscious. A third group of scholars has argued that, with technological growth, once machines begin to display any substantial signs of human-like behavior, the dichotomy (of human consciousness compared to human-like consciousness) becomes passé and issues of machine autonomy begin to prevail, as observed in nascent form within contemporary industry and technology. Jürgen Schmidhuber argues that consciousness is simply the result of compression: as an agent sees representations of itself recurring in the environment, the compression of those representations can be called consciousness.

In a lively exchange over what has come to be referred to as "the Chinese room argument", John Searle sought to refute the claim of proponents of what he calls "strong artificial intelligence (AI)" that a computer program can be conscious, though he does agree with advocates of "weak AI" that computer programs can be formatted to "simulate" conscious states.
His own view is that consciousness has subjective, first-person causal powers by being essentially intentional, due simply to the way human brains function biologically; conscious persons can perform computations, but consciousness is not inherently computational the way computer programs are. To make a Turing machine that speaks Chinese, Searle imagines a room with one monolingual English speaker (Searle himself, in fact), a book that designates a combination of Chinese symbols to be output paired with Chinese symbol input, and boxes filled with Chinese symbols. In this case, the English speaker is acting as a computer and the rulebook as a program. Searle argues that with such a machine, he would be able to process the inputs to outputs perfectly without having any understanding of Chinese, nor having any idea what the questions and answers could possibly mean. If the experiment were done in English, since Searle knows English, he would be able to take questions and give answers without any algorithms for English questions, and he would be effectively aware of what was being said and the purposes it might serve. Searle would pass the Turing test of answering the questions in both languages, but he would be conscious of what he is doing only when speaking English. Another way of putting the argument is to say that computer programs can pass the Turing test for processing the syntax of a language, but that syntax cannot lead to semantic meaning in the way strong AI advocates hoped.

In the literature concerning artificial intelligence, Searle's essay has been second only to Turing's in the volume of debate it has generated. Searle himself was vague about what extra ingredients it would take to make a machine conscious: all he proposed was that what was needed was "causal powers" of the sort that the brain has and that computers lack. But other thinkers sympathetic to his basic argument have suggested that the necessary (though perhaps still not sufficient) extra conditions may include the ability to pass not just the verbal version of the Turing test but the robotic version, which requires grounding the robot's words in the robot's sensorimotor capacity to categorize and interact with the things in the world that its words are about, Turing-indistinguishably from a real person. Turing-scale robotics is an empirical branch of research on embodied cognition and situated cognition.

In 2014, Victor Argonov suggested a non-Turing test for machine consciousness based on a machine's ability to produce philosophical judgments. He argues that a deterministic machine must be regarded as conscious if it is able to produce judgments on all problematic properties of consciousness (such as qualia or binding) while having no innate (preloaded) philosophical knowledge on these issues, no philosophical discussions while learning, and no informational models of other creatures in its memory (such models may implicitly or explicitly contain knowledge about these creatures' consciousness). However, this test can be used only to detect, not to refute, the existence of consciousness. A positive result proves that the machine is conscious, but a negative result proves nothing. For example, an absence of philosophical judgments may be caused by a lack of the machine's intellect, not by an absence of consciousness.
Scientific study

For many decades, consciousness as a research topic was avoided by the majority of mainstream scientists, because of a general feeling that a phenomenon defined in subjective terms could not properly be studied using objective experimental methods. In 1975 George Mandler published an influential psychological study which distinguished between slow, serial, and limited conscious processes and fast, parallel, and extensive unconscious ones. The Science and Religion Forum's 1984 annual conference, 'From Artificial Intelligence to Human Consciousness', identified the nature of consciousness as a matter for investigation; Donald Michie was a keynote speaker. Starting in the 1980s, an expanding community of neuroscientists and psychologists have associated themselves with a field called Consciousness Studies, giving rise to a stream of experimental work published in books, in journals such as Consciousness and Cognition, Frontiers in Consciousness Research, Psyche, and the Journal of Consciousness Studies, and at regular conferences organized by groups such as the Association for the Scientific Study of Consciousness and the Society for Consciousness Studies.

Modern medical and psychological investigations into consciousness are based on psychological experiments (including, for example, the investigation of priming effects using subliminal stimuli) and on case studies of alterations in consciousness produced by trauma, illness, or drugs. Broadly viewed, scientific approaches are based on two core concepts. The first identifies the content of consciousness with the experiences that are reported by human subjects; the second makes use of the concept of consciousness that has been developed by neurologists and other medical professionals who deal with patients whose behavior is impaired. In either case, the ultimate goals are to develop techniques for assessing consciousness objectively in humans as well as other animals, and to understand the neural and psychological mechanisms that underlie it.

Measurement

Experimental research on consciousness presents special difficulties, due to the lack of a universally accepted operational definition. In the majority of experiments that are specifically about consciousness, the subjects are human, and the criterion used is verbal report: in other words, subjects are asked to describe their experiences, and their descriptions are treated as observations of the contents of consciousness. For example, subjects who stare continuously at a Necker cube usually report that they experience it "flipping" between two 3D configurations, even though the stimulus itself remains the same. The objective is to understand the relationship between the conscious awareness of stimuli (as indicated by verbal report) and the effects the stimuli have on brain activity and behavior. In several paradigms, such as the technique of response priming, the behavior of subjects is clearly influenced by stimuli for which they report no awareness, and suitable experimental manipulations can lead to increasing priming effects despite decreasing prime identification (double dissociation). Verbal report is widely considered to be the most reliable indicator of consciousness, but it raises a number of issues.
For one thing, if verbal reports are treated as observations, akin to observations in other branches of science, then the possibility arises that they may contain errors—but it is difficult to make sense of the idea that subjects could be wrong about their own experiences, and even more difficult to see how such an error could be detected. Daniel Dennett has argued for an approach he calls heterophenomenology, which means treating verbal reports as stories that may or may not be true, but his ideas about how to do this have not been widely adopted. Another issue with verbal report as a criterion is that it restricts the field of study to humans who have language: this approach cannot be used to study consciousness in other species, pre-linguistic children, or people with types of brain damage that impair language. As a third issue, philosophers who dispute the validity of the Turing test may feel that it is possible, at least in principle, for verbal report to be dissociated from consciousness entirely: a philosophical zombie may give detailed verbal reports of awareness in the absence of any genuine awareness.

Although verbal report is in practice the "gold standard" for ascribing consciousness, it is not the only possible criterion. In medicine, consciousness is assessed as a combination of verbal behavior, arousal, brain activity, and purposeful movement. The last three of these can be used as indicators of consciousness when verbal behavior is absent. The scientific literature regarding the neural bases of arousal and purposeful movement is very extensive. Their reliability as indicators of consciousness is disputed, however, due to numerous studies showing that alert human subjects can be induced to behave purposefully in a variety of ways in spite of reporting a complete lack of awareness. Studies of the neuroscience of free will have also shown that the experiences that people report when they behave purposefully sometimes do not correspond to their actual behaviors or to the patterns of electrical activity recorded from their brains.

Another approach applies specifically to the study of self-awareness, that is, the ability to distinguish oneself from others. In the 1970s Gordon Gallup developed an operational test for self-awareness, known as the mirror test. The test examines whether animals are able to differentiate between seeing themselves in a mirror versus seeing other animals. The classic example involves placing a spot of coloring on the skin or fur near the individual's forehead and seeing if they attempt to remove it or at least touch the spot, thus indicating that they recognize that the individual they are seeing in the mirror is themselves. Humans (older than 18 months) and other great apes, bottlenose dolphins, killer whales, pigeons, European magpies, and elephants have all been observed to pass this test.

Neural correlates

A major part of the scientific literature on consciousness consists of studies that examine the relationship between the experiences reported by subjects and the activity that simultaneously takes place in their brains—that is, studies of the neural correlates of consciousness. The hope is to find activity in a particular part of the brain, or a particular pattern of global brain activity, that is strongly predictive of conscious awareness. Several brain imaging techniques, such as EEG and fMRI, have been used for physical measures of brain activity in these studies.
Another idea that has drawn attention for several decades is that consciousness is associated with high-frequency (gamma band) oscillations in brain activity. This idea arose from proposals in the 1980s, by Christof von der Malsburg and Wolf Singer, that gamma oscillations could solve the so-called binding problem, by linking information represented in different parts of the brain into a unified experience. Rodolfo Llinás, for example, proposed that consciousness results from recurrent thalamo-cortical resonance, in which the specific thalamocortical systems (content) and the non-specific (centromedial thalamus) thalamocortical systems (context) interact in the gamma band frequency via synchronous oscillations.

A number of studies have shown that activity in primary sensory areas of the brain is not sufficient to produce consciousness: it is possible for subjects to report a lack of awareness even when areas such as the primary visual cortex (V1) show clear electrical responses to a stimulus. Higher brain areas are seen as more promising, especially the prefrontal cortex, which is involved in a range of higher cognitive functions collectively known as executive functions. There is substantial evidence that a "top-down" flow of neural activity (i.e., activity propagating from the frontal cortex to sensory areas) is more predictive of conscious awareness than a "bottom-up" flow of activity. The prefrontal cortex is not the only candidate area, however: studies by Nikos Logothetis and his colleagues have shown, for example, that visually responsive neurons in parts of the temporal lobe reflect what is perceived when conflicting visual images are presented to different eyes (i.e., bistable percepts during binocular rivalry). Furthermore, top-down feedback from higher to lower visual brain areas may be weaker or absent in the peripheral visual field, as suggested by some experimental data and theoretical arguments; nevertheless humans can perceive visual inputs in the peripheral visual field arising from bottom-up V1 neural activities. Meanwhile, bottom-up V1 activities for the central visual field can be vetoed, and thus made invisible to perception, by top-down feedback, when these bottom-up signals are inconsistent with the brain's internal model of the visual world.

Modulation of neural responses may correlate with phenomenal experiences. In contrast to the raw electrical responses that do not correlate with consciousness, the modulation of these responses by other stimuli correlates surprisingly well with an important aspect of consciousness: namely with the phenomenal experience of stimulus intensity (brightness, contrast). In the research group of Danko Nikolić it has been shown that some of the changes in subjectively perceived brightness correlated with the modulation of firing rates while others correlated with the modulation of neural synchrony. An fMRI investigation suggested that these findings were strictly limited to the primary visual areas. This indicates that, in the primary visual areas, changes in firing rates and synchrony can be considered as neural correlates of qualia—at least for some types of qualia.

In 2011, Graziano and Kastner proposed the "attention schema" theory of awareness. In that theory, specific cortical areas, notably in the superior temporal sulcus and the temporo-parietal junction, are used to build the construct of awareness and attribute it to other people. The same cortical machinery is also used to attribute awareness to oneself.
Damage to these cortical regions can lead to deficits in consciousness such as hemispatial neglect. In the attention schema theory, the value of explaining the feature of awareness and attributing it to a person is to gain a useful predictive model of that person's attentional processing. Attention is a style of information processing in which a brain focuses its resources on a limited set of interrelated signals. Awareness, in this theory, is a useful, simplified schema that represents attentional states. To be aware of X is explained by constructing a model of one's attentional focus on X.

In 2013, the perturbational complexity index (PCI) was proposed, a measure of the algorithmic complexity of the electrophysiological response of the cortex to transcranial magnetic stimulation. This measure was shown to be higher in individuals who are awake, in REM sleep, or in a locked-in state than in those who are in deep sleep or in a vegetative state, making it potentially useful as a quantitative assessment of consciousness states. (A simplified sketch of such a complexity measure appears at the end of this section.)

Assuming that not only humans but even some non-mammalian species are conscious, a number of evolutionary approaches to the problem of neural correlates of consciousness open up. For example, assuming that birds are conscious—a common assumption among neuroscientists and ethologists due to the extensive cognitive repertoire of birds—there are comparative neuroanatomical ways to validate some of the principal, currently competing, mammalian consciousness–brain theories. The rationale for such a comparative study is that the avian brain deviates structurally from the mammalian brain, raising the questions of how similar the two are and what homologues can be identified. The general conclusion from the study by Butler et al. is that some of the major theories for the mammalian brain also appear to be valid for the avian brain. The structures assumed to be critical for consciousness in mammalian brains have homologous counterparts in avian brains. Thus the main portions of the theories of Crick and Koch, Edelman and Tononi, and Cotterill seem to be compatible with the assumption that birds are conscious. Edelman also differentiates between what he calls primary consciousness (which is a trait shared by humans and non-human animals) and higher-order consciousness as it appears in humans alone along with human language capacity. Certain aspects of the three theories, however, seem less easy to apply to the hypothesis of avian consciousness. For instance, the suggestion by Crick and Koch that layer 5 neurons of the mammalian brain have a special role seems difficult to apply to the avian brain, since the avian homologues have a different morphology. Likewise, the theory of Eccles seems incompatible, since a structural homologue/analogue to the dendron has not been found in avian brains. The assumption of an avian consciousness also brings the reptilian brain into focus. The reason is the structural continuity between avian and reptilian brains, meaning that the phylogenetic origin of consciousness may be earlier than suggested by many leading neuroscientists.

Joaquin Fuster of UCLA has argued that the prefrontal cortex, along with Wernicke's and Broca's areas, is of particular importance to the development of the human language capacities that are neuro-anatomically necessary for the emergence of higher-order consciousness in humans.
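The published PCI procedure compresses a binarized spatiotemporal map of the cortical response to the magnetic pulse. The following minimal Python sketch is meant only to convey the flavor of such a complexity measure; the thresholding rule, the phrase-counting scheme, and the shuffle-based normalization are simplifications assumed here for brevity, not the published algorithm.

```python
import numpy as np

def phrase_complexity(bits: str) -> int:
    """Count distinct phrases in a left-to-right parsing of a bit string,
    a simple Lempel-Ziv-style measure: regular sequences parse into few
    phrases, irregular sequences into many."""
    phrases, start = set(), 0
    for end in range(1, len(bits) + 1):
        if bits[start:end] not in phrases:
            phrases.add(bits[start:end])
            start = end
    return len(phrases)

def pci_like_score(evoked: np.ndarray, threshold: float = 1.0) -> float:
    """Binarize a channels x time evoked-response matrix, then normalize
    its phrase count by that of a shuffled control with the same number
    of active (1) samples."""
    bits = "".join("1" if abs(v) > threshold else "0" for v in evoked.ravel())
    shuffled = "".join(np.random.default_rng(0).permutation(list(bits)))
    return phrase_complexity(bits) / phrase_complexity(shuffled)

rng = np.random.default_rng(1)
# A stereotyped response (every channel repeating one template) compresses
# well and scores low; an irregular, differentiated response scores higher,
# mirroring the unconscious-versus-awake contrast that the PCI exploits.
stereotyped = np.tile(rng.normal(size=(1, 200)), (32, 1))
differentiated = rng.normal(size=(32, 200))
print(pci_like_score(stereotyped), pci_like_score(differentiated))
```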
Biological function and evolution
Opinions are divided as to where in biological evolution consciousness emerged and about whether or not consciousness has any survival value. Some argue that consciousness is a byproduct of evolution. It has been argued that consciousness emerged (i) exclusively with the first humans, (ii) exclusively with the first mammals, (iii) independently in mammals and birds, or (iv) with the first reptiles. Other authors date the origins of consciousness to the first animals with nervous systems or to the early vertebrates of the Cambrian, over 500 million years ago. Donald Griffin suggests in his book Animal Minds a gradual evolution of consciousness. Each of these scenarios raises the question of the possible survival value of consciousness.

In an essay titled On the Hypothesis that Animals are Automata, and its History, Thomas Henry Huxley defends an epiphenomenalist theory of consciousness, according to which consciousness is a causally inert effect of neural activity—"as the steam-whistle which accompanies the work of a locomotive engine is without influence upon its machinery". William James objects to this in his essay Are We Automata? with an evolutionary argument for mind-brain interaction: if the preservation and development of consciousness in biological evolution is a result of natural selection, it is plausible that consciousness has not only been influenced by neural processes but has had a survival value itself; and it could only have had this if it had been efficacious. Karl Popper develops a similar evolutionary argument in the book The Self and Its Brain.

Regarding the primary function of conscious processing, a recurring idea in recent theories is that phenomenal states somehow integrate neural activities and information-processing that would otherwise be independent. This has been called the integration consensus. One example is Gerald Edelman's dynamic core hypothesis, which puts emphasis on reentrant connections that reciprocally link areas of the brain in a massively parallel manner. Edelman also stresses the importance of the evolutionary emergence of higher-order consciousness in humans from the historically older trait of primary consciousness which humans share with non-human animals (see the Neural correlates section above). These theories of integrative function present solutions to two classic problems associated with consciousness: differentiation and unity. They show how our conscious experience can discriminate between a virtually unlimited number of different possible scenes and details (differentiation) because it integrates those details from our sensory systems, while the integrative nature of consciousness in this view easily explains how our experience can seem unified as one whole despite all of these individual parts. However, it remains unspecified which kinds of information are integrated in a conscious manner and which kinds can be integrated without consciousness. Nor is it explained what specific causal role conscious integration plays, nor why the same functionality cannot be achieved without consciousness. Obviously not all kinds of information are capable of being disseminated consciously (e.g., neural activity related to vegetative functions, reflexes, unconscious motor programs, low-level perceptual analyses, etc.)
and many kinds of information can be disseminated and combined with other kinds without consciousness, as in intersensory interactions such as the ventriloquism effect. Hence it remains unclear why any of it is conscious. For a review of the differences between conscious and unconscious integrations, see the article by E. Morsella.

As noted earlier, even among writers who consider consciousness to be a well-defined thing, there is widespread dispute about which animals other than humans can be said to possess it. Edelman has described this distinction as that of humans possessing higher-order consciousness while sharing the trait of primary consciousness with non-human animals (see previous paragraph). Thus, any examination of the evolution of consciousness is faced with great difficulties. Nevertheless, some writers have argued that consciousness can be viewed from the standpoint of evolutionary biology as an adaptation in the sense of a trait that increases fitness. In his article "Evolution of consciousness", John Eccles argued that special anatomical and physical properties of the mammalian cerebral cortex gave rise to consciousness ("[a] psychon ... linked to [a] dendron through quantum physics"). Bernard Baars proposed that once in place, this "recursive" circuitry may have provided a basis for the subsequent development of many of the functions that consciousness facilitates in higher organisms. Peter Carruthers has put forth one such potential adaptive advantage gained by conscious creatures by suggesting that consciousness allows an individual to make distinctions between appearance and reality. This ability would enable a creature to recognize the likelihood that their perceptions are deceiving them (e.g. that water in the distance may be a mirage) and behave accordingly, and it could also facilitate the manipulation of others by recognizing how things appear to them, for both cooperative and devious ends.

Other philosophers, however, have suggested that consciousness would not be necessary for any functional advantage in evolutionary processes. No one has given a causal explanation, they argue, of why it would not be possible for a functionally equivalent non-conscious organism (i.e., a philosophical zombie) to achieve the very same survival advantages as a conscious organism. If evolutionary processes are blind to the difference between function F being performed by conscious organism O and non-conscious organism O*, it is unclear what adaptive advantage consciousness could provide. As a result, an exaptive explanation of consciousness has gained favor with some theorists, who posit that consciousness did not evolve as an adaptation but was an exaptation arising as a consequence of other developments such as increases in brain size or cortical rearrangement. Consciousness in this sense has been compared to the blind spot in the retina, which is not an adaptation of the retina but instead just a by-product of the way the retinal axons were wired. Several scholars including Pinker, Chomsky, Edelman, and Luria have indicated the importance of the emergence of human language as an important regulative mechanism of learning and memory in the context of the development of higher-order consciousness (see the Neural correlates section above). Another suggestion has been that consciousness originates from a cell nestled in a blood capillary in the brain, where blood flow determines whether or not one is conscious.
States of consciousness
There are some brain states in which consciousness seems to be absent, including dreamless sleep and coma. There are also a variety of circumstances that can change the relationship between the mind and the world in less drastic ways, producing what are known as altered states of consciousness. Some altered states occur naturally; others can be produced by drugs or brain damage. Altered states can be accompanied by changes in thinking, disturbances in the sense of time, feelings of loss of control, changes in emotional expression, alterations in body image, and changes in meaning or significance.

The two most widely accepted altered states are sleep and dreaming. Although dream sleep and non-dream sleep appear very similar to an outside observer, each is associated with a distinct pattern of brain activity, metabolic activity, and eye movement; each is also associated with a distinct pattern of experience and cognition. During ordinary non-dream sleep, people who are awakened report only vague and sketchy thoughts, and their experiences do not cohere into a continuous narrative. During dream sleep, in contrast, people who are awakened report rich and detailed experiences in which events form a continuous progression, which may however be interrupted by bizarre or fantastic intrusions. Thought processes during the dream state frequently show a high level of irrationality. Both dream and non-dream states are associated with severe disruption of memory: it usually disappears in seconds during the non-dream state, and in minutes after awakening from a dream unless actively refreshed.

Research on the effects of partial epileptic seizures on consciousness found that patients experience altered states of consciousness during such seizures. In partial epileptic seizures, consciousness is impaired or lost while some aspects of consciousness, often automated behaviors, remain intact. Studies found that when measuring the qualitative features during partial epileptic seizures, patients exhibited an increase in arousal and became absorbed in the experience of the seizure, followed by difficulty in focusing and shifting attention.

A variety of psychoactive drugs, including alcohol, have notable effects on consciousness. These range from a simple dulling of awareness produced by sedatives, to increases in the intensity of sensory qualities produced by stimulants, cannabis, empathogens–entactogens such as MDMA ("Ecstasy"), or most notably by the class of drugs known as psychedelics. LSD, mescaline, psilocybin, dimethyltryptamine, and others in this group can produce major distortions of perception, including hallucinations; some users even describe their drug-induced experiences as mystical or spiritual in quality. The brain mechanisms underlying these effects are not as well understood as those induced by use of alcohol, but there is substantial evidence that alterations in the brain system that uses the chemical neurotransmitter serotonin play an essential role.

There has been some research into physiological changes in yogis and people who practise various techniques of meditation. Some research with brain waves during meditation has reported differences between those corresponding to ordinary relaxation and those corresponding to meditation. It has been disputed, however, whether there is enough evidence to count these as physiologically distinct states of consciousness.
The most extensive study of the characteristics of altered states of consciousness was made by psychologist Charles Tart in the 1960s and 1970s. Tart analyzed a state of consciousness as made up of a number of component processes, including exteroception (sensing the external world); interoception (sensing the body); input-processing (seeing meaning); emotions; memory; time sense; sense of identity; evaluation and cognitive processing; motor output; and interaction with the environment. Each of these, in his view, could be altered in multiple ways by drugs or other manipulations. The components that Tart identified have not, however, been validated by empirical studies. Research in this area has not yet reached firm conclusions, but a recent questionnaire-based study identified eleven significant factors contributing to drug-induced states of consciousness: experience of unity; spiritual experience; blissful state; insightfulness; disembodiment; impaired control and cognition; anxiety; complex imagery; elementary imagery; audio-visual synesthesia; and changed meaning of percepts.

Phenomenology
Phenomenology is a method of inquiry that attempts to examine the structure of consciousness in its own right, putting aside problems regarding the relationship of consciousness to the physical world. This approach was first proposed by the philosopher Edmund Husserl, and later elaborated by other philosophers and scientists. Husserl's original concept gave rise to two distinct lines of inquiry, in philosophy and psychology. In philosophy, phenomenology has largely been devoted to fundamental metaphysical questions, such as the nature of intentionality ("aboutness"). In psychology, phenomenology largely has meant attempting to investigate consciousness using the method of introspection, which means looking into one's own mind and reporting what one observes. This method fell into disrepute in the early twentieth century because of grave doubts about its reliability, but has been rehabilitated to some degree, especially when used in combination with techniques for examining brain activity.

Introspectively, the world of conscious experience seems to have considerable structure. Immanuel Kant asserted that the world as we perceive it is organized according to a set of fundamental "intuitions", which include 'object' (we perceive the world as a set of distinct things); 'shape'; 'quality' (color, warmth, etc.); 'space' (distance, direction, and location); and 'time'. Some of these constructs, such as space and time, correspond to the way the world is structured by the laws of physics; for others, the correspondence is not as clear. Understanding the physical basis of qualities, such as redness or pain, has been particularly challenging. David Chalmers has called this the hard problem of consciousness. Some philosophers have argued that it is intrinsically unsolvable, because qualities ("qualia") are ineffable; that is, they are "raw feels", incapable of being analyzed into component processes. Other psychologists and neuroscientists reject these arguments. For example, research on ideasthesia shows that qualia are organised into a semantic-like network. Nevertheless, it is clear that the relationship between a physical entity such as light and a perceptual quality such as color is extraordinarily complex and indirect, as demonstrated by a variety of optical illusions such as neon color spreading.
In neuroscience, a great deal of effort has gone into investigating how the perceived world of conscious awareness is constructed inside the brain. The process is generally thought to involve two primary mechanisms: hierarchical processing of sensory inputs, and memory. Signals arising from sensory organs are transmitted to the brain and then processed in a series of stages, which extract multiple types of information from the raw input. In the visual system, for example, sensory signals from the eyes are transmitted to the thalamus and then to the primary visual cortex; inside the cerebral cortex they are sent to areas that extract features such as three-dimensional structure, shape, color, and motion. Memory comes into play in at least two ways. First, it allows sensory information to be evaluated in the context of previous experience. Second, and even more importantly, working memory allows information to be integrated over time so that it can generate a stable representation of the world—Gerald Edelman expressed this point vividly by titling one of his books about consciousness The Remembered Present. In computational neuroscience, Bayesian approaches to brain function have been used to understand both the evaluation of sensory information in light of previous experience, and the integration of information over time. Bayesian models of the brain are probabilistic inference models, in which the brain takes advantage of prior knowledge to interpret uncertain sensory inputs in order to formulate a conscious percept; Bayesian models have successfully predicted many perceptual phenomena in vision and the nonvisual senses.

Despite the large amount of information available, many important aspects of perception remain mysterious. A great deal is known about low-level signal processing in sensory systems. However, how sensory systems, action systems, and language systems interact is poorly understood. At a deeper level, there are still basic conceptual issues that remain unresolved. Many scientists have found it difficult to reconcile the fact that information is distributed across multiple brain areas with the apparent unity of consciousness: this is one aspect of the so-called binding problem. There are also some scientists who have expressed grave reservations about the idea that the brain forms representations of the outside world at all: influential members of this group include psychologist J. J. Gibson and roboticist Rodney Brooks, who both argued in favor of "intelligence without representation".

Entropic brain
The entropic brain is a theory of conscious states informed by neuroimaging research with psychedelic drugs. The theory suggests that in primary states, such as rapid eye movement (REM) sleep, early psychosis, and the influence of psychedelic drugs, the brain is in a disordered state; normal waking consciousness constrains some of this freedom and makes possible metacognitive functions such as internal self-administered reality testing and self-awareness. Criticism has included questioning whether the theory has been adequately tested.

Medical aspects
The medical approach to consciousness is practically oriented. It derives from a need to treat people whose brain function has been impaired as a result of disease, brain damage, toxins, or drugs. In medicine, conceptual distinctions are considered useful to the degree that they can help to guide treatments.
Whereas the philosophical approach to consciousness focuses on its fundamental nature and its contents, the medical approach focuses on the amount of consciousness a person has: in medicine, consciousness is assessed as a "level" ranging from coma and brain death at the low end, to full alertness and purposeful responsiveness at the high end.

Consciousness is of concern to patients and physicians, especially neurologists and anesthesiologists. Patients may have disorders of consciousness or may need to be anesthetized for a surgical procedure. Physicians may perform consciousness-related interventions such as instructing the patient to sleep, administering general anesthesia, or inducing medical coma. Also, bioethicists may be concerned with the ethical implications of consciousness in medical cases of patients such as the Karen Ann Quinlan case, while neuroscientists may study patients with impaired consciousness in hopes of gaining information about how the brain works.

Assessment
In medicine, consciousness is examined using a set of procedures known as neuropsychological assessment. There are two commonly used methods for assessing the level of consciousness of a patient: a simple procedure that requires minimal training, and a more complex procedure that requires substantial expertise. The simple procedure begins by asking whether the patient is able to move and react to physical stimuli. If so, the next question is whether the patient can respond in a meaningful way to questions and commands. If so, the patient is asked for name, current location, and current day and time. A patient who can answer all of these questions is said to be "alert and oriented times four" (sometimes denoted "A&Ox4" on a medical chart), and is usually considered fully conscious.

The more complex procedure is known as a neurological examination, and is usually carried out by a neurologist in a hospital setting. A formal neurological examination runs through a precisely delineated series of tests, beginning with tests for basic sensorimotor reflexes, and culminating with tests for sophisticated use of language. The outcome may be summarized using the Glasgow Coma Scale, which yields a number in the range 3–15, with a score of 3 to 8 indicating coma, and 15 indicating full consciousness. The Glasgow Coma Scale has three subscales, measuring the best motor response (ranging from "no motor response" to "obeys commands"), the best eye response (ranging from "no eye opening" to "eyes opening spontaneously"), and the best verbal response (ranging from "no verbal response" to "fully oriented"). There is also a simpler pediatric version of the scale, for children too young to be able to use language. (A minimal scoring sketch appears at the end of this subsection.)

In 2013, an experimental procedure was developed to measure degrees of consciousness: the brain is stimulated with a magnetic pulse, the resulting waves of electrical activity are measured, and a consciousness score is derived from the complexity of the brain activity.
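Because the scale described above is just the sum of three bounded subscales, its arithmetic can be stated directly. Here is a minimal sketch encoding the standard ranges (eye opening 1–4, verbal response 1–5, motor response 1–6):

```python
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Sum the three GCS subscales: eye opening (1-4), verbal response
    (1-5), and motor response (1-6). Totals range from 3 to 15."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("subscale value out of range")
    return eye + verbal + motor

score = glasgow_coma_scale(eye=4, verbal=5, motor=6)
print(score, "coma" if score <= 8 else "above the coma range")  # 15
```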
Disorders of consciousness
Medical conditions that inhibit consciousness are considered disorders of consciousness. This category generally includes minimally conscious state and persistent vegetative state, but sometimes also includes the less severe locked-in syndrome and the more severe chronic coma. Differential diagnosis of these disorders is an active area of biomedical research. Finally, brain death results in an irreversible disruption of consciousness. While other conditions may cause a moderate deterioration (e.g., dementia and delirium) or a transient interruption (e.g., grand mal and petit mal seizures) of consciousness, they are not included in this category.

Anosognosia
One of the most striking disorders of consciousness goes by the name anosognosia, a Greek-derived term meaning 'unawareness of disease'. This is a condition in which patients are disabled in some way, most commonly as a result of a stroke, but either misunderstand the nature of the problem or deny that there is anything wrong with them. The most frequently occurring form is seen in people who have experienced a stroke damaging the parietal lobe in the right hemisphere of the brain, giving rise to a syndrome known as hemispatial neglect, characterized by an inability to direct action or attention toward objects located to the left with respect to their bodies. Patients with hemispatial neglect are often paralyzed on the left side of the body, but sometimes deny being unable to move. When questioned about the obvious problem, the patient may avoid giving a direct answer, or may give an explanation that doesn't make sense. Patients with hemispatial neglect may also fail to recognize paralyzed parts of their bodies: one frequently mentioned case is of a man who repeatedly tried to throw his own paralyzed leg out of the bed he was lying in, and when asked what he was doing, complained that somebody had put a dead leg into the bed with him. An even more striking type of anosognosia is Anton–Babinski syndrome, a rarely occurring condition in which patients become blind but claim to be able to see normally, and persist in this claim in spite of all evidence to the contrary.

Stream of consciousness
William James is usually credited with popularizing the idea that human consciousness flows like a stream, in his Principles of Psychology of 1890. According to James, the "stream of thought" is governed by five characteristics:
Every thought tends to be part of a personal consciousness.
Within each personal consciousness thought is always changing.
Within each personal consciousness thought is sensibly continuous.
It always appears to deal with objects independent of itself.
It is interested in some parts of these objects to the exclusion of others.

A similar concept appears in Buddhist philosophy, expressed by the Sanskrit term Citta-saṃtāna, which is usually translated as mindstream or "mental continuum". Buddhist teachings describe consciousness as manifesting moment to moment as sense impressions and mental phenomena that are continuously changing. The teachings list six triggers that can result in the generation of different mental events. These triggers are input from the five senses (seeing, hearing, smelling, tasting, or touch sensations) or a thought (relating to the past, present or the future) that happens to arise in the mind. The mental events generated as a result of these triggers are: feelings, perceptions and intentions/behaviour. The moment-by-moment manifestation of the mind-stream is said to happen in every person all the time. It even happens in a scientist who analyses various phenomena in the world, or analyses the material body, including the brain. The manifestation of the mindstream is also described as being influenced by physical laws, biological laws, psychological laws, volitional laws, and universal laws. The purpose of the Buddhist practice of mindfulness is to understand the inherent nature of the consciousness and its characteristics.
Narrative form
In the West, the primary impact of the idea has been on literature rather than science: "stream of consciousness as a narrative mode" means writing in a way that attempts to portray the moment-to-moment thoughts and experiences of a character. This technique perhaps had its beginnings in the monologues of Shakespeare's plays and reached its fullest development in the novels of James Joyce and Virginia Woolf, although it has also been used by many other noted writers. A frequently cited example is the interior monologue of Molly Bloom in the final chapter of Joyce's Ulysses.

Spiritual approaches
To most philosophers, the word "consciousness" connotes the relationship between the mind and the world. To writers on spiritual or religious topics, it frequently connotes the relationship between the mind and God, or the relationship between the mind and deeper truths that are thought to be more fundamental than the physical world. The mystical psychiatrist Richard Maurice Bucke, author of the 1901 book Cosmic Consciousness: A Study in the Evolution of the Human Mind, distinguished between three types of consciousness: 'Simple Consciousness', awareness of the body, possessed by many animals; 'Self Consciousness', awareness of being aware, possessed only by humans; and 'Cosmic Consciousness', awareness of the life and order of the universe, possessed only by humans who are enlightened. Many more examples could be given, such as the various levels of spiritual consciousness presented by Prem Saran Satsangi and Stuart Hameroff. Another thorough account of the spiritual approach is Ken Wilber's 1977 book The Spectrum of Consciousness, a comparison of western and eastern ways of thinking about the mind. Wilber described consciousness as a spectrum with ordinary awareness at one end, and more profound types of awareness at higher levels.

See also
Antahkarana
Centipede's dilemma
Cognitive closure
Cognitive neuroscience
Cognitive psychology
Chaitanya (consciousness)
Episodic memory
Explanatory gap
Feelings
Functionalism (philosophy of mind)
Indian psychology
Merkwelt
Mirror neuron
Models of Consciousness
Modularity of mind
New mysterianism
Plant perception (paranormal)
Sakshi (Witness)
Solipsism

References

Further reading

External links

Cognitive neuroscience
Cognitive psychology
Concepts in epistemology
Concepts in metaphysics
Concepts in the philosophy of mind
Concepts in the philosophy of science
Emergence
Mental processes
Metaphysics of mind
Neuropsychological assessment
Neuropsychology
Ontology
Phenomenology
Theory of mind
144634
https://en.wikipedia.org/wiki/Free%20music
Free music
Free music or libre music is music that, like free software, can freely be copied, distributed and modified for any purpose. Thus free music is either in the public domain or licensed under a free license by the artist or copyright holder themselves, often as a method of promotion. It does not mean that there can be no fee involved: the word free refers to freedom (as in free software), not to price. The Free Music Philosophy generally encourages creators to free their music using whatever language or methods they wish. A Free Music Public License (FMPL) is available for those who prefer a formal approach. Some free music is licensed under licenses that are intended for software (like the GPL) or other writings (the GFDL). But there are also licenses especially for music and other works of art, such as EFF's Open Audio License, LinuxTag's Open Music License, the Free Art license and some of the Creative Commons Licences.

History
Before the advent of copyright law in the early 18th century and its subsequent application to music compositions, all music was "free" according to the definitions used in free software or free music, since there were no copyright restrictions. In practice, however, music reproduction was generally restricted to live performances, and the legality of playing other people's music was unclear in most jurisdictions. Copyright laws changed this gradually, so much so that by the late 20th century, copying a few words of a musical composition or a few seconds of a sound recording, the two forms of music copyright, could be considered criminal infringement.

In response, the concept of free music was codified in the Free Music Philosophy by Ram Samudrala in early 1994. It was based on the idea of Free Software by Richard Stallman and coincided with nascent open art and open information movements. Up to this point, few modern musicians distributed their recordings and compositions in an unrestricted manner, and there was no concrete rationale as to why they did it, or should do it. The Free Music Philosophy used a three-pronged approach to voluntarily encourage the spread of unrestricted copying, based on the fact that copies of recordings and compositions could be made and distributed with complete accuracy and ease via the Internet. First, since music by its very nature is organic in its growth, the ethical basis of limiting its distribution using copyright laws was questioned: an existential responsibility was placed upon music creators who were drawing upon the creations of countless others in an unrestricted manner to create their own. Second, it was observed that the basis of copyright law, "to promote the progress of science and useful arts", had been perverted by the music industry to maximise profit over creativity, resulting in a huge burden on society (the control of copying) simply to ensure its profits. Third, as copying became rampant, it was argued that musicians would have no choice but to move to a different economic model that exploited the spread of information to make a living, instead of trying to control it with limited government-enforced monopolies. The Free Music Philosophy was reported on by diverse media outlets including Billboard, Forbes, Levi's Original Music Magazine, The Free Radical, Wired and The New York Times.
Along with free software and Linux (a free operating system), copyleft licenses, the explosion of the Web and the rise of P2P, the cementing of mp3 as a compression standard for recordings, and despite the efforts of the music industry, free music became largely a reality in the early 21st century. Organisations such as the Electronic Frontier Foundation and Creative Commons, with free-information champions like Lawrence Lessig, were devising numerous licenses that offered different flavours of copyright and copyleft. The question was no longer why and how music should be free, but rather how creativity would flourish while musicians developed models to generate revenue in the Internet era.

Record labels and websites distributing free music
Audition Records – free and non-free CC licenses
Dogmazic – free and non-free CC licenses, GNU GPL
Free Music Archive – free and non-free CC licenses
Jamendo – free and non-free CC licenses, Free Art License
Incompetech – CC-BY, paid licenses available
LOCA Records
Magnatune
Opsound
Musopen
Unsigned band web

Notable bands distributing their music under free or close-to-free conditions
Note that some licenses, such as CC-BY-NC, are not free by definition. However, works under these licenses are listed here as being related to the topic.

See also
Copyleft
Deezer
Free Culture movement
File sharing
Guvera
Libre.fm
List of musical works released in a stem format
Mutopia Project
Open music
Open Music Model
Podsafe
Spotify
We7
Wolfgang's Vault

References

External links
The etree.org wiki: etree pioneered the standards for distributing lossless audio on the net.
Free Music Licenses

Music industry
2428195
https://en.wikipedia.org/wiki/Secure%20access%20module
Secure access module
A secure access module or secure application module (SAM) is a piece of cryptographic hardware typically used by smart card readers to perform mutual key authentication. SAMs can be used to manage access in a variety of contexts, such as public transport fare collection and point of sale devices. Physically, a SAM card can be a SIM card plugged into a SAM slot in a card reader, or a fixed integrated circuit in a housing directly soldered on a printed circuit board.

Generally, a reader system consists of a microcontroller and a reader IC that communicates over the RF interface with a contactless smartcard. The microcontroller controls the reader IC functions, such as protocol handling, command flow, and data interpretation. By integrating a SAM into the reader system, the SAM handles all the key management and cryptography in a secure way. The entire system enables authentication and encryption of the contactless communication between the SAM and the host system.

SAMs can be deployed in any of the following applications:
Generate application keys based on master keys
Store and secure master keys
Perform cryptographic functions with smart cards
Use as a secure encryption device
Perform mutual authentication
Generate session keys
Perform secure messaging

References

Encryption devices
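To make the key-diversification, mutual-authentication, and session-key roles listed above concrete, here is a minimal illustrative sketch. Real SAMs perform these steps with smartcard ciphers such as 3DES or AES inside tamper-resistant hardware; the HMAC-based construction, the challenge layout, and all names below are assumptions made for the example only.

```python
import hashlib
import hmac
import secrets

def derive_key(master_key: bytes, card_uid: bytes) -> bytes:
    """Diversify a per-card application key from a master key that never
    leaves the SAM (real SAMs use cipher-based derivation, e.g. AES CMAC)."""
    return hmac.new(master_key, card_uid, hashlib.sha256).digest()

def tag(key: bytes, data: bytes) -> bytes:
    """Message authentication tag over a challenge transcript."""
    return hmac.new(key, data, hashlib.sha256).digest()

master_key = secrets.token_bytes(32)          # stored only inside the SAM
card_uid = bytes.fromhex("04112233445566")    # read from the contactless card
card_key = derive_key(master_key, card_uid)   # personalized into the card

# Mutual authentication: each side issues a random challenge and proves
# knowledge of the shared diversified key by answering the other's.
rnd_sam, rnd_card = secrets.token_bytes(16), secrets.token_bytes(16)
sam_key = derive_key(master_key, card_uid)    # recomputed inside the SAM

assert hmac.compare_digest(tag(sam_key, rnd_card + rnd_sam),
                           tag(card_key, rnd_card + rnd_sam))  # card checks SAM
assert hmac.compare_digest(tag(card_key, rnd_sam + rnd_card),
                           tag(sam_key, rnd_sam + rnd_card))   # SAM checks card

# Both sides can now derive the same session key for secure messaging.
session_key = tag(sam_key, b"session" + rnd_sam + rnd_card)
```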
38294193
https://en.wikipedia.org/wiki/Volatility%20%28software%29
Volatility (software)
Volatility is an open-source memory forensics framework for incident response and malware analysis. It is written in Python and supports Microsoft Windows, Mac OS X, and Linux (as of version 2.5). Volatility was created by Aaron Walters, drawing on academic research he did in memory forensics.

Operating system support
Volatility supports investigations of the following memory images:

Windows:
32-bit Windows XP (Service Pack 2 and 3)
32-bit Windows 2003 Server (Service Pack 0, 1, 2)
32-bit Windows Vista (Service Pack 0, 1, 2)
32-bit Windows 2008 Server (Service Pack 1, 2)
32-bit Windows 7 (Service Pack 0, 1)
32-bit Windows 8, 8.1, and 8.1 Update 1
32-bit Windows 10 (initial support)
64-bit Windows XP (Service Pack 1 and 2)
64-bit Windows 2003 Server (Service Pack 1 and 2)
64-bit Windows Vista (Service Pack 0, 1, 2)
64-bit Windows 2008 Server (Service Pack 1 and 2)
64-bit Windows 2008 R2 Server (Service Pack 0 and 1)
64-bit Windows 7 (Service Pack 0 and 1)
64-bit Windows 8, 8.1, and 8.1 Update 1
64-bit Windows Server 2012 and 2012 R2
64-bit Windows 10 (including at least 10.0.14393)
64-bit Windows Server 2016 (including at least 10.0.14393.0)

Mac OSX:
32-bit 10.5.x Leopard (the only 64-bit 10.5 is Server, which isn't supported)
32-bit 10.6.x Snow Leopard
32-bit 10.7.x Lion
64-bit 10.6.x Snow Leopard
64-bit 10.7.x Lion
64-bit 10.8.x Mountain Lion
64-bit 10.9.x Mavericks
64-bit 10.10.x Yosemite
64-bit 10.11.x El Capitan
64-bit 10.12.x Sierra
64-bit 10.13.x High Sierra
64-bit 10.14.x Mojave
64-bit 10.15.x Catalina

Linux:
32-bit Linux kernels 2.6.11 to 5.5
64-bit Linux kernels 2.6.11 to 5.5
OpenSuSE, Ubuntu, Debian, CentOS, Fedora, Mandriva, etc.

Memory format support
Volatility supports a variety of sample file formats and the ability to convert between these formats:
Raw/Padded Physical Memory
Firewire (IEEE 1394)
Expert Witness (EWF)
32- and 64-bit Windows Crash Dump
32- and 64-bit Windows Hibernation (from Windows 7 or earlier)
32- and 64-bit Mach-O files
Virtualbox Core Dumps
VMware Saved State (.vmss) and Snapshot (.vmsn)
HPAK Format (FastDump)
QEMU memory dumps
LiME format

References

Computer forensics
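As a rough illustration of how the framework is typically driven, the sketch below shells out to the classic Volatility 2.x command line; imageinfo and pslist are standard plugins, while the image path and the Win7SP1x64 profile are assumptions made for the example.

```python
import subprocess

IMAGE = "memory.dmp"  # hypothetical memory image acquired from a target machine

# Ask Volatility to suggest a profile describing the image's OS version.
subprocess.run(["python", "vol.py", "-f", IMAGE, "imageinfo"], check=True)

# List the processes recorded in the image, using a profile such as the
# one suggested by imageinfo above.
subprocess.run(
    ["python", "vol.py", "-f", IMAGE, "--profile=Win7SP1x64", "pslist"],
    check=True,
)
```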
6139840
https://en.wikipedia.org/wiki/Missile%20launch%20control%20center
Missile launch control center
A launch control center (LCC), in the United States, is the main control facility for intercontinental ballistic missiles (ICBMs). A launch control center monitors and controls missile launch facilities. From a launch control center, the missile combat crew can monitor the complex, launch the missile, or relax in the living quarters (depending on the ICBM system). The LCC is designed to provide maximum protection for the missile combat crew and the equipment vital to missile launch. Missile silos are common across the midwestern United States, and over 450 missiles remain in US Air Force (USAF) service. Due to modern conventional weapons, missile launch control centers are becoming rarer in the US, and the force is expected to remain at 450 Minuteman III missiles.

General information
All LCCs are dependent on a missile support base (MSB) for logistics support. For example, Minot AFB is the MSB for the 91st Missile Wing. Three types of Minuteman LCCs exist:
Alternate Command Post (ACP): performs backup functions for the missile support base; controls missile wing communications
Squadron Command Post (SCP): performs backup functions for the ACP; controls squadron execution and communications
Primary LCC (PLCC): performs execution and rapid message processing

There are four configurations of the LCC, differing primarily in the amount and location of communications equipment. Functionally, there are three LCC designations. One Alternate Command Post (ACP) LCC is located within each Minuteman wing and serves as backup for the wing command post. Three Squadron Command Posts (SCPs) serve as command units for the remaining squadrons within the wing, and report directly to the wing command post. The ACP doubles as SCP for the squadron it is located within. The remainder of the LCCs (16) are classified as primary LCCs. Four primary LCCs are located within each squadron and report to their respective command post.

Titan II LCC
The Titan LCCs held four crew members: the Missile Combat Crew Commander (MCCC), the Deputy Missile Combat Crew Commander (DMCCC), the Ballistic Missile Analyst Technician (BMAT), and the Missile Facilities Technician (MFT). Titan II had a three-story LCC dome. The first level was the crew's living area and contained a kitchen, bathroom, bedroom, and a small equipment area that housed an exhaust fan and a water heater. The second level was the launch control area and held the LCCFC (Launch Control Complex Facility Console, the main launch console), the ALOC (Alternate Launch Officer Console), the Control Monitor Group (which monitored the missile), and several other pieces of equipment. The lowest level, level 3, held communications equipment, the two battery backup supplies, the sewage lift station, the motor-generator, and several other pieces of equipment. There were two types of Titan II sites: standard sites and ACP (alternate command post) sites. ACPs had all of the equipment found on a standard site plus additional communication equipment.

Minuteman facilities
Launch Control Center
A Minuteman wing consists of either three or four squadrons. Five flights comprise each squadron, and each flight remotely controls ten Minuteman missiles. Each flight is commanded from a Launch Control Center, or LCC. The Minuteman LCC is an underground structure of reinforced concrete and steel of sufficient strength to withstand weapon effects.
It contains equipment and a missile combat crew of two officers capable of controlling, monitoring, and launching the 10 Minuteman missiles in unmanned launch facilities (LFs) within the flight. The combat crew monitors message traffic from higher headquarters to all of the other four flights in its squadron, and has the ability to countermand launch attempts initiated by any other flight in its squadron. One LCC in each Minuteman squadron is designated a Squadron Command Post and has the ability to take control of and remotely launch the Minuteman missiles of any other flight in its squadron, in the event that an authenticated Emergency War Order is received and the flight designated in the EWO fails to execute the ICBM fire mission contained therein. One of the wing's Squadron Command Posts is designated a Wing Command Post and can execute an authenticated EWO for any flight of Minuteman missiles in the wing. It can also countermand a launch attempt by any flight in any squadron in the wing.

The Minuteman combat crew has voice communications capability with all the LFs of the flight which it commands. Under ordinary circumstances this is almost always used to coordinate with maintenance crews on-site at an LF. If the maintenance crew is performing a site penetration (entry into the missile silo), communication with the combat crew will always be necessary in order to properly authenticate (prove who they are). Under extraordinary circumstances it may be necessary to communicate with a flight security squad that is dispatched to the LF, usually to investigate a perimeter security alarm.

Each combat crew has a voice circuit called the Hardened Voice Channel (HVC) which links the five combat crews (LCCs) that comprise the squadron. There is also a voice circuit called the EWO (Emergency War Order) circuit which links the squadron command posts (CPs). One of the squadron command posts is also the wing CP. These two voice circuits work like a party line with all LCCs connected simultaneously; thus, it is not possible for any of the combat crews to have private conversations. The term "EWO" used here is not to be confused with an actual Emergency War Order message from the National Command Authority; the same term is used to denote both this circuit and the message transmitted over the Primary Alert System. Message traffic over the LF, HVC, and EWO voice circuits is transmitted via the Hardened Intersite Cable System. Each combat crew also has access to commercial telephone lines for ordinary civilian communications.

The outer structure of the LCC itself is cylindrical with hemispherical ends. Its walls are of steel-reinforced concrete approximately 4.5 feet thick. It is normally accessed from the LCF/MAF by a freight-size elevator. A blast door permits entry into the LCC from the tunnel junction (adjoining the LCC Equipment Building housing the backup diesel-electric generator and emergency supplies). An escape hatch 3 ft in diameter is located at the far end of the LCC. The escape hatch and associated tunnel are constructed to withstand weapon effects and allow personnel egress in the event of damage to the vertical access shaft. The tunnel is sand-filled, and the sand will fall into the LCC if the hatch at the bottom of the tunnel is opened. Essential LCC launch equipment and communications gear, along with the missile combat crew, are located in a shock-isolated compartment suspended within the outer structure. The room is steel and suspended as a pendulum by four shock isolators.
The LCC's electronics are fully shielded from electromagnetic pulse damage with carbon block surge arresters.

REACT-A LCCs
REACT-A capsules were brought online in the mid-1990s and continue in service with the 341st Missile Wing, the 90th Missile Wing, and the 91st Missile Wing. This was a major upgrade from the ILCS (Improved Launch Control System) capsules at the 341 MW, which dated to the late 1970s, and from the CDB capsules at the 90th and 91st missile wings. The two launch control officers now sit side by side and must turn four launch keys to initiate a launch. (A toy illustration of this two-officer interlock appears at the end of this section.)

REACT-B LCCs
The B/CDB capsules were upgraded to REACT-B in the mid-1990s and used only at the 321st Missile Wing at Grand Forks AFB, ND and the 564th Missile Squadron (the "odd squad") of the 341st Missile Wing at Malmstrom AFB, MT until both were shut down (19 August 1998 for the 564th, 30 September 1998 for the 321st).

CDB LCCs
Command Data Buffer (CDB) was a configuration for early Minuteman missiles at the 90th Missile Wing at FE Warren AFB, WY, the 91st Missile Wing at Minot AFB, ND, and the 351st Missile Wing at Whiteman AFB, MO. The overall layout of the LCC did not change through the upgrade to REACT; however, there were some major equipment changes.

Airborne Launch Control Centers
Airborne Launch Control Centers (ALCC) provide a survivable launch capability for the Minuteman force by utilizing the Airborne Launch Control System (ALCS), which is operated by an airborne missile combat crew. From 1967 to 1998, the ALCC mission was performed by United States Air Force EC-135 command post aircraft, including EC-135A, EC-135C, EC-135G, and EC-135L aircraft. Today, the ALCC mission is performed by airborne missileers from Air Force Global Strike Command's (AFGSC) 625th Strategic Operations Squadron (STOS) and United States Strategic Command (USSTRATCOM). Starting on October 1, 1998, the ALCS has been located on board the United States Navy's E-6B Mercury. The ALCS crew is integrated into the battle staff of the USSTRATCOM "Looking Glass" Airborne Command Post (ABNCP) and is on alert around the clock.

Launch control equipment building
The Launch Control Equipment Building (LCEB) is a hardened, below-ground capsule for support equipment such as air conditioners, diesel generators, etc. At Wing 1 (and the former Wing 2 setup at Ellsworth AFB) this equipment is above ground ("topside") in the MAF.

Missile Alert Facility
A Minuteman Missile Alert Facility (MAF), previously known as the Launch Control Facility (LCF), is the above-ground component. It is "soft", that is, not able to withstand nuclear explosions. It consists of a security control office, dining room, kitchen, sleeping areas for the security forces stationed there (and occasional maintenance troops), garages for various vehicles, and other facilities.

Netlink
As of 2006, all Minuteman LCCs were modified to handle the LCC Netlink upgrade. The Netlink system brought internet access underground for missile combat crews.
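The REACT-A key-turn requirement noted above is, at heart, a concurrence interlock: no single crew member can produce a valid launch vote alone. The toy sketch below illustrates only that idea; the four-key count follows the article, but the timing window and all of the logic are invented for illustration and bear no relation to the real system.

```python
import time

KEY_TURN_WINDOW = 2.0  # seconds; an assumed value, purely illustrative

def valid_launch_vote(key_turn_times: list[float]) -> bool:
    """Toy concurrence check: all four keys must be turned within a window
    short enough that one person cannot physically reach every key."""
    if len(key_turn_times) != 4:
        return False
    return max(key_turn_times) - min(key_turn_times) <= KEY_TURN_WINDOW

t = time.monotonic()
print(valid_launch_vote([t, t + 0.2, t + 0.4, t + 1.0]))   # True
print(valid_launch_vote([t, t + 0.2, t + 0.4, t + 30.0]))  # False
```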
Communications equipment
Primary Alerting System (PAS)
Strategic Automated Command and Control System (SACCS), formerly known as the Strategic Air Command Digital Information Network (SACDIN)
Minimum Essential Emergency Communications Network (MEECN)
Air Force Satellite Communications (AFSATCOM), using both Milstar and Defense Satellite Communications System satellites
Survivable Low Frequency Communications System (SLFCS)
Hardened Intersite Cable System (HICS) lines
Voice Dial Lines 1 & 2

The Minuteman LCC differs from previous missile systems in that it has room for only two personnel, the Missile Combat Crew Commander (MCCC) and the Deputy Missile Combat Crew Commander (DMCCC). Previously, each MAF was equipped with the ICBM SHF Satellite Terminal (ISST) communications system. This system has since been deactivated, with Francis E. Warren Air Force Base being the first to completely remove the system components.

Peacekeeper LCC
The Peacekeeper LCCs were non-REACT-modified CDB LCCs. Instead of replacing the command and control equipment, the 'old' Minuteman CDB C2 system was modified for the 50 Peacekeeper ICBMs.

See also
Airborne Launch Control System (ALCS)
Airborne Launch Control Center (ALCC)
Continuity of government
Emergency Rocket Communications System (ERCS)
Game theory
Ground Wave Emergency Network (GWEN)
Minimum Essential Emergency Communications Network (MEECN)
Post-Attack Command and Control System (PACCS)
Survivable Low Frequency Communications System (SLFCS)
The Cold War

References

External links
U.S. National Park Service article with detailed information on Minuteman missile launch control centers.
Titan Missile Museum: Pima Air & Space Museum
20th Century Castles: LCC real estate sales

Intercontinental ballistic missiles of the United States
United States nuclear command and control
19945
https://en.wikipedia.org/wiki/Motherboard
Motherboard
A motherboard (also called mainboard, main circuit board, or mobo) is the main printed circuit board (PCB) in general-purpose computers and other expandable systems. It holds and allows communication between many of the crucial electronic components of a system, such as the central processing unit (CPU) and memory, and provides connectors for other peripherals. Unlike a backplane, a motherboard usually contains significant sub-systems, such as the central processor, the chipset's input/output and memory controllers, interface connectors, and other components integrated for general use. Motherboard specifically refers to a PCB with expansion capabilities. As the name suggests, this board is often referred to as the "mother" of all components attached to it, which often include peripherals, interface cards, and daughterboards: sound cards, video cards, network cards, host bus adapters, TV tuner cards, IEEE 1394 cards, and a variety of other custom components. In contrast, the term mainboard describes a device with a single board and no additional expansions or capability, such as the controlling boards in laser printers, television sets, washing machines, mobile phones, and other embedded systems with limited expansion abilities.

History
Prior to the invention of the microprocessor, a digital computer consisted of multiple printed circuit boards in a card-cage case with components connected by a backplane, a set of interconnected sockets. In very old designs, copper wires were the discrete connections between card connector pins, but printed circuit boards soon became the standard practice. The central processing unit (CPU), memory, and peripherals were housed on individual printed circuit boards, which were plugged into the backplane. The ubiquitous S-100 bus of the 1970s is an example of this type of backplane system.

The most popular computers of the 1980s, such as the Apple II and IBM PC, had published schematic diagrams and other documentation which permitted rapid reverse-engineering and third-party replacement motherboards. Usually intended for building new computers compatible with the exemplars, many motherboards offered additional performance or other features and were used to upgrade the manufacturer's original equipment.

During the late 1980s and early 1990s, it became economical to move an increasing number of peripheral functions onto the motherboard. In the late 1980s, personal computer motherboards began to include single ICs (also called Super I/O chips) capable of supporting a set of low-speed peripherals: PS/2 keyboard and mouse, floppy disk drive, serial ports, and parallel ports. By the late 1990s, many personal computer motherboards included consumer-grade embedded audio, video, storage, and networking functions without the need for any expansion cards at all; higher-end systems for 3D gaming and computer graphics typically retained only the graphics card as a separate component. Business PCs, workstations, and servers were more likely to need expansion cards, either for more robust functions, or for higher speeds; those systems often had fewer embedded components.

Laptop and notebook computers that were developed in the 1990s integrated the most common peripherals. This even included motherboards with no upgradeable components, a trend that would continue as smaller systems were introduced after the turn of the century (like the tablet computer and the netbook). Memory, processors, network controllers, power source, and storage would be integrated into some systems.
Design
A motherboard provides the electrical connections by which the other components of the system communicate. Unlike a backplane, it also contains the central processing unit and hosts other subsystems and devices.

A typical desktop computer has its microprocessor, main memory, and other essential components connected to the motherboard. Other components such as external storage, controllers for video display and sound, and peripheral devices may be attached to the motherboard as plug-in cards or via cables; in modern microcomputers, it is increasingly common to integrate some of these peripherals into the motherboard itself.

An important component of a motherboard is the microprocessor's supporting chipset, which provides the supporting interfaces between the CPU and the various buses and external components. This chipset determines, to an extent, the features and capabilities of the motherboard.

Modern motherboards include:
CPU sockets (or CPU slots), in which one or more microprocessors may be installed. In the case of CPUs in ball grid array packages, such as the VIA Nano and the Goldmont Plus, the CPU is soldered directly to the motherboard.
Memory slots, into which the system's main memory is installed, typically in the form of DIMM modules containing DRAM chips; these may be DDR3, DDR4, or DDR5.
The chipset, which forms an interface between the CPU, main memory, and peripheral buses.
Non-volatile memory chips (usually flash ROM in modern motherboards) containing the system's firmware or BIOS.
The clock generator, which produces the system clock signal to synchronize the various components.
Slots for expansion cards (the interface to the system via the buses supported by the chipset).
Power connectors, which receive electrical power from the computer power supply and distribute it to the CPU, chipset, main memory, and expansion cards. Some graphics cards (e.g. GeForce 8 and Radeon R600) require more power than the motherboard can provide, so dedicated connectors have been introduced to attach them directly to the power supply.
Connectors for hard disk drives, optical disc drives, or solid-state drives, now typically SATA or NVMe.

Additionally, nearly all motherboards include logic and connectors to support commonly used input devices, such as USB for mice and keyboards. Early personal computers such as the Apple II or IBM PC included only this minimal peripheral support on the motherboard. Occasionally video interface hardware was also integrated into the motherboard; for example, on the Apple II, and rarely on IBM-compatible computers such as the IBM PCjr. Additional peripherals such as disk controllers and serial ports were provided as expansion cards.

Given the high thermal design power of high-speed computer CPUs and components, modern motherboards nearly always include heat sinks and mounting points for fans to dissipate excess heat.

Form factor
Motherboards are produced in a variety of sizes and shapes called form factors, some of which are specific to individual computer manufacturers. However, the motherboards used in IBM-compatible systems are designed to fit various case sizes. Most desktop computer motherboards use the ATX standard form factor — even those found in Macintosh and Sun computers, which have not been built from commodity components. A case's motherboard and power supply unit (PSU) form factors must all match, though some smaller form factor motherboards of the same family will fit larger cases.
For example, an ATX case will usually accommodate a microATX motherboard. Laptop computers generally use highly integrated, miniaturized, and customized motherboards. This is one of the reasons that laptop computers are difficult to upgrade and expensive to repair. Often the failure of one laptop component requires the replacement of the entire motherboard, which is usually more expensive than a desktop motherboard.

CPU sockets
A CPU (central processing unit) socket or slot is an electrical component that attaches to a printed circuit board (PCB) and is designed to house a CPU (also called a microprocessor). It is a special type of integrated circuit socket designed for very high pin counts. A CPU socket provides many functions, including a physical structure to support the CPU, support for a heat sink, facilitating replacement (as well as reducing cost), and most importantly, forming an electrical interface both with the CPU and the PCB. CPU sockets are found on the motherboards of most desktop and server computers (laptops typically use surface-mount CPUs), particularly those based on the Intel x86 architecture. A CPU socket type and motherboard chipset must support the CPU series and speed.

Integrated peripherals
With the steadily declining costs and size of integrated circuits, it is now possible to include support for many peripherals on the motherboard. By combining many functions on one PCB, the physical size and total cost of the system may be reduced; highly integrated motherboards are thus especially popular in small form factor and budget computers. Commonly integrated peripherals include:
Disk controllers for SATA drives, and historically PATA drives.
Historical floppy-disk controller.
Integrated graphics controller supporting 2D and 3D graphics, with VGA, DVI, HDMI, DisplayPort, and TV output.
Integrated sound card supporting 8-channel (7.1) audio and S/PDIF output.
Ethernet network controller for connection to a LAN and the Internet.
USB controller.
Wireless network interface controller.
Bluetooth controller.
Temperature, voltage, and fan-speed sensors that allow software to monitor the health of computer components.

Peripheral card slots
A typical motherboard will have a different number of connections depending on its standard and form factor. A standard, modern ATX motherboard will typically have two or three PCI Express x16 connections for a graphics card, one or two legacy PCI slots for various expansion cards, and one or two PCI Express x1 slots (which have superseded PCI). A standard EATX motherboard will have two to four PCI Express x16 connections for graphics cards and a varying number of PCI and PCI Express x1 slots; it can sometimes also have a PCI Express x4 slot (this varies between brands and models). Some motherboards have two or more PCI Express x16 slots, to allow more than two monitors without special hardware, or to use special multi-GPU technologies called SLI (for Nvidia) and CrossFire (for AMD). These allow two to four graphics cards to be linked together for better performance in intensive graphical computing tasks, such as gaming and video editing. In newer motherboards, M.2 slots are used for solid-state drives and/or wireless network interface controllers.

Temperature and reliability
Motherboards are generally air cooled, with heat sinks often mounted on larger chips. Insufficient or improper cooling can cause damage to the internal components of the computer, or cause it to crash.
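The health monitoring enabled by the sensors listed under Integrated peripherals above is exposed to ordinary software by the operating system. A minimal sketch using the third-party psutil library, which reads motherboard temperature and fan sensors on Linux (sensor names and availability vary by board and OS; this is illustrative, not a universal API):

```python
import psutil

def report_board_health():
    # {chip_name: [readings with label, current, high, critical]}, Linux only
    temps = psutil.sensors_temperatures()
    # {chip_name: [readings with label, current RPM]}, Linux only
    fans = psutil.sensors_fans()
    for chip, readings in temps.items():
        for r in readings:
            print(f"{chip}/{r.label or 'temp'}: {r.current:.1f} C")
    for chip, readings in fans.items():
        for r in readings:
            print(f"{chip}/{r.label or 'fan'}: {r.current} RPM")

if __name__ == "__main__":
    report_board_health()
```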
Passive cooling, or a single fan mounted on the power supply, was sufficient for many desktop computer CPUs until the late 1990s; since then, most have required CPU fans mounted on heat sinks, due to rising clock speeds and power consumption. Most motherboards have connectors for additional computer fans, integrated temperature sensors to detect motherboard and CPU temperatures, and controllable fan connectors which the BIOS or operating system can use to regulate fan speed. Alternatively, computers can use a water cooling system instead of many fans.

Some small form factor computers and home theater PCs designed for quiet and energy-efficient operation boast fan-less designs. This typically requires the use of a low-power CPU, as well as a careful layout of the motherboard and other components to allow for heat sink placement.

A 2003 study found that some spurious computer crashes and general reliability issues, ranging from screen image distortions to I/O read/write errors, can be attributed not to software or peripheral hardware but to aging capacitors on PC motherboards. Ultimately this was shown to be the result of a faulty electrolyte formulation, an issue termed capacitor plague.

Modern motherboards use electrolytic capacitors to filter the DC power distributed around the board. These capacitors age at a temperature-dependent rate, as their water-based electrolytes slowly evaporate. This can lead to loss of capacitance and subsequent motherboard malfunctions due to voltage instabilities. While most capacitors are rated for 2,000 hours of operation at 105 °C, their expected design life roughly doubles for every 10 °C below this (roughly, life ≈ rated hours × 2^((105 − T)/10)). At 65 °C a lifetime of 3 to 4 years can be expected. However, many manufacturers deliver substandard capacitors, which significantly reduce life expectancy. Inadequate case cooling and elevated temperatures around the CPU socket exacerbate this problem. With top blowers, the motherboard components can be kept under 95 °C, effectively doubling the motherboard lifetime.

Mid-range and high-end motherboards, on the other hand, use solid capacitors exclusively. For every 10 °C less, their average lifespan is multiplied approximately by three, resulting in a roughly six-times-higher life expectancy at 65 °C. These capacitors may be rated for 5,000, 10,000, or 12,000 hours of operation at 105 °C, extending the projected lifetime in comparison with standard electrolytic capacitors.

In desktop PCs and notebook computers, the motherboard cooling and monitoring solutions are usually based on a Super I/O chip or an embedded controller.

Bootstrapping using the Basic Input/Output System
Motherboards contain a ROM (and later EPROM, EEPROM, or NOR flash) used to initialize hardware devices and load an operating system from a peripheral device. Microcomputers such as the Apple II and IBM PC used ROM chips mounted in sockets on the motherboard. At power-up, the central processing unit would load its program counter with the address of the boot ROM and start executing instructions from it. These instructions initialized and tested the system hardware, displayed system information on the screen, performed RAM checks, and then loaded an operating system from a peripheral device. If none was available, the computer would perform tasks from other ROM stores or display an error message, depending on the model and design of the computer. For example, both the Apple II and the original IBM PC had Cassette BASIC (ROM BASIC) and would start it if no operating system could be loaded from the floppy disk or hard disk.
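As a small illustration of this hand-off from firmware to operating system: a conventional PC-compatible BIOS loads the first 512-byte sector of the boot device into memory and treats it as bootable only if it ends with the 0x55 0xAA signature. The sketch below checks that signature on a disk image file; the filename is hypothetical and this is a simplified model of what the firmware does, not firmware code itself.

```python
def looks_bootable(path: str) -> bool:
    """Return True if the file's first sector carries the 0x55 0xAA boot signature."""
    with open(path, "rb") as f:
        sector = f.read(512)          # a boot sector is exactly 512 bytes
    return len(sector) == 512 and sector[510] == 0x55 and sector[511] == 0xAA

print(looks_bootable("disk.img"))     # hypothetical disk image file
```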
Most modern motherboard designs use a BIOS, stored in an EEPROM or NOR flash chip soldered to or socketed on the motherboard, to boot an operating system. When the computer is powered on, the BIOS firmware tests and configures memory, circuitry, and peripherals. This power-on self-test (POST) may include testing some of the following:
Video card
Expansion cards inserted into slots, such as conventional PCI and PCI Express
Historical floppy drive
Temperatures, voltages, and fan speeds for hardware monitoring
CMOS memory used to store BIOS configuration
Keyboard and mouse
Sound card
Network adapter
Optical drives: CD-ROM or DVD-ROM
Hard disk drive and solid-state drive
Security devices, such as a fingerprint reader
USB devices, such as a USB mass storage device

Many motherboards now use a successor to BIOS called UEFI. It became popular after Microsoft began requiring it for a system to be certified to run Windows 8.

See also
Peripheral Component Interconnect (PCI)
PCI-X
PCI Express (PCIe)
Accelerated Graphics Port (AGP)
M.2
U.2
Computer case screws
CMOS battery
Expansion card
List of computer hardware manufacturers
Basic Input/Output System (BIOS)
Unified Extensible Firmware Interface (UEFI)
Overclocking
Single-board computer
Switched-mode power supply applications
Symmetric multiprocessing
Chip creep

References

External links
The Making of a Motherboard: ECS Factory Tour
The Making of a Motherboard: Gigabyte Factory Tour
Front Panel I/O Connectivity Design Guide - v1.3 (pdf file)

Motherboard
IBM PC compatibles
31182877
https://en.wikipedia.org/wiki/Serious%20Sam%203%3A%20BFE
Serious Sam 3: BFE
Serious Sam 3: BFE is a first-person shooter video game developed by Croatia-based indie development studio Croteam and published by Devolver Digital. It is part of the Serious Sam series and the prequel to the 2001 video game Serious Sam. The game takes place in 22nd-century Egypt, during Mental's invasion of Earth, as implied in The First Encounter. The game features a 16-player online co-op campaign mode, as well as 4-player splitscreen co-op.

The game was first released for Microsoft Windows on 22 November 2011. OS X support followed shortly after, on 23 April 2012. Work on the Linux version began after a high number of requests, with the first Linux-related update being the porting of the game's dedicated server. The game itself was released on 20 December 2012, one day after Valve opened the beta branch for "Steam for Linux".

Gameplay
Like previous titles in the series, Serious Sam 3: BFE involves fighting against many hordes of enemies in wide-open environments. However, Serious Sam 3 has more closed environments than its predecessors, particularly in the early levels. There are also a larger number of enemies that can attack the player from a distance. The player can carry an unlimited number of weapons, including a minigun, rocket launcher, assault rifle, and a cannon. There are 13 weapons in total, five of which have a manual reload. The signature close-combat weapons from the first game, the knife and chainsaw, have been replaced with a sledgehammer with three modes of attack (vertical strike, 180° turn, and full 360° turn).

In the style of old-fashioned first-person shooters, there is no regenerating health; instead, there are health and armor power-ups scattered throughout the levels that the player must pick up. Additionally, the levels are full of secret areas where health, armor, ammo, and in some cases weapons from later levels can be found, following the tradition of the previous games. Some weapons, such as the Lasergun and the Sniper Rifle, and their respective ammunition pickups are in fact secret-only, and are otherwise not found in the levels normally. There are no puzzles; however, the player must find keys, pull levers, and find environmental anomalies to progress.

Classic enemies such as the Beheaded Kamikaze, Beheaded Rocketeer, and Kleer Skeleton return in the game. New ones include the Khnum and Scrapjack (resembling the Hell Knight and Mancubus from Doom, respectively), as well as cloned soldiers reminiscent of the Strogg from the Quake series. The Kamikaze has returned to his original design, rather than the Serious Sam 2 design. The Gnaar's design has been radically changed from the original game: it is now much larger, differently shaped, and walks on all fours, unlike its bipedal counterpart in Serious Sam.

Serious Sam 3 features some new gameplay mechanics, such as sprinting and iron sights. Unlike most other FPS games that have sprinting, the player can sprint for an unlimited time, but cannot attack while sprinting. The pistol and assault rifle can be aimed down the sights, which increases accuracy, but only slightly; the player moves slower while aiming down the sights, making it impractical at close range. The player can perform hand kills or kick enemies to conserve ammo, depending on the weapon selected. For example, a Gnaar's eyeball can be ripped out or an Antaresian Spider's shell can be broken.
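These movement rules can be modeled as a small state machine. The sketch below is an illustrative reconstruction of the behavior described above (unlimited sprint that blocks attacking, aiming that slightly raises accuracy while slowing movement), not Croteam's actual code; the numeric modifiers are assumptions for the example.

```python
class PlayerState:
    BASE_SPEED = 1.0       # assumed baseline units
    BASE_ACCURACY = 0.70   # assumed baseline hit probability

    def __init__(self):
        self.sprinting = False
        self.aiming = False

    def speed(self) -> float:
        if self.sprinting:
            return self.BASE_SPEED * 1.6   # assumed sprint multiplier
        if self.aiming:
            return self.BASE_SPEED * 0.6   # aiming slows the player
        return self.BASE_SPEED

    def accuracy(self) -> float:
        # aiming down sights increases accuracy "but only slightly"
        return self.BASE_ACCURACY + (0.05 if self.aiming else 0.0)

    def can_attack(self) -> bool:
        return not self.sprinting          # sprinting blocks attacking

p = PlayerState()
p.aiming = True
print(p.speed(), p.accuracy(), p.can_attack())  # 0.6 0.75 True
```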
Plot
Serious Sam 3: BFE serves as a prequel to the original Serious Sam: The First Encounter and depicts the events on Earth before Sam's journey into the past. Prior to AD 2060, humanity had slowly begun uncovering artifacts and ruins left behind in ancient times by the Sirians, the famed and long-thought-extinct race from Sirius. Unfortunately, Mental has chosen this time to turn his attention upon Earth. He dispatches his space fleet carrying his endless hordes to attack Earth, leading a three-year conquest that drives humanity almost to the point of extinction. In a last-ditch effort, the survivors turn to the Time-Lock, a recently excavated device supposedly capable of granting a single person the ability of time travel via an inter-dimensional portal. Through it, that person could reach a pivotal point in time and alter events of the past. But as the device lies dormant, they must first discover a means to turn it on.

Sam "Serious" Stone, part of the Earth Defense Force, is dispatched with a detachment of soldiers in Alpha team to modern Egypt, which is occupied by Mental's alien army. Their original mission is to recon, rendezvous with, and extract Bravo team, who are protecting Dr. Stein, a scientist carrying hieroglyphics believed to contain instructions for powering up the Time-Lock. Sam's insertion goes haywire as his chopper is shot down and both teams are quickly wiped out. However, he is able to recover the hieroglyphics from Stein's phone in the museum and transmit them to headquarters. Deciphering indicates there is a hidden Sirian chamber below the Great Pyramid. Sam clears himself a path to a tunnel underneath the Sphinx and descends into the Pyramid. He not only discovers the hidden chamber, but recovers crucial information and a bracelet device from the remains of what might have been Earth's last Sirian.

In order to power the Time-Lock, two dormant but incredibly powerful plasma-energy generators need to be activated. Hellfire, from Charlie team, inserts Sam to bring both online. This is slowly accomplished, and Charlie team is staged to enter the Time-Lock. Sam is relieved of duty and in the process of being extracted from Cairo, but is shot down once again and is forced to flee towards the lost ruins of Nubia. Traversing through more tombs, Sam gets back in touch with Hellfire, who, shortly before dying herself, tells him that Mental's forces have overrun the human military and killed them all. Now determined to finish what the Sirians had started, Sam vows to use the Time-Lock himself and kill Mental in the past before he can destroy humanity in the present.

Sam then makes one last journey, to Hatshepsut Temple, where the Time-Lock is located. The struggle to this destination ends with Sam killing Ugh-Zan IV, the father of Ugh-Zan III from Serious Sam: The First Encounter. With Ugh-Zan IV dead, the Time-Lock activates, displaying an inter-dimensional portal to 3000 BC, the time in which the Sirians became extinct. Sam calls Mental on Stein's phone and is answered by Mental's daughter, Judy. She tells him that Mental is planning to "moon" him. Sam notices the Moon plummeting rapidly within Earth's atmosphere and escapes through the Time-Lock to 3000 BC as the Moon impacts Earth, destroying the planet.

Copy protection
The game features Steamworks DRM and a complex system of custom checks as part of its copy protection. If the game code detects what it believes to be an unauthorised copy, it alters gameplay to make play exceedingly difficult.
An invincible arachnoid is spawned; this creature can charge at high speed, attack in melee, and attack from range with twin chainguns.

Marketing and release

Indie games
Before the release of Serious Sam 3: BFE, three indie games were announced to be in development. All were released around the time of Serious Sam 3's release.
Serious Sam Double D - A side-scrolling platform game featuring the ability to wield multiple weapons at a time.
Serious Sam: Kamikaze Attack! - A 2D game for Android and iOS that lets the player direct and control a Beheaded Kamikaze.
Serious Sam: The Random Encounter - An action role-playing game with simplified graphics.

Gold Edition
The Gold Edition of Serious Sam 3: BFE includes:
The main game
The "Jewel of the Nile" expansion pack
The Bonus Pack, which consists of:
The original soundtrack of the game, composed by Damjan Mravunac and Filip Brtan, in AAC, FLAC, and WMA formats
The "Brett Sanderson Headless Kamikaze" skin for the "Headless Kamikaze" multiplayer model
A sniper scope for the single-player AS-24 "Devastator" weapon
A making-of video of the game (in Croatian, with English subtitles)
A digital 42-page colored artwork album
A digital copy of the game's box art
High-resolution images of the game's posters
High-resolution versions of the game's different trailers

Pre-orders
Users who pre-ordered Serious Sam 3: BFE received Devolver Digital's CFO Fork Parker as a model for the game's multiplayer mode. Additionally, users who pre-ordered the "Serious Deluxe Edition" received the Gold Edition, the Fork Parker model, golden skins for the Fork Parker and Serious Sam models, and both classic Serious Sam titles, The First Encounter and The Second Encounter. On top of that, all deluxe pre-orderers were gifted a copy of Serious Sam Classics: Revolution, which was released into Steam Early Access in late 2014.

Downloadable content
A downloadable content pack titled Jewel of the Nile was released for the PC and Mac versions of the game on 16 October 2012. The DLC contains a new single-player campaign and competitive game modes for the game's multiplayer mode. New achievements are also included. Jewel of the Nile was released for Xbox 360 on 17 October 2012, together with the game itself.

VR port
A virtual reality version of the game, titled Serious Sam 3 VR, was released on 9 November 2017.

Reception
Serious Sam 3: BFE has garnered mostly positive reviews. Eurogamer gave the game a 7 out of 10, praising it for delivering what Duke Nukem Forever failed to, while criticizing the redundancy of the title's gameplay compared to previous iterations in the series. Game Informer awarded the game a score of 7.75, praising its graphics and heavy metal score while noting it stays true to the original concept. Destructoid gave it an 8.5, saying "It's a lot of fun indeed. A lot of backbreaking, grueling, soul-destroying fun." Review aggregation website Metacritic gave the game a score of 72 out of 100 based on 53 reviews.

References

External links

2011 video games
Devolver Digital games
First-person shooters
Indie video games
Linux games
MacOS games
Multiplayer and single-player video games
PlayStation 3 games
Serious Sam
Video games with Steam Workshop support
Video game prequels
Video games developed in Croatia
Video games set in Egypt
Windows games
Xbox 360 games
530509
https://en.wikipedia.org/wiki/Horst%20Feistel
Horst Feistel
Horst Feistel (January 30, 1915 – November 14, 1990) was a German-American cryptographer who worked on the design of ciphers at IBM, initiating research that culminated in the development of the Data Encryption Standard (DES) in the 1970s. The structure used in DES, called a Feistel network, is commonly used in many block ciphers.

Life and work
Feistel was born in Berlin, Germany in 1915, and moved to the United States in 1934. During World War II, he was placed under house arrest, but gained US citizenship on 31 January 1944. The following day he was granted a security clearance and began work for the US Air Force Cambridge Research Center (AFCRC) on Identification Friend or Foe (IFF) devices, where he remained until the 1950s. He was subsequently employed at MIT's Lincoln Laboratory, then the MITRE Corporation. Finally, he moved to IBM, where he received an award for his cryptographic work. His research at IBM led to the development of the Lucifer and Data Encryption Standard (DES) ciphers.

Feistel was one of the earliest non-government researchers to study the design and theory of block ciphers. He lent his name to the Feistel network construction, a common method for constructing block ciphers (used, for example, in DES).

Feistel obtained a bachelor's degree at MIT, and his master's at Harvard, both in physics. He married Leona (Gage) in 1945, with whom he had a daughter, Peggy.

Notes

References
Whitfield Diffie, Susan Landau (1998). Privacy on the Line: The Politics of Wiretapping and Encryption.
Horst Feistel, "Cryptography and Computer Privacy." Scientific American, Vol. 228, No. 5, 1973.
Horst Feistel, W. Notz, J. Lynn Smith. "Some cryptographic techniques for machine-to-machine data communications." IEEE Proceedings, 63(11), 1545–1554, 1975.
Levy, Steven. Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age, 2001.

External links

1915 births
1990 deaths
People from Berlin
German emigrants to the United States
MIT Department of Physics alumni
Harvard Graduate School of Arts and Sciences alumni
Modern cryptographers
IBM employees
IBM Research computer scientists
German computer scientists
Mitre Corporation people
MIT Lincoln Laboratory people
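To illustrate the Feistel network construction named above: the block is split into halves, and each round replaces one half with itself XORed against a keyed round function of the other half, then swaps; decryption runs the identical rounds with the subkeys reversed. The sketch below is a minimal, self-inverting example; the round function (truncated SHA-256) and the literal subkeys are toy stand-ins, not DES's actual F function or key schedule.

```python
import hashlib

def _f(half: bytes, subkey: bytes) -> bytes:
    # Toy round function: any keyed pseudorandom function works here,
    # because Feistel decryption never needs to invert F.
    return hashlib.sha256(half + subkey).digest()[:len(half)]

def _rounds(block: bytes, subkeys) -> bytes:
    mid = len(block) // 2
    left, right = block[:mid], block[mid:]
    for k in subkeys:
        left, right = right, bytes(a ^ b for a, b in zip(left, _f(right, k)))
    return right + left          # undo the final swap

def encrypt(block: bytes, subkeys) -> bytes:
    return _rounds(block, subkeys)

def decrypt(block: bytes, subkeys) -> bytes:
    return _rounds(block, list(reversed(subkeys)))  # same rounds, reversed keys

keys = [b"k1", b"k2", b"k3", b"k4"]               # toy subkeys
ct = encrypt(b"8bytes!!", keys)
assert decrypt(ct, keys) == b"8bytes!!"
```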
62632
https://en.wikipedia.org/wiki/Lycoris%20%28company%29
Lycoris (company)
Lycoris (formerly Redmond Linux Corporation) was started in the year 2000 with the intent of making free software easy enough for anyone to use. Redmond Linux was founded by Joseph Cheek, an entrepreneur who had previously worked for Linuxcare. In late 2001 it merged with embedded systems company DeepLinux; the merged entity was named Redmond Linux Corporation. The company's first product was Redmond Linux Personal, an easy-to-use Linux desktop operating system. The company was renamed Lycoris in January 2002, and its assets were acquired by Mandriva on June 15, 2005.

The flagship product of Lycoris was Lycoris Desktop/LX, a Linux distribution. The company was based in Maple Valley, Washington, USA. Lycoris is currently part of Mandriva.

Lycoris Desktop/LX
The Lycoris Desktop/LX installer was originally based on Caldera International's OpenLinux Workstation 3.1 distribution, with the rest of the distribution built from the kernel up. The desktop and applications looked a lot like Microsoft Windows XP, right down to the background image that shipped with the software.

References

Linux companies
2839721
https://en.wikipedia.org/wiki/Adaptive%20software%20development
Adaptive software development
Adaptive software development (ASD) is a software development process that grew out of the work by Jim Highsmith and Sam Bayer on rapid application development (RAD). It embodies the principle that continuous adaptation of the process to the work at hand is the normal state of affairs.

Adaptive software development replaces the traditional waterfall cycle with a repeating series of speculate, collaborate, and learn cycles. This dynamic cycle provides for continuous learning and adaptation to the emergent state of the project. The characteristics of an ASD life cycle are that it is mission focused, feature based, iterative, timeboxed, risk driven, and change tolerant. As with RAD, ASD is also an antecedent to agile software development.

The word speculate refers to the paradox of planning: it is more likely that all stakeholders are comparably wrong about certain aspects of the project's mission while trying to define it. During speculation, the project is initiated and adaptive cycle planning is conducted. Adaptive cycle planning uses project initiation information—the customer's mission statement, project constraints (e.g., delivery dates or user descriptions), and basic requirements—to define the set of release cycles (software increments) that will be required for the project.

Collaboration refers to the effort of balancing work on the predictable parts of the environment (planning and guiding them) with adaptation to the uncertain surrounding mix of changes caused by various factors, such as technology, requirements, stakeholders, and software vendors. The learning cycles, challenging all stakeholders, are based on short iterations of design, build, and test. During these iterations, knowledge is gathered by making small mistakes based on false assumptions and correcting those mistakes, leading to greater experience and eventually mastery in the problem domain.

References
Adaptive Software Development: A Collaborative Approach to Managing Complex Systems, Highsmith, J.A., 2000, New York: Dorset House, 392pp.
Agile Project Management: Creating Innovative Products, Jim Highsmith, Addison-Wesley, March 2004, 277pp.
Software Engineering: A Practitioner's Approach, Roger Pressman, Bruce Maxim.

Software development process
Agile software development
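The speculate-collaborate-learn loop described above can be sketched in code. This is an illustrative model using the article's terminology only; the function bodies, feature names, and the two-features-per-timebox rule are all invented for the example.

```python
def speculate(mission, constraints, requirements, completed):
    # Re-plan the remaining increments from what has been learned so far.
    return [r for r in requirements if r not in completed]

def collaborate(planned_features):
    # Design, build, and test one timeboxed increment (stubbed here:
    # assume capacity for two features per cycle).
    return {feature: "built" for feature in planned_features[:2]}

def learn(increment, completed):
    # Review results with stakeholders; feed corrections into the next cycle.
    completed.update(increment)

completed = {}
mission = "demo product"
constraints = {"timebox_weeks": 2}
requirements = ["login", "reports", "billing", "export"]

while len(completed) < len(requirements):      # repeat until the mission is met
    plan = speculate(mission, constraints, requirements, completed)
    increment = collaborate(plan)
    learn(increment, completed)

print(sorted(completed))   # ['billing', 'export', 'login', 'reports']
```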
2059043
https://en.wikipedia.org/wiki/Compact%20Disc%20and%20DVD%20copy%20protection
Compact Disc and DVD copy protection
CD/DVD copy protection is a blanket term for various methods of copy protection for CDs and DVDs. Such methods include DRM, CD-checks, dummy files, illegal tables of contents, over-sizing or over-burning the CD, physical errors, and bad sectors. Many protection schemes rely on breaking compliance with CD and DVD standards, leading to playback problems on some devices.

Protection schemes rely on distinctive features that:
can be applied to a medium during the manufacturing process, so that a protected medium is distinguishable from an unprotected one;
cannot be faked, copied, or retroactively applied to an unprotected medium using typical hardware and software.

Technology

Filesystems / dummy files
Most CD-ROMs use the ISO 9660 file system to organize the available storage space for use by a computer or player. This has the effect of establishing directories (i.e., folders) and files within those directories. Usually, the filesystem is modified with extensions intended to overcome limitations in the ISO 9660 design, such as the Joliet, Rock Ridge, and El Torito extensions. These are, however, compatible additions to the underlying ISO 9660 structure, not complete replacements or modifications.

The most basic approach to creating a distinctive feature is to purposely fake some information within the filesystem. Early generations of copying software copied every single file one by one from the original medium and re-created a new filesystem on the target medium.

Sectors
A sector is the primary data structure on a CD-ROM accessible to external software (including the OS). On a Mode 1 CD-ROM, each sector contains 2048 bytes of user data (content) and 304 bytes of structural information. Among other things, the structural information consists of:
the sector number, the sector's relative and absolute logical position;
an error detection code (EDC), an advanced checksum used to detect read errors where possible;
an error correction code (ECC), an advanced method of detecting and correcting errors.

Using the EDC and ECC information, the drive can detect and repair many (but not all) types of read error. Copy protections can use these fields as a distinctive feature by purposely crafting sectors with improper EDC/ECC fields during manufacture. The protection software tries to read those sectors, expecting read errors. As early generations of end-user software and hardware were not able to generate sectors with illegal structural information, this feature could not be re-generated with such tools. If the sectors forming the distinctive feature have become readable, the medium is presumed to be a copy.

A modification of this approach uses large regions of unreadable sectors with small islands of readable ones interspersed. Most software trying to copy protected media will skip intervals of sectors when confronted with unreadable ones, expecting them all to be bad. In contrast to the original approach, the protection scheme then expects those sectors to be readable, supposing the medium to be a copy when read errors occur.

Sub-channels
Besides the main channel, which holds all of the user data, a CD-ROM contains a set of eight sub-channels where certain meta-information can be stored. (For an audio CD, the user data is the audio itself; for a data CD, it is the filesystem and file data.) One of the sub-channels — the Q-channel — states the drive's current position relative to the beginning of the CD and the current track.
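The sector-based authenticity check described above can be sketched as follows. The sector positions are hypothetical, and read_sector() is a stub standing in for a platform-specific raw read (on real systems this would go through the operating system's CD-ROM pass-through interface); the stub simulates an original disc on which every deliberately malformed sector is unreadable.

```python
BAD_SECTORS = (1234, 1240, 1337)   # hypothetical positions chosen at mastering

def read_sector(drive, lba: int) -> bytes:
    """Stub raw read: bad sectors raise, as they would on an original disc."""
    if lba in BAD_SECTORS:
        raise IOError(f"unrecovered read error at sector {lba}")
    return bytes(2048)             # 2048 bytes of user data per Mode 1 sector

def medium_is_original(drive) -> bool:
    for lba in BAD_SECTORS:
        try:
            read_sector(drive, lba)   # expect a read error on an original
        except IOError:
            continue                  # unreadable: matches the pressed original
        return False                  # readable: a copy whose burner "repaired" EDC/ECC
    return True

print(medium_is_original(drive=None))   # True with the stub above
```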
The Q-channel was designed for audio CDs (which, for a few years, were the only CDs), where this positional information is used to keep the drive on track; nevertheless, the Q-channel is filled even on data CDs. Another sub-channel, the P-channel (the first of the sub-channels), carries even more primitive information—a sort of semaphore—indicating the points where each track starts. As every Q-channel field contains a 16-bit checksum over its content, copy protection can yet again use this field to distinguish between an original medium and a copy. Early generations of end-user software and hardware calculated the Q-channel by themselves, not expecting it to carry any valuable information. Modern software and hardware are able to write any given information into the P and Q sub-channels.

Twin sectors
This technique exploits the way the sectors on a CD-ROM are addressed and how the drive seeks from one sector to another. On every CD-ROM, the sectors state their logical absolute and relative positions in the corresponding sector headers. The drive can use this information when it is told to retrieve or seek to a certain sector. Note that such information is not physically "hard-wired" into the CD-ROM itself but is part of user-controllable data. A part of an unprotected CD-ROM may look like this (simplified):

 ... | 6551 | 6552 | 6553 ("Jill") | 6554 | 6555 | ...

When the drive is told to read from or seek to sector 6553, it calculates the physical distance, moves the laser diode, and starts reading from the (spinning) disc, waiting for sector 6553 to come by. A protected CD-ROM may look like this:

 ... | 6551 | 6552 | 6553 ("Jill") | 6553 ("Mary") | 6554 | ...

In this example, a sector was inserted ("Mary") with a sector address identical to the one right before the insertion point (6553). When the drive is told to read from or seek to sector 6553 on such a disc, the resulting sector content depends on the position the drive starts seeking from. If the drive has to seek forwards, the sector's original content "Jill" is returned. If the drive has to seek backwards, the sector's twin "Mary" is returned. A protected program can check whether the CD-ROM is original by positioning the drive behind sector 6553 and then reading from it — expecting the "Mary" version to appear. When a program tries to copy such a CD-ROM, it will miss the twin sector, as the drive skips the second 6553 sector while seeking sector 6554. There are more details about this technique (e.g. the twin sectors need to be recorded in large extents, the sub-Q-channel has to be modified, etc.) that are omitted here. Note that if the twin sectors were right next to each other, as shown above, the reader would always read the first one, "Jill"; in practice, the twin sectors need to be farther apart on the disc.

Data position measurement
Stamped CDs are perfect clones and always have their data at the same physical position, whereas writable media differ from each other. Data position measurement (DPM) detects these small physical differences in order to efficiently protect against duplicates. DPM was first used publicly in 1996 by Link Data Security's CD-Cops. SecuROM 4 and later use this protection method, as do Nintendo optical discs.

Changes that followed
The Red Book CD-DA audio specification does not include any copy protection mechanism other than a simple anti-copy flag. Starting in early 2002, attempts were made by record companies to market "copy-protected" non-standard compact discs. Philips stated that such discs were not permitted to bear the trademarked Compact Disc Digital Audio logo because they violate the Red Book specification.
There was great public outcry over copy-protected discs, because many saw them as a threat to fair use. For example, audio tracks on such media cannot be easily added to a personal music collection on a computer's hard disk or a portable (non-CD) music player. Also, many ordinary CD audio players (e.g. in car radios) had problems playing copy-protected media, mostly because they used hardware and firmware components also used in CD-ROM drives. The reason for this reuse is cost efficiency; the components meet the Red Book standard, so no valid reason existed not to use them. Other car stereos that supported CD-ROM discs containing compressed audio files (such as MP3, FLAC, or Windows Media) had to use CD-ROM drive hardware (meeting the Yellow Book CD-ROM standard) in order to be capable of reading those discs.

In late 2005, Sony BMG Music sparked the Sony CD copy protection scandal when it included a form of copy protection called Extended Copy Protection (XCP) on discs from 52 artists. Upon inserting such a disc in the CD drive of a computer running Microsoft Windows, the XCP software would be installed. If CD ripper software (or other software, such as a real-time effects program, that reads digital audio from the disc in the same way as a CD ripper) subsequently accessed the music tracks on the CD, XCP would substitute white noise for the audio on the disc.

Technically inclined users and computer security professionals found that XCP contains a rootkit component. After installation, XCP went to great lengths to disguise its existence, and it even attempted to disable the computer's CD drive if XCP was forcibly removed. XCP's efforts to cloak itself unfortunately allowed writers of malware to amplify the damage done by their software, hiding the malware under XCP's cloak if XCP had been installed on the victim's machine. Several publishers of antivirus and anti-spyware software updated their products to detect and remove XCP if found, on the grounds that it is a trojan horse or other malware; and an assistant secretary for the United States Department of Homeland Security chastised companies that would cause security holes on customers' computers, reminding them that they do not own the computers.

Facing resentment and class-action lawsuits, Sony BMG issued a product recall for all discs including XCP and announced it was suspending the use of XCP on future discs. On November 21, 2005, Texas Attorney General Greg Abbott sued Sony BMG over XCP, and on December 21, 2005, sued Sony BMG over its MediaMax copy protection.
Where the consumer believes the copyright holder has not been reasonable in entertaining the request, they are within their rights under the Act to make an application to the Secretary of State to review the merits of the complaint and (if the complaint is upheld) to instruct the copyright holder to implement a work-around circumventing the copy protection. Schedule 5A of the Copyright, Designs and Patents Act 1988 lists the permitted acts to which the provisions of section 296ZE apply (i.e. the cases in which the consumer can use the remedy if the copy protection prevents the user from performing a permitted act).

See also
List of Compact Disc and DVD copy protection schemes
List of copy protection schemes

References

External links
CDMediaWorld's CD protection page

Compact Disc and DVD
39185592
https://en.wikipedia.org/wiki/Twimight
Twimight
Twimight was an open-source Android client for the social networking site Twitter. The client let users view "tweets", or micro-blog posts, on the Twitter website in real time, as well as publish their own.

Added value
In addition to being a fully functional, ad-free, and open-source Twitter client, Twimight allowed communication when the cellular network was unavailable (for example, in case of a natural disaster). Twimight was equipped with a feature called the "disaster mode", which users could enable or disable at will. When the disaster mode was enabled and the cellular network was down, Twimight used peer-to-peer communication to let users tweet in any circumstance. Enabling the disaster mode turned on the phone's Bluetooth transceiver and connected the user to other nearby phones. This created a mobile ad hoc network, or MANET, which could be used, for example, to locate missing persons even when the communication infrastructure had failed.

History
Twimight started out as a project for a Master's thesis at ETH Zurich in the spring of 2011.

References

External links
The Twimight development website

Free mobile software
Mobile social software
Free and open-source Android software
Android (operating system) software
Twitter services and applications
2013 software
Wireless networking
Microblogging software
Geosocial networking
5363
https://en.wikipedia.org/wiki/Video%20game
Video game
A video game or computer game is an electronic game that involves interaction with a user interface or input device, such as a joystick, controller, keyboard, or motion-sensing device, to generate visual feedback. This feedback is shown on a video display device, such as a TV set, monitor, touchscreen, or virtual reality headset. Video games are often augmented with audio feedback delivered through speakers or headphones, and sometimes with other types of feedback, including haptic technology. Not all computer games are video games: text adventure games, chess, and so on do not depend upon a graphics display.

Video games are defined based on their platforms, which include arcade video games, console games, and personal computer (PC) games. More recently, the industry has expanded onto mobile gaming through smartphones and tablet computers, virtual and augmented reality systems, and remote cloud gaming. Video games are classified into a wide range of genres based on their type of gameplay and purpose.

The first video game prototypes in the 1950s and 1960s were simple extensions of electronic games, using video-like output from large, room-sized computers. The first consumer video game was the arcade video game Computer Space in 1971. In 1972 came the iconic hit arcade game Pong, and the first home console, the Magnavox Odyssey. The industry grew quickly during the golden age of arcade video games from the late 1970s to early 1980s, but suffered from the crash of the North American video game market in 1983 due to loss of publishing control and saturation of the market. Following the crash, the industry matured, dominated by Japanese companies such as Nintendo, Sega, and Sony, and established practices and methods around the development and distribution of video games to prevent a similar crash in the future, many of which continue to be followed.

Today, bringing a video game to market requires numerous skills and roles, including developers, publishers, distributors, retailers, and console and other third-party manufacturers. In the 2000s, the core industry centered on "AAA" games, leaving little room for riskier, experimental games. Coupled with the availability of the Internet and digital distribution, this gave room for independent video game development (or indie games) to gain prominence into the 2010s. Since then, the commercial importance of the video game industry has been increasing. The emerging Asian markets and mobile games on smartphones in particular are altering player demographics towards casual gaming and increasing monetization by incorporating games as a service. As of 2020, the global video game market had estimated annual revenues of US$159 billion across hardware, software, and services. This is three times the size of the 2019 global music industry and four times that of the 2019 film industry.

Origins
Early video games used interactive electronic devices with various display formats. The earliest example is from 1947: a "Cathode-ray tube amusement device" was filed for a patent on 25 January 1947 by Thomas T. Goldsmith Jr. and Estle Ray Mann, and issued on 14 December 1948 as U.S. Patent 2455992. Inspired by radar display technology, it consists of an analog device allowing a user to control the parabolic arc of a dot on the screen to simulate a missile being fired at targets, which are paper drawings fixed to the screen.
Other early examples include Christopher Strachey's draughts game; the Nimrod computer at the 1951 Festival of Britain; OXO, a tic-tac-toe computer game by Alexander S. Douglas for the EDSAC in 1952; Tennis for Two, an electronic interactive game engineered by William Higinbotham in 1958; and Spacewar!, written by MIT students Martin Graetz, Steve Russell, and Wayne Wiitanen on a DEC PDP-1 computer in 1961. Each game used a different means of display: NIMROD had a panel of lights to play the game of Nim, OXO had a graphical display to play tic-tac-toe, Tennis for Two had an oscilloscope to display a side view of a tennis court, and Spacewar! used the DEC PDP-1's vector display to show two spaceships battling each other. These preliminary inventions paved the way for the video games of today.

Ralph H. Baer, while working at Sanders Associates in 1966, devised a control system to play a rudimentary game of table tennis on a television screen. With the company's approval, Baer built the prototype "Brown Box". Sanders patented Baer's inventions and licensed them to Magnavox, which commercialized it as the first home video game console, the Magnavox Odyssey, released in 1972. Separately, Nolan Bushnell and Ted Dabney, inspired by seeing Spacewar! running at Stanford University, devised a similar version running in a smaller coin-operated arcade cabinet using a less expensive computer. This was released as Computer Space, the first arcade video game, in 1971. Bushnell and Dabney went on to form Atari, Inc., and with Allan Alcorn, created their second arcade game in 1972, the hit ping pong-style Pong, which was directly inspired by the table tennis game on the Odyssey. Sanders and Magnavox sued Atari for infringement of Baer's patents, but Atari settled out of court, paying for perpetual rights to the patents. Following their agreement, Atari made a home version of Pong, which was released by Christmas 1975. The success of the Odyssey and Pong, both as an arcade game and home machine, launched the video game industry. Both Baer and Bushnell have been called the "Father of Video Games" for their contributions.

Terminology
The term "video game" was developed to distinguish this class of electronic games that were played on some type of video display rather than on a teletype printer or similar device. This also distinguished them from handheld electronic games like Merlin, which commonly used LED lights as indicators but did not use them in combination for imaging purposes.

"Computer game" may also be used to describe video games, because all video games essentially require a computer processor, and in some situations the terms may be used interchangeably. However, the term "computer game" may also be more specific to games played primarily on personal computers or other types of flexible hardware systems (also known as PC games), to distinguish them from video games that are played on fixed console systems. Other terms, such as "television game" or "telegame", had been used in the 1970s and early 1980s, particularly for the home consoles that connect to a television set. In Japan, where consoles like the Odyssey were first imported and then made within the country by the large television manufacturers such as Toshiba and Sharp Corporation, such games are known as "TV games", or TV geemu or terebi geemu,
and the term "TV game" is still commonly used into the 21st century. "Electronic game" may also be used to refer to video games, but this also incorporates devices like early handheld electronic games that lack any video output.

The first appearance of the term "video game" emerged around 1973. The Oxford English Dictionary cited a November 10, 1973 BusinessWeek article as the first printed use of the term. Though Bushnell believed the term came from a vending magazine review of Computer Space in 1971, a review of the major vending magazines Vending Times and Cashbox showed that the term came much earlier, appearing first around March 1973 in these magazines in mass usage, including by the arcade game manufacturers. As analyzed by video game historian Keith Smith, the sudden appearance suggested that the term had been proposed and readily adopted by those involved. This appeared to trace to Ed Adlum, who ran Cashbox's coin-operated section until 1972 and then later founded RePlay Magazine, covering the coin-op amusement field, in 1975. In a September 1982 issue of RePlay, Adlum is credited with first naming these games "video games": "RePlay's Eddie Adlum worked at 'Cash Box' when 'TV games' first came out. The personalities in those days were Bushnell, his sales manager Pat Karns and a handful of other 'TV game' manufacturers like Henry Leyser and the McEwan brothers. It seemed awkward to call their products 'TV games', so borrowing a word from Billboard's description of movie jukeboxes, Adlum started to refer to this new breed of amusement machine as 'video games.' The phrase stuck." Adlum explained in 1985 that up until the early 1970s, amusement arcades typically had non-video arcade games such as pinball machines and electro-mechanical games. With the arrival of video games in arcades during the early 1970s, there was initially some confusion in the arcade industry over what term should be used to describe the new games. He "wrestled with descriptions of this type of game," alternating between "TV game" and "television game" but "finally woke up one day" and said, "what the hell... video game!"

Definition
While many games readily fall into a clear, well-understood definition of video games, new genres and innovations in game development have raised the question of what the essential factors of a video game are that separate the medium from other forms of entertainment. The interactive films introduced in the 1980s, such as Dragon's Lair, featured full-motion video played off a form of media but only limited user interaction. This required a means to distinguish these games from more traditional board games that happen to also use external media, such as the Clue VCR Mystery Game, which required players to watch VCR clips between turns. To distinguish between these two, video games are considered to require some interactivity that affects the visual display.

Most video games tend to feature some type of victory or winning conditions, such as a scoring mechanism or a final boss fight. The introduction of walking simulators (adventure games that allow for exploration but lack any objectives) like Gone Home, and empathy games (video games that tend to focus on emotion) like That Dragon, Cancer, brought forward games that did not have any such winning condition, raising the question of whether these were actually games. These are still commonly justified as video games, as they provide a game world that the player can interact with by some means.

The lack of any industry definition for a video game by 2021 was an issue during the case Epic Games v.
Apple, which dealt with video games offered on Apple's iOS App Store. Among the concerns raised were games like Fortnite Creative and Roblox, which created metaverses of interactive experiences, and whether the larger game and the individual experiences themselves were games in relation to the fees that Apple charged for the App Store. Judge Yvonne Gonzalez Rogers, recognizing that there was not yet an industry-standard definition for a video game, established for her ruling that "At a bare minimum, videogames appear to require some level of interactivity or involvement between the player and the medium", compared to passive entertainment like film, music, and television, and that "videogames are also generally graphically rendered or animated, as opposed to being recorded live or via motion capture as in films or television". Rogers still concluded that what is a video game "appears highly eclectic and diverse".

Video game terms
The gameplay experience varies radically between video games, but many common elements exist. Most games will launch into a title screen and give the player a chance to review options, such as the number of players, before starting. Most games are divided into levels which the player must work the avatar through, scoring points and collecting power-ups to boost the avatar's innate attributes, all while either using special attacks to defeat enemies or moves to avoid them. This information is relayed to the player through a type of on-screen user interface, such as a heads-up display atop the rendering of the game itself. Taking damage will deplete the avatar's health, and if that falls to zero, or if the avatar otherwise falls into an impossible-to-escape location, the player will lose one of their lives. Should they lose all their lives without gaining an extra life or "1-UP", the player will reach the "game over" screen. Many levels, as well as the game's finale, end with a type of boss character the player must defeat to continue. In some games, intermediate points between levels offer save points, where the player can create a saved game on storage media to restart the game should they lose all their lives or need to stop the game and restart at a later time. These may also take the form of a passage that can be written down and re-entered at the title screen.

Product flaws include software bugs, which can manifest as glitches that may be exploited by the player; this is often the foundation of speedrunning a video game. These bugs, along with cheat codes, Easter eggs, and other hidden secrets intentionally added to the game, can also be exploited. On some consoles, cheat cartridges allow players to execute these cheat codes, and user-developed trainers allow similar bypassing for computer software games. Both might make the game easier, give the player additional power-ups, or change the appearance of the game.

Components
To distinguish it from electronic games, a video game is generally considered to require a platform, the hardware which contains computing elements, to process player interaction from some type of input device and display the results on a video output display.

Platform
Video games require a platform, a specific combination of electronic components or computer hardware and associated software, to operate. The term system is also commonly used. Games are typically designed to be played on one or a limited number of platforms, and exclusivity to a platform is used as a competitive edge in the video game market.
However, games may be developed for platforms other than those intended, in which case they are described as ports or conversions. These may also be remasters, where most of the original game's source code is reused and art assets, models, and game levels are updated for modern systems, or remakes, where in addition to asset improvements, significant reworking of the original game, possibly from scratch, is performed. The list below is not exhaustive and excludes other electronic devices capable of playing video games, such as PDAs and graphing calculators.

Computer game
Most computer games are PC games, referring to those that involve a player interacting with a personal computer (PC) connected to a video monitor. Personal computers are not dedicated game platforms, so there may be differences running the same game on different hardware. Also, their openness gives developers certain advantages, such as reduced software cost, increased flexibility, increased innovation, emulation, creation of modifications (mods), and open hosting for online gaming (in which a person plays a video game with people who are in a different household). A gaming computer is a PC or laptop intended specifically for gaming, typically using high-performance, high-cost components. In addition to personal computer gaming, there are also games that work on mainframe computers and other similarly shared systems, with users logging in remotely to use the computer.

Home console
A console game is played on a home console, a specialized electronic device that connects to a common television set or composite video monitor. Home consoles are specifically designed to play games using a dedicated hardware environment, giving developers a concrete hardware target for development and assurances of what features will be available, simplifying development compared to PC game development. Usually consoles only run games developed for them, or games from other platforms made by the same company, but never games developed by a direct competitor, even if the same game is available on different platforms. A console often comes with a specific game controller. Major console platforms include Xbox, PlayStation, and Nintendo.

Handheld console
A handheld gaming device is a small, self-contained electronic device that is portable and can be held in a user's hands. It features the console, a small screen, speakers, and buttons, a joystick, or other game controllers in a single unit. Like consoles, handhelds are dedicated platforms and share almost the same characteristics. Handheld hardware is usually less powerful than PC or console hardware. Some handheld games from the late 1970s and early 1980s could only play one game. In the 1990s and 2000s, a number of handheld games used cartridges, which enabled them to play many different games. The handheld console has waned in the 2010s as mobile device gaming has become a more dominant factor.

Arcade video game
An arcade video game generally refers to a game played on an even more specialized type of electronic device that is typically designed to play only one game and is encased in a special, large coin-operated cabinet which has one built-in console, controllers (joystick, buttons, etc.), a CRT screen, and an audio amplifier and speakers. Arcade games often have brightly painted logos and images relating to the theme of the game.
While most arcade games are housed in a vertical cabinet, which the user typically stands in front of to play, some arcade games use a tabletop approach, in which the display screen is housed in a table-style cabinet with a see-through table top. With table-top games, the users typically sit to play. In the 1990s and 2000s, some arcade games offered players a choice of multiple games. In the 1980s, video arcades were businesses in which game players could use a number of arcade video games. In the 2010s, there are far fewer video arcades, but some movie theaters and family entertainment centers still have them. Browser game A browser game takes advantage of the standardization of web browser technologies across multiple devices, providing a cross-platform environment. These games may be identified based on the website on which they appear, such as with Miniclip games. Others are named based on the programming platform used to develop them, such as Java and Flash games. Mobile game With the introduction of smartphones and tablet computers standardized on the iOS and Android operating systems, mobile gaming has become a significant platform. These games may utilize unique features of mobile devices that are not necessarily present on other platforms, such as accelerometers, global positioning information, and camera devices to support augmented reality gameplay. Cloud gaming Cloud gaming requires a minimal hardware device, such as a basic computer, console, laptop, or mobile phone, or even a dedicated hardware device, connected to a display, with good Internet connectivity that connects to hardware systems run by the cloud gaming provider. The game is computed and rendered on the remote hardware, using a number of predictive methods to reduce the network latency between player input and output on their display device. For example, the Xbox Cloud Gaming and PlayStation Now platforms use dedicated custom server blade hardware in cloud computing centers. Virtual reality Virtual reality (VR) games generally require players to use a special head-mounted unit that provides stereoscopic screens and motion tracking to immerse a player within a virtual environment that responds to their head movements. Some VR systems include control units for the player's hands to provide a direct way to interact with the virtual world. VR systems generally require a separate computer, console, or other processing device that couples with the head-mounted unit. Emulation An emulator enables games from a console or otherwise different system to be run in a type of virtual machine on a modern system, simulating the hardware of the original and allowing old games to be played. While emulators themselves have been found to be legal in United States case law, the act of obtaining the game software that one does not already own may violate copyrights. However, there are some official releases of emulated software from game manufacturers, such as Nintendo with its Virtual Console or Nintendo Switch Online offerings. Backward compatibility Backward compatibility is similar in nature to emulation in that older games can be played on newer platforms, but typically directly through hardware and built-in software within the platform. For example, the PlayStation 2 is capable of playing original PlayStation games simply by inserting the original game media into the newer console, while Nintendo's Wii could play Nintendo GameCube titles in the same manner.
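To make the emulation approach described above concrete, the following is a minimal sketch of the fetch-decode-execute loop that emulators are built around. It is written in Python against a made-up three-instruction machine; the instruction set, class, and method names are illustrative assumptions, not any real console's architecture or any actual emulator's API.

```python
# Minimal sketch of an emulator's core loop: fetch an instruction from
# the game program, decode it, and execute it against emulated state.
# The three-instruction machine below is hypothetical, not a real console.

class TinyEmulator:
    def __init__(self, rom):
        self.rom = rom       # the game program, e.g. a cartridge dump
        self.pc = 0          # program counter into the ROM
        self.acc = 0         # a single emulated register
        self.running = True

    def step(self):
        opcode, operand = self.rom[self.pc]   # fetch
        self.pc += 1
        if opcode == "LOAD":                  # decode and execute
            self.acc = operand
        elif opcode == "ADD":
            self.acc += operand
        elif opcode == "HALT":
            self.running = False

    def run(self):
        while self.running:
            self.step()
        return self.acc

# A tiny "ROM" computing 2 + 3. A real emulator would also model the
# original video, audio, and input hardware, with cycle-accurate timing.
rom = [("LOAD", 2), ("ADD", 3), ("HALT", None)]
print(TinyEmulator(rom).run())  # prints 5
```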
Game media Early arcade games, home consoles, and handheld games were dedicated hardware units with the game's logic built into the electronic componentry of the hardware. Since then, most video game platforms are considered programmable, having means to read and play multiple games distributed on different types of media or formats. Physical formats include ROM cartridges, magnetic storage including magnetic tape data storage and floppy disks, optical media formats including CD-ROM and DVDs, and flash memory cards. Furthermore, digital distribution over the Internet or other communication methods, as well as cloud gaming, alleviates the need for any physical media. In some cases, the media serves as the direct read-only memory for the game, or it may take the form of installation media that is used to write the main assets to the player's platform's local storage for faster loading times and later updates. Games can be extended with new content and software patches through either expansion packs, which are typically available as physical media, or downloadable content, nominally available via digital distribution. These can be offered freely or can be used to monetize a game following its initial release. Several games offer players the ability to create user-generated content to share with others to play. Other games, mostly those on personal computers, can be extended with user-created modifications or mods that alter or add onto the game; these often are unofficial, developed by players through reverse engineering of the game, but other games provide official support for modding. Input device Video games can use several types of input devices to translate human actions to a game. Most common is the use of game controllers such as gamepads and joysticks for most consoles, and as accessories for personal computer systems alongside keyboard and mouse controls. Common controls on the most recent controllers include face buttons, shoulder triggers, analog sticks, and directional pads ("d-pads"). Consoles typically include standard controllers which are shipped or bundled with the console itself, while peripheral controllers are available as a separate purchase from the console manufacturer or third-party vendors. Similar control sets are built into handheld consoles and onto arcade cabinets. Newer technology improvements have incorporated additional technology into the controller or the game platform, such as touchscreens and motion detection sensors that give more options for how the player interacts with the game. Specialized controllers may be used for certain genres of games, including racing wheels, light guns, and dance pads. Digital cameras and motion detection can capture movements of the player as input into the game, which can, in some cases, effectively eliminate the controller, and on other systems, such as virtual reality, are used to enhance immersion into the game. Display and output By definition, all video games are intended to output graphics to an external video display, such as cathode-ray tube televisions, newer liquid-crystal display (LCD) televisions and built-in screens, projectors, or computer monitors, depending on the type of platform the game is played on. Features such as color depth, refresh rate, frame rate, and screen resolution are a combination of the limitations of the game platform and display device and the program efficiency of the game itself.
The game's output can range from fixed displays using LED or LCD elements and text-based games to two-dimensional and three-dimensional graphics and augmented reality displays. The game's graphics are often accompanied by sound produced by internal speakers on the game platform or external speakers attached to the platform, as directed by the game's programming. This often includes sound effects tied to the player's actions to provide audio feedback, as well as background music for the game. Some platforms support additional feedback mechanics to the player that a game can take advantage of. This is most commonly haptic technology built into the game controller, such as causing the controller to shake in the player's hands to simulate an earthquake occurring in game. Classifications Video games are frequently classified by a number of factors related to how one plays them. Genre A video game, like most other forms of media, may be categorized into genres. However, unlike film or television, which use visual or narrative elements, video games are generally categorized into genres based on their gameplay interaction, since this is the primary means by which one interacts with a video game. The narrative setting does not impact gameplay; a shooter game is still a shooter game, regardless of whether it takes place in a fantasy world or in outer space. An exception is the horror game genre, used for games that are based on narrative elements of horror fiction, the supernatural, and psychological horror. Genre names are normally self-describing in terms of the type of gameplay, such as action game, role-playing game, or shoot 'em up, though some genres have derivations from influential works that have defined that genre, such as roguelikes from Rogue, Grand Theft Auto clones from Grand Theft Auto III, and battle royale games from the film Battle Royale. The names may shift over time as players, developers and the media come up with new terms; for example, first-person shooters were originally called "Doom clones" based on the 1993 game. A hierarchy of game genres exists, with top-level genres like "shooter game" and "action game" that broadly capture the game's main gameplay style, and several subgenres of specific implementation, such as, within the shooter game genre, first-person shooters and third-person shooters. Some cross-genre types also exist that fall under multiple top-level genres, such as the action-adventure game. Mode A video game's mode describes how many players can use the game at the same time. This is primarily distinguished by single-player video games and multiplayer video games. Within the latter category, multiplayer games can be played in a variety of ways, including locally on the same device, on separate devices connected through a local network such as LAN parties, or online via separate Internet connections. Most multiplayer games are based on competitive gameplay, but many offer cooperative and team-based options as well as asymmetric gameplay. Online games use server structures that can also enable massively multiplayer online games (MMOs) to support hundreds of players at the same time. A small number of video games are zero-player games, in which the player has very limited interaction with the game itself. These are most commonly simulation games where the player may establish a starting state and then let the game proceed on its own, watching the results as a passive observer, such as with many computerized simulations of Conway's Game of Life.
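As a concrete illustration of such a zero-player game, the sketch below implements Conway's Game of Life in a few lines of Python. It is a minimal illustration written for this article, not any particular product's code: the player only chooses the starting cells, and the fixed rules generate everything that follows.

```python
from collections import Counter

def step(live_cells):
    """One Game of Life generation: a live cell survives with 2 or 3
    live neighbours; a dead cell becomes live with exactly 3."""
    neighbour_counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live_cells
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {
        cell
        for cell, n in neighbour_counts.items()
        if n == 3 or (n == 2 and cell in live_cells)
    }

# A "blinker": the only player input is this starting state; the
# simulation then oscillates on its own with no further interaction.
cells = {(1, 0), (1, 1), (1, 2)}
for generation in range(3):
    cells = step(cells)
    print(generation, sorted(cells))
```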
Intent Most video games are created for entertainment purposes, a category otherwise called "core games". There is a subset of games developed for additional purposes beyond entertainment. These include: Casual games Casual games are designed for ease of accessibility, with simple-to-understand gameplay and quick-to-grasp rule sets, and are aimed at a mass-market audience, as opposed to a hardcore game. They frequently support the ability to jump in and out of play on demand, such as during commuting or lunch breaks. Numerous browser and mobile games fall into the casual game area, and casual games often are from genres with low-intensity game elements such as match three, hidden object, time management, and puzzle games. Casual games frequently use social-network game mechanics, where players can enlist the help of friends on their social media networks for extra turns or moves each day. Popular casual games include Tetris and Candy Crush Saga. More recently, starting in the late 2010s, hyper-casual games have emerged, which use even more simplistic rules for short but infinitely replayable games, such as Flappy Bird. Educational games Education software has been used in homes and classrooms to help teach children and students, and video games have been similarly adapted for these reasons, all designed to provide a form of interactivity and entertainment tied to game design elements. There are a variety of differences in their designs and how they educate the user. These are broadly split between edutainment games, which tend to focus on entertainment value and rote learning but are unlikely to engage critical thinking, and educational video games, which are geared towards problem solving through motivation and positive reinforcement while downplaying the entertainment value. Examples of educational games include The Oregon Trail and the Carmen Sandiego series. Further, games not initially developed for educational purposes have found their way into the classroom after release, such as those that feature open worlds or virtual sandboxes, like Minecraft, or those that offer critical thinking skills through puzzle gameplay, like SpaceChem. Serious games Further extending from educational games, serious games are those where the entertainment factor may be augmented, overshadowed, or even eliminated by other purposes for the game. Game design is used to reinforce the non-entertainment purpose of the game, such as using video game technology for the game's interactive world, or gamification for reinforcement training. Educational games are a form of serious games, but other types of serious games include fitness games that incorporate significant physical exercise to help keep the player fit (such as Wii Fit), flight simulators that simulate piloting commercial and military aircraft (such as Microsoft Flight Simulator), advergames that are built around the advertising of a product (such as Pepsiman), and newsgames aimed at conveying a specific advocacy message (such as NarcoGuerra). Art game Though video games have been considered an art form on their own, games may be developed to try to purposely communicate a story or message, using the medium as a work of art. These art or arthouse games are designed to generate emotion and empathy from the player by challenging societal norms and offering critique through the interactivity of the video game medium. They may not have any type of win condition and are designed to let the player explore the game world and its scenarios.
Most art games are indie games in nature, designed around personal experiences or stories by a single developer or small team. Examples of art games include Passage, Flower, and That Dragon, Cancer. Content rating Video games can be subject to national and international content rating requirements. As with film content ratings, video game ratings typically identify the target age group that the national or regional ratings board believes is appropriate for the player, ranging from all ages, to teenager and older, to mature, to the infrequent adults-only games. Most content review is based on the level of violence, both in the type of violence and how graphically it may be represented, and on sexual content, but other themes such as drug and alcohol use and gambling that can influence children may also be identified. A primary identifier based on a minimum age is used by nearly all systems, along with additional descriptors to identify specific content that players and parents should be aware of. The regulations vary from country to country but generally are voluntary systems upheld by vendor practices, with penalties and fines issued by the ratings body on the video game publisher for misuse of the ratings. The major content rating systems include: Entertainment Software Rating Board (ESRB), which oversees games released in the United States. ESRB ratings are voluntary and run along a scale of E (Everyone), E10+ (Everyone 10 and older), T (Teen), M (Mature), and AO (Adults Only). Attempts to mandate video game ratings in the U.S. subsequently led to the landmark Supreme Court case, Brown v. Entertainment Merchants Association, in 2011, which ruled that video games were a protected form of art, a key victory for the video game industry. Pan European Game Information (PEGI), covering the United Kingdom, most of the European Union, and other European countries, replacing previous national-based systems. The PEGI system rates content based on minimum recommended ages, which include 3+, 7+, 12+, 16+, and 18+. Australian Classification Board (ACB), which oversees the ratings of games and other works in Australia, using ratings of G (General), PG (Parental Guidance), M (Mature), MA15+ (Mature Accompanied), R18+ (Restricted), and X (Restricted for pornographic material). The ACB can also refuse to give a rating to a game (RC – Refused Classification). The ACB's ratings are enforceable by law, and importantly, games cannot be imported or purchased digitally in Australia if they have failed to gain a rating or were given the RC rating, leading to a number of notable banned games. Computer Entertainment Rating Organization (CERO), which rates games for Japan. Its ratings include A (all ages), B (12 and older), C (15 and over), D (17 and over), and Z (18 and over). Additionally, the major content rating system providers have worked to create the International Age Rating Coalition (IARC), a means to streamline and align the content rating systems between different regions, so that a publisher need only complete the content ratings review for one provider and use the IARC process to affirm the content rating for all other regions. Certain nations have even more restrictive rules related to political or ideological content. Within Germany, until 2018, the Unterhaltungssoftware Selbstkontrolle (Entertainment Software Self-Regulation) would refuse to classify, and thus disallow the sale of, any game depicting Nazi imagery, often requiring developers to replace such imagery with fictional substitutes.
This ruling was relaxed in 2018 to allow such imagery for "social adequacy" purposes, as applied to other works of art. China's video game segment is mostly isolated from the rest of the world due to the government's censorship, and all games published there must adhere to strict government review, disallowing content such as smearing the image of the Chinese Communist Party. Foreign games published in China often require modification by developers and publishers to meet these requirements. Development Video game development and authorship, much like any other form of entertainment, is frequently a cross-disciplinary field. Video game developers, as employees within this industry are commonly known, primarily include programmers and graphic designers. Over the years this has expanded to include almost every type of skill that one might see prevalent in the creation of any movie or television program, including sound designers, musicians, and other technicians, as well as roles specific to video games, such as the game designer. All of these are managed by producers. In the early days of the industry, it was more common for a single person to manage all of the roles needed to create a video game. As platforms have become more complex and powerful in the type of material they can present, larger teams have been needed to generate all of the art, programming, cinematography, and more. This is not to say that the age of the "one-man shop" is gone, as this is still sometimes found in the casual gaming and handheld markets, where smaller games are prevalent due to technical limitations such as limited RAM or lack of dedicated 3D graphics rendering capabilities on the target platform (e.g., some PDAs). Video games are programmed like any other piece of computer software. Prior to the mid-1970s, arcade and home consoles were programmed by assembling discrete electro-mechanical components on circuit boards, which limited games to relatively simple logic. By 1975, low-cost microprocessors were available in volume to be used for video game hardware, which allowed game developers to program more detailed games, widening the scope of what was possible. Ongoing improvements in computer hardware technology have expanded what has become possible to create in video games, coupled with convergence of common hardware between console, computer, and arcade platforms to simplify the development process. Today, game developers have a number of commercial and open-source tools available to make games, which often work across multiple platforms to support portability, or they may still opt to create their own for more specialized features and direct control of the game. Today, many games are built around a game engine that handles the bulk of the game's logic, gameplay, and rendering. These engines can be augmented with specialized engines for specific features, such as a physics engine that simulates the physics of objects in real-time. A variety of middleware exists to help developers access other features, such as playback of videos within games, network-oriented code for games that communicate via online services, matchmaking for online games, and similar features. These features can be used from a developer's programming language of choice, or they may opt to also use game development kits that minimize the amount of direct programming they have to do but can also limit the amount of customization they can add into a game.
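At the heart of the game engines described above is a game loop that repeatedly gathers input, advances the simulation, and renders a frame. The sketch below shows that structure in Python with a fixed simulation timestep; the class and method names are placeholders chosen for illustration, not any real engine's API.

```python
import time

TIMESTEP = 1 / 60.0   # advance the game logic 60 times per second

class DemoGame:
    """Stand-in for an engine's game object; every method is a placeholder."""
    def __init__(self, frames):
        self.running = True
        self.updates = 0
        self.frames = frames

    def process_input(self):
        pass                      # a real engine polls controllers here

    def update(self, dt):
        self.updates += 1         # physics and game logic advance by dt

    def render(self):
        self.frames -= 1          # a real engine draws the frame here
        if self.frames <= 0:
            self.running = False

def game_loop(game):
    previous = time.monotonic()
    lag = 0.0
    while game.running:
        now = time.monotonic()
        lag += now - previous
        previous = now
        game.process_input()
        while lag >= TIMESTEP:    # fixed-rate simulation steps
            game.update(TIMESTEP)
            lag -= TIMESTEP
        game.render()             # rendering runs as fast as it can

game_loop(DemoGame(frames=100))
```

Decoupling the fixed-rate update from rendering in this way is a common design choice because it keeps physics and gameplay deterministic even when the frame rate varies.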
Like all software, video games usually undergo quality testing before release to assure there are no bugs or glitches in the product, though frequently developers will release patches and updates. With the growth of the size of development teams in the industry, the problem of cost has increased. Development studios need the best talent, while publishers reduce costs to maintain profitability on their investment. Typically, a video game console development team ranges from 5 to 50 people, and some exceed 100. In May 2009, Assassin's Creed II was reported to have a development staff of 450. The growth of team size combined with greater pressure to get completed projects into the market to begin recouping production costs has led to a greater occurrence of missed deadlines, rushed games and the release of unfinished products. While amateur and hobbyist game programming had existed since the late 1970s with the introduction of home computers, a newer trend since the mid-2000s is indie game development. Indie games are made by small teams outside any direct publisher control; their games are smaller in scope than those from the larger "AAA" game studios and often experiment with gameplay and art style. Indie game development is aided by the larger availability of digital distribution, including the newer mobile gaming market, and readily available, low-cost development tools for these platforms. Game theory and studies Although departments of computer science have been studying the technical aspects of video games for years, theories that examine games as an artistic medium are a relatively recent development in the humanities. The two most visible schools in this emerging field are ludology and narratology. Narrativists approach video games in the context of what Janet Murray calls "Cyberdrama". That is to say, their major concern is with video games as a storytelling medium, one that arises out of interactive fiction. Murray puts video games in the context of the Holodeck, a fictional piece of technology from Star Trek, arguing for the video game as a medium in which the player is allowed to become another person, and to act out in another world. This image of video games received early widespread popular support, and forms the basis of films such as Tron, eXistenZ and The Last Starfighter. Ludologists break sharply and radically from this idea. They argue that a video game is first and foremost a game, which must be understood in terms of its rules, interface, and the concept of play that it deploys. Espen J. Aarseth argues that, although games certainly have plots, characters, and aspects of traditional narratives, these aspects are incidental to gameplay. For example, Aarseth is critical of the widespread attention that narrativists have given to the heroine of the game Tomb Raider, saying that "the dimensions of Lara Croft's body, already analyzed to death by film theorists, are irrelevant to me as a player, because a different-looking body would not make me play differently... When I play, I don't even see her body, but see through it and past it." Simply put, ludologists reject traditional theories of art because they claim that the artistic and socially relevant qualities of a video game are primarily determined by the underlying set of rules, demands, and expectations imposed on the player. While many games rely on emergent principles, video games commonly present simulated story worlds where emergent behavior occurs within the context of the game.
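A toy simulation makes this idea of emergence concrete, and anticipates the racing-game example discussed below. In the illustrative Python sketch that follows (invented for this article, not taken from any actual game), each car follows a single local rule (hold back when too close to whatever is ahead), and a traffic-jam-like queue builds up behind an obstacle even though no code ever mentions a traffic jam.

```python
# Emergent behavior from one local rule: each car moves forward unless
# it is too close to the car (or obstacle) ahead, in which case it waits.
# No rule describes a traffic jam, yet a queue emerges behind the obstacle.

SAFE_GAP = 2.0     # minimum spacing a car tries to keep
SPEED = 1.0        # distance covered per tick on a clear road
OBSTACLE = 50.0    # position of a blockage on the track

cars = [40.0, 36.0, 32.0, 28.0]   # positions, ordered front to back
for tick in range(15):
    for i, pos in enumerate(cars):
        ahead = OBSTACLE if i == 0 else cars[i - 1]
        if ahead - pos > SAFE_GAP:
            cars[i] = pos + SPEED   # clear road: drive at full speed
        # otherwise hold position to avoid a collision
    print(tick, cars)               # watch the queue form near 50.0
```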
The term "emergent narrative" has been used to describe how, in a simulated environment, storyline can be created simply by "what happens to the player." However, emergent behavior is not limited to sophisticated games. In general, any place where event-driven instructions occur for AI in a game, emergent behavior will exist. For instance, take a racing game in which cars are programmed to avoid crashing, and they encounter an obstacle in the track: the cars might then maneuver to avoid the obstacle causing the cars behind them to slow and/or maneuver to accommodate the cars in front of them and the obstacle. The programmer never wrote code to specifically create a traffic jam, yet one now exists in the game. Intellectual property for video games Most commonly, video games are protected by copyright, though both patents and trademarks have been used as well. Though local copyright regulations vary to the degree of protection, video games qualify as copyrighted visual-audio works, and enjoy cross-country protection under the Berne Convention. This typically only applies to the underlying code, as well as to the artistic aspects of the game such as its writing, art assets, and music. Gameplay itself is generally not considered copyrightable; in the United States among other countries, video games are considered to fall into the idea–expression distinction in that it is how the game is presented and expressed to the player that can be copyrighted, but not the underlying principles of the game. Because gameplay is normally ineligible for copyright, gameplay ideas in popular games are often replicated and built upon in other games. At times, this repurposing of gameplay can be seen as beneficial and a fundamental part of how the industry has grown by building on the ideas of others. For example Doom (1993) and Grand Theft Auto III (2001) introduced gameplay that created popular new game genres, the first-person shooter and the Grand Theft Auto clone, respectively, in the few years after their release. However, at times and more frequently at the onset of the industry, developers would intentionally create video game clones of successful games and game hardware with few changes, which led to the flooded arcade and dedicated home console market around 1978. Cloning is also a major issue with countries that do not have strong intellectual property protection laws, such as within China. The lax oversight by China's government and the difficulty for foreign companies to take Chinese entities to court had enabled China to support a large grey market of cloned hardware and software systems. The industry remains challenged to distinguish between creating new games based on refinements of past successful games to create a new type of gameplay, and intentionally creating a clone of a game that may simply swap out art assets. Industry History The early history of the video game industry, following the first game hardware releases and through 1983, had little structure. Video games quickly took off during the golden age of arcade video games from the late 1970s to early 1980s, but the newfound industry was mainly composed of game developers with little business experience. This led to numerous companies forming simply to create clones of popular games to try to capitalize on the market. Due to loss of publishing control and oversaturation of the market, the North American market crashed in 1983, dropping from revenues of around in 1983 to by 1985. 
Many of the North American companies created in the prior years closed down. Japan's growing game industry was briefly shocked by this crash but had sufficient longevity to withstand the short-term effects, and Nintendo helped to revitalize the industry with the release of the Nintendo Entertainment System in North America in 1985. Along with it, Nintendo established a number of core industry practices to prevent unlicensed game development and to control game distribution on its platform, methods that continue to be used by console manufacturers today. The industry remained more conservative following the 1983 crash, forming around the concept of publisher-developer dichotomies and, by the 2000s, leading the industry to centralize around low-risk, triple-A games and studios with large development budgets of at least or more. The advent of the Internet brought digital distribution as a viable means to distribute games, and contributed to the growth of riskier, more experimental independent game development as an alternative to triple-A games in the late 2000s, a segment which has continued to grow as a significant portion of the video game industry. Industry roles Video games have a large network effect that draws on many different sectors that tie into the larger video game industry. While video game developers are a significant portion of the industry, other key participants in the market include: Publishers: Companies that generally oversee bringing the game from the developer to market. This often includes performing the marketing, public relations, and advertising of the game. Publishers frequently pay the developers ahead of time to make their games and will be involved in critical decisions about the direction of the game's progress, and then pay the developers additional royalties or bonuses based on sales performance. Other smaller, boutique publishers may simply offer to perform the publishing of a game for a small fee and a portion of the sales, and otherwise leave the developer with the creative freedom to proceed. A range of other publisher-developer relationships exists between these points. Distributors: Publishers often are able to produce their own game media and take the role of distributor, but there are also third-party distributors that can mass-produce game media and distribute to retailers. Digital storefronts like Steam and the iOS App Store also serve as distributors and retailers in the digital space. Retailers: Physical storefronts, which include large online retailers, department and electronics stores, and specialty video game stores, sell games, consoles, and other accessories to consumers. This has also included a trade-in market in certain regions, allowing players to turn in used games for partial refunds or credit towards other games. However, with the rise of digital marketplaces and the e-commerce revolution, retailers have been performing worse than in the past. Hardware manufacturers: The video game console manufacturers produce console hardware, often through a value chain system that includes numerous component suppliers and contract manufacturers that assemble the consoles. Further, these console manufacturers typically require a license to develop for their platform and may control the production of some games, as Nintendo does with the use of game cartridges for its systems. In exchange, the manufacturers may help promote games for their system and may seek console exclusivity for certain games.
For games on personal computers, a number of manufacturers are devoted to high-performance "gaming computer" hardware, particularly in the graphics card area; several of the same companies overlap as component suppliers for consoles. A range of third-party manufacturers also exists to provide equipment and gear for consoles post-sale, such as additional controllers for consoles or carrying cases and gear for handheld devices. Journalism: While journalism around video games used to be primarily print-based, and focused more on post-release reviews and gameplay strategy, the Internet has brought a more proactive press that uses web journalism, covering games in the months prior to release as well as beyond, helping to build excitement for games ahead of release. Influencers: With the rising importance of social media, video game companies have found that the opinions of influencers using streaming media to play through their games have had a significant impact on game sales, and have turned to using influencers alongside traditional journalism as a means to build up attention to their game before release. Esports: Esports are a major facet of several multiplayer games, with numerous professional leagues established since the 2000s and large viewership numbers, particularly out of East Asia since the 2010s. Trade and advocacy groups: Trade groups like the Entertainment Software Association were established to provide a common voice for the industry in response to governmental and other advocacy concerns. They frequently set up the major trade events and conventions for the industry, such as E3. Gamers: The players and consumers of video games, broadly. While their representation in the industry is primarily seen through game sales, many companies follow gamers' comments on social media or on user reviews and engage with them to work to improve their products, in addition to feedback from other parts of the industry. Demographics of the larger player community also impact parts of the market; while once dominated by younger men, the market shifted in the mid-2010s towards women and older players who generally preferred mobile and casual games, leading to further growth in those sectors. Major regional markets The industry itself grew out from both the United States and Japan in the 1970s and 1980s before drawing larger worldwide contributions. Today, the video game industry is predominantly led by major companies in North America (primarily the United States and Canada), Western Europe, and East Asia, including Japan, South Korea, and China. Hardware production remains an area dominated by Asian companies either directly involved in hardware design or part of the production process, but digital distribution and the indie game development of the late 2000s have allowed game developers to flourish nearly anywhere and diversify the field. Game sales According to the market research firm Newzoo, the global video game industry drew estimated revenues of over in 2020. Mobile games accounted for the bulk of this, with a 48% share of the market, followed by console games at 28% and personal computer games at 23%. Sales of different types of games vary widely between countries due to local preferences. Japanese consumers tend to purchase many more handheld games than console games, and especially PC games, with a strong preference for games catering to local tastes.
Another key difference is that, though having declined in the West, arcade games remain an important sector of the Japanese gaming industry. In South Korea, computer games are generally preferred over console games, especially MMORPGs and real-time strategy games. Computer games are also popular in China. Effects on society Culture Video game culture is a worldwide new media subculture formed around video games and game playing. As computer and video games have increased in popularity over time, they have had a significant influence on popular culture. Video game culture has also evolved over time hand in hand with internet culture as well as the increasing popularity of mobile games. Many people who play video games identify as gamers, which can mean anything from someone who enjoys games to someone who is passionate about them. As video games become more social with multiplayer and online capability, gamers find themselves in growing social networks. Gaming can be both entertainment and competition, as a new trend known as electronic sports has become more widely accepted. In the 2010s, video games and discussions of video game trends and topics can be seen in social media, politics, television, film and music. The COVID-19 pandemic during 2020–2021 gave further visibility to video games as a pastime to enjoy with friends and family online as a means of social distancing. Since the mid-2000s there has been debate whether video games qualify as art, primarily because the form's interactivity interferes with the artistic intent of the work and because they are designed for commercial appeal. A significant debate on the matter came after film critic Roger Ebert published an essay, "Video Games can never be art", which challenged the industry to prove him and other critics wrong. The view that video games were an art form was cemented in 2011 when the U.S. Supreme Court ruled in the landmark case Brown v. Entertainment Merchants Association that video games were a protected form of speech with artistic merit. Since then, video game developers have come to use the form more for artistic expression, including the development of art games, and the cultural heritage of video games as works of art, beyond their technical capabilities, has been part of major museum exhibits, including The Art of Video Games at the Smithsonian American Art Museum, which toured other museums from 2012 to 2016. Video games often inspire sequels and other video games within the same franchise, but they have also influenced works outside of the video game medium. Numerous television shows (both animated and live-action), films, comics and novels have been created based on existing video game franchises. Because video games are an interactive medium, there has been trouble in converting them to these passive forms of media, and typically such works have been critically panned or treated as children's media. For example, until 2019, no video game film had ever received a "Fresh" rating on Rotten Tomatoes, but the releases of Detective Pikachu (2019) and Sonic the Hedgehog (2020), both receiving "Fresh" ratings, show signs of the film industry having found an approach to adapt video games for the big screen. That said, some early video game-based films have been highly successful at the box office, such as 1995's Mortal Kombat and 2001's Lara Croft: Tomb Raider.
Since the 2000s, there has also been a greater appreciation of video game music, which ranges from chiptunes composed for limited sound-output devices on early computers and consoles, to fully scored compositions for most modern games. Such music has frequently served as a platform for covers and remixes, and concerts featuring video game soundtracks performed by bands or orchestras, such as Video Games Live, have also become popular. Video games also frequently incorporate licensed music, particularly in the area of rhythm games, furthering the depth to which video games and music can work together. Further, video games can serve as a virtual environment under full control of a producer to create new works. With the capability to render 3D actors and settings in real-time, a new type of work, machinima (short for "machine cinema"), grew out of using video game engines to craft narratives. As video game engines gain higher fidelity, they have also become part of the tools used in more traditional filmmaking. Unreal Engine has been used as a backbone by Industrial Light & Magic for their StageCraft technology for shows like The Mandalorian. Separately, video games are also frequently used as part of the promotion and marketing of other media, such as films, anime, and comics. However, these licensed games of the 1990s and 2000s often had a reputation for poor quality, developed without any input from the intellectual property rights owners, and several of them are considered among lists of games with notably negative reception, such as Superman 64. More recently, with these licensed games being developed by triple-A studios or through studios directly connected to the licensed property owner, there has been a significant improvement in their quality, with Batman: Arkham Asylum an early trendsetting example. Beneficial uses Besides their entertainment value, appropriately designed video games have been seen to provide value in education across several ages and comprehension levels. Learning principles found in video games have been identified as possible techniques with which to reform the U.S. education system. It has been noticed that gamers adopt an attitude while playing that is of such high concentration that they do not realize they are learning, and that if the same attitude could be adopted at school, education would enjoy significant benefits. Students are found to be "learning by doing" while playing video games, which fosters creative thinking. Video games are also believed to be beneficial to the mind and body. It has been shown that action video game players have better hand–eye coordination and visuo-motor skills, such as their resistance to distraction, their sensitivity to information in the peripheral vision and their ability to count briefly presented objects, than nonplayers. Researchers found that such enhanced abilities could be acquired by training with action games, involving challenges that switch attention between different locations, but not with games requiring concentration on single objects. A 2018 systematic review found evidence that video game training had positive effects on cognitive and emotional skills in the adult population, especially with young adults. A 2019 systematic review also added support for the claim that video games are beneficial to the brain, although the beneficial effects of video gaming on the brain differed by video game type.
Organisers of video gaming events, such as the organisers of the D-Lux video game festival in Dumfries, Scotland, have emphasised the positive aspects video games can have on mental health. Organisers, mental health workers and mental health nurses at the event emphasised the relationships and friendships that can be built around video games and how playing games can help people learn about others as a precursor to discussing the person's mental health. A study in 2020 from Oxford University also suggested that playing video games can be a benefit to a person's mental health. The report on 3,274 gamers, all over the age of 18, focused on the games Animal Crossing: New Horizons and Plants vs Zombies: Battle for Neighborville and used actual play-time data. The report found that those who played more games tended to report greater "wellbeing". Also in 2020, computer science professor Regan Mandryk of the University of Saskatchewan said her research also showed that video games can have health benefits such as reducing stress and improving mental health. The university's research studied all age groups – "from pre-literate children through to older adults living in long term care homes" – with a main focus on 18 to 55-year-olds. A study of gamers' attitudes towards gaming, reported on in 2018, found that millennials use video games as a key strategy for coping with stress. In the study of 1,000 gamers, 55% said that it "helps them to unwind and relieve stress ... and half said they see the value in gaming as a method of escapism to help them deal with daily work pressures". Controversies Video games have attracted controversy since the 1970s. Parents and children's advocates have raised concerns that violent video games can influence young players into performing those violent acts in real life, and events such as the Columbine High School massacre in 1999, in which the perpetrators specifically alluded to using video games to plot out their attack, raised further fears. Medical experts and mental health professionals have also raised concerns that video games may be addictive, and the World Health Organization has included "gaming disorder" in the 11th revision of its International Statistical Classification of Diseases. Other health experts, including the American Psychiatric Association, have stated that there is insufficient evidence that video games can create violent tendencies or lead to addictive behavior, though they agree that video games typically use a compulsion loop in their core design that can trigger dopamine release, which can help reinforce the desire to continue playing through that compulsion loop and potentially lead to violent or addictive behavior. Even with case law establishing that video games qualify as a protected art form, there has been pressure on the video game industry to keep its products in check to avoid excessive violence, particularly in games aimed at younger children. The potential addictive behavior around games, coupled with the increased use of post-sale monetization of video games, has also raised concern among parents, advocates, and government officials about gambling tendencies that may come from video games, such as the controversy around the use of loot boxes in many high-profile games.
Numerous other controversies around video games and the industry have arisen over the years. Among the more notable incidents are the 1993 United States Congressional hearings on violent games like Mortal Kombat, which led to the formation of the ESRB ratings system; numerous legal actions taken by attorney Jack Thompson over violent games such as Grand Theft Auto III and Manhunt from 2003 to 2007; the outrage over the "No Russian" level from Call of Duty: Modern Warfare 2 in 2009, which allowed the player to shoot a number of innocent non-player characters at an airport; and the Gamergate harassment campaign in 2014, which highlighted misogyny from a portion of the player demographic. The industry as a whole has also dealt with issues related to gender, racial, and LGBTQ+ discrimination and mischaracterization of these minority groups in video games. A further issue in the industry is related to working conditions, as development studios and publishers frequently use "crunch time", required extended working hours, in the weeks and months ahead of a game's release to assure on-time delivery. Collecting and preservation Players of video games often maintain collections of games. More recently there has been interest in retrogaming, focusing on games from the first decades. Games in retail packaging in good shape have become collectors' items for the early days of the industry, with some rare publications having gone for over as of 2020. Separately, there is also concern about the preservation of video games, as both game media and the hardware to play them degrade over time. Further, many of the game developers and publishers from the first decades no longer exist, so records of their games have disappeared. Archivists and preservationists have worked within the scope of copyright law to save these games as part of the cultural history of the industry. There are many video game museums around the world, including the National Videogame Museum in Frisco, Texas, which serves as the largest museum wholly dedicated to the display and preservation of the industry's most important artifacts. Europe hosts video game museums such as the Computer Games Museum in Berlin and the Museum of Soviet Arcade Machines in Moscow and Saint Petersburg. The Museum of Art and Digital Entertainment in Oakland, California is a dedicated video game museum focusing on playable exhibits of console and computer games. The Video Game Museum of Rome is also dedicated to preserving video games and their history. The International Center for the History of Electronic Games at The Strong in Rochester, New York contains one of the largest collections of electronic games and game-related historical materials in the world, including an exhibit which allows guests to play their way through the history of video games. The Smithsonian Institution in Washington, DC has three video games on permanent display: Pac-Man, Dragon's Lair, and Pong. The Museum of Modern Art has added a total of 20 video games and one video game console to its permanent Architecture and Design Collection since 2012. In 2012, the Smithsonian American Art Museum ran an exhibition on "The Art of Video Games". However, the reviews of the exhibit were mixed, including questioning whether video games belong in an art museum.
See also Lists of video games List of accessories to video games by system Outline of video games Notes References Sources Further reading External links Video games bibliography by the French video game research association Ludoscience The Virtual Museum of Computing (VMoC) Games and sports introduced in 1947 Digital media American inventions
7040357
https://en.wikipedia.org/wiki/Tony%20Coe
Tony Coe
Anthony George Coe (born 29 November 1934) is an English jazz musician who plays clarinet, bass clarinet, and flute, as well as soprano, alto, and tenor saxophones. Career Born in Canterbury, Kent, England, Coe started out on clarinet and was self-taught on tenor saxophone. At just 15 years of age, in 1949, he played in the trad band of his school (Simon Langton Grammar School for Boys) and two years later, aged 17, became a full professional with Joe Daniels. In 1953, aged 18, he joined the army, where he played clarinet in the military band and saxophone with the unit's dance band. After demob in 1955 he spent some time in France with the Micky Bryan Band (Micky Bryan on piano, Gerry Salisbury on valve trombone, Harry Bryan on trumpet, Lennie Hastings on drums, and Coe on clarinet), before rejoining Joe Daniels. In 1957 Tony's father went to see Humphrey Lyttelton and, as a result, Tony spent just over four years with Humphrey's band, from 1957 to the end of 1961. This was a period when Coe was brought to the attention of critics and fans, as well as one that gave him some degree of international fame. He left Lyttelton at the end of 1961 to form his own outfit. In 1965, he was invited to join Count Basie's band ('I'm glad it didn't come off – I would have lasted about a fortnight') and has since played with the John Dankworth Orchestra, the Kenny Clarke-Francy Boland Big Band, Derek Bailey's free improvisation group Company, Stan Tracey, Michael Gibbs, Stan Getz, Dizzy Gillespie, and Bob Brookmeyer, and performed under Pierre Boulez as well as leading a series of groups of his own, including Coe Oxley & Co with drummer Tony Oxley. He played clarinet on Paul McCartney's recording of "I'll Give You a Ring", released in 1982, and saxophone on John Martyn's 1973 album, Solid Air. Coe has also worked with the Matrix, a small ensemble formed by clarinettist Alan Hacker with a wide-ranging repertoire of early, classical, and contemporary music, as well as the Danish Radio Big Band, the Metropole Orchestra, and the Skymasters in the Netherlands. He has worked additionally with the Mike Gibbs big band and the United Jazz and Rock Ensemble. Coe has recorded on soundtracks for several films, including Superman II, Victor/Victoria, Nous irons tous au paradis, Leaving Las Vegas, Le Plus beau métier du monde and The Loss of Sexual Innocence. He also composed the film score for Camomille. Awards and honours In 1976, a grant from the Arts Council enabled him to write Zeitgeist - Based On Poems Of Jill Robin, a large-scale orchestral work fusing jazz and rock elements with techniques from classical music, which was recorded for EMI Records on 29 and 30 July 1976 at Lansdowne Studios in Holland Park, London. In 1995 he received an honorary degree and the Danish Jazzpar Prize.
Discography As leader Swingin' Till the Girls Come Home with the Tony Coe Quintet (Philips, 1962) Tony's Basement with the Lansdowne String Quartet (Columbia, 1967) Sax with Sex (Metronome, 1968) Pop Makes Progress with Robert Farnon (Chapter One, 1970) With Brian Lemon Trio (77 Records, 1971) Zeitgeist: Based on Poems of Jill Robin (EMI, 1977) Coe-Existence (Lee Lambert, 1978) Time with Derek Bailey (Incus, 1979) Get It Together with Al Grey (Pizza Express, 1979) Tournee Du Chat (Nato, 1983) Le Chat Se Retourne (Nato, 1984) Mainly Mancini (Chabada, 1985) Mer De Chine (Nato, 1988) Canterbury Song (Hot House, 1989) Les Voix D'Itxassou (Nato, 1990) Les Sources Bleues with Tony Hymas, Chris Laurence (Nato, 1991) Captain Coe's Famous Racearound with Bob Brookmeyer (Storyville, 1996) In Concert with John Horler, Malcolm Creese (ABCDs, 1997) Jazz Piquant N'oublie Jamais with Tina May (Doz, 1998) Days of Wine and Roses with Alan Barnes (Zephyr, 1998) Street of Dreams with Warren Vaché (Zephyr, 1999) Jumpin with Warren Vaché, Alan Barnes (Zephyr, 1999) Sun, Moon, and Stars with Alan Hacker (Zah Zah, 1999) British-American Blue with Roger Kellaway (Between the Lines, 2000) Dreams with Gerard Presencer, Brian Lemon, Dave Green (Zephyr, 2001) What in the World with Richard Sinclair, David Rees Williams (Sinclair Songs, 2003) More Than You Know with Tina May, Nikki Iles (33 Records, 2004) As sideman With Steve Beresford 1985 Eleven Songs for Doris Day (Chabada) 1988 L'Extraordinaire Jardin De Charles Trenet (Chabada) 1989 Pentimento (Cinenato) 1996 Cue Sheets (Tzadik) With the Kenny Clarke/Francy Boland Big Band (MPS) 1968 Latin Kaleidoscope (MPS) 1968 All Smiles (MPS) 1969 Faces (MPS) 1969 All Blues (MPS) 1969 Fellini 712 (MPS) 1969 More Smiles (MPS) 1969 At Her Majesty's Pleasure 1969 Let's Face the Music and Dance 1969 Live at Ronnie Scott's 1969 Rue Chaptal 1969 Volcano 1971 Off Limits (Polydor) 1971 Change of Scenes with Stan Getz (Verve) 1971 Second Greatest Jazz Big Band in the World (Black Lion) 1973 Big Band Sound of Kenny Clarke & Francy Boland 1975 Open Door (Muse) 1976 November Girl with Carmen McRae (Black Lion) 1976 Live at Ronnie Scotts (MPS) 1988 Meets the Francy Boland Kenny Clark Big Band with Gitte Hænning (veraBra) 1992 Clarke Boland Big Band en Concert avec Europe 1 (Tréma) 1999 Our Kinda Strauss With Georgie Fame 1966 Sound Venture (Columbia) 1967 The Two Faces of Fame (CBS) 1968 The Third Face of Fame (CBS) With Tony Hymas 1988 Flying Fortress (Nato) 1990 Oyate (Nato) 1995 Remake of the American Dream With Franz Koglmann 1990 A White Line (hatART) 1991 The Use of Memory (hatART) 1991 L'Heure Bleue (hatART) 1993 Cantos I-IV (hatART) 1995 We Thought About Duke with Lee Konitz (hatART) 1998 Make Believe 1999 An Affair With Strauss (Between the Lines) 2001 Don't Play Just Be (Between the Lines) 2001 O Moon My Pin-Up (hatOLOGY) 2003 Fear Death by Water (Between the Lines) 2005 Let's Make Love (Between the Lines) 2009 Lo-Lee-Ta: Music on Nabokov With Humphrey Lyttelton 1957 Here's Humph! (Parlophone) 1960 Blues in the Night (Columbia) 1965 Humphrey Lyttelton and His Band 1971 Duke Ellington Classics (Black Lion) 2001 The Humphrey Lyttelton Big Band with Jimmy Rushing 2002 Humph Bruce & Sandy Swing at the BBC 2003 A Night in Oxford Street 2005 Humph Dedicates (Vocalion) 2013 Live at the Nottingham Jazz Festival 1972 (Calligraph) With Mike McGear 1972 Woman 1974 McGear (Warner Bros.) 
With Norma Winstone 1986 Somewhere Called Home 1998 Manhattan in the Rain With others 1966 Black Marigolds, Michael Garrick 1969 Windmill Tilter: Story of Don Quixote, John Dankworth/Kenny Wheeler 1971 Mirrors, Benny Bailey 1972 Bootleg Him!, Alexis Korner 1973 For Girls Who Grow Plump in the Night, Caravan 1973 Labyrinth, Nucleus 1973 Nigel Lived, Murray Head 1973 Solid Air, John Martyn 1974 Krysia, Krysia Kocjan 1974 Living on a Back Street, The Spencer Davis Group 1974 The Road of Silk, Pete Atkin 1975 Floresta Canto, Phil Woods 1975 Only Chrome-Waterfall Orchestra, Mike Gibbs 1976 Terminator, Nick Ingman 1978 A Crazy Steal, The Hollies 1978 Clark After Dark: The Ballad Album, Clark Terry 1979 Harmony of the Spheres, Neil Ardley 1982 Tug of War, Paul McCartney 1983 Visit with the Great Spirit, Bob Moses 1984 Berlin Djungle, Peter Brötzmann 1984 I'm Alright, Loudon Wainwright III 1984 The Mystery of Man, Sarah Vaughan 1989 For Heaven's Sake, Benny Bailey 1994 Jazz Tete a Tete, Tubby Hayes 1994 R.S.V.P., Richard Sinclair 1994 View from the Edge, Theo Travis 1996 Cue Sheets, Steve Beresford 1998 N'Oublie Jamais, Tina May 1998 Ridin' High: The British Sessions 1960–1971, Cleo Laine 1999 Sun Moon & Stars, Alan Hacker 2000 Where But for Caravan Would I?, Caravan 2001 Easy to Remember, Joe Temperley 2002 At the BBC Vol. 2: More Wireless Days, Chris Barber 2002 In the Evening, Sandy Brown 2002 Labyrinth, Ian Carr/Nucleus 2002 Songs for Sandy, Digby Fairweather 2002 Spectral Soprano, Lol Coxhill 2003 Transformations, James Emery/Klangforum Wien/Emilio Pomárico 2006 Dhammapada, John Mayer (composer) 2006 Jazz Icons: Live in '58 & '70, Dizzy Gillespie 2007 Dixie Band Stomp, Joe Daniels 2008 Etudes/Radha Krishna, John Mayer (composer) 2008 Harlem Airshaft: The Music of Duke Ellington, Alan Barnes 2015 A Good Time Was Had By All, Danish Radio Big Band References External links All Music Album Highlights 1934 births Living people Bebop saxophonists Bebop clarinetists Post-bop saxophonists Post-bop clarinetists Hard bop saxophonists Hard bop clarinetists English jazz saxophonists British male saxophonists English jazz clarinetists People from Canterbury People educated at Simon Langton Grammar School for Boys Nucleus (band) members Musicians from Kent 21st-century saxophonists 21st-century clarinetists British male jazz musicians Kenny Clarke/Francy Boland Big Band members Incus Records artists Storyville Records artists 21st-century British male musicians
12200454
https://en.wikipedia.org/wiki/MC%20Router
MC Router
Abedah Ritchie (born Kristin Nicole Ritchie; May 6, 1986) is a former Nerdcore rapper, better known by the stage name MC Router. In 2009 she worked in the Netherlands under the new stage name Krisje before leaving hip hop altogether. Ritchie later converted to Islam, changing her given name to Abedah, and became a translator. Hip hop career Ritchie, the self-proclaimed "First Lady of Nerdcore", founded the group "1337 g33k b34t" with friend Tanner Brown (aka "T-Byte") in 2004. Although the two are still friends and occasionally collaborate musically, the group disbanded in late 2006 so that each could perform as a solo act. Late 2006 also marked the birth of "Tri-forc3", a joint effort between MC Router, Beefy, and Shael Riley. As 2007 began, Router released a new track entitled Trekkie Pride, which is known as "The First Nerdcore Song of 2007". Conversion to Islam Ritchie converted to Islam and changed her given name from Kristin to Abedah, which means "Worshiper of God" in Arabic and is sometimes shortened to "Abby". She said in an interview with Muglatte that before Islam she was a Christian but had never taken religion seriously. She also said that she converted to Islam because she found logic in it. On March 3, 2014, she appeared on the Dr. Phil show with her mother, Darlene, who was concerned about her daughter's new Islamic beliefs. References 1986 births American Muslims American women rappers Living people Nerdcore artists Rappers from Texas 21st-century American rappers 21st-century American women musicians American expatriates in the Netherlands American former Christians Converts to Islam from Christianity
61614937
https://en.wikipedia.org/wiki/Photopea
Photopea
Photopea is a web-based photo and graphics editor that can work with both raster and vector graphics. It can be used for image editing, making illustrations, web design or converting between different image formats. Photopea is advertising-supported software. It is compatible with all modern web browsers, including Opera, Edge, Chrome, and Firefox. The app is compatible with Photoshop's PSD as well as JPEG, PNG, DNG, GIF, SVG, PDF and other image file formats. While browser-based, Photopea stores all files locally and does not upload any data to a server. Photopea is often considered a free alternative to Adobe Photoshop, albeit with fewer features. Features Photopea offers a wide variety of image editing tools, including spot healing, a clone stamp, a healing brush, and a patch tool. The software supports layers, layer masks, channels, selections, paths, smart objects, layer styles, text layers, filters and vector shapes. Reception Photopea has received positive coverage due to its similarities to Adobe Photoshop in design and workflow, making it an easier program to use for those trained in Photoshop, compared to other free raster image editors such as GIMP. See also Comparison of raster graphics editors Adobe Photoshop GIMP Krita References External links Official website Photopea Blog Photopea review Photopea reviews Adware Cross-platform software Photo software Web applications 2013 software Graphics software Proprietary cross-platform software Raster graphics editors Vector graphics editors
285435
https://en.wikipedia.org/wiki/Gentoo%20penguin
Gentoo penguin
The gentoo penguin (Pygoscelis papua) is a penguin species (or possibly a species complex) in the genus Pygoscelis, most closely related to the Adélie penguin (P. adeliae) and the chinstrap penguin (P. antarcticus). The earliest scientific description was made in 1781 by Johann Reinhold Forster with a type locality in the Falkland Islands. The species calls in a variety of ways, but the most frequently heard is a loud trumpeting, which the bird emits with its head thrown back. Names The application of "gentoo" to the penguin is unclear. Gentoo was an Anglo-Indian term to distinguish Hindus from Muslims. The English term may have originated from the Portuguese gentio ("pagan, gentile"). Some speculate that the white patch on the bird's head was thought to resemble a turban. It may also be a variation of another name for this bird, "Johnny penguin", the Spanish form of Johnny sounding vaguely like gentoo. The Johnny rook, a predator, is likely named after the Johnny penguin. The specific name papua is a misnomer; in the original description, Johann Reinhold Forster, a naturalist who had circumnavigated the world with Captain James Cook, mistakenly assumed that the species occurred in Papua (New Guinea), the closest gentoos actually being over 6000 km to the south (on Macquarie Island). No penguins are found in New Guinea. Others trace the error to a "possibly fraudulent claim" in 1776 by French naturalist Pierre Sonnerat, who also alleged a Papuan location for the king penguin despite never having been to the island himself. Taxonomy The gentoo penguin is one of three species in the genus Pygoscelis. Mitochondrial and nuclear DNA evidence suggests the genus split from other penguins around 38 million years ago (Mya), about 2 million years after the ancestors of the genus Aptenodytes. In turn, the Adélie penguins split off from the other members of the genus around 19 Mya, and the chinstrap and gentoo finally diverged around 14 Mya. Two subspecies of this penguin are recognised: P. p. papua (subantarctic gentoo) and the smaller P. p. ellsworthi (Antarctic gentoo). A recent study suggests that the gentoo penguin should be split into a species complex of four morphologically similar but separate species: the northern gentoo penguin (P. papua sensu stricto), the southern gentoo penguin (P. ellsworthi), the eastern gentoo penguin (P. taeniata), and the newly described South Georgia gentoo penguin (P. poncetii). Description The gentoo penguin is easily recognized by the wide, white stripe extending like a bonnet across the top of its head and its bright orange-red bill. It has pale whitish-pink, webbed feet and a fairly long tail – the most prominent tail of all penguin species. Chicks have grey backs with white fronts. As the gentoo penguin waddles along on land, its tail sticks out behind, sweeping from side to side, hence the scientific name Pygoscelis, which means "rump-tailed". Gentoo penguins can reach a height of , making them the third-largest species of penguin after the emperor penguin and the king penguin. Males have a maximum weight around just before molting, and a minimum weight of about just before mating. For females, the maximum weight is just before molting, but their weight drops to as little as when guarding the chicks in the nest. Birds from the north are on average heavier and taller than the southern birds. Southern gentoo penguins reach in length. They are the fastest underwater swimmers of all penguins, reaching speeds up to .
Gentoos are well adapted to extremely cold and harsh climates. Breeding The breeding colonies of gentoo penguins are located on ice-free surfaces. Colonies can be located directly on the shoreline or considerably inland. They prefer shallow coastal areas, and often nest between tufts of grass. In South Georgia, for example, breeding colonies are 2 km inland. In colonies farther inland, where the penguins nest in grassy areas, they shift location slightly every year because the grass becomes trampled over time. Gentoos breed on many subantarctic islands. The main colonies are on the Falkland Islands, South Georgia and the South Sandwich Islands, and Kerguelen Islands; smaller colonies are found on Macquarie Island, Heard Islands, Crozet Islands, South Shetland Islands, and the Antarctic Peninsula. The total breeding population is estimated to be over 600,000 birds. Gentoos breed monogamously, and infidelity is typically punished with banishment from the colony. Nests are usually made from a roughly circular pile of stones and can be quite large, high and in diameter. The stones are jealously guarded, and their ownership can be the subject of noisy disputes and physical attacks between individuals. They are also prized by the females, even to the point that a male penguin can obtain the favors of a female by offering her a choice stone. Two eggs are laid, both weighing around . The parents share incubation, changing duty daily. The eggs hatch after 34 to 36 days. The chicks remain in the nests for around 30 days before joining other chicks in the colony and forming crèches. The chicks molt into subadult plumage and go out to sea at around 80 to 100 days. Diet Gentoos live mainly on crustaceans, such as krill, with fish making up only about 15% of the diet. They are, however, opportunistic feeders, and around the Falklands are known to take roughly equal proportions of fish (Patagonotothen sp., Thysanopsetta naresi, Micromesistius australis), squat lobsters (Munida gregaria), and squid (Loligo gahi, Gonatus antarcticus, and Moroteuthis ingens). Physiology The gentoos' diet is high in salt, as they eat organisms with roughly the same salinity as seawater, which can lead to complications associated with high sodium concentrations in the body, especially for gentoo chicks. To counteract this, gentoos, as well as many other marine bird species, have a highly developed salt gland located above their eyes that removes the excess sodium from the body and produces a highly concentrated saline solution that drips out of the body from the tip of the beak. Gentoo penguins do not store as much fat as Adélie penguins, their closest relatives; gentoos require less energy investment when hunting because the net gain of energy after hunting is greater in gentoos than in Adélies. As embryos, gentoos require a lot of energy to develop. Oxygen consumption is high for a developing gentoo embryo. As the embryo grows and requires more oxygen, consumption increases exponentially until the gentoo chick hatches. By then, the chick is consuming around 1800 ml of O2 per day. Predators In the sea, leopard seals, sea lions, and killer whales are all predators of the gentoo. On land, no predators of full-grown, healthy gentoo penguins exist. Skuas and giant petrels regularly kill many chicks and steal eggs; petrels kill injured and sick adult gentoos. Various other seabirds, such as the kelp gull and snowy sheathbill, also snatch chicks and eggs.
Skuas on King George Island have been observed attacking and injuring adult gentoo penguins in apparent territorial disputes. Conservation status The IUCN Red List lists the gentoo as least concern with a stable population trend, although rapid declines in some key areas are believed to be driving a moderate overall decline in the species' population. Examples include Bird Island, South Georgia, where the population has fallen by two-thirds over 25 years. Many threats to this species still exist, including pollution, hunting, fishing, and human recreational activities that continue to affect them. Influence The Linux distribution Gentoo Linux is named after the gentoo penguin. This is a nod to the fact that the gentoo is the fastest-swimming penguin, as Gentoo Linux aims to be a high-performance operating system. Gallery References External links 70South – more info on the gentoo penguin Gentoo penguin on PenguinWorld Gentoo penguins from the International Penguin Conservation website www.pinguins.info: information about all species of penguins Gentoo penguin images Gentoo penguin webcam from the Antarctic – worldwide first webcam with wild penguins; photo quality gentoo penguin Birds of Antarctica Birds of islands of the Atlantic Ocean Birds of subantarctic islands Birds of the Indian Ocean Birds of the Southern Ocean Fauna of Heard Island and McDonald Islands Fauna of subantarctic islands Fauna of the Crozet Islands Fauna of the Prince Edward Islands Flightless birds Macquarie Island gentoo penguin gentoo penguin
5710697
https://en.wikipedia.org/wiki/Innovation%20Quarter
Innovation Quarter
Innovation Quarter in Winston-Salem, North Carolina, formerly Wake Forest Innovation Quarter, is an innovation district focused on research, business and education in biomedical science, information technology, digital media, clinical services and advanced materials. The Innovation Quarter, operated by Wake Forest Baptist Medical Center, is home to academic groups, private companies and other organizations located on 330 acres in downtown Winston-Salem. Its tenants include departments from five academic institutions—Wake Forest School of Medicine, Wake Forest University, Forsyth Technical Community College, Winston-Salem State University, UNC School of the Arts—as well as private businesses and other organizations. One tenant is the Wake Forest Institute for Regenerative Medicine (WFIRM), which is working to engineer more than 30 different replacement tissues and organs and to develop healing cell therapies. The science and research conducted at WFIRM are behind two start-up companies at Innovation Quarter. The ability of researchers and scientists to work alongside entrepreneurs furthers a goal of Innovation Quarter to develop new treatments and cures for disease and advances in technology. History and Growth The idea of a research park in Winston-Salem was a community-wide effort that began in the early 1990s in the wake of R. J. Reynolds Tobacco Company closing many of its former downtown warehouse and manufacturing buildings. Wake Forest School of Medicine's Department of Physiology and Pharmacology moved into one former Reynolds warehouse in 1993, along with eight researchers from Winston-Salem State University. Civic committees and discussion led to a master plan being announced in 2002 for what was then called Piedmont Triad Research Park. On August 27, 1998, a former Reynolds factory building burned in one of the city's worst fires ever. JDL Castle Corp. was renovating Building 256-2 and several other buildings for the research park. The first new building, One Technology Place, opened in 2000, occupied by Targacept Inc., a biopharmaceutical company that was spun out of R.J. Reynolds Tobacco. The company developed drugs to treat nervous system diseases and disorders. On April 7, 2000, developer David Shannon announced plans for a three-story building on the site of Building 256-2, in a style recalling that building, which would house the medical school's Physician Assistant program. Biotech Place opened in February 2012. The 242,000-square-foot structure is composed of two former Reynolds warehouses that have been renovated into a modern biotech research facility, with custom-designed wet and dry labs as well as Class A office space. The $100 million project was the most expensive downtown project in Winston-Salem's history; it houses Wake Forest School of Medicine's departments of Physiology and Pharmacology, Biomedical Engineering, and Immunology and Microbiology, as well as the Childress Institute for Pediatric Trauma. Private businesses—Carolina Liquid Chemistries, Allegacy Federal Credit Union, Brioche Doree cafe—also are tenants at Biotech Place. Piedmont Triad Research Park was renamed in March 2013 as Wake Forest Innovation Quarter in recognition of the shift from biotechnology to a mix of biomedical and material sciences, information technology, and other health and communications fields. Early 2014 saw Inmar Inc., an information technology company, move into another renovated former R.J. Reynolds building in the Innovation Quarter.
Inmar relocated 900 employees from other sites in Winston-Salem to its new, state-of-the-art headquarters. The company also announced a partnership with the Division of Public Health Sciences of Wake Forest School of Medicine in which Inmar's digital analytics will be used to help locate and enroll patients for clinical trials conducted by the school. The Division of Public Health Sciences in early 2015 completed a move into the Innovation Quarter, in a building called 525@vine adjacent to Inmar's headquarters. The 525@vine building, a five-story R.J. Reynolds factory built in 1926 and renovated in 2012-13, also houses the School of Medicine's Physician Assistant program, as well as Forsyth Technical Community College's Emerging Technologies Center, which trains more than 1,200 students annually. Also in 2014, work began on the 1.6-acre Bailey Park at Fourth Street and Patterson Avenue. The park was intended to be a space for events such as concerts. In 2020, the Innovation Quarter announced that it was simplifying its name in order to better reflect the diversity of companies, people and institutions located in the innovation district. A master plan for the 28-acre Phase II, former site of the city bus station and a location once considered for a soccer stadium, was presented June 14, 2021. Unlike previous development, this area would not include renovated historic buildings, since there were none in the area. About 1 million square feet of clinical, laboratory and office space would be added to the 2.1 million square feet already developed. The plan called for as many as 450 residential units and 30,000 square feet of retail and restaurants. Fifteen acres would become green space. Fogle Commons would be a space for entertainment and events. Public-Private Collaboration With state and federal funding, and the cooperation of neighboring communities, Innovation Quarter's expansion plan includes private businesses, retail and residential units. Among the work is relocating Norfolk Southern Railroad lines, constructing a new rail bridge and burying Duke Energy transmission lines. More than $17 million from the City of Winston-Salem and Forsyth County has helped leverage $350 million in state, federal and private investment at Innovation Quarter. Wake Forest Innovations The renaming of Piedmont Triad Research Park to Wake Forest Innovation Quarter came shortly after Wake Forest Baptist Medical Center created a new operating division, Wake Forest Innovations, to establish and manage new business, partnerships, licenses and start-up companies based on the discoveries, intellectual property and research assets of the medical center and Wake Forest University. Wake Forest Innovations has separate units that market its scientific business assets (core laboratories and preclinical translational services, for example) to outside partners, while also promoting discovery and innovation and the licensing of technologies. References External links https://www.innovationquarter.com/ Science parks in the United States High-technology business districts in the United States Buildings and structures in Winston-Salem, North Carolina Economy of Winston-Salem, North Carolina
32024165
https://en.wikipedia.org/wiki/Victor%20Orsatti
Victor Orsatti
Victor Manuel Orsatti (November 25, 1905 – June 9, 1984) was an American talent agent and film producer. As an agent, he represented some of the biggest stars of the 1930s and 1940s, including Judy Garland, Betty Grable, and Edward G. Robinson, as well as directors Frank Capra and George Stevens. He was credited with persuading figure skating champion Sonja Henie to move to Hollywood and become an actress after the 1936 Winter Olympics. He later became a motion picture and television producer, whose works include Flight to Hong Kong and the television series The Texan. He was also married to actress June Lang, singer/actress Marie "The Body" McDonald, and model/actress Dolores Donlon. Early years Orsatti was born in Los Angeles, California, the son of Morris Orsatti and Mary Manse, both born in Italy. He had six siblings, including Ernie Orsatti, a stuntman and baseball player for the St. Louis Cardinals. Orsatti attended Los Angeles Manual Arts High School. He was recognized in 1923 as the best all-around high school athlete in Los Angeles. He played third base for the baseball team. In 1923, he won a bat with which Babe Ruth had hit the first home run in Yankee Stadium. The bat was the prize given by the Los Angeles Evening Herald for a high school home run hitting contest it sponsored. The bat, which was inscribed to Orsatti, sold in 2004 for $1.2 million. Orsatti subsequently attended the University of Southern California (USC) where he played quarterback on Howard Jones's 1925 and 1926 USC Trojans football teams, wearing number 5. He also played baseball and ran track and field at USC. Hollywood agent and producer Orsatti became a Hollywood talent agent in the 1930s. Along with his brothers Frank, Al and Ernie (a former major league baseball player), he was a principal in the Orsatti Talent Agency. He was known as "one of the industry's sharpest agents," and his clients included some of Hollywood's biggest stars, such as Sonja Henie, Judy Garland, Betty Grable, Edward G. Robinson, Frank Capra, George Stevens, Margaret O'Brien, and Alice Faye. His accomplishments as a talent agent include: Orsatti was credited with persuading Sonja Henie to move to Hollywood and become an actress after she won her third gold medal in figure skating at the 1936 Winter Olympics. Henie went on to become one of the highest-paid stars in Hollywood. In 1939, syndicated columnist Louella Parsons reported that the romantic relationship between Henie and Orsatti was the talk of Hollywood. Orsatti negotiated the contract for Judy Garland to play the role of Dorothy in The Wizard of Oz. He was also credited with discovering Alexis Smith while she was a student at Los Angeles City College and offering her a screen test. Orsatti also formed a production company in the 1950s called Saber Productions. The company produced 14 films including Flight to Hong Kong. Orsatti formed a television production company, Rorvic Productions, in partnership with actor Rory Calhoun. Rorvic produced the CBS television series The Texan, which aired on Monday evenings from 1958 to 1960. The idea for The Texan actually came from Orsatti's then-neighbor Desi Arnaz Sr. Episodes were budgeted at $40,000 each, with two black-and-white segments filmed weekly through Desilu Studios. Despite the name, the series was filmed not in Texas but mostly in Pearl Flats in the Mojave Desert of southern California. The program could have been renewed for a third season had Calhoun not desired to return to films.
Motion picture credits Flight to Hong Kong (1956) associate producer The Domino Kid (1957) producer The Hired Gun (1957) producer Ride Out for Revenge (1957) associate producer Apache Territory (1958) producer A Face in the Rain (1963) executive producer Marriages Orsatti was married four times. His first marriage was to film actress June Lang in June 1937; their wedding drew a guest list described as a "Hollywood Who's Who" and was reported as "the biggest movie wedding in years." The breakup of their marriage after only six weeks was also covered in the Hollywood press. Orsatti was next married in 1943 to singer and actress Marie McDonald, who was known as "The Body Beautiful" and later nicknamed "The Body". Orsatti was working at the time as a test pilot for Lockheed. McDonald had previously been Bugsy Siegel's girlfriend, and author Tim Adler, in his book Hollywood and the Mob, described Orsatti as a "gangster-cum-agent" and claimed that his brother Frank Orsatti was "a bootlegger and gangster" who got into the movie business by supplying Louis B. Mayer with alcohol and women and later had a reputation for "handling all of MGM's 'dirty work'." Orsatti and McDonald remained married until 1947. Even after their divorce, McDonald continued to use Orsatti as her agent, noting, "Husbands are much easier to find than good agents." Orsatti's third marriage was to actress and Playboy Playmate Dolores Donlon. He was married to Donlon from 1949 to 1960. His fourth wife was Arla Turner Orsatti. They were still married at the time of Orsatti's death in 1984. References 1905 births 1984 deaths Film producers from California Baseball players from California USC Trojans football players Players of American football from California Hollywood talent agents USC Trojans baseball players 20th-century American businesspeople
4796976
https://en.wikipedia.org/wiki/Access%20control%20matrix
Access control matrix
In computer science, an access control matrix or access matrix is an abstract, formal security model of protection state in computer systems that characterizes the rights of each subject with respect to every object in the system. It was first introduced by Butler W. Lampson in 1971. An access matrix can be envisioned as a rectangular array of cells, with one row per subject and one column per object. The entry in a cell – that is, the entry for a particular subject-object pair – indicates the access mode that the subject is permitted to exercise on the object. Each column is equivalent to an access control list for the object, and each row is equivalent to an access profile for the subject. Definition According to the model, the protection state of a computer system can be abstracted as a set of objects O, that is the set of entities that need to be protected (e.g. processes, files, memory pages), and a set of subjects S, that consists of all active entities (e.g. users, processes). Further there exists a set of rights R of the form r(s, o), where s ∈ S, o ∈ O and r(s, o) ⊆ R. A right thereby specifies the kind of access the subject s is allowed to exercise on the object o. Example In this matrix example there exist two processes, two assets, a file, and a device. The first process is the owner of asset 1, has the ability to execute asset 2, read the file, and write some information to the device, while the second process is the owner of asset 2 and can read asset 1 (a minimal code sketch of this example is given below). Utility Because it does not define the granularity of protection mechanisms, the Access Control Matrix can be used as a model of the static access permissions in any type of access control system. It does not model the rules by which permissions can change in any particular system, and therefore only gives an incomplete description of the system's access control security policy. An Access Control Matrix should be thought of only as an abstract model of permissions at a given point in time; a literal implementation of it as a two-dimensional array would have excessive memory requirements. Capability-based security and access control lists are categories of concrete access control mechanisms whose static permissions can be modeled using Access Control Matrices. Although these two mechanisms have sometimes been presented (for example in Butler Lampson's Protection paper) as simply row-based and column-based implementations of the Access Control Matrix, this view has been criticized as drawing a misleading equivalence between systems that does not take into account dynamic behaviour. See also Access control list (ACL) Capability-based security Computer security model Computer security policy References Computer security models Computer access control
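Because the matrix is typically sparse, a practical representation stores only its non-empty cells. The following is a minimal sketch of the example above as nested lookup tables; the subject and object names and the right strings ("own", "read", "write", "execute") are illustrative choices, not part of the formal model:

```python
# Access control matrix for the example: two processes (subjects) and
# four objects (two assets, a file, a device). Empty cells are omitted.
ACM = {
    "process1": {
        "asset1": {"own", "read", "write", "execute"},
        "asset2": {"execute"},
        "file":   {"read"},
        "device": {"write"},
    },
    "process2": {
        "asset1": {"read"},
        "asset2": {"own", "read", "write", "execute"},
    },
}

def allowed(subject: str, obj: str, right: str) -> bool:
    """Return True if the cell for (subject, obj) grants the given right."""
    return right in ACM.get(subject, {}).get(obj, set())

print(allowed("process1", "file", "read"))     # True
print(allowed("process2", "device", "write"))  # False
```

Storing the non-empty cells row by row yields a capability list per subject, while storing them column by column yields an access control list per object, which is the row-based/column-based correspondence discussed in the Utility section.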
8762849
https://en.wikipedia.org/wiki/Polyvalente%20de%20l%27%C3%89rabli%C3%A8re
Polyvalente de l'Érablière
The Polyvalente de l'Érablière (also called École secondaire de l'Érablière; English translation: Maple Tree High School) is a French-language public high school in Gatineau, Quebec. It is located in the Côte d'Azur/Limbour district of the Gatineau sector at the intersection of rue de Cannes and Boulevard la Verendrye. History Start and development The school is one of the smallest high schools in the city of Gatineau. It enrolls approximately 1,200 students from Secondary 1 to 5, as well as other groups such as the Cheminement Particulier Continu (Continuous Individual Advancement) and the Cheminement Particulier Temporaire (Temporary Individual Advancement). It opened in the fall of 1978, and the student population came primarily from the surrounding Limbour, Mont-Luc and Touraine subdivisions. At that time, the school was built in the middle of a large vacant field with little development around it. Today, several condominium projects have been built around the school. In addition, Boulevard La Vérendrye was extended in 1999 as a freeway towards the Alonzo Wright Bridge, with a partial interchange connecting the school. Multiple bike paths were also built in the vicinity: south towards Mont-Luc, Cantley and the lower Limbour area; west towards rue Saint-Louis, Chelsea and the Hull sector; and east towards the lower Côte d'Azur area, Touraine and the eastern section of the Gatineau sector. Additionally, a large stairwell at the back of the school offers easier access to students from the Mont Luc and Limbour areas. The stairwell was subject to controversy in 1993, when the school administration criticized the former city of Gatineau for using property that belonged to the Commission Scolaire des Draveurs. The project was halted but later completed after a compromise between the two parties. Today, students come from all across the Gatineau sector, some from the Hull sector, and others from the Cantley and Val-des-Monts municipalities. Uniform policy In April 2004, the school administration, led by principal Micheline Boucher, adopted a strict dress code requiring all students to wear uniforms, consisting of polos, shirts, sweaters and pants depending on the season. After a delivery delay caused by problems involving the distributor, Mini Polo, as well as by new demands and new collections of uniforms, uniforms were distributed to all students by the end of 2004. There were also problems during the distribution, when several of the articles were missing or not delivered properly; the problem was fully resolved by the end of 2004. In addition to the difficulties surrounding the delivery, the dress code was met with some opposition, mainly from the student population, some of whom defied the new code. Prior to the official decision by the administration, a student petition was circulated in opposition to the new rule. However, a telephone survey conducted by the school's administration showed that 72% of parents were in favor of the use of the uniform at l'Érablière. Prior to this policy, students had also held protests in 1993 and 1999 contesting the school's dress code policy. In 1993, over 150 students were suspended due to their participation in a marching protest.
The protesters' aim was to publicize the fact that all of the clothing was to be manufactured in other countries, effectively forcing over a thousand students to buy foreign-made garments, which was seen as disadvantageous to the province of Quebec at the time. The clothing ended up being made in Mexico, Thailand and other foreign countries. Programs Micro-Computer Program The school also has a special computer program for skilled students, called "Programme Micro-Informatique" (English: Micro-Computer Program). Many projects in the program's courses require the use of a computer, and an admission test, along with grades from elementary school, is required to obtain a spot in the program. Individual Learning Programs There is also a program for students with behavioral or learning difficulties, called Cheminement Particulier Continu (Continuous Individual Advancement), in which students learn in a different way, conducting various types of activities in addition to their regular ones. There was also the Cheminement Particulier Temporaire (Temporary Individual Advancement), which was heavily used before students were moved into regular, larger groups. Junior football team The school is also home to a Juvénile AA (junior) football team called the Jaguars, who were Division II provincial champions in 2004. In 2007, the team won the Outaouais Division I championship but lost to the J-H Leclerc Incroyables (Granby) in the provincial final. References External links Polyvalente de l'Erablière website Website of the Polyvalente de l'Erabliere football team, les Jaguars Erabliere
24068622
https://en.wikipedia.org/wiki/CISQ
CISQ
The Consortium for IT Software Quality (CISQ) is an IT industry group comprising IT executives from the Global 2000, systems integrators, outsourced service providers, and software technology vendors committed to making improvements in the quality of IT application software. Overview Jointly organized by the Software Engineering Institute (SEI) at Carnegie Mellon University and the Object Management Group (OMG), CISQ is designed to be a neutral forum in which customers and suppliers of IT application software can develop an industry-wide agenda of actions for defining, measuring, and improving IT software quality. History CISQ was launched in August 2009 by 24 founders including SEI and OMG. The founders of CISQ are Paul D. Nielsen, Director and CEO of SEI, and Richard Mark Soley, Chairman and CEO of OMG. Bill Curtis, the co-author of the CMM framework, is CISQ's first Director. Software measurement and productivity expert Capers Jones is a CISQ Distinguished Advisor. In September 2012, CISQ published its standard measures for evaluating and benchmarking the reliability, security, performance efficiency, and maintainability of IT software. In January 2013, OMG adopted the Automated Function Point specification. In May 2013 CISQ reached 500 members. In December 2013, Wipro became the fourth major sponsor to join the list of industry participants investing in the completion and adoption of CISQ standards in the IT industry. See also ISO/IEC 9126 References External links Software quality
516294
https://en.wikipedia.org/wiki/Post%20Office%20Ltd
Post Office Ltd
Post Office Ltd is a retail post office company in the United Kingdom that provides a wide range of products, including postage stamps and banking, to the public through its nationwide network of post office branches. History Post Office branches, along with the Royal Mail delivery service, were formerly part of the General Post Office and, after 1969, the Post Office corporation. Post Office Counters Ltd was created as a wholly owned subsidiary of the Post Office in 1986. After the Post Office statutory corporation was changed to a public company, Royal Mail Group, in 2001, Post Office Counters Ltd became Post Office Ltd. Post Office Ltd has, in recent years, announced losses, with a reported £102 million lost in 2006. This has raised many concerns in the media regarding Post Office Ltd's ability as a company to operate efficiently. Plans to cut the £150m-a-year subsidy for rural post offices led to the announcement that 2,500 local post offices were to be closed. This announcement resulted in a backlash from local communities that relied on the service. In 2007, the government gave a £1.7 billion subsidy to Royal Mail Group so that it could turn a profit by 2011. This was to be used to invest across the whole network of Royal Mail, Post Office Ltd and Parcelforce. Eighty-five Crown post offices were closed, 70 of which were sold to WHSmith. This followed a trial of six Post Office outlets in WHSmith stores. WHSmith was expected to make up to £2.5 million extra in annual profit. Some 2,500 sub-post offices closed between 2008 and 2009. Redundancy packages were provided from public funding (subpostmasters were paid over 20 months' salary, roughly £65,000 each). In November 2010, the government committed £1.34 billion of funding up to 2015 to Post Office Ltd to enable it to modernise the Post Office network. As part of the Postal Services Act 2011, Post Office Ltd became independent of Royal Mail Group on 1 April 2012. A ten-year inter-business agreement was signed between the two companies to allow post offices to continue issuing stamps and handling letters and parcels for Royal Mail. The Act also contained the option for Post Office Ltd to become a mutual organisation in the future. On 8 February 2013, Post Office Ltd announced it was planning to move around seventy of its Crown post offices into shops. This would reduce the Crown network, which it stated was losing £40 million a year, to around 300. On 27 November 2013, the government committed an additional £640 million of funding for 2015 to 2018 to allow Post Office Ltd to complete its network modernisation. In April 2016, the Post Office agreed to hand over up to 61 more branches to WHSmith in a 10-year deal. The deal was condemned as "blatant back-door privatisation" by the Communications Workers Union. Corporate affairs Chief executives David Mills (2002-2005) Alan Cook (2006-2010) Paula Vennells (2012-2019) Nick Read (2019-) Chair Alice Perkins (2011-2015) Tim Parker (2015-2022) Services There are currently around 11,500 post office branches across the UK, of which 191 are directly managed by Post Office Ltd (known as Crown offices). The majority of other branches are run either by franchise partners or by local subpostmasters or operators (who may be members of the National Federation of SubPostmasters or the CWU Postmasters Branch) as "sub-post offices". The Post Office offers a wide variety of services throughout its network of branches.
Products and services available vary throughout the network; main post offices generally provide the full range of services. The Post Office rolled out the 'ParcelShop' scheme in Summer 2019, allowing retail stores to accept Royal Mail internet returns, in order to expand Post Office facilities. In towns, post offices are usually open from around 09:00 to 17:30 from Monday to Friday and from 09:00 to 12:30 on Saturday. In some country areas, opening hours are much shorter—perhaps only four hours per week. In some villages an outreach service is provided in village halls or shops. There are also "mobile post offices" using converted vans which travel between rural areas. Many post offices are shut on Sundays and Bank Holidays. Some in smaller towns or villages are shut at lunchtime. Postal services The Post Office provides information on services and accepts postal items and payment on behalf of the two collection and delivery divisions of Royal Mail Group, Royal Mail and Parcelforce. These include a variety of ordinary and guaranteed services both for delivery within the United Kingdom and to international destinations. Postage stamps (including commemorative stamps and other philatelic items) are sold, while applications for redirection of mail are accepted on behalf of Royal Mail. Post Office Local Collect is a scheme whereby undelivered mail can be redirected at customer request to a post office for convenient collection. Poste restante mail can also be held for collection by people travelling. Financial services The Post Office provides credit cards, insurance products, mortgages, access to high street banking services and savings through the Post Office Money umbrella brand which was launched in 2015. Most Post Office Money branded products are provided by Bank of Ireland (UK) plc with Post Office Ltd acting as an appointed representative and credit broker. However, with the sale of the Bank of Ireland's UK assets to Jaja Finance in 2019, Post Office branded Credit Cards are now issued by Capital One UK. Branch banking Personal banking services are offered on behalf of a number of "partner banks" that the Post Office has agreements with. Although different services are available on behalf of different institutions, these may include cash withdrawals, paying in cash and cheques, and balance enquiries. Some post offices also have cash machines, mainly provided by Bank of Ireland. Business banking services are also offered for customers of twenty different UK banks. Services include balance enquiries, cash withdrawals, depositing cash and cheques, and giving change. Bill payments A number of bill payments can be accepted on behalf of a variety of organisations including utilities, local authorities and others. These are in the form of automated payments (barcoded bills, swipe cards, key charging). The Santander Transcash system, which had been a Girobank service, enabled manual bill payment transactions, but this service was discontinued by Santander in December 2017. Broadband and phone The Post Office also operates as an internet service provider; providing consumer broadband and phone services and is part of the wider Post Office Limited Group. By February 2019, it had just over half a million customers across the UK. Post Office provides asymmetric digital subscriber line broadband and fibre broadband internet products (FTTC) to residential customers. 
Post Office offers two variants of router: a standard Wi-Fi router (Zyxel AMG1302-T11C) with its ADSL broadband packages and the Zyxel VMG3925-B10B with its fibre broadband packages. Post Office Broadband and Phone services are currently supplied using the TalkTalk network, and the company operates UK-based call centres, with teams based in Preston, Selkirk and Chiswick. In June 2015, the Post Office launched its own mobile virtual network operator service, Post Office Mobile. However, in August 2016 it decided "to conclude the trial as the results did not give us sufficient confidence that mobile will contribute to our goal of commercial sustainability". In February 2021, the Post Office agreed to sell its broadband and phone services to Shell Energy and exit the telecoms market. The deal was believed to have cost Shell around £80 million, with around 500,000 customers transferring to the new provider. Post Office also runs its own flat rate 118 Directory Enquiries service (118 855). Mobile phone top-ups are also available in Post Office branches on behalf of all the major UK mobile networks. ID services A passport check-and-send service is available for passport applications, where post office staff check that a passport application is filled in correctly and has an acceptable photograph accompanying it. The service is not affiliated with HM Passport Office, and check-and-send is not a guaranteed service. The Post Office used to offer a check-and-send service with DVLA for the photocard driving licence. Some branches now offer a photocard driving licence renewal service. Fishing licences are issued on behalf of the Environment Agency and Natural Resources Wales from branches in England and Wales. Selected branches issue International Driving Permits. In 2019, availability of this service was expanded from 89 to approximately 2,500 branches due to increased demand associated with the possibility of a "no deal" Brexit. Post Office savings stamps Post Office savings stamps were first introduced by Henry Fawcett in the 1880s but were phased out in the 1960s. These were re-introduced in August 2004 because of consumer demand. In 2010 savings stamps were withdrawn and replaced by the Budget Card. Other services National Lottery games and scratchcards Sale and encashment of postal orders Foreign currency exchange and Travel Money Card Sales of gift vouchers redeemable at certain high street merchants PostPak Fast drop Drop and Go National Express coach tickets Post offices not open to the public Seven post office branches are not open to the public: Court (Buckingham Palace) – however, this is managed by Royal Mail as of 2014 House of Commons Portcullis House Royal Automobile Club, 89 Pall Mall, London Scottish Parliament Windsor Castle 20 Finsbury Street, London, EC2Y 9AQ. Post Office HQ Controversies Horizon payment system errors In April 2015, the BBC described a confidential report that alleged that the Post Office had made 'failings' with regard to accounting issues with its Horizon IT system that were identified by sub-postmasters as early as 2000. The article claimed that an independent investigation by forensic accountants Second Sight had found that the Post Office had failed to identify the root cause of accounting shortfalls in many cases before launching court proceedings against sub-postmasters. The shortfalls could have been caused by criminals using malicious software, by IT systems or by human error, the report said.
An earlier article by the BBC had claimed that a confidential report contained allegations that the Post Office had refused to hand over documents that the accountants felt they needed to investigate properly, that training was not good enough, that equipment was outdated, and that power cuts and communication problems had made things worse. The Post Office has claimed that its system was not at fault. In 2019, the Post Office was lambasted by the High Court for its 'institutional obstinacy or refusal to consider' that its Horizon computer system might be flawed. The judge, Mr Justice Fraser, characterised this stance as "the 21st-century equivalent of maintaining that the earth is flat." In spite of the court action against its sub-postmasters, which was described by a judge as "aggressive and, literally, dismissive", the Post Office's chief executive Paula Vennells, who had in the meantime left the Post Office and taken up posts in the NHS and the Cabinet Office, was controversially awarded a CBE in the 2019 New Year Honours for "services to the Post Office and to charity". On 19 March 2020 she was harshly criticised in the House of Commons, particularly by Kevan Jones, MP for North Durham. See also Penny Post Credit Union 1st Class Credit Union References External links Postwatch – the watchdog for postal services joined Consumer Focus in October 2008 Financial services companies established in 1986 Government-owned companies of the United Kingdom Postal system of the United Kingdom Retail companies established in 1986 Retail companies of the United Kingdom Royal Mail 1986 establishments in the United Kingdom
22104615
https://en.wikipedia.org/wiki/Well%20Dunn
Well Dunn
Well Dunn, also known as The Southern Rockers, was a professional wrestling tag team who competed in several promotions in the United States. The team was composed of Rex King and Steve Doll, and the team name "Well Dunn" was a play on the term "well done". Accordingly, King wrestled as "Timothy Well" and Doll as "Steven Dunn". King and Doll held championships in Pacific Northwest Wrestling (PNW), the United States Wrestling Association (USWA), the World Wrestling Council (WWC), and Music City Wrestling (MCW). They are best known, however, for competing in the World Wrestling Federation from 1993 to 1995. In the WWF, Well Dunn faced the promotion's top tag teams and were contenders for the WWF Tag Team Championship. They had a feud with The Bushwhackers that lasted for most of Well Dunn's tenure with the company. The team disbanded in 1996, but reunited briefly in 1998. During this reunion, Doll attacked King and the team separated permanently. Doll died from complications related to a blood clot in 2009, and Well died in 2017 from kidney failure. History Early years Prior to teaming with Rex King, Steve Doll competed in Pacific Northwest Wrestling with partner Scott Peterson as the Southern Rockers. The team was fashioned after The Rock 'n' Roll Express, and Doll and Peterson held the NWA Pacific Northwest Tag Team Championship together seven times. In August 1989, Peterson left the territory and King began teaming with Doll. They won the tag team title by defeating Scotty the Body and The Grappler on August 26. After an inconclusive rematch on September 9, the title was vacated and another rematch was ordered; the Southern Rockers regained the belts one week later. They held the title until PNW ordered it vacated again on November 4 after a match against Brian Adams and Jeff Warner; once again, Doll and King won a rematch the following week to regain the championship. They dropped the title to Adams and The Grappler on December 14, but regained the belts by winning a rematch on January 27, 1990. In February 1990, the Southern Rockers vacated the championship and left PNW. They began competing for the United States Wrestling Association, where they quickly won the USWA World Tag Team Championship. Three days after winning the title belts, King and Doll dropped them to Robert Fuller and Brian Lee. They regained them six days later, however. On April 28, they lost the title to The Uptown Posse, but were able to regain it less than a month later. On June 2, King competed in a handicap match, in which he faced The Dirty White Boys (Tony Anthony and Tom Burton). Unable to defeat both men, he lost the match and the championship. Doll returned to PNW, where he held the tag team title another seven times with various partners. King remained in Tennessee, winning the USWA World Tag Team Championship with Joey Maggs. He later moved to Puerto Rico, where he wrestled for the World Wrestling Council. With Ricky Santana as his new partner, he held the WWC World Tag Team Championship twice. The team separated in May 1992 when Santana did not appear for a match to determine the winners of the vacant tag team title. Reunion In June 1992, Doll and King reunited to win the WWC World Tag Team Championship from Doug Masters and Ron Starr. On August 1, the title was vacated due to a controversial finish in a match against a tag team known as Solid Gold. King replaced Doll with Ray González to win the vacant title. 
The team got back together in the USWA, where they defeated The Moondogs to win the tag team belts for a fourth time together. Due to an interpromotional agreement between the USWA and the World Wrestling Federation, Doll and King wrestled at several events alongside WWF wrestlers. While in Tennessee, the team showed a "blatant disregard for the rules" and were involved in a storyline in which they were suspended indefinitely from the USWA as a result. World Wrestling Federation 1993 The team signed with the WWF and took on the new name Well Dunn, with Doll competing as Steven Dunn and King wrestling under the name Timothy Well. They wore bow ties in addition to wrestling singlets with thongs worn over the top. This led wrestling author RD Reynolds to state that the team was "proof positive that bow ties and thongs do not match". On June 15, 1993, they wrestled in their first official WWF match and defeated the team of Tito Santana and Virgil. It was in the WWF that Well Dunn "first experienced widespread fame". In the WWF, they continued to wrestle as heels (rule breakers) and were described as "among the sneakiest and most cunning" teams in the promotion. The team made its WWF television debut on the July 8, 1993 episode of WWF Superstars in a loss to The Smoking Gunns. They teamed with Blake Beverly in a loss to Tatanka and The Steiner Brothers on July 16. They continued to face the WWF's top face (fan favorite) tag teams, including The Smoking Gunns and Men on a Mission. In August, manager Harvey Wippleman began appearing with the team. On the October 10 episode of All-American Wrestling, they defeated Smoky Mountain Wrestling Tag Team Champions The Rock 'n' Roll Express by countout, but did not win the championship. On October 8, 1993, Well Dunn wrestled their first match against The Bushwhackers, who became the team's longtime rivals. During a match against Men on a Mission, Timothy Well sustained an injury. This forced the team out of action in the WWF for several months, although they did return to competing in Tennessee. 1994–1995 In February 1994, through Jim Cornette, Well Dunn was sent to Smoky Mountain Wrestling. While there, they feuded with The Thrillseekers (Chris Jericho and Lance Storm), which culminated in a series of penalty box matches. Upon their return to the WWF, Well Dunn had a short series of matches against WWF Tag Team Champions The Headshrinkers, including a match televised on WWF Wrestling Challenge. Well Dunn became involved in a storyline in which Adam Bomb, who was also managed by Wippleman, turned on the manager and began wrestling as a face. On the August 13 episode of WWF Superstars, Well wrestled a singles match against Bomb; Dunn interfered, and Bomb attacked Well Dunn and Wippleman after the match. This led to a blow-off match one week later, in which Bomb teamed with The Smoking Gunns to defeat Kwang (another of Wippleman's wrestlers) and Well Dunn. On August 17, the team also served as lumberjacks in a lumberjack match between Bret and Owen Hart. Well Dunn continued to face the top teams in the WWF, including The Smoking Gunns, The Headshrinkers, and the newly formed team of Sparky Plugg and the 1-2-3 Kid. When The Smoking Gunns were unable to compete due to the birth of Billy Gunn's son, Well Dunn competed against The Heavenly Bodies instead; these matches were unusual, as they pitted heel tag teams against each other.
As in 1993, Well Dunn lost more matches than they won in each series, but they had occasional victories against established tag teams and were often booked to defeat jobber tag teams. On September 29, Well Dunn began another series of matches against The Bushwhackers. The feud lasted the remainder of the year, although Barry Horowitz substituted for Steven Dunn in several matches when Dunn was unable to appear. The Bushwhackers were victorious in the majority of matches, but Well Dunn won occasional matches. One of these matches was featured on the Coliseum Video release Wham Bam Bodyslam, and another two were televised on Monday Night Raw. During one of the Monday Night Raw matches, The Bushwhackers were accompanied by ring announcer Howard Finkel, who had a long-standing rivalry with Wippleman. Finkel and Wippleman had an argument during the match that led to a tuxedo match, in which Finkel was declared the winner after stripping Wippleman to his underwear. Leading up to the 1995 Royal Rumble pay-per-view, Well Dunn was entered in a tournament for the vacant WWF Tag Team Championship. They were scheduled to face The Smoking Gunns, but Bob Holly (formerly Sparky Plugg) and the 1-2-3 Kid took the Gunns' place; Holly and the 1-2-3 Kid won the match and went on to win the tournament. Eliminated from the tournament, Well and Dunn competed individually in the Royal Rumble match, a battle royal. Well was eliminated by Davey Boy Smith, and Dunn was eliminated by Aldo Montoya. After The Smoking Gunns regained the championship, Well Dunn challenged for the belts in a series of matches but was unable to win the title. Well Dunn was featured in the Dirtiest Dozen subset of the Action Packed line of WWF trading cards in 1995. The team continued to face the WWF's top tag teams, including The Bushwhackers and The Blu Brothers, but were unable to win any of these matches. The team's final WWF match came in a loss to The Allied Powers, after which Well Dunn disappeared from the WWF. Split After a brief tour in All Japan Pro Wrestling in January 1996, King returned to the World Wrestling Council, and Doll went to World Championship Wrestling, before going back to the United States Wrestling Association. In 1997, King returned to the USWA, teaming with Paul Diamond and feuding with Doll and Flash Flanagan. A year later, the team reunited in the Nashville, Tennessee-based Music City Wrestling, where they won the MCW North American Tag Team Championship on May 30, 1998. The reunion was short-lived, as Doll and Reno Riggins attacked King after the match. Riggins took King's place as co-holder of the title with Doll. Deaths On July 25, 1994, Scott Peterson died in a motorcycle accident. On March 23, 2009, Steve Doll died when a blood clot from his lungs entered his heart. On January 9, 2017, Rex King died from kidney failure. Championships and accomplishments Doll and Peterson Pacific Northwest Wrestling NWA Pacific Northwest Tag Team Championship (7 times) Doll and King Music City Wrestling MCW North American Tag Team Championship (1 time) Pacific Northwest Wrestling NWA Pacific Northwest Tag Team Championship (4 times) United States Wrestling Association USWA World Tag Team Championship (4 times) World Wrestling Council WWC World Tag Team Championship (1 time) References Smoky Mountain Wrestling teams and stables United States Wrestling Association teams and stables WWE teams and stables
1419419
https://en.wikipedia.org/wiki/List%20of%20Jupiter%20trojans%20%28Trojan%20camp%29
List of Jupiter trojans (Trojan camp)
This is a list of Jupiter trojans that lie in the Trojan camp, an elongated curved region around the trailing Lagrangian point (L5), 60° behind Jupiter. All the asteroids at the trailing point have names corresponding to participants on the Trojan side of the Trojan War, except for 617 Patroclus, which was named before this naming convention was instituted. Correspondingly, 624 Hektor is a Trojan-named asteroid at the "Greek" (L4) Lagrangian point. In 2018, at its 30th General Assembly in Vienna, the International Astronomical Union amended this naming convention, allowing for Jupiter trojans with an absolute magnitude H larger than 12 (that is, a mean diameter smaller than approximately 22 kilometers, for an assumed albedo of 0.057; see the worked conversion below) to be named after Olympic athletes, as the number of known Jupiter trojans, currently more than 10,000, far exceeds the number of available names of heroes from the Trojan War in Greek mythology. Trojans in the Greek and Trojan camps tend to be discovered in alternating batches: because the two camps are separated by 120°, one group is periodically behind the Sun while the other remains visible. Partial lists There are currently 4006 known objects in the Trojan camp, of which 1928 are numbered and listed in the following partial lists: List of Jupiter trojans (Trojan camp) (1–100000) List of Jupiter trojans (Trojan camp) (100001–200000) List of Jupiter trojans (Trojan camp) (200001–300000) List of Jupiter trojans (Trojan camp) (300001–400000) List of Jupiter trojans (Trojan camp) (400001–500000) List of Jupiter trojans (Trojan camp) (500001–600000) Largest members This is a list of the largest 100+ Jupiter trojans of both the Trojan and Greek camps References Trojan Jupiter Trojans (Trojan Camp)
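The 22-kilometer equivalence quoted above follows from the standard conversion between an asteroid's absolute magnitude H, its geometric albedo p_v, and its mean diameter D. As a worked check, with the assumed albedo p_v = 0.057 and the H = 12 cutoff:

\[
D = \frac{1329\ \text{km}}{\sqrt{p_v}}\,10^{-H/5}
  = \frac{1329\ \text{km}}{\sqrt{0.057}} \times 10^{-12/5}
  \approx 22\ \text{km}.
\]

Because a larger H means a fainter and, at a fixed albedo, smaller body, the H > 12 rule is in effect a size cutoff for the Olympic-athlete names.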
37709058
https://en.wikipedia.org/wiki/Raymond%20J.%20Lane
Raymond J. Lane
Raymond J. Lane (born December 26, 1946) is an American business executive and strategist specializing in technology and finance. Lane is best known for assisting corporations with technology strategy, organizational development, team building, and sales and growth management. Lane led a "go to market" overhaul of Oracle Corporation, which led to an increase in sales and stock price in the 1990s. He is cited as being the catalyst for "Oracle's success, ‘past, present and future.’" Lane is a partner emeritus at Kleiner Perkins Caufield & Byers, a venture capital firm in Silicon Valley. He managed investments and held board membership in startup companies, including the development of enterprise technology and alternative energy. Early life and education Raymond Jay Lane was born on December 26, 1946, in McKeesport, Pennsylvania, near Pittsburgh. Lane grew up in rural western Pennsylvania, where his father was employed as a design engineer for steel production plants. Lane was heavily influenced by his father, who had emerged from the Depression-era steel business as the first member of his family to go to college, graduating from Carnegie Mellon as a mechanical engineer. Lane attended public schools, graduating from Moon High School in 1964. Wishing to follow in his father's footsteps, Lane first pursued a collegiate career in aeronautical engineering at West Virginia University (WVU). He later changed his major and graduated with a bachelor's degree in mathematics in 1968. Career IBM Shortly after graduating, Lane was recruited to IBM's data processing division for a sales position. In 1969, Lane was drafted into the U.S. Army and assigned to the 1st Infantry Division at Fort Riley, Kansas. Because of his computer knowledge, he was made a systems analyst responsible for logistical computer systems. He completed two years of military service and resumed his career in 1971 at IBM, achieving top sales awards for three years and being promoted to a product manager responsible for large mainframe and storage systems in one of IBM's largest regions. EDS After eight years with IBM, Lane was recruited by Electronic Data Systems (EDS), led at the time by Ross Perot. He was handed responsibility for a new division that provided services to large manufacturers, distributors, and retail and transportation companies. Lane ran the nascent division for nearly four years. Booz Allen Hamilton In 1981 Lane accepted a principal position with Booz Allen Hamilton in Chicago. At Booz Allen, he helped develop information technology strategies. Lane became a partner in three years and a senior partner by 1986. In the late 1980s he led the development of what became known as the Information Systems Group, one of the two functional practices that, along with four industry practices, formed the firm's primary organization. He sat on the firm's executive committee and board of directors from 1987 to 1992. Oracle In 1992, Lane was recruited by Oracle Corporation to turn around the firm's sales, service, consulting and marketing, and was named president of Oracle USA in June of that year. Oracle, suffering from rapid growth in the late '80s without checks and balances on its customer practices, was also falling behind technologically. The rapid turnaround in the mid-90s, fueled by a new database technology, Oracle 7, and by Lane's organization of sales and services at the company, led to the rise of Oracle's business applications division.
Apple founder Steve Jobs recalled that "Larry told me that 15 minutes into the meeting, he knew Ray was the only guy he had met who was near smart enough to run Oracle." In 1996, Lane was named the president and chief operating officer of Oracle. Under his leadership, along with Larry Ellison and Jeff Henley, Oracle expanded from 7,500 to 40,000 employees, defeating its main database rivals Sybase and Informix to become the leader in the database industry while building major businesses in ERP applications and consulting. In mid-2000, Lane suddenly left the company, leading to speculation that Oracle's business needs had outgrown him. Kleiner Perkins Caufield & Byers In 2000, Lane accepted a position with Kleiner Perkins Caufield & Byers. Lane's work at KPCB centered on enterprise technology and alternative energy. In 2013, Lane became partner emeritus of Kleiner Perkins. He serves as chairman of the board of Elance, an online staffing platform, and of Aquion Energy. HP Lane served as non-executive chairman on Hewlett-Packard's board of directors from 2010 to 2011, and as executive chairman from 2011 to 2013. Lane restructured the board to add seven new directors, replaced the short-tenured CEO Leo Apotheker, and led the placement of Meg Whitman as CEO. Lane was chairman when HP decided to acquire Autonomy Corporation, a controversial acquisition that led him to step down as chairman, but remain on the board of directors. GreatPoint Ventures Lane currently serves as a managing partner at GreatPoint Ventures, a venture capital firm headquartered in San Francisco, CA. Tax dispute In 2013, it was reported that Lane was involved in a personal tax dispute. The case stemmed from a tax year 2000 audit that reviewed an investment in a tax strategy called POPS (Partnership Option Portfolio Securities), used by Lane's advisors to offset a portion of Lane's income. Lane agreed to a settlement in excess of $100 million. On January 3, 2014, Lane filed a lawsuit against Deutsche Bank and BDO Seidman, LLP, alleging that he had suffered damage as a result of their having designed an allegedly "fraudulent" tax shelter. Personal life and philanthropy Lane has three children, Kristi, Kelley, and Kari, with his first wife, Donna. He is married to Stephanie (Herle) Lane, with whom he has two children, Raymond Jay Lane III ("RJay") and Catherine Victoria ("Tori"). Lane's philanthropic interests include work in higher education, the Special Olympics, and cancer research. Lane serves as the chairman of the board of trustees of Carnegie Mellon University. He led the institution's capital campaign and efforts to establish a Silicon Valley campus in 2002. In 2010, the Lanes funded Carnegie Mellon's computational biology program (the Lane Center for Computational Biology), which is now the Computational Biology Department, one of six degree-granting departments in the School of Computer Science. In 2015, the Lane Center for Computational Cancer Research was established within the department. Lane also funded a professorship chair in his father's name, held by Robert F. Murphy (computational biologist). Additionally, he is a benefactor to his alma mater, West Virginia University, where he sits on its board of governors. He chaired a WVU capital campaign in the early 2000s.
In September 2007, the Lanes made a $5 million contribution to the university's computer science and electrical engineering programs, for which the university honored them by naming the department the Lane Department of Computer Science and Electrical Engineering. Lane has served as vice chairman of Special Olympics International for several years. In October 2011, Ray and Stephanie funded the organization's international expansion project, Unify, which brings together school-age youth with intellectual disabilities and their non-disabled peers. Ray and Stephanie Lane are contributors to the American Cancer Society and sponsor functions to raise additional funds. The Lanes initiated and funded the Stephanie H. Lane Cancer Research Network, a central service in the state of California to help cancer patients get information and treatment. As of 2019, Lane is an advisor for Inxeption, a California-based technology platform. Recognition TechAmerica, "David Packard Lifetime Medal of Achievement", 2011. Inducted into West Virginia University's Business Hall of Fame (2003), Academy of Distinguished Alumni (2004) and the Order of the Vandalia (2005). Smithsonian Leadership Award for Collaborative Innovation, 2001. Kappa Sigma Man of the Year, 2000. Honorary Ph.D.s, West Virginia University and Golden Gate University. West Virginia University, Lane Department of Computer Science and Electrical Engineering. Carnegie Mellon, Lane Center for Computational Cancer Research. References 1946 births Living people American computer businesspeople Carnegie Mellon University trustees Booz Allen Hamilton people West Virginia University alumni Kleiner Perkins people
3024416
https://en.wikipedia.org/wiki/Graham%E2%80%93Denning%20model
Graham–Denning model
The Graham–Denning model is a computer security model that shows how subjects and objects should be securely created and deleted. It also addresses how to assign specific access rights. It is mainly used in access control mechanisms for distributed systems. There are three main parts to the model: a set of subjects, a set of objects, and a set of eight rules. A subject may be a process or a user that makes a request to access a resource. An object is the resource that a user or process wants to access. Features This model addresses the security issues associated with how to define a set of basic rights governing how specific subjects can execute security functions on an object. The model has eight basic protection rules (actions) that outline: How to securely create an object. How to securely create a subject. How to securely delete an object. How to securely delete a subject. How to securely provide the read access right. How to securely provide the grant access right. How to securely provide the delete access right. How to securely provide the transfer access right. Moreover, each object has an owner that has special rights on it, and each subject has another subject (controller) that has special rights on it. The model is based on the Access Control Matrix model, in which rows correspond to subjects and columns correspond to objects and subjects; each element contains a set of rights between subject i and object j, or between subject i and subject k. For example, the entry A[s,o] contains the rights that subject s has on object o (for example, {own, execute}). When one of the eight rules is executed, for example creating an object, the matrix is changed: a new column is added for that object, and the subject that created it becomes its owner. Each rule is associated with a precondition: for example, if subject x wants to delete object o, x must be its owner (A[x,o] must contain the 'owner' right). Limitations The Harrison–Ruzzo–Ullman model extended this model by defining a system of protection based on commands made of primitive operations and conditions. See also Access Control Matrix Bell–LaPadula model Biba model Brewer and Nash model Clark–Wilson model Harrison–Ruzzo–Ullman model References Krutz, Ronald L. and Vines, Russell Dean, The CISSP Prep Guide; Gold Edition, Wiley Publishing, Inc., Indianapolis, Indiana, 2003. Security in Computing (by Charles P. Pfleeger, Shari Lawrence Pfleeger) http://www.cs.ucr.edu/~brett/cs165_s01/LECTURE11/lecture11-4up.pdf Computer security models
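The matrix mechanics described above lend themselves to a compact illustration. The following Python sketch is purely hypothetical (it is not taken from Graham and Denning's paper, and the class and method names are invented for illustration); it models the matrix as a dictionary and implements two of the eight rules, showing how creating an object adds a column owned by its creator and how deleting an object checks the owner precondition first.

class AccessMatrix:
    """Toy model of the Graham-Denning access control matrix."""

    def __init__(self):
        # (subject, entity) -> set of rights; a missing key means no rights.
        self.rights = {}

    def create_object(self, subject, obj):
        # Rule "create object": a new column is added for the object,
        # and the creating subject becomes its owner.
        self.rights[(subject, obj)] = {"owner"}

    def delete_object(self, subject, obj):
        # Rule "delete object": precondition is that the requesting
        # subject holds the 'owner' right on the object.
        if "owner" not in self.rights.get((subject, obj), set()):
            raise PermissionError(f"{subject} does not own {obj}")
        # Remove the object's entire column from the matrix.
        self.rights = {k: v for k, v in self.rights.items() if k[1] != obj}

m = AccessMatrix()
m.create_object("alice", "report")   # alice becomes owner of report
m.delete_object("alice", "report")   # allowed: the owner precondition holds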
619756
https://en.wikipedia.org/wiki/Pegasus%20Mail
Pegasus Mail
Pegasus Mail is a proprietary email client developed by David Harris (who also develops the Mercury Mail Transport System). It was originally released in 1990 for internal and external mail on NetWare networks with MS-DOS and later Apple Macintosh clients. It was subsequently ported to Microsoft Windows, which is now the only platform actively supported. Previously freeware, Pegasus Mail is now donationware. The early versions of Pegasus were installed on MS-DOS or Mac workstations on a NetWare network and supported only mail between network users; for external (Internet) mail, the Mercury Mail Transport System for NetWare was required. Features Pegasus Mail (PMail) is suitable for single or multiple users on stand-alone computers and for internal and Internet mail on local area networks. Pegasus Mail has minimal system requirements compared with competing products; for instance, the installed program (excluding mailboxes) for version 4.52 requires only around 13.5 MB of hard drive space. Since Pegasus Mail does not make changes to the Windows registry or the system directory, it is suitable as a portable application for USB drives. It is available in German as well as English; in the past, language packs were available for French and Italian. A significant feature of the Microsoft Windows version of Pegasus Mail is that users have the choice not to use Microsoft Internet Explorer's HTML layout engine when displaying HTML email. Malicious HTML tends to be highly dependent on the exact target application and operating system; by avoiding the ubiquitous HTML renderer supplied with Windows and by not allowing automation commands such as ActiveX and JavaScript to execute from within an email in its inbuilt renderer, Pegasus substantially reduces the risk of infection from viewing email. (Note that this is not the same as the risk of malicious email or email attachments if opened outside Pegasus.) Pegasus has the facility, not provided by all mail clients, optionally to download headers only, allowing the user to select mail to ignore for now and deal with later, download and delete from the server (the normal mode of operation for POP3 access), download a copy of a message while leaving it on the server, or delete without downloading. Mail may be marked by the user as read or unread, overriding the default setting. Supported protocols The original version worked with Novell NetWare networks and their Message Handling System (MHS) mail system; a cut-down MHS-only version called FirstMail was bundled with NetWare. Early versions used only a non-standard format for mail folders; later versions offer the standard Unix mailbox format as an alternative to the Pegasus Mail format. Although no longer developed or supported, older versions for MS-DOS and Apple Macintosh are still available. A problem with early versions was that mail was stored on the NetWare System volume. This was not a problem when messages were brief text files. Later it was possible, although rather complicated, to store received mail on another volume, but new mail had to remain on System. This caused problems with NetWare servers with a small System volume and larger ones for data storage, as users often kept many large messages in their new mail folders. Pegasus Mail supports the POP3, IMAP, and SMTP protocols as well as Novell's MHS. Release 4.41 added support for filtering of spam with header and body checking for key phrases (even before download).
Release 4.41 also has, amongst other features, an improved HTML rendering engine, better support for special character encoding (especially UTF-8), phishing protection, and a full-fledged Bayesian spam filter. Pegasus Mail for Windows can be used as a standalone mail client using POP3 or IMAP for incoming mail and SMTP for outgoing, or on a NetWare or Windows network in conjunction with the Mercury Mail Transport System for Windows or NetWare, also by David Harris, running on a network server to receive mail and distribute it to users. While Pegasus Mail and Mercury handle email only, the function of Pegasus Mail is comparable to Microsoft Outlook's mail handling, and Mercury to Microsoft Exchange Server. Criticism of features Pegasus Mail pioneered many features now taken for granted in other email clients, such as filtering and simultaneous access to multiple POP3 and IMAP4 accounts. However, the free distribution of Microsoft Outlook Express (and its successors) as a standard part of Microsoft Windows since Windows 98, and the distribution from 1997 of Microsoft Outlook, initially free of charge with PC magazines and then as an integral part of Microsoft Office, dealt a significant blow to the market share of Pegasus Mail for Windows and other email clients, from which many never fully recovered. Also, with the widespread distribution of Microsoft Outlook, user expectations shifted towards the range of features a program incorporating an email client should offer (email, newsgroups, calendar, etc., eventually as part of an integrated suite), and Pegasus Mail did not fulfil these new expectations, regardless of the features of the email client proper. Trends in interface design also changed over the years, and Pegasus Mail did not follow those changes, still having essentially the same user interface it had in its first Windows version, with very few later additions (such as the "preview window" mode). Pegasus was initially a text-mode application for networks, handling both internal and Internet mail, often operating in conjunction with the Mercury mail transport. Pegasus Mail for stand-alone machines was initially developed at a time when the typical email user had to be somewhat more knowledgeable about the way computers, the Internet and particularly email operate than most of today's users have to be; since then, PCs and the Internet have become more widespread, reached a broader audience and adapted themselves to those new users' needs. At the time Pegasus Mail was first conceived, its extensive array of features coupled with a simple user interface provided an ideal mix for most users' needs. As years went by it was seen as departing from the de facto standard and lacking features expected by the typical user. Advanced features Pegasus Mail takes an "old-fashioned" approach that has advantages for knowledgeable users with complex email usage patterns or who need special features.
Some examples include: support for three encoding standards (MIME, uuencoding and BinHex); a filtering system powerful enough that it is possible to run a fully automated client-based electronic mailing list (including processing subscribe and unsubscribe requests and forwards to moderation) using solely Pegasus Mail; the ability to automatically select which email address to send a reply from, based on the mail folder containing the original received message; the ability to include custom e-mail header lines (useful for tracking emails, for example); the ability to delete attachments without deleting the message's text body, or to delete the HTML version of a message while keeping the plain-text version, or vice versa, saving disk space; easy access to a message, including all headers, in raw form, which is difficult or impossible in some other clients; a "tree view" of the structure of a multipart message with all its sections and attachments, giving access to view or save any of the parts separately; and support for downloading headers only, then deciding for each message whether to download, delete, or leave for later ("Selective mail download"). It is possible to download a message in full without deleting it from the server. The drop in usage and funding slowed development, and features that were initially to be included in version 4 were not implemented. Some of these features are scheduled for inclusion in version 5. Development status The development of versions for DOS (MS-DOS and PC DOS 5.0 and higher), Apple Macintosh and 16-bit Windows (Windows 3.1 and higher) stopped in or before 2000. The latest released versions for DOS (3.50, released in or around June 1999) and 16-bit Windows (3.12b, released on 24 November 1999) are available for download. (Version 3.12c for 16-bit Windows was in beta testing during 2000 but has not been released.) The Mac version (2.21 from 1997) can be found on some FTP servers that in the past offered an official Pegasus mirror service. Until 2006 all versions of Pegasus Mail were supplied free of charge, and printed user manuals were available for purchase. In January 2007 it was announced that distribution and development of Pegasus Mail had ceased due to inadequate financial support from the sale of the manuals. Later in the month, due to an avalanche of support from the user community, it was announced that development would resume. However, Pegasus Mail would change from freeware to donationware, and Mercury would change to a licence fee for configurations with more than a certain number of email boxes. A donation button was added to the website on 1 March 2007. A public beta test version of version 4.5 was announced on 3 October 2008, stated to be "very complete and stable, but is provided without formal technical support - you should almost certainly apply due diligence testing to it before using it in a production environment". The new version has not only been developed further beyond earlier versions, but has been ported from the now-obsolete v5.02 of Borland C++ to Microsoft Visual Studio 2008, a major undertaking in itself. On 19 June 2009 Harris announced on the Pegasus Mail site that all development of Pegasus Mail and the associated Mercury program could only continue if sufficient users would commit to donating US$50 annually; on 21 July 2009 he said that there had been a good start. On 3 July 2009 Pegasus Mail 4.51 and Mercury/32 v4.72 were released.
On 23 January 2010 Pegasus Mail 4.52 was released, which included improvements for Windows 7. On 2 November 2010 Harris posted a progress update on the development of the next release of Pegasus Mail, version 5.0. On 23 February 2011 Pegasus Mail 4.61 was released. It includes a new HTML renderer which uses the built-in Windows renderer of Internet Explorer, but the BearHTML renderer has also been improved and can be used instead. Version 4.61 also included new graphics and an updated interface, and v4.62 had improvements to the editor and elsewhere. On 22 December 2011 bug-fix version 4.63 became available. On 8 March 2014 version 4.70 was released; this version includes Hunspell for spell checking and OpenSSL for encryption, besides further improvements. v4.71 was released in January 2016. Version 4.72 was released in April 2016. On 7 June 2018 version 4.73 was released, which includes a much improved help file. On 25 December 2019 Harris said that, while there had been a delay due to health issues, he "can only promise you that there is progress, and that [he is] totally committed to getting these new versions released", and that he was working, among other things, on support for OAuth2 and OpenSSL v1.1.1. Wiki Since February 2009 Pegasus Mail has had its own wiki, used as an on-line knowledge resource. The wiki crashed around 2014 and has not been restored since, but parts of it have been archived by the Internet Archive. Linux There is no Linux version of Pegasus Mail. Pegasus runs under Linux using the Wine compatibility layer. See also Comparison of e-mail clients Comparison of feed aggregators Mercury Mail Transport System Further reading Notes and references External links Windows email clients Freeware Portable software Information technology in New Zealand
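The selective mail download facility described under "Advanced features" maps directly onto standard POP3 commands (TOP to fetch headers only, RETR to download, DELE to delete on the server). The following Python sketch, using the standard poplib module with placeholder server, credentials and filtering rule, illustrates the same decide-per-message workflow; it is an illustration of the protocol, not of Pegasus Mail's own code.

import poplib

# Placeholder host and credentials, for illustration only.
conn = poplib.POP3_SSL("pop.example.org")
conn.user("user@example.org")
conn.pass_("secret")

count, _size = conn.stat()
for msgnum in range(1, count + 1):
    # TOP retrieves the headers (plus 0 body lines) without
    # downloading the full message.
    _resp, lines, _octets = conn.top(msgnum, 0)
    headers = b"\r\n".join(lines).decode("utf-8", errors="replace")
    if "Subject: [spam]" in headers:   # placeholder filtering rule
        conn.dele(msgnum)              # delete without downloading
    else:
        conn.retr(msgnum)              # download a copy; it stays on the
                                       # server unless dele() is also issued
conn.quit()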
50399900
https://en.wikipedia.org/wiki/Dendroid%20%28malware%29
Dendroid (malware)
Dendroid is malware that affects the Android OS and targets the mobile platform. It was first discovered in early 2014 by Symantec, and appeared for sale on underground markets for $300. Notable features included, at the time, the ability to hide from emulators. When first discovered in 2014 it was one of the most sophisticated Android remote administration tools known. It was one of the first Trojan applications to get past Google's Bouncer, and it prompted researchers to warn that creating Android malware had become easier. It also seems to have followed in the footsteps of Zeus and SpyEye by having simple-to-use command and control panels. The code appears to have been leaked around 2014. The leak included an APK binder, which provided a simple way to bind Dendroid to legitimate applications. It is capable of: Deleting call logs Opening web pages Dialing any number Recording calls SMS intercepting Uploading images and video Opening an application Performing denial-of-service attacks Changing the command and control server See also Botnet Mirai Shedun Zombie (computer science) Kill system References Android (operating system) malware Botnets Denial-of-service attacks Mobile malware
14830102
https://en.wikipedia.org/wiki/Raima%20Database%20Manager
Raima Database Manager
Raima Database Manager (or RDM) is an ACID-compliant embedded database management system designed for use in embedded systems applications. RDM has been designed to utilize multi-core computers, networking (local or wide area), and on-disk or in-memory storage management. RDM provides support for multiple application programming interfaces (APIs): low-level C API, C++, and SQL (native, ODBC, JDBC, ADO.NET, and REST). RDM is highly portable and is available on Windows, Linux, Unix and several real-time or embedded operating systems. A source-code license is also available. RDM has support for both non-SQL (record and cursor level database access) and SQL database design and manipulation capabilities. The non-SQL features are important for the most resource-restricted embedded system environments, where high performance in a very small footprint is the priority. SQL is important in providing a widely known standard database access method in a small enough footprint for most embedded systems environments. History Raima Inc. originally released RDM in 1984 under the name db_VISTA. It was one of the first microcomputer network model database management systems designed exclusively for use with C language applications. A companion product called db_QUERY was introduced in 1986, which was the first SQL-like query and report writing utility for a network model database. A db_VISTA derivative DBMS designed to provide a high-performance, transaction-processing client-server SQL DBMS called Raima Database Server (RDS) was released in 1993. This was the first DBMS that provided an ODBC API as its native SQL interface. It was also the first SQL system that incorporated use of the network model in its DDL features. Soon thereafter, RDS was renamed Velocis and, in 2001, RDM Server. Version 8.4 of RDM Server was released in 2012. Uninterrupted development of RDM (also known as RDM Embedded) has continued, with the most recent feature additions including database mirroring in support of highly available (HA) systems, database replication, multi-version concurrency with read-only transactions, multiple transactional file server access, encryption, and an SQL designed specifically for use in embedded systems applications. Version 12.0 of RDM was released in 2013. Also in 2013, RDM introduced the first on-platform SQL DBMS available for use with National Instruments' LabView graphical programming language; it was named the National Instruments LabView Embedded Tools Network Product of the Year. Version 14.0 of RDM was released in Q3/16. RDM v. 14.0 contains an all-new data storage engine optimized specifically for working with in-memory resident data sets. The new in-memory database (IMDB) allows for significant performance gains and a reduction in processing requirements when compared to older in-memory or on-disk implementations. Version 14.1 of RDM was released in Q1/18. The release focused on ease of use, portability, and speed. With Raima's new file format, databases can be developed once and deployed anywhere. Performance increased by 50–100%, depending on the use case, compared to previous RDM releases. Raima also extended and improved SQL support, snapshots, and geospatial functionality. Version 14.2 of RDM was released in 2020. The new release continues the focus on ease of use, portability, and speed. Multi-user focused storage format: the updated database file format increases database throughput through a focus on locking contention prevention.
Extended and improved geospatial functionality and a newly supported RESTful interface were also added to the database server functionality. Version 15.0 of RDM was released in Q2 2021. The focus of that release was speed, ease of use and new functionality. Custom-generated time series support was added to RDM, along with FFT support for data transformations, and an administrative GUI was introduced. Product features The source code lines and features of Raima Database Manager and RDM Server have been consolidated into one source code base. RDM includes these major features: updated in-memory support, time series and FFT support, snapshots, R-Tree support, compression, encryption, SQL, SQL PL, and platform independence—develop once, deploy anywhere. RDM includes portability options such as direct copy and paste that permit development and deployment on different target platforms, regardless of architecture or byte order. The release includes a streamlined cursor-based interface, extended SQL support and stored procedures that support SQL PL; it also supports ODBC (C, C++), ADO.NET (C#), RESTful and JDBC (Java). Supported development environments include Microsoft Visual Studio, Apple XCode, Eclipse and Wind River Workbench. A redesigned and optimized database file format architecture maintains ACID compliance and data safeguards, with separate formats for in-memory, on-disk or hybrid storage. File formats hide hardware platform specifics (e.g., byte ordering). Download packages include examples of RDM speed and performance benchmarks. Transactional File Server (TFS) The TFS is a software component within the RDM system that maintains safe multi-user transactional updates to a set of files and responds to page requests. The TFServer utility wraps the TFS so that it can run as a separate process, allowing users to run RDM in a distributed computing environment. The TFS may also be linked directly into an application to avoid the RPC overhead of calling a separate server. Modes of Operation Single-Process, Multi-Thread Multi-Process, separate Transactional File Server Multi-Process, shared in-process Transactional File Server Dynamic DDL Support for on-the-fly alterations of the database and tables themselves Encryption AES 128, 192, 256 bit Additional SQL Data Types Date Time Datetime Binary Unicode Bit Data Providers and Drivers: Interoperability ADO.Net 4.0 Data Provider JDBC 4.2 Type 4 Driver ODBC 3.51 Driver RESTful API Different “tree” support AVL-Tree Indexing support B-Tree indexing support R-Tree indexing support Hash table indexing support Snapshots Snapshot isolation allows concurrent reads of the database while write transactions are occurring. RDM takes a frozen image of the current state of the system, which can be read without blocking writes. At any point in time, the user can take a snapshot of specific tables by calling the rdm_dbStartSnapshot() API. Once done, the RDM system creates a static view of the specified tables; any subsequent changes to those tables are not reflected in the snapshot. The user is then free to issue writes to those tables outside of the snapshot, and reads within the snapshot view neither wait for those writes to complete nor prevent them from finishing. Once the snapshot is no longer needed, a simple end-transaction call discards it. This feature provides the largest possible number of simultaneous reads and writes.
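The snapshot behaviour just described can be pictured with a small conceptual sketch. The Python below is illustrative only (RDM's actual interface is the C-level rdm_dbStartSnapshot() call mentioned above, and a real engine would use row versioning rather than copying): a snapshot freezes the committed state of a table, and later writes neither block nor disturb readers of that frozen view.

import copy

class Table:
    """Conceptual model of snapshot isolation on a single table."""

    def __init__(self):
        self._rows = {}

    def write(self, key, value):
        # Writers mutate the live state without waiting for readers.
        self._rows[key] = value

    def start_snapshot(self):
        # A snapshot is a frozen view of the committed state. (A real
        # engine would use multi-version rows, not a deep copy.)
        return copy.deepcopy(self._rows)

t = Table()
t.write("sensor1", 10)
snap = t.start_snapshot()      # frozen view sees sensor1 == 10
t.write("sensor1", 99)         # the write proceeds concurrently
assert snap["sensor1"] == 10   # the snapshot is unaffected by the write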
Circular Tables support A record type, or table, can be defined as “circular.” With circular tables, when the table becomes full, RDM will still allow new record instances to be created. The new record instances will overwrite existing ones, starting with the oldest. RDM does not allow explicit deletion of record instances in a circular table. The definition of a circular table includes a size limit. This provides a useful way of allocating a fixed amount of storage space for storing the most recent instances of a particular record type. For example, this may be useful in storing event data that is being generated rapidly, where only the most recent data is relevant. Circular tables remove the risk that incoming data may fail to be stored due to lack of space, while avoiding the need for the application to delete obsolete data. Database limitations Maximum Databases Open Simultaneously: No Limit Maximum Records Per Database: No Limit Maximum Size of Database File: Limited only by file system Maximum Tables Per Database: No Limit Maximum Records Per Table: No Limit Maximum Record Size: 32 kb (excluding BLOB or VARCHAR) Maximum Fields Per Table: No Limit Maximum Keys Per Database: No Limit RAM Requirements: User-configurable, minimum 50 kb Code Footprint: Starting at ~270 kb, depending on OS and database features Data Types Supported Time Series BLOBs Character Widechar Varchar DBADDR (ROWID) Floating Point – 32 bit and 64 bit Integer – 8 bit, 16 bit, 32 bit and 64 bit C Struct (Core only) Data/Time/Timestamp BCD (SQL Decimal) – Binary-code-decimal is a standard database representation for financial applications. GUID Product features in depth Database Design Language (DDL) Non-SQL (core) DDL Features: C struct-like record type (table) declarations. Network model set declarations for defining 1-many inter-record relationships. Support for direct, B-tree, and hashed record access. In-memory database or file declarations. A database can be designed to be either on-disk or in-memory, or a hybrid where some parts reside in-memory while others are stored on disk. Circular record types (tables). Circular tables store a user-specified maximum number of records (rows). When that maximum has been reached, newly inserted records are stored in the location occupied by the oldest one. Circular tables are important for storing status data on resource-restricted devices. Supported datatypes: 8, 16, 32, and 64 bit signed or unsigned integers, float, double, decimal (BCD), fixed or variable-length character or wide character, binary or character large objects (blobs), date, time, timestamp, guid/uuid, and db_addr (database address—aka, rowid). Support for struct and array data fields. Optional user control over database file organization and page sizes. SQL DDL Features: Declared referential integrity support automatically implemented using RDM's network model sets. Support for direct, B-tree, and hashed row access. In-memory database or table declarations. Circular tables. Virtual tables declarations that provide SQL access to external data sources (e.g., real-time sensor data). Supported data types: boolean, tinyint, smallint, integer, bigint, decimal, real, float/double, binary/varbinary, long varbinary, char/varchar, wchar/wvarchar, long varchar, long wvarchar, date, time, timestamp, guid/uuid, rowid (foreign and primary keys). Domain declarations. Transactional File Server The RDM Transactional File Server (TFS) specializes in the serving and managing of database files on a given medium. 
The TFS is a set of functions called by the RDM runtime to manage the sharing of database files among one or more runtime library instances. In a normal multi-user configuration, the TFS functions are wrapped into a server process called TFServer. Standard TCP/IP can be used to make the connection, whether the runtime library and TFServer are on the same computer or different computers. However, when on the same computer, a faster, shared-memory protocol is available by default. One RDM client runtime may have connections to multiple TFServers, and one TFServer may be used by multiple client runtimes. To the applications using the RDM runtime and the TFServers, the locations of the other processes are invisible, so all processes may be on one computer, or all may be on different computers. This provides opportunities for true distributed processing. A TFServer should be considered a “database controller” in much the same way as a disk controller manages a storage device. A TFS is initialized with a root directory in which are stored all files managed by the TFS. If one computer has multiple disk controllers, it is recommended that one TFServer be assigned to each controller. This facilitates parallelism on one computer, especially when multiple CPU cores are also present. A complete application system may have multiple TFServers running on one computer, and multiple computers networked together. Each TFServer will be able to run in parallel with the others, allowing performance to scale accordingly. The TFS functions are used by the RDM runtime, so the programmer has no visibility of the calls made to them. These functions are made available to the runtime library in three forms, described here as TFSt, TFSr and TFSs: TFSt: The actual, full-featured TFS functions, called directly by the runtime library. Supports multiple threads in a single application. TFSr: The RPC (Remote Procedure Call) library. When called by the runtime library, these functions connect to one or more TFServer processes and call the TFS functions within them. A client/server configuration. TFSs: “Standalone” TFS functions called directly by the runtime library, but intended only for single-process use (if multiple threads are used, each must be accessing a different database only). To be used for high-throughput batch operations while the database(s) are otherwise offline. Unsafe (but fast) updates are allowed. Database unions RDM's database union feature provides a unified view of multiple identically structured databases. Since RDM allows highly distributed data storage and processing, this feature provides a mechanism for unifying the distributed data, giving it the appearance of a single, large database. As a simple illustration, consider a widely distributed database for an organization that has its headquarters in Seattle and branch offices in Boston, London and Mumbai. Each office owns and maintains employee records locally, but the headquarters also performs reporting on the entire organization. The database at each location has a structure identical to the others, and although it is a fully contained database at each location, it is also considered a partition of the larger global database. In this case, the partitioning is based on geographical location. The mechanism for querying a distributed database is simple for the programmer. When the database is opened, all partitions are referenced together, with OR symbols (“|”) between the individual partition names.
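A small conceptual sketch of the union mechanism may help. The Python below is illustrative only (open_union and the partition catalog are invented names, not RDM API calls): the application names every partition in a single '|'-separated open string, and a query then fans out across the partitions and merges the results, giving the appearance of one large database.

class Partition:
    """One identically structured partition, e.g. a branch-office database."""

    def __init__(self, name, rows):
        self.name = name
        self.rows = rows

def open_union(spec, catalog):
    # Mirror RDM's open string, e.g. "seattle|boston|london|mumbai".
    return [catalog[name] for name in spec.split("|")]

def select_all(partitions, predicate):
    # Fan the query out to every partition and merge the results.
    return [row for p in partitions for row in p.rows if predicate(row)]

catalog = {
    "seattle": Partition("seattle", [{"employee": "Ann"}]),
    "boston": Partition("boston", [{"employee": "Bob"}]),
}
union = open_union("seattle|boston", catalog)
print(select_all(union, lambda row: True))   # rows from both partitions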
Partitioning and unified queries are also used for scaling performance. Consider a database where each operation begins with a lookup of a record's primary key. If the “database” is composed of four partitions, each stored on the same multi-core computer, but on different disks controlled by different disk controllers, then the only requirement is a scheme that divides the primary key among the four partitions. If that scheme is a modulo of the primary key, then the application quickly determines which partition to store a record into or read the record from. Since there are multiple CPU cores to run the multiple processes (both the applications and the TFSs), and the four partitions are accessible in parallel (the four controllers permit this), the processing capacity is four times larger than with a single-core, single-disk, and single-partition configuration. Database encryption RDM allows all database content to be encrypted before it is transported across a network and written to the database files. RDM's encryption supports the Rijndael/AES encryption algorithm with 128-, 192- or 256-bit keys based on an application-specified encryption key. Database Mirroring and HA Support Database mirroring in RDM reproduces an exact, byte-for-byte copy of a master database onto the mirrored (or slave) database. Database mirroring is an important database feature for applications that require high availability (HA), where, should a TFServer fail for some reason, the application's HA monitor can automatically switch over to the mirrored TFServer. RDM provides synchronous mirroring, where each transaction that is committed on the master TFServer is also securely committed to the mirror TFServer. RDM also provides a set of HA support API functions that can be called from the application's HA monitor to check the operational status of the TFServers. Mirroring can also be used to maintain multiple copies of a database in which updates are made only to the master while readers are directed to one of the mirrored slaves, in order to distribute many possible database readers across multiple computers. In this situation, it is not necessary for the master to wait for each slave to confirm a successful commit of each transaction, and the mirroring process can run asynchronously. RDM database mirroring requires that the master and all mirrored databases be maintained on the same computer/operating system platforms. Database replication Replication (due to be released in Q2/18) is similar to mirroring but is intended not for HA support but for transferring all or, more likely, portions of one database (the master) to another database (the slave). Replication is designed to work where the databases are not necessarily maintained on the same platform. The slave databases can be other RDM-managed databases or they can be a third-party DBMS. RDM's replication includes support for multiple-master to single-slave selective replication of circular table data—important for embedded computers and devices at the edge of the data grid where status and condition monitoring occurs. The status data stored in each master's circular table is replicated to a central control system maintaining a permanent history of all device statuses, which can then be made available for a variety of time series and other analyses. RDM also provides a database change notification API library that allows a slave to access the master replication logs without the data being stored and managed in a database.
This allows, for example, a master to store device control information in a database that is replicated to the device through the notification API in order to efficiently control device operation. SQL/PL The RDM SQL Programming Language (SQL PL) is based on the ANSI/ISO SQL Persistent Stored Modules (PSM) specification (ISO/IEC 9075-4:2011 + 2012). It provides a high-level language in which stored procedures and functions can be written, compiled and called within the RDM SQL system. SQL PL is a computationally complete programming language for use in RDM SQL stored routines (procedures or functions). The language is block-structured, with the ability to declare variables that conform to the usual scoping rules, and an assignment statement so that values can be assigned to them. Control flow constructs provided include if-elseif-else and case statements, along with several loop control constructs (including while, repeat-until, and for loop statements). Seamless access to SQL is provided via the ability to execute most SQL statements, which can include references to locally declared variables. Also provided is the ability to declare cursors, allowing rows from select statements to be fetched into locally declared variables so that the result column values can be checked and manipulated within the stored routine. Exception handling is also provided, allowing handlers to be coded for specific errors, or classes of errors or statuses, returned from the execution of an SQL statement. In addition, it is possible to define user conditions and exception handlers, and for the program to signal its own special-purpose exceptions. RDM SQL has been designed specifically for use in embedded systems applications. Some of the more important features of RDM SQL include: Small footprint—no SQL views or security is provided, as these are usually unnecessary in embedded systems applications and their absence helps to keep the SQL footprint small. Standard SQL transaction and referential integrity support. The SQL system catalog and stored procedures can be stored in a file or as statically declared data structures in C modules. Cost-based query optimization with a rich set of built-in scalar and aggregate functions. A variety of table access methods are available for consideration by the optimizer: direct row access (through rowid primary keys), optimal primary/foreign key join access through network model sets, and B-tree and hash indexes. Ability to extend SQL capabilities through C-based user-defined scalar and aggregate functions. Ability to extend SQL capabilities through C-based user-defined virtual table interfaces that provide SQL access to external data sources such as real-time sensor data. Database table import/export to/from comma-delimited or XML files. Ability to have read-only access from SQL to a non-SQL (i.e., core-level) database. This means, for example, that a remote RDM SQL application can access a non-SQL RDM database running on a very resource-restricted device. Application programming interfaces RDM provides application programming interfaces that allow application development in a variety of programming languages: C-based Cursor API – Facilitates the traversal of database records for retrieval, insertion, update and removal. With Record, Key and Set cursors, it fits seamlessly with RDM's database concepts. It resembles the modern programming concept of iteration over a collection. Comprehensive SQL API – Accessed internally through a simplified ODBC-like API that uses a Raima design.
It also supports stored procedures and most other standard SQL. SQL Programming Language (PL) API – Allows programming logic to be written in pure SQL. Developers can leverage their knowledge of SQL and still add programming conditionals and logic. Standards-based ODBC API – Following the ODBC standards, the ODBC API was developed so developers have a familiar way to use the RDM database engine. JDBC – Standard Java interface to the RDM database engine, with two modes of operation: the first through TCP/IP and the second a direct link through JNI. ADO.NET – Standard C# interface. The supported connection method is through TCP/IP. RESTful – The RESTful API is designed for application developers who want to be able to view and modify database contents through HTTP GET, POST, PUT and DELETE methods, with a return format of JSON. Additionally, an administrative set of APIs is exposed to allow for a quick overview of the whole RDM subsystem's status. Items such as memory usage, CPU usage, database size, and database configuration are all available through the HTTP interface. This API is well suited to developers who are interested in web development or want to create a quick interface to an RDM database that is accessible on any platform through a web browser. Object-oriented C++ Cursor API – The C++ API was designed to be easy to use while providing developers with full access to, and control of, both RDM's network and relational functionality. Legacy Navigational C API – RDM's low-level C API is still supported, with minor changes required for the developer. Supported platforms RDM has been ported to a wide variety of computers and operating systems. Packages are available for the following platforms: Microsoft Windows 32, 64 bit Linux 32, 64 bit Android QNX Neutrino ARM/x86/PPC 32-bit Wind River VxWorks Green Hills Integrity macOS 64-bit HPUX PA-Risc/Itanium 32, 64 bit Solaris SPARC/x86 32, 64 bit AIX PPC 32, 64 bit iOS FreeRTOS RDM packages RDM consists of two packages: RDM Core and RDM Enterprise. RDM Core includes just the core cursor API interface; it is the underlying and most-optimized API, designed for use with the C programming language. RDM Enterprise contains both the core cursor API and the SQL interface, in addition to all of the remaining APIs. This package allows for the use of the C# ADO.NET interface, the Java JDBC interface, the RESTful API and the ODBC interface. It also has full support for third-party connectivity and administrative tools, in addition to supporting the full legacy API from previous versions of the RDM product line. Customers and applications RDM-based applications are used today in all major industries including Aerospace & Defense, Automotive, Business Automation, Financial, Government, Industrial Automation, Medical, and Telecommunication.
A sampling of RDM users includes the following: Mitsubishi Electric—iQ Platform C Controller PLC Schneider Electric—"ezXOS" in OASyS DNA product Hydro-Québec—CEDA system to manage setup and configuration of power plant alternators General Dynamics—"TIEF" – Tactical Information Exchange Capability database agent Boeing—"AWACS" – Airborne Warning and Control System's radar electronics system Raytheon—Low-level tactical flight profile management in Pave Hawk Lockheed Martin—Flight simulators Benu Networks—Broadband Service Delivery Platform Johnson & Johnson—VITROS patient systems Beckman Coulter—UniCel DxC 800 Synchron clinical system Siemens—RapidPoint 400 medical fluid test equipment IBM—ClearCase source code control system Magellan Navigation—MAPSEND GPS used in PC-based and embedded products NSE—Reliable stock trade data storage NCDEX—Real-time database services for trading application References External links Raima Database Manager RDM Evaluation Packages Proprietary database management systems Embedded databases
64674413
https://en.wikipedia.org/wiki/2019%E2%80%9320%20Troy%20Trojans%20women%27s%20basketball%20team
2019–20 Troy Trojans women's basketball team
The 2019–20 Troy Trojans women's basketball team represented Troy University during the 2019–20 NCAA Division I women's basketball season. The Trojans, led by seventh-year head coach Chanda Rigby, played their home games at Trojan Arena and were members of the Sun Belt Conference. They finished the season 25–4, 16–2 in Sun Belt play, to finish as regular-season champions. As the top seed, they received byes through the first and second rounds of the Sun Belt tournament. Before their first game, the tournament was canceled due to the COVID-19 pandemic. Preseason Sun Belt coaches poll On October 30, 2019, the Sun Belt released its preseason coaches poll, with the Trojans predicted to finish in second place in the conference. Sun Belt Preseason All-Conference team 2nd team Amber Rivers – SR, Forward 3rd team Japonica James – SR, Forward Roster Schedule Non-conference regular season Sun Belt regular season Sun Belt Women's Tournament Rankings 2019–20 NCAA Division I women's basketball rankings See also 2019–20 Troy Trojans men's basketball team References Troy Trojans women's basketball seasons Troy Troy
6605912
https://en.wikipedia.org/wiki/SS%20Ellan%20Vannin%20%281883%29
SS Ellan Vannin (1883)
SS (RMS) Ellan Vannin (the Manx name for the Isle of Man) was built as an iron paddle steamer in 1860 at Meadowside, Glasgow, for the Isle of Man Steam Packet Company. She was originally named Mona's Isle, the second ship in the company's history to be so named. She served for 23 years under that name before being rebuilt, re-engined and renamed in 1883. As Ellan Vannin she served for a further 26 years before being lost in a storm on 3 December 1909 in Liverpool Bay. Mona's Isle Mona's Isle was built by Tod and McGregor Ltd, Glasgow, at a cost of £10,673. She entered service with the Isle of Man Steam Packet fleet in June 1860. Mona's Isle is important in the history of the line, as she was the first vessel to be fitted with oscillating engines, which were also manufactured by Tod and McGregor Ltd. Until 1860 the company had always used the side-lever engines favoured by Robert Napier and Sons. The oscillating engine had advantages over the side-lever: it took less space and had fewer working parts. A further enhancement was the addition of improved feathering floats fitted to the paddle wheels. There was no requirement for a connecting rod, and the upper end of the piston rod was fitted with a bearing which worked directly on to the crankpin. The cylinder was placed vertically under the crankshaft and could pivot through a small arc, permitting the rod to follow the movement of the crank. The plant gave her a speed of approximately 12 knots. When launched, Mona's Isle measured 339 register tons. On the foggy morning of 5 February 1873, Mona's Isle ran aground at Ashton, Renfrewshire. She got off, undamaged, on the afternoon tide, and resumed her voyage to Glasgow from the Isle of Man. On 14 December 1878, arriving at Liverpool from Ramsey in thick fog, the ship grounded on Burbo Bank off New Brighton, but was refloated the following day. After 23 years of service, Mona's Isle was laid up at Ramsey and on 19 January 1883 was taken under tow to Barrow by the Fenella to be rebuilt. Ellan Vannin Rebuilt in 1883, she was enlarged to 375 gross register tons and her speed was increased. She was renamed Ellan Vannin (the Manx translation of Mona's Isle) on 16 November 1883, following her conversion to a propeller-driven ship. Ellan Vannin was a twin-screw vessel driven by a two-cylinder compound steam engine made by Westray, Copeland and Co. at Barrow. Her boiler pressure was also raised. She was capable of carrying 300 passengers and normally had a crew of 14. Ellan Vannin primarily operated out of Ramsey to Whitehaven, Liverpool and Scotland. She gave 26 more years of service, and became the main mail carrier out of Ramsey. In December 1891, she completed a special overhaul at the Naval Construction Works at Barrow, costing £2,913. By 1909 she was the smallest and oldest ship in the Steam Packet fleet. Ellan Vannin was considered an exceptionally fine vessel in bad weather, carrying out the daily mail contract when other vessels were safe in harbour. Indeed, stormy weather appeared to be no deterrent to her, and it is reported that when up to 12 ocean liners had been taking shelter in Ramsey Bay, Ellan Vannin steamed through them as she made passage to Whitehaven and returned in the evening, the completion of her voyage being heralded by the ships sheltering in the bay sounding their whistles. Ellan Vannin came to be looked upon as a mascot of the Steam Packet fleet, and was known by Manx sailors as the Li'l Daisy.
Loss On Friday 3 December 1909, Ellan Vannin left her home port of Ramsey at 01:13hrs, under the command of her master, 37-year-old Captain James Teare, who had some 18 years of experience working his way through the company to the position of master. During the summer of 1909 Teare was in command of the King Orry and had only joined the Ellan Vannin the day before her ill-fated voyage. Captain Teare was married with four children. Ellan Vannin was carrying 15 passengers and 21 crew, as well as mail and 60 tonnes of cargo which included approximately 60 sheep. The weather on departure was moderate and, although the barometric pressure was falling, the captain did not expect a significant deterioration in the weather. The wind direction on departure was from the northwest, meaning Ellan Vannin would have a following sea during her passage – something which would have caused her master no particular concern. However, the weather rapidly worsened and by 06:35hrs, when the ship arrived at the Mersey Bar lightvessel, the wind had risen to hurricane force 12, and waves of exceptional height were reported. A strong consensus at the time was that with a following sea the Ellan Vannin had made good time to the Bar lightship. Upon reaching the Bar her course would have been changed from approximately 130 degrees to 080 degrees as she entered the approach channel to the river. This would have caused her to take the sea on her port beam, with the result that she got sufficiently off her course to strike a sandbank, thereby causing her to founder between the Bar lightship and the Q1 buoy, sinking in the Mersey approach channel. It is believed she was broached by a large wave, which overwhelmed the ship. She was swept by heavy seas and filled, sinking by the stern with the loss of all passengers and crew. News of the disaster reached Douglas on the Friday evening, and the directors sat in almost continuous session until Monday. Communication was by telegram and information was difficult to ascertain. At approximately 19:00hrs a telegram was received which reported that the crew of the Formby lightship had seen lifebuoys, bags of turnips, several dead sheep and a piano floating near the lightship. It was also reported that the crew of the lightship had picked up a mail bag which was destined for the Birkenhead Post Office and which was found to contain letters despatched from Ramsey. The following morning the company offices in Douglas received a telegram from Liverpool stating that one of the Ellan Vannin's lifeboats had been washed ashore at New Brighton with its cover on and its working gear inside. Also washed ashore were parts of the ship's bridge. On Saturday morning Tynwald departed Douglas to replace Fenella on the Douglas – Liverpool service, with Fenella in turn taking the sailing to Ramsey which should have been undertaken by Ellan Vannin. It was five days after the ship went down that the first bodies were recovered. In January 1910, Capt. Teare's body was found washed ashore on Ainsdale beach in Southport. It was subsequently returned to the Isle of Man for burial. Aftermath The Board of Trade inquiry found that the captain was not to blame for the disaster and that the cause was extreme weather. The official inquiry referred to the exceptional height of the waves and declared the ship to have been in good condition and fully seaworthy. After the foundering, her masts broke the surface. Divers inspecting the ship found damage to the bows, and that the lifeboat davits had been swung out ready for lowering.
Soon after the disaster the Mersey Docks and Harbour Board destroyed the wreck using explosives, as it was causing a hazard to shipping in the channel. A disaster fund was established for the dependants of the deceased; the Steam Packet contributed £1,000 to this fund. It was set up by The Daily Telegraph at the writer Hall Caine's instigation. Caine headed the list of contributors and wrote a poem, The Loss of the Ellan Vannin. The crew of 21 included one woman, a Mrs. Collister, of Crosby, Isle of Man, who left one child. The 20 men were survived by 18 widows and 70 children. All but two of the crew lived on the Isle of Man. Five of the passengers came from off the Island, the rest from the north of the Island. The last beneficiary of the fund was a Miss Benson of Ramsey, the daughter of one of the crew; 20 at the time of the disaster and in very poor health, she died in 1974 at the age of 85. Although the Isle of Man Steam Packet Company has a tradition of reusing ship names, it has never reused the name Ellan Vannin. Commemorations In addition to Caine's poem following the disaster, another, "The Sorrowful Crossing", was written by Josephine Kermode. The Ellan Vannin Tragedy, a song written by Hughie Jones of the British folk band The Spinners, commemorates the disaster. To mark the centenary of the tragedy, in 2009 the Isle of Man Post Office issued two stamps, picturing Ellan Vannin and Captain Teare. Victims Crew Second in command to Capt. Teare was 45-year-old First Officer John Craine of Leigh Terrace, Douglas, who like Capt. Teare had worked his way up through the company. During the previous summer Craine had served on the Mona. Second Officer was John Kinley of Surby, Port Erin, who had previously served on the Fenella. William Kelly, of Mill St, Castletown, although sailing as deck crew, was a licensed mariner who had served as Second Officer on the Tynwald during the summer. Kelly's brother was also in the employ of the Isle of Man Steam Packet Company and had also been serving on the Ellan Vannin, leaving the ship only a day or so before the sailing. Passengers Of the passengers on board, two were leaving the Isle of Man for business overseas. Mark Joughin of Ballawhannell, Bride, Isle of Man, was on his way to America to make enquiries into an estate which had been left to him. Another passenger was Edgar Blevin of the accountancy firm Kerruish, Son & Blevin of Douglas and Liverpool, whilst Christopher Heaton-Johnson of Beaconsfield Towers, Ramsey, Isle of Man, was en route to India. Thomas Quayle of Andreas, Isle of Man, a former steward for the Archdeacon of Sodor and Man, was on his way to Liverpool in order to undergo an operation. He left a widow and three children. Two of the passengers, Daniel Newell of Croydon, Surrey, and Walter Williams of Earl's Court, London, were engaged in carrying out stone work on the Catholic church, Ramsey, Isle of Man. The passenger listed in contemporary records as Miss Louis Findlay was in fact Louisa Hannah Findlay, who was 21 at the time of her death. Her full address was 183 Grangehill Road, Eltham. Her death was registered in Ormskirk, Lancashire in January 1910, and she was buried at the church of St John the Baptist, Eltham on 21 January 1910. She was the second eldest of eight children born in Woolwich to William and Ann Findlay. 
Her younger siblings were born variously in Bombay, India, and Bermuda, suggesting that the family was well travelled - William rose to be an examiner with the Royal Arsenal in Woolwich. Gallery References Notes Bibliography External links Isle of Man disasters Video of Hughie Jones' Song "Ellan Vannin" - performed as part of the Irish Sea Sessions 2012 at Liverpool Philharmonic Hall 1860 ships Ships built on the River Clyde Ships of the Isle of Man Steam Packet Company Passenger ships of the Isle of Man Ferries of the Isle of Man Steamships of the Isle of Man Maritime incidents in February 1873 Maritime incidents in December 1878 Maritime incidents in 1909 Shipwrecks in the Irish Sea Shipwrecks of the River Mersey
24741111
https://en.wikipedia.org/wiki/Weev
Weev
Andrew Alan Escher Auernheimer (born 1985), best known by his pseudonym weev, is an American computer hacker and self-avowed Internet troll affiliated with the alt-right. The Southern Poverty Law Center has described him as a neo-Nazi, white supremacist, and antisemitic conspiracy theorist. He has used many aliases in contacting the media, although most sources indicate his real first name is Andrew. As a member of the hacker group Goatse Security, Auernheimer exposed a flaw in AT&T security that compromised the e-mail addresses of iPad users. In revealing the flaw to the media, the group also exposed personal data from over 100,000 people, which led to a criminal investigation and indictment for identity fraud and conspiracy. Auernheimer was sentenced to 41 months in federal prison, of which he served approximately 13 months before the conviction was vacated by a higher court. In 2016, Auernheimer was responsible for sending thousands of white-supremacist flyers to unsecured web-connected printers at multiple universities and other locations in the U.S. Since his release from prison, he has lived in a variety of locations in Eastern Europe and the Middle East. In 2016, he told an interviewer that he was living in Kharkiv. In 2017, it was reported that he was acting as webmaster for the neo-Nazi website The Daily Stormer. Early life and education Auernheimer was born in Arkansas in 1985. At age 14, in 1999, he enrolled at James Madison University to study mathematics, and dropped out in 2000. Despite his neo-Nazi affiliations, Auernheimer's mother has stated that he "comes from a 'large, mixed-race family' with Native American heritage, and that he most certainly has Jewish lineage 'on both sides of his family.'" Early hacking and trolling Auernheimer claimed responsibility for the reclassification of many books on gay issues as pornography on Amazon's services in April 2009. Amazon said that he was not responsible for the incident. Even before the Amazon incident, several media publications profiled him regarding his hacking and trolling activities, including The New York Times, in which he claimed to be a member of a hacker group called "the organization," making $10 million annually. He also claimed to be the owner of a Rolls-Royce Phantom. After the Times story on Auernheimer was published, reporters sought him out for commentary on hacking-related stories. Gawker published a story on the Sarah Palin email hacking incident and prominently featured Auernheimer's comments in the title of the story. In a 2008 interview, Auernheimer claimed responsibility for harassing the author and game developer Kathy Sierra in response to her "touchy" reaction to receiving threatening comments on her blog. This included posting a false account of her career online, including charges that she was a former sex worker, along with her home address and Social Security number. The post instigated further harassment and abuse of Sierra, which led her to withdraw from online activity for several years. Author Bailey Poland calls the "highly gendered nature" of his attacks on women a form of "cybersexism". He is a member of the Gay Nigger Association of America, an anti-blogging trolling group who take their name from the 1992 Danish movie Gayniggers from Outer Space. Members of Goatse Security involved with the iPad hack are also members of GNAA. 
He was also formerly GNAA's president. AT&T data breach Auernheimer was a member of the hacker group known as "Goatse Security" that exposed a flaw in AT&T security in June 2010, which allowed the e-mail addresses of iPad users to be revealed. The flaw was part of a publicly accessible URL, which allowed the group to collect the e-mails without having to break into AT&T's system. Contrary to what it first claimed, the group revealed the security flaw to Gawker Media before AT&T had been notified, and also exposed the data of 114,000 iPad users, including those of celebrities, the government and the military. The group's actions rekindled public debate on the disclosure of security flaws. Auernheimer maintains that Goatse Security used common industry-standard practices and has said that "we tried to be the good guys". Jennifer Granick of the Electronic Frontier Foundation has also defended the methods used by Goatse Security. Investigation The FBI opened an investigation into the incident, which led to a criminal complaint in January 2011 under the Computer Fraud and Abuse Act. Shortly after the investigation was opened, the FBI and local police raided Auernheimer's home in Arkansas. The search was related to the FBI's investigation of the AT&T security breach, but Auernheimer was instead detained on state drug charges. Police alleged that, during their execution of the search warrant related to the AT&T breach, they found cocaine, ecstasy, LSD, and Schedule 2 and 3 pharmaceuticals. He was released on $3,160 bail pending state trial. After his release on bail, he broke a gag order to protest what he maintained were violations of his civil rights. In particular, he disputed the legality of the search of his house and the denial of access to a public defender. He also asked for donations via PayPal to defray legal costs. All drug-related charges were dropped in January 2011, immediately following Auernheimer's arrest by federal authorities. The U.S. Justice Department announced that he would be charged with one count of conspiracy to access a computer without authorization and one count of fraud. Although his co-defendant, Daniel Spitler, was quickly released on bail, Auernheimer was initially denied bail because of his unemployment and lack of a family member to host him. He was incarcerated in the Federal Transfer Center, Oklahoma City before being released on $50,000 bail in late February 2011. A federal grand jury in Newark, New Jersey, indicted Auernheimer on one count of conspiracy to gain unauthorized access to computers and one count of identity theft in July 2011. By September 2011, he was free on bail and raising money for his legal defense fund. Trial On November 20, 2012, Auernheimer was found guilty of one count of identity fraud and one count of conspiracy to access a computer without authorization. On November 29, 2012, Auernheimer wrote an article in Wired entitled "Forget Disclosure – Hackers Should Keep Security Holes to Themselves," advocating the disclosure of any zero-day exploit only to individuals who will "use it in the interests of social justice." In a January 2013 TechCrunch article, he likened his prosecution to that of Aaron Swartz. Before his sentencing hearing, Auernheimer told reporters, "I'm going to jail for doing arithmetic". He was sentenced to 41 months in federal prison and ordered to pay $73,000 in restitution. 
Just prior to his sentencing, he posted an "Ask Me Anything" thread on Reddit; his comments, such as "I hope they give me the maximum, so people will rise up and storm the docks" and "My regret is being nice enough to give AT&T a chance to patch before dropping the dataset to Gawker. I won't nearly be as nice next time", were cited by the prosecution in court the next day as justification for the sentence. Later in March 2013, Orin Kerr, a lawyer and member of the George Washington University Law School faculty, joined Auernheimer's legal team free of charge. Imprisonment Auernheimer served his sentence at the Federal Correctional Institution, Allenwood Low, a low-security federal prison in Pennsylvania, and was originally scheduled for release in January 2016. On July 1, 2013, his legal team filed a brief with the Third Circuit Court of Appeals, arguing that his convictions should be reversed because he had not violated the relevant provisions of the Computer Fraud and Abuse Act. On April 11, 2014, the Third Circuit issued an opinion vacating Auernheimer's conviction, on the basis that the New Jersey venue was improper, since neither Auernheimer, his co-conspirators, nor AT&T's servers were in New Jersey at the time of the data breach. While the judges did not address the substantive question on the legality of the site access, they were skeptical of the original conviction, observing that no circumvention of passwords had occurred and that only publicly accessible information was obtained. He was released from prison on April 11, 2014. In a letter to the Federal government the following month, he demanded that compensation for his jailing be awarded in bitcoin. He referred to three men, including Oklahoma City bomber Timothy McVeigh, as being among "the greatest patriots of our generation" and wished to use the compensation to build memorials to them. The other two were Andrew Stack and Marvin Heemeyer, who had also died in violent incidents. (Stack flew his plane into a building in Austin, Texas; Heemeyer also took his own life, in his case after using a bulldozer to demolish many buildings in a Colorado town.) Auernheimer told a journalist from Vice: "I honestly think we need to build statues of them just to piss off federal agents really." After prison Following his release, Auernheimer lived for a time in Lebanon, Serbia, and Ukraine. In 2016, he told an interviewer that he was living in Kharkiv. The Southern Poverty Law Center (SPLC) reported Auernheimer to have left Ukraine in 2017 for Tiraspol, the capital of Transnistria. Far-right affiliations In early October 2014, The Daily Stormer published an article by Auernheimer in which he effectively identified himself as a white supremacist and neo-Nazi. He is known for his "extremely violent rhetoric advocating genocide of non-whites", according to the SPLC. "Hitler did nothing wrong", he tweeted in March 2016. Auernheimer's Twitter account was banned the following December. In incidents occurring in March and August 2016, Auernheimer sent flyers adorned with racist and anti-Semitic messages to thousands of unsecured printers across the United States; flyers bearing swastikas and promoting The Daily Stormer were sent to multiple universities. He claimed responsibility for 50,000 flyers sent to printers across the U.S. by using a tool to scour the Internet for unsecured printers, and in a blog post described finding over a million vulnerable devices. 
In an interview with The Washington Times, The Daily Stormer founder Andrew Anglin gave his approval of Auernheimer's actions concerning unsecured printers. In the second unsolicited flyer-printing incident, in August 2016, Auernheimer called for violence against individuals he considered non-white: "the hordes of our enemies from the blacks to the Jews to the federal agents are deserving of fates of violence so extreme that there is no limit to the acts by which can be done upon them in defense of the white race." He "unequivocally" supported the killing of children. The Southern Poverty Law Center speculated that the motivation for the attack was the then-imminent trial of Dylann Roof (later convicted of the Charleston church shooting). Auernheimer wrote of Roof: "I am thank thankful [sic] for his personal sacrifice of his life and future for white children." At the same time, he praised Anders Breivik, who was responsible for the 2011 Norway attacks in which 77 people died. "He is a hero of his people, and I cannot wait for his liberation from captivity at the hands of swine," Newsweek quoted Auernheimer as saying of Breivik in April 2016. He claimed to be in contact with a network of thousands of nationalists: "We all love and support him unconditionally. His lawsuit and Roman salute have only increased sympathy and appreciation for him." Emails leaked to BuzzFeed News in October 2017 revealed that Auernheimer was in contact with Milo Yiannopoulos, who had asked Auernheimer for advice on an article about the alt-right. Yiannopoulos asked his editor at Breitbart in April 2016 for permission for Auernheimer to appear on his podcast, an option which was rejected since editor Alex Marlow did not want Breitbart to associate with a "legit racist". In 2017, Auernheimer was reported to be working as the webmaster for The Daily Stormer. An SPLC analyst described Auernheimer and Anglin as "primary innovators" in the use of online trolling by right-wing extremists. Other data releases In October 2015, Auernheimer published the names of U.S. government employees who were exposed by the Adult FriendFinder and Ashley Madison data breaches. He told CNN: "I went straight for government employees because they seem the easiest to shame." Auernheimer was also involved in the release of the undercover Planned Parenthood videos, which were under a temporary restraining order. The Washington Post quoted him as saying he did it "for the lulz." References Further reading U.S. v. Auernheimer from the Electronic Frontier Foundation U.S. v. Auernheimer from the Digital Media Law Project External links weev's LiveJournal blog 1985 births Living people Alt-right Alt-right writers American neo-Nazis Hacking in the 2000s Internet trolls People associated with computer security People from Fayetteville, Arkansas Hacking in the 2010s Jewish fascists American people of Jewish descent American white supremacists American conspiracy theorists Hackers Prisoners and detainees of the United States federal government
33913807
https://en.wikipedia.org/wiki/List%20of%20autonomous%20higher%20education%20institutes%20in%20India
List of autonomous higher education institutes in India
The higher education system in India includes both private and public universities. Public universities are supported by the Government of India and the state governments, while private universities are mostly supported by various bodies and societies. Universities in India are recognized by the University Grants Commission (UGC), which draws its power from the University Grants Commission Act, 1956. In addition, 16 Professional Councils have been established, controlling different aspects of accreditation and coordination. The types of universities controlled by the UGC include Central universities, State universities, Deemed universities and Private universities. In addition to the above universities, other institutions are granted the permission to autonomously award degrees, and while not called "university" by name, act as such. They usually fall under the administrative control of the Department of Higher Education. In official documents they are called "autonomous bodies", "university-level institutions", or even simply "other central institutions". Such institutes include: Indian Institutes of Technology (IITs) are a group of autonomous engineering, science, and management institutes with special funding and administration. The Institutes of Technology Act, 1961 lists twenty-three IITs. National Institutes of Technology (NITs) are a group of engineering, science, technology and management institutes with special funding and administration. They were established as "Regional Engineering Colleges" and upgraded in 2003 to national status and central funding. The latest act governing NITs is the National Institutes of Technology Act, 2007, which declared them Institutes of National Importance. It lists thirty-one NITs. Indian Institutes of Management (IIMs) are a group of business schools created by the Government of India. IIMs are registered societies governed by their respective Boards of Governors. The Department of Higher Education lists 19 IIMs. Indian Institutes of Information Technology (IIITs) are a group of autonomous information technology oriented institutes with special funding and administration. The Indian Institutes of Information Technology Act lists five central and twenty public-private partnership IIITs. Schools of Planning and Architecture (SPAs) are a group of architecture and planning schools established by the Ministry of HRD, Government of India. All the SPAs are premier centrally funded institutions. Indian Institutes of Science Education and Research (IISERs) are a group of seven premier institutes established by the Ministry of Human Resource Development, devoted to science education and research in basic sciences. They are broadly set up on the lines of the Indian Institute of Science. All India Institutes of Medical Sciences (AIIMS) are a group of autonomous public medical colleges of higher education. As of 2020, these are 15 in number and are established by the Ministry of Health and Family Welfare. National Law Universities (NLU) are law schools established for the promotion of legal education and research. As of 2020, there are 22 NLUs in India, regulated by the Ministry of Law and Justice and the Bar Council of India. Institutes of National Importance (INIs) are institutions which are set up by an act of Parliament. They receive special recognition and funding. The Department of Higher Education's list includes 95 institutions, covering all of the AIIMS, IITs, NITs, SPAs and IIITs; some others, such as NIMHANS and ISI, have also been legally awarded the status. 
INIs are marked below with a hash (#). Institutes under State Legislature Act (IuSLAs) are autonomous higher education institutes established or incorporated by a State legislature act. Such institutes enjoy academic status and privileges like those of State universities. Government-Funded Institutes See also List of universities in India References
2318943
https://en.wikipedia.org/wiki/Backup%20software
Backup software
Backup software is a class of computer programs used to perform a backup; these programs create supplementary exact copies of files, databases or entire computers, and may later use those copies to restore the original contents in the event of data loss. Key features There are several features of backup software that make it more effective in backing up data. Volumes Voluming is the ability to compress and split backup data into separate parts for storage on smaller, removable media such as CDs. It was often used because CDs were easy to transport off-site and inexpensive compared to hard drives or servers. However, the increase in hard drive capacity and the decrease in drive cost have made voluming a far less popular solution. The introduction of small, portable, durable USB drives, together with the increase in broadband capacity, has provided easier and more secure methods of transporting backup data off-site. Data compression Since hard drive space has a cost, compressing the data reduces the size of the backup, allowing less drive space to be used and saving money. Access to open files Many backup solutions offer a plug-in for access to exclusive, in-use, and locked files. Differential and incremental backups Backup solutions generally support differential backups and incremental backups in addition to full backups, so that only material that is newer or changed compared to the already backed-up data is actually backed up (a minimal sketch of the incremental idea appears at the end of this article). This significantly increases the speed of the backup process over slow networks while decreasing space requirements. Schedules Backup schedules are usually supported to reduce maintenance of the backup tool and increase the reliability of the backups. Encryption To prevent data theft, some backup software offers cryptography features to protect the backup. Transaction mechanism To prevent the loss of previously backed-up data during a backup, some backup software (e.g. Areca Backup, Argentum Backup) offers a transaction mechanism (with commit/rollback management) for all critical processes (such as backups or merges) to guarantee the backups' integrity. See also Backup Cloud storage List of backup software References Utility software types
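As promised above, a minimal sketch in C of one incremental pass, assuming a POSIX environment: only regular files whose modification time is newer than the timestamp of the previous backup are copied. Recursion, metadata preservation, and error reporting are omitted, and all paths and function names are hypothetical.

```c
/* One incremental backup pass: copy only files changed since last time. */
#include <stdio.h>
#include <sys/stat.h>
#include <dirent.h>
#include <time.h>

/* Byte-for-byte copy; returns 0 on success, -1 on error. */
static int copy_file(const char *src, const char *dst)
{
    FILE *in = fopen(src, "rb");
    FILE *out = in ? fopen(dst, "wb") : NULL;
    if (!in || !out) { if (in) fclose(in); return -1; }
    char buf[8192];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);
    fclose(in);
    fclose(out);
    return 0;
}

/* Non-recursive incremental pass over src_dir into dst_dir. */
static void incremental_pass(const char *src_dir, const char *dst_dir,
                             time_t last_backup_time)
{
    DIR *d = opendir(src_dir);
    struct dirent *e;
    if (!d)
        return;
    while ((e = readdir(d)) != NULL) {
        char src[4096], dst[4096];
        struct stat st;
        snprintf(src, sizeof src, "%s/%s", src_dir, e->d_name);
        if (stat(src, &st) != 0 || !S_ISREG(st.st_mode))
            continue;                       /* skip non-regular entries */
        if (st.st_mtime <= last_backup_time)
            continue;                       /* unchanged since last pass */
        snprintf(dst, sizeof dst, "%s/%s", dst_dir, e->d_name);
        copy_file(src, dst);                /* changed: back it up */
    }
    closedir(d);
}

int main(void)
{
    /* Hypothetical: back up files changed in the last 24 hours. */
    incremental_pass("/home/user/docs", "/backup/docs",
                     time(NULL) - 24 * 3600);
    return 0;
}
```

A differential pass is the same loop with last_backup_time held at the time of the last full backup rather than at the most recent pass.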
1591708
https://en.wikipedia.org/wiki/SunView
SunView
SunView (Sun Visual Integrated Environment for Workstations, originally SunTools) was a windowing system from Sun Microsystems developed in the early 1980s. It was included as part of SunOS, Sun's UNIX implementation; unlike later UNIX windowing systems, much of it was implemented in the system kernel. SunView ran on Sun's desktop and deskside workstations, providing an interactive graphical environment for technical computing, document publishing, medical, and other applications of the 1980s, on high-resolution monochrome, greyscale and color displays. Bundled productivity applications SunView included a full suite of productivity applications, including an email reader, calendaring tool, text editor, clock, preferences, and menu management interface (all GUIs). The idea of shipping such clients and the associated server software with the base OS was several years ahead of the rest of the industry. Sun's original SunView application suite was later ported to X, featuring the OPEN LOOK look and feel. Known as the DeskSet productivity tool set, this was one distinguishing element of Sun's OpenWindows desktop environment. The DeskSet tools became a unifying element at the end of the Unix wars, during which the open systems industry had been embroiled in a battle lasting years. As part of the COSE initiative, it was decided that Sun's bundled applications would be ported yet again, this time to the Motif widget toolkit, and the result would be part of CDE. This became the standard for a time across all open systems vendors. The full suite of group productivity applications that Sun had bundled with the desktop workstations turned out to be a significant legacy of SunView. While the underlying windowing infrastructure changed, protocols changed, and windowing systems changed, the Sun applications remained largely the same, maintaining interoperability with previous implementations. Successors SunView was intended to be superseded by NeWS, a more sophisticated window system based on PostScript; however, the actual successor turned out to be OpenWindows, whose window server supported SunView, NeWS and the X Window System. Support for the display of SunView programs was phased out after Solaris 2.2. Sun provided a toolkit for X called XView, with an API similar to that of SunView, simplifying the transition for developers between the two environments. Sun later announced its migration to the GNOME desktop environment from CDE, presumably marking the end of the 20-year-plus history of the SunView/DeskSet code base. Sun Microsystems software Windowing systems Widget toolkits
10884994
https://en.wikipedia.org/wiki/SystemBuilder/SB%2B
SystemBuilder/SB+
SB/XA is a 4GL development and runtime environment originally written for the Pick family of computer databases/environments and now part of the Rocket U2 software suite. The SystemBuilder environment comprises SB+ Server, often running on a Rocket U2 database, SBClient, which runs as a Microsoft Windows desktop client, and the SB/XA Communications server for browser clients. The product can be run in either developer or runtime mode. The development environment enables rapid prototyping, development and deployment of applications and supports a variety of user interface environments. History System Builder, originally owned by Computermatic PL, was started in a garden shed in South Africa by first cousins Neill and Derek Miller in 1982. The popularity of the Pick database system, combined with the lack of a good development framework, led them to develop a tool to build standard menus and screens. The product was very successful and, after expanding into international markets and after a few versions, they began to re-develop the product from the ground up. This was to become SB+, which was released in early 1990. Traditionally, up to this time, Pick systems were accessed using green-screen terminals like the Wyse60 or VT100, but with the rise of the PC a new paradigm presented itself, and so around the end of 1991 the product was enhanced with the addition of a specialised terminal emulation client called Termulator. This was able to tightly integrate the server and the PC to allow for facilities like downloads direct from the server into Lotus 1-2-3 or the new kid on the block, Microsoft Excel. Shortly after, the client program was renamed SBClient and gained the ability to develop and render screens in either character or GUI mode. Having been bought by Unidata Corporation in 1996, and following Unidata's merger with VMark Software Inc to form Ardent Software in 1998, the SystemBuilder product set came under the ownership of Informix in 2000, following Informix's purchase of Ardent Software for its DataStage product. Subsequently, in 2001, Informix themselves were bought by IBM, and the U2 and SystemBuilder products eventually found their way into IBM's Data Management portfolio. Development of SystemBuilder and RedBack continued in Sydney until 2005, when the teams were merged with the U2 development team located in Denver, United States. On 1 October 2009, Rocket Software announced the purchase of the entire U2 suite, which includes SystemBuilder, from IBM. The SystemBuilder Development Environment The System Builder/SB+ server environment is based around a set of key tools and utilities. These combine to provide a powerful and comprehensive development environment which is, itself, built mainly from these tools. SB+ includes an application menuing system, a screen generator, a 3GL programming language, an expression language, GUI components and a report writer tool. Evolution In August 2008, System Builder released SB/XA v6.0.0, which includes many enhancements to the System Builder suite, including a new user interface based on Web/XAML protocols. The most recent iteration of SB/XA may be found in the Rocket Software Product Matrix. External links System Builder home page "SB+ Solutions" by Kevin King References SB+
51498967
https://en.wikipedia.org/wiki/Zstd
Zstd
Zstandard, commonly known by the name of its reference implementation zstd, is a lossless data compression algorithm developed by Yann Collet at Facebook. The reference implementation is written in C; version 1 of it was released as open-source software on 31 August 2016. Features Zstandard was designed to give a compression ratio comparable to that of the DEFLATE algorithm (developed in 1991 and used in the original ZIP and gzip programs), but faster, especially for decompression. It is tunable, with compression levels ranging from negative 7 (fastest) to 22 (slowest in compression speed, but best compression ratio). The zstd package includes parallel (multi-threaded) implementations of both compression and decompression. Starting from version 1.3.2 (October 2017), zstd optionally implements very-long-range search and deduplication (128 MiB window) similar to rzip or lrzip. Compression speed can vary by a factor of 20 or more between the fastest and slowest levels, while decompression is uniformly fast, varying by less than 20% between the fastest and slowest levels. The Zstandard command-line tool has an "adaptive" mode that varies the compression level depending on I/O conditions, mainly how fast it can write the output. Zstd at its maximum compression level gives a compression ratio close to lzma, lzham, and ppmx, and performs better than lza or bzip2. Zstandard reaches the current Pareto frontier, as it decompresses faster than any other currently available algorithm with a similar or better compression ratio. Dictionaries can have a large impact on the compression ratio of small files, so Zstandard can use a user-provided compression dictionary. It also offers a training mode, able to generate a dictionary from a set of samples. In particular, one dictionary can be loaded to process large sets of files with redundancy between files, but not necessarily within each file, e.g., log files. Design Zstandard combines a dictionary-matching stage (LZ77) with a large search window and a fast entropy coding stage, using both Finite State Entropy (a fast tabled version of ANS, tANS, used for entries in the Sequences section) and Huffman coding (used for entries in the Literals section). Because of the way that FSE carries over state between symbols, decompression involves processing symbols within the Sequences section of each block in reverse order (from last to first). A minimal sketch of the library's one-shot C API appears below. Usage The Linux kernel has included Zstandard since November 2017 (version 4.14) as a compression method for the btrfs and squashfs filesystems. In 2017, Allan Jude integrated Zstandard into the FreeBSD kernel, and it was subsequently integrated as a compressor option for core dumps (both user programs and kernel panics). It was also used to create a proof-of-concept OpenZFS compression method, which was integrated in 2020. The AWS Redshift and RocksDB databases include support for field compression using Zstandard. In March 2018, Canonical tested the use of zstd as a deb package compression method by default for the Ubuntu Linux distribution. Compared with xz compression of deb packages, zstd at level 19 decompresses significantly faster, but at the cost of 6% larger package files. Debian developer Ian Jackson favored waiting several years before official adoption. In 2018 the algorithm was published as RFC 8478, which also defines an associated media type "application/zstd", filename extension "zst", and HTTP content encoding "zstd". 
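The sketch below shows a round trip through the one-shot "simple API" of the reference C library; the payload, the choice of level 19, and the reduced error handling are arbitrary illustration choices (link with -lzstd).

```c
/* Round-trip compression with zstd's one-shot simple API. */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <zstd.h>

int main(void)
{
    const char src[] = "example payload example payload example payload";
    size_t const srcSize = sizeof src;

    /* Worst-case size of the compressed frame for a one-shot call. */
    size_t const bound = ZSTD_compressBound(srcSize);
    void *frame = malloc(bound);

    size_t const cSize = ZSTD_compress(frame, bound, src, srcSize, 19);
    if (ZSTD_isError(cSize)) {
        fprintf(stderr, "compress: %s\n", ZSTD_getErrorName(cSize));
        return 1;
    }

    char *restored = malloc(srcSize);
    size_t const dSize = ZSTD_decompress(restored, srcSize, frame, cSize);
    if (ZSTD_isError(dSize) || dSize != srcSize ||
        memcmp(src, restored, srcSize) != 0) {
        fprintf(stderr, "round-trip failed\n");
        return 1;
    }

    printf("%zu bytes -> %zu bytes\n", srcSize, cSize);
    free(frame);
    free(restored);
    return 0;
}
```

The dictionary workflow described under Features is exposed in the same style: the companion zdict.h header provides ZDICT_trainFromBuffer() for the training mode, and a trained dictionary can then be supplied to calls such as ZSTD_compress_usingDict().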
Arch Linux added support for zstd as a package compression method in October 2019 with the release of the pacman 5.2 package manager, and in January 2020 switched from xz to zstd for the packages in the official repository. Arch uses zstd -c -T0 --ultra -20 -; compared with xz, the combined size of all compressed packages increased by 0.8%, decompression is 14 times faster, decompression memory use increased by 50 MiB when multiple threads are used, and compression memory use increases but scales with the number of threads used. Arch Linux later also switched to zstd as the default compression algorithm for the mkinitcpio initial ramdisk generator. Fedora added Zstandard support to RPM in May 2018 (Fedora release 28) and used it for packaging the release in October 2019 (Fedora 31). A full implementation of the algorithm, with an option to choose the compression level, is used in the .NSZ / .XCZ file formats, developed by the homebrew community for the Nintendo Switch hybrid game console. 7-Zip ZS, a fork of 7-Zip FM with Zstandard (and other formats) support, is developed by Tino Reichardt. Modern7z, a Zstandard (and other formats) plugin for 7-Zip FM, is developed by Denis Anisimov (TC4shell). License The reference implementation is licensed under the BSD license, published at GitHub. Since version 1.0, it had an additional Grant of Patent Rights. From version 1.3.1, this patent grant was dropped and the license was changed to a BSD + GPLv2 dual license. See also Zlib LZFSE – a similar algorithm by Apple, used since iOS 9 and OS X 10.11, made open source on 1 June 2016 LZ4 (compression algorithm) – a fast member of the LZ77 family References External links "Smaller and faster data compression with Zstandard", Yann Collet and Chip Turner, 31 August 2016, Facebook Announcement The Guardian is using ZStandard instead of zlib Lossless compression algorithms Free data compression software C (programming language) libraries 2016 software Software using the BSD license
26949079
https://en.wikipedia.org/wiki/WebSphere%20Optimized%20Local%20Adapters
WebSphere Optimized Local Adapters
IBM WebSphere Optimized Local Adapters (OLA or WOLA) is a functional component of IBM's WebSphere Application Server for z/OS that provides an efficient cross-memory mechanism for calls both inbound to WAS z/OS and outbound from z/OS. Because it avoids the overhead of other communication mechanisms, it is capable of high-volume exchange of messages. WOLA is an extension to the existing cross-memory exchange mechanism of WAS z/OS, with WOLA providing an external interface so z/OS address spaces outside the WAS z/OS server may participate in cross-memory exchanges. WOLA supports connectivity between a WAS z/OS server and one or more of the following: CICS, IMS, Batch, UNIX System Services and ALCS. WOLA was first made available in WAS z/OS Version 7, Fixpack 4 (7.0.0.4). Functional enhancements have appeared in subsequent fixpacks, as documented in this article. History The WebSphere Optimized Local Adapters for WAS z/OS (WOLA or OLA for short) has its origins in a desire to provide an efficient inbound calling mechanism; that is, from outside the Java EE environment into it, to exercise Java EE assets. This requirement was particularly pronounced on z/OS, where traditional batch processing sought the use of a growing base of programming assets based on Java EE and EJB technology. Other inbound solutions existed, for example: Messaging, such as WebSphere MQ or other JMS providers. RMI/IIOP Web Services While each had its respective strengths, each also had its particular shortcomings: overhead and latency, difficulty in construction, or deficiencies in the security or transaction propagation model. This was the original design point for the Optimized Local Adapters. The architects of the solution extended the design to include bi-directional invocations: inbound to WAS z/OS from an external address space, and outbound from WAS to an external address space. Technical Foundation The architects of this solution chose to leverage an existing element of the WAS z/OS design called "local communications," a cross-memory mechanism used by WebSphere Application Server for z/OS since the V4.x days that optimized IIOP traffic between application servers on the same LPAR. OLA is essentially an externalization of that existing cross-memory mechanism, so that address spaces outside WAS z/OS may connect and exchange messages across a shared memory space. External address space programs access the OLA interface using a set of supplied APIs. Java programs running in WAS z/OS access the OLA interface through an implementation packaged as a standard JCA resource adapter. Current Support The external address spaces currently supported for WAS z/OS OLA are: IBM CICS Batch Jobs UNIX System Services (USS) Airline Control System (ALCS) IMS (support started with maintenance 7.0.0.12) The programming languages supported in the external address spaces are: C/C++ COBOL High Level Assembler PL/I Java is the programming language used to access WAS z/OS OLA from inside the Java EE containers of WAS z/OS. Function Update History IBM WebSphere Optimized Local Adapters function support has been updated as new versions or fixpacks are released. The function was first made available at the WAS z/OS Version 7 Release 0 Fixpack 4 level (7.0.0.4). 7.0.0.4 WOLA was introduced with Fixpack 4 to the WAS z/OS Version 7 Release 0 product. Application of the maintenance resulted in a new directory in the product file system that provided the WOLA modules, shared objects, JCA resource adapter and development class libraries. 
A shell script (olaInstall.sh) created the necessary UNIX symbolic links from the runtime environment to the product install file system. The functional support offered in the 7.0.0.4 release was: Support for CICS, Batch, USS, and ALCS One-phase commit for outbound WAS into CICS (2PC into CICS TS 4.1 provided with 7.0.0.12) Two-phase commit for inbound CICS into WAS Native APIs JCA resource adapter 7.0.0.12 Fixpack 12 to WAS z/OS Version 7 Release 0 provided two updates to the WOLA support: Support for WOLA and IMS Two-phase commit transaction processing from WAS outbound to CICS TS 4.1 8.0.0.0 WebSphere Application Server for z/OS Version 8 Release 0 continued support for WebSphere Optimized Local Adapters. WOLA was shipped incorporated into the product, which meant running olaInstall.sh was no longer required to create UNIX symbolic links to the product files. In addition, the following function updates were provided: Multi-segment large message support (greater than 32K in size) for work with IMS Support for inbound transaction classification of WOLA calls separate from IIOP calls Identification within the SMF 120.9 record for WOLA calls as WOLA rather than IIOP Resource failure identification and alternative JNDI failover Resource Failover and Failback This function provides a means of detecting the loss of a data resource attached to a JCA connection factory and automatically failing over to a defined alternate JNDI name. Detection of primary data resource recovery and failback is also an element of this functional design. The resource failover design is present in WebSphere Application Server Version 8 across all platforms for JDBC and JCA. WAS z/OS Version 8 provides support for WOLA resource failover as part of the general support for JCA resource failover. Invocation of the failover occurs when a configurable threshold number of consecutive failures occurs. After failover is invoked, all new requests are routed to the alternate connection factory connection pool. Failback occurs when WAS z/OS determines the failed primary data resource has returned. New requests are then processed against the primary connection factory. A common usage pattern for this function is outbound to CICS where the target CICS region is a routing region. This failover function provides the ability to architect multiple routing regions so that the loss of any single routing region does not affect the overall availability of CICS. Several connection pool custom properties were added to support this resource failover and failback mechanism: failureThreshold - the number of consecutive failures that must occur before automatic failover is invoked alternateResourceJNDIName - the JNDI name of the alternate connection factory to use if automatic failover is invoked resourceAvailabilityTestRetryInterval - the interval in seconds WAS employs to test for the return of the primary resource Note: other connection pool custom properties exist for this function. Search on the string "cdat_dsfailover" in the WAS z/OS InfoCenter for a complete listing. 
8.0.0.1 / 8.5.0.0 Note: WAS z/OS 8.5.0.0 provides WOLA support functionally identical to 8.0.0.1. Fixpack 1 to WebSphere Application Server for z/OS Version 8 provided the following functional updates to WOLA: 64-bit callable native APIs for C/C++ programs operating in 64-bit mode SMF 120 subtype 10 records for WOLA outbound calls from WAS (SMF 120 subtype 9 captures inbound call information) Work Distribution - the ability to round-robin outbound calls across multiple external registrations of the same name Proxy support for remote access - this takes two forms: inbound and outbound 64-bit Callable Native API Modules Prior to 8.0.0.1 the native API modules were supplied in 31-bit callable format only. These modules had the four-character prefix BBOA* associated with each module name. With 8.0.0.1 both 31-bit and 64-bit callable API modules are provided. The 31-bit modules retain the four-character prefix BBOA* for each module name. The 64-bit modules carry the four-character prefix BBGA* for each module name. The number of APIs remains the same as before: 13 specific APIs. Usage is the same as before. InfoCenter Search: cdat_olaapis SMF 120.10 for WOLA Outbound Calls In WAS z/OS V7 the WOLA support for SMF was limited to inbound calls only. Inbound WOLA calls to target EJBs in the WAS z/OS container were identified as IIOP calls and captured by SMF as IIOP calls, indistinguishable from any other IIOP call. The normal WAS z/OS SMF 120 subtype 9 record (or 120.9 in shorthand notation) was used to capture the inbound call information. With WAS z/OS 8.0.0.0 the SMF 120.9 record and capture function was modified to identify inbound WOLA calls separately from inbound IIOP calls. With WAS z/OS 8.0.0.1 the SMF 120.10 record was created to capture information about outbound calls from WAS z/OS. The SMF 120.10 record has eight sections: Platform neutral server information section z/OS server information section Outbound Request information section WOLA Outbound Request type specific section Outbound Request transaction context section Outbound Request security context section Outbound Request CICS context section OTMA Outbound Request type specific section One record is created for each outbound request. InfoCenter Search: rtrb_SMFsubtype10 Work Distribution This functional update provides the ability to distribute outbound calls across multiple external address spaces registered into a given WAS z/OS server using the same registration name. A common usage pattern for this would be multiple CICS regions with the same stateless target program service deployed. A new environment variable was created to indicate the type of work distribution desired. InfoCenter Search: cdat_olacustprop Proxy Support: Inbound and Outbound The cross-memory nature of WOLA communications implies the WAS z/OS server and the external address space must be on the same z/OS logical partition (LPAR). WAS z/OS 8.0.0.1 provides a proxy function to allow WOLA callers and WOLA targets to be located separately. This includes location on operating system instances other than z/OS. This function has two formats: proxy support for outbound calls, and proxy support for inbound calls. Proxy Support for Outbound Calls This provides a mechanism by which Java applications may use the supplied WOLA JCA resource adapter to access a target address space on a remote z/OS system. An example usage pattern would be development or test of a proposed application. 
Access to the cross-memory WOLA connection on the target z/OS system is provided by a supplied WOLA proxy application installed in a WAS z/OS server enabled for WOLA. In this topology, the network flow from the application to the WAS z/OS system is by way of IIOP. The WOLA connection factory is informed of this IIOP flow to the proxy by way of several new custom properties on the connection pool. The proxy application on WAS z/OS receives the call and forwards it over an actual cross-memory WOLA connection to the named target service. This topology has limitations compared to outbound WOLA calls on the same z/OS LPAR: global transactions requiring two-phase commit cannot be propagated across the IIOP connection to the WOLA proxy, and the user identity on the WAS thread cannot be asserted into the target service on z/OS. Proxy Support for Inbound Calls This provides a mechanism by which non-Java applications in an external address space may make inbound calls to a target WOLA-enabled EJB in a remote WAS instance, either on another z/OS LPAR or a distributed WAS platform. The same supplied WOLA proxy application installed in a local WAS z/OS instance is required to handle the initial cross-memory WOLA call and forward it to the named target EJB on the remote WAS instance. In this topology, the target WOLA-enabled EJB is unaware the proxy is in use. The inbound flow arrives as an IIOP call just as it does if cross-memory WOLA on the same LPAR is used. The calling program must indicate that the flow will use the proxy service. This is done by setting the requesttype parameter on BBOA1INV (or BBOA1SRQ) to 2. This tells the local proxy application to treat the requested service, which is specified as the JNDI name of the target EJB, as a request to invoke the EJB using IIOP. This requires the local and remote WAS instances to have federated name spaces or to operate as a single cell for the JNDI lookup to succeed. 8.0.0.3 and 8.0.0.4 / 8.5.0.1 In 8.0.0.3 (and 8.5.0.1) WOLA support was included in IBM Integration Designer for BPEL processes. In 8.0.0.4 (and 8.5.0.1) support was updated to include RRS transaction context assertion from IMS dependent regions into WAS over WOLA: Applications in IMS set the "transaction supported" flag on the register API The target WAS environment has the ola_rrs_context_propagate = 1 environment variable set and enabled The IMS Control Region needs to be running with RRS=Y 8.0.0.5 (and 8.5.0.2) Fixpack 8.0.0.5 / 8.5.0.2 provided two functional enhancements: (1) RRS transaction context assertion from WAS into IMS over WOLA / OTMA, and (2) enhanced support for CICS channels and containers. For IMS transactions: The IMS Control Region needs to be running with RRS=Y The target WAS environment has the ola_rrs_context_propagate_otma = 1 environment variable set and enabled Prior to 8.0.0.5 / 8.5.0.2, the CICS channels and containers support was limited to a single fixed-name channel for both request and response, and a single container of type BIT or CHAR. With 8.0.0.5 / 8.5.0.2: One or more containers can be sent to and received from the target CICS program The channel name is set using the setLinkTaskChanID() method The channel type is set using the setLinkTaskChanType() method The names of the individual request containers are set by adding data to the MappedRecord, using the put() method. 
The keys of the MappedRecord correspond to the CICS container names, and the corresponding values are used to fill the containers in CICS. The response container names are extracted from the channel after the CICS request is finished, and populated into a new MappedRecord, which is returned to the client. Components The Optimized Local Adapters may be categorized into the following components: Interface Modules -- provide the programmatic access to the OLA interface and the OLA APIs CICS Task Related User Exit, Link Task Server and control transaction -- provide a simplified mechanism for supporting outbound calls to program assets in CICS JCA Resource Adapter -- provides the interface between the Java environment and the external environment Development Tooling Support -- provides the supporting classes for developing OLA-enabled applications Samples -- a set of C/C++, COBOL and Java samples that illustrate the use of the programming model Overview of CICS support The Optimized Local Adapters are implemented in CICS as a Task Related User Exit (TRUE). This is what provides the essential cross-memory connectivity from CICS to the WAS z/OS address space. In addition, a Link Server Task (BBO$) and a Link Invocation Task (BBO#) are supplied for calls from WAS to CICS. The BBO$/BBO# link server tasks shield programming specifics from CICS programs. The OLA call from WAS is handled by these supplied tasks, and the named CICS program is invoked with the standard EXEC CICS LINK call. The named CICS program remains unchanged and unaware the call came from WAS using OLA. The target program in CICS must be able to be invoked with a LINK call. Both COMMAREA and Channels/Containers are supported. A BBOC transaction is also supplied to provide a set of control commands to do things such as manually start the TRUE (if not in PLTPI), stop the TRUE, start and stop the Link Server, as well as other control and management functions. The OLA programming interface module library data set must be concatenated to the CICS region's DFHRPL DD statement. Overview of IMS support The Optimized Local Adapters are implemented as an external subsystem to IMS. Usage is supported for Message Processing Programs (MPP), Batch Message Processing programs (BMP), IMS Fast Path (IFP) and Batch DL/I applications. Calls from IMS into WAS use the External Subsystem Attach Facility (ESAF). This is the same interface as used by other subsystems such as DB2 or MQ. Calls from WAS into the IMS dependent region may be done using OTMA or directly (that is, the program in IMS uses the OLA APIs to "host a service" as described below). OTMA provides OLA transparency to the IMS applications at the cost of some overhead. Using the OLA APIs in the IMS application reduces the overhead, which results in better performance and throughput. The programming APIs for IMS are the same in format and syntax as those introduced originally, but they have been updated to be aware of IMS when running there and to use ESAF. Further, the ola.rar file that implements the JCA resource adapter for WAS must be the one shipped with Fixpack 7.0.0.12 or later to use with IMS. The method parameters have been updated for the IMS support, and that update is made available to WAS by re-installing the ola.rar that comes with 7.0.0.12. 
Programming Considerations Inbound to WAS z/OS The external address space accesses the OLA mechanism through the supplied interface modules and documented APIs. There are 13 APIs at the present time; they are categorized in the next section. Java programs running in the WAS z/OS environment wishing to be the target of an invocation from outside must implement the OLA interface in a stateless session bean, using the OLA class files supplied in the development tooling support. Outbound from WAS z/OS A Java program wishing to initiate an OLA call outbound may be implemented as either a servlet or an EJB. The Java program codes to the supplied JCA resource adapter (ola.rar) using the class files supplied in the development tooling support. External address spaces that are the target of the outbound call must be in a state ready to accept the call. Two basic models exist: If the external address space is CICS, then the user has the option to employ the supplied Link Server Task to act as the receiving agent on behalf of existing CICS program assets. The Link Server task (BBO$ by default) receives the call and issues an EXEC CICS LINK for the program named on interactionSpecImpl.setServiceName(). No changes to the existing CICS program are necessary provided it supports either COMMAREA or Channels/Containers. If the external address space is IMS, then the call may be made using the IMS OTMA interface (which implies no change to the IMS application), or directly using OLA (which implies using the OLA APIs in the IMS program to "host a service"). If the external address space is something other than CICS or IMS, then the program needs to "host a service" using one of the supplied APIs. That puts the program in a state ready to receive a call from the Java program in WAS z/OS. When the call is received, it may then process the request and supply a response back to the Java program in WAS z/OS. Synchronous and Asynchronous Operations The APIs support both modes. Synchronous provides a simpler programming model because program control is not returned to the calling program until a response has been received. Asynchronous provides the architect with an opportunity to process other work without having to wait on a response coming back from a long-running target process. Modular Design It is possible to design the OLA-specific programming artifacts to serve as "bridges" between the OLA interface and existing assets. That serves to minimize the impact on existing programming assets and limits the degree of "platform lock-in." Outbound to CICS—use the provided Link Server implementation; no changes to your CICS programs at all. Inbound to WAS—construct an EJB that takes the OLA call, then turns and calls the specified EJB. If the target EJB is in the same JVM then it can be highly efficient. If the target EJB is in the same cell on the same LPAR then the previously mentioned "local communications" function is used. 
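Before the API catalogue in the next section, the canonical synchronous inbound sequence (register, invoke repeatedly, unregister) can be sketched as follows. This is an illustration only: the BBOA1* module names are real, but the C prototypes, parameters, and values shown here are assumptions made for readability, not the documented linkage (see InfoCenter search: cdat_olaapis).

```c
/* Illustrative-only sketch of the synchronous inbound pattern.
 * The prototypes below are hypothetical; each call is assumed to
 * return a return code (RC) and pass back a reason code (RSN). */
#include <string.h>

int BBOA1REG(const char *daemonGroup, const char *nodeName,
             const char *serverName, const char *registerName,
             int minConns, int maxConns, int flags, int *rsn);
int BBOA1INV(const char *registerName, const char *serviceName,
             const char *reqData, int reqLen,
             char *respData, int *respLen, int *rsn);
int BBOA1URG(const char *registerName, int *rsn);

int call_target_ejb(void)
{
    int rc, rsn, i;
    char response[32768];

    /* Register once into the WAS z/OS Daemon group (cell short name). */
    rc = BBOA1REG("CELL1", "NODE1", "SERVER1", "BATCH01", 1, 10, 0, &rsn);
    if (rc != 0)
        return rc;

    /* Synchronous invocations: control is held until the EJB responds. */
    for (i = 0; i < 100; i++) {
        const char *req = "request payload";
        int respLen = (int)sizeof response;
        rc = BBOA1INV("BATCH01", "ejb/TargetService",
                      req, (int)strlen(req), response, &respLen, &rsn);
        if (rc != 0)
            break;              /* inspect rc/rsn per the InfoCenter */
    }

    /* Unregister from the Daemon group when finished. */
    BBOA1URG("BATCH01", &rsn);
    return rc;
}
```

The asynchronous variant described in the next section replaces the BBOA1INV call with BBOA1SRQ (async=1), followed later by BBOA1RCL to retrieve the response length.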
APIs There are 13 APIs, grouped into the following categories: General Setup and Teardown -- BBOA1REG (register) and BBOA1URG (unregister) Inbound Basic -- BBOA1INV (invoke with automatic get response) Inbound Advanced -- BBOA1CNG (get connection), BBOA1SRQ (send request), BBOA1RCL (get response length), BBOA1GET (get message data), BBOA1CNR (release connection) Outbound Basic -- BBOA1SRV (host a service), BBOA1SRP (send response) Outbound Advanced -- BBOA1RCA (receive on connection any), BBOA1RCS (receive on connection specific), BBOA1GET (get message data), BBOA1SRP (send response) and BBOA1SRX (send an exception) The InfoCenter has a full write-up of each, along with parameter lists and return codes (RC) and reason codes (RSN). Search on cdat_olaapis. Illustrations of Common API Patterns In a common inbound API usage model, the BBOA1REG API is used to register into the WebSphere Application Server for z/OS Daemon group (the cell short name), and multiple invocations of BBOA1INV are used to invoke the target EJB. BBOA1INV is synchronous, so program control is held until the EJB returns a response. This API is useful when the calling program knows the size of the response message in advance. If the response message size is unknown at the time of the call, then the more primitive APIs (BBOA1SRQ (send request), BBOA1RCL (get response length), BBOA1GET (get message data)) are more appropriate. When the calling program determines it has finished its work, it uses BBOA1URG to unregister from the Daemon group. If the target Java program has a longer response interval, an asynchronous model is likely better. An asynchronous call is made using what is known as the primitive API: BBOA1SRQ with the async=1 parameter set. The asynchronous mode allows the non-Java program to regain control and do other processing, which implies checking for a response at some future point; BBOA1RCL is used for that purpose. If BBOA1RCL is issued synchronously (parameter async=0) and a response is available, BBOA1RCL provides the length and program control returns to the program; if no response is available, BBOA1RCL holds program control until one is available. BBOA1RCL with async=1 returns x'FFFFFFFF' if no response is available, and program control is returned immediately. Other illustrations, including outbound patterns, may be found in the WP101490 document on the IBM Techdocs website. Note: Outbound from WAS to CICS does not require API coding. In that case the supplied BBO$/BBO# link server transactions do that processing; those link server transactions "host a service" using internal constructs similar to the BBOA1SRV API. Outbound to a batch program requires the use of the APIs to "host a service." Transactionality The Optimized Local Adapters support two-phase commit (2PC) processing from CICS inbound to WAS. With the advent of maintenance 7.0.0.12, the Optimized Local Adapters also support two-phase commit outbound from WAS to CICS. Prior to 7.0.0.12 the transactional support from WAS to CICS was limited to "sync on return." For IMS, support for transactional assertion inbound to WAS from IMS dependent regions was provided in fixpacks 8.0.0.4 and 8.5.0.1. Transaction assertion outbound from WAS to IMS over WOLA/OTMA was provided in fixpack 8.0.0.5. Transactional propagation is not supported inbound or outbound to batch, USS or ALCS. 
Security
The Optimized Local Adapters are capable of asserting identity in the following circumstances:
WAS --> CICS: The identity on the WAS thread used to call the WOLA API can be used to assert identity into CICS. For this to work, the WOLA CICS link server must be used and started with the SEC=Y parameter, the CICS region must be running with SEC=YES, and the ID the link server task runs under must have SURROGAT SAF authority to start transactions on behalf of the propagated user ID. Refer to the IBM InfoCenter for more details.
WAS --> Batch, USS or ALCS: No attempt to assert identity is made. The target process runs under the identity used when it was started.
CICS --> WAS: CICS can assert its region ID or the application user ID.
Batch, USS or ALCS --> WAS: The external process will attempt to assert its identity into WAS z/OS.

Limitations
The WAS z/OS Optimized Local Adapters can be used only within a given LPAR. It is a cross-memory mechanism and cannot go between LPARs or off the machine.

External links
Redbooks: WebSphere on z/OS - Optimized Local Adapters
IBM Techdocs: WebSphere z/OS Optimized Local Adapters
IBM InfoCenter: Planning to use optimized local adapters for z/OS
Video demonstrations can be seen on YouTube by searching on the keyword WASOLA1
YouTube videos:
WP101490 - WOLA - Essentials of WOLA
WP101490 - WOLA - CICS
WP101490 - WOLA - IMS
WP101490 - WOLA - Native APIs Part 1 of 2
WP101490 - WOLA - Native APIs Part 2 of 2
WP101490 - WOLA - Java Considerations

Java enterprise platform Optimized Local Adapters IBM mainframe operating systems
2185099
https://en.wikipedia.org/wiki/Gizmo5
Gizmo5
Gizmo5 (formerly known as Gizmo Project and SIPphone) was a voice over IP communications network and a proprietary freeware soft phone for that network. On November 12, 2009, Google announced that it had acquired Gizmo5. On March 4, 2011, Google announced that the service would be discontinued as of April 3, 2011.
The Gizmo5 network used open standards for call management, Session Initiation Protocol (SIP) and Extensible Messaging and Presence Protocol (XMPP). However, the Gizmo5 client application was proprietary software and used several proprietary codecs, including GIPS and Internet Speech Audio Codec (iSAC).

History
Gizmo Project was founded by Michael Robertson and his company SIPphone. On November 12, 2009, Google announced that it had acquired Gizmo5 for a reported $30 million in cash. Prior to this acquisition, Gizmo5 had a working relationship with GrandCentral (now Google Voice) for years. Upon announcement, Gizmo5 suspended new signups until a Google relaunch. Google was also dogfooding a Google Voice desktop client based on Gizmo5, branded as Gizmo5 by Google. On April 3, 2011, Google shut down Gizmo5 and recommended users to use Google Talk instead.

Technology
Gizmo5 was based on the Session Initiation Protocol and could interoperate with other SIP-based networks directly, including the public switched telephone network. The latter required the Gizmo5 service features CallOut and CallIn. CallOut was available at a fee, whereas CallIn and calls to other VoIP users were generally free of cost. Gizmo5 also used encryption (Secure Real-time Transport Protocol) for network calls and worked well with Phil Zimmermann's Zfone security features.
Gizmo5 supported the following codecs:
GSM — fixed bit rate, not loss tolerant, narrowband (8 kHz sampling rate)
PCMA — fixed bit rate (8 kHz sampling rate)
PCMU — fixed bit rate (8 kHz sampling rate, high bandwidth)
EG711 (Enhanced G.711) — fixed bit rate, loss tolerant, narrowband
iSAC — variable bit rate, loss tolerant, narrow and wideband (8 to 16 kHz)
iLBC — variable bit rate, loss tolerant, narrowband
iPCMwb — 16 kHz sampling rate
iPCM — fixed bit rate, loss tolerant, wideband
Version 4.0 of the Gizmo5 softphone offered video calls. Gizmo5 also offered a smartphone version. As of July 20, 2009, Gizmo5 was the only SIP service that could be used with Google Voice directly (without requiring a U.S. based phone number).
The text chat function of Gizmo5 utilized the Extensible Messaging and Presence Protocol (XMPP). Users were addressed by an identification string in the format of [email protected].
An earlier incarnation of the service was PhoneGaim, a free software VoIP system based on the Pidgin instant messaging software and the SIP protocol handling of the Linphone VoIP software, but restricted to using (only) the SIPphone service. It was available under the GNU General Public License and was sponsored by Linspire.

Service features
Gizmo5 supported outbound caller line identification in the United States. Gizmo5 provided a free voicemail service. Gizmo5 allowed paying subscribers of LiveJournal to place voiceposts if they were unable to use the voicepost telephone lines provided by the website.

Mobile phone support
The Gizmo5 mobile phone application used the phone's carrier voice network for all calls. The service called the phone numbers of both parties and bridged the call. On mobile phones that supported SIP applications, calls could be placed over WiFi or 3G.
In the case of WiFi, calls to Gizmo5 users were free, and calls to the public switched telephone network were charged against Gizmo5 CallOut credit. On 3G, additional costs would apply depending on the user's data plan.

Gmail
On August 26, 2010, Gmail accounts with Google Voice were given a function to make and receive calls. Google Voice product manager Vincent Paquet confirmed that this function was built on technology from the Gizmo5 acquisition.

Service terminated
On Friday, March 4, 2011, subscribers received the following message from Gizmo5, indicating that the service would be terminated.
"Gizmo5 is writing to let you know that we will no longer be providing service starting on April 3, 2011. A week from today, March 11, 2011, you will no longer be able to add credit to your account. Although the standalone Gizmo5 client will no longer be available, we have since launched the ability to call phones from within Gmail at even more affordable rates. If you purchased calling credit and have a balance remaining in your account, you can request a refund by logging into http://my.gizmo5.com. If you are in the United States, you can instead choose to transfer your credit to a Google Voice account, so it can be used for calling from Google Voice or Gmail. If you don't have a Google Voice account, please create one so that we can transfer your credit. Please request a call credit transfer or refund by April 3, 2011. If you don't request a call credit transfer or refund by this date, we will automatically refund your remaining call credit via the payment method you originally used to purchase the credit...."
There was no indication made if the service would be revived in another form, or if there would be similar functionality added to any of Google's current telephony offerings. On April 4, around midnight for most users, service was finally cut.

See also
Comparison of VoIP software
List of XMPP client software
Ekiga
QuteCom
Google Voice

References

External links

Discontinued Google acquisitions Freeware VoIP software XMPP clients Defunct VoIP companies Instant messaging clients for Linux Voice over IP clients for Linux MacOS instant messaging clients Windows instant messaging clients Google services
218067
https://en.wikipedia.org/wiki/Shellcode
Shellcode
In hacking, a shellcode is a small piece of code used as the payload in the exploitation of a software vulnerability. It is called "shellcode" because it typically starts a command shell from which the attacker can control the compromised machine, but any piece of code that performs a similar task can be called shellcode. Because the function of a payload is not limited to merely spawning a shell, some have suggested that the name shellcode is insufficient. However, attempts at replacing the term have not gained wide acceptance. Shellcode is commonly written in machine code.
When creating shellcode, it is generally desirable to make it both small and executable, which allows it to be used in as wide a variety of situations as possible. Writing good shellcode can be as much an art as it is a science. In assembly code, the same function can be performed in a multitude of ways and there is some variety in the lengths of opcodes that can be used for this purpose; good shellcode writers can put these small opcodes to use to create more compact shellcode. Some writers have reached the smallest possible size while maintaining stability.

Types of shellcode
Shellcode can either be local or remote, depending on whether it gives an attacker control over the machine it runs on (local) or over another machine through a network (remote).

Local
Local shellcode is used by an attacker who has limited access to a machine but can exploit a vulnerability, for example a buffer overflow, in a higher-privileged process on that machine. If successfully executed, the shellcode will provide the attacker access to the machine with the same higher privileges as the targeted process.

Remote
Remote shellcode is used when an attacker wants to target a vulnerable process running on another machine on a local network, intranet, or a remote network. If successfully executed, the shellcode can provide the attacker access to the target machine across the network. Remote shellcodes normally use standard TCP/IP socket connections to allow the attacker access to the shell on the target machine. Such shellcode can be categorized based on how this connection is set up: if the shellcode establishes the connection, it is called a "reverse shell" or a connect-back shellcode because the shellcode connects back to the attacker's machine. On the other hand, if the attacker establishes the connection, the shellcode is called a bindshell because the shellcode binds to a certain port on the victim's machine. A peculiar variant, known as bindshell random port, skips binding to a chosen port and instead listens on a random port made available by the operating system; omitting the port setup has made it the smallest stable bindshell shellcode available for x86_64 to date. A third, much less common type, is socket-reuse shellcode. This type of shellcode is sometimes used when an exploit establishes a connection to the vulnerable process that is not closed before the shellcode is run. The shellcode can then re-use this connection to communicate with the attacker. Socket re-using shellcode is more elaborate, since the shellcode needs to find out which connection to re-use and the machine may have many connections open.
A firewall can be used to detect outgoing connections made by connect-back shellcode as well as incoming connections made by bindshells. Firewalls can therefore offer some protection against an attacker, even if the system is vulnerable, by preventing the attacker from connecting to the shell created by the shellcode.
This is one reason why socket re-using shellcode is sometimes used: it does not create new connections and therefore is harder to detect and block. Download and execute Download and execute is a type of remote shellcode that downloads and executes some form of malware on the target system. This type of shellcode does not spawn a shell, but rather instructs the machine to download a certain executable file off the network, save it to disk and execute it. Nowadays, it is commonly used in drive-by download attacks, where a victim visits a malicious webpage that in turn attempts to run such a download and execute shellcode in order to install software on the victim's machine. A variation of this type of shellcode downloads and loads a library. Advantages of this technique are that the code can be smaller, that it does not require the shellcode to spawn a new process on the target system, and that the shellcode does not need code to clean up the targeted process as this can be done by the library loaded into the process. Staged When the amount of data that an attacker can inject into the target process is too limited to execute useful shellcode directly, it may be possible to execute it in stages. First, a small piece of shellcode (stage 1) is executed. This code then downloads a larger piece of shellcode (stage 2) into the process's memory and executes it. Egg-hunt This is another form of staged shellcode, which is used if an attacker can inject a larger shellcode into the process but cannot determine where in the process it will end up. Small egg-hunt shellcode is injected into the process at a predictable location and executed. This code then searches the process's address space for the larger shellcode (the egg) and executes it. Omelette This type of shellcode is similar to egg-hunt shellcode, but looks for multiple small blocks of data (eggs) and recombines them into one larger block (the omelette) that is subsequently executed. This is used when an attacker can only inject a number of small blocks of data into the process. Shellcode execution strategy An exploit will commonly inject a shellcode into the target process before or at the same time as it exploits a vulnerability to gain control over the program counter. The program counter is adjusted to point to the shellcode, after which it gets executed and performs its task. Injecting the shellcode is often done by storing the shellcode in data sent over the network to the vulnerable process, by supplying it in a file that is read by the vulnerable process or through the command line or environment in the case of local exploits. Shellcode encoding Because most processes filter or restrict the data that can be injected, shellcode often needs to be written to allow for these restrictions. This includes making the code small, null-free or alphanumeric. Various solutions have been found to get around such restrictions, including: Design and implementation optimizations to decrease the size of the shellcode. Implementation modifications to get around limitations in the range of bytes used in the shellcode. Self-modifying code that modifies a number of the bytes of its own code before executing them to re-create bytes that are normally impossible to inject into the process. Since intrusion detection can detect signatures of simple shellcodes being sent over the network, it is often encoded, made self-decrypting or polymorphic to avoid detection. 
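The encoding ideas above can be illustrated from the encoder's side. The sketch below, using a made-up four-byte stand-in for real shellcode, searches for a one-byte XOR key that yields a null-free encoding; a real exploit would prepend a tiny machine-code decoder stub that XORs each byte with the key at run time to recover the original payload.

```c
#include <stdio.h>
#include <stddef.h>

int main(void)
{
    /* Made-up four-byte stand-in for real shellcode; note the NUL bytes. */
    unsigned char payload[] = { 0xB8, 0x01, 0x00, 0x00 };
    size_t n = sizeof payload;
    int key = 0;

    /* Search for a one-byte key whose XOR leaves no NUL in the output. */
    for (int k = 1; k < 256; k++) {
        int clean = 1;
        for (size_t i = 0; i < n; i++)
            if ((payload[i] ^ k) == 0) { clean = 0; break; }
        if (clean) { key = k; break; }
    }

    printf("key = 0x%02X\nencoded:", key);
    for (size_t i = 0; i < n; i++)
        printf(" %02X", payload[i] ^ key);
    putchar('\n');
    return 0;
}
```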
Percent encoding
Exploits that target browsers commonly encode shellcode in a JavaScript string using percent-encoding, "\uXXXX" escape-sequence encoding or entity encoding. Some exploits also obfuscate the encoded shellcode string further to prevent detection by IDS. For example, on the IA-32 architecture, here is how two NOP (no-operation) instructions would look, first unencoded:
90 NOP
90 NOP
These instructions are used in NOP slides. Encoded, the same two bytes might appear as the string %u9090 (percent encoding), \u9090 (escape-sequence encoding) or &#x9090; (entity encoding).

Null-free shellcode
Most shellcodes are written without the use of null bytes because they are intended to be injected into a target process through null-terminated strings. When a null-terminated string is copied, it will be copied up to and including the first null, but subsequent bytes of the shellcode will not be processed. When shellcode that contains nulls is injected in this way, only part of the shellcode would be injected, making it incapable of running successfully.
To produce null-free shellcode from shellcode that contains null bytes, one can substitute machine instructions that contain zeroes with instructions that have the same effect but are free of nulls. For example, on the IA-32 architecture one could replace this instruction:
B8 01000000  MOV EAX,1  // Set the register EAX to 0x00000001, which contains zeroes as part of the literal (1 expands to 0x00000001)
with these instructions:
33C0  XOR EAX,EAX  // Set the register EAX to 0x00000000
40    INC EAX      // Increase EAX to 0x00000001
which have the same effect but take fewer bytes to encode and are free of nulls.

Alphanumeric and printable shellcode
An alphanumeric shellcode is a shellcode that consists of or assembles itself on execution into entirely alphanumeric ASCII or Unicode characters such as 0-9, A-Z and a-z. This type of encoding was created by hackers to hide working machine code inside what appears to be text. This can be useful to avoid detection of the code and to allow the code to pass through filters that scrub non-alphanumeric characters from strings (in part, such filters were a response to non-alphanumeric shellcode exploits). A similar type of encoding is called printable code and uses all printable characters (0-9, A-Z, a-z, !@#%^&*() etc.). A similarly restricted variant is ECHOable code, which contains no characters that are not accepted by the ECHO command.
Writing alphanumeric or printable code requires good understanding of the instruction set architecture of the machine(s) on which the code is to be executed. It has been demonstrated that it is possible to write alphanumeric code that is executable on more than one machine, thereby constituting multi-architecture executable code.
In certain circumstances, a target process will filter any byte from the injected shellcode that is not a printable or alphanumeric character. Under such circumstances, the range of instructions that can be used to write a shellcode becomes very limited. A solution to this problem was published by Rix in Phrack 57, in which he showed it was possible to turn any code into alphanumeric code. A technique often used is to create self-modifying code, because this allows the code to modify its own bytes to include bytes outside of the normally allowed range, thereby expanding the range of instructions it can use. Using this trick, a self-modifying decoder can be created that initially uses only bytes in the allowed range. The main code of the shellcode is encoded, also only using bytes in the allowed range.
When the output shellcode is run, the decoder can modify its own code to be able to use any instruction it requires to function properly and then continues to decode the original shellcode. After decoding the shellcode the decoder transfers control to it, so it can be executed as normal. It has been shown that it is possible to create arbitrarily complex shellcode that looks like normal text in English. Unicode proof shellcode Modern programs use Unicode strings to allow internationalization of text. Often, these programs will convert incoming ASCII strings to Unicode before processing them. Unicode strings encoded in UTF-16 use two bytes to encode each character (or four bytes for some special characters). When an ASCII (Latin-1 in general) string is transformed into UTF-16, a zero byte is inserted after each byte in the original string. Obscou proved in Phrack 61 that it is possible to write shellcode that can run successfully after this transformation. Programs that can automatically encode any shellcode into alphanumeric UTF-16-proof shellcode exist, based on the same principle of a small self-modifying decoder that decodes the original shellcode. Platforms Most shellcode is written in machine code because of the low level at which the vulnerability being exploited gives an attacker access to the process. Shellcode is therefore often created to target one specific combination of processor, operating system and service pack, called a platform. For some exploits, due to the constraints put on the shellcode by the target process, a very specific shellcode must be created. However, it is not impossible for one shellcode to work for multiple exploits, service packs, operating systems and even processors. Such versatility is commonly achieved by creating multiple versions of the shellcode that target the various platforms and creating a header that branches to the correct version for the platform the code is running on. When executed, the code behaves differently for different platforms and executes the right part of the shellcode for the platform it is running on. Shellcode analysis Shellcode cannot be executed directly. In order to analyze what a shellcode attempts to do it must be loaded into another process. One common analysis technique is to write a small C program which holds the shellcode as a byte buffer, and then use a function pointer or use inline assembler to transfer execution to it. Another technique is to use an online tool, such as shellcode_2_exe, to embed the shellcode into a pre-made executable husk which can then be analyzed in a standard debugger. Specialized shellcode analysis tools also exist, such as the iDefense sclog project which was originally released in 2005 as part of the Malcode Analyst Pack. Sclog is designed to load external shellcode files and execute them within an API logging framework. Emulation based shellcode analysis tools also exist such as the sctest application which is part of the cross platform libemu package. Another emulation based shellcode analysis tool, built around the libemu library, is scdbg which includes a basic debug shell and integrated reporting features. See also Alphanumeric code Computer security Buffer overflow Exploit (computer security) Heap overflow Metasploit Project Shell (computing) Shell shoveling Stack buffer overflow Vulnerability (computing) References External links Shell-Storm Database of shellcodes Multi-Platform. 
An introduction to buffer overflows and shellcode The Basics of Shellcoding (PDF) An overview of x86 shellcoding by Angelo Rosiello An introduction to shellcode development Contains x86 and non-x86 shellcode samples and an online interface for automatic shellcode generation and encoding, from the Metasploit Project a shellcode archive, sorted by Operating system. Microsoft Windows and Linux shellcode design tutorial going from basic to advanced. Windows and Linux shellcode tutorial containing step by step examples. Designing shellcode demystified ALPHA3 A shellcode encoder that can turn any shellcode into both Unicode and ASCII, uppercase and mixedcase, alphanumeric shellcode. Writing Small shellcode by Dafydd Stuttard A whitepaper explaining how to make shellcode as small as possible by optimizing both the design and implementation. Writing IA32 Restricted Instruction Set Shellcode Decoder Loops by SkyLined A whitepaper explaining how to create shellcode when the bytes allowed in the shellcode are very restricted. BETA3 A tool that can encode and decode shellcode using a variety of encodings commonly used in exploits. Shellcode 2 Exe - Online converter to embed shellcode in exe husk Sclog - Updated build of the iDefense sclog shellcode analysis tool (Windows) Libemu - emulation based shellcode analysis library (*nix/Cygwin) Scdbg - shellcode debugger built around libemu emulation library (*nix/Windows) Injection exploits
45486119
https://en.wikipedia.org/wiki/Ground%20segment
Ground segment
A ground segment consists of all the ground-based elements of a spacecraft system used by operators and support personnel, as opposed to the space segment and user segment. The ground segment enables management of a spacecraft, and distribution of payload data and telemetry among interested parties on the ground. The primary elements of a ground segment are:
Ground (or Earth) stations, which provide radio interfaces with spacecraft
Mission control (or operations) centers, from which spacecraft are managed
Ground networks, which connect the other ground elements to one another
Remote terminals, used by support personnel
Spacecraft integration and test facilities
Launch facilities
These elements are present in nearly all space missions, whether commercial, military, or scientific. They may be located together or separated geographically, and they may be operated by different parties. Some elements may support multiple spacecraft simultaneously.

Elements

Ground stations
Ground stations provide radio interfaces between the space and ground segments for telemetry, tracking, and command (TT&C), as well as payload data transmission and reception. Tracking networks, such as NASA's Near Earth Network and Space Network, handle communications with multiple spacecraft through time-sharing. Ground station equipment may be monitored and controlled remotely, often via serial and/or IP interfaces. There are often backup stations from which radio contact can be maintained if there is a problem at the primary ground station which renders it unable to operate, such as a natural disaster. Such contingencies are considered in a Continuity of Operations plan.

Transmission and reception
Signals to be uplinked to a spacecraft must first be extracted from ground network packets, encoded to baseband, and modulated, typically onto an intermediate frequency (IF) carrier, before being up-converted to the assigned radio frequency (RF) band. The RF signal is then amplified to high power and carried via waveguide to an antenna for transmission. In colder climates, electric heaters or hot air blowers may be necessary to prevent ice or snow buildup on the parabolic dish.
Received ("downlinked") signals are passed through a low-noise amplifier (often located in the antenna hub to minimize the distance the signal must travel) before being down-converted to IF; these two functions may be combined in a low-noise block downconverter. The IF signal is then demodulated, and the data stream extracted via bit and frame synchronization and decoding. Data errors, such as those caused by signal degradation, are identified and corrected where possible. The extracted data stream is then packetized or saved to files for transmission on ground networks. Ground stations may temporarily store received telemetry for later playback to control centers, often when ground network bandwidth is not sufficient to allow real-time transmission of all received telemetry. A single spacecraft may make use of multiple RF bands for different telemetry, command, and payload data streams, depending on bandwidth and other requirements. (One term of the link-budget arithmetic behind this chain is sketched in code after the Passes subsection below.)

Passes
The timing of passes, when a line of sight exists to the spacecraft, is determined by the location of ground stations, and by the characteristics of the spacecraft orbit or trajectory. The Space Network uses geostationary relay satellites to extend pass opportunities over the horizon.
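The transmit and receive chain described above is sized by link-budget arithmetic, a detail the article does not spell out. As a worked example, the sketch below computes free-space path loss (FSPL), one standard term of such a budget, using FSPL(dB) = 20*log10(d_km) + 20*log10(f_GHz) + 92.45; the 2 GHz / 2,000 km sample values are invented for illustration.

```c
#include <stdio.h>
#include <math.h>

/* Free-space path loss in dB for a distance in km and frequency in GHz. */
static double fspl_db(double distance_km, double freq_ghz)
{
    return 20.0 * log10(distance_km) + 20.0 * log10(freq_ghz) + 92.45;
}

int main(void)
{
    double d = 2000.0;   /* invented slant range, km   */
    double f = 2.0;      /* invented S-band downlink, GHz */
    printf("FSPL at %.0f km, %.1f GHz: %.1f dB\n", d, f, fspl_db(d, f));
    return 0;
}
```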
Tracking and ranging
Ground stations must track spacecraft in order to point their antennas properly, and must account for Doppler shifting of RF frequencies due to the motion of the spacecraft. Ground stations may also perform automated ranging; ranging tones may be multiplexed with command and telemetry signals. Ground station tracking and ranging data are passed to the control center along with spacecraft telemetry, where they are often used in orbit determination.

Mission control centers
Mission control centers process, analyze, and distribute spacecraft telemetry, and issue commands, data uploads, and software updates to spacecraft. For crewed spacecraft, mission control manages voice and video communications with the crew. Control centers may also be responsible for configuration management and data archival. As with ground stations, there are often backup control facilities available to support continuity of operations.

Telemetry processing
Control centers use telemetry to determine the status of a spacecraft and its systems. Housekeeping, diagnostic, science, and other types of telemetry may be carried on separate virtual channels. Flight control software performs the initial processing of received telemetry, including:
Separation and distribution of virtual channels
Time-ordering and gap-checking of received frames (gaps may be filled by commanding a retransmission)
Decommutation of parameter values, and association of these values with parameter names called mnemonics
Conversion of raw data to calibrated (engineering) values, and calculation of derived parameters
Limit and constraint checking (which may generate alert notifications)
Generation of telemetry displays, which may be tabular, graphical (plots of parameters against each other or over time), or synoptic (interface-oriented graphics)
The decommutation, calibration, and limit-checking steps are sketched in code after the Commanding subsection below.
A spacecraft database provided by the spacecraft manufacturer is called on to provide information on telemetry frame formatting, the positions and frequencies of parameters within frames, and their associated mnemonics, calibrations, and soft and hard limits. The contents of this database—especially calibrations and limits—may be updated periodically to maintain consistency with onboard software and operating procedures; these can change during the life of a mission in response to upgrades, hardware degradation in the space environment, and changes to mission parameters.

Commanding
Commands sent to spacecraft are formatted according to the spacecraft database, and are validated against the database before being transmitted via a ground station. Commands may be issued manually in real time, or they may be part of automated or semi-automated procedures. Typically, commands successfully received by the spacecraft are acknowledged in telemetry, and a command counter is maintained on the spacecraft and at the ground to ensure synchronization. In certain cases, closed-loop control may be performed. Commanded activities may pertain directly to mission objectives, or they may be part of housekeeping. Commands (and telemetry) may be encrypted to prevent unauthorized access to the spacecraft or its data. Spacecraft procedures are generally developed and tested against a spacecraft simulator prior to use with the actual spacecraft.
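Several of the telemetry-processing steps listed above (decommutation, conversion to engineering values, and limit checking) reduce to small, regular computations driven by the spacecraft database. The sketch below shows the idea in miniature; the frame layout, mnemonic, calibration coefficients and limits are invented stand-ins for what a real spacecraft database would supply.

```c
#include <stdio.h>
#include <stdint.h>

typedef struct {
    const char *mnemonic;   /* parameter name associated with the raw value */
    int offset;             /* byte offset of the raw value in the frame    */
    double scale, bias;     /* engineering value = scale * raw + bias       */
    double lo, hi;          /* soft limits used for alert generation        */
} ParamDef;

int main(void)
{
    /* One 16-bit big-endian parameter in a toy 8-byte telemetry frame. */
    uint8_t frame[8] = { 0x00, 0x00, 0x23, 0x28, 0, 0, 0, 0 };
    ParamDef def = { "BATT_TEMP", 2, 0.01, -40.0, 0.0, 45.0 };

    /* Decommutate the raw value, then calibrate it. */
    uint16_t raw = (uint16_t)(frame[def.offset] << 8) | frame[def.offset + 1];
    double eng = def.scale * raw + def.bias;

    printf("%s raw=%u eng=%.2f\n", def.mnemonic, raw, eng);
    if (eng < def.lo || eng > def.hi)   /* limit check -> alert notification */
        printf("ALERT: %s out of limits (%.2f..%.2f)\n", def.mnemonic, def.lo, def.hi);
    return 0;
}
```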
Analysis and support
Mission control centers may rely on "offline" (i.e., non-real-time) data processing subsystems to handle analytical tasks such as:
Orbit determination and maneuver planning
Conjunction assessment and collision avoidance planning
Mission planning and scheduling
On-board memory management
Short- and long-term trend analysis
Path planning, in the case of planetary rovers
Dedicated physical spaces may be provided in the control center for certain mission support roles, such as flight dynamics and network control, or these roles may be handled via remote terminals outside the control center. As on-board computing power and flight software complexity have increased, there is a trend toward performing more automated data processing on board the spacecraft.

Staffing
Control centers may be continuously or regularly staffed by flight controllers. Staffing is typically greatest during the early phases of a mission, and during critical procedures and periods. Increasingly, control centers for uncrewed spacecraft are set up for "lights-out" (automated) operation as a means of controlling costs. Flight control software will typically generate notifications of significant events – both planned and unplanned – in the ground or space segment that may require operator intervention.

Ground networks
Ground networks handle data transfer and voice communication between different elements of the ground segment. These networks often combine LAN and WAN elements, for which different parties may be responsible. Geographically separated elements may be connected via leased lines or virtual private networks. The design of ground networks is driven by requirements on reliability, bandwidth, and security.
Reliability is a particularly important consideration for critical systems, with uptime and mean time to recovery being of paramount concern. As with other aspects of the spacecraft system, redundancy of network components is the primary means of achieving the required system reliability. Security considerations are vital to protect space resources and sensitive data. WAN links often incorporate encryption protocols and firewalls to provide information and network security. Antivirus software and intrusion detection systems provide additional security at network endpoints.

Remote terminals
Remote terminals are interfaces on ground networks, separate from the mission control center, which may be accessed by payload controllers, telemetry analysts, instrument and science teams, and support personnel, such as system administrators and software development teams. They may be receive-only, or they may transmit data to the ground network. Terminals used by service customers, including ISPs and end users, are collectively called the "user segment", and are typically distinguished from the ground segment. User terminals including satellite television systems and satellite phones communicate directly with spacecraft, while other types of user terminals rely on the ground segment for data receipt, transmission, and processing.

Integration and test facilities
Space vehicles and their interfaces are assembled and tested at integration and test (I&T) facilities. Mission-specific I&T provides an opportunity to fully test communications between, and behavior of, both the spacecraft and the ground segment prior to launch.

Launch facilities
Vehicles are delivered to space via launch facilities, which handle the logistics of rocket launches.
Launch facilities are typically connected to the ground network to relay telemetry prior to and during launch. The launch vehicle itself is sometimes said to constitute a "transfer segment", which may be considered distinct from both the space and ground segments.

Costs
Costs associated with the establishment and operation of a ground segment are highly variable, and depend on accounting methods. According to a study by Delft University of Technology, the ground segment contributes approximately 5% to the total cost of a space system. According to a report by the RAND Corporation on NASA small spacecraft missions, operation costs alone contribute 8% to the lifetime cost of a typical mission, with integration and testing making up a further 3.2%, ground facilities 2.6%, and ground systems engineering 1.1%. Ground segment cost drivers include requirements placed on facilities, hardware, software, network connectivity, security, and staffing. Ground station costs in particular depend largely on the required transmission power, RF band(s), and the suitability of preexisting facilities. Control centers may be highly automated as a means of controlling staffing costs.

See also
Consultative Committee for Space Data Systems (CCSDS), which maintains standards for telemetry and command formatting
Radiocommunication service, as defined by ITU Radio Regulations
On-board data handling subsystem

References

Telecommunications infrastructure Spaceflight ground equipment Spaceflight technology Spacecraft communication Spaceflight concepts
3054820
https://en.wikipedia.org/wiki/Rhinoceros%203D
Rhinoceros 3D
Rhinoceros (typically abbreviated Rhino or Rhino3D) is a commercial 3D computer graphics and computer-aided design (CAD) application software developed by Robert McNeel & Associates, an American, privately held, employee-owned company founded in 1980. Rhinoceros geometry is based on the NURBS mathematical model, which focuses on producing mathematically precise representation of curves and freeform surfaces in computer graphics (as opposed to polygon mesh-based applications). Rhinoceros is used for computer-aided design (CAD), computer-aided manufacturing (CAM), rapid prototyping, 3D printing and reverse engineering in industries including architecture, industrial design (e.g. automotive design, watercraft design), product design (e.g. jewelry design) as well as for multimedia and graphic design. Rhinoceros is developed for the Microsoft Windows operating system and macOS. A visual scripting language add-on for Rhino, Grasshopper, is developed by Robert McNeel & Associates. Overview Characteristics Rhinoceros is primarily a freeform surface modeler that utilizes the NURBS mathematical model. Rhinoceros's application architecture and open SDK make it modular and enable the user to customize the interface and create custom commands and menus. There are dozens of plug-ins available from both McNeel and other software companies that complement and expand Rhinoceros's capabilities in specific fields, such as rendering and animation, architecture, marine, jewelry, engineering, prototyping, and others. File format The Rhinoceros file format (.3DM) is useful for the exchange of NURBS geometry. The Rhino developers started the openNURBS Initiative to provide computer graphics software developers the tools to accurately transfer 3-D geometry between applications. An open-source toolkit, openNURBS includes the 3DM file format specification, documentation, C++ source code libraries and .NET 2.0 assemblies to read and write the file format on supported platforms – Windows, Windows x64, Mac, and Linux. Compatibility Rhinoceros offers compatibility with other software as it supports over 30 CAD file formats for importing and exporting. The following CAD and image file formats are natively supported (without the use of external plug-ins): DWG/DXF (AutoCAD 200x, 14, 13, and 12) IGES STEP SolidWorks SLDPRT and SLDASM SAT (ACIS, export only) MicroStation DGN Direct X (X file format) FBX X_T (Parasolid, export only) .3ds LWO STL SLC OBJ AI RIB POV UDO VRML CSV (export properties and hydrostatics) BMP TGA uncompressed TIFF VDA GHS GTS KML PLY SketchUp The following CAD file formats are supported with use of external plug-ins: 3DPDF ACIS CATIA V4 CATIA V5 CATIA V6 CGR Inventor JT Parasolid PLMXML Creo Parametric Solid Edge Siemens NX When opening CAD file formats not in its native .3dm file format, Rhinoceros will convert the geometry into its native format; when importing a CAD file, the geometry is added to the current file. When Autodesk AutoCAD's file format changes (see DWG file format for more information), the Open Design Alliance reverse engineers the file format to allow these files to be loaded by other vendors' software. Rhinoceros's import and export modules are actually plug-ins, so they can be easily updated via a service release. Rhinoceros Service Releases (SR) are frequent and freely downloadable. Rhinoceros 5 SR10 can import and export DWG/DXF file formats up to version 2014. Rhinoceros is also compatible with a number of graphic design-based programs. Among them is Adobe Illustrator. 
This method is best when working with a vector-based file. The user starts by saving the file, then, when prompted, saves it in Adobe Illustrator (*.ai) format. From there, the user can control the vectors created in Rhinoceros, which can be enhanced further in Adobe Illustrator.

3D printing
Rhinoceros 3D relies on a few plug-ins to facilitate 3D printing and allows the export of .STL and .OBJ file formats, both of which are supported by numerous 3D printers and 3D printing services.

Scripting and programming
Rhinoceros supports two scripting languages, RhinoScript (based on VBScript) and Python (V5.0+ and Mac). It also has an SDK and a complete plug-in system. One McNeel plug-in, a parametric modeling/visual programming tool called Grasshopper, has attracted many architects to Rhinoceros due to its ease of use and ability to create complex algorithmic structures.

See also
Comparison of computer-aided design editors
Computer-aided industrial design

References

External links
McNeel Wiki
The History of Rhino – notable project milestones.
Food4Rhino – apps for Rhino and Grasshopper.
Rhino News, etc. – the official blog.

3D graphics software Computer-aided design software Computer-aided design software for Windows
12868247
https://en.wikipedia.org/wiki/Eclipse%20Buckminster
Eclipse Buckminster
The Buckminster Project is an Eclipse technology sub-project focused on component assembly. Buckminster facilitates straightforward sharing of complex assemblies of software components. It is particularly useful for building and sharing virtual distros: distributions of software components which share components across multiple software projects and repositories. In February 2019 the project was archived.

Operation
A Buckminster CQUERY (component query) names a component assembly. Using a CQUERY, Buckminster can find and locate all the components necessary to complete that particular configuration. Finding needed components includes transitively finding all the components needed by those components. The process by which Buckminster transitively locates and then downloads and installs a full set of components for a particular CQUERY is called materialization. A CQUERY is typically published by a developer (or development team) to denote their work: those interested in accessing and using this software can ask Buckminster to fetch everything necessary by quoting the CQUERY.
A Buckminster RMAP (resource map) is associated with a CQUERY, and lists one or more software repositories in which appropriate components can be found. Many popular repository formats are supported, including Concurrent Versions System, Subversion, Apache Maven, Perforce and Eclipse platform infrastructures.
A Buckminster CSPEC (component specification) lists appropriate attributes of a component, such as how to build it and what other components it depends on. CSPECs are frequently generated automatically by Buckminster based on meta-information available elsewhere within repositories and the build environment. Automatically generated CSPECs can be extended manually via CSPECX ("CSPEC eXtensions").
A Buckminster BOM (bill of materials) lists in full all the details necessary to fulfill a particular CQUERY, and is automatically generated by Buckminster. BOMs are sometimes saved and re-submitted so as to ensure that specific users materialize precisely the same components, in the same versions, as one another.
Buckminster CQUERYs, RMAPs, CSPECs and BOMs are specified in XML.

See also
Build automation
List of build automation software
Apache Maven
Apache Ant

External links
Main wiki page for Buckminster
High level introduction to Buckminster
Typical usage scenarios, including building virtual distros
Full XML specifications of Buckminster model
Bricklaying with Buckminster

Software distribution Version control systems Software development process Eclipse technology Eclipse software
4073739
https://en.wikipedia.org/wiki/Epic%20Systems
Epic Systems
Epic Systems Corporation, or Epic, is an American privately held healthcare software company. According to the company, hospitals that use its software held medical records of 54% of patients in the United States and 2.5% of patients worldwide in 2015.

History
Epic was founded in 1979 by Judith R. Faulkner with a $70,000 investment. Originally headquartered in Madison, Wisconsin, Epic moved its headquarters to a large campus in the suburb of Verona, Wisconsin in 2005, where it employs 10,000 people as of 2019. The campus has themed areas/buildings, such as a castle-like structure, a "Wizard Campus" that appears to be inspired by J.K. Rowling's Harry Potter, and a dining facility designed to mimic a train station. As of 2015, the company was in the fifth phase of campus expansion, with five new buildings each planned to be around 100,000 square feet.
The company also has offices in Bristol, UK; 's-Hertogenbosch, Netherlands; Dubai, United Arab Emirates; Dhahran, Saudi Arabia; Helsinki, Finland; Melbourne, Australia; Singapore; Trondheim, Norway; and Søborg, Denmark.

Product and market
Epic primarily develops, manufactures, licenses, supports, and sells a proprietary electronic medical record software application, known in whole as 'Epic' or an Epic EMR. The company's healthcare software is centered on its Chronicles database management system. Epic's applications support functions related to patient care, including registration and scheduling; clinical systems for doctors, nurses, emergency personnel, and other care providers; systems for lab technologists, pharmacists, and radiologists; and billing systems for insurers. Epic also offers cloud hosting for customers that do not wish to maintain their own servers, and short-term optimization and implementation consultants through its wholly owned subsidiary Boost, Inc.
The company's competitors include Cerner, MEDITECH, Allscripts, athenahealth, and units of IBM, McKesson, and Siemens. The majority of U.S. News & World Report's top-ranked hospitals and medical schools use Epic. In 2003, Kaiser Permanente began using Epic for its electronic records system. Among many others, Epic provides electronic record systems for Cedars-Sinai Medical Center in Los Angeles, the Cleveland Clinic, Johns Hopkins Hospital, UC Davis Medical Center in Sacramento, and all Mayo Clinic campuses. Partners HealthCare began adopting Epic in 2015 in a project initially reported to cost $1.2 billion, which critics decried as greater than the cost of any of its buildings. By 2018, the total expenses for the project were $1.6 billion, with payments for the software itself amounting to less than $100 million and the majority of the costs caused by lost patient revenues, tech support and other implementation work.

Criticisms and controversies

Data sharing
Care Everywhere is Epic's health information exchange software, which comes with its EHR system. A 2014 article in The New York Times interviewed two doctors who said that their Epic systems would not allow them to share data with users of competitors' software in a way that would satisfy the Meaningful Use requirements of the HITECH Act. At first, Epic charged a fee to send data to some non-Epic systems; Epic said the cost for an average-sized hospital was around $5,000 a year. However, after Congressional hearings, Epic and other major software vendors announced that they would suspend per-transaction sharing fees.
Epic customers must still pay for one-time costs of linking Epic to each individual non-Epic system with which they wish to exchange data; in contrast, Epic's competitors have formed the CommonWell Health Alliance, which set a common interoperability software standard for electronic health records. A 2014 report by the RAND Corporation described Epic as a "closed" platform that made it "challenging and costly for hospitals" to interconnect with the clinical or billing software of other companies. The report also cited other research showing that Epic's implementation in the Kaiser Permanente system led to efficiency losses. In September 2017, Epic announced Share Everywhere, which allows patients to authorize any provider who has internet access to view their record in Epic and to send progress notes back. Share Everywhere was named Healthcare Dive's "Health IT Development of the Year" in 2017.

UK experience
An Epic electronic health record system costing £200 million was installed at Cambridge University Hospitals NHS Foundation Trust in October 2014, the first installation of an Epic system in the UK. After 2.1 million records were transferred to it, it developed serious problems and the system became unstable. Ambulances were diverted to other hospitals for five hours, and hospital consultants noted issues with blood transfusion and pathology services. Other problems included delays to emergency care and appointments, and problems with discharge letters, clinical letters and pathology test results. Chief information officer Afzal Chaudhry said "well over 90% of implementation proceeded successfully". In July 2015, the BBC reported that the hospital's finances were being investigated. In September 2015, both the CEO and CFO of the hospital resigned. Problems with the clinical-records system, which were said to have compromised the "ability to report, highlight and take action on data" and to prescribe medication properly, were held to be contributory factors in the organization's sudden failure. In February 2016, digitalhealth.net reported that Clare Marx, president of the Royal College of Surgeons of England and member of the NHS National Information Board, found that at the time of implementation, "staff, patients and management rapidly and catastrophically lost confidence in the system. That took months and a huge amount of effort to rebuild."

Danish experience
In 2016, Danish health authorities spent 2.8 billion DKK on the implementation of Epic in 18 hospitals in a region with 2.8 million residents. On May 20, Epic went live in the first hospital. Doctors and nurses reported chaos in the hospital and complained of severely inadequate preparation and training. Epic and its Danish partners insisted that normal testing and training were carried out. Since some elements of the Epic system were not properly translated from English to Danish, physicians resorted to Google Translate. As one example, when inputting information about a patient's condition, physicians were given the option to report between the left and the "correct" leg, not the left and right legs. As of 2019, Epic had still not been fully integrated with Denmark's national medical record system, which is meant to be accessed every time a patient is seen. Danish anesthesiologist and computer architect Gert Galster worked to adapt the system. According to Galster, these Epic systems were designed specifically to fit the U.S. health care system, and could not be disentangled for use in Denmark.
An audit of the implementation that voiced concerns was published in June 2018. At the end of 2018, 62% of physicians expressed they were not satisfied with the system and 71 physicians signed a petition calling for the system to be removed. COVID-19 response In 2020, the novel coronavirus pandemic spread in the United States. Epic Systems faced considerable criticism for their initial plan to have their 10,000 employees return to work on-campus. Employees expressed concern about returning to the office, with the first group being required to return as early as August 10 while the pandemic continued to spread. This plan was abandoned, and as of December 2020, employees were still able to work from home. The plan had come about despite a Dane County public health order requiring remote work "to the greatest extent possible." Criticism revolved in particular around the fact that employees were being ordered back to preserve the company "culture," despite CEO Judy Faulkner's admission that work was getting done remotely. According to The Capital Times, who interviewed 26 Epic employees about the plan, "13 [employees] said they have knowledge of managers being demoted for expressing concern about the company’s plans to bring its nearly 10,000 workers back" to on-campus work, and all requested anonymity for fear of employer retribution. In a survey of Epic employees, 89% of employees expressed dissatisfaction with how Epic was handling the pandemic. See also Epic Systems Corp. v. Lewis References External links Epic, state's largest solar producer, to build own wind farm - Milwaukee Journal Sentinel article Epic Systems feeling heat over interoperability - Modern healthcare article Epic Systems, Leading Defense EHR Bidder, Slammed for Lack of Interoperability - Nextgov article Patient records giant Epic Systems will take a big step into the cloud in 2015 - VentureBeat article Cancer moonshot head recounts exchange with Epic’s Faulkner - Politico article Software companies based in Wisconsin Health care companies based in Wisconsin Electronic health record software companies Privately held companies based in Wisconsin Software companies established in 1979 1979 establishments in Wisconsin Dane County, Wisconsin Software companies of the United States
38873002
https://en.wikipedia.org/wiki/Google%20Keep
Google Keep
Google Keep is a note-taking service included as part of the free, web-based Google Docs Editors suite offered by Google. The service also includes Google Docs, Google Sheets, Google Slides, Google Drawings, Google Forms, and Google Sites. Google Keep is available as a web application as well as mobile app for Android and iOS. The app offers a variety of tools for taking notes, including text, lists, images, and audio. Text from images can be extracted using optical character recognition, and voice recordings can be transcribed. The interface allows for a single-column view or a multi-column view. Notes can be color-coded, and labels can be applied for organization. Later updates have added functionality to pin notes, and to collaborate on notes with other Keep users in real-time. Google Keep has received mixed reviews. A review just after launch in 2013 praised its speed, the quality of voice notes, synchronization, and the widget that could be placed on the Android home screen. Reviews in 2016 have criticized the lack of formatting options, inability to undo changes, and an interface that only offers two view modes where neither was liked for their handling of long notes. However, Google Keep received praise for features including universal device access, native integration with other Google services, and the option to turn photos into text through optical character recognition. Google ended support for the Google Keep Chrome app in February 2021, though Google Keep itself will continue to be accessible though other apps and directly in web browsers. Features Google Keep allows users to make different kinds of notes, including text, lists, images, and audio. Users can set reminders, which are integrated with Google Now, with options for time or location. Text from images can be extracted using optical character recognition technology. Voice recordings created through Keep are automatically transcribed. Keep can convert text notes into checklists. Users can choose between a single-column view and a multi-column view. Notes can be color-coded, with options for white, red, orange, yellow, green, teal, blue or grey. Users can press a "Copy to Google Doc" button that automatically copies all text into a new Google Docs document. Users can create notes and lists by voice. Notes can be categorized using labels, with a list of labels in the app's navigation bar. Updates In November 2014, Google introduced a real-time note cooperation feature between different Keep users, as well as a search feature determined by attributes, such as color, sharing status, or the kind of content in the note. In October 2016, Google added the ability for users to pin notes. In February 2017, Google integrated Google Keep with Google Docs, providing access to notes while using Docs on the web. Google Assistant could previously maintain a shopping list within Google Keep. This feature was moved to Google Express in April 2017, resulting in a severe loss of functionality. In July 2017, Google updated Keep on Android with the ability for users to undo and redo changes. Platforms Google Keep was launched on March 20, 2013 for the Android operating system and on the web. The Android app is compatible with Android Wear. Users can create new notes using voice input, add and check off items in lists, and view reminders. An app for the iOS operating system was released on September 24, 2015. 
Reception 2013 In a May 2013 review, Alan Henry of Lifehacker wrote that the interface was "colorful and easy to use", and that the colors actually served a purpose in organization and contrast. Henry praised the speed, quality of voice notes, synchronization, and Android home screen widget. He criticized the web interface, as well as the lack of an iOS app. Time listed Google Keep among its 50 best Android applications for 2013. 2016 In a January 2016 review, JR Raphael of Computerworld wrote that "Keep is incredibly close to being an ideal tool for me to collect and manage all of my personal and work-related notes. And, as evidenced by the fact that I continue to use it, its positives outweigh its negatives for me and make it the best all-around option for my needs", praising what he calls Keep's "killer features", namely simplicity, "easy universal access", and native integration with other Google services. However, he characterized Keep's lack of formatting options, the inability to undo or revert changes, and a missing search functionality within notes as "lingering weaknesses". In a July 2016 review, Jill Duffy of PC Magazine wrote that the interface was best described as "simplicity", and criticized it for offering list and grid views that did not make finding information quick or easy. Adding that "Most of my notes are text-based recipes, which are quite long", Duffy said the list view was "even worse" than the grid view as it only showed "one note at a time, and it's the most recently edited note." She wrote that the web interface was lacking in functionality present in the apps. The mobile app's offering to take a photo and run optical character recognition to have the scan turned into text was described as a "shining star", with the comment "It's an amazing feature, and it works very well". She also criticized the lack of formatting options, and that sharing options are "possible but not very refined". See also Comparison of notetaking software Google Notebook Google Jamboard References External links 2013 software Keep Android (operating system) software Note-taking software IOS software Google Docs Editors
4166124
https://en.wikipedia.org/wiki/S2%20Games
S2 Games
S2 Games was a video game development company founded by Marc "Maliken" DeForest, Jesse Hayes, and Sam McGrath, based in Rohnert Park, California. The company also had a development location in Kalamazoo, Michigan. The company slogan was "Dedicated employees serving dedicated gamers. Continuous development. Never-ending improvement."

History

The company's first project, Savage: The Battle for Newerth (a real-time strategy, third-person shooter and role-playing game hybrid), was released in the summer of 2003. Its sequel, Savage 2: A Tortured Soul, was released on January 16, 2008, and was independently published and distributed by the company. The next installment in the Newerth series, Heroes of Newerth, based heavily on Defense of the Ancients, was released on May 12, 2010. In 2015, S2 Games sold the rights to Heroes of Newerth to Garena to focus on Strife, its second-generation MOBA. Garena subsequently moved Heroes of Newerth to Frostburn Studios, a Kalamazoo, Michigan-based subsidiary of Garena.

Titles

Savage: The Battle for Newerth (2003) (Windows, Macintosh, Linux)
Savage 2: A Tortured Soul (2008) (Windows, Macintosh, Linux)
Heroes of Newerth (2010) (Windows, Macintosh, Linux)
Strife (2015) (Windows, Macintosh, Linux)
Savage Resurrection (2016) (Windows)
Brawl of Ages (2017) (Windows)

Key events

In 2003, S2 Games released Savage: The Battle for Newerth, their first commercial game.
In 2004, three former S2 Games employees left the company to form Offset Software.
In 2006, S2 Games re-released Savage: The Battle for Newerth as freeware.
In 2008, S2 Games released Savage 2: A Tortured Soul.
In 2009, S2 Games re-released Savage 2: A Tortured Soul as freeware.
In 2010, S2 Games released Heroes of Newerth.
In 2011, S2 Games re-released Heroes of Newerth as freeware/free-to-play.
In 2012, S2 Games made all heroes in Heroes of Newerth completely free for online play.
In 2012, over 10,000,000 Heroes of Newerth user accounts had been registered.
In 2013, S2 Games announced Strife, an upcoming "second generation MOBA".
In 2015, S2 Games sold Heroes of Newerth to Garena, which placed it with Frostburn Studios.
In 2017, Savage Resurrection was re-released under a free-to-play model.
In 2018, Strife and Brawl of Ages servers were shut down and S2 Games was quietly closed.

References

External links

Video game companies based in California
Video game development companies
Software companies based in the San Francisco Bay Area
Rohnert Park, California
Video game companies established in 2003
Video game companies disestablished in 2018
Defunct video game companies of the United States
Defunct companies based in the San Francisco Bay Area
2003 establishments in California
2018 disestablishments in California
22000932
https://en.wikipedia.org/wiki/IUCV
IUCV
Inter User Communication Vehicle (IUCV) is a data transfer mechanism in the IBM VM line of operating systems. It was introduced with VM/SP Release 1 in 1980. It allows the establishment of point-to-point communication channels, either between two virtual machines or between a virtual machine and hypervisor services. In effect, IUCV provides a form of message-based interaction between virtual machines that anticipated the client/server interaction between network-connected physical machines that later emerged in distributed systems. IUCV is implemented by CP (the VM hypervisor), which controls all aspects of session establishment, message passing and flow control.

IUCV basics

Initializing IUCV

Before a virtual machine can use the IUCV service, it must first indicate the address of an area within its address space where CP will be able to store information regarding pending messages or status. Therefore, the DECLARE BUFFER method must be invoked first.

The IUCV Path

In IUCV terminology, the session between two endpoints is called a PATH. It is identified at each end by a path ID which is only relevant to the virtual machine that owns that end of the session. A path is always a connected channel - meaning there are no connectionless paths.

Establishing a path

To establish a path, the initiating virtual machine must invoke the CONNECT method and specify the path target identity, which is either another virtual machine name or the name of a CP system service - all of which start with the '*' character, which is not a valid character within a virtual machine name. Provided the target has initialized IUCV itself, the target will be notified of the pending incoming path connection and may then either use the ACCEPT method - to complete path establishment - or the SEVER method - which effectively closes the pending path. Once the path is established, messages may be passed between the two path endpoints.

IUCV Messages

IUCV messages are bounded, that is, they have a beginning and an end. If more than one message is pending on a path for an endpoint, IUCV will not merge the messages. Messages are sent on the path using the SEND method. The other endpoint can then receive the message using the RECEIVE method. If the original message also requested a reply, the receiving endpoint then uses the REPLY method to send that reply.

Flow control

Multiple messages may be pending on a path. The number of messages allowed to be pending on a path is specified during path establishment but cannot exceed 65535. Attempting to send a message on a path which has reached its pending message limit will result in an error.

Suspend and resume

Data transfer may be temporarily suspended by using the QUIESCE method. While the path is suspended, no further message transfers are allowed on the path until the RESUME method is invoked by the virtual machine that initially suspended the path.

Polling

A virtual machine may poll for IUCV notifications using the TEST MESSAGE and TEST COMPLETION methods. If nothing is pending, then the virtual machine waits until further information is available.

Explicit path termination

When either endpoint issues the SEVER method, the path enters a severed (closing) state and the other endpoint is notified. At this point, no new messages are allowed on the path - but the other endpoint may still retrieve pending messages. When the other endpoint also issues the SEVER method, the path is effectively dismantled.
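The path life cycle described above can be summarized with a small executable model. The following Python sketch is purely illustrative - real IUCV is invoked from assembler through the B2F0 instruction and an IPARML control block (described below), and a real path has a separate path ID at each end. This toy model uses a single shared ID, and every name in it (the IUCV class, its methods, field names, the default message limit) is invented for the example; only the connect/accept, send/receive and sever semantics, including the pending-message limit, mirror the description above.

# Illustrative model of the IUCV path life cycle (not real VM code).
class IUCV:
    MAX_MSGLIM = 65535  # architectural limit on pending messages per path

    def __init__(self):
        self.paths = {}
        self.next_id = 1

    def connect(self, source, target, msglim=10):
        # CONNECT: create a pending path; the target must ACCEPT or SEVER.
        if not 1 <= msglim <= self.MAX_MSGLIM:
            raise ValueError("invalid pending-message limit")
        path_id = self.next_id
        self.next_id += 1
        self.paths[path_id] = {"ends": (source, target), "state": "pending",
                               "msglim": msglim, "queue": []}
        return path_id

    def accept(self, path_id):
        self.paths[path_id]["state"] = "connected"   # ACCEPT completes setup

    def send(self, path_id, data, want_reply=False):
        path = self.paths[path_id]
        if path["state"] != "connected":
            raise RuntimeError("path not connected")
        if len(path["queue"]) >= path["msglim"]:     # flow control
            raise RuntimeError("pending-message limit reached")
        path["queue"].append({"data": data, "want_reply": want_reply})

    def receive(self, path_id):
        # RECEIVE one bounded message; pending messages are never merged.
        return self.paths[path_id]["queue"].pop(0)

    def sever(self, path_id):
        # First SEVER marks the path closing; the second dismantles it.
        path = self.paths[path_id]
        if path["state"] == "severing":
            del self.paths[path_id]
        else:
            path["state"] = "severing"

cp = IUCV()
pid = cp.connect("LINUX01", "*MSG")  # system service names start with '*'
cp.accept(pid)
cp.send(pid, b"hello")
print(cp.receive(pid)["data"])       # b'hello'
cp.sever(pid)
cp.sever(pid)                        # both ends sever: path dismantled

Note that a message can still be retrieved after the first SEVER, but a SEND on a severing path fails, matching the rule that no new messages are allowed once the path enters its closing state.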
Implicit path termination

A path may be implicitly closed when:

A virtual machine logs off
A virtual machine is reset
A virtual machine terminates IUCV operations using the RETRIEVE BUFFER method

In any of those cases, the behavior for the other end of the path is identical to an explicit path termination.

Using IUCV

The B2F0 instruction

IUCV methods are invoked by using the 'B2F0'x instruction. This instruction must be invoked while in virtual supervisor state (for example, by a guest supervisor); otherwise an operation exception program interrupt is generated. The instruction is then interpreted by CP as an IUCV request.

IPARML

IPARML is the Iucv PARaMeter List. It is a control block that describes the method being invoked as well as the method parameters. Upon completion of the B2F0 instruction, some fields are altered by CP to indicate the completion status of the instruction.

Notifications

CP notifies a virtual machine of a pending message or status information by making an external interrupt code X'4000' pending to the virtual machine. When the interrupt occurs, the information regarding the pending status is made available at the address location specified by the DECLARE BUFFER method.

Macros

CP Macros

CP has a specific macro (IUCV) which generates the appropriate code - including the instruction and the filling in of the IPARML - so that the parameter details for a given method can be defaulted or checked for conflicts.

CMS Macros

CMS can be made to handle IUCV application requests. CMS has its own set of IUCV macros (CMSIUCV) which allow multiple applications to share the IUCV facility within a virtual machine.

Authorization

Access to some IUCV functions is controlled by statements in the CP Directory (the list of virtual machines and their specifications). A virtual machine can be permitted to accept communications from all other virtual machines via the IUCV ALLOW directory statement, or to establish a communication path with any other virtual machine via the IUCV ANY statement. It is also possible to allow a virtual machine to issue path connection requests to other specific virtual machines by specifying the virtual machine name in an IUCV statement, for example: IUCV TARGETVM. By default, a user is always allowed to connect to itself. The IUCV statement controls CP-imposed access control for IUCV connections. In addition, a virtual machine can impose its own access control by rejecting an attempt to connect.

Examples of IUCV use

CP System services

The CP system services are IUCV endpoints which are not virtual machines themselves, but allow a virtual machine to perform hypervisor functions asynchronously or to access specific hypervisor facilities. Some examples are:

*MSG : The Message System Service. Allows a virtual machine to receive through IUCV specific virtual machine console outputs, such as the results of the 'CP MESSAGE' command or console I/O. This is used by VM subsystems such as PROP (the PRogrammable OPerator) or Fullscreen CMS.
*SPL : Allows accessing spool files asynchronously. RSCS (the Remote Spool Communication Subsystem) is an example of an application that uses this system service.

GCS

GCS (the Group Control System) of VM uses IUCV to perform maintenance of shared memory areas between virtual machines. By using implicit path termination, the GCS recovery virtual machine can ensure that any locks held on the shared area by a virtual machine that entered the group but left unexpectedly are properly released.
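This recovery pattern - treating an implicit sever notification as the trigger for cleanup - can be sketched as follows. The sketch does not correspond to actual GCS internals; the lock bookkeeping (locks_held, release, on_path_severed) is entirely invented for illustration, and only the idea of releasing a departed member's locks on sever notification comes from the description above.

# Hypothetical cleanup handler: when CP reports that a group member's
# path was severed (e.g. the member logged off), release its locks.
locks_held = {"WORKVM01": ["shared-area-lock-3"]}

def release(lock):
    print("released", lock)

def on_path_severed(member_name):
    # Implicit termination looks like an explicit SEVER to this end,
    # so one handler covers logoff, reset and RETRIEVE BUFFER alike.
    for lock in locks_held.pop(member_name, []):
        release(lock)

on_path_severed("WORKVM01")  # prints: released shared-area-lock-3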
VM TCP/IP

VM TCP/IP - the TCP/IP stack for VM - uses IUCV either to allow a virtual machine to perform socket operations or to allow a virtual machine to act as a network interface, passing whole frames or datagrams between itself and the TCP/IP stack. The S/390 and z/Architecture implementation of Linux uses this facility to implement a network interface to the VM TCP/IP stack.

See also

VMCF – The Virtual Machine Communication Facility
Channel-to-channel adapter

References

External links

IUCV in z/VM 5.3 CP Programming Services manual chapter 2.1.3

IBM mainframe operating systems
Virtualization software
VM (operating system)
43568926
https://en.wikipedia.org/wiki/Comparison%20of%20music%20education%20software
Comparison of music education software
The following comparison of music education software compares general and technical information for different music education software. For the purpose of this comparison, music education software is defined as any application which can teach music.

General

Operating system compatibility

This section lists the operating systems that the software supports. There may be multiple versions of a player for different operating systems.

Features

Extended features

Instruments supported

See also

Online music education
List of music software
List of educational software

References

Education music software comparison
Music
Music education
442948
https://en.wikipedia.org/wiki/Calendar%20era
Calendar era
A calendar era is the period of time elapsed since one epoch of a calendar and, if it exists, before the next one. For example, the Gregorian calendar numbers its years in the Western Christian era (the Coptic Orthodox and Ethiopian Orthodox churches have their own Christian eras). In antiquity, regnal years were counted from the accession of a monarch. This makes the chronology of the ancient Near East very difficult to reconstruct, based on disparate and scattered king lists, such as the Sumerian King List and the Babylonian Canon of Kings. In East Asia, reckoning by era names chosen by ruling monarchs ceased in the 20th century except for Japan, where they are still used.

Ancient dating systems

Assyrian eponyms

For over a thousand years, ancient Assyria used a system of eponyms to identify each year. Each year at the Akitu festival (celebrating the Mesopotamian new year), one of a small group of high officials (including the king in later periods) would be chosen by lot to serve as the limmu for the year, which meant that he would preside over the Akitu festival and the year would bear his name. The earliest attested limmu eponyms are from the Assyrian trading colony at Karum Kanesh in Anatolia, dating to the very beginning of the 2nd millennium BC, and they continued in use until the end of the Neo-Assyrian Period, ca. 612 BC. Assyrian scribes compiled limmu lists, including an unbroken sequence of almost 250 eponyms from the early 1st millennium BC. This is an invaluable chronological aid, because a solar eclipse was recorded as having taken place in the limmu of Bur-Sagale, governor of Guzana. Astronomers have identified this eclipse as one that took place on 15 June 763 BC, which has allowed absolute dates of 892 to 648 BC to be assigned to that sequence of eponyms. This list of absolute dates has allowed many of the events of the Neo-Assyrian Period to be dated to a specific year, avoiding the chronological debates that characterize earlier periods of Mesopotamian history.

Olympiad dating

Among the ancient Greek historians and scholars, a common method of indicating the passage of years was based on the Olympic Games, first held in 776 BC. The Olympic Games provided the various independent city-states with a mutually recognizable system of dates. Olympiad dating was not used in everyday life. This system was in use from the 3rd century BC. The modern Olympic Games (or Summer Olympic Games beginning 1896) do not continue the four-year periods from ancient Greece: the 669th Olympiad would have begun in the summer of 1897, but the modern Olympics were first held in 1896.

Indiction cycles

Another common system was the indiction cycle (15 indictions made up an agricultural tax cycle in Roman Egypt, an indiction being a year in duration). Documents and events began to be dated by the year of the cycle (e.g., "fifth indiction", "tenth indiction") in the 4th century, and this system was used long after the tax ceased to be collected. It was used in Gaul, in Egypt until the Islamic conquest, and in the Eastern Roman Empire until its conquest in 1453. The rule for computing the indiction from the AD year number (which he had just invented) was stated by Dionysius Exiguus: add 3 and divide by 15; the remainder is the indiction, with 0 understood to be the fifteenth indiction. Thus the indiction of 2001 was 9. The beginning of the year for the indiction varied.
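Dionysius' rule translates directly into code. A minimal sketch (the function name is ours):

def indiction(ad_year):
    """Indiction of an AD year, per Dionysius Exiguus:
    add 3, divide by 15; a remainder of 0 means the fifteenth."""
    remainder = (ad_year + 3) % 15
    return 15 if remainder == 0 else remainder

print(indiction(2001))  # 9, as in the example above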
Seleucid era

The Seleucid era was used in much of the Middle East from the 4th century BC to the 6th century AD, and continued until the 10th century AD among Oriental Christians. The era is computed from the epoch 312 BC: in August of that year Seleucus I Nicator captured Babylon and began his reign over the Asian portions of Alexander the Great's empire. Thus depending on whether the calendar year is taken as starting on 1 Tishri or on 1 Nisan (respectively the start of the Jewish civil and ecclesiastical years) the Seleucid era begins either in 311 BC (the Jewish reckoning) or in 312 BC (the Greek reckoning: October–September).

Ancient Rome

Consular dating

An early and common practice was Roman 'consular' dating. This involved naming both consules ordinarii who had taken up this office on 1 January (since 153 BC) of the relevant civil year. Sometimes one or both consuls might not be appointed until November or December of the previous year, and news of the appointment may not have reached parts of the Roman empire for several months into the current year; thus we find the occasional inscription where the year is defined as "after the consulate" of a pair of consuls. The use of consular dating ended in AD 541 when the emperor Justinian I discontinued appointing consuls. The last consul nominated was Anicius Faustus Albinus Basilius. Soon afterwards, imperial regnal dating was adopted in its place.

Dating from the founding of Rome

Another method of dating, rarely used, was anno urbis conditae (Latin: "in the year of the founded city", abbreviated AUC, where "city" meant Rome). (It is often incorrectly given that AUC stands for ab urbe condita, which is the title of Titus Livius's history of Rome.) Several epochs were in use by Roman historians. Modern historians usually adopt the epoch of Varro, which is placed in 753 BC. The system was introduced by Marcus Terentius Varro in the 1st century BC. The first day of its year was Founder's Day (21 April), although most modern historians assume that it coincides with the modern historical year (1 January to 31 December). It was rarely used in the Roman calendar and in the early Julian calendar – naming the two consuls that held office in a particular year was dominant. An AUC year is thus approximately the corresponding AD year plus 753. About AD 400, the Iberian historian Orosius used the AUC era. Pope Boniface IV (about AD 600) may have been the first to use both the AUC era and the Anno Domini era (he put AD 607 = AUC 1360).

Regnal years of Roman emperors

Another system that is less commonly found than might be thought was the use of the regnal year of the Roman emperor. At first, Augustus indicated the year of his reign by counting how many times he had held the office of consul, and how many times the Roman Senate had granted him tribunician powers, carefully observing the fiction that his powers came from these offices granted to him, rather than from his own person or the many legions under his control. His successors followed his practice until the memory of the Roman Republic faded (about AD 200), when they began to use their regnal year openly.

Dating from the Roman conquest

Some regions of the Roman Empire dated their calendars from the date of Roman conquest, or the establishment of Roman rule. The Spanish era counted the years from 38 BC, probably the date of a new tax imposed by the Roman Republic on the subdued population of Iberia.
The date marked the establishment of Roman rule in Spain and was used in official documents in Portugal, Aragon, Valencia, and Castile into the 14th century. This system of numbering years fell into disuse in 1381 and was replaced by today's Anno Domini. Throughout the Roman and Byzantine periods, the Decapolis and other Hellenized cities of Syria and Palestine used the Pompeian era, counting dates from the Roman general Pompey's conquest of the region in 63 BC.

Maya

A different form of calendar was used to track longer periods of time, and for the inscription of calendar dates (i.e., identifying when one event occurred in relation to others). This form, known as the Long Count, is based upon the number of elapsed days since a mythological starting-point. According to the calibration between the Long Count and Western calendars accepted by the great majority of Maya researchers (known as the GMT correlation), this starting-point is equivalent to 11 August 3114 BC in the proleptic Gregorian calendar or 6 September in the Julian calendar (−3113 astronomical).

Other dating systems

A great many local systems or eras were also important, for example the year from the foundation of one particular city, the regnal year of the neighboring Persian emperor, and eventually even the year of the reigning Caliph.

Late Antiquity and Middle Ages

Most of the traditional calendar eras in use today were introduced at the time of transition from Late Antiquity to the Early Middle Ages, roughly between the 6th and 10th centuries.

Christian era

The Etos Kosmou of the Byzantine Calendar places Creation at the beginning of its year 1, namely 5509 BC. Its first known use occurred in the 7th century AD, although its precursors were developed about AD 400. The year 7509 of this era began in September 2000. The Era of Martyrs or Era of Diocletian is reckoned from the beginning of the reign of Roman Emperor Diocletian; the first year of this era was 284/5. It was not the custom to use regnal years in Rome, but it was the custom in Roman Egypt, which the emperor ruled through a prefect (the king of Egypt). The year number changed on the first day of the Egyptian month Thoth (29 August in three years out of four, 30 August in the year before a Roman leap year). Diocletian abolished the special status of Egypt, which thereafter followed the normal Roman calendar: consular years beginning on 1 January. This era was used in the Easter tables prepared in Alexandria long after the abdication of Diocletian, even though Diocletian was a notorious persecutor of Christians. The Era of Diocletian was retained by the Coptic Church and used for general purposes, but by 643 the name had been changed to Era of the Martyrs. The Incarnation Era is used by Ethiopia; its epoch is 29 August, AD 8 in the Julian calendar. The Armenian calendar has its era fixed at AD 552.

Dionysian "Common Era"

The era based on the Incarnation of Christ was introduced by Dionysius Exiguus in 525 and is in continued use with various reforms and derivations. The distinction between the Incarnation being the conception or the Nativity of Jesus was not drawn until the late ninth century. The beginning of the numbered year varied from place to place: when, in 1600, Scotland adopted 1 January as the date the year number changes, this was already the case in much of continental Europe. England adopted this practice in 1752.

A.D. (or AD) – for the Latin Anno Domini, meaning "in the year of (our) Lord".
This is the dominant or Western Christian Era; AD is used in the Gregorian calendar. Anno Salutis, meaning "in the year of salvation", is identical. Originally intended to number years from the Incarnation of Jesus, according to modern thinking the calculation was a few years off. Years preceding AD 1 are numbered using the BC era, avoiding zero or negative numbers. AD was also used in the medieval Julian calendar, but the first day of the year was either 1 March, Easter, 25 March, 1 September, or 25 December, not 1 January. To distinguish between the Julian and Gregorian calendars, O.S. and N.S. were often added to the date, especially during the 17th and 18th centuries, when both calendars were in common use. Old Style (O.S.) was used for the Julian calendar and for years not beginning on 1 January. New Style (N.S.) was used for the Gregorian calendar and for Julian calendar years beginning on 1 January. Many countries switched to using 1 January as the start of the numbered year at the same time as they switched from the Julian calendar to the Gregorian calendar, but others switched earlier or later.
B.C. (or BC) – meaning "Before Christ". Used for years before AD 1, counting backwards so the year n BC is n years before AD 1. Thus there is no year 0.
C.E. (or CE) and B.C.E. (or BCE) – meaning "Common Era" and "Before the Common Era", numerically equivalent to AD and BC, respectively (in writing, "AD" precedes the year number, but "CE" follows the year: AD 1 = 1 CE). The Latin equivalent vulgaris aera was used as early as 1615 by Johannes Kepler. The English abbreviations C.E. and B.C.E. were introduced in the 19th century by Jewish intellectuals, wishing to avoid the abbreviation for dominus "lord" in implicit reference to Christ. By the later 20th century, the abbreviations had come into wider usage by authors who wished to emphasize secularism.

Dionysian-derived

Astronomical year numbering equates its year 0 with 1 BC, and counts negative years from 2 BC backward (−1 backward), so 100 BC is −99.
The human era, also named the Holocene era, proposed by Cesare Emiliani, adds 10,000 to AD years, so that AD 1 would be the year 10,001.
Anno Lucis of Freemasonry adds 4000 years to the AD year.

Islamic

A.H. (or AH) – for the Latinized Anno Hegirae, meaning "in the year of the Hijra", Muhammad's emigration from Mecca to Medina in September 622, which occurred in its first year, used in the Islamic calendar. Since the Islamic calendar is a purely lunar calendar of about 354 or 355 days, its year count increases faster than that of solar and lunisolar calendars.
S.H. (or SH) is used by the Iranian calendar to denote the number of solar years since the Hijra. The year beginning at the vernal equinox equals the number of the Gregorian year beginning at the preceding 1 January minus 621.

Hindu

Hindu calendar, counting from the start of the Kali Yuga, with its epoch on 18 February 3102 BC Julian (23 January 3102 BC Gregorian), based on Aryabhata (6th century).
Vikrama Samvat, 56-57 BC, introduced about the 12th century.
S.E. or (SE) – for the Saka Era, used in some Hindu calendars and in the Indian national calendar, with an epoch near the vernal equinox of year 78 (its year 0); its usage spread to Southeast Asia before year 1000.
This era is also used (together with the Gregorian calendar) in the Indian national calendar, the official civil calendar used in communiques issued by the Government of India.
Lakshmana Era, established by the Bengali ruler Lakshmana Sena, with an epoch of 1118–1119. It was used for at least 400 years in Bihar and Bengal.

Southeast Asia

The Hindu Saka Era influences the calendars of southeast Asian indianized kingdoms.
B.E. – for the Buddhist Era, introduced by Vajiravudh in 1912, which has an epoch (origin) of 544 BC. This year is called year 1 in Sri Lanka and Burma, but year 0 in Thailand, Laos and Cambodia. Thus the year 2500 B.E. occurred in 1956 in the former countries, but in 1957 in the latter. In Thailand in 1888, King Chulalongkorn decreed a National Thai Era, dating from the founding of Bangkok on 6 April 1782. In 1912, New Year's Day was shifted to 1 April. In 1941, Prime Minister Phibunsongkhram decided to count the years since 543 BC. This is the Thai solar calendar using the Thai Buddhist Era aligned to the western solar calendar.
BE – for the Burmese Era, from the Burmese calendar, originally with an epochal year 0 date of 22 March 638. From it derived CS, the Chula Sakarat era, variously known as LE (Lesser Era) or ME (Minor Era), the Major or Great Era being the Saka Era of the Indian national calendar.
B.E. of the Bahá'í calendar is described below.

Bahá'í

B.E. – The Bahá'í calendar dates from the year of the declaration of the Báb. Years are counted in the Bahá'í Era (BE), which starts its year 1 from 21 March 1844.

Jewish

A.M. (or AM) – for the Latin Anno Mundi, meaning "in the year of the world", has its epoch in the year 3761 BC. It was first used to number the years of the modern Hebrew calendar by Maimonides in 1178. Precursors with epochs one or two years later had been used since the 3rd century, all based on the Seder Olam Rabba of the 2nd century. The year beginning in the northern autumn of 2000 was 5761 AM.

Zoroastrian

The Zoroastrian calendar used regnal years since the reform by Ardeshir I, but after the fall of the Sassanid Empire, the accession of the last Sassanid ruler, Yazdegerd III of Persia, crowned 16 June 632, continued to be used as the reference year, abbreviated Y.Z. or "Yazdegerd era".

Modern

Political

The Republican Era of the French Republican Calendar was dated from 22 September 1792, the day of the proclamation of the French First Republic. It was used in Revolutionary France from 24 October 1793 (on the Gregorian calendar) to 31 December 1805. The Positivist calendar of 1844 takes 1789 as its epoch. The Republican era has been used by the Republic of China since 1912, the first year of the republic. Coincidentally, this is the same as the Juche era used in North Korea, which counts from the year of the birth of its founder, Kim Il-Sung. The Era Fascista ('Fascist Era') was instituted by the Italian Fascists and used Roman numerals to denote the number of years since the March on Rome in 1922. Therefore, 1934, for example, was XII E.F. (era fascista). This era was abolished with the fall of fascism in Italy on 25 July 1943, but restored in the northern part of the country during the Italian Social Republic. The Gregorian calendar remained in simultaneous use, and a double numbering was adopted: the year of the Common Era was presented in Arabic numerals and the year of the Fascist era in Roman numerals. The year of the Fascist calendar began on 29 October, so, for example, 27 October 1933 was XI E.F. but 30 October 1933 was XII E.F.
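Because the Fascist year began on 29 October rather than 1 January, converting a date means checking whether that anniversary has passed. A minimal illustrative sketch (the function name is ours), reproducing the examples above:

from datetime import date

def era_fascista(d):
    """Fascist-era year for dates from 29 October 1922 onward."""
    year = d.year - 1922
    if (d.month, d.day) >= (10, 29):  # past the 29 October new year
        year += 1
    return year

print(era_fascista(date(1933, 10, 27)))  # 11 (XI E.F.)
print(era_fascista(date(1933, 10, 30)))  # 12 (XII E.F.)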
China traditionally reckoned by the regnal year of its emperors; see Chinese era name. Most Chinese do not assign numbers to the years of the Chinese calendar, but the few who do, like expatriate Chinese, use a continuous count of years from the reign of the legendary Yellow Emperor, using 2698 BC as year 1. Western writers begin this count at either 2637 BC or 2697 BC (see Chinese calendar). Thus, the Chinese years 4637, 4697, or 4698 began in early 2000. In Korea, from 1952 until 1961, years were numbered as Dangi years, with 2333 BC regarded as the first such year. The Assyrian calendar, introduced in the 1950s, has its era fixed at 4750 BC. The Japanese calendar dates from the accession of the current Emperor of Japan. The current emperor took the throne in May 2019; that year became Reiwa 1, having until then been Heisei 31. The United States government sometimes uses a calendar of the era of its Independence, fixed on 4 July 1776, together with the Anno Domini civil calendar. For instance, its Constitution is dated "the Seventeenth Day of September in the Year of our Lord one thousand seven hundred and Eighty seven and of the Independence of the United States of America the Twelfth." Presidential proclamations are also dated in this way.

Religious

A.D. – "After Dianetics". In Scientology, years are numbered relative to the first publication of the book Dianetics: The Modern Science of Mental Health (1950).
Y.O.L.D. – In the Discordian calendar, the standard designation for the year number is YOLD (Year of Our Lady of Discord). The calendar begins counting from 1 January 1166 BC in the Discordian year 0, ostensibly the date of origin of the Curse of Greyface. An alternate designation, A.D.D., has occasionally been seen (Anno Domina Discordia, a Latin translation of YOLD, but presumably also a play on attention deficit disorder).
e.v. – Era vulgaris. (From Latin, meaning "common era", usually stylized in lowercase.) The Thelemic calendar is used by some Thelemites to designate a number of years since Aleister Crowley's inauguration of the so-called Aeon of Horus, which occurred on 20 March 1904, and coincides with both the Thelemic new year and a holiday known as the Equinox of the Gods. The abbreviation "A.N.", for Aerae Novae ("New Era" in Latin), is sometimes also used.

Practical

B.P. – for Before Present; specifically, the number of radiocarbon years before 1950.
HE – for counting elapsed years of the Holocene from near the beginning of the Neolithic revolution of the Holocene epoch, specifically by adding exactly 10,000 years to AD (Anno Domini) or CE (Common Era) years, and subtracting BC/BCE years from 10001.
Julian day number – for counting days, not years, its era fixed at noon 1 January 4713 BC in the proleptic Julian calendar. This equals 24 November 4714 BC in the proleptic Gregorian calendar. From noon of this day to noon of the next day was day 0. Multiples of 7 are Mondays. Negative values can also be used. Apart from the choice of the zero point and name, this Julian day and Julian date are not related to the Julian calendar. It does not count years, so, strictly speaking, it has no era, but it does have an epoch.
Unix time – for counting elapsed seconds since the Unix epoch, set at 00:00:00 (midnight) UTC on 1 January 1970, though there are problems with the Unix implementation of Coordinated Universal Time (UTC).

See also

Calendar reform
Common Era
Julian day
List of calendars

References

Chronology
43165174
https://en.wikipedia.org/wiki/OpenClonk
OpenClonk
OpenClonk is a free and open-source 2D multiplayer action game, in which the player controls small humanoids called "clonks". The main mechanics of the game include mining, settling, player-versus-player combat, and tactical gameplay elements. The game has been compared to Worms, The Settlers, Lemmings and Minecraft. The game features single-player and multiplayer modes, and supports cross-platform play across Microsoft Windows, Linux, and MacOS. The OpenClonk project is a continuation of the Clonk game series and game engine, both of which are being actively developed. The source code is available under the ISC license and game content is licensed under CC BY-SA, CC BY and CC0.

Gameplay

The gameplay features elements of action and tactical games. The player controls clonks, and these clonks can perform various tasks such as shoveling through dirt, throwing dynamite to mine gold, constructing buildings, and wielding swords. The game world consists of a dynamic and destructible landscape. The landscape is constructed out of different materials like earth, tunnels, sky, coal, ore, gold and water. Players can destroy these materials by digging, by using explosions for solid materials, and by using gravity or pumps for liquids. The various in-game objects and tools available to the clonks allow for dynamic gameplay. Gameplay is focused on two types of game rounds: settlement and melee rounds. In settlement rounds, players need to cooperate and build up large settlements to perform certain tasks like mining gold or constructing a statue. In melee rounds, players battle each other in small arenas.

Settlement

Settlement rounds can be played cooperatively, where two or more players work towards a mutual goal. In these types of rounds, players have to build up settlements using the resources in the landscape, ranging from wood and metal to fire and stones. Typical challenges in settlement rounds are provided by inhospitable landscapes, natural disasters like meteors and volcanoes, and the need to transport large objects. Settlements and their production lines allow the player to construct advanced items and vehicles. With more advanced items, players can fulfill more complex tasks and achieve one of the many goals. Goals for settlement rounds include:

Wealth: gain a certain amount of gold (called clunkers).
Expansion: expand your settlement to a certain area of the landscape.
Mining: mine a certain amount of valuables like gold or gems.
Statue Construction: gather statue parts and construct a statue.

Melee

Melee rounds are focused on the elimination of other players. These rounds are usually short, with a typical length between 5 and 30 minutes. The players control a single clonk and have to attack enemies using various weapons, like swords, bows, dynamite, grenade launchers, muskets and catapults. Melee round goals include:

Last Man Standing: players have a certain amount of respawns and need to survive as long as possible.
Deathmatch: players have to achieve a certain number of kills.
Capture the Flag: capture enemy flags and bring them to your base to gain points.
King of the Hill: try to defend a certain region as long as possible.

Development

OpenClonk development began with the release of the source code of Clonk Rage in February 2009 and a later release under the ISC license in May 2009. The website www.openclonk.org was founded at the same time, acting as a portal both for development and for players.
The source code and game content were first hosted in a Mercurial repository, but are now hosted in a Git repository. The game engine is programmed in C++ and is cross-platform. The game is available on Windows, Linux and Mac OS X, and can be compiled on FreeBSD. Game content is created using the game's own scripting language, C4Script, and is developed simultaneously with the game engine. Since the game content and engine are completely separate, mods and completely new game content can be created by anyone using the OpenClonk engine. Development of OpenClonk is completely open, and contributions from the community are often accepted as patches.

Releases

OpenClonk is released on a regular basis, with roughly one major release every year. Major releases are published when the engine or the game content has received large modifications or important features have been added. Minor releases are done for bug fixes and small updates to the game content. The most recent stable version is OpenClonk 8.1, released on 17 March 2018.

OpenClonk 1.0 (3 December 2010)
OpenClonk 1.1 (28 December 2010)
OpenClonk 1.2 (12 February 2011)
OpenClonk 2.0 (1 October 2011)
OpenClonk 2.1 (10 October 2011)
OpenClonk 2.2 (10 February 2012)
OpenClonk 3.0 (14 October 2012)
OpenClonk 3.1 (15 October 2012)
OpenClonk 3.2 (18 November 2012)
OpenClonk 3.3 (10 March 2013)
OpenClonk 4.0 (26 January 2014)
OpenClonk 4.1 (16 February 2014)
OpenClonk 5.0 (5 October 2014)
OpenClonk 6.0 (15 March 2015)
OpenClonk 6.1 (12 June 2015)
OpenClonk 7.0 (16 January 2016)
OpenClonk 8.0 (4 February 2018)
OpenClonk 8.1 (17 March 2018)

Reception

In a popularity competition organized by the Linux Game Awards, OpenClonk took third place in the Project of the Month vote for March 2014. Linux Format named OpenClonk a "HotPick" in September 2015. Furthermore, OpenClonk achieved an 8.8/10 user rating on Desura in June 2015. The German computer magazine c't added OpenClonk to a DVD shipped with its magazine in April 2011.

See also

Clonk
List of open source games

References

External links

Download OpenClonk source code

2010 video games
Indie video games
Open-source video games
Multiplayer online games
Windows games
Linux games
MacOS games
Creative Commons-licensed video games
Free software programmed in C++
Multiplayer and single-player video games
Software using the ISC license
49856505
https://en.wikipedia.org/wiki/Gloria%20Townsend
Gloria Townsend
Gloria Townsend is an American computer scientist and professor in the department of Computer Science at DePauw University in Indiana. She is known for her work in evolutionary computation and her involvement with women in computing. She has served on the Executive Committee of the Association for Computing Machinery (ACM) Council on Women in Computing. She is the author of One Hundred One Ideas for Small Regional Celebrations of Women in Computing. In 2013, she received the Mr. and Mrs. Fred C. Tucker Jr. Distinguished Career Award for notable contributions to DePauw through her commitments to students, teaching excellence, her chosen discipline, and service to the University. In 2006, she organized several new regional celebrations of Women in Computing (WiC) to coincide with the international Grace Hopper Celebration of Women in Computing conference. In 2010, the United States National Science Foundation awarded funding to extend the celebrations to cover 12 regions as a joint effort by ACM-W, ABI, and NCWIT.

Publications

1998. Turning liabilities into assets in a general education course, SIGCSE '98 Proceedings of the twenty-ninth SIGCSE technical symposium on Computer science education, pages 58–62, ACM, New York, NY, USA, 1998.
2002. People who make a difference: mentors and role models, ACM SIGCSE Bulletin – Women and Computing, Volume 34, Issue 2, pages 57–61, ACM, New York, NY, USA, June 2002.
2007. Leveling the CS1 playing field, SIGCSE '07 Proceedings of the 38th SIGCSE technical symposium on Computer science education, pages 331–335, ACM, New York, NY, USA, 2007.

See also

Association for Computing Machinery's Council on Women in Computing (ACM-W)

References

External links

Gloria Townsend Professor of Computer Science, DePauw
ACM-W Profile
Grace Hopper Celebration of Women in Computing

21st-century American women scientists
DePauw University faculty
Indiana University alumni
Living people
American women computer scientists
American computer scientists
Year of birth missing (living people)
American women academics
1597478
https://en.wikipedia.org/wiki/Hew%20Raymond%20Griffiths
Hew Raymond Griffiths
Hew Raymond Griffiths (born 8 November 1962, UK) has been accused by the United States of being a ringleader of DrinkOrDie (DOD), an underground software infringement network, using the online identity "Bandido". Griffiths was living in Berkeley Vale in the Central Coast region of NSW, Australia, before he was placed on remand at Silverwater Correctional Centre. After fighting extradition for almost three years, Griffiths was finally extradited from Australia to the United States, and on 20 February 2007 he appeared before Magistrate Judge Barry R. Portez of the U.S. District Court in Alexandria, Virginia. On 20 April 2007, the U.S. Department of Justice announced that Griffiths had entered a plea of guilty. His case is of interest in that he is an Australian resident who was indicted by a court in Virginia, United States, for copyright infringement and conspiracy to infringe copyright under the US Code. Griffiths, born in the United Kingdom, had never physically left Australia after arriving in his adopted country at an early age. This was an unusual situation, as the US extradition did not target a fugitive or a dangerous person who had financially profited from his activities. However, the Australian courts and executive government agreed to treat Griffiths' activities as having taken place in a US jurisdiction. The case therefore highlights the serious consequences for Australian Internet users who are charged with distributing US copyright-protected material. Griffiths' extradition was very controversial in Australia, where his actions were not criminal. The matter of USA v Griffiths has been cited as an example of how bilateral arrangements can lead to undesirable effects such as a loss of sovereignty and what some have described as draconian outcomes. On 22 June 2007, Griffiths was sentenced to 51 months in prison for conspiracy to commit copyright infringement. Taking into account the three years he had spent in Australian and US prisons prior to sentencing, he served a further 15 months in the US. Griffiths' sentence attracted significant attention in Australia, and some attention in the United States and other countries which have recently signed, or are currently negotiating, bilateral free trade agreements with the USA. Griffiths finally returned to Australia on 2 March 2008, after five weeks as an illegal alien in the US immigration detention system following his release from prison on 26 January 2008 (Australia Day). A condition of his repatriation to Australia was that he never again re-enter the United States, a country he had never visited before being extradited to it.

See also

Copyright infringement
DrinkOrDie
Operation Buccaneer
Warez

References

Aussie software pirate extradited, Sydney Morning Herald, 7 May 2007
Software pirates not safe at home, The New Zealand Herald, 7 September 2004
Accused web pirate back behind bars, Sydney Morning Herald, 8 July 2004
The unsolicited views of Internet Users, broadbandreports.com blog, 17 July 2004
Illegal Internet Network reproduced and distributed pirated software, films and music worth $50 million – US DOJ, 12 March 2003
Robbery under arms: Copyright law and the Australia–US Free Trade Agreement, by Matthew Rimmer, First Monday, March 2006
How To Kill A Country: Australia's Devastating Trade Deal With the United States, paper by Linda Weiss, Elizabeth Thurbon & John Mathews, Evatt Foundation, 2 April 2005
Global Software Piracy costing $54 Billion in 2005 – Computing.co.uk, 23 May 2006
Australian Copyright Act 1968
Copyright Law of the United States of America, contained in Title 17 of the US Code
IP Chapter of AUSFTA, 2004
'Bandido' Software Pirate Arraigned In U.S. On 2 Charges, Information Week, 21 February 2007
Another One Sacrificed in the Name of an Alliance, opinion article by Richard Ackland, Sydney Morning Herald, 16 February 2007
BitTorrent issues weblog, 21 April 2007
IPKAT intellectual property law issues weblog, 23 April 2007
Australian alliance issues weblog, 8 May 2007
Extradited Software Piracy Ringleader Sentenced to 51 Months in Prison, 22 June 2007
Discussion of the extradition on the Australian Larvatus Prodeo blog, 18 February 2007

1962 births
Living people
Central Coast (New South Wales)
Warez
Prisoners and detainees of the United States federal government
Australian people imprisoned abroad
People extradited from Australia
People extradited to the United States
2949850
https://en.wikipedia.org/wiki/Logical%20access%20control
Logical access control
In computers, logical access controls are tools and protocols used for identification, authentication, authorization, and accountability in computer information systems. Logical access is often needed for remote access to hardware and is often contrasted with the term "physical access", which refers to interactions (such as a lock and key) with hardware in the physical environment where equipment is stored and used. Logical access controls enforce access control measures for systems, programs, processes, and information. The controls can be embedded within operating systems, applications, add-on security packages, or database and telecommunication management systems.

The line between logical access and physical access can be blurred when physical access is controlled by software. For example, entry to a room may be controlled by a chip and PIN card and an electronic lock controlled by software. Only those in possession of an appropriate card, with an appropriate security level and with knowledge of the PIN, are permitted entry to the room: entry is gained by swiping the card through a card reader and entering the correct PIN code.

Logical controls, also called logical access controls and technical controls, protect data and the systems, networks, and environments that protect them. To authenticate, authorize, or maintain accountability, a variety of methodologies are used, such as password protocols, devices coupled with protocols and software, encryption, firewalls, and other systems that can detect intruders, maintain security, reduce vulnerabilities, and protect the data and systems from threats.

Businesses, organizations and other entities use a wide spectrum of logical access controls to protect hardware from unauthorized remote access. These can include sophisticated password programs, advanced biometric security features, or any other setups that effectively identify and screen users at any administrative level. The particular logical access controls used in a given facility and hardware infrastructure partially depend on the nature of the entity that owns and administrates the hardware setup. Government logical access security is often different from business logical access security, as federal agencies may have specific guidelines for controlling logical access. Users may be required to hold security clearances or go through other screening procedures that complement secure password or biometric functions. This is all part of protecting the data kept on a specific hardware setup.

Militaries and governments use logical access biometrics to protect their large and powerful networks and systems which require very high levels of security. It is essential for the large networks of police forces and militaries, where it is used not only to gain access but also in six main essential applications. Without logical access control security systems, highly confidential information would be at risk of exposure. There is a wide range of biometric security devices and software available for different levels of security needs. There are very large, complex biometric systems for large networks that require absolutely airtight security, as well as less expensive systems for use in office buildings and smaller institutions.

Notes

References

Andress, Jason. (2011). "The Basics of Information Security."
Cory Janssen, Logical Access, Techopedia, retrieved at 3:15 a.m. on August 12, 2014
findBIOMETRICS, Logical Access Control Biometrics, retrieved at 3:25 a.m. on August 12, 2014

External links

RSA Intelligence Driven Security, EMC Corporation

Computer access control
37831112
https://en.wikipedia.org/wiki/ARM%20Accredited%20Engineer
ARM Accredited Engineer
ARM Accredited Engineer (AAE) was a program of professional accreditations awarded by ARM Holdings. The AAE program was designed for computer software and hardware engineers wishing to validate their knowledge of ARM technology. The program was launched in 2012 at a series of events including ARM TechCon 2012. The AAE program consisted of a number of certifications, each with its own syllabus, and each assessed by means of a separate one-hour multiple-choice exam. The AAE program was ended in 2016.

Certifications

ARM Accredited Engineer (AAE)

AAE, an entry-level accreditation, was the first to be launched. The AAE syllabus covered software-related aspects of the ARMv7 architecture, with a specific focus on the Cortex-A and Cortex-R profiles, including applications processors and real-time processors. It did not cover Cortex-M systems. The AAE certification was aimed at general embedded software and systems developers with a broad knowledge of ARM technology. The syllabus covered the following subject areas:

ARM architecture (30%)
Software development (30%)
Software optimization (15%)
System (10%)
Software debug (8%)
Implementation (7%)

ARM Accredited MCU Engineer (AAME)

The AAME accreditation was launched on 16 September 2013. It was an entry-level accreditation, similar to the basic AAE accreditation, but focused on the ARMv7 Cortex-M profile. This accreditation was aimed at general embedded software engineers with a broad knowledge of ARM technology, with a bias toward microcontrollers. The syllabus covered the following subject areas:

ARM architecture (35%)
Software development (30%)
Debug (13%)
Software optimization (10%)
Implementation (7%)
System Startup (5%)

Other accreditations

The following accreditations were being considered for launch between 2013 and 2016:

ARM Accredited Cortex-A Engineer (AACAE)
ARM Accredited Cortex-R Engineer (AACRE)
AA Windows on ARM Developer (AAWoAD)
AA Linux on ARM Developer (AALoAD)
AA Android on ARM Developer (AAAoAD)
AA Graphics Specialist (AAGS)
AA Security Specialist (AASS)
ARM Accredited Cortex-M Engineer (AACME)
ARM Accredited SoC Developer (AASoCD)
ARM Accredited SoC Specialist (AASoCS)

Exams

All AAE program exams were delivered by Prometric Inc. as supervised computer-based tests on dedicated test platforms throughout its network of 10,000 Authorized Prometric Test Centers (APTCs) around the world. There were 70 multiple-choice questions, and candidates were given one hour to complete the test. Results were issued instantly on-screen. Grades were either pass or fail; no letter or percentage grades were issued. On passing the exam, candidates were able to request a paper certificate to be mailed to them.

References

ARM architecture
Computer engineering
Information technology qualifications
13223983
https://en.wikipedia.org/wiki/Mark%20Davis%20%28Unicode%29
Mark Davis (Unicode)
Mark Edward Davis (born September 13, 1952) is an American specialist in the internationalization and localization of software and the co-founder and president of the Unicode Consortium. He is one of the key technical contributors to the Unicode specifications, being the primary author or co-author of bidirectional text algorithms (used worldwide to display Arabic and Hebrew text), collation (used by sorting and search algorithms), Unicode normalization, Unicode scripts, text segmentation, identifiers, regular expressions, data compression, character encoding and security.

Education

Davis was educated at Stanford University, where he was awarded a PhD in philosophy in 1979.

Career and research

Davis has specialized in the internationalization and localization of software for many years. After his PhD, he worked in Zurich, Switzerland for several years, then returned to California to join Apple, where he co-authored the Macintosh KanjiTalk and Script Manager, and authored the Macintosh Arabic and Hebrew systems. He also worked on parts of the Mac OS, including contributions to the design of TrueType. Later, he was the manager and architect for the Taligent international frameworks and was then the architect for a large part of the Java international libraries. At IBM, he was the Chief Software Globalization Architect. He is the author of a number of patents, primarily in internationalization and localization. At various times he has also managed groups or departments covering text, internationalization, operating system services, porting and technical communications. Davis founded and was responsible for the overall architecture of International Components for Unicode (ICU, a major Unicode software internationalization library) and designed the core of the Java internationalization classes. He is also the vice-chair of the Unicode Common Locale Data Repository (CLDR) project, and is a co-author of the Best Current Practice (BCP) 47 IETF language tag specifications (RFC 4646 and RFC 5646), used for identifying languages in XML and HTML documents. Since the start of 2006, Davis has been working on software internationalization at Google, focusing on the effective and secure use of Unicode (especially in the index and search pipeline), overall improvement and adoption of the software internationalization libraries (including ICU), and the introduction and maintenance of stable identifiers for languages, scripts, regions, time zones and currencies.

Publications

The Unicode Standard, Version 5.0

Personal life

Davis is married to Anne Gundelfinger. He has two daughters from a previous marriage.

References

1952 births
American computer programmers
Apple Inc. employees
Google employees
Living people
People involved with Unicode
4097356
https://en.wikipedia.org/wiki/Certificate%20policy
Certificate policy
A certificate policy (CP) is a document which aims to state what the different entities of a public key infrastructure (PKI) are, along with their roles and their duties. This document is published in the PKI perimeter. When used with X.509 certificates, a specific field can be set to include a link to the associated certificate policy. Thus, during an exchange, any relying party has access to the assurance level associated with the certificate, and can decide on the level of trust to put in the certificate.

RFC 3647

The reference document for writing a certificate policy is RFC 3647. The RFC proposes a framework for the writing of certificate policies and certification practice statements (CPS). The points described below are based on the framework presented in the RFC.

Main points

Architecture

The document should describe the general architecture of the related PKI, present the different entities of the PKI, and describe any exchange based on certificates issued by this very same PKI.

Certificate uses

An important point of the certificate policy is the description of the authorized and prohibited certificate uses. When a certificate is issued, it can be stated in its attributes what use cases it is intended to fulfill. For example, a certificate can be issued for digital signature of e-mail (aka S/MIME), encryption of data, authentication (e.g. of a Web server, as when one uses HTTPS), or further issuance of certificates (delegation of authority). Prohibited uses are specified in the same way.

Naming, identification and authentication

The document also describes how certificate names are to be chosen, as well as the associated needs for identification and authentication. When a certification application is filed, the certification authority (or, by delegation, the registration authority) is in charge of checking the information provided by the applicant, such as the applicant's identity. This is to make sure that the CA does not take part in an identity theft.

Key generation

The generation of the keys is also mentioned in a certificate policy. Users may be allowed to generate their own keys and submit them to the CA for generation of an associated certificate. The PKI may also choose to prohibit user-generated keys, and provide a separate and probably more secure way of generating the keys (for example, by using a hardware security module).

Procedures

The different procedures for certificate application, issuance, acceptance, renewal, re-key, modification and revocation form a large part of the document. These procedures describe how each actor of the PKI has to act in order for the whole assurance level to be accepted.

Operational controls

Then, a chapter is found regarding physical and procedural controls, and the audit and logging procedures involved in the PKI to ensure data integrity, availability and confidentiality.

Technical controls

This part describes the technical requirements regarding key sizes, protection of private keys (by use of key escrow) and various types of controls regarding the technical environment (computers, network).

Certificate revocation lists

These lists are a vital part of any public key infrastructure, and as such, a specific chapter is dedicated to the description of the management associated with these lists, to ensure consistency between certificate status and the content of the list.

Audit and assessments

The PKI needs to be audited to ensure it complies with the rules stated in its documents, such as the certificate policy.
The procedures used to assess such compliance are described here. Other This last chapter addresses all remaining points, for example PKI-related legal matters. References Key management Public key infrastructure
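To make the X.509 linkage described above concrete, the sketch below reads the certificatePolicies extension, which carries the policy OID and often a pointer to the CPS. This is a minimal illustration using the third-party Python cryptography library, not part of the RFC itself; the file name cert.pem is a placeholder.

```python
# Minimal sketch: read the certificatePolicies extension that links an
# X.509 certificate to its certificate policy. Requires the third-party
# "cryptography" package; cert.pem is a placeholder file name.
from cryptography import x509
from cryptography.x509.oid import ExtensionOID

with open("cert.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

try:
    ext = cert.extensions.get_extension_for_oid(ExtensionOID.CERTIFICATE_POLICIES)
except x509.ExtensionNotFound:
    print("certificate carries no certificatePolicies extension")
else:
    for policy in ext.value:
        # Each entry names a policy OID; qualifiers often carry a CPS URI.
        print("policy OID:", policy.policy_identifier.dotted_string)
        for qualifier in policy.policy_qualifiers or []:
            print("  qualifier:", qualifier)
```

A relying party can compare the printed OID against the policies it trusts and decide, as the article describes, what level of trust to place in the certificate.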
46736076
https://en.wikipedia.org/wiki/.top
.top
.top is a generic top-level domain, officially delegated in ICANN's New gTLD Program on August 4, 2014. The extension is managed and operated by the .top registry and has been open for registration by anyone since November 18, 2014. .top domains are often used for malware and phishing, and the TLD is included in the banned-TLD lists of some antimalware vendors such as Malwarebytes. .top is blocked by default by Snort rules. Development On June 20, 2011, ICANN officially announced that applications for new gTLDs would open in 2012. On April 11, 2012, the .top application was submitted online. On June 9, 2012, it appeared on the ICANN public list. On March 20, 2013, it passed initial evaluation. On March 20, 2014, the registry signed a contract with ICANN. On August 5, 2014, the domain entered the root zone under ICANN's New gTLD Program. On October 15, 2014, it entered its sunrise period. On November 18, 2014, .top domains became openly registrable; registration volume exceeded 10,000 on the first day. On April 24, 2015, .top was put on record with the Chinese national government department MIIT (Ministry of Industry and Information Technology). In September 2015, .top domains gained 250,000 new registrations within one week, pushing the registration volume to 530,000. In December 2015, .top registrations reached around 1,000,000. In January 2016, .top released its IDN domains, supporting Arabic, Chinese (traditional and simplified), French, German, Japanese, Russian and Spanish; availability can be checked via .top WHOIS. In December 2016, .top registrations passed 4,500,000. By June 2017, registrations had gone down to about 3,000,000. By October 2017, registrations had gone down to about 1,900,000. By January 2018, registrations had gone down to about 1,300,000. By March 2018, registrations had gone back up to about 2,100,000. References Top Computer-related introductions in 2014
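As a rough illustration of the TLD-level blocking mentioned above, the snippet below checks a hostname's top-level domain against a blocklist. It is a minimal Python sketch; the blocklist contents and function name are illustrative, not any vendor's actual implementation.

```python
# Minimal sketch of TLD-based blocking: reject hostnames whose
# top-level domain appears on a blocklist. Illustrative only.
BLOCKED_TLDS = {"top"}

def is_blocked(hostname: str) -> bool:
    """Return True if the hostname's top-level domain is blocklisted."""
    # Strip any trailing root dot, then take the last dot-separated label.
    tld = hostname.rstrip(".").rsplit(".", 1)[-1].lower()
    return tld in BLOCKED_TLDS

assert is_blocked("malicious-example.top")
assert not is_blocked("example.org")
```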
1886538
https://en.wikipedia.org/wiki/Apophysis%20%28software%29
Apophysis (software)
Apophysis is an open source fractal flame editor and renderer for Microsoft Windows and Macintosh. Apophysis has many features for creating and editing fractal flames, including an editor that allows one to edit the transforms directly by manipulating triangles; a mutations window, which applies random edits to the triangles; and an adjust window, which allows adjustment of the coloring and location of the image. It also provides a scripting language with direct access to most of the components of the fractal, which allows for effects such as the animations seen in Electric Sheep, which are also fractal flames. Users can export fractal flames to other fractal flame rendering programs, such as FLAM3. There is a separate version of Apophysis that supports 3D, and there are numerous clones, ports, and forks of it. History Scott Draves invented fractal flames and published an open source implementation written in C in the early 1990s. In 2001, Ronald Hordijk translated this code into Delphi and created a non-animated screensaver. In 2003 or 2004, Mark Townsend took Hordijk's code and added a graphical user interface to create Apophysis. It has since been improved and updated by Peter Sdobnov, Piotr Borys, and Ronald Hordijk. Since 2009, there has been a version of Apophysis called Apophysis 7X. Originally it aimed to provide support for modern Microsoft Windows operating systems such as Windows Vista and 7; strong feedback from Apophysis users encouraged the developer, Georg Kiehne, to provide updates that made 7X the most popular and advanced version of Apophysis so far. Technical details The user specifies a set of mathematical functions. Each function is a composition of an affine map and, usually, some non-linear map. This set of functions is called an iterated function system (IFS). Apophysis then generates the attractor of this set of functions by means of Monte Carlo simulation. In effect, Apophysis generates a probability measure, which is then colored according to some rule. Scripts Apophysis uses the Scripter Studio scripting library to let users write scripts which run and either create a new flame, edit the existing flames, or perform bigger tasks, such as rendering an entire batch of fractals. Plugins Apophysis supports a plugin API so that new variations can be independently developed and distributed. There are numerous plugins available from the various user communities. Sample images See also Fractal flame Fractal-generating software Fractal art Ultra Fractal Chaotica References External links Free graphics software Free software programmed in Delphi Windows-only free software Fractal software Pascal (programming language) software
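To make the Technical details section concrete, here is a minimal chaos-game sketch of sampling an IFS attractor by Monte Carlo iteration. It uses the classic Sierpinski triangle maps rather than a real fractal flame (a flame would compose each affine map with a non-linear variation, choose maps by weight, and accumulate color and density per pixel); all names in the sketch are illustrative.

```python
import random

# Three purely affine contractions whose attractor is the Sierpinski
# triangle. An Apophysis transform would follow each affine map with a
# non-linear "variation" and carry a color coordinate as well.
MAPS = [
    lambda x, y: (0.5 * x,        0.5 * y),         # contract toward (0, 0)
    lambda x, y: (0.5 * x + 0.5,  0.5 * y),         # contract toward (1, 0)
    lambda x, y: (0.5 * x + 0.25, 0.5 * y + 0.5),   # contract toward (0.5, 1)
]

def chaos_game(iterations=100_000, burn_in=20):
    """Sample points on the IFS attractor via random iteration."""
    x, y = random.random(), random.random()
    points = []
    for i in range(iterations):
        f = random.choice(MAPS)      # a flame weights this choice per map
        x, y = f(x, y)
        if i >= burn_in:             # discard early points before convergence
            points.append((x, y))
    return points

if __name__ == "__main__":
    pts = chaos_game(10_000)
    print(f"sampled {len(pts)} points on the attractor")
```

Plotting the returned points (or binning them into a histogram, as a flame renderer does) reveals the attractor; the density of hits per pixel is the "probability measure" the article mentions, which the renderer then colors.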
2235843
https://en.wikipedia.org/wiki/Blue%20Martini%20Software
Blue Martini Software
Blue Martini Software was a software manufacturer and professional services provider based in San Mateo, California that sold and supported e-commerce, contact center, relationship marketing, and clienteling applications to retailers and other consumer-facing companies. The company was privately held until July 2000, when it went public on the NASDAQ under the ticker BLUE. Blue Martini software applications are sold and supported by RedPrairie Corporation and are now known as the RedPrairie Commerce Suite. RedPrairie is a privately held supply chain management, workforce management and all-channel commerce software provider headquartered in Alpharetta, Georgia, with additional offices worldwide. Acquisition history: In March 2005, Blue Martini Software was acquired by Multi-Channel Holdings, Inc., a privately held Golden Gate Capital portfolio company which also owned Ecometry Corporation. In September 2006, Ecometry/Blue Martini Software and GERS Inc. merged to form Escalate Retail. In February 2011, Escalate Retail was acquired by RedPrairie Corporation. In November 2012, RedPrairie's parent New Mountain Capital acquired JDA Software, and following that acquisition Blue Martini has been supported by JDA Software. References External links RedPrairie Corporate website RedPrairie acquires Escalate Retail Ecometry and GERS Combine to Form Escalate Retail Blue Martini Software acquired by Golden Gate Capital Portfolio Company S-1 as filed with SEC American companies established in 1998 American companies disestablished in 2006 Computer companies established in 1998 Computer companies disestablished in 2006 Defunct software companies of the United States Defunct computer companies of the United States Companies based in San Mateo, California Defunct companies based in California
6115598
https://en.wikipedia.org/wiki/Clinton%20Foundation
Clinton Foundation
The Clinton Foundation (founded in 1997 as the William J. Clinton Presidential Foundation, and renamed in 2013 as the Bill, Hillary & Chelsea Clinton Foundation) is a nonprofit organization under section 501(c)(3) of the U.S. tax code. It was established by former president of the United States Bill Clinton with the stated mission to "strengthen the capacity of people in the United States and throughout the world to meet the challenges of global interdependence." Its offices are located in New York City and Little Rock, Arkansas. Through 2016, the foundation had raised an estimated $2 billion from U.S. corporations, foreign governments and corporations, political donors, and various other groups and individuals. The acceptance of funds from wealthy donors has been a source of controversy. The foundation "has won accolades from philanthropy experts and has drawn bipartisan support". Charitable grants are not a major focus of the Clinton Foundation, which instead uses most of its money to carry out its own humanitarian programs. This foundation is a public organization to which anyone may donate and is distinct from the Clinton Family Foundation, a private organization for personal Clinton family philanthropy. According to the Clinton Foundation's website, neither Bill Clinton nor his daughter, Chelsea Clinton (both are members of the governing board), draws any salary or receives any income from the foundation. When Hillary Clinton was a board member, she reportedly also received no income from the foundation. Beginning in 2015, the foundation was accused of wrongdoing, including a bribery and pay-to-play scheme, but multiple investigations through 2019 found no evidence of malfeasance. The New York Times reported in September 2020 that a federal prosecutor appointed by Attorney General Bill Barr to investigate the origins of the 2016 FBI Crossfire Hurricane investigation had also sought documents and interviews regarding how the FBI handled an investigation into the Clinton Foundation. History The origins of the foundation go back to 1997, when then-president Bill Clinton was focused mostly on fundraising for the future Clinton Presidential Center in Little Rock, Arkansas. He founded the William J. Clinton Foundation in 2001 following the completion of his presidency. Longtime Clinton advisor Bruce Lindsey became the CEO in 2004. Later, Lindsey moved from being CEO to being chair, largely for health reasons. Other Clinton hands who played an important early role included Doug Band and Ira Magaziner. Additional Clinton associates who have had senior positions at the foundation include John Podesta and Laura Graham. The foundation's success is spurred by Bill Clinton's worldwide fame and his ability to bring together corporate executives, celebrities, and government officials. Similarly, the foundation's areas of involvement have often corresponded to whatever interested Bill Clinton at the time. Preceding Barack Obama's 2009 nomination of Hillary Clinton as United States Secretary of State, Bill Clinton agreed to accept a number of conditions and restrictions regarding his ongoing activities and fundraising efforts for the Clinton Presidential Center and the Clinton Global Initiative. Accordingly, a list of donors was released in December 2008. By 2011, Chelsea Clinton was taking a dominant role in the foundation and had a seat on its board.
To raise money for the foundation, she gave paid speeches, such as a 2014 address at the University of Missouri–Kansas City for the opening of the Starr Women's Hall of Fame, for which she was paid $65,000. In 2013, Hillary Clinton joined the foundation following her tenure as Secretary of State. She planned to focus her work on issues regarding women and children, as well as economic development. Accordingly, at that point, it was renamed the "Bill, Hillary & Chelsea Clinton Foundation". Extra attention was paid to the foundation due to the 2016 United States presidential election. In July 2013, Eric Braverman was named CEO of the foundation. He is a friend and former colleague of Chelsea Clinton from McKinsey & Company. At the same time, Chelsea Clinton was named vice chair of the foundation's board. The foundation was also in the midst of a move to two floors of the Time-Life Building in Midtown Manhattan. Chelsea Clinton initiated an outside review of the organization, conducted by the firm of Simpson Thacher & Bartlett. Its conclusions were made public in mid-2013. The main focus was to determine how the foundation could achieve firm financial footing that was not dependent upon the former president's fundraising abilities, how it could operate more like a permanent entity than a start-up organization, and thus how it could survive and prosper beyond Bill Clinton's lifetime. Dennis Cheng, a former Hillary Clinton campaign official and State Department deputy chief, was named to oversee a $250 million endowment drive. The review also found the management and structure of the foundation needed improvements, including an increase in the size of its board of directors, which would have more direct involvement in planning and budget activities. Additionally, the review said that all employees needed to understand the foundation's conflict of interest policies and that expense reports needed a more formal review process. In January 2015, Braverman announced his resignation. Politico reported that the move stemmed "partly from a power struggle inside the foundation between and among the coterie of Clinton loyalists who have surrounded the former president for decades and who helped start and run the foundation." He was succeeded, at first in an acting capacity, by former deputy assistant secretary Maura Pally. On February 18, 2015, The Washington Post reported that "the foundation has won accolades from philanthropy experts and has drawn bipartisan support, with members of the George W. Bush administration often participating in its programs." In March 2015, Donna Shalala, former Secretary of Health and Human Services in the Clinton administration, was selected to run the Clinton Foundation. She left in April 2017. In August 2016, The Boston Globe's editorial board suggested that the Clinton Foundation cease accepting donations. The editorial board offered praise for the foundation's work but added that "as long as either of the Clintons are in public office, or actively seeking it, they should not operate a charity, too" because it represents a conflict of interest and a political distraction. In 2016, Reuters reported that the Clinton Foundation suspected it had been the target of a cyber security breach. As a consequence, Clinton Foundation officials retained the security firm FireEye to evaluate its data systems.
The cyber security breach has been described as sharing similarities with cyberattacks that targeted other institutions, such as the Democratic National Committee. In October 2016, The Wall Street Journal reported that four FBI field offices (in New York, Los Angeles, Washington, and Little Rock) had been collecting information about the Clinton Foundation to determine whether "there was evidence of financial crimes or influence-peddling". In a reported separate investigation, the Washington field office was investigating Terry McAuliffe before he became a board member of the Clinton Foundation. CNN reported in January 2018 that the FBI was investigating allegations of corruption at the Clinton Foundation in Arkansas. Sources said that federal prosecutors were checking whether foundation donors had been improperly promised policy favors or special access to Hillary Clinton during her tenure as secretary of state in return for donations, and whether tax-exempt funds had been misused by the foundation's leadership. The Washington Post reported in January 2020 that an additional Justice Department investigation into the matter, initiated after Donald Trump took office in 2017, was winding down after finding nothing worth pursuing. Board of directors As of January 2018, the board members are:
Bill Clinton, chairman
Chelsea Clinton, vice chair
Frank Giustra
Rolando Gonzalez-Bunster
Eric Goosby
Hadeel Ibrahim
Lisa Jackson
Bruce Lindsey, counselor to the chair
Cheryl Mills
Donna Shalala
Programs and initiatives Clinton Health Access Initiative (CHAI) As of January 1, 2010, the Clinton HIV/AIDS Initiative, an initiative of the Clinton Foundation, became a separate nonprofit organization called the Clinton Health Access Initiative (CHAI). Organizations such as the Clinton Foundation continue to supply anti-malarial drugs to Africa and other affected areas; according to director Inder Singh, in 2011 more than 12 million individuals would be supplied with subsidized anti-malarial drugs. In May 2007, CHAI and UNITAID announced agreements to help middle-income and low-income countries save money on second-line drugs. The partnership also reduced the price of a once-daily first-line treatment to less than $1 per day. CHAI was spun off into a separate organization in 2010; Ira Magaziner became its CEO (he had been a key figure in the Clinton health care plan of 1993). Chelsea Clinton joined its board in 2011, as did Tachi Yamada, former President of the Global Health Program at the Bill & Melinda Gates Foundation. Clinton Global Initiative (CGI) and CGI U The Clinton Global Initiative (CGI) was founded in 2005 by Bill Clinton. Doug Band, counselor to Bill Clinton, was integral to its formation. Clinton has credited Band with being the originator of CGI and has noted that "Doug had the idea to do this." Band left his paid position at CGI in 2010, preferring to emphasize his Teneo business and family pursuits, but remains on the CGI advisory board. The overlap between CGI and Teneo, of which Bill Clinton was a paid advisor, drew criticism. According to his attorneys during 2007 plea negotiations on sex offense charges, financier Jeffrey Epstein also formed "part of the original group that conceived the Clinton Global Initiative", though his name was not mentioned in any of the organization's founding documents. In 2007, Bill Clinton started CGI U, which expanded the model of CGI to students, universities, and national youth organizations.
CGI U has been held at Tulane University, the University of Texas at Austin, the University of Miami, the University of California, San Diego, The George Washington University, Washington University in St. Louis, Arizona State University, and the University of California, Berkeley. Panelists and speakers have included Jon Stewart, Madeleine Albright, Vandana Shiva, Bill and Chelsea Clinton, Stephen Colbert, Jack Dorsey, Greg Stanton, U.S. Rep. Gabby Giffords, Shane Battier, Salman Khan (founder of Khan Academy), and U.S. Rep. John Lewis. In September 2016, it was announced that the Initiative would be wound down and discontinued and that 74 employees would be let go at the end of the year. In January 2017, it was announced that another 22 employees would be let go by April 15, 2017, and that CGI University would be continued. Clinton Global Citizen Awards The Clinton Global Citizen Awards are a set of awards which have been given by the Clinton Global Initiative every year since 2007. The awards are given to individuals who, in the opinion of the Clinton Foundation, are "outstanding individuals who exemplify global citizenship through their vision and leadership". Past recipients of the award include Mexican businessman and philanthropist Carlos Slim, Irish billionaire Denis O'Brien, Moroccan entrepreneur Mohammad Abbad Andaloussi, Rwandan President Paul Kagame, Afghan women's rights activist Suraya Pakzad, Dominican Republic President Leonel Fernández, and Pakistani labor rights activist Syeda Ghulam Fatima. Clinton Climate Initiative (CCI) In August 2006, Bill Clinton started a program to fight climate change, the Clinton Foundation's Climate Initiative (CCI). The CCI directly runs various programs to prevent deforestation and to rehabilitate forests and other landscapes worldwide, develop clean energy, and help island nations threatened by rising ocean levels. On August 1, 2006, the foundation entered into a partnership with the Large Cities Climate Leadership Group, agreeing to provide resources to allow the participating cities to enter into an energy-saving product purchasing consortium and to provide technical and communications support. In May 2007, CCI announced its first project, which would help some large cities cut greenhouse gas emissions by facilitating the retrofitting of existing buildings. Five large banks committed $1 billion each to help cities and building owners make energy-saving improvements aimed at lowering energy use and energy costs. At the 2007 Clinton Global Initiative, Bill Clinton announced the 1Sky campaign to accelerate bold federal policy on global warming. The 1Sky campaign supports at least an 80% reduction in climate pollution levels by 2050. On May 19, 2009, CCI announced the global Climate Positive Development Program, under which it would work with the U.S. Green Building Council to promote "climate positive" city growth. Norway and Germany are among the countries co-financing projects with the CCI in numerous developing and third-world countries. Clinton Development Initiative (CDI) The Clinton Development Initiative, originally the Clinton Hunter Development Initiative, was formed in 2006 as a partnership with Scottish philanthropist Sir Tom Hunter's Hunter Foundation to target the root causes of poverty in Africa and promote sustainable economic growth.
The Alliance for a Healthier Generation The Alliance for a Healthier Generation is a partnership between the Clinton Foundation and the American Heart Association that was working to end the childhood obesity epidemic in the United States by 2010. The Robert Wood Johnson Foundation, which provided an initial $8 million to start the Healthy Schools Program, awarded a $20 million grant to expand the program to over 8,000 schools in states with the highest obesity rates. At the industry level, the Alliance struck agreements with major food and beverage manufacturers to provide kids with nutritional options, and established nutrition guidelines for school vending machines, stores and cafeterias to promote healthy eating. Some of the companies involved in these efforts are Coca-Cola, Cadbury plc, Campbell Soup Company, Groupe Danone, Kraft Foods, Mars and PepsiCo. Clinton Giustra Sustainable Growth Initiative Established in 2007 with Canadian mining executive Frank Giustra, founder of the petroleum company Pacific Rubiales (renamed Pacific Exploration & Production in 2015), CGSGI describes itself as "pioneering an innovative approach to poverty alleviation." Giustra's involvement with the Clinton Foundation has been criticized by the International Business Times, The Washington Post, and the American Media Institute because it was accompanied by a sudden reversal in Hillary Clinton's position, while Secretary of State, concerning the United States–Colombia Free Trade Agreement, an agreement which she had previously opposed "as bad for labor rights." Clinton Health Matters Initiative (CHMI) In November 2012, Bill Clinton announced the launch of the Clinton Health Matters Initiative (CHMI). CHMI is a national initiative, building on the Clinton Foundation's work on global health and childhood obesity, that works to improve the health and well-being of people across the United States by activating individuals, communities, and organizations to make meaningful contributions to the health of others. CHMI holds an annual Health Matters conference every January in the Coachella Valley. Disaster relief The foundation has funded extensive disaster relief programs following the 2004 Indian Ocean earthquake and Hurricane Katrina in 2005. Shortly after Hurricane Katrina hit, President George W. Bush asked former Presidents George H. W. Bush and Bill Clinton to raise funds to help rebuild the Gulf Coast region. The two Presidents, having worked together to assist victims of the Indian Ocean tsunami, established the Bush-Clinton Katrina Fund to identify and meet unmet needs in the region, foster economic opportunity, and improve the quality of life of those affected. In the first month after the hurricane, the Fund collected over 42,000 online donations alone; approximately $128.4 million has been received to date from all 50 states, along with $30.9 million from foreign countries. Both the foundation and the Clintons personally have been involved in Haiti before and after the 2010 Haiti earthquake. Bill Clinton was named the head of the Interim Haiti Recovery Commission (IHRC) in 2010 after serving as UN special envoy to Haiti in the immediate aftermath of the disaster. The Clinton Foundation itself raised $30 million and played an important part in the creation of the Caracol Industrial Park. The IHRC mandate was removed by the Haitian legislature in 2011.
No Ceilings project In 2013, Hillary Clinton established a partnership between the foundation and the Bill and Melinda Gates Foundation to gather and study data on the progress of women and girls around the world since the United Nations Fourth World Conference on Women in Beijing in 1995. This is called "No Ceilings: The Full Participation Project". The project released a report in March 2015. Financials The Clinton Foundation relies on donations from various groups and individuals; donors such as the Bill & Melinda Gates Foundation have contributed over $25 million over the years. According to the foundation's financial reports, donations have varied from one year to the next. Charity review sources In March 2015, the charity watchdog group Charity Navigator added the Clinton Foundation to a watch list (a designation meant to warn donors that questions have been raised about an entity's practices), after several news organizations raised questions over donations from corporations and foreign governments. It removed the foundation from its watch list in late December of that year. In September 2016, Charity Navigator gave the foundation its highest possible rating, four out of four stars, after its customary review of the foundation's financial records and tax statements. A different charity monitor, CharityWatch, said that 88% of the foundation's money goes toward its charitable mission and gave the foundation an A rating for 2016. In 2015, based on revenue of $223 million and an expense ratio of 12%, the foundation's non-program spending was in excess of $26 million. Private philanthropy The Clinton Foundation is a public organization to which anyone may donate. Due to their similar names, the public foundation has sometimes been confused with the Clinton Family Foundation, which is reserved for the Clintons' private philanthropy, and the two foundations have sometimes been conflated by news sources. The significantly smaller Clinton Family Foundation is a traditional private foundation that serves as the vehicle for the Clintons' personal charitable giving. Headquartered in Chappaqua, New York, it received nearly all of the approximately $14 million the Clintons gave to charity from 2007 to 2013. Controversies Transparency Around 2007, the Clinton Foundation was criticized for a lack of transparency. Although U.S. law did not require charities, including presidential foundations, to disclose the identities of their contributors, critics said that the names of donors should be disclosed because Hillary Clinton was running to be the Democratic nominee for President of the United States. Commentator Matthew Yglesias opined in a Los Angeles Times op-ed that the Clintons should make public the names of foundation donors to avoid any appearance of impropriety. A lengthy donor list was then released by the foundation in December 2008, which included several politically sensitive donors, such as the Kingdom of Saudi Arabia and Blackwater Worldwide. The foundation stated that the disclosures would ensure that "not even the appearance of a conflict of interest" would exist once Hillary Clinton was Secretary of State. The foundation has been criticized for receiving donations from Middle Eastern countries seen as oppressive toward women (through practices such as stoning for adultery, driving bans, and male guardianship requirements). This particularly included Saudi Arabia, which donated between $10 million and $25 million.
Apart from Middle Eastern countries like the United Arab Emirates and Oman, other foreign government donations came from Australia, Germany, and a Canadian government agency. The foundation accepted these donations even though Hillary Clinton's 2016 presidential campaign platform pledged to break down barriers that held women back. In November 2016, Reuters reported that "The Clinton Foundation has confirmed it accepted a $1 million gift from Qatar while Hillary Clinton was U.S. Secretary of State without informing the State Department, even though she had promised to let the agency review new or significantly increased support from foreign governments." Washington Post columnist Jennifer Rubin opined that the Qatari gift "raised ethical questions" because of the nation's support for Hamas. The ethics agreement between the State Department and the Clinton Foundation that had been put into force at the beginning of Hillary Clinton's tenure as Secretary of State in 2009 came under scrutiny from the news media during February 2015, as polls showed her the likely 2016 Democratic nominee for president. The Wall Street Journal reported that the Clinton Foundation had resumed accepting donations from foreign governments once Secretary Clinton's tenure had ended. Contributions from foreign donors, who are prohibited by law from contributing to political candidates in the U.S., constitute a major portion of the foundation's income. An investigation by The Washington Post of 2014 donations showed that there was "substantial overlap between the Clinton political machinery and the foundation". The investigation revealed that almost half of the major donors who had backed Ready for Hillary, a group which supported her 2016 presidential bid, had given at least $10,000 to the foundation, either personally or through foundations or companies they run. The Clinton Foundation's chief communications officer, Craig Minassian, explained that it is a "false choice to suggest that people who may be interested in supporting political causes wouldn't also support philanthropic work." A subsequent Washington Post inquiry into donations by foreign governments to the Clinton Foundation during the Secretary's tenure found six cases where such governments continued making donations at the same level they had before Clinton became Secretary, as envisioned under the agreement, and one instance of a new donation, $500,000 from Algeria for earthquake relief in Haiti, that was outside the bounds of the continuation provision and should have received a special ethics review, but did not. Foundation officials said that if the former Secretary decided to run for president in 2016, they would again consider what steps to take in reference to foreign donations. In general, however, they stressed: "As with other global charities, we rely on the support of individuals, organizations, corporations and governments who have the shared goal of addressing critical global challenges in a meaningful way. When anyone contributes to the Clinton Foundation, it goes towards foundation programs that help save lives." State Department spokesperson Jen Psaki attested that the foundation's commitment to the ethics agreement in question "has been over and above the letter of the law". In August 2016, after Clinton secured the Democratic nomination, the Clinton Foundation announced that it would stop accepting foreign donations if she were elected.
In March 2015, Reuters reported that the Clinton Health Access Initiative had failed to publish all of its donors, and to let the State Department review all of its donations from foreign governments, after it was spun off from the Clinton Foundation in 2010. In April 2015, The New York Times reported that when Hillary Clinton was Secretary of State, the State Department had approved transactions that allowed Russian state-owned corporation Rosatom to take a majority stake in Uranium One, whose chairman had donated to the Clinton Foundation. The State Department "was one of nine government agencies, not to mention independent federal and state nuclear regulators, that had to sign off on the deal." FactCheck.org concluded there is "no evidence" that the donations influenced Clinton's official actions or that she was involved in the State Department's decision to approve the deal, and PolitiFact concluded that any "suggestion of a quid pro quo is unsubstantiated". 2015 State Department subpoena In February 2016, The Washington Post reported that the United States Department of State had issued a subpoena to the foundation in the fall of 2015. According to the report, the subpoena focused on "documents about the charity's projects that may have required approval from federal government during Hillary Clinton's term as secretary of state" and "also asked for records related to Huma Abedin, longtime Clinton aide who for six months in 2012 was employed simultaneously by the State Department, the foundation, Clinton's personal office, and a private consulting firm with ties to the Clintons." Australian government donations Donations totalling tens of millions of dollars from successive Australian and New Zealand governments to the Clinton Foundation were the subject of criticism from a number of groups, including the Taxpayers' Union of New Zealand, for a perceived lack of accountability and perceived conflicts of interest; some of the donations were made directly and some through AusAID. In February 2006, then foreign minister Alexander Downer and former President Clinton jointly signed a Memorandum of Understanding that gave more than million to the Clinton Foundation across four years for a project to provide screening and drug treatment to AIDS patients in Asia. The donation was later made through an affiliate of the charity known as the Clinton Health Access Initiative (CHAI). The Australian government ceased funding CHAI in 2016. In 2017, the Specialist Health Service (SHS), in a report commissioned by the Department of Foreign Affairs and Trade (DFAT), noted: "Previously, there appears to have been an over-reliance on the Clinton Health Access Initiative (CHAI) for facilitating market access". In 2011, a pledge of million was made by the Australian Julia Gillard government to the Global Partnership for Education, which in 2014 joined the Clinton Global Initiative. Julia Gillard was made a member of the board of the Global Partnership for Education in 2014 after losing the Australian federal election. According to DFAT, Australia contributed million to the Global Partnership for Education between 2007 and 2014, including million in replenishment between 2018 and 2020. A DFAT spokesperson responded to questions from News.com by stating that all funding is used "solely for agreed development projects" and that Clinton charities have "a proven track record" in helping developing countries.
Ethics controversies and use of taxpayer funds According to the hacked Podesta emails, Doug Band, an employee of the Clinton Foundation, accused Chelsea Clinton's husband Marc Mezvinsky of being involved in conflicts of interest. According to Band, Mezvinsky used the foundation to raise money for his hedge fund. Band also said that he could name 500 different conflicts of interest involving the foundation, including some that involved Bill Clinton. Under the Former Presidents Act, Bill Clinton used taxpayer funds to supplement the pay of aides at the Clinton Foundation and to pay for IT equipment. Clinton drew $16 million under the act, more than any other living president. Cash for access In 2011, Raj Fernando, who gave between $100,000 and $250,000 to the Clinton Foundation, was appointed to the International Security Advisory Board within the State Department despite being unqualified. Fernando's appointment came at the request of Clinton aide Cheryl Mills and Hillary Clinton. Fernando resigned from the position shortly after an inquiry was made by ABC. In 2009, foundation aide Doug Band emailed Huma Abedin requesting a meeting with Hillary Clinton for Salman bin Hamad Al Khalifa, who had donated $32 million to the Clinton Foundation. Two days later, the meeting was arranged. References External links Clinton Global Initiative website Clinton Presidential Center website Bill Clinton Development charities based in the United States Hillary Clinton International charities
1063524
https://en.wikipedia.org/wiki/Open%20University%20Malaysia
Open University Malaysia
Open University Malaysia, abbreviated as OUM, is the seventh Malaysian private university. It is owned by Multimedia Technology Enhancement Operations (METEOR) Sdn. Bhd., a consortium of 11 Malaysian public universities, and leverages the quality, prestige and capabilities of its consortium members. The main campus is at Menara OUM, Kelana Centre Point, Kelana Jaya. In addition, there are more than 30 learning centres throughout Malaysia, of which 10 are regional learning centres. As the first open university in the country, OUM initially opened to 753 learners in 2001. A decade later, OUM had over 100,000 students in more than 50 academic programmes. The MQA Rating System for Higher Education Institutions in Malaysia for 2011 (SETARA'11) rated OUM as a Tier 5 (Excellent) institution. Chancellors The vice-chancellor of OUM is Professor Dato' Dr Mansor Fadzil. The first chancellor was the late Tun Endon Mahmood Ambak (wife of the fifth Prime Minister, Tun Abdullah Badawi), who was appointed on 16 December 2004. On 8 December 2007, Tun Jeanne Abdullah was appointed as the new chancellor of OUM; the pro-chancellor is Tan Sri Dato' Azman Hashim, who is also the executive chairman of Arab-Malaysian Corporation Berhad. e-Learning methodology and tools Open Entry Open Entry refers to non-restrictive entry requirements for a degree programme, applicable to adults who possess learning experience that can be assessed and matched against the learning outcomes of an academic course. Blended learning methodology Face-to-face learning: Tutorials are conducted in which students physically meet their tutors and discuss their subjects and assignments. e-Learning: Students participate in online forums through a learning management system, discussing subject matter and assignments with their tutors and peers. Self-managed learning modules: Students do not attend tutorials but still sit their tests; they learn through the learning management system and their own modules. m-Learning: Learning materials are designed in downloadable formats that can be accessed using a desktop or laptop computer; students may then transfer the content to a mobile phone equipped with the necessary features. OUM has a strong network of learning centres nationwide, in major cities and towns from Peninsular Malaysia to Sabah and Sarawak. OUM has more than 30 learning centres fully equipped with tutorial rooms, computer laboratories, libraries and Internet facilities. Modules are used by students, tutors and subject-matter experts (SMEs). The modules are written by academics from public and private universities in Malaysia. The Tan Sri Dr. Abdullah Sanusi Digital Library holds more than 30,000 printed books at the main campus and learning centres nationwide. As for the digital collection, the online databases comprise more than 82,000 e-books and 32,000 e-journal titles; other electronic collections include electronic theses, newspaper articles and legal acts. iRadio OUM is Malaysia's first Internet radio station to base its segments on modules offered by a university. iRadio OUM was established in April 2007.
Boasting shows mainly aimed at adding value to OUM learners' open and distance education, iRadio OUM bridges education and information with entertainment, in the hope of providing an alternative outlet for those seeking entertainment and education on the World Wide Web. Accreditation The following programmes have been fully accredited by the Malaysian Qualifications Agency (MQA).
Diploma in Information Technology
Diploma in Management
Diploma in Civil Engineering
Diploma in Electrical Engineering
Diploma in Mechanical Engineering
Diploma in Human Resource Management
Diploma in Technology Management
Diploma in Early Childhood Education
Diploma in Accounting
Bachelor of Education (Educational Administration) with Honours
Bachelor of Information Technology with E-Commerce with Honours
Bachelor of Information Technology with Honours
Bachelor of Information Technology with Software Engineering with Honours
Bachelor of Management with Honours
Bachelor of Marketing with Honours
Bachelor of Business Administration with Honours
Bachelor of Information Technology & Management with Honours
Bachelor of Information Technology with Accounting with Honours
Bachelor of Information Technology in Network Computing with Honours
Bachelor of Multimedia Communication with Honours
Bachelor of Education (Civil Engineering) with Honours
Bachelor of Education (Mechanical Engineering) with Honours
Bachelor of Education (Electrical Engineering) with Honours
Bachelor of Education (TESL) with Honours
Bachelor of Education (Science) with Honours
Bachelor of Education (Mathematics) with Honours
Bachelor of Human Resource Management with Honours
Bachelor of Mathematics with Information Technology with Honours
Bachelor of Education (Pre-School Education) with Honours
Bachelor of Education (Primary Education) with Honours
Bachelor of Education (Arabic Language) with Honours
Bachelor of Education (Chinese Language) with Honours
Bachelor of Education (Malay Language) with Honours
Bachelor of Education (Tamil Language) with Honours
Bachelor of Education (Islamic Education) with Honours
Bachelor of Education (Physical Education) with Honours
Bachelor of Education (Special Education) with Honours
Bachelor of Education (Music Education) with Honours
Bachelor of Education (Social Studies) with Honours
Bachelor of Education (Visual Art Education) with Honours
Bachelor of Nursing Science with Honours
Bachelor of Technology Management with Honours
Bachelor of Mathematics and Management with Honours
Bachelor of Accounting with Honours
Bachelor of Sports Science with Honours
Bachelor of Tourism Management with Honours
Bachelor of Hospitality Management with Honours
Bachelor of Occupational Safety and Health with Honours
Bachelor of Psychology with Honours
Bachelor of Islamic Studies (Islamic Management) with Honours
Bachelor of Sciences in Project and Facility Management with Honours
Master of Counselling (MQA/FA0155)
Master of Science
Master of Information Science (Competitive Intelligence)
Master of Science (Engineering)
Master of Information Technology
Master of Management
Master of Environmental Science (Integrated Water Resources Management)
Master of Education
Master of Multimedia Communication
Master of Business Administration
Master of Science (Business Administration)
Master of Instructional Design and Technology
Master of Software Engineering
Master of Project Management
Master of Human Resource Management
Master of Islamic Studies
Master of Nursing
Doctor of Philosophy (Business Administration)
Doctor of Philosophy (Information Technology)
Doctor of Philosophy (Engineering)
Doctor of Philosophy (Education)
Doctor of Philosophy (Science)
Postgraduate Diploma in Teaching
Faculties
Faculty of Business & Management (OUM Business School)
Faculty of Education & Languages
Faculty of Applied and Social Sciences
Faculty of Information Technology & Multimedia Communication
Faculty of Science and Technology
Centre for Graduate Studies
Academic Support Centre
Centre for Instructional Design & Technology
Institute of Quality, Research & Innovation
Centre for Student Management
Learner Service Centre
Institute for Teaching & Learning Advancement
Notable alumni
Soo Wincci - PhD in Business Administration; Miss World Malaysia 2008, international recording artiste, actress, composer, host and entrepreneur
Vanida Imran - Bachelor of Arts (English Studies) with Honours; Miss World Malaysia 1993, actress, Nona host
Wardina Safiyyah - Bachelor of Psychology with Honours; actress, model, TV host
Winson Voon - Master of Business Administration; celebrity
Daniel Lee Chee Hun - Master of Business Administration; singer, second-season Malaysian Idol winner
Brian Chen (Abang Brian) - Master of Education; MasterChef Celebrity Malaysia and radio DJ
Dato Mohd Faizal bin Hj. Mohd Hassim - Founder and CEO of HRSB Holdings
Foreign students OUM's international students have come from Ghana, Singapore, Indonesia, Canada, Pakistan, Bangladesh, Sri Lanka, the Maldives, Bahrain, Libya, Saudi Arabia, Qatar, Somalia, Yemen and India. References External links Official website of Open University Malaysia Article about Open University Malaysia UNESCO Asia Pacific Open and Distance Learning Knowledge Base Malaysian Ministry Of Higher Education OUM Sabah Learning Centre MQA SETARA 11 (Rating System for Malaysian Higher Education Institutions 2011) See also Malaysia Education Distance education Lifelong learning Universities and colleges in Kuala Lumpur Open universities Educational institutions established in 2000 2000 establishments in Malaysia Distance education institutions based in Malaysia Business schools in Malaysia Information technology schools in Malaysia Private universities and colleges in Malaysia
1976821
https://en.wikipedia.org/wiki/Skype%20for%20Business%20Server
Skype for Business Server
Skype for Business Server (formerly Microsoft Office Communications Server and Microsoft Lync Server) is real-time communications server software that provides the infrastructure for enterprise instant messaging, presence, VoIP, ad hoc and structured conferences (audio, video and web conferencing) and PSTN connectivity through a third-party gateway or SIP trunk. These features are available within an organization, between organizations, and with external users on the public internet or standard phones (on the PSTN as well as via SIP trunking). Features One basic use of Skype for Business Server is instant messaging (IM) and presence within a single organization. This includes support for rich presence information, file transfer and voice and video communication. Skype for Business Server uses Interactive Connectivity Establishment for NAT traversal and TLS encryption to enable secure voice and video both inside and outside the corporate network. Skype for Business Server also supports remote users, both corporate users on the Internet (e.g. mobile or home workers) and users in partner companies. Skype for Business supports identity federation, enabling interoperability with other corporate IM networks. Federation can be configured either manually (where each partner manually configures the relevant edge servers in the other organization) or automatically (using the appropriate SRV records in the DNS). Microsoft Skype for Business Server uses Session Initiation Protocol (SIP) for signaling, along with the SIMPLE extensions to SIP for IM and presence. Media is transferred using RTP and SRTP. The Live Meeting client uses the Persistent Shared Object Model (PSOM) to download meeting content. The communicator client also uses HTTPS to connect with the web components server to download address books and expand distribution lists. By default, supported combinations include encrypted communications using SIP over TLS and SRTP as well as unencrypted SIP over TCP and RTP. Microsoft has published details of supported configurations for qualified vendors through the Unified Communications Open Interoperability Program (UCOIP). IM is only one portion of the Skype for Business suite. The other major components are VoIP telephony and video conferencing through the desktop communicator client. Remote access is possible using the desktop, mobile and web clients. Several third parties have incorporated Skype for Business functionality into existing platforms; HP, for example, has implemented it on its Halo video conferencing platform. History When Microsoft Office Live Communications Server was originally launched on 29 December 2003, it replaced the Exchange Instant Messenger Service that had been included in Exchange 2000 but removed from the Exchange 2003 feature set. Holders of Exchange 2000 licenses which included Software Assurance were entitled to receive Live Communications Server as an upgrade, along with Exchange 2003; however, Live Communications Server Client Access Licenses were purchased as normal for new users. OCS R2 was announced at VoiceCon in Amsterdam in October 2008, a year after the release of Office Communications Server 2007. Microsoft released Microsoft Office Communications Server 2007 R2 in February 2009.
The R2 release added the following features:
Dial-in audioconferencing
Desktop sharing
Persistent Group Chat (only available on Windows OS clients)
Attendant console and delegation
Session Initiation Protocol trunking
Mobility and single-number reach
Microsoft Lync Server 2010 reached general availability in November 2010. Microsoft Lync 2013 was released to manufacturing in October 2012, with SP1 following in March 2014. In 2015, the new version of Lync became Skype for Business, with a new client experience, a new server release and updates to the service in Office 365. Microsoft has stated that general availability for Skype for Business Server 2019 is targeted for the end of 2018. Versions
2018 - Skype for Business Server 2019 (due end of 2018)
2015 - Skype for Business Server 2015
2012 - Lync Server 2013 (RTM 11 October 2012)
2010 - Lync Server 2010
2009 - Office Communications Server 2007 R2
2007 - Office Communications Server 2007
2006 - Live Communications Server 2005 with SP1
2005 - Live Communications Server 2005, codenamed Vienna
2003 - Live Communications Server 2003
Client software and devices Microsoft Lync is the primary client application released with Lync Server. This client is used for IM, presence, voice and video calls, desktop sharing, file transfer and ad hoc conferences. With Lync 2013, a "Lync Light Client" with fewer features was also announced. Microsoft also ships the Microsoft Attendant Console, a version of the Lync client oriented towards receptionists, delegates, secretaries and others who handle a large volume of inbound calls. Persistent Group Chat functionality (introduced with Lync Server 2010) is only supported on the Windows OS client at this time; it requires one or more additional servers for processing group chat transactions. Other client software and devices include: Lync Mobile is a mobile edition of the Lync Server 2010 client that offers similar functionality, including voice calls via the GSM network, instant messaging, presence and single-number reachability. Lync Mobile clients are available for the iPhone, iPad, Android, and Windows Phone 7 and 8. New releases of Lync Mobile 2013 (Wave 2) are anticipated, bringing features such as collaboration and voice and video over IP. Microsoft RoundTable is an audio and video conferencing device that provides a 360-degree view of the conference room and tracks the various speakers; this device is now produced and sold by Polycom under the product name CX5000. The Skype for Business client is supported by Lync Server 2013, as well as Skype for Business Server. The documentation of Lync 2013 contains references to Lync Room Edition devices, which are anticipated to provide a close-to-immersive experience. LG-Nortel and Polycom also make IP phones in a traditional phone form factor that run an embedded edition of Office Communicator 2007; Microsoft also refers to these physical phones as "Tanjay" phones. IP desk phones "Optimized for Lync": powered by Lync Phone Edition, these phones offer full support for PBX functionality, access to calendar and contacts, rich conferencing, extended functionality when connected to a PC, and integrated security and manageability. Built from the ground up for Lync, these phones come in different models designed to meet specific business needs, including a rich information-worker experience, a basic desk phone, a common-area phone, and a conference-room phone. Aastra and other vendors offer IP desk phones optimized for Microsoft Lync.
Damaka has Lync clients for Android, iOS (iPhone/iPad), BlackBerry, Symbian, Windows 8 and Mac OS X, providing chat, voice, file transfer, video, and desktop sharing functions. Fisil has Lync clients for Linux, Android, iPhone and iPad, and makes the only available supported Linux client for Lync; a Pidgin-based workaround also exists, but it has important limitations. Compliance Lync Server also has the capability to log and archive all instant message traffic passing through the server and to create Call Detail Records for conferences and voice. These features can help many organizations comply with legal requirements. The Archiving server is not an overall end-to-end compliance solution, as archiving requires installing the Archiving Server and configuring front-end servers accordingly. Public IM connectivity (PIC) Lync Server also enables organizations to interoperate with four external IM services: AOL Instant Messenger, Microsoft Messenger service, Yahoo! Messenger, and Google Talk. PIC was first introduced with Service Pack 1 for Live Communications Server 2005. It is licensed separately for Yahoo, but is free for AOL and the Messenger service for customers with Software Assurance. Microsoft announced that, effective 30 June 2014, it would no longer support PIC connectivity to AOL/AIM. Third-party software support SIPE plugin The third-party SIPE plugin enables third-party clients such as Pidgin, Adium and Miranda IM, as well as clients using the Telepathy framework, to connect to MS Lync Servers with some limitations (audio but no SRTP; no video) via the extended version of SIP/SIMPLE. XMPP Lync Server has an XMPP gateway server to federate with external XMPP servers. With Lync Server 2013, XMPP support is natively part of the product. The ejabberd XMPP server has a bridge that enables federation with OCS servers without gateways (transports). Competition Competitors to Lync Server include:
Alcatel-Lucent Enterprise OpenTouch Conversation Platform
3CX Phone System; 3CX Phone System for Windows is a software-based IP PBX
Alceo's BCS Communicator
Asterisk (PBX) Platform - SIP, ISDN, IAX, SMS, open source telephone system
AT&T UC and SIP Services
Avaya Aura (tm) Presence Services (with Messaging) and one-X software
Bopup Communication Server; based on a private and secure IM protocol
Cisco's Unified Communications Manager IM & Presence (on-premises) or WebEx Connect Jabber Service (Cisco cloud)
ejabberd
Elastix; Elastix PBX, VoIP email, IM, faxing and collaboration functionality
openUC Enterprise
IBM's Lotus Sametime
iChat Server (see Mac OS X Server)
Jabber XCP (from Jabber, Inc., not to be confused with the IETF open standard XMPP)
NEC's UCB and UCE
Mitel MiCollab
Openfire
Prosody
ShoreTel
Siemens' OpenScape
sipXecs
Sun Java System Instant Messaging (see Sun Java System Communications Suite)
Swyx
Tigase
TrueConf
Vertical Communications
Algoria; TWS
In instant messaging, the free public instant messaging networks (Google, Live Messenger, Yahoo and AOL) are widely used and represent a degree of competition. There have been attempts by other vendors to provide solutions, such as Yahoo!'s Enterprise Instant Messenger; however, these attempts have been largely unsuccessful.
An ICQ corporate client and server option once existed, but it is no longer supported or developed. Products such as Cisco Unified Presence Server (version 6.0.2+) support federation with Microsoft Office Communications Server 2007 to provide presence of Cisco IP phones and remote call control of the IP phone from the Microsoft Office Communicator client. The Siemens OpenScape solution offers federation with Office Communicator, as well as integration into it, allowing the standard functionality of the Office communication suite to be used together with the SIP-based voice functionality of the Siemens platform. The Asterisk telephone platform supports SIP, IAX, and ISDN connections; most telephones that support these protocols may be used with Asterisk, including software phone clients. See also Related products Survivable branch appliance Similar products LG-Nortel IP Phone 8540 Lists List of Microsoft–Nortel Innovative Communications Alliance products Microsoft Servers References External links Lync Dev Center Microsoft Office servers Microsoft server software Microsoft server technology Instant messaging server software Innovative Communications Alliance products
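The automatic federation mode described in the Features section relies on each partner publishing a federation SRV record in DNS (conventionally _sipfederationtls._tcp.&lt;domain&gt; for Lync/Skype for Business deployments). Below is a minimal sketch of such a lookup using the third-party dnspython package; the domain example.com and the function name are placeholders, and real deployments leave this discovery to the Edge Server.

```python
# Minimal sketch: discover a federation partner's edge server via the
# _sipfederationtls._tcp SRV record. Requires the third-party dnspython
# package; example.com is a placeholder domain.
import dns.resolver

def discover_federation_edge(domain: str):
    answers = dns.resolver.resolve(f"_sipfederationtls._tcp.{domain}", "SRV")
    # SRV selection order: lowest priority first, then highest weight.
    records = sorted(answers, key=lambda r: (r.priority, -r.weight))
    return [(str(r.target).rstrip("."), r.port) for r in records]

if __name__ == "__main__":
    for host, port in discover_federation_edge("example.com"):
        print(f"edge server candidate: {host}:{port}")
```

The returned host and port identify the partner's edge server, to which the local deployment then opens a mutually authenticated SIP-over-TLS connection, matching the encrypted signaling defaults described above.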
1639135
https://en.wikipedia.org/wiki/COBIT
COBIT
COBIT (Control Objectives for Information and Related Technologies) is a framework created by ISACA for information technology (IT) management and IT governance. The framework is business focused and defines a set of generic processes for the management of IT, with each process defined together with process inputs and outputs, key process activities, process objectives, performance measures and an elementary maturity model. Framework and components Business and IT goals are linked and measured to establish the responsibilities of business and IT teams. Five process domains are identified: Evaluate, Direct and Monitor (EDM); Align, Plan and Organize (APO); Build, Acquire and Implement (BAI); Deliver, Service and Support (DSS); and Monitor, Evaluate and Assess (MEA). The COBIT framework ties in with COSO, ITIL, BiSL, ISO 27000, CMMI, TOGAF and PMBOK. The framework helps companies meet regulatory requirements, become more agile and improve financial performance. The COBIT components are:
Framework: Organizes IT governance objectives and good practices by IT domains and processes and links them to business requirements.
Process descriptions: A reference process model and common language for everyone in an organization. The processes map to the responsibility areas of plan, build, run and monitor.
Control objectives: Provides a complete set of high-level requirements to be considered by management for effective control of each IT process.
Management guidelines: Helps assign responsibility, agree on objectives, measure performance, and illustrate interrelationships with other processes.
Maturity models: Assesses maturity and capability per process and helps to address gaps.
The standard meets the needs of practice while remaining independent of specific manufacturers, technologies and platforms. It can be used both for auditing a company's IT system and for designing one: in the first case, COBIT helps determine the degree to which the system under study conforms to best practice; in the second, it guides the design of a system that approaches that ideal. History COBIT was initially "Control Objectives for Information and Related Technologies," though before the release of the framework people spoke of "CobiT" as "Control Objectives for IT" or "Control Objectives for Information and Related Technology." ISACA first released COBIT in 1996, originally as a set of control objectives to help the financial audit community better maneuver in IT-related environments. Seeing value in expanding the framework beyond just the auditing realm, ISACA released a broader version 2 in 1998 and expanded it even further by adding management guidelines in 2000's version 3. The development of both the AS 8015: Australian Standard for Corporate Governance of Information and Communication Technology in January 2005 and the more international draft standard ISO/IEC DIS 29382 (which soon after became ISO/IEC 38500) in January 2007 increased awareness of the need for more information and communication technology (ICT) governance components. ISACA subsequently added related components and frameworks with versions 4 and 4.1 in 2005 and 2007 respectively, "addressing the IT-related business processes and responsibilities in value creation (Val IT) and risk management (Risk IT)." COBIT 5 (2012) is based on COBIT 4.1, Val IT 2.0 and Risk IT frameworks, and draws on ISACA's IT Assurance Framework (ITAF) and the Business Model for Information Security (BMIS).
ISACA currently offers certification tracks both on COBIT 2019 (COBIT Foundations, COBIT Design & Implementation, and Implementing the NIST Cybersecurity Framework Using COBIT 2019) and on the previous version (COBIT 5).

See also

IT Governance
Data governance
Information Quality Management
ITIL
ISO/IEC 38500

References

External links

COBIT page at ISACA
Checklist/cheatsheet summarizing COBIT 5
A user case of the COBIT Framework: San Marcos, TX

Information technology governance
Information technology audit
Privacy
5276468
https://en.wikipedia.org/wiki/Hackweiser
Hackweiser
HackWeiser is an underground hacking group and hacking magazine founded in 1999. In early 2001 the founder and leader, p4ntera, left the group, saying very little. In April 2001 HackWeiser claimed credit for starting Project China, a campaign of attacks aimed at mainland Chinese computer systems. The group appeared in the news for defacing well-known websites, including sites owned by Microsoft, Sony, Walmart, the Girl Scouts of America, Jenny Craig, DARE, Nellis Air Force Base (aka Area 51), CyberNanny, and countless others. The group was noted by the US Attorney's Bulletin in reference to "responsible hackers", and won multiple categories in the "State of the Hack Awards". Its members were a mix of grey hat and black hat hackers. The group eventually fell apart and disbanded after the arrest of Hackah Jak in mid-2003, although reports indicate that many ex-members remain active in the underground.

References

Hacker groups
6256906
https://en.wikipedia.org/wiki/Communications%20%26%20Information%20Services%20Corps
Communications & Information Services Corps
The Communications and Information Services Corps (CIS) (An Cór Seirbhísí Cumarsáide agus Eolais) – formerly the Army Corps of Signals – is one of the combat support corps of the Irish Defence Forces, the military of Ireland. It is responsible for the installation, maintenance and operation of communications and information systems for the command, control and administration of the Defence Forces, and for facilitating accurate, real-time sharing of intelligence between the Army, Naval Service and Air Corps branches at home and overseas. The CIS Corps is headquartered at McKee Barracks, Dublin, and comes under the command of an officer of Colonel rank, known as the Director of CIS Corps.

Mission

Formerly the Army Corps of Signals, the Communications and Information Services Corps is responsible for the development and operation of information technology and telecommunications systems in support of Defence Forces tasks. It is also responsible for coordinating all communications – radio and line – and information systems, communications research, and the updating of communications in line with modern developments and operational requirements. The CIS Corps is tasked with using networking and information technologies to dramatically increase Defence Forces operational effectiveness through the provision of timely and accurate information to the appropriate commander, along with the real-time, efficient sharing of information and intelligence with the Army, Naval Service and Air Corps, as well as with multinational partners involved in international peacekeeping and other actors as required. This role includes the development and maintenance of a secure nationwide Defence Forces Telecommunications Network (DFTN), which can support both protected voice and data services, and the provision and maintenance of encrypted military communications equipment for use by Defence Forces personnel at home and abroad. CIS Corps units are dispersed throughout the Defence Forces, giving communications and IT support to each of the Army brigades, the Naval Service, the Air Corps, the Defence Forces Training Centre (DFTC) and Defence Forces Headquarters (DFHQ). The CIS Corps has Base Workshops where detailed maintenance, research and development are conducted. The CIS Corps collar flash features the angel Gabriel, the messenger of God, behind a signal shield and the caption "Cór Comharthaíochta". This is the Irish translation of 'Signals Corps', while the translation of Communications and Information Services Corps is "An Cór Seirbhísí Cumarsáide agus Eolais".

Signals intelligence & cyber

The Communications and Information Services (CIS) Corps works with the Defence Forces intelligence branch, the Directorate of Military Intelligence (J2), with regard to signals intelligence (SIGINT), and houses a dedicated SIGINT element within the Corps. The CIS Corps has the ability to intercept and monitor communications, remotely collect data, and process it. Under Irish legislation, the Criminal Justice (Surveillance) Act 2009 and the Interception of Postal Packets and Telecommunications Messages (Regulation) Act 1993 provide the Defence Forces with the legal authority to conduct domestic intelligence operations involving espionage, electronic communications and stored electronic information in order to safeguard and maintain the security of the state.
The CIS Corps, along with the Garda Síochána, the national police force, provides a significant domestic support role to the National Cyber Security Centre (NCSC) of the Department of Communications, Climate Action and Environment in countering cyber-attacks, protecting critical national infrastructure and securing government communications. CIS is responsible for cyber security within the Defence Forces, and maintains a capability in that area for the purpose of protecting its own networks and users at home and abroad. In 2016 the establishment of the Computer Incident Response Team (CIRT) in DFHQ CIS Company was revealed. In July 2015, leaked email correspondence from Hacking Team – a private Italian spying and eavesdropping software company – reportedly showed members of the Irish CIS Corps in discussions with the company to purchase intrusion and surveillance "solutions" for the lawful interception of online communications, such as monitoring incoming and outgoing emails, browsing activity and Skype calls, remotely switching on webcams and microphones, and remotely taking control of devices. A Defence Forces spokesperson said that for operational security reasons it could not comment on specific elements of the activities of the CIS Corps, but confirmed that no goods or services were purchased from the company in question. The CIS Corps deployed 'ethical hackers' to fight back against the Health Service Executive ransomware attack in mid-2021, and sent CIS personnel to hospitals and HSE offices to decrypt devices affected by the cyberattack onsite. Reservists were particularly useful to this effort due to their cybersecurity skills and experience gleaned from the private sector.

CIS School

Soldiers join the Communications and Information Services Corps in one of the following apprenticeship trades:

Communication Operative
Electronic Engineering Technician
Software Engineering Technician
Information Technology Support Technician
Communications & Information Services Technician
Regimental Signaller
Radio Instructor
Telephone Maintenance Technician
Data Communications Technician
Microwave Systems Technician
Network Administrator

All personnel first complete basic military training. The CIS School runs a large number of CIS-related courses, including a degree programme in Military Communications for CIS technicians in association with the Institute of Technology, Carlow. The CIS School has three sections, providing training in the areas of operational procedures, technical proficiency and simulation.

Procedural and Operations Section (Proc/Ops):
Communications Operative Course
Detachment Commanders Course
Aerial Riggers Course
Regimental Signals Course
CIS Standard NCO Course
CIS Young Officers Course

Technical Section:
CIS Trainee Technician Scheme (TTS)
Technical Training Advancement Courses
Strategic Applications Training

Simulation Section:
Digital Indoor Range Theatres (DIRT)
Command and Staff Trainer

The DIRT range is a digital indoor shooting range in which trainees can be assessed on their marksmanship with the various small arms used by the Defence Forces. It helps develop these proficiencies methodically without regularly firing live ammunition. The €1 million facility is based at the Department of Defence properties at Kilworth Camp, County Cork.
Communications and Information Services Corps Units

Active units

Defence Forces Headquarters CIS Company (DFHQ CIS Coy), McKee Barracks, Dublin
Communications Division, Naval Service (Comms Div, NS), Haulbowline, Cork
Air Corps CIS Squadron, Casement Aerodrome, Baldonnel, Dublin
CIS Group (including CIS School, Base Signal Workshops, Base Technical Stores), Curragh Camp, Kildare
1 Field CIS Company (1 Fd CIS Coy), Collins Barracks, Cork
2 Field CIS Company (2 Fd CIS Coy), Cathal Brugha Barracks, Dublin

Retired units

4 Field CIS Company (4 FD CIS COY), Custume Barracks, Athlone
31 Reserve Field CIS Company (31 RES FD CIS COY), Sarsfield Barracks, Limerick and Collins Barracks, Cork
54 Reserve Field CIS Company (54 RES FD CIS COY), Custume Barracks, Athlone and Military Post, Sligo
62 Reserve Field CIS Company (62 RES FD CIS COY), Cathal Brugha Barracks, Dublin

The Reserve Defence Forces (RDF) units have been integrated with the Permanent Defence Forces (PDF) units as part of the "single force concept".

See also

Signals intelligence by alliances, nations and industries
National Cyber Security Centre (NCSC)
Directorate of Military Intelligence (J2)
Army Ranger Wing (ARW)
Garda Crime & Security Branch (CSB)
Garda National Surveillance Unit (NSU)
Centre for Cybersecurity & Cybercrime Investigation (UCD CCI)
Royal Corps of Signals

References

External links

Defence Forces Ireland - Army Corps - Communications and Information Services Corps
Defence Forces Ireland - Air Corps - Communication & Information Services Squadron

Military communications corps
CIS Corps
Irish intelligence agencies
Signals intelligence agencies
Computer security organizations
Cryptography organizations
Software engineering organizations
Cyberwarfare
Information technology management
Communications in the Republic of Ireland
19009060
https://en.wikipedia.org/wiki/Anthropomorphism
Anthropomorphism
Anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities. It is considered to be an innate tendency of human psychology. Personification is the related attribution of human form and characteristics to abstract concepts such as nations, emotions, and natural forces such as seasons and weather. Both have ancient roots as storytelling and artistic devices, and most cultures have traditional fables with anthropomorphized animals as characters. People have also routinely attributed human emotions and behavioral traits to wild as well as domesticated animals.

Etymology

Anthropomorphism and anthropomorphization derive from the verb form anthropomorphize, itself derived from the Greek ánthrōpos (ἄνθρωπος, "human") and morphē (μορφή, "form"). It is first attested in 1753, originally in reference to the heresy of applying a human form to the Christian God.

Examples in prehistory

From the beginnings of human behavioral modernity in the Upper Paleolithic, about 40,000 years ago, examples of zoomorphic (animal-shaped) works of art occur that may represent the earliest known evidence of anthropomorphism. One of the oldest known is an ivory sculpture from Germany, the Löwenmensch figurine, a human-shaped figurine with the head of a lioness or lion, determined to be about 32,000 years old. It is not possible to say what these prehistoric artworks represent. A more recent example is The Sorcerer, an enigmatic cave painting from the Trois-Frères Cave, Ariège, France: the figure's significance is unknown, but it is usually interpreted as some kind of great spirit or master of the animals. In either case there is an element of anthropomorphism. This anthropomorphic art has been linked by archaeologist Steven Mithen with the emergence of more systematic hunting practices in the Upper Palaeolithic. He proposes that these works are the product of a change in the architecture of the human mind, in which anthropomorphism allowed hunters to identify empathetically with hunted animals and better predict their movements.

In religion and mythology

In religion and mythology, anthropomorphism is the perception of a divine being or beings in human form, or the recognition of human qualities in these beings. Ancient mythologies frequently represented the divine as deities with human forms and qualities. These deities resembled human beings not only in appearance and personality; they exhibited many human behaviors that were used to explain natural phenomena, creation, and historical events. The deities fell in love, married, had children, fought battles, wielded weapons, and rode horses and chariots. They feasted on special foods, and sometimes required sacrifices of food, beverage, and sacred objects to be made by human beings. Some anthropomorphic deities represented specific human concepts, such as love, war, fertility, beauty, or the seasons. Anthropomorphic deities exhibited human qualities such as beauty, wisdom, and power, and sometimes human weaknesses such as greed, hatred, jealousy, and uncontrollable anger. Greek deities such as Zeus and Apollo often were depicted in human form exhibiting both commendable and despicable human traits. Anthropomorphism in this case is, more specifically, anthropotheism. From the perspective of adherents to religions in which humans were created in the form of the divine, the phenomenon may be considered theomorphism, or the giving of divine qualities to humans.
Anthropomorphism has cropped up as a Christian heresy, particularly prominently with the Audians in third-century Syria, but also in fourth-century Egypt and tenth-century Italy. This was often based on a literal interpretation of Genesis 1:27: "So God created humankind in his image, in the image of God he created them; male and female he created them".

Criticism

Some religions, scholars, and philosophers have objected to anthropomorphic deities. The earliest known criticism was that of the Greek philosopher Xenophanes (570–480 BCE), who observed that people model their gods after themselves. He argued against the conception of deities as fundamentally anthropomorphic: Xenophanes said that "the greatest god" resembles man "neither in form nor in mind". Both Judaism and Islam reject an anthropomorphic deity, believing that God is beyond human comprehension. Judaism's rejection of an anthropomorphic deity grew during the Hasmonean period (circa 300 BCE), when Jewish belief incorporated some Greek philosophy. It grew further after the Islamic Golden Age in the tenth century, and Maimonides codified it in the twelfth century in his thirteen principles of Jewish faith. In the Ismaili interpretation of Islam, assigning attributes to God as well as negating any attributes from God (via negativa) both qualify as anthropomorphism and are rejected, as God cannot be understood by either assigning attributes to Him or taking attributes away from Him. The 10th-century Ismaili philosopher Abu Yaqub al-Sijistani suggested the method of double negation; for example: "God is not existent" followed by "God is not non-existent". This exalts God beyond human understanding or comprehension. Hindus do not reject the concept of a deity in the abstract unmanifested, but note practical problems. Lord Krishna said in the Bhagavad Gita, Chapter 12, Verse 5, that it is much more difficult for people to focus on a deity as the unmanifested than on one with form, using anthropomorphic icons (murtis), because people need to perceive with their senses. In secular thought, one of the most notable criticisms began in 1600 with Francis Bacon, who argued against Aristotle's teleology, which declared that everything behaves as it does in order to achieve some end, in order to fulfill itself. Bacon pointed out that achieving ends is a human activity, and that attributing it to nature misconstrues nature as humanlike. Modern criticisms followed Bacon's ideas, such as the critiques of Baruch Spinoza and David Hume. Hume, for instance, embedded his arguments in his wider criticism of human religions and specifically pointed to what he saw as their "inconsistence": on one hand, the Deity is painted in the most sublime colors, but on the other, it is degraded to nearly human levels by being given human infirmities, passions, and prejudices. In Faces in the Clouds, anthropologist Stewart Guthrie proposes that all religions are anthropomorphisms that originate in the brain's tendency to detect the presence or vestiges of other humans in natural phenomena. Some scholars also argue that anthropomorphism overestimates the similarity of humans and nonhumans and therefore cannot yield accurate accounts.

In literature

Religious texts

There are various examples of personification in both the Hebrew Bible and the Christian New Testament, as well as in the texts of some other religions.

Fables

Anthropomorphism, also referred to as personification, is a well-established literary device from ancient times.
The story of "The Hawk and the Nightingale" in Hesiod's Works and Days preceded Aesop's fables by centuries. Collections of linked fables from India, the Jataka Tales and Panchatantra, also employ anthropomorphized animals to illustrate principles of life. Many of the stereotypes of animals that are recognized today, such as the wily fox and the proud lion, can be found in these collections. Aesop's anthropomorphisms were so familiar by the first century CE that they colored the thinking of at least one philosopher: Apollonius noted that the fable was created to teach wisdom through fictions that are meant to be taken as fictions, contrasting them favorably with the poets' stories of the deities that are sometimes taken literally. Aesop, "by announcing a story which everyone knows not to be true, told the truth by the very fact that he did not claim to be relating real events". The same consciousness of the fable as fiction is to be found in other examples across the world, one example being a traditional Ashanti way of beginning tales of the anthropomorphic trickster-spider Anansi: "We do not really mean, we do not really mean that what we are about to say is true. A story, a story; let it come, let it go."

Fairy tales

Anthropomorphic motifs have been common in fairy tales from the earliest ancient examples set in a mythological context to the great collections of the Brothers Grimm and Perrault. The Tale of Two Brothers (Egypt, 13th century BCE) features several talking cows, and in Cupid and Psyche (Rome, 2nd century CE) Zephyrus, the west wind, carries Psyche away; later an ant feels sorry for her and helps her in her quest.

Modern literature

Building on the popularity of fables and fairy tales, children's literature began to emerge in the nineteenth century with works such as Alice's Adventures in Wonderland (1865) by Lewis Carroll, The Adventures of Pinocchio (1883) by Carlo Collodi and The Jungle Book (1894) by Rudyard Kipling, all employing anthropomorphic elements. This continued in the twentieth century, with many of the most popular titles having anthropomorphic characters: examples include The Tale of Peter Rabbit (1901) and later books by Beatrix Potter; The Wind in the Willows (1908) by Kenneth Grahame; Winnie-the-Pooh (1926) and The House at Pooh Corner (1928) by A. A. Milne; and The Lion, the Witch and the Wardrobe (1950) and the subsequent books in The Chronicles of Narnia series by C. S. Lewis. In many of these stories the animals can be seen as representing facets of human personality and character. As John Rowe Townsend remarks, discussing The Jungle Book, in which the boy Mowgli must rely on his new friends the bear Baloo and the black panther Bagheera, "The world of the jungle is in fact both itself and our world as well". A notable work aimed at an adult audience is George Orwell's Animal Farm, in which all the main characters are anthropomorphic animals. Non-animal examples include Rev. W. Awdry's children's stories of Thomas the Tank Engine and other anthropomorphic locomotives. The fantasy genre, which developed from mythological, fairy-tale, and romance motifs, sometimes features anthropomorphic animals as characters. The best-selling examples of the genre are The Hobbit (1937) and The Lord of the Rings (1954–1955), both by J. R. R. Tolkien, books peopled with talking creatures such as ravens, spiders, and the dragon Smaug, as well as a multitude of anthropomorphic goblins and elves. John D.
Rateliff calls this the "Doctor Dolittle Theme" in his book The History of the Hobbit, and Tolkien saw this anthropomorphism as closely linked to the emergence of human language and myth: "...The first men to talk of 'trees and stars' saw things very differently. To them, the world was alive with mythological beings... To them the whole of creation was 'myth-woven and elf-patterned'." Richard Adams developed a distinctive take on anthropomorphic writing in the 1970s: his debut novel, Watership Down (1972), featured rabbits that could talk – with their own distinctive language (Lapine) and mythology – and included a police-state warren, Efrafa. Despite this, Adams attempted to ensure his characters' behavior mirrored that of wild rabbits, engaging in fighting, copulating and defecating, drawing on Ronald Lockley's study The Private Life of the Rabbit as research. Adams returned to anthropomorphic storytelling in his later novels The Plague Dogs (1977) and Traveller (1988). By the 21st century, the children's picture book market had expanded massively. Perhaps a majority of picture books have some kind of anthropomorphism, with popular examples being The Very Hungry Caterpillar (1969) by Eric Carle and The Gruffalo (1999) by Julia Donaldson. Anthropomorphism in literature and other media led to a sub-culture known as furry fandom, which promotes and creates stories and artwork involving anthropomorphic animals, and the examination and interpretation of humanity through anthropomorphism. This is often shortened in searches to "anthro", used by some as an alternative term to "furry". Anthropomorphic characters have also been a staple of the comic book genre. The most prominent example is Neil Gaiman's The Sandman, which had a huge impact on how characters that are physical embodiments are written in the fantasy genre. Other examples include the mature Hellblazer (personified political and moral ideas), and Fables and its spin-off series Jack of Fables, which was unique for having anthropomorphic representations of literary techniques and genres. Various Japanese manga and anime have used anthropomorphism as the basis of their story. Examples include Squid Girl (anthropomorphized squid), Hetalia: Axis Powers (personified countries), Upotte!! (personified guns), and Arpeggio of Blue Steel and Kancolle (personified ships).

In film

Some of the most notable examples are the Walt Disney characters the Magic Carpet from Disney's Aladdin franchise, Mickey Mouse, Donald Duck, Goofy, and Oswald the Lucky Rabbit; the Looney Tunes characters Bugs Bunny, Daffy Duck, and Porky Pig; and an array of others from the 1920s to the present day. In the Disney/Pixar franchises Cars and Planes, all the characters are anthropomorphic vehicles, while in Toy Story they are anthropomorphic toys. Other Pixar franchises such as Monsters, Inc. feature anthropomorphic monsters, and Finding Nemo features anthropomorphic marine creatures such as fish, sharks, and whales. Anthropomorphic animals also feature in the DreamWorks franchise Madagascar. Other DreamWorks franchises such as Shrek feature fairy tale characters, and the Blue Sky Studios (20th Century Fox) franchise Ice Age features anthropomorphic extinct animals. All of the characters in Walt Disney Animation Studios' Zootopia (2016) are anthropomorphic animals living in an entirely nonhuman civilization. The live-action/computer-animated franchise Alvin and the Chipmunks by 20th Century Fox centers on talkative, singing anthropomorphic chipmunks.
The female singing chipmunks, The Chipettes, also feature in some of the franchise's films.

In television

Since the 1960s, anthropomorphism has also been represented in various animated television shows such as Biker Mice From Mars (1993–1996) and SWAT Kats: The Radical Squadron (1993–1995). Teenage Mutant Ninja Turtles, first aired in 1987, features four pizza-loving anthropomorphic turtles with a great knowledge of ninjutsu, led by their anthropomorphic rat sensei, Master Splinter. Nickelodeon's longest-running animated TV series, SpongeBob SquarePants (1999–present), revolves around SpongeBob, a yellow sea sponge living in the underwater town of Bikini Bottom with his anthropomorphic marine life friends. Cartoon Network's animated series The Amazing World of Gumball (2011–2019) is about anthropomorphic animals and inanimate objects. All of the characters in Hasbro Studios' TV series My Little Pony: Friendship Is Magic (2010–2019) are anthropomorphic fantasy creatures, most of them ponies living in the pony-inhabited land of Equestria. The Netflix original series Centaurworld focuses on a warhorse who is transported to a Dr. Seuss-like world full of centaurs whose lower halves are those of any animal, as opposed to the traditional horse. In the American animated TV series Family Guy, one of the show's main characters, Brian, is a dog. Brian shows many human characteristics – he walks upright, talks, smokes, and drinks martinis – but also acts like a normal dog in other ways; for example, he cannot resist chasing a ball and barks at the mailman, believing him to be a threat. The PBS Kids animated series Let's Go Luna! centers on an anthropomorphic female Moon who speaks, sings, and dances. She comes down out of the sky to serve as a tutor of international culture to the three main characters – a frog boy, a wombat boy and a butterfly girl, preschool-age children traveling a world populated by anthropomorphic animals with a circus run by their parents. The French-Belgian animated series Mush-Mush & the Mushables takes place in a world inhabited by Mushables, anthropomorphic fungi, along with other critters such as beetles, snails, and frogs.

In video games

Sonic the Hedgehog, a video game franchise debuting in 1991, features a speedy blue hedgehog as the main protagonist. The series' characters are almost all anthropomorphic animals, such as foxes, cats, and other hedgehogs, who are able to speak and walk on their hind legs like normal humans. As with most anthropomorphisms of animals, clothing is of little or no importance: some characters may be fully clothed while others wear only shoes and gloves. Another popular example in video games is the Super Mario series, debuting in 1985 with Super Mario Bros., whose main antagonist belongs to a fictional species of anthropomorphic turtle-like creatures known as Koopas. Other games in the series, and in the greater Mario franchise, spawned similar characters such as Yoshi, Donkey Kong and many others.

Art history

Claes Oldenburg

Claes Oldenburg's soft sculptures are commonly described as anthropomorphic. Oldenburg's sculptures, which depict common household objects, were considered Pop Art. Reproducing these objects, often at a greater size than the original, Oldenburg created his sculptures out of soft materials. The anthropomorphic qualities of the sculptures lay mainly in their sagging and malleable exterior, which mirrored the not-so-idealistic forms of the human body.
In "Soft Light Switches" Oldenburg creates a household light switch out of vinyl. The two identical switches, in a dulled orange, insinuate nipples. The soft vinyl references the aging process as the sculpture wrinkles and sinks with time. Minimalism In the essay "Art and Objecthood", Michael Fried makes the case that "literalist art" (minimalism) becomes theatrical by means of anthropomorphism. The viewer engages the minimalist work, not as an autonomous art object, but as a theatrical interaction. Fried references a conversation in which Tony Smith answers questions about his six-foot cube, "Die". Fried implies an anthropomorphic connection by means of "a surrogate personthat is, a kind of statue." The minimalist decision of "hollowness" in much of their work was also considered by Fried to be "blatantly anthropomorphic". This "hollowness" contributes to the idea of a separate inside; an idea mirrored in the human form. Fried considers the Literalist art's "hollowness" to be "biomorphic" as it references a living organism. Post-minimalism Curator Lucy Lippard's Eccentric Abstraction show, in 1966, sets up Briony Fer's writing of a post-minimalist anthropomorphism. Reacting to Fried's interpretation of minimalist art's "looming presence of objects which appear as actors might on a stage", Fer interprets the artists in Eccentric Abstraction to a new form of anthropomorphism. She puts forth the thoughts of Surrealist writer Roger Caillois, who speaks of the "spacial lure of the subject, the way in which the subject could inhabit their surroundings." Caillous uses the example of an insect who "through camouflage does so in order to become invisible... and loses its distinctness." For Fer, the anthropomorphic qualities of imitation found in the erotic, organic sculptures of artists Eva Hesse and Louise Bourgeois, are not necessarily for strictly "mimetic" purposes. Instead, like the insect, the work must come into being in the "scopic field... which we cannot view from outside." Mascots For branding, merchandising, and representation, figures known as mascots are now often employed to personify sports teams, corporations, and major events such as the World's Fair and the Olympics. These personifications may be simple human or animal figures, such as Ronald McDonald or the donkey that represents the United States's Democratic Party. Other times, they are anthropomorphic items, such as "Clippy" or the "Michelin Man". Most often, they are anthropomorphic animals such as the Energizer Bunny or the San Diego Chicken. The practice is particularly widespread in Japan, where cities, regions, and companies all have mascots, collectively known as yuru-chara. Two of the most popular are Kumamon (a bear who represents Kumamoto Prefecture) and Funassyi (a pear who represents Funabashi, a suburb of Tokyo). Animals Other examples of anthropomorphism include the attribution of human traits to animals, especially domesticated pets such as dogs and cats. Examples of this include thinking a dog is smiling simply because it is showing his teeth, or a cat mourns for a dead owner. Anthropomorphism may be beneficial to the welfare of animals. A 2012 study by Butterfield et al. found that utilizing anthropomorphic language when describing dogs created a greater willingness to help them in situations of distress. 
Previous studies have shown that individuals who attribute human characteristics to animals are less willing to eat them, and that the degree to which individuals perceive minds in other animals predicts the moral concern afforded to them. It is possible that anthropomorphism leads humans to like non-humans more when they have apparent human qualities, since perceived similarity has been shown to increase prosocial behavior toward other humans.

In science

In science, the use of anthropomorphic language that suggests animals have intentions and emotions has traditionally been deprecated as indicating a lack of objectivity. Biologists have been warned to avoid assumptions that animals share any of the same mental, social, and emotional capacities as humans, and to rely instead on strictly observable evidence. In 1927 Ivan Pavlov wrote that animals should be considered "without any need to resort to fantastic speculations as to the existence of any possible subjective states". More recently, The Oxford Companion to Animal Behaviour (1987) advised that "one is well advised to study the behaviour rather than attempting to get at any underlying emotion". Some scientists, like William M. Wheeler (writing apologetically of his use of anthropomorphism in 1911), have used anthropomorphic language in metaphor to make subjects more humanly comprehensible or memorable. Despite the impact of Charles Darwin's ideas in The Expression of the Emotions in Man and Animals (Konrad Lorenz in 1965 called him a "patron saint" of ethology), ethology has generally focused on behavior, not on emotion in animals. The study of great apes in their own environment and in captivity has changed attitudes to anthropomorphism. In the 1960s the three so-called "Leakey's Angels" – Jane Goodall studying chimpanzees, Dian Fossey studying gorillas and Biruté Galdikas studying orangutans – were all accused of "that worst of ethological sins – anthropomorphism". The charge was brought about by their descriptions of the great apes in the field; it is now more widely accepted that empathy has an important part to play in research. Frans de Waal has written: "To endow animals with human emotions has long been a scientific taboo. But if we do not, we risk missing something fundamental, about both animals and us." Alongside this has come increasing awareness of the linguistic abilities of the great apes and the recognition that they are tool-makers and have individuality and culture. Writing of cats in 1992, veterinarian Bruce Fogle points to the fact that "both humans and cats have identical neurochemicals and regions in the brain responsible for emotion" as evidence that "it is not anthropomorphic to credit cats with emotions such as jealousy".

In computing

In science fiction, an artificially intelligent computer or robot, even though it has not been programmed with human emotions, often spontaneously experiences those emotions anyway: for example, Agent Smith in The Matrix was influenced by a "disgust" toward humanity. This is an example of anthropomorphism: in reality, while an artificial intelligence could perhaps be deliberately programmed with human emotions, or could develop something similar to an emotion as a means to an ultimate goal if it is useful to do so, it would not spontaneously develop human emotions for no purpose whatsoever, as portrayed in fiction.
One example of anthropomorphism would be to believe that one's computer is angry because one insulted it; another would be to believe that an intelligent robot would naturally find a woman attractive and be driven to mate with her. Scholars sometimes disagree about whether a particular prediction of an artificial intelligence's behavior is logical or whether it constitutes illogical anthropomorphism. An example that might initially be considered anthropomorphism, but is in fact a logical statement about an artificial intelligence's behavior, is found in the Dario Floreano experiments, in which certain robots spontaneously evolved a crude capacity for "deception" and tricked other robots into eating "poison" and dying: here a trait, "deception", ordinarily associated with people rather than with machines, spontaneously evolved in a type of convergent evolution. The conscious use of anthropomorphic metaphor is not intrinsically unwise; ascribing mental processes to the computer, under the proper circumstances, may serve the same purpose as it does when humans do it to other people: it may help people understand what the computer will do, how their actions will affect the computer, how to compare computers with humans, and conceivably how to design computer programs. However, inappropriate use of anthropomorphic metaphors can result in false beliefs about the behavior of computers, for example by causing people to overestimate how "flexible" computers are. According to Paul R. Cohen and Edward Feigenbaum, in order to differentiate between anthropomorphization and logical prediction of AI behavior, "the trick is to know enough about how humans and computers think to say exactly what they have in common, and, when we lack this knowledge, to use the comparison to suggest theories of human thinking or computer thinking." Computers overturn the childhood hierarchical taxonomy of "stones (non-living) → plants (living) → animals (conscious) → humans (rational)" by introducing a non-human "actor" that appears to behave rationally on a regular basis. Much of computing terminology derives from anthropomorphic metaphors: computers can "read", "write", or "catch a virus". Information technology presents no clear correspondence with any other entities in the world besides humans; the options are either to leverage an emotional, imprecise human metaphor or to reject imprecise metaphor and make use of more precise, domain-specific technical terms. People often grant an unnecessary social role to computers during interactions. The underlying causes are debated; Youngme Moon and Clifford Nass propose that humans are emotionally, intellectually and physiologically biased toward social activity, so that when presented with even tiny social cues, deeply infused social responses are triggered automatically. This may allow anthropomorphic features to be incorporated into computers and robots to enable more familiar "social" interactions, making them easier to use.

Psychology

Foundational research

In psychology, the first empirical study of anthropomorphism was conducted in 1944 by Fritz Heider and Marianne Simmel. In the first part of this experiment, the researchers showed a two-and-a-half-minute animation of several shapes moving around the screen in varying directions at various speeds. When subjects were asked to describe what they saw, they gave detailed accounts of the intentions and personalities of the shapes.
For instance, the large triangle was characterized as a bully, chasing the other two shapes until they could trick the large triangle and escape. The researchers concluded that when people see objects making motions for which there is no obvious cause, they view these objects as intentional agents (individuals that deliberately make choices to achieve goals). Modern psychologists generally characterize anthropomorphism as a cognitive bias. That is, anthropomorphism is a cognitive process by which people use their schemas about other humans as a basis for inferring the properties of non-human entities in order to make efficient judgements about the environment, even if those inferences are not always accurate. Schemas about humans are used as the basis because this knowledge is acquired early in life, is more detailed than knowledge about non-human entities, and is more readily accessible in memory. Anthropomorphism can also function as a strategy to cope with loneliness when other human connections are not available.

Three-factor theory

Since making inferences requires cognitive effort, anthropomorphism is likely to be triggered only when certain aspects of a person and their environment hold true. Psychologist Adam Waytz and his colleagues created a three-factor theory of anthropomorphism to describe these aspects and predict when people are most likely to anthropomorphize. The three factors are:

Elicited agent knowledge, or the amount of prior knowledge held about an object and the extent to which that knowledge is called to mind.
Effectance, or the drive to interact with and understand one's environment.
Sociality, the need to establish social connections.

When elicited agent knowledge is low and effectance and sociality are high, people are more likely to anthropomorphize. Various dispositional, situational, developmental, and cultural variables can affect these three factors, such as need for cognition, social disconnection, cultural ideologies, and uncertainty avoidance.

Developmental perspective

Children appear to anthropomorphize and use egocentric reasoning from an early age, and they use it more frequently than adults. Examples of this are describing a storm cloud as "angry" or drawing flowers with faces. This penchant for anthropomorphism is likely because children have acquired vast amounts of socialization but not as much experience with specific non-human entities, and thus have less developed alternative schemas for their environment. In contrast, autistic children tend to describe anthropomorphized objects in purely mechanical terms (that is, in terms of what they do) because they have difficulties with theory of mind.

Effect on learning

Anthropomorphism can be used to assist learning. Specifically, anthropomorphized words and descriptions of scientific concepts in terms of intentionality can improve later recall of these concepts.

In mental health

In people with depression, social anxiety, or other mental illnesses, emotional support animals are a useful component of treatment, partially because anthropomorphism of these animals can satisfy the patients' need for social connection.

In marketing

Anthropomorphism of inanimate objects can affect product-buying behavior. When products seem to resemble a human schema, such as the front of a car resembling a face, potential buyers evaluate that product more positively than if they do not anthropomorphize the object.
People also tend to trust robots to do more complex tasks, such as driving a car or childcare, if the robot resembles humans in ways such as having a face, voice, and name; mimicking human motions; expressing emotion; and displaying some variability in behavior.

See also

Aniconism – antithetic concept
Animism
Anthropic principle
Anthropocentrism
Anthropology
Anthropomorphic maps
Anthropopathism
Cynocephaly
Furry fandom
Great Chain of Being
Human-animal hybrid
Humanoid
Moe anthropomorphism
National personification
Pareidolia – seeing faces in everyday objects
Pathetic fallacy
Prosopopoeia
Speciesism
Talking animals in fiction
Tashbih
Zoomorphism

Notes

References

Sources

Further reading

External links

"Anthropomorphism" entry in the Encyclopedia of Human-Animal Relationships (Horowitz A., 2007)
"Anthropomorphism" entry in the Encyclopedia of Astrobiology, Astronomy, and Spaceflight
"Anthropomorphism" in mid-century American print advertising. Collection at The Gallery of Graphic Design.

Descriptive technique
1707979
https://en.wikipedia.org/wiki/Windows%20Presentation%20Foundation
Windows Presentation Foundation
Windows Presentation Foundation (WPF) is a free and open-source graphical subsystem (similar to WinForms) originally developed by Microsoft for rendering user interfaces in Windows-based applications. WPF, previously known as "Avalon", was initially released as part of .NET Framework 3.0 in 2006. WPF uses DirectX and attempts to provide a consistent programming model for building applications. It separates the user interface from business logic, and resembles similar XML-oriented object models, such as those implemented in XUL and SVG.

Overview

WPF employs XAML, an XML-based language, to define and link various interface elements. WPF applications can be deployed as standalone desktop programs or hosted as an embedded object in a website. WPF aims to unify a number of common user interface elements, such as 2D/3D rendering, fixed and adaptive documents, typography, vector graphics, runtime animation, and pre-rendered media. These elements can then be linked and manipulated based on various events, user interactions, and data bindings. WPF runtime libraries are included with all versions of Microsoft Windows since Windows Vista and Windows Server 2008. Users of Windows XP SP2/SP3 and Windows Server 2003 can optionally install the necessary libraries. Microsoft Silverlight provided functionality that is mostly a subset of WPF to provide embedded web controls comparable to Adobe Flash; 3D runtime rendering had been supported in Silverlight since Silverlight 5. At the Microsoft Connect event on December 4, 2018, Microsoft announced the release of WPF as an open-source project on GitHub, under the MIT License. Windows Presentation Foundation has become available for projects targeting the .NET software framework; however, the system is not cross-platform and is still available only on Windows.

Features

Direct3D

Graphics, including desktop items like windows, are rendered using Direct3D. This allows the display of more complex graphics and custom themes, at the cost of GDI's wider range of support and uniform control theming. It allows Windows to offload some graphics tasks to the GPU, reducing the workload on the computer's CPU. Because GPUs are optimized for parallel pixel computations, this tends to speed up screen refreshes, at the cost of decreased compatibility in markets where GPUs are not necessarily as powerful, such as the netbook market. WPF's emphasis on vector graphics allows most controls and elements to be scaled without loss in quality or pixelization, thus increasing accessibility. With the exception of Silverlight, Direct3D integration allows for streamlined 3D rendering. In addition, interactive 2D content can be overlaid on 3D surfaces natively.

Data binding

WPF has a built-in set of data services to enable application developers to bind and manipulate data within applications. It supports four types of data binding:

one time, where the client ignores updates on the server;
one way, where the client has read-only access to data;
two way, where the client can read from and write data to the server;
one way to source, where the client has write-only access to data.

LINQ queries, including LINQ to XML, can also act as data sources for data binding. Binding of data has no bearing on its presentation; WPF provides data templates to control the presentation of data. A set of built-in controls is provided as part of WPF, containing items such as buttons, menus, grids, and list boxes. Dependency properties can be added to behaviors or attached properties to add custom binding properties.
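A minimal sketch of two-way binding wired up in code rather than XAML; the Person view model and the names used here are hypothetical, chosen only for illustration:

using System.ComponentModel;
using System.Windows.Controls;
using System.Windows.Data;

// Hypothetical view model. Implementing INotifyPropertyChanged lets
// bound controls observe changes made to the object.
public class Person : INotifyPropertyChanged
{
    private string name;
    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return name; }
        set
        {
            name = value;
            // Raising the event pushes the new value to any bound controls.
            PropertyChanged?.Invoke(this, new PropertyChangedEventArgs(nameof(Name)));
        }
    }
}

public static class BindingDemo
{
    public static TextBox CreateBoundTextBox(Person person)
    {
        var box = new TextBox { DataContext = person };
        // TwoWay: edits in the TextBox write back to Person.Name, and
        // programmatic changes to Person.Name update the TextBox.
        box.SetBinding(TextBox.TextProperty,
            new Binding("Name") { Mode = BindingMode.TwoWay });
        return box;
    }
}

The equivalent XAML attribute would be Text="{Binding Name, Mode=TwoWay}" on the TextBox element.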
A powerful concept in WPF is the logical separation of a control from its appearance. A control's template can be overridden to completely change its visual appearance. A control can contain any other control or layout, allowing for a high degree of control over composition. WPF features retained-mode graphics, so repainting the display is not always necessary.

Media services

WPF provides an integrated system for building user interfaces with common media elements like vector and raster images, audio, and video. It also provides an animation system and a 2D/3D rendering system. WPF provides shape primitives for 2D graphics along with a built-in set of brushes, pens, geometries, and transforms. The 3D capabilities in WPF are a subset of the full feature set provided by Direct3D. However, WPF provides tighter integration with other features like user interfaces, documents, and media. This makes it possible to have 3D user interfaces, 3D documents, or 3D media. There is support for most common image formats: BMP, JPEG, PNG, TIFF, Windows Media Photo, GIF, and ICON. WPF supports the video formats WMV, MPEG and some AVI files by default, but since it has Windows Media Player running beneath it, WPF can use all the codecs installed for it.

Templates

In WPF the look of an element can be defined directly, via its properties, or indirectly with a template or style. At its simplest, a style is a combination of property settings that can be applied to a UI element with a single property attribute. Templates are a mechanism for defining alternative UI for portions of a WPF application. Several template types are available in WPF (ControlTemplate, DataTemplate, HierarchicalDataTemplate, and ItemsPanelTemplate).

Control templates

Underlying all UI controls in WPF is a new composition model. Every control is composed of one or more 'visuals'. These visual sub-elements are turned into a hierarchical visual tree by WPF and eventually rendered by the GPU. Because WPF controls are not wrappers for standard Windows controls, their UI can be radically changed without affecting the control's normal behavior. Every control in WPF has a default 'template' that defines its visual tree. The default template is created by the control author and is replaceable by other developers and designers. The substitute UI is placed within a ControlTemplate.

Data templates

WPF has a flexible data binding system. UI elements can be populated and synchronized with data from an underlying data model. Rather than showing simple text for the bound data, WPF can apply a data template (replaceable UI for .NET types) before rendering to the visual tree.

Animations

WPF supports time-based animations, in contrast to the frame-based approach. This decouples the speed of the animation from how the system is performing. WPF supports low-level animation via timers and higher-level abstractions of animations via the Animation classes. Any WPF element property can be animated as long as it is registered as a dependency property. Animation classes are based on the .NET type of the property to be animated. For instance, changing the color of an element is done with the ColorAnimation class, and animating the width of an element (which is typed as a double) is done with the DoubleAnimation class. Animations can be grouped into Storyboards. Storyboards are the primary way to start, stop, pause and otherwise manipulate the animations. Animations can be triggered by external events, including user action. Scene redraws are time triggered.
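As a sketch of the animation classes just described (the control and the values are arbitrary): animating a button's width, a double-typed dependency property, uses DoubleAnimation, grouped here under a Storyboard:

using System;
using System.Windows;
using System.Windows.Controls;
using System.Windows.Media.Animation;

public static class AnimationDemo
{
    public static void GrowButton(Button button)
    {
        // Width is typed as double, so DoubleAnimation is the matching
        // class; a time-based Duration decouples the animation from the
        // frame rate.
        var grow = new DoubleAnimation(100, 300, new Duration(TimeSpan.FromSeconds(1)));

        // Storyboards group animations and are the primary way to
        // start, stop and pause them.
        var storyboard = new Storyboard();
        Storyboard.SetTarget(grow, button);
        Storyboard.SetTargetProperty(grow, new PropertyPath(FrameworkElement.WidthProperty));
        storyboard.Children.Add(grow);
        storyboard.Begin();
    }
}

In markup, the same storyboard would typically be attached to an event trigger rather than started from code.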
Presentation timers are initialized and managed by WPF. Animation effects can be defined on a per-object basis and can be accessed directly from XAML markup.

Imaging

WPF can natively access Windows Imaging Component (WIC) code and APIs, allowing developers to write image codecs for their specific image file formats.

Effects

WPF 3.0 provides for bitmap effects (the BitmapEffect class), which are raster effects applied to a Visual. These raster effects are written in unmanaged code and force rendering of the Visual to be performed on the CPU, without hardware acceleration by the GPU. BitmapEffects were deprecated in .NET 3.5 SP1, which added the Effect class, a Pixel Shader 2.0 effect that can be applied to a visual and allows all rendering to remain on the GPU. The Effect class is extensible, allowing applications to specify their own shader effects. The Effect class, in .NET 3.5 SP1 and 4.0, ships with two built-in effects, BlurEffect and DropShadowEffect. There are no direct replacements for OuterGlowBitmapEffect, EmbossBitmapEffect and BevelBitmapEffect, previously provided by the deprecated BitmapEffect class, but the same results can be achieved with the Effect class; for example, developers can get an outer glow effect by using the DropShadowEffect with its ShadowDepth set to 0. Although the BitmapEffect class was marked as deprecated in .NET Framework 3.5 SP1, its use was still allowed and these effects would still render correctly. In .NET Framework 4.0 the BitmapEffect class was effectively discontinued – code referencing it still builds without errors, but no effect actually gets rendered.

Documents

WPF natively supports paginated documents. It provides the DocumentViewer class for reading fixed-layout documents, while the FlowDocumentReader class offers different view modes, such as per-page or scrollable, and reflows text if the viewing area is resized. WPF natively supports XML Paper Specification documents, and supports reading and writing paginated documents using Open Packaging Conventions.

Text

WPF includes a number of text rendering features that were not available in GDI. This is the first Microsoft programming interface to expose OpenType features to software developers, supporting OpenType, TrueType, and OpenType CFF (Compact Font Format) fonts. Support for OpenType typographic features includes:

Ligatures
Old-style numerals (for example, parts of the glyph hang below the text baseline)
Swash variants
Fractions
Superscript and subscript
Small caps
Line-level justification
Ruby characters
Glyph substitution
Multiple baselines
Contextual and stylistic alternates
Kerning

WPF handles text in Unicode, and handles text independently of global settings such as the system locale. In addition, fallback mechanisms are provided: writing direction (horizontal versus vertical) is handled independently of font name; international fonts can be built as composite fonts from a group of single-language fonts; and composite fonts can be embedded. Font linking and font fallback information is stored in a portable XML file, using composite font technology. The WPF text engine also supports built-in spell checking. It also supports features such as automatic line spacing, enhanced international text, language-guided line breaking, hyphenation, and justification, bitmap effects, transforms, and text effects such as shadows, blur, glow and rotation.
Animated text is also supported; this refers to animated glyphs as well as real-time changes in the position, size, color, and opacity of the text. WPF text rendering takes advantage of advances in ClearType technology, such as sub-pixel positioning, natural advance widths, Y-direction anti-aliasing, hardware-accelerated text rendering, and aggressive caching of pre-rendered text in video memory. ClearType cannot be turned off in older WPF 3.x applications. Unlike ClearType in GDI or GDI+, WPF ClearType does not snap glyphs to pixels horizontally, leading to a loss of contrast disliked by some users. The text rendering engine was rewritten in WPF 4.0, correcting this issue. The extent to which glyphs are cached depends on the video card. DirectX 10 cards are able to cache the font glyphs in video memory, then perform the composition (assembling of character glyphs in the correct order, with the correct spacing), alpha-blending (application of anti-aliasing), and RGB blending (ClearType's sub-pixel color calculations) entirely in hardware. This means that only the original glyphs need to be stored in video memory, once per font (Microsoft estimates that this would require 2 MB of video memory per font), and other operations, such as the display of anti-aliased text on top of other graphics – including video – can also be done with no computational effort on the part of the CPU. DirectX 9 cards are only able to cache the alpha-blended glyphs in memory, thus requiring the CPU to handle glyph composition and alpha-blending before passing the result to the video card. Caching these partially rendered glyphs requires significantly more memory (Microsoft estimates 5 MB per process). Cards that do not support DirectX 9 have no hardware-accelerated text rendering capabilities.

Interoperability

Interoperability with Windows Forms is possible through the use of the ElementHost and WindowsFormsHost classes. To enable the use of WinForms, the developer executes this from their WPF C# code:

System.Windows.Forms.Integration.WindowsFormsHost.EnableWindowsFormsInterop();

Alternative input

WPF supports digital ink-related functionality. WPF 4.0 supports multi-touch input on Windows 7 and above.

Accessibility

WPF supports Microsoft UI Automation to allow developers to create accessible interfaces. This API also allows automated test scripts to interact with the UI.

XAML

Following the success of markup languages for web development, WPF introduces eXtensible Application Markup Language (XAML), which is based on XML. XAML is designed as a more efficient method of developing application user interfaces. The specific advantage that XAML brings to WPF is that XAML is a completely declarative language, allowing the developer (or designer) to describe the behavior and integration of components without the use of procedural programming. Although it is rare that an entire application will be built completely in XAML, its introduction allows application designers to contribute more effectively to the application development cycle. Using XAML to develop user interfaces also allows for separation of model and view, which is considered a good architectural principle. In XAML, elements and attributes map to classes and properties in the underlying APIs. As in web development, both layouts and specific themes are well suited to markup, but XAML is not required for either. Indeed, all elements of WPF may be coded in a .NET language (C#, VB.NET).
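A small illustration of this mapping; the XAML in the comment and the C# below construct the same object (the element and values are chosen arbitrarily):

using System.Windows.Controls;
using System.Windows.Media;

// XAML:  <Button Width="100" Content="Press" Background="Red" />
// Each XAML element names a class and each attribute names a property,
// so the markup above is equivalent to:
public static class XamlMappingDemo
{
    public static Button MakeButton()
    {
        return new Button
        {
            Width = 100,
            Content = "Press",
            // In XAML the string "Red" is converted to a Brush by a type converter.
            Background = Brushes.Red
        };
    }
}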
The XAML code can ultimately be compiled into a managed assembly in the same way all .NET languages are.

Architecture

The architecture of WPF spans both managed code and native code components; however, the public API is exposed only via managed code. While the majority of WPF is in managed code, the composition engine which renders WPF applications is a native component. It is named the Media Integration Layer (MIL) and resides in milcore.dll. It interfaces directly with DirectX and provides basic support for 2D and 3D surfaces, timer-controlled manipulation of the contents of a surface (with a view to exposing animation constructs at a higher level), and compositing the individual elements of a WPF application into a final 3D "scene" that represents the UI of the application and renders it to the screen. The Desktop Window Manager also uses the MIL for desktop and window composition. The media codecs are also implemented in unmanaged code, and are shipped as windowscodecs.dll. In the managed world, PresentationCore (presentationcore.dll) provides a managed wrapper for the MIL and implements the core services for WPF, including a property system that is aware of the dependencies between the setters and consumers of a property, a message-dispatching system by means of a Dispatcher object implementing a specialized event system, and services which can implement a layout system, such as measurement for UI elements. PresentationFramework (presentationframework.dll) implements the end-user presentational features, including layouts, time-dependent, storyboard-based animations, and data binding.

WPF exposes a property system for objects which inherit from DependencyObject that is aware of the dependencies between the consumers of a property, and can trigger actions based on changes in properties. Properties can be either hard-coded values or expressions, which are specific expressions that evaluate to a result. In the initial release, however, the set of supported expressions was closed. The value of a property can also be inherited from parent objects. WPF properties support change notifications, which invoke bound behaviors whenever some property of some element is changed. Custom behaviors can be used to propagate a property change notification across a set of WPF objects. This is used by the layout system to trigger a recalculation of the layout on property changes, thus exposing a declarative programming style for WPF, whereby almost everything, from setting colors and positions to animating elements, can be achieved by setting properties. This allows WPF applications to be written in XAML, which is a declarative markup language, by binding the keywords and attributes directly to WPF classes and properties.

The interface elements of a WPF application are maintained as a class of Visual objects. Visual objects provide a managed interface to a composition tree which is maintained by the Media Integration Layer (MIL). Each element of WPF creates and adds one or more composition nodes to the tree. The composition nodes contain rendering instructions, such as clipping and transformation instructions, along with other visual attributes. Thus the entire application is represented as a collection of composition nodes, which are stored in a buffer in system memory. Periodically, the MIL walks the tree and executes the rendering instructions in each node, thus compositing each element onto a DirectX surface, which is then rendered on screen.
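The property system described above can be illustrated with a minimal sketch of registering a dependency property; GaugeControl and its Level property are invented for illustration:

using System.Windows;
using System.Windows.Controls;

public class GaugeControl : Control
{
    // Registration places the property in WPF's property system, giving it
    // change notification, value inheritance and data binding support.
    public static readonly DependencyProperty LevelProperty =
        DependencyProperty.Register(
            "Level", typeof(double), typeof(GaugeControl),
            new FrameworkPropertyMetadata(
                0.0,
                // Invalidate rendering whenever the value changes.
                FrameworkPropertyMetadataOptions.AffectsRender,
                OnLevelChanged));

    // Conventional CLR wrapper over GetValue/SetValue.
    public double Level
    {
        get { return (double)GetValue(LevelProperty); }
        set { SetValue(LevelProperty, value); }
    }

    private static void OnLevelChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        // Change notification callback: react to the new value here.
    }
}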
MIL uses the painter's algorithm, in which all the components are rendered from the back of the screen to the front, which allows complex effects like transparencies to be achieved easily. This rendering process is hardware-accelerated using the GPU. The composition tree is cached by the MIL, creating a retained-mode graphics system, so that any changes to the composition tree need only be communicated incrementally to the MIL. This also frees applications from managing repainting of the screen; the MIL can do that itself, as it has all the information necessary. Animations can be implemented as time-triggered changes to the composition tree. On the user-visible side, animations are specified declaratively, by applying an animation effect to some element via a property and specifying the duration. The code-behind updates the specific nodes of the tree, via Visual objects, to represent both the intermediate states at specified time intervals and the final state of the element. The MIL renders the changes to the element automatically.

All WPF applications start with two threads: one for managing the UI and another background thread for handling rendering and repainting. Rendering and repainting is managed by WPF itself, without any developer intervention. The UI thread houses the Dispatcher (via an instance of DispatcherObject), which maintains a queue of UI operations that need to be performed (as a tree of Visual objects), sorted by priority. UI events, including the changing of a property that affects the layout, and user interaction events are queued up in the dispatcher, which invokes the handlers for the events. For application responsiveness, Microsoft recommends that event handlers only update properties to reflect new content, with the new content being generated or retrieved in a background thread. The render thread picks up a copy of the visual tree and walks the tree, calculating which components will be visible, and renders them to Direct3D surfaces. The render thread also caches the visual tree, so only changes to the tree need to be communicated, resulting in updates to just the changed pixels. WPF supports an extensible layout model. Layout is divided into two phases: Measure and Arrange. The Measure phase recursively calls all elements to determine the size they will take. In the Arrange phase, the child elements are recursively arranged by their parents, invoking the layout algorithm of the layout module in use.

Tools

A number of development tools are available for developing Windows Presentation Foundation applications.

Microsoft tools

Microsoft Visual Studio is a developer-oriented IDE that contains a combined XAML editor and WPF visual designer, beginning with Visual Studio 2008.
Prior to Visual Studio 2008, the WPF designer add-in, codenamed Cider, was the original release of a WYSIWYG editor for creating WPF windows, pages, and user controls. It was available for Visual Studio 2005 as the Visual Studio 2005 Extensions for .NET Framework 3.0 CTP for the initial release of WPF. Microsoft Visual Studio Express 2008 and later editions, particularly Visual C# Express and Visual Basic Express, also have the WPF designer integrated.
Microsoft Blend is a designer-oriented tool that provides an artboard for the creation of WPF applications with 2D and 3D graphics, text, and forms content. It generates XAML that may be exported into other tools, and it shares solution (sln) and project formats (csproj, vbproj) with Microsoft Visual Studio.
Microsoft Expression Design is a bitmap and 2D-vector graphics tool for exporting to XAML.
XAMLPad is a lightweight tool included in the .NET Framework SDK. It can create and render XAML files using a split-screen UI layout. It also provides a tree view of the markup in a panel.

Third-party tools

SharpDevelop, an open-source .NET IDE, includes WPF application design abilities. It is a free alternative to Visual Studio.
PowerBuilder .NET by Sybase is a 4GL tool that translates PowerBuilder code and graphical objects to XAML and allows deploying the application as a WPF target.
Essential Studio for WPF by Syncfusion is a package of over 100 WPF UI controls for building line-of-business WPF applications.

Deployment

WPF's deployment model offers both standalone and XAML Browser Application (XBAP, pronounced "ex-bap") flavors. The programming model for building either type of application is similar.

Standalone applications are those that have been locally installed on the computer using software such as ClickOnce or Windows Installer (MSI) and which run on the desktop. Standalone applications are considered full-trust and have full access to a computer's resources.

XAML Browser Applications (XBAPs) are programs that are hosted inside a web browser. Before .NET Framework 4, XBAP applications ran in a partial-trust sandbox environment: they were not given full access to the computer's resources, and not all WPF functionality was available. The hosted environment is intended to protect the computer from malicious applications. In .NET 4, XBAPs can run as fully trusted applications with full access to computer resources. Starting an XBAP from an HTML page or vice versa is seamless (there is no security or installation prompt). Although one gets the perception of the application running in the browser, it actually runs in an out-of-process executable different from the browser.

Internet Explorer

When .NET Framework 3.0 was released, XBAPs ran only in Internet Explorer.

Firefox support

With the release of .NET Framework 3.5 SP1, XBAPs also run in Mozilla Firefox using the included extension. On October 16, 2009, Mozilla added the Firefox plugin and extension to its add-ons blocklist because of a serious, remotely exploitable security vulnerability, in agreement with Microsoft. Two days later, the block was removed. On Windows 7, the Firefox plugin does not run by default. Reinstalling the .NET Framework 3.5 SP1 will install the plugin and add support for XBAP applications in Firefox. Alternatively, copying the plugin DLLs from a working Windows XP/Vista installation to the plugin directory of Mozilla Firefox will also reinstate support for XBAP applications. The WPF plugin DLLs reside in the following directories (depending on the Framework version):

3.5 [SP1] C:\Windows\Microsoft.NET\Framework\v3.5\WPF\NPWPF.dll
4.0 C:\Windows\Microsoft.NET\Framework\WPF\NPWPF.dll

Microsoft Silverlight

Silverlight (codenamed WPF/E) is a deprecated cross-browser browser plugin which contained WPF-based technology (including XAML) that provided features such as video, vector graphics, and animations to multiple operating systems, including Windows 7, Windows Vista, Windows XP, and Mac OS X. Microsoft sanctioned a limited number of third-party developers to work on ports for certain Linux distributions. Specifically, it was provided as an add-on for Mozilla Firefox, Internet Explorer 6 and above, Google Chrome 42 and below, and Apple Safari.
.NET Micro Framework

The .NET Micro Framework includes a GUI object model loosely based on WPF, although without support for XAML.

References

Bibliography

Adam Nathan: Windows Presentation Foundation Unleashed (WPF). Sams Publishing, December 21, 2006.
Chris Anderson: Essential Windows Presentation Foundation (WPF). Addison-Wesley, April 11, 2007.
Chris Sells, Ian Griffiths: Programming WPF. O'Reilly Media, August 28, 2007.
Arlen Feldman, Maxx Daymon: WPF in Action with Visual Studio 2008. Manning Publications, November 21, 2008.

External links

MSDN Library: Windows Presentation Foundation
Rich typography with Windows Presentation Foundation
Windows Presentation Foundation User Education

.NET terminology
Formerly proprietary software
Free and open-source software
Presentation Foundation
Microsoft free software
Microsoft Windows multimedia technology
Software using the MIT license
Widget toolkits
2006 software
8727412
https://en.wikipedia.org/wiki/FSA%20Corporation
FSA Corporation
FSA Corporation (formerly Freedman, Sharp, and Associates) developed UNIX and Windows system-level software for security and distributed system administration in the 1990s. The company provided the underlying technology basis for software offerings by IBM, Symantec, and McAfee. FSA's best-known products were its Load Balancer distributed workload management solution, its PowerBroker secure system administration solution for controlling and auditing the power of root on UNIX networks, and its CipherLink network encryption solution. The company was acquired by McAfee in 1996.

The company was a testing ground for Theo de Raadt's ideas concerning open-source software, which led to the OpenBSD operating system. De Raadt was FSA's first non-founding employee.

History

Early years

The company was conceived in a 1989 meeting between Dan Freedman and Maurice Sharp, both of whom had been asked by their Apollo Computer sales representative (Gary Erickson) to form a company that could serve and consult for Calgary-area oil companies with UNIX computer networks. From 1989 through the end of 1991, Freedman and Sharp operated FSA as a consulting company, dealing at the driver and administration level with the large computer networks of the day (large in 1990 meant anything more than about 10 computers on a LAN).

In early 1992, Maurice Sharp chose to leave the company, taking a full-time intern position at Apple Computer. Freedman renamed the company from Freedman, Sharp, and Associates to FSA Corporation, and changed its focus from system-administrative consulting to distributed workload management.

Shortly after the departure of Maurice Sharp, Freedman began to assemble materials for a 3-day UNIX security course. The course comprised over 500 pages of materials along with a tape of open-source tools for managing the security of a UNIX network. Freedman marketed and taught the course 10 times in 1992 in various North American cities. He cites the course as an important way of learning the concerns of system administrators, providing the feedback he needed to decide what products and services FSA would offer next. While the security-course phase of FSA's history did not produce any notable products, the course served as an important mechanism by which the company could quickly engage with potential customers, learning their needs and deriving a plan for product development on the basis of what was learned.

Load balancer

Freedman's graduate work at the University of Calgary had involved the development of a process migration subsystem for networks of Sun Microsystems computers. From 1992 to 1994, the company commercialized that work, developing the company's Load Balancer product, a versatile system for distributing batch jobs across the increasingly large networks of computers emerging at that time.

Freedman hired Theo de Raadt as FSA's first employee. De Raadt's programming and architecture competence has since been proven in his OpenBSD operating system project, but at the time FSA Corporation was his first job since graduating from the University of Calgary.

In January 1994, the Load Balancer product line was sold to Unison Tymlabs, which needed a UNIX-based product line ahead of its IPO. Unison has since been absorbed via acquisition by IBM, and the Load Balancer product line is now sold by IBM.

PowerBroker

The sale of Load Balancer left the company with staff and cash, but no product.
Freedman had developed and marketed a 3-day UNIX security course in 1992, and had developed significant contacts within the banking, defense, and chip-making communities. These customers all had similar problems in managing large UNIX networks, specifically concerning the control and audit of the actions of the systems' administrators. The problem was that the root account, used by systems administrators when reconfiguring parts of the system, was able to edit any of the audit trails created by the system. Freedman designed a new product, PowerBroker, that was similar in concept to today's sudo products, but which allowed centralized control and auditing of an entire network, even down to the keystroke level, with the logs stored on a dedicated remote computer to which the system administrators typically did not have access. By vetting all access and logging through this remote machine, a secure log could be maintained. The system was ported to over 22 versions of UNIX to accommodate the newer, larger networks with hundreds or thousands of machines. Dean Huxley was responsible for most of the system-level programming on PowerBroker, with Kevin Chmilar and Earle Lowe also contributing.

The PowerBroker product line was sold non-exclusively to Raxco and Symark Software (now known as BeyondTrust). Raxco launched Axent Technologies as "a division exclusively committed to providing cross-platform, client/server security solutions", and the product was renamed UNIX Privilege Manager (UPM). Axent was subsequently sold to Symantec Corporation, which later spun off UPM and other products as PassGo Technologies, now part of Quest Software. BeyondTrust continues to sell the product under the PowerBroker name.

CipherLink, PowerTelnet, and PowerFTP

In 1995, the company began to develop network encryption technologies, again in response to a growing number of similar requests from its customers. Early products in this sphere included PowerTelnet, which is comparable to today's ssh, and PowerFTP, which provided encrypted file transfer. In February 1996, Freedman realized that this technology could be generalized, resulting in a general-purpose network encryption solution that would encrypt the traffic of any application without requiring much, if any, modification to that application. Chmilar, Huxley, and Lowe worked hard to prepare a demonstrable version of this new CipherLink product in time for the Networld+Interop trade show in April 1996. The product was well received, and became a finalist for the Best Product of Show award at the trade show that year.

Growth

In early 1995, Paul Scripko joined the company as its first VP of Sales. He and Freedman had met at Unison Tymlabs, where Scripko had assumed responsibility for sales of FSA's Load Balancer product line after its acquisition in 1994. Scripko professionalized FSA's sales operation, and the company immediately began to derive higher revenues from larger customers. Also around this time, Benjamin Freedman (brother of Dan Freedman) began to work part-time as the company's VP of Marketing. Gary Neill was brought on board by Dan Freedman as a management consultant in the fall of 1995, and remained with the company until its acquisition by McAfee in 1996.

Acquisition by McAfee Associates

In August 1996, FSA was acquired by antivirus maker McAfee Associates, which wanted to expand its products from antivirus into the more general security area.
The FSA team developed 12 new product lines for McAfee in the following 12 months, including NetCrypto, PCCrypto, WebScan, McAfee Personal Firewall, and a number of other products, only some of which were successful. With little interest in UNIX software at that time, McAfee sold the PowerBroker product line to Symark Software (now known as BeyondTrust) in a non-exclusive deal. Ironically, McAfee also sold rights to PowerBroker to Raxco, which was later acquired by arch-rival Symantec. McAfee continued its security expansion with the acquisition of Trusted Information Systems and PGP. Many of the personnel involved with FSA later joined another of Freedman's high-technology startups, Jasomi Networks.

Investment

FSA Corporation was funded without the assistance of venture capital or angel investors. Instead, the company funded itself with customer revenues. The company also benefited from the Industrial Research Assistance Program (IRAP) operated by the Government of Canada, and from a high-technology tax credit known as SR&ED, available to Canadian companies.

References

External links

Software companies of Canada
1637989
https://en.wikipedia.org/wiki/Chu%20Bong-Foo
Chu Bong-Foo
Chu Bong-Foo (born 1937) is the inventor of the Tsang-chieh (Cangjie) method, a widely used Chinese input method. His renowned input method, created in 1976 and released into the public domain in 1982, has sped up the computerization of Chinese society. Chu spent his childhood in Taiwan, and has worked in Brazil, the United States, Taiwan, Shenzhen, and Macau.

History

Chu was born in 1937 in Huanggang, Hubei, to his father Chu Wan-in, also called Chu Huai-ping. His family led a wandering life during the turbulent days of mainland China, and they finally settled down in Taiwan. There he studied at a local high school. He was an imaginative teenager who spent so much time reading fiction that it negatively affected his studies. Later he also became interested in cinema. After graduating from Taiwan Provincial Agriculture Institute and completing his military service, he taught briefly at an elementary school in Hualien. In this period he witnessed the poverty of the countryside, and developed a sense of mission for rural development and cultural improvement. Finding teaching not to his taste, he went to Brazil instead to develop his career, only to find life there more difficult. Over that period of time, he took up several jobs. It was also during these turbulent times that Chu flirted with the hippie lifestyle and studied at a local conservatory.

Tsang-chieh

However, his work on Tsang-chieh did not begin until he worked at "Cultural Abril", a publishing house in Brazil, in 1972. From then on, he would dedicate his life to modernizing Chinese information technology. He saw for himself how the Brazilians could, in just one day, translate and publish foreign literature, while the Chinese took at least a year. The technology of the time, coupled with the complexities of the Chinese script, required a painstaking process of picking up type pieces from an enormous Chinese character set. Moreover, publishers often encountered characters not included in their set. This meant that the printing of any information in Chinese was much slower than in other languages.

In 1973, he returned to Taiwan. He gathered a team to study an efficient method of looking up a character with the 26 keys of a common keyboard. Existing methods of looking up a Chinese character, such as looking for its radicals, zhuyin, or romanization, give only ambiguous results. On the other hand, while the Chinese script has no alphabet, most characters are compounds of a common set of components. Chu assumed that it was possible to encode Chinese characters with a group of "Chinese alphabets" which could be mapped onto a common keyboard. After studying dictionary cut-outs and conducting many tests, the team released a table of 8,000 encoded characters in 1976. This result was unsatisfactory for general use, but it did prove the possibility of encoding Chinese in this way. Chu then enlisted more help, including that of Shen Hung-lian from the Department of Chinese Literature, National Taiwan University.

At the same time, Chu also learned about An Wang's encoding scheme. On one hand, Wang's scheme further confirmed the feasibility of the encoding approach. On the other hand, it inspired Chu to think that his encoding scheme should not only be convenient for looking up a character; it should also take the form of the characters into account, to make it possible to compose (draw) a character from its code.
Chu assumed this could be achieved with the following three steps:

choosing adequate rules for the decomposition of characters
choosing an adequate set of forms as the common components
encoding the common components (with "Chinese alphabets")

To achieve these steps, the team employed a principle similar to the "pictophonetic compounds" principle of Chinese. In 1977, the team released the first generation of the method that would later be named "Tsang-chieh". The team selected a set of fewer than 2,000 components to compose about 12,000 common characters. Each component is represented by a permutation of 1 to 3 of 26 "Chinese alphabets" (also called "radicals"). Each "alphabet" maps to a particular letter key on a standard QWERTY keyboard. In 1978, he implemented the method with computer technology, making it a Chinese input method for computers. The ROC Defense Minister Chiang Wei-kuo gave the input method the name "Tsang-chieh".

Chu put the Tsang-chieh method in the public domain in a bold effort to promote Chinese computing, essentially giving up his rights to any royalties. His contribution led many future Chinese systems to come bundled with a free copy of the Tsang-chieh input method, removing the greatest barrier to effective Chinese input systems. Since then, many adaptations of Chu's methods have also appeared. Over generations of upgrades, Chu's Tsang-chieh has included more and more characters. The fifth generation, released in 1985, included 60,000 characters.

"Chinese computer"

During the development of the Tsang-chieh method, Chu found that his invention was not only an input method, but also a character encoding method for computing systems. Unlike An Wang's encoding method of the time, or later methods such as Big5 and Unicode, the Tsang-chieh method does not sort characters by their usage frequency, stroke count, or radical, but is based on their composition, inspired by the "pictophonetic compounds" principle of Chinese. Chu therefore began to develop a theory (which he would later call the "Chinese DNA", "Alphabets of Chinese Language", or "Chinese character gene" theory). The theory states that the forms selected by Chu are the "genes" of Chinese; proper arrangement of these "genes" can provide all functions of the characters. The Tsang-chieh method as a character encoding is therefore very useful, since it contains not only an ordered set of characters, but also precise references to the shapes, pronunciations, and semantics of the characters. As such, the system is an efficient base for a variety of Chinese information technology: smart dictionaries; operating systems and application software; programming languages; hardware architectures for PCs and embedded systems; and even strong artificial intelligence.

In 1979, he invented a character generator program, which takes Tsang-chieh encoded data and dynamically generates Chinese characters for screen display. In the same year, Chu's team collaborated with the Acer company, and the program became incorporated in the firmware of a "Chinese computer". Later the generator was also used in the "Tsang-chieh controller board", which would enable an Apple II computer to display Chinese characters in its hi-res graphics mode. A particularly interesting "feature" of this early system was that it would also accept and generate characters not explicitly included in the codepage but implied by the rules of Tsang-chieh. Since then, Chu has held unique views on Chinese information technology.
He considered input using ordinary keyboards more feasible and compatible than speech recognition, handwriting recognition, or specialized keyboards. However, many of his other opinions have been at odds with consensus:

He uses nationalist rhetoric on the subject.
He values written Classical Chinese over the various forms of vernacular Chinese. He also values it over many synthetic languages and their writing systems in the world.
On the encoding issue, he considered the proposed Big5 codepage of 13,053 characters too small and too fixed.
On the display of Chinese, he believes that the job should be done by the "calculation" of a computer's central processing unit, while the use of lookup tables and storage units should be kept to a minimum.
He believes that Chinese information technology should take second-mover advantage and choose an alternative path from then-established Western theories. He also believes that the technology should be rooted in levels as basic and economical as possible. Instead of providing Chinese access at the operating system and application level on standard PC platforms, he believes it should be available at much lower levels, using specialized firmware and hardware, which can be used in a wide variety of products.
He also believes that programming languages containing syntax and tokens based on the (Classical) Chinese language are necessary.

(Image caption: Demonstration of the character generator Mingzhu's capability of generating characters according to their codes. None of the examples are included in Unicode. The first character is one used for a kind of soup in Xuzhou; the other characters were never recorded. Mingzhu was modified from Juzhen.)

In the early 1990s, when the Chinese version of Microsoft Windows 3.0 attempted to enter the Taiwanese market, Chu and some partners competed with it and advocated greater independence for Chinese information technology. Chu worked in Shenzhen with a group of developers and produced a software application for Chinese integration called "Juzhen", which stood up against this strong force. It was released to the public domain, and distributed through the Rexun magazine. Between Chu and the financially strong Microsoft, the odds were against the former. However, Chu's engine had the benefit of space: in Chu's engine, a font containing 13,095 characters took up at most a megabyte and fit snugly on a floppy disk, compared to the 3–5 megabytes required by competitors' products. This strong advantage of Chu's technology led a sizeable number of technology companies to initiate discussions with Chu for a transfer of technology rights. Soon after, Jinmei, Zangzhu, and other budget font makers swamped the market, forcing prices down and ensuring that every user could afford original copies of Chinese typefaces.

After the "Juzhen" system, Chu left Taiwan for Macau. In 1999, he was appointed vice chairman of Culturecom Corporation.

Seven years with Culturecom

From 1999, when Chu became a vice chairman of the Hong Kong- and Macau-based Culturecom Corporation, Chu's team cooperated with Culturecom, until Culturecom terminated the partnership in 2006. Several products and technologies were developed, resulting in a series of e-book devices under names such as 文昌 and 蒼頡. The core of the device is "Culturecom 1610", a RISC system-on-a-chip "Chinese CPU" that includes a character generator. The device also features a cholesteric ("Cholesterol") LCD, which saves electricity.
The device, similar to India's Simputer, features a simple architecture and low cost. Chu's team designed it as an affordable electronic textbook for the poor rural population. They also wished it to be the platform of a rural wireless network project named "eTown". However, as of 2006, these goals had not been realized. In 2002, some details of the product were released under the LGPL by the two parties. Although the device did not take off as expected, its technologies were employed by some other companies in their products, such as Kolin's i-library.

During this period, Chu's team was also interested in virtual cinematography, and released several feature-length animated films. Chu also elaborated further on his "Chinese DNA" theory. Using this theory as a basis, Chu's team claimed to be developing:

a system capable of automatically creating a movie from a written script
a method of interpreting I Ching predictions
a strong artificial intelligence natural language interface named "Little Hsin"

See also

Chinese BASIC
Transmeta

References

External links

Personal website
Mingzhu generator: Chu Bong-Foo's page, including the executable, source code, and instructions. Mingzhu is a Tsang-chieh character generator modified from Juzhen. It runs in the MS Windows "DOS prompt" and requires Microsoft Macro Assembler and Link.

1937 births
Living people
Taiwanese computer scientists
Taiwanese computer programmers
Taiwanese expatriates in the United States
People from Huanggang
Scientists from Hubei
Taiwanese people from Hubei
Date of birth missing (living people)
17698784
https://en.wikipedia.org/wiki/GNU%20Emacs
GNU Emacs
GNU Emacs is a free software text editor. It was created by GNU Project founder Richard Stallman. In common with other varieties of Emacs, GNU Emacs is extensible using a Turing-complete programming language. GNU Emacs has been called "the most powerful text editor available today". With proper support from the underlying system, GNU Emacs is able to display files in multiple character sets, and has been able to simultaneously display most human languages since at least 1999. Throughout its history, GNU Emacs has been a central component of the GNU Project and a flagship of the free software movement. GNU Emacs is sometimes abbreviated as GNUMACS, especially to differentiate it from other EMACS variants. The tag line for GNU Emacs is "the extensible self-documenting text editor".

History

In 1976, Stallman wrote the first Emacs ("Editor MACroS"), and in 1984, began work on GNU Emacs, to produce a free software alternative to the proprietary Gosling Emacs. GNU Emacs was initially based on Gosling Emacs, but Stallman's replacement of its Mocklisp interpreter with a true Lisp interpreter required that nearly all of its code be rewritten. This became the first program released by the nascent GNU Project. GNU Emacs is written in C and provides Emacs Lisp, also implemented in C, as an extension language. Version 13, the first public release, was made on March 20, 1985. The first widely distributed version of GNU Emacs was version 15.34, released later in 1985. Early versions of GNU Emacs were numbered as "1.x.x", with the initial digit denoting the version of the C core. The "1" was dropped after version 1.12, as it was thought that the major number would never change, and thus the major version skipped from "1" to "13". A new third version number was added to represent changes made by user sites. In the current numbering scheme, a number with two components signifies a release version, with development versions having three components.

GNU Emacs was later ported to the Unix operating system. It offered more features than Gosling Emacs, in particular a full-featured Lisp as its extension language, and soon replaced Gosling Emacs as the de facto Unix Emacs editor. Markus Hess exploited a security flaw in GNU Emacs's email subsystem in his 1986 cracking spree, in which he gained superuser access to Unix computers.

Although users commonly submitted patches and Elisp code to the net.emacs newsgroup, participation in GNU Emacs development was relatively restricted until 1999, and the project was used as an example of the "cathedral" development style in The Cathedral and the Bazaar. The project has since adopted a public development mailing list and anonymous CVS access. Development took place in a single CVS trunk until 2008 and today uses the Git DVCS.

Richard Stallman has remained the principal maintainer of GNU Emacs, but he has stepped back from the role at times. Stefan Monnier and Chong Yidong have overseen maintenance since 2008. On September 21, 2015, Monnier announced that he would be stepping down as maintainer effective with the feature freeze of Emacs 25. Longtime contributor John Wiegley was announced as the new maintainer on November 5, 2015.

Licensing

The terms of the GNU General Public License (GPL) state that the Emacs source code, including both the C and Emacs Lisp components, is freely available for examination, modification, and redistribution. Older versions of the GNU Emacs documentation appeared under an ad-hoc license that required the inclusion of certain text in any modified copy.
In the GNU Emacs user's manual, for example, this included instructions for obtaining GNU Emacs and Richard Stallman's essay The GNU Manifesto. The XEmacs manuals, which were inherited from older GNU Emacs manuals when the fork occurred, have the same license. Newer versions of the documentation use the GNU Free Documentation License with "invariant sections" that require the inclusion of the same documents and that the manuals proclaim themselves as GNU Manuals.

For GNU Emacs, like many other GNU packages, it remains policy to accept significant code contributions only if the copyright holder executes a suitable disclaimer or assignment of their copyright interest to the Free Software Foundation. Bug fixes and minor code contributions of fewer than 10 lines are exempt. This policy is in place so that the FSF can defend the software in court if its copyleft license is violated.

In 2011, it was noticed that GNU Emacs had been accidentally releasing some binaries without corresponding source code for two years, in opposition to the intended spirit of the GPL. Richard Stallman described this incident as "a very bad mistake", and it was promptly fixed. The FSF did not sue any downstream redistributors who unknowingly violated the GPL by distributing these binaries.

Using GNU Emacs

Commands

In its normal editing mode, GNU Emacs behaves like other text editors and allows the user to insert characters with the corresponding keys and to move the editing point with the arrow keys. Escape key sequences, or pressing the control key and/or the meta, alt, or super keys in conjunction with a regular key, produce modified keystrokes that invoke functions from the Emacs Lisp environment. Commands such as save-buffer and save-buffers-kill-emacs combine multiple modified keystrokes. Some GNU Emacs commands work by invoking an external program, such as ispell for spell-checking or the GNU Compiler Collection (gcc) for program compilation, parsing the program's output, and displaying the result in GNU Emacs. Emacs also supports "inferior processes": long-lived processes that interact with an Emacs buffer. This is used to implement shell mode, running a Unix shell as an inferior process, as well as read–eval–print loop (REPL) modes for various programming languages. Emacs' support for external processes makes it an attractive environment for interactive programming along the lines of Interlisp or Smalltalk. Users who prefer IBM Common User Access-style keys can use CUA mode, a package that originally was a third-party add-on but has been included in GNU Emacs since version 22.

Minibuffer

Emacs uses the "minibuffer", normally the bottommost line, to present status and request information, functions that would typically be performed by dialog boxes in most GUIs. The minibuffer holds information such as text to target in a search or the name of a file to read or save. When applicable, command-line completion is available using the tab and space keys.

File management and display

Emacs keeps text in data structures known as buffers. Buffers may or may not be displayed onscreen, and all buffer features are accessible both to an Emacs Lisp program and to the user interface. The user can create new buffers and dismiss unwanted ones, and many buffers can exist at the same time. There is no upper limit on the number of buffers Emacs allows, other than hardware memory limits. Advanced users may amass hundreds of open buffers of various types relating to their current work.
Emacs can be configured to save the list of open buffers on exit, and to reopen this list when it is restarted. Some buffers contain text loaded from text files, which the user can edit and save back to permanent storage. These buffers are said to be "visiting" files. Buffers also serve to display other data, such as the output of Emacs commands, dired directory listings, documentation strings displayed by the "help" library, and notification messages that in other editors would be displayed in a dialog box. Some of these notifications are displayed briefly in the minibuffer, and GNU Emacs provides a buffer that keeps a history of the most recent notifications of this type. When the minibuffer is used for output from Emacs, it is called the "echo area". Longer notifications are displayed in buffers of their own; the maximum length of messages that will be displayed in the minibuffer is configurable. Buffers can also serve as input and output areas for an external process such as a shell or REPL. Buffers which Emacs creates on its own are typically named with asterisks on each end, to distinguish them from user buffers. The list of open buffers is itself displayed in this type of buffer.

Most Emacs key sequences remain functional in any buffer. For example, the standard Ctrl-s isearch function can be used to search filenames in dired buffers, and the file list can be saved to a text file just as any other buffer. dired buffers can be switched to a writable mode, in which filenames and attributes can be edited textually; when the buffer is saved, the changes are written to the filesystem. This allows multiple files to be renamed using the search-and-replace features of Emacs. When built with image support, Emacs displays image files in buffers. Emacs is binary safe and 8-bit clean.

Emacs can split the editing area into separate non-overlapping sections called "windows", a feature that has been available since 1975, predating the graphical user interface in common use. In Emacs terminology, "windows" are similar to what other systems call "frames" or "panes": a rectangular portion of the program's display that can be updated and interacted with independently. Each Emacs window has a status bar called the "mode line", displayed by default at the bottom edge of the window. Emacs windows are available both in text-terminal and graphical modes and allow more than one buffer, or several parts of a buffer, to be displayed at once. Common applications are to display a dired buffer along with the contents of files in the current directory (there are special modes to make the file buffer follow the file highlighted in dired), to display the source code of a program in one window while another displays a shell buffer with the results of compiling the program, to run a debugger along with a shell buffer running the program, to work on code while displaying a man page or other documentation (possibly loaded over the World Wide Web using one of Emacs' built-in web browsers), or simply to display multiple files for editing at once, such as a header file along with its implementation file for C-based languages. In addition, there is follow-mode, a minor mode that chains windows to display non-overlapping portions of a buffer. Using follow-mode, a single file can be displayed in multiple side-by-side windows that update appropriately when scrolled. Emacs also supports "narrowing" a buffer to display only a portion of a file, with top/bottom-of-buffer navigation functionality and buffer size calculations reflecting only the selected range.
Emacs windows are tiled and cannot appear "above" or "below" their companions. Emacs can launch multiple "frames", which are displayed as individual windows in a graphical environment. On a text terminal, multiple frames are displayed stacked, filling the entire terminal, and can be switched using the standard Emacs commands.

Major modes

GNU Emacs can display or edit a variety of different types of text and adapts its behavior by entering add-on modes called "major modes". There are major modes for many different purposes, including editing ordinary text files and the source code of many markup and programming languages, as well as displaying web pages, directory listings, and other system info. Each major mode involves an Emacs Lisp program that extends the editor to behave more conveniently for the specified type of text. Major modes typically provide some or all of the following common features:

Syntax highlighting ("font lock"): combinations of fonts and colors, termed "faces", that differentiate between document elements such as keywords and comments.
Automatic indentation to maintain consistent formatting within a file.
The automatic insertion of elements required by the structure of the document, such as spaces, newlines, and parentheses.
Special editing commands, such as commands to jump to the beginning or the end of a function while editing a programming file, or commands to validate documents or insert closing tags while working with markup languages such as XML.

Minor modes

The use of "minor modes" enables further customization. A GNU Emacs editing buffer can use only one major mode at a time, but multiple minor modes can operate simultaneously. These may operate directly on documents, as in the way the major mode for the C programming language defines a separate minor mode for each of its popular indent styles, or they may alter the editing environment. Examples of the latter include a mode that adds the ability to undo changes to the window configuration and one that performs on-the-fly syntax checking. There is also a minor mode that allows multiple major modes to be used in a single file, for convenience when editing a document in which multiple programming languages are embedded.

"Batch mode"

GNU Emacs can be used as an interpreter for the Emacs Lisp language without displaying the text editor user interface. In batch mode, user configuration is not loaded, and the terminal interrupt characters C-c and C-z have their usual effect of exiting the program or suspending execution instead of invoking Emacs keybindings. GNU Emacs has command-line options to specify either a file to load and execute or an Emacs Lisp function to call. Emacs will start up, execute the passed-in file or function, print the results, then exit. The shebang line #!/usr/bin/emacs --script allows the creation of standalone scripts in Emacs Lisp. Batch mode is not an Emacs mode per se, but describes an alternate execution mode for the Emacs program.

Manuals

Apart from the built-in documentation, GNU Emacs has a detailed manual. An electronic copy of the GNU Emacs Manual, written by Richard Stallman, is bundled with GNU Emacs and can be viewed with the built-in Info browser. Two additional manuals, the Emacs Lisp Reference Manual by Bil Lewis, Richard Stallman, and Dan Laliberte, and An Introduction to Programming in Emacs Lisp by Robert Chassell, are included. All three manuals are also published in book form by the Free Software Foundation.
The XEmacs manual is similar to the GNU Emacs Manual, from which it forked at the same time that the XEmacs software forked from GNU Emacs.

Internationalization

GNU Emacs has support for many alphabets, scripts, writing systems, and cultural conventions, and provides spell-checking for many languages by calling external programs such as ispell. Version 24 added support for bidirectional text and left-to-right and right-to-left writing direction for languages such as Arabic, Persian, and Hebrew. Many character encoding systems, including UTF-8, are supported. GNU Emacs has used UTF-8 for its internal encoding since GNU Emacs 23, while prior versions used their own encoding internally and performed conversion upon load and save. The internal encoding used by XEmacs is similar to that of GNU Emacs but differs in details.

The GNU Emacs user interface originated in English and, with the exception of the beginners' tutorial, has not been translated into any other language. A subsystem called Emacspeak enables visually impaired and blind users to control the editor through audio feedback.

Extensibility

The behavior of GNU Emacs can be modified and extended almost without limit by incorporating Emacs Lisp programs that define new commands, new buffer modes, new keymaps, new command-line options, and so on. Many extensions providing user-facing functionality define a major mode (either for a new file type or to build a non-text-editing user interface); others define only commands or minor modes, or provide functions that enhance another extension. Many extensions are bundled with the GNU Emacs installation; others used to be downloaded as loose files (the Usenet newsgroup gnu.emacs.sources was a traditional means of distribution), but since version 24 there has been a system of managed packages and package download sites, with a built-in package manager (itself an extension) to download, install, and keep them up to date. The list of available packages is itself displayed in an Emacs buffer with its own major mode. Notable examples include:

AUCTeX, tools to edit and process TeX and LaTeX documents
dired, a file manager
Dissociated press, a Racter-like text generator
Doctor, an implementation of ELIZA
Dunnet, a text adventure
Emacs Web Wowser (eww), a web browser
Emacs Speaks Statistics (ESS), modes for editing statistical languages like R and SAS
ERC, an IRC client
Eshell, a command-line shell written in Emacs Lisp. This allows closer integration with the Emacs environment than standard shells such as bash or PowerShell, which are also available from within Emacs. For example, in Eshell, Elisp functions are available as shell commands, and output from Unix commands can be redirected to an Emacs buffer.
Exwm, an X window manager allowing X11 apps to be run in an Emacs window
Gnus, a full-featured news client (newsreader) and email client, and early evidence for Zawinski's Law
Magit, for working with the version control system Git
MULtilingual Enhancement to Emacs (MULE), which allows editing of text in multiple languages in a manner somewhat analogous to Unicode
Org-mode, for keeping notes, maintaining various types of lists, planning and measuring projects, and composing documents in many formats (such as PDF, HTML, or OpenDocument formats). There are static site generators using Org-mode, as well as an extension, Babel, allowing it to be used for literate programming.
Planner, a personal information manager
rcirc, an IRC client
Superior Lisp Interaction Mode for Emacs (SLIME), which extends GNU Emacs into a development environment for Common Lisp. With SLIME (written in Emacs Lisp), the GNU Emacs editor communicates with a Common Lisp system (using the SWANK backend) over a special communication protocol and provides such tools as a read–eval–print loop, a data inspector, and a debugger.
Texinfo (Info), an online help browser
Zone, a display hack mode incorporating various text effects

Performance

GNU Emacs often ran noticeably slower than rival text editors on the systems on which it was first implemented, because the loading and interpreting of its Lisp-based code incurred a performance overhead. Modern computers are powerful enough to run GNU Emacs without slowdowns, but versions prior to 19.29 (released in 1995) could not edit files larger than 8 MB. The file size limit was raised in successive versions, and 32-bit versions after GNU Emacs 23.2 can edit files up to 512 MB in size. Emacs compiled on a 64-bit machine can handle much larger buffers.

Platforms

GNU Emacs is one of the most-ported non-trivial computer programs and runs on a wide variety of operating systems, including DOS, Windows, and OpenVMS. Support for some "obsolete platforms", such as VMS and most commercial Unix variants, was removed in Emacs 23.1. It is available for most Unix-like operating systems, such as Linux, the various BSDs, Solaris, AIX, HP-UX, and macOS, and is often included with their system installation packages. Native ports of GNU Emacs exist for Android and Nokia's Maemo.

GNU Emacs runs both on text terminals and in graphical user interface (GUI) environments. On Unix-like operating systems, GNU Emacs can use the X Window System to produce its GUI, either directly using Athena widgets or by using a widget toolkit such as Motif, LessTif, or GTK+. GNU Emacs can also use the graphics systems native to macOS and Windows to provide menu bars, toolbars, scroll bars, and context menus conforming more closely to each platform's look and feel.

Forks

XEmacs

Lucid Emacs, based on an early version of GNU Emacs 19, was developed beginning in 1991 by Jamie Zawinski and others at Lucid Inc. One of the best-known forks in free software development occurred when the codebases of the two Emacs versions diverged and the separate development teams ceased efforts to merge them back into a single program. After Lucid filed for bankruptcy, Lucid Emacs was renamed XEmacs, and it remains the second most popular variety of Emacs, after GNU Emacs. XEmacs development has slowed, with the most recent stable version, 21.4.22, released in January 2009, while GNU Emacs has implemented many formerly XEmacs-only features. This has led some users to proclaim XEmacs' death.

Other forks of GNU Emacs

Other forks, less known than XEmacs, include:

Meadow, a Japanese version for Microsoft Windows
SXEmacs, Steve Youngs' fork of XEmacs
Aquamacs, a version which focuses on integrating with the Apple Macintosh user interface
Remacs, a port of GNU Emacs to the Rust programming language

Release history

Changes in each Emacs release are listed in a NEWS file distributed with Emacs. Changes brought about by downgrading to the previous release are listed in an "Antinews" file, often with some snarky commentary on why this might be desirable.
References

Further reading

External links

Unofficial Emacs wiki
Emacs - Free Software Directory

Emacs
Free file comparison tools
Free integrated development environments
Free software programmed in C
Free software programmed in Lisp
Free text editors
Hex editors
Linux integrated development environments
Linux text editors
MacOS text editors
OpenVMS text editors
Software using the GPL license
Text editors
Unix text editors
Windows text editors
2665298
https://en.wikipedia.org/wiki/Sysctl
Sysctl
sysctl is a software utility of some Unix-like operating systems that reads and modifies the attributes of the system kernel, such as its version number, maximum limits, and security settings. It is available both as a system call for compiled programs and as an administrator command for interactive use and scripting. Linux additionally exposes sysctl as a virtual file system.

BSD

In BSD, these parameters are generally objects in a management information base (MIB) that describe tunable limits such as the size of a shared memory segment, the number of threads the operating system will use as an NFS client, or the maximum number of processes on the system; or that describe, enable, or disable behaviors such as IP forwarding, security restrictions on the superuser (the "securelevel"), or debugging output.

In OpenBSD and DragonFly BSD, sysctl is also used as the transport layer for the hw.sensors framework for hardware monitoring, whereas NetBSD uses the ioctl system call for its sysmon envsys counterpart. sysctl and ioctl are the two system calls that can be used to add extra functionality to the kernel without adding yet another system call; for example, in 2004, with OpenBSD 3.6, when the tcpdrop utility was introduced, sysctl was used as the underlying system call. In FreeBSD, although there is no sensors framework, individual temperature and other sensors are still commonly exported through the sysctl tree via Newbus, as is the case with the aibs(4) driver that is available in all four BSD systems, including FreeBSD.

In BSD, a system call or system call wrapper is usually provided for use by programs, as well as an administrative program and a configuration file (for setting the tunable parameters when the system boots). This feature first appeared in 4.4BSD. It has the advantage over hardcoded constants that changes to the parameters can be made dynamically without recompiling the kernel.

Historically, although kernel variables themselves could be modified through sysctl, the elements comprising the MIB of the sysctl tree were hardcoded at compile time, and as of 2019 this is mostly still the case in OpenBSD (with some exceptions, like hw.sensors, which manages and provides its own dynamic subtree). FreeBSD has had "sysctl internal magic" for dynamic sysctl tree management since 1995; NetBSD has had its own implementation of a dynamic sysctl tree since December 2003.

Linux

In Linux, the sysctl interface mechanism is also exported as part of procfs under the /proc/sys directory (not to be confused with the /sys directory, the mount point of sysfs). This difference means checking the value of some parameter requires opening a file in a virtual file system, reading its contents, parsing them, and closing the file. The sysctl system call does exist on Linux, but it has been deprecated and does not have a wrapper function in glibc; it is usually unavailable because many distributions configure the kernel without CONFIG_SYSCTL_SYSCALL, so it is not recommended for use.

Examples

When IP forwarding is enabled, the operating system kernel acts as a router. In FreeBSD, NetBSD, OpenBSD, DragonFly BSD, and Darwin/Mac OS X, the net.inet.ip.forwarding parameter can be set to 1 to enable this behavior. In Linux, the parameter is called net.ipv4.ip_forward. In most systems, a command of the form sysctl variable=value will enable a certain behavior. This will persist until the next reboot. If the behavior should be enabled whenever the system boots, the corresponding variable=value line can be added to the file /etc/sysctl.conf.
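As a concrete illustration of these interfaces (a minimal sketch; the parameter names are the standard IP-forwarding ones discussed above, and their availability varies by system and version), the command-line utility can read a single parameter, list all parameters, or set one:

sysctl net.inet.ip.forwarding      # read one parameter (BSD name)
sysctl -a                          # list all parameters and their values
sysctl net.inet.ip.forwarding=1    # set a parameter until the next reboot

A compiled program can read the same information. On Linux, where the tree is exposed under /proc/sys, reading a parameter is an ordinary file read, as in this C++ sketch (assuming a Linux system with procfs mounted):

#include <fstream>
#include <iostream>
#include <string>

int main() {
    // net.ipv4.ip_forward is exposed as /proc/sys/net/ipv4/ip_forward;
    // the dots of the parameter name become path separators.
    std::ifstream param("/proc/sys/net/ipv4/ip_forward");
    std::string value;
    if (param >> value)
        std::cout << "net.ipv4.ip_forward = " << value << '\n';
    else
        std::cerr << "could not read the parameter\n";
}

On BSD systems that provide sysctlbyname(3), such as FreeBSD and macOS, the equivalent lookup is a library call with the dotted name, for example sysctlbyname("net.inet.ip.forwarding", ...), rather than a file read.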
Additionally, some sysctl variables cannot be modified after the system has booted. These variables (depending on the variable and on the version and flavor of BSD) need either to be set statically in the kernel at compile time or to be set in /boot/loader.conf.

See also

hw.sensors
ioctl

References

External links

sysctl-explorer.net – an initiative to facilitate access to Linux's sysctl reference documentation

Application programming interfaces
Berkeley Software Distribution
DragonFly BSD
FreeBSD
Linux
NetBSD
OpenBSD
Operating system technology
System calls
Unix
Unix process- and task-management-related software
3713
https://en.wikipedia.org/wiki/Bjarne%20Stroustrup
Bjarne Stroustrup
Bjarne Stroustrup (born 30 December 1950) is a Danish computer scientist, most notable for the creation and development of the C++ programming language. He is a visiting professor at Columbia University, and works at Morgan Stanley as a managing director in New York.

Early life and education

Stroustrup was born in Aarhus, Denmark. His family was working class, and he went to the local schools. He attended Aarhus University from 1969 to 1975 and graduated with a master's degree in mathematics and computer science. His interests focused on microprogramming and machine architecture. He learned the fundamentals of object-oriented programming from its inventor, Kristen Nygaard, who frequently visited Aarhus. In 1979, he received a PhD in computer science from the University of Cambridge, where he was supervised by David Wheeler. His thesis concerned communication in distributed computer systems.

Career

In 1979, Stroustrup began his career as a member of technical staff in the Computer Science Research Center of Bell Labs in Murray Hill, New Jersey, USA. There, he began his work on C++ and programming techniques. Stroustrup was the head of AT&T Bell Labs' Large-scale Programming Research department from its creation until late 2002. In 1993, he was made a Bell Labs fellow, and in 1996, an AT&T Fellow. From 2002 to 2014, Stroustrup was the College of Engineering Chair Professor in Computer Science at Texas A&M University. In 2011, he was made a University Distinguished Professor. As of January 2014, Stroustrup is a technical fellow and managing director in the technology division of Morgan Stanley in New York City and a visiting professor in computer science at Columbia University.

C++

Stroustrup is best known for his work on C++. In 1979, he began developing C++ (initially called "C with Classes"). In his own words, he "invented C++, wrote its early definitions, and produced its first implementation [...] chose and formulated the design criteria for C++, designed all its major facilities, and was responsible for the processing of extension proposals in the C++ standards committee." C++ was made generally available in 1985. For non-commercial use, the source code of the compiler and the foundation libraries was available for the cost of shipping (US$75); this was before Internet access was common. Stroustrup also published a textbook for the language in 1985, The C++ Programming Language.

The key language-technical areas of contribution of C++, several of which are illustrated in the sketch after this list, are:

A static type system with equal support for built-in types and user-defined types (which requires control of the construction, destruction, copying, and movement of objects, and operator overloading).
Value and reference semantics.
Systematic and general resource management (RAII): constructors, destructors, and exceptions relying on them.
Support for efficient object-oriented programming: based on the Simula model, with statically checked interfaces, multiple inheritance, and an efficient implementation based on virtual function tables.
Support for flexible and efficient generic programming: templates with specialization and concepts.
Support for compile-time programming: template metaprogramming and compile-time evaluated functions ("constexpr functions").
Direct use of machine and operating system resources.
Concurrency support through libraries (where necessary, implemented using intrinsics).
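The sketch below is illustrative only (not drawn from Stroustrup's own texts); it shows three of the areas listed above in standard C++: resource management with RAII, generic programming with a template, and compile-time evaluation with a constexpr function.

#include <cstdio>
#include <vector>

// RAII: the constructor acquires the resource and the destructor releases it,
// even if an exception propagates through the enclosing scope.
class File {
    std::FILE* f;
public:
    explicit File(const char* name) : f(std::fopen(name, "r")) {}
    ~File() { if (f) std::fclose(f); }
    File(const File&) = delete;            // copying is controlled, as noted above
    File& operator=(const File&) = delete;
    bool is_open() const { return f != nullptr; }
};

// Generic programming: one template definition serves any addable element type.
template<typename T>
T sum(const std::vector<T>& v) {
    T total{};
    for (const T& x : v) total += x;
    return total;
}

// Compile-time programming: the compiler can evaluate this during translation.
constexpr int square(int n) { return n * n; }
static_assert(square(9) == 81, "evaluated at compile time");

int main() {
    File f("example.txt");                 // closed automatically at end of scope
    if (!f.is_open()) std::puts("example.txt not found");
    std::vector<int> v{1, 2, 3};
    std::printf("%d %d\n", sum(v), square(7));
}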
Stroustrup documented his principles guiding the design of C++ and the evolution of the language in his 1994 book, The Design and Evolution of C++, and three papers for ACM's History of Programming Languages conferences. Stroustrup was a founding member of the C++ standards committee (from 1989, it was an ANSI committee and from 1991 an ISO committee) and has remained an active member ever since. For 24 years he chaired the subgroup chartered to handle proposals for language extensions (the Evolution Working Group). Awards and honors Selected honors 2018: The Charles Stark Draper Prize from The US National Academy of Engineering for conceptualizing and developing the C++ programming language. 2018: The Computer Pioneer Award from The IEEE Computer Society for bringing object-oriented programming and generic programming to the mainstream with his design and implementation of the C++ programming language. 2017: The Faraday Medal from the IET (Institution of Engineering and Technology) for significant contributions to the history of computing, in particular pioneering the C++ programming language. 2010: The University of Aarhus's Rigmor og Carl Holst-Knudsens Videnskabspris. 2005: The William Procter Prize for Scientific Achievement from Sigma Xi (the scientific research society), as the first computer scientist to receive it. 1993: The ACM Grace Murray Hopper award for his early work laying the foundations for the C++ programming language. Based on those foundations and Dr. Stroustrup's continuing efforts, C++ has become one of the most influential programming languages in the history of computing. Fellowships Member of the National Academy of Engineering in 2004. Fellow of the Association for Computing Machinery (ACM) in 1994. Fellow of the Institute of Electrical and Electronics Engineers (IEEE) in 1994. Fellow of the Computer History Museum for his invention of the C++ programming language in 2015. Honorary Fellow of Churchill College, Cambridge in 2017. Honorary doctorates and professorships He was awarded an honorary doctorate from the University Carlos III, Spain, in 2019. Stroustrup has been an honorary doctor of ITMO University since 2013. Honorary Professor in Object Oriented Programming Languages, Department of Computer Science, University of Aarhus, 2010. Publications Stroustrup has written or co-written a number of publications, including the books: A Tour of C++ (1st and 2nd edition) Programming: Principles and Practice Using C++ The C++ Programming Language (1st, 2nd, 3rd, and 4th edition) The Design and Evolution of C++ The Annotated C++ Reference Manual. In all, these books have been translated into 21 languages. More than 100 academic articles, including: B. Stroustrup: Thriving in a crowded and changing world: C++ 2006–2020. ACM/SIGPLAN History of Programming Languages conference, HOPL-IV. London. June 2020. B. Stroustrup: Evolving a language in and for the real world: C++ 1991–2006. ACM HOPL-III. June 2007. B. Stroustrup: What should we teach software developers? Why? CACM. January 2010. Gabriel Dos Reis and Bjarne Stroustrup: A Principled, Complete, and Efficient Representation of C++. Journal of Mathematics in Computer Science, Volume 5, Issue 3 (2011), pp. 335–356. Special issue on Polynomial System Solving, System and Control, and Software Science. Gabriel Dos Reis and Bjarne Stroustrup: General Constant Expressions for System Programming Languages. SAC-2010. The 25th ACM Symposium on Applied Computing. March 2010. Y. Solodkyy, G. Dos Reis, and B. 
Stroustrup: Open and Efficient Type Switch for C++. Proc. OOPSLA'12. Peter Pirkelbauer, Yuriy Solodkyy, Bjarne Stroustrup: Design and Evaluation of C++ Open Multi-Methods. In Science of Computer Programming (2009). Elsevier Journal. June 2009. Gabriel Dos Reis and Bjarne Stroustrup: Specifying C++ Concepts. POPL06. January 2006. B. Stroustrup: Exception Safety: Concepts and Techniques. In Springer Verlag Lecture Notes in Computer Science, LNCS-2022. ISSN 0302-9743. April 2001. B. Stroustrup: Generalizing Overloading for C++2000. Overload, Issue 25. 1 April 1998. B. Stroustrup: Why C++ isn't just an Object-Oriented Programming Language. Addendum to OOPSLA'95 Proceedings. OOPS Messenger, vol 6 no 4, pp 1–13. October 1995. B. Stroustrup: A History of C++: 1979–1991. Proc ACM History of Programming Languages conference (HOPL-2). ACM Sigplan Notices. Vol 28 No 3, pp 271–298. March 1993. Also in History of Programming Languages (editors T.J. Bergin and R.G. Gibson), Addison-Wesley, 1996. B. Stroustrup: What is Object-Oriented Programming? (1991 revised version). Proc. 1st European Software Festival. February 1991. B. Stroustrup: Data Abstraction in C. Bell Labs Technical Journal, vol 63, no 8 (Part 2), pp 1701–1732. October 1984. B. Stroustrup: Classes: An Abstract Data Type Facility for the C Language. Sigplan Notices, January 1982. More than a hundred technical reports for the C++ standards committee (WG21) References External links 1950 births Aarhus University alumni Fellows of Churchill College, Cambridge C++ C++ people Columbia School of Engineering and Applied Science faculty Danish computer programmers Danish computer scientists Danish expatriates in the United States Fellow Members of the IEEE Fellows of the Association for Computing Machinery Grace Murray Hopper Award laureates Living people Members of the United States National Academy of Engineering People from Aarhus People from Watchung, New Jersey Programming language designers Scientists at Bell Labs Texas A&M University faculty
43967257
https://en.wikipedia.org/wiki/Self-XSS
Self-XSS
Self-XSS (self cross-site scripting) is a social engineering attack used to gain control of victims' web accounts. In a Self-XSS attack, the victim unknowingly runs malicious code in their own web browser, thus exposing personal information to the attacker, a kind of vulnerability known as cross-site scripting. Overview Self-XSS operates by tricking users into copying and pasting malicious content into their browsers' web developer console. Usually, the attacker posts a message claiming that, by copying and running certain code, the user will be able to hack another user's account. In fact, the code allows the attacker to hijack the victim's account. History and mitigation In the past, a very similar attack took place in which users were tricked into pasting malicious JavaScript into their address bar. When browser vendors stopped this by preventing JavaScript from being easily run from the address bar, attackers started using Self-XSS in its current form. Web browser vendors and web sites have taken steps to mitigate this attack. Firefox and Google Chrome have both begun implementing safeguards to warn users about Self-XSS attacks. Facebook and others now display a warning message when users open the web developer console, and they link to pages explaining the attack in detail. Etymology The "self" part of the name comes from the fact that the user is attacking themselves. The "XSS" part of the name comes from the abbreviation for cross-site scripting, because both attacks result in malicious code running on a legitimate site. However, the attacks don't have much else in common, because XSS is an attack against the website itself (which users cannot protect themselves against, but which the site operator can fix by making their site more secure), whereas Self-XSS is a social engineering attack against the user (which savvy users can protect themselves against, but which the site operator can do nothing about). References Further reading Social engineering (computer security) Web security exploits
625507
https://en.wikipedia.org/wiki/Windows%20Calculator
Windows Calculator
Windows Calculator is a software calculator developed by Microsoft and included in Windows. It has four modes: standard, scientific, programmer, and graphing. The standard mode includes a number pad and buttons for performing arithmetic operations. The scientific mode takes this a step further and adds exponents and trigonometric functions, and programmer mode allows the user to perform operations related to computer programming. More recently, a graphing mode was added to the Calculator, allowing users to graph equations on a coordinate plane. The Windows Calculator is one of a few applications that have been bundled in all versions of Windows, starting with Windows 1.0. Since then, the calculator has been upgraded with various capabilities. In addition, the calculator has been included with Windows Phone and Xbox One. History A simple arithmetic calculator was first included with Windows 1.0. In Windows 3.0, a scientific mode was added, which included exponents and roots, logarithms, factorial-based functions, trigonometry (supporting radian, degree and gradian angles), base conversions (2, 8, 10, 16), logic operations, and statistical functions such as single-variable statistics and linear regression. Windows 9x Up to and including Windows 95, the calculator used IEEE 754-1985 double-precision floating point, and the highest number it could represent was 2^1024, which is slightly above 10^308 (~1.80 × 10^308). In Windows 98 and later, it uses an arbitrary-precision arithmetic library, replacing the standard IEEE floating point library. It offers bignum precision for basic operations (addition, subtraction, multiplication, division) and 32 digits of precision for advanced operations (square root, transcendental functions). The largest value that can be represented on the Windows Calculator is currently and the smallest is . (Also, the ! key calculates the gamma function, which is defined over all real numbers except the negative integers.) Windows 2000, XP and Vista In Windows 2000, digit grouping was added, and degree and base settings were added to the menu bar. The calculators of Windows XP and Vista were able to calculate using numbers beyond 10^10000, but calculating with such numbers (e.g. 10^2^2^2^2^2^2^2...) increasingly slows the calculator down and makes it unresponsive until the calculation has completed. These are the last versions of Windows Calculator in which calculating with binary/decimal/hexadecimal/octal numbers is part of scientific mode. In Windows 7, those functions were moved to programmer mode, a new separate mode that co-exists with scientific mode. Windows 7 In Windows 7, separate programmer, statistics, unit conversion, date calculation, and worksheets modes were added. Tooltips were removed. Furthermore, Calculator's interface was revamped for the first time since its introduction. The base conversion functions were moved to the programmer mode and statistics functions were moved to the statistics mode. Switching between modes does not preserve the current number, clearing it to 0. The highest number is now limited to 10^10000 again. In every mode except programmer mode, one can see the history of calculations. The app was redesigned to accommodate multi-touch. Standard mode behaves as a simple checkbook calculator; entering the sequence 6 * 4 + 12 / 4 - 4 * 5 gives the answer 25. In scientific mode, order of operations is followed while doing calculations (multiplication and division are done before addition and subtraction), which means 6 * 4 + 12 / 4 - 4 * 5 = 7. 
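The difference between the two evaluation models can be sketched in a few lines of TypeScript (an illustrative sketch only, not Calculator's actual code; the helper evaluateImmediate is a hypothetical name):

// Immediate-execution evaluation, as in standard mode: each operator is
// applied to the running total as soon as the next operand is entered.
function evaluateImmediate(tokens: (number | string)[]): number {
  let acc = tokens[0] as number;
  for (let i = 1; i < tokens.length; i += 2) {
    const op = tokens[i] as string;
    const operand = tokens[i + 1] as number;
    if (op === "+") acc += operand;
    else if (op === "-") acc -= operand;
    else if (op === "*") acc *= operand;
    else if (op === "/") acc /= operand;
  }
  return acc;
}

console.log(evaluateImmediate([6, "*", 4, "+", 12, "/", 4, "-", 4, "*", 5])); // 25
console.log(6 * 4 + 12 / 4 - 4 * 5); // 7: TypeScript itself applies operator precedence, as scientific mode does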
In programmer mode, a number entered in decimal has lower and upper limits that depend on the data type, and must always be an integer. The number is treated as a signed n-bit integer when converting from hexadecimal, octal, or binary mode. On the right of the main Calculator, one can add a panel with date calculation, unit conversion and worksheets. Worksheets allow one to calculate the result of a chosen field based on the values of other fields. Pre-defined templates include calculating a car's fuel economy (mpg and L/100 km), a vehicle lease, and a mortgage. In pre-beta versions of Windows 7, Calculator also provided a Wages template. Windows 8.1 While the traditional Calculator is still included with Windows 8.1, a Metro-style Calculator is also present, featuring a full-screen interface as well as normal, scientific, and conversion modes. Windows 10 The Calculator in non-LTSC editions of Windows 10 is a Universal Windows Platform app. In contrast, Windows 10 LTSC (which does not include universal Windows apps) includes the traditional calculator, which is now named . Both calculators provide the features of the traditional calculator included with Windows 7, such as unit conversions for volume, length, weight, temperature, energy, area, speed, time, power, data, pressure and angle, and the history list, which the user can clear. Both the universal Windows app and LTSC's register themselves with the system as handlers of a '' pseudo-protocol. This registration is similar to that performed by any other well-behaved application when it registers itself as a handler for a filetype (e.g. ) or protocol (e.g. ). All Windows 10 editions (both LTSC and non-LTSC) continue to have a , which is now just a stub that launches (via ShellExecute) the handler associated with the '' pseudo-protocol. As with any other protocol or filetype, when there are multiple handlers to choose from, users are free to choose the handler they prefer, either via the classic control panel ('Default programs' settings), the immersive UI settings ('Default Apps' settings), or from the command prompt via . In the Windows 10 Fall Creators Update, a currency converter mode was added to Calculator. On 6 March 2019, Microsoft released the source code for Calculator on GitHub under the MIT License. Features By default, Calculator runs in standard mode, which resembles a four-function calculator. More advanced functions are available in scientific mode, including logarithms, numerical base conversions, some logical operators, operator precedence, radian, degree and gradian support, as well as simple single-variable statistical functions. It does not provide support for user-defined functions, complex numbers, storage variables for intermediate results (other than the classic accumulator memory of pocket calculators), automated polar–Cartesian coordinate conversion, or two-variable statistics. Calculator supports keyboard shortcuts; all Calculator features have an associated keyboard shortcut. Calculator in programmer mode cannot accept or display a number larger than a signed QWORD (16 hexadecimal digits/64 bits). The largest number it can handle is therefore 0x7FFFFFFFFFFFFFFF (decimal 9,223,372,036,854,775,807). Any calculations in programmer mode which exceed this limit will overflow, even if they would succeed in other modes. In particular, scientific notation is not available in this mode. 
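The 64-bit boundary can be illustrated with TypeScript's BigInt (a sketch of the wrap-around behaviour at that limit, not Calculator's implementation):

// Programmer mode stores values as a signed QWORD (64-bit integer).
// BigInt.asIntN(64, x) reproduces the wrap-around at that boundary.
const QWORD_MAX = 0x7FFFFFFFFFFFFFFFn; // 9,223,372,036,854,775,807
console.log(BigInt.asIntN(64, QWORD_MAX));      // 9223372036854775807n
console.log(BigInt.asIntN(64, QWORD_MAX + 1n)); // -9223372036854775808n: the overflow wraps negative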
Issues Some advanced operations suffer from catastrophic cancellation: for example, sqrt(4) - 2 evaluates to -8.1648465955514287168521180122928e-39 rather than 0, because the slightly inexact result of the square root is exposed by the subsequent subtraction. Older versions of the universal Calculator in non-LTSC editions of Windows 10 ignored a regional number format (set in the Region control panel) that differed from the app's display language; for example, if the app's language was English (United States) but Windows's regional format was set to something else, numbers were still formatted according to the app's language. Calculator Plus Calculator Plus is a separate application for Windows XP and Windows Server 2003 users that adds a 'Conversion' mode over the Windows XP version of the Calculator. The 'Conversion' mode supports unit conversion and currency conversion. Currency exchange rates can be updated using the built-in update feature, which downloads exchange rates from the European Central Bank. See also Formula calculator List of formerly proprietary software Microsoft Mathematics Power Calculator References External links Windows Calculator on Microsoft Store Source code on GitHub Microsoft Calculator Plus 1985 software Formerly proprietary software Free and open-source software Mathematical software Microsoft free software Software calculators Software using the MIT license Universal Windows Platform apps Windows components Windows-only free software Xbox One software Windows Phone software
543884
https://en.wikipedia.org/wiki/Generative%20music
Generative music
Generative music is a term popularized by Brian Eno to describe music that is ever-different and changing, and that is created by a system. Historical background In 1995, while working with SSEYO's Koan software (built by Tim Cole and Pete Cole, who later evolved it into Noatikl and then Wotja), Brian Eno used the term "generative music" to describe any music that is ever-different and changing, created by a system. The term has since gone on to be used to refer to a wide range of music, from entirely random music mixes created by multiple simultaneous CD playback, through to live rule-based computer composition. Koan was SSEYO's first real-time music generation system, developed for the Windows platform. Work on Koan was started in 1990, and the software was first released to the public in 1994. In 1995 Brian Eno started working with SSEYO's Koan Pro software, work which led to the 1996 publication of his title 'Generative Music 1 with SSEYO Koan Software'. In 2007 SSEYO evolved Koan into what became Intermorphic Noatikl, and eventually Noatikl itself evolved into Wotja; Wotja X was launched in 2018 for all of iOS, macOS, Windows and Android. Eno's early relationship with SSEYO Koan and Intermorphic co-founder Tim Cole was captured and published in his 1995 diary A Year with Swollen Appendices. Software Many software programs have been written to create generative music. FractMus, developed by Gustavo Díaz-Jerez, is a real-time algorithmic music generator. Nodal (2007–present), a graph-based generative composition system for real-time MIDI sequence generation (for macOS and Windows) Bloom, developed in 2008 by Peter Chilvers together with Brian Eno for the iPhone and iPod Touch. Some modern music games have also been considered generative in character. Theory There are four primary perspectives on generative music (Wooller, R. et al., 2005) (reproduced with permission): Linguistic/structural Music composed from analytic theories that are so explicit as to be able to generate structurally coherent material (Loy and Abbott 1985; Cope 1991). This perspective has its roots in the generative grammars of language (Chomsky 1956) and music (Lerdahl and Jackendoff 1983), which generate material with a recursive tree structure. Interactive/behavioural Music generated by a system component that has no discernible musical inputs. That is, "not transformational" (Rowe 1991; Lippe 1997:34; Winkler 1998). The Wotja software by Intermorphic, and the Koan software by SSEYO used by Brian Eno to create Generative Music 1, are both examples of this approach. Creative/procedural Music generated by processes that are designed and/or initiated by the composer. Steve Reich's It's Gonna Rain and Terry Riley's In C are examples of this (Eno 1996). Biological/emergent Non-deterministic music (Biles 2002), or music that cannot be repeated, for example, ordinary wind chimes (Dorin 2001). This perspective comes from the broader generative art movement. It revolves around the idea that music, or sounds, may be "generated" by a musician "farming" parameters within an ecology, such that the ecology perpetually produces different variations based on the parameters and algorithms used. An example of this technique is Joseph Nechvatal's Viral symphOny: a collaborative electronic noise music symphony created between 2006 and 2008 using custom artificial life software based on a viral model. 
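As a toy illustration of such a rule-based system (illustrative only, and not drawn from Koan, Noatikl or Wotja), the following TypeScript sketch yields an ever-different stream of notes from a fixed scale and a simple random-walk rule:

// A minimal generative system: a random walk over a pentatonic scale.
// Each run produces a different, in principle endless, sequence of notes.
const scale = ["C", "D", "E", "G", "A"]; // C major pentatonic
function* melody(): Generator<string> {
  let degree = 0;
  while (true) {
    degree += Math.random() < 0.5 ? -1 : 1;                   // step up or down
    degree = Math.max(0, Math.min(scale.length - 1, degree)); // stay within the scale
    yield scale[degree];
  }
}

const notes = melody();
for (let n = 0; n < 8; n++) console.log(notes.next().value);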
Other notes Brian Eno, who coined the term generative music, has used generative techniques on many of his works, starting with Discreet Music (1975) up to and including (according to Sound on Sound, October 2005) Another Day on Earth. His works, lectures, and interviews on the subject have done much to promote generative music in the avant-garde music community. Eno used SSEYO's Koan generative music system (created by Pete Cole and Tim Cole of Intermorphic) to create his hybrid album Generative Music 1 (published by SSEYO and Opal Arts in April 1996), which is probably his first public use of the term generative music. Lerdahl and Jackendoff's publication described a generative grammar for homophonic tonal music, based partially on a Schenkerian model. While originally intended for analysis, significant research into automation of this process in software is being carried out by Keiji Hirata and others. In It's Gonna Rain, an early work by contemporary composer Steve Reich, overlapping tape loops of the spoken phrase "it's gonna rain" are played at slightly different speeds, generating different patterns through phasing. A limited form of generative music was attempted successfully in 1968 by members of the UK electronic music act Unit Delta Plus (Delia Derbyshire, Brian Hodgson and Peter Zinovieff). However, the approach would only be popularized later. See also Generative art Algorithmic composition Cellular automaton Change ringing Computer-generated music Interactive music Live coding List of music software Musikalisches Würfelspiel Footnotes References Artística de Valencia, After The Net, 5 – 29 June 2008, Valencia, Spain: catalogue: Observatori 2008: After The Future, p. 80 Biles, A. 2002a. GenJam in Transition: from Genetic Jammer to Generative Jammer. In International Conference on Generative Art, Milan, Italy. Chomsky, N. 1956. Three models for the description of language. IRE Transactions on Information Theory, 2: 113-124. Collins, N. 2008. The analysis of generative music programs. Organised Sound, 13(3): 237–248. Cope, D. 1991. Computers and musical style. Madison, Wis.: A-R Editions. Dorin, A. 2001. Generative processes and the electronic arts. Organised Sound, 6 (1): 47-53. Eno, B. 1996. Generative Music. http://www.inmotionmagazine.com/eno1.html (accessed 26 February 2009). Essl, K. 2002. Generative Music. http://www.essl.at/bibliogr/generative-music.html (accessed 22 Mar 2010). García, A. et al. 2010. Music Composition Based on Linguistic Approach. 9th Mexican International Conference on Artificial Intelligence, MICAI 2010, Pachuca, Mexico. pp. 117–128. Intermorphic Limited History of Noatikl, Koan and SSEYO (accessed 26 February 2009). Lerdahl, F. and R. Jackendoff. 1983. A generative theory of tonal music. Cambridge, Mass: MIT Press. Lippe, C. 1997. Music for piano and computer: A description. Information Processing Society of Japan SIG Notes, 97 (122): 33-38. Loy, G. and C. Abbott. 1985. Programming languages for computer music synthesis, performance and composition. ACM Computing Surveys, 17 (2): 235-265. Nierhaus, G. Algorithmic Composition - Paradigms of Automated Music Generation. Springer 2009. Rowe, R. 1991. Machine Learning and Composing: Making Sense of Music with Cooperating Real-Time Agents. Thesis from Media Lab. Mass.: MIT. Winkler, T. 1998. Composing Interactive Music. Cambridge, Massachusetts: MIT Press. Wooller, R., Brown, A. R, et al. A framework for comparing algorithmic music systems. In: Symposium on Generative Arts Practice (GAP). 2005. 
University of Technology Sydney. Computer music software
14570367
https://en.wikipedia.org/wiki/Wim%20Taymans
Wim Taymans
Wim Odilia Georges Taymans is a Belgian software developer based in Malaga, Spain. Taymans started his career in multimedia development on the Commodore 64, writing various games and demos. He was known in the Commodore 64 coding community under the nickname The Wim. In 1990 he was the coder behind the C64 game Puffy's Saga, which was distributed by Ubisoft. He later moved on to the Amiga, where, among other things, he wrote a version of the classic game Boulder Dash. In 1994 he installed the Linux operating system on his Amiga and has since been involved with the development of various multimedia technologies for the Linux platform. His first efforts on Linux were some assembly optimizations for the rtjpeg library; later, he worked on the Trinity video editor before teaming up with Erik Walthinsen to create the GStreamer multimedia framework. In 2004 he started working for Fluendo in Spain as employee number 3. While working for Fluendo he designed and wrote most of what today is the 0.10 release series of GStreamer. In July 2007 he left Fluendo together with many of the other GStreamer developers and joined the United Kingdom company Collabora. At Collabora he maintained and developed GStreamer further, with the aim of providing Linux and other Unix and Unix-like operating systems with a competitive and powerful multimedia framework. He was the main architect and developer behind the GStreamer 1.0 release, which came out on September 24, 2012. In November 2013, Taymans started a new endeavour as a Principal Software Engineer at Red Hat, where he spends most of his time working on upstream GStreamer. In July 2015 it was announced that Taymans was designing and writing Pinos, which became PipeWire, from his position as Principal Engineer at Red Hat. PipeWire is a server for handling audio and video streams on Linux. References External links C64.org profile page GStreamer Fluendo Collabora Interview with Wim Taymans by Audio Libre 1972 births Living people Video game programmers Free software programmers Belgian computer programmers
22472
https://en.wikipedia.org/wiki/Ophiuchus
Ophiuchus
Ophiuchus () is a large constellation straddling the celestial equator. Its name comes from the Ancient Greek (), meaning "serpent-bearer", and it is commonly represented as a man grasping a snake. The serpent is represented by the constellation Serpens. Ophiuchus was one of the 48 constellations listed by the 2nd-century astronomer Ptolemy, and it remains one of the 88 modern constellations. An old alternative name for the constellation was Serpentarius (). Location Ophiuchus lies between Aquila, Serpens, Scorpius, Sagittarius, and Hercules, northwest of the center of the Milky Way. The southern part lies between Scorpius to the west and Sagittarius to the east. In the northern hemisphere, it is best visible in summer. It lies opposite Orion in the sky. Ophiuchus is depicted as a man grasping a serpent; the interposition of his body divides the snake constellation Serpens into two parts, Serpens Caput and Serpens Cauda. Ophiuchus straddles the equator with the majority of its area lying in the southern hemisphere. Rasalhague, its brightest star, lies near the northern edge of Ophiuchus at about declination. The constellation extends southward to −30° declination. Segments of the ecliptic within Ophiuchus are south of −20° declination. In contrast to Orion, from November to January (summer in the Southern Hemisphere, winter in the Northern Hemisphere), Ophiuchus is in the daytime sky and thus not visible at most latitudes. However, for much of the Arctic Circle in the Northern Hemisphere's winter months, the Sun is below the horizon even at midday. Stars (and thus parts of Ophiuchus, especially Rasalhague) are then visible at twilight for a few hours around local noon, low in the south. In the Northern Hemisphere's spring and summer months, when Ophiuchus is normally visible in the night sky, the constellation is not visible in the Arctic, because the midnight sun obscures the stars at those times and places. In countries close to the equator, Ophiuchus appears overhead in June around midnight and in the October evening sky. Features Stars The brightest stars in Ophiuchus include α Ophiuchi, called Rasalhague ("head of the serpent charmer"), at magnitude 2.07, and η Ophiuchi, known as Sabik ("the preceding one"), at magnitude 2.43. Other bright stars in the constellation include β Ophiuchi, Cebalrai ("dog of the shepherd") and λ Ophiuchi, or Marfik ("the elbow"). RS Ophiuchi belongs to a class called recurrent novae, whose brightness increases at irregular intervals by hundreds of times in a period of just a few days. It is thought to be on the brink of becoming a type Ia supernova. Barnard's Star, one of the nearest stars to the Solar System (the only stars closer are the Alpha Centauri binary star system and Proxima Centauri), lies in Ophiuchus. It is located to the left of β and just north of the V-shaped group of stars in an area that was once occupied by the now-obsolete constellation of Taurus Poniatovii (Poniatowski's Bull). In 2005, astronomers using data from the Green Bank Telescope discovered a superbubble so large that it extends beyond the plane of the galaxy. It is called the Ophiuchus Superbubble. In April 2007, astronomers announced that the Swedish-built Odin satellite had made the first detection of clouds of molecular oxygen in space, following observations in the constellation Ophiuchus. The supernova of 1604 was first observed on 9 October 1604, near θ Ophiuchi. 
Johannes Kepler first saw it on 16 October and studied it so extensively that the supernova was subsequently called Kepler's Supernova. He published his findings in a book titled De stella nova in pede Serpentarii (On the New Star in Ophiuchus's Foot). Galileo used its brief appearance to counter the Aristotelian dogma that the heavens are changeless. In 2009 it was announced that GJ 1214, a star in Ophiuchus, undergoes repeated, cyclical dimming with a period of about 1.5 days, consistent with the transit of a small orbiting planet. The planet's low density (about 40% that of Earth) suggests that it may have a substantial component of low-density gas—possibly hydrogen or steam. The proximity of this star to Earth (42 light years) makes it a tempting target for further observations. In April 2010, the naked-eye star ζ Ophiuchi was occulted by the asteroid 824 Anastasia. Deep-sky objects Ophiuchus contains several star clusters, such as IC 4665, NGC 6633, M9, M10, M12, M14, M19, M62, and M107, as well as the nebula IC 4603-4604. M10 is a fairly close globular cluster, only 20,000 light-years from Earth. It has a magnitude of 6.6 and is a Shapley class VII cluster, meaning that it has "intermediate" concentration and is only somewhat concentrated towards its center. The unusual galaxy merger remnant and starburst galaxy NGC 6240 is also in Ophiuchus. At a distance of 400 million light-years, this "butterfly-shaped" galaxy has two supermassive black holes 3,000 light-years apart. Spectra from the Chandra X-ray Observatory confirmed that both nuclei contain black holes. Astronomers estimate that the black holes will merge in another billion years. NGC 6240 also has an unusually high rate of star formation, classifying it as a starburst galaxy. This is likely due to the heat generated by the orbiting black holes and the aftermath of the collision. In 2006, a new nearby star cluster was discovered associated with the 4th magnitude star Mu Ophiuchi. The Mamajek 2 cluster appears to be a poor cluster remnant analogous to the Ursa Major Moving Group, but 7 times more distant (approximately 170 parsecs away). Mamajek 2 appears to have formed in the same star-forming complex as the NGC 2516 cluster roughly 135 million years ago. Barnard 68 is a large dark nebula, located 410 light-years from Earth. Despite its diameter of 0.4 light-years, Barnard 68 has only twice the mass of the Sun, making it both very diffuse and very cold, with a temperature of about 16 kelvins. Though it is currently stable, Barnard 68 will eventually collapse, triggering the process of star formation. One unusual feature of Barnard 68 is its vibrations, which have a period of 250,000 years. Astronomers speculate that this phenomenon is caused by the shock wave from a supernova. The space probe Voyager 1, the farthest man-made object from Earth, is traveling in the direction of Ophiuchus. It is located between α Herculis, α and κ Ophiuchi at right ascension 17h 13m and declination +12° 25’ (July 2020). History and mythology There is no evidence of the constellation preceding the classical era, and in Babylonian astronomy, a "Sitting Gods" constellation seems to have been located in the general area of Ophiuchus. However, Gavin White proposes that Ophiuchus may in fact be remotely descended from this Babylonian constellation, representing Nirah, a serpent-god who was sometimes depicted with his upper half human but with serpents for legs. 
The earliest mention of the constellation is in Aratus, informed by the lost catalogue of Eudoxus of Cnidus (4th century BCE). To the ancient Greeks, the constellation represented the god Apollo struggling with a huge snake that guarded the Oracle of Delphi. Later myths identified Ophiuchus with Laocoön, the Trojan priest of Poseidon, who warned his fellow Trojans about the Trojan Horse and was later slain by a pair of sea serpents sent by the gods to punish him. According to Roman era mythography, the figure represents the healer Asclepius, who learned the secrets of keeping death at bay after observing one serpent bringing another healing herbs. To prevent the entire human race from becoming immortal under Asclepius' care, Jupiter killed him with a bolt of lightning, but later placed his image in the heavens to honor his good works. In medieval Islamic astronomy (Azophi's Uranometry, 10th century), the constellation was known as Al-Ḥawwa, "the snake-charmer". Aratus describes Ophiuchus as trampling on Scorpius with his feet. This is depicted in Renaissance to Early Modern star charts, beginning with Albrecht Dürer in 1515; in some depictions (such as that of Johannes Kepler in De Stella Nova, 1606), Scorpius also seems to threaten to sting Serpentarius in the foot. This is consistent with Azophi, who already included ψ Oph and ω Oph as the snake-charmer's "left foot", and θ Oph and ο Oph as his "right foot", making Ophiuchus a zodiacal constellation at least as regards his feet. This arrangement has been taken as symbolic in later literature and placed in relation to the words spoken by God to the serpent in the Garden of Eden (Genesis 3:15). Zodiac Ophiuchus is one of the thirteen constellations that cross the ecliptic. It has sometimes been suggested as the "13th sign of the zodiac". However, this confuses zodiac or astrological signs with constellations. The signs of the zodiac are a twelve-fold division of the ecliptic, so that each sign spans 30° of celestial longitude, approximately the distance the Sun travels in a month, and (in the Western tradition) are aligned with the seasons so that the March equinox always falls on the boundary between Pisces and Aries. Constellations, on the other hand, are unequal in size and are based on the positions of the stars. The constellations of the zodiac have only a loose association with the signs of the zodiac, and do not in general coincide with them. In Western astrology the constellation of Aquarius, for example, largely corresponds to the sign of Pisces. Similarly, the constellation of Ophiuchus occupies most (29 November – 18 December) of the sign of Sagittarius (23 November – 21 December). The differences are due to the fact that the time of year at which the Sun passes through a particular zodiac constellation has slowly changed (because of the precession of the equinoxes) over the centuries from when the Babylonians originally developed the Zodiac. Citations See also Ophiuchus (Chinese astronomy) References Ridpath, Ian; and Tirion, Wil (2007). Stars and Planets Guide. Collins, London; Princeton University Press, Princeton. External links The Deep Photographic Guide to the Constellations: Ophiuchus Star Tales – Ophiuchus Warburg Institute Iconographic Database (medieval and early modern images of Ophiuchus under the name Serpentarius) Constellations Equatorial constellations Constellations listed by Ptolemy Asclepius in mythology
7888008
https://en.wikipedia.org/wiki/Activstudio
Activstudio
Activstudio is a software application designed for teachers and presenters who use the Activboard, Promethean's interactive whiteboard. Activstudio and the derivative product Activprimary were designed and implemented by Nigel Pearce together with a software development team at Promethean (Blackburn, England). History Promethean released its first commercially available interactive whiteboard application, 'PandA' (an acronym for 'Presentations and Annotations'), in March 1997. PandA (also designed by Pearce) was based loosely upon the Windows multimedia authoring software Creator. During 2000, PandA was renamed Activstudio. The first version of Activstudio was released in the UK in February 2001. This version replaced and expanded upon the features of PandA. Activstudio v2 was released in March 2004. Activstudio v2.5 was released in February 2006. The derivative application Activprimary (similar in functionality to Activstudio but offering an interface aimed towards the younger learner) was released for the primary education sector in January 2004. Activprimary won the Worlddidac award for Software Innovation in 2004 and again in 2006. Activprimary 2 won Best Education Technology Solution for Productivity/Creativity at The Software & Information Industry Association's (SIIA) Annual CODIE awards in 2006. Version 3 of Activstudio and Activprimary was released in January 2007. A number of prototype simultaneous multi-input whiteboard techniques were developed in June 2006. These ideas were first exhibited at the NECC, Atlanta, in July 2007, and many were included under the product name 'Activarena' within Activstudio and Activprimary versions 3.5 (released September 2007). New features included shared concept mapping and dual flipcharting. Version 3.6 of Activstudio and Activprimary was released in December 2007. Activprimary 3.6 won the Worlddidac award for Software Innovation for the third time in 2008. Version 3.7 of Activstudio and Activprimary for Windows was released in August 2008. Promethean Learning and Activstudio 3 were both finalists for Best Professional Development Solution and Best Education Technology Solution for Productivity/Creativity at The Software & Information Industry Association's (SIIA) 23rd Annual CODIE awards in 2008. ActivInspire, the successor to (and the redesign of) Activstudio, was released in March 2009. ActivInspire contains most of the functionality found within Activstudio and Activprimary, including both user interfaces in one product. It also adds support for the ActivExpression learner response system and the ActivArena dual-user mode. ActivInspire runs on Windows, Mac and Linux platforms. Description Activstudio provides a suite of 'interactive whiteboard centric' tools. Its main feature allows the user to prepare and present files known as flipcharts, electronic documents that can contain a combination of vector and raster object data including lines, shapes, rich text, images, video, Flash media and other third-party document types. The use of the term 'flipchart' in describing these electronic documents derives from the similarity to the way a physical flipchart is commonly used, effectively starting with a set of 'blank canvas' pages upon which to present. The very early versions of Activstudio had a much reduced functionality set, comprising simple pens, highlighters and page turning, and hence the analogy with 'flipcharts' was born and has remained. 
The flipchart also captures and stores any notes (termed 'annotations') which may be written on the surface of the interactive whiteboard using an electronic pen. Additionally, the program allows the user to write (or 'annotate') directly over other applications, web browser content and live video clips. On the Windows platform, Activstudio also includes the program Activmarker. This product allows pen annotations to be written over Microsoft Word documents, Excel spreadsheets and PowerPoint presentations. The subsequent markup is then automatically committed into the content of the Office document for storage. Activstudio includes integrated tools to centrally manage additional interactive inputs, including remote annotation and telepointing using wireless handheld Activslates and Activote student response devices. Activstudio incorporates many of today's standard 'whiteboard centric' concepts, including the spotlight, revealer and zoom functions, a sound recorder, a resource library and the interactive protractor, ruler and compass tools. A large selection of Restrictors, Properties and Actions that can be edited on screen with simple clicks enables the creation of interactive multimedia. Activstudio version 3 added point-and-click object authoring capabilities, gesture recognition, an optional Windows-style interface and integrated links to Promethean Planet via a digital dashboard. Activprimary is virtually identical in functionality to Activstudio but is aimed at the younger learner environment, differing only in its user interface. Other derivatives of the applications: Activstudio SE (Student Edition). A reduced functionality version. Activstudio Viewer. A flipchart viewer having an Activstudio skin. Activprimary Viewer. A flipchart viewer having an Activprimary skin. Activstudio and Activprimary are both available in 32 languages and for most versions of the Windows and Mac operating systems. All authored flipcharts will work in either program, and many flipcharts are freely available on the internet, providing a good starting point for any teacher just starting out with an Activboard. Whilst Activstudio and Activprimary were originally designed for use on the Activboard interactive whiteboard, the programs can also be used with the standard computer mouse for preparation purposes, and are also suitable for running with any other type of pen input device or on any other make of interactive whiteboard. File Format Activstudio and Activprimary use a proprietary 'flipchart' format ending in the file extension '.flp'. This utilizes PKZIP compression technology to reduce flipchart file sizes, a technique also employed by other applications, such as the Microsoft PowerPoint pptx file format. In 2009, Promethean released ActivInspire, which can be used for free. This software package uses a new 'flipchart' file format ending in the file extension '.flipchart'. ActivInspire can open the older '.flp' format but can only save in the new '.flipchart' format. As a result, ActivInspire's files are not backwards compatible: Activstudio (and Activprimary) cannot open flipcharts created by ActivInspire. The ActivInspire 'flipchart' file uses a proprietary compression format; opening these files in a hex editor reveals that the file header for this format is 'Bamboo'. 
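Assuming the header is the literal ASCII string 'Bamboo' at the start of the file, such a check could be sketched as follows (TypeScript under Node.js; the file name is hypothetical):

// Check whether a file begins with the 'Bamboo' header described above.
import { openSync, readSync, closeSync } from "node:fs";

function hasBambooHeader(path: string): boolean {
  const fd = openSync(path, "r");
  const buf = Buffer.alloc(6); // 'Bamboo' is six ASCII bytes
  readSync(fd, buf, 0, 6, 0);  // read the first six bytes of the file
  closeSync(fd);
  return buf.toString("ascii") === "Bamboo";
}

console.log(hasBambooHeader("lesson.flipchart")); // hypothetical file name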
One of the consequences of using a proprietary file format is that it makes it difficult for third-party software to open flipcharts. References Software for teachers
69791160
https://en.wikipedia.org/wiki/UNC1151%20%28hacker%20group%29
UNC1151 (hacker group)
UNC1151 is a hacker group, allegedly linked to Belarusian intelligence, that was reported to be responsible for a cyberattack on Ukrainian government websites in January 2022. A March 2021 report from FireEye found that the group is associated with the Ghostwriter hacker group. In February 2022 The Register reported that a Ukrainian CERT had announced that the group was targeting "private ‘i.ua’ and ‘meta.ua’ [email] accounts of Ukrainian military personnel and related individuals" as part of a phishing attack during the invasion of Ukraine. Mandiant said that two domains mentioned by the CERT, i[.]ua-passport[.]space and id[.]bigmir[.]space, were known command-and-control domains of the group. Mandiant also said: "We are able to tie the infrastructure reported by CERT.UA to UNC1151, but have not seen the phishing messages directly. However, UNC1151 has targeted Ukraine and especially its military extensively over the past two years, so this activity matches their historical pattern." References Hacker groups
14202916
https://en.wikipedia.org/wiki/Kit%20Cosper
Kit Cosper
James Kitchen ‘Kit’ Cosper (born October 21, 1967) was the sixth employee at Red Hat and one of the founders of Linux Hardware Solutions (LHS), as well as a founding director of Linux International. After LHS was acquired by VA Linux, Cosper became Director of Miscellaneous at VA and eventually managed its Open Source advocacy group, consisting of Chris DiBona, Joseph Arruda and John Mark Walker. During the dot-com implosion of 2001, Cosper left VA and went on to serve as Executive Technical Advisor and interim CTO at Telkonet, a powerline networking startup in Germantown, Maryland. He was married on May 5, 1990, in Wilmington, NC, to Lourie Cosper and has three children: William (born March 1985), Keifer (born April 1993) and Elizabeth (born January 1996). His first grandchild, Hulet James Cosper, a namesake, was born in January 2019 in Elmer, NJ, to his oldest son William and his wife Elizabeth Cosper (née Atwill). External links Then You Win - Article on Open Source and Microsoft Personal info page - Red Hat employee Personal info page from WayBackMachine Living people Geeknet 1967 births
8719354
https://en.wikipedia.org/wiki/Ranjit%20Hoskote
Ranjit Hoskote
Ranjit Hoskote (born 29 March 1969) is an Indian poet, art critic, cultural theorist and independent curator. He has been honoured by the Sahitya Akademi, India's National Academy of Letters, with the Sahitya Akademi Golden Jubilee Award and the Sahitya Akademi Prize for Translation. Early life and education Ranjit Hoskote was born in Mumbai and educated at the Bombay Scottish School, Elphinstone College, where he studied for a BA in Politics, and later at University of Bombay, from where he obtained an MA degree in English Literature and Aesthetics. Career As poet Hoskote began to publish his work during the early 1990s. He is the author of several collections of poetry including Zones of Assault, The Cartographer's Apprentice, Central Time, Jonahwhale, The Sleepwalker's Archive and Vanishing Acts: New & Selected Poems 1985–2005. Hoskote has been seen as extending the Anglophone Indian poetry tradition established by Dom Moraes, Nissim Ezekiel, A.K. Ramanujan and others through "major new works of poetry". His work has been published in numerous Indian and international journals, including Poetry Review (London), Wasafiri, Poetry Wales, Nthposition, The Iowa Review, Green Integer Review, Fulcrum (annual), Rattapallax, Lyric Poetry Review, West Coast Line, Kavya Bharati, Prairie Schooner, Coldnoon: Travel Poetics, The Four-Quarters Magazine and Indian Literature. His poems have also appeared in German translation in Die Zeit, Akzente, the Neue Zuercher Zeitung, Wespennest and Art & Thought/ Fikrun-wa-Fann. He has translated the Marathi poet Vasant Abaji Dahake, co-translated the German novelist and essayist Ilija Trojanow, and edited an anthology of contemporary Indian verse. His poems have appeared in anthologies including Language for a New Century (New York: W. W. Norton, 2008). and The Bloodaxe Book of Contemporary Indian Poets (Newcastle: Bloodaxe, 2008). Hoskote has also translated the 14th-century Kashmiri mystic-poet Lal Ded, variously known as Lalleshwari, Lalla and Lal Arifa, for the Penguin Classics imprint, under the title I, Lalla: The Poems of Lal Ded. This publication marks the conclusion of a 20-year-long project of research and translation for the author. Reviewing Hoskote's third volume, The Sleepwalker's Archive, for The Hindu in 2001, the poet and critic Keki Daruwalla wrote: "It is the way he hangs on to a metaphor, and the subtlety with which he does it, that draws my admiration (not to mention envy)... Hoskote’s poems bear the 'watermark of fable': behind each cluster of images, a story; behind each story, a parable. I haven’t read a better poetry volume in years." Commenting on Hoskote's poetry on Poetry International Web, the poet and editor Arundhathi Subramaniam observes: "His writing has revealed a consistent and exceptional brilliance in its treatment of image. Hoskote’s metaphors are finely wrought, luminous and sensuous, combining an artisanal virtuosity with passion, turning each poem into a many-angled, multifaceted experience." In 2004, a year in which Indian poetry in English lost three of its most important figures – Ezekiel, Moraes, and Arun Kolatkar – Hoskote wrote obituaries for these "masters of the guild". Hoskote has also written about the place of poetry in contemporary culture. As a literary organiser, Hoskote has been associated with the PEN All-India Centre, the Indian branch of International PEN, since 1986, and is currently its general secretary, as well as Editor of its journal, Penumbra. 
He has also been associated with the Poetry Circle Bombay since 1986, and was its president from 1992 to 1997. As art critic Hoskote has been placed by research scholars in a historic lineage of five major art critics active in India over a sixty-year period: "William George Archer, Richard Bartholomew, Jagdish Swaminathan, Geeta Kapur, and Ranjit Hoskote... played an important role in shaping contemporary art discourse in India, and in registering multiple cultural issues, artistic domains, and moments of history." Hoskote was principal art critic for The Times of India, Bombay, from 1988 to 1999. In his role as religion and philosophy editor for The Times, he began a popular column on spirituality, sociology of religion, and philosophical commentary, "The Speaking Tree" (he named the column, which was launched in May 1996, after the benchmark 1971 study of Indian society and culture, The Speaking Tree, written by scholar and artist Richard Lannoy). Hoskote was an art critic and senior editor with The Hindu, from 2000 to 2007, contributing to its periodical of thought and culture, Folio. In his role as an art critic, Hoskote has authored a critical biography as well as a major retrospective study of the painter Jehangir Sabavala, and also monographs on the artists Atul Dodiya, Tyeb Mehta, Sudhir Patwardhan, Baiju Parthan, Bharti Kher and Iranna GR. He has written major essays on other leading Indian artists, including, among others, Gieve Patel, Bhupen Khakhar, Akbar Padamsee, Mehlli Gobhai, Vivan Sundaram, Laxman Shreshtha, Surendran Nair, Jitish Kallat, the Raqs Media Collective, Shilpa Gupta and Sudarshan Shetty. Hoskote has also written a monographic essay on the Berlin-based artists Dolores Zinny and Juan Maidagan. As cultural theorist As a cultural theorist, Hoskote has addressed the cultural and political dynamics of postcolonial societies that are going through a process of globalisation, emphasising the possibilities of a 'non-western contemporaneity', "intercultural communication" and "transformative listening". He has also returned often to the theme of the "nomad position" and to the polarity between "crisis and critique". In many of his writings and lectures, Hoskote examines the relationship between the aesthetic and the political, describing this as a tension between the politics of the expressive and the expressivity of the political. He has explored, in particular, the connections between popular visual art, mass mobilisations and the emergence of fluid and fluctuating identities within the evolving metropolitan cultures of the postcolonial world, and in what he has called the nascent "third field" of artistic production by subaltern producers in contemporary India, which is "neither metropolitan nor rural, neither (post)modernist nor traditional, neither derived from academic training nor inherited without change from tribal custom" and assimilates into itself resources from the global archive of cultural manifestations. Hoskote has also speculated, in various essays, on the nature of a "futurative art" possessed of an intermedia orientation, and which combines critical resistance with expressive pleasure. He writes that "the modern art-work is often elegiac in nature: it mourns the loss of beauty through scission and absence; it carries within its very structure a lament for the loss of beauty." 
In a series of essays, papers and articles published from the late 1990s onward, Hoskote has reflected on the theme of the asymmetry between a 'West' that enjoys economic, military and epistemological supremacy and an 'East' that is the subject of sanction, invasion and misrepresentation. In some of these writings, he dwells on the historic fate of the "House of Islam" as viewed from the West and from India, while in others, he retrieves historic occasions of successful cultural confluence, when disparate belief systems and ethnicities have come together into a fruitful and sophisticated hybridity. Hoskote, in collaboration with his wife Nancy Adajania, has focused on transcultural artistic practice, its institutional conditions, systems of production and creative outcomes, and the radical transformations that it brings about in the relationship between regional art histories and a fast-paced global art situation that is produced within the international system of biennials, collaborative projects, residencies and symposia. As curator Hoskote was co-curator of the 7th Gwangju Biennale (2008) in South Korea, collaborating with Okwui Enwezor and Hyunjin Kim. In 2011, Hoskote was invited to act as curator of the first-ever professionally curated national pavilion of India at the Venice Biennale, organised by the Lalit Kala Akademi, India's National Academy of Art. Hoskote titled the pavilion "Everyone Agrees: It's About To Explode", and selected works by the artists Zarina Hashmi, Gigi Scaria, Praneet Soi, and the Desire Machine Collective for it. The pavilion was installed in the central Artiglierie section of the Arsenale. Hoskote wrote that his pavilion was "intended to serve as a laboratory in which we will test out certain key propositions concerning the contemporary Indian art scene. Through it, we could view India as a conceptual entity that is not only territorially based, but is also extensive in a global space of the imagination." In making his selection of artists, the curator aimed to "represent a set of conceptually rigorous and aesthetically rich artistic practices that are staged in parallel to the art market. Furthermore, these have not already been valorized by the gallery system and the auction-house circuit.... The Indian manifestation will also focus on artistic positions that emphasize the cross-cultural nature of contemporary artistic production: some of the most significant art that is being created today draws on a diversity of locations, and different economies of image-making and varied cultural histories." As cultural activist Hoskote is a defender of cultural freedoms against the monopolistic claims of the State, religious pressure groups and censors, whether official or self-appointed. He has been involved in organising protest campaigns in defence of victims of cultural intolerance. Awards, grants and residencies Hoskote has been a Visiting Writer and Fellow of the International Writing Program of the University of Iowa (1995) and was writer-in-residence at the Villa Waldberta, Munich (2003). He has also held a writing residency as part of the Goethe-Institut/ Polnisches Institut project, "The Promised City: Warsaw/ Berlin/ Mumbai" (2010). He was awarded the Sanskriti Award for Literature, 1996, and won First Prize in the British Council/Poetry Society All-India Poetry Competition, 1997. India's National Academy of Letters honoured him with the Sahitya Akademi Golden Jubilee Award in 2004. The S. H. 
Raza Foundation conferred its 2006 Raza Award for Literature on Hoskote. Hoskote has held an Associate Fellowship with Sarai CSDS, a new-media initiative of the Centre for the Study of Developing Societies (CSDS), New Delhi, and is in the process of developing, jointly with Nancy Adajania, a new journal of critical inquiry in the visual arts. Hoskote has been researcher-in-residence at BAK/ basis voor actuele kunst, Utrecht, and is a contributor to BAK's long-term Former West platform. Hoskote currently lives and works in Mumbai. Bibliography Poetry Zones of Assault (1991), Rupa Publishers, New Delhi The Cartographer's Apprentice (Pundole Art Gallery, Mumbai 2000) The Sleepwalker's Archive (Single File, Mumbai 2001) Vanishing Acts: New and Selected Poems 1985–2005 (Penguin Books India, New Delhi 2006) Die Ankunft der Vögel, German translation by Jürgen Brocan (Carl Hanser Verlag, Munich 2006) I, Lalla: The Poems of Lal Ded (2013), Penguin Classics Central Time (Penguin Books India/ Viking, New Delhi 2014) Pale Ancestors (poems by Ranjit Hoskote and paintings by Atul Dodiya; Bodhi Art, Mumbai 2008) Jonahwhale (2018), Penguin Random House India The Atlas of Lost Beliefs (2020), Arc Publications Hunchprose (2021), Penguin Hamish Hamilton Non fiction Pilgrim, Exile, Sorcerer: The Painterly Evolution of Jehangir Sabavala (Eminence Designs, Mumbai 1998) Sudhir Patwardhan: The Complicit Observer (Eminence Designs/ Sakshi Gallery, Mumbai 2004) The Crucible of Painting: The Art of Jehangir Sabavala (Eminence Designs/ National Gallery of Modern Art, Mumbai 2005) Ganesh Pyne: A Pilgrim in the Dominion of Shadows (Galerie 88, Kolkata 2005) Baiju Parthan: A User's Manual (Afterimage, Mumbai 2006) The Dancer on the Horse: Reflections on the Art of Iranna GR (Lund Humphries/ Ashgate Publishing, London 2007) Bharti Kher (Jack Shainman Gallery, New York 2007) The Crafting of Reality: Sudhir Patwardhan, Drawings (The Guild Art Gallery, Mumbai 2008) Zinny & Maidagan: Das Abteil/ Compartment (Museum für Moderne Kunst, Frankfurt/ Main & Verlag der Buchhandlung Walther König, Köln 2010) The Dialogues Series (co-authored with Nancy Adajania; Popular Prakashan/ Foundation B&G, Mumbai 2011; first five books in an 'unfolding programme of conversations with artists'): Anju Dodiya. Atul Dodiya. Veer Munshi. Manu Parekh. Baiju Parthan. Praneet Soi (essays by Charles Esche and Ranjit Hoskote; Distanz Verlag, Berlin 2011) Atul Dodiya (edited by Ranjit Hoskote, with texts by Thomas McEvilley, Enrique Juncosa, Nancy Adajania and Hoskote; Prestel Verlag, Munich, London & New York 2014) Kampfabsage (co-authored with Ilija Trojanow; Random House/ Karl Blessing Verlag, Munich 2007) Despair and Modernity: Reflections from Modern Indian Painting (co-authored with Harsha V. Dehejia and Prem Shankar Jha; Motilal Banarsidass, New Delhi 2000) Confluences: Forgotten Histories From East And West (co-authored with Ilija Trojanow) (New Delhi, Yoda Press 2012) As editor Dom Moraes: Selected Poems (Penguin Modern Classics, New Delhi 2012) As translator Ilija Trojanow, Along the Ganga: To the Inner Shores of India (Penguin Books India, New Delhi 2005) Ilija Trojanow, Along the Ganges (British edition: Haus Publishing, London 2005) I, Lalla: The Poems of Lal Ded. 
(Penguin Classics, New Delhi 2011) Poetry Anthologies Language for a New Century (2008) ed. Tina Chang, Nathalie Handal and Ravi Shankar. Published by W. W. Norton & Company. The Bloodaxe Book of Contemporary Indian Poets (2008) ed. Jeet Thayil. Published by Bloodaxe Books. Staying Human: New Poems for Staying Alive (2020) ed. Neil Astley. Published by Bloodaxe Books. Singing in the Dark (2020) ed. K. Satchidanandan and Nishi Chawla. Published by Penguin Vintage. Exhibitions curated 'Hinged by Light' (paintings and sculptural departures by three major Indian abstractionists: Mehlli Gobhai, Prabhakar Kolte, Yogesh Rawal; Pundole Art Gallery, Bombay, January 1994). 'Private Languages' (paintings, sculptures and assemblages by three emerging Indian artists: Anandajit Ray, Ravinder Reddy, Sudarshan Shetty; Pundole Art Gallery, Bombay, January 1997). 'Making An Entrance' (site-specific public-art installations by the artists Jehangir Jani, Bharati Kapadia, Kausik Mukhopadhyay, Baiju Parthan and Sudarshan Shetty, set up in the Kala Ghoda precinct, Bombay's old colonial quarter, during the Kala Ghoda Arts Festival 2000; Bombay, February 2000). 'Intersections: Seven Artistic Dialogues between Abstraction and Figuration' (paintings and mixed-media works by Chittrovanu Mazumdar, Mehlli Gobhai, Bharati Kapadia, Yogesh Rawal, Baiju Parthan, C. Douglas and Jitish Kallat; The Guild Art Gallery, Bombay, February 2000). 'Family Resemblances: Nine Approaches to a Mutable Self' (paintings by Laxman Shreshtha, Sachin Karne, Atul Dodiya, Jitish Kallat, Baiju Parthan, Amitava Das, Surendran Nair, Anju Dodiya and Gargi Raina; The Birla Academy of Art & Culture, Bombay, March 2000). 'The Bodied Self' (paintings by Anju Dodiya, Jehangir Jani and Theodore Mesquita; Gallery Sans Tache, Bombay, April 2001). 'Labyrinth/ Laboratory' (a mid-career retrospective of Atul Dodiya, including paintings, sculpture-installations and assemblages, at the invitation of the Japan Foundation; Japan Foundation Asia Center, Tokyo, June–July 2001). 'The Active Line' (drawings by Jehangir Sabavala, Mehlli Gobhai, Laxma Goud, Manjit Bawa and Jogen Chowdhury; The Guild Art Gallery, Bombay, December 2001) 'Clicking into Place' (a trans-Asian exhibition—the Indian phase of 'Under Construction', below—including paintings by Alfredo Esquillo/ Manila, Shibu Natesan/ London, Jitish Kallat/ Bombay, and a digital installation by Baiju Parthan/ Bombay; Sakshi Gallery, Bombay, February 2002). 'Under Construction' (Hoskote was co-curator for this collaborative curatorial project, initiated by the Japan Foundation Asia Center, which took place at various venues in Asia, culminating in an exhibition at the Japan Foundation Forum and the Tokyo Opera City Art Gallery, Tokyo, in December 2002). 'Visions of Landscape' (paintings by Akbar Padamsee, Ram Kumar, Gulammohammed Sheikh, Sudhir Patwardhan, Laxman Shreshtha, Atul Dodiya and Shibu Natesan; The Guild Art Gallery, Bombay, January 2005). 'Jehangir Sabavala: A Retrospective' (a monographic exhibition of Sabavala's art, covering the period 1942–2005; The National Gallery of Modern Art: Bombay and New Delhi, November–December 2005). 'Strangeness' (paintings, drawings, mixed-media works and sculpture-installations by Krishen Khanna, Baiju Parthan, Theodore Mesquita, Viraj Naik, Tina Bopiah, Sunil Gawde, Rajeev Lochan, Riyas Komu, T. V.
Santhosh, Krishnamachari Bose, Krishnaraj Chonat; Anant Art Gallery, Calcutta, January 2006). 'Aparanta: The Confluence of Contemporary Art in Goa' (a survey exhibition gathering together 265 art-works by 22 contemporary artists and 4 historic masters, ranging across oils, watercolours, drawings, graphics, mixed-media works, sculptures and video-installations; artists include F N Souza, V S Gaitonde, Angelo da Fonseca, Laxman Pai; Antonio e Costa, Alex Tavares, Wilson D'Souza, Sonia Rodrigues Sabharwal, Hanuman Kambli, Giraldo de Sousa, Vidya Kamat, Viraj Naik, Siddharth Gosavi, Pradeep Naik, Subodh Kerkar, Rajan Fulari, Rajendra Usapkar, Santosh Morajkar, Yolanda de Sousa-Kammermeier, Nirupa Naik, Chaitali Morajkar, Liesl Cotta De Souza, Querozito De Souza, Shilpa Mayenkar, Baiju Parthan, and Dayanita Singh; Old Goa Medical College Building/ Escola Medica e Cirurgica de Goa, for the Goa Tourism Development Corporation, Panjim, April 2007). The 7th Gwangju Biennale (Artistic Director: Okwui Enwezor; Curators: Ranjit Hoskote and Hyunjin Kim; Gwangju, South Korea, 5 September-9 November 2008). 'To See is To Change: A Parallax View of 40 Years of German Video Art' (a re-curation of the globally circulating Goethe-Institut collection, '40 Years of German Video Art', as a 2-day annotated screening cycle and symposium by a group of theorists, artists and enthusiasts: Nancy Adajania, Shaina Anand, Ranjit Hoskote, Ashok Sukumaran, Kabir Mohanty, Mriganka Madhukaillya, Kaushik Bhaumik, Devdutt Trivedi, and Rana Dasgupta; Jnanapravaha & Chemould Prescott Road, Bombay, 14–15 November 2008). 'ZIP Files' (an editorial selection from the Foundation B&G collection, including paintings, photographic works and sculptures by 24 contemporary artists, including Rameshwar Broota, Surendran Nair, Riyas Komu, Ram Rahman, N. S. Harsha, Nataraj Sharma, Valsan Koorma Kolleri, Jogen Chowdhury, Manu Parekh, Madhvi Parekh, Gargi Raina, Ajay Desai, Krishnamachari Bose, Sumedh Rajendran, Veer Munshi, Pooja Iranna, Baiju Parthan, Rekha Rodwittiya, G. R. Iranna, Ravi Kumar Kashi, H. G. Arunkumar, Subhash Awchat, K. S. Radhakrishnan, and Farhad Hussain; Foundation B&G & Tao Art Gallery, Bombay, February 2009 and Foundation B&G & Religare Arts Initiative, New Delhi, April 2009). 'Shrapnel' (an exhibition of recent works by Veer Munshi, developed through an ongoing dialogue between Munshi and Hoskote; the exhibition included extracts from the artist's ongoing photographic archive, 'Pandit Houses', and his painting-based installation, 'The Chamber'; Foundation B&G and Tao Art Gallery, Bombay, March 2009). 'The Pursuit of Intensity: Manu Parekh, Selected Works 2004–2009' (Foundation B&G and Tao Art Gallery, October 2009). 'Retrieval Systems' (an exhibition exploring the use of memory as resource in the work of Alex Fernandes, B. Manjunath Kamath, Baiju Parthan, G. R. Iranna, and Tina Bopiah; Art Alive, New Delhi, November 2009). 'Detour: Five Position Papers on the Republic' (an exhibition conceived as a 'critical homage' on the centennial of Gandhi's Hind Swaraj, 1909, with works by Dayanita Singh, Ram Rahman, Ravi Agarwal, Samar Jodha, and Sonia Jabbar; Chemould Prescott Road, Bombay, December 2009 – January 2010).
Everyone Agrees: It's About To Explode (the India pavilion at the 54th Venice Biennale, La Biennale di Venezia, with works by Zarina Hashmi, Gigi Scaria, Praneet Soi, and the Desire Machine Collective/ Sonal Jain & Mriganka Madhukaillya; Arsenale, Venice, June–November 2011). 'The Needle on the Gauge: The Testimonial Image in the Work of Seven Indian Artists' (an exhibition featuring works by Indian photographers extending their practice through documentary projects, video works, blogs and social initiatives: Ravi Agarwal, Gauri Gill, Samar Jodha, Ryan Lobo, Veer Munshi, Ram Rahman, Gigi Scaria; CACSA/ Contemporary Art Centre of South Australia, Adelaide, September–October 2012). 'Nothing is Absolute: A Journey through Abstraction', co-curated by Ranjit Hoskote & Mehlli Gobhai (The Jehangir Nicholson Gallery, CSMVS/ formerly the Prince of Wales Museum, Bombay, February–August 2013). 'The 4th Former West Congress: Documents, Constellations, Prospects', co-convened by Boris Buden, Boris Groys, Kathrin Klingan, Maria Hlavajova, Ranjit Hoskote, Kathrin Rhomberg and Irit Rogoff (Haus der Kulturen der Welt, Berlin, March 2013). 'Experiments with Truth: Atul Dodiya, Works 1981–2013' (The National Gallery of Modern Art, New Delhi: November–December 2013). 'No Parsi is an Island', co-curated by Ranjit Hoskote & Nancy Adajania (The National Gallery of Modern Art/ NGMA, Mumbai: December 2013 – February 2014). 'Zameen' (an exhibition including works by Ravi Agarwal, Atul Dodiya, Vishwajyoti Ghosh, H. G. Arunkumar, Zarina Hashmi, Ranbir Kaleka, Ryan Lobo, Veer Munshi, Jagannath Panda, Baiju Parthan, Ashim Purkayastha, Ram Rahman, Gargi Raina, Gigi Scaria and Praneet Soi; Art District XIII, New Delhi: October 2014 – February 2015). 'The Shadow Trapper's Almanac: Tanmoy Samanta, Recent Works' (TARQ, Mumbai: November 2014 – January 2015). ‘Abby Weed Grey and Indian Modernism’, curated by Susan Hapgood and Ranjit Hoskote (New York University: Grey Art Gallery, New York City: January–March 2015). ‘Unpacking the Studio: Celebrating the Jehangir Sabavala Bequest’ (CSMVS, Mumbai: 15 September-31 December 2015). ‘The State of Architecture: Practices & Processes in India’, co-curated by Rahul Mehrotra, Ranjit Hoskote & Kaiwan Mehta (NGMA, Mumbai: 6 January-20 March 2016). ‘And the last shall be the first: G R Iranna, Works 1995-2015’ (NGMA Bangalore, 16 January-16 February 2016). ‘No Parsi is an Island’, co-curated by Ranjit Hoskote & Nancy Adajania (NGMA, Delhi: 20 March-29 May 2016). ‘The Quest for Cruzo: A Homage to the Art of Antonio Piedade da Cruz’ (Sunaparanta: Goa Centre for the Arts, Panjim: 30 June-20 July 2016). ‘Laxman Shreshtha: The Infinite Project’ (JNAF/ CSMVS, Mumbai: 2-part retrospective, 18 August-3 October + 14 October-31 December 2016). ‘DWELLING’ (10th anniversary show of Galerie Mirchandani + Steinruecke, Mumbai: 2 parts | Part 1: 10 November 2016 – 10 January 2017). ‘Terra Cognita? Three Moments in the History of the Graphic Image in India, 1556-2016’ (Serendipity Arts Festival, Palacio Idalcao, Panjim, Goa: 15 December 2016 – 15 March 2017). ‘DWELLING’ (10th anniversary show of Galerie Mirchandani + Steinruecke, Mumbai: 2 parts | Part 2: 29 March-29 May 2017). ‘In the Presence of Another Sky: Sakti Burman, A Retrospective’ (NGMA Mumbai: 17 October – 26 November 2017). ‘Anti-Memoirs: Locus, Language, Landscape’ (Serendipity Arts Festival, Palacio Idalcao, Panjim, Goa: 14–22 December 2017).
‘State of Housing: Aspirations, Imaginaries, Realities’, co-curated by Rahul Mehrotra, Ranjit Hoskote & Kaiwan Mehta (Max Mueller Bhavan, Mumbai: 2 February-18 March 2018). ‘The Sacred Everyday: Embracing the Risk of Difference’ (Serendipity Arts Festival, Palacio Idalcao, Panjim, & the Church of Santa Monica, Old Goa: 15 December 2018 – 15 January 2019). ‘No Place like the Present’ (Akara Art, Bombay: 16 January – 9 March 2019). 'The 20th' (20th anniversary show, Art Musings | Jehangir Art Gallery, Mumbai: 12–18 February 2019; extended through a cycle of 5 exhibitions at Art Musings across the year). ‘M F Husain: Horses of the Sun’ (MATHAF Arab Museum of Modern Art, Doha, Qatar: 20 March-1 July 2019). ‘Reverie and Reality: Jogen Chowdhury’ (Emami Art/ Kolkata Centre for Creativity, Kolkata: 20 September-7 December 2019). ‘Opening Lines: Ebrahim Alkazi, Works 1948-1971’ (Art Heritage + Shridharani Gallery, Delhi: 15 October – 11 November 2019). ‘Transients’, a solo exhibition of photographs by Sheetal Mallar (Art Musings, Mumbai: 9 January – 10 February 2020). ‘Don’t Ask Me About Colour: Mehlli Gobhai, A Retrospective’, co-curated by Ranjit Hoskote & Nancy Adajania (NGMA Mumbai: 6 March – 25 April 2020). ‘Patterns of Intensity’ [Artists: Chandrashekar Koteshwar, Ghana Shyam Latua, Barkha Gupta, Anil Thambai, Teja Gavankar, Meghna Patpatia, Suman Chandra, Savia Mahajan, Vipul Badva, Kaushik Saha, Purvai Rai] (Art Alive Gallery, Delhi: 3 – 30 April 2021). ‘Mehlli Gobhai: Epiphanies’, co-curated by Ranjit Hoskote & Nancy Adajania (Chemould Prescott Road, Mumbai: 23 July – 31 August 2021). ‘The Cymroza Chronicles’ | Cymroza at 50 (Cymroza Art Gallery, Mumbai: 1 September – 19 October 2021). ‘Mapping the Lost Spectrum’ | Cymroza at 50 (Pundole’s, Hamilton House, Mumbai: 1–14 September 2021). ‘F N Souza: The Power and the Glory’ (JNAF/ CSMVS, Mumbai: 29 October 2021 – 3 January 2022). See also Indian English literature Indian Writing in English Nissim Ezekiel Dom Moraes International PEN Nancy Adajania Gwangju Biennale International Writing Program References External links Poetry Poems & interview at Poetry International Web 8 poems at Green Integer Review 6 poems at Nthposition Poem at Rattapallax Poem in Poems for Madrid 3 poems at Fieralingue Ranjit Hoskote Interview mp3 recording Essays Essay by Ranjit Hoskote, 'Looking for Anchorage, and Not in Alaska Alone' Critical essay on Documenta 11, at Art & Thought Essay by Ranjit Hoskote, 'Winter Thoughts about Spring: Looking Forward to a Renewal of the Arts in Kashmir' Articles Ambedkar's legacy in The Hindu The Mob as Censor at countercurrents.org Painting the art world red in Hindustan Times Indian art critics 1969 births Living people Writers from Mumbai Elphinstone College alumni University of Mumbai alumni Indian art curators Indian male essayists English-language poets from India International Writing Program alumni 20th-century Indian essayists Recipients of the Sahitya Akademi Prize for Translation Recipients of the Sahitya Akademi Golden Jubilee Award
1302323
https://en.wikipedia.org/wiki/Siegbert%20Salomon%20Prawer
Siegbert Salomon Prawer
Siegbert Salomon Prawer (1925–2012) was Taylor Professor of the German Language and Literature at the University of Oxford. Life and works Prawer was born on 15 February 1925 in Cologne, Germany, to Jewish parents Marcus and Eleanora (Cohn) Prawer. Marcus was a lawyer from Poland and Eleanora's father was cantor of Cologne's largest synagogue. His sister Ruth was born in 1927. The family fled the Nazi regime in 1939, emigrating to Britain. Educated at King Henry VIII School, Coventry, and Jesus College, Cambridge, he was a lecturer at the University of Birmingham from 1948 to 1963, Professor of German at Westfield College, London, from 1964, and became Taylor Professor of the German Language and Literature at the University of Oxford in 1969. He was awarded his PhD by the University of Birmingham's Department of German in 1953, for a thesis titled 'A critical analysis of 24 consecutive poems from Heine's Romanzero'. He was a Fellow (then an Honorary Fellow) of Queen's College, Oxford, and an Honorary Fellow of Jesus College, Cambridge. His academic interests included German poetry and lieder; Romantic German literature, especially E. T. A. Hoffmann and Heinrich Heine; comparative literature; and film, particularly horror films. His sister was the writer Ruth Prawer Jhabvala. He made a cameo appearance in the Merchant-Ivory film Howards End (for which his sister wrote the Academy Award-winning screenplay). Prawer died on 5 April 2012 in Oxford, England. Publications 1952: German Lyric Poetry: a critical analysis of selected poems from Klopstock to Rilke. London: Routledge & Kegan Paul 1960: Mörike und seine Leser. Stuttgart: Ernst Klett 1960: Heine. Buch der Lieder. London: Edward Arnold 1961: Heine the Tragic Satirist: a study of the later poetry 1827-56. Cambridge: Cambridge University Press 1964: Penguin Book of Lieder. Harmondsworth: Penguin Books, editor and translator 1969: Essays in German Culture, Language and Society. London: University of London, editor with R. Hinton Thomas, Leonard Wilson Forster, Roy Pascal 1970: Heine's Shakespeare: a study on contexts: inaugural lecture delivered before the University of Oxford on 5 May 1970. Oxford: Clarendon Press 1970: The Romantic Period in Germany: essays by members of the London University Institute of Germanic Studies, editor 1971: Seventeen Modern German Poets. London: Oxford University Press, editor 1973: Comparative Literary Studies: An Introduction. London: Duckworth 1976: Karl Marx and World Literature. Oxford: Clarendon Press 1980: Caligari's Children: the film as tale of terror. Oxford: Oxford University Press 1983: Heine's Jewish comedy: a study of his portraits of Jews and Judaism. Oxford: Clarendon Press 1984: A. N. Stencl, Poet of Whitechapel. Oxford: Oxford Centre for Postgraduate Hebrew studies. 1st Stencl Lecture 1984: Coal-Smoke and Englishmen: a study of verbal caricature in the writings of Heinrich Heine. London: Institute of Germanic Studies, University of London 1986: Frankenstein's Island: England and the English in the writings of Heinrich Heine. Cambridge: Cambridge University Press 1992: Israel at Vanity Fair: Jews and Judaism in the Writings of W. M. Thackeray. Leiden: Brill 1997: Breeches and Metaphysics: Thackeray's German discourse. Oxford: Legenda 2000: W. M. Thackeray's European Sketch Books: a study of literary and graphic portraiture. Oxford, New York: P. Lang 2002: The Blue Angel. (BFI Film Classics.) London: British Film Institute 2004: Nosferatu: Phantom der Nacht. (BFI Film Classics.)
London: British Film Institute 2005: Between Two Worlds: the Jewish presence in German and Austrian film, 1919-1933. (Film Europa: German Cinema in an International Context) New York, Oxford: Berghahn Books 2009: A Cultural Citizen of the World: Sigmund Freud's knowledge and use of British and American writings. Oxford: Legenda References External links A fond farewell (archived from the original on 5 December 2008) 1925 births Academics of the University of Birmingham Academics of Westfield College Fellows of the British Academy Fellows of Jesus College, Cambridge Fellows of The Queen's College, Oxford British film historians English Jews Jewish emigrants from Nazi Germany to the United Kingdom People educated at King Henry VIII School, Coventry Alumni of Jesus College, Cambridge Alumni of Christ's College, Cambridge 2012 deaths Alumni of the University of Birmingham Taylor Professors of the German Language and Literature German emigrants to England German people of Polish-Jewish descent English people of Polish-Jewish descent English philologists English people of German-Jewish descent
2584969
https://en.wikipedia.org/wiki/Sect%C3%A9ra%20Secure%20Module
Sectéra Secure Module
Sectéra is a family of secure voice and data communications products made by General Dynamics Mission Systems which are approved by the United States National Security Agency. Devices in the family can use either the National Institute of Standards and Technology (NIST) Advanced Encryption Standard (AES) or SCIP to provide Type 1 encryption, with communication levels classified up to Top Secret. The devices are activated with a Personal Identification Number (PIN). Sectéra Secure Module The Sectéra Secure Module is a device that can provide encryption of both voice and data. It is used in the Sectéra Wireline Terminal, which works with standard PSTN devices, and has been incorporated into a slim module for use with a Motorola GSM cell phone. The module is placed between the battery and the body of the phone. The phone may be used as a regular GSM phone when the security module is not activated by the PIN. Sectéra Edge Another member of the Sectéra family, the Sectéra Edge, is a smart phone that supports both classified and unclassified voice and data communication, including access to the SIPRNET. The Sectéra Edge costs approximately $3,000. The Sectéra Edge was developed by General Dynamics, competing against a product by L-3, under an $18 million contract from the National Security Agency. Available from mid-2008, it came to be used by tens of thousands of employees in the intelligence community and the Defense, Homeland Security, and State Departments, among others. It was reported in 2009 that the Sectéra Edge could be the device to replace the BlackBerry of President Barack Obama in order to provide him with secure communications. Instead of the Sectéra Edge, however, a standard BlackBerry device was secured with the SecurVoice encryption software. The Sectéra Edge was discontinued in 2015. References External links Product Details - Sectéra Encryption devices General Dynamics Mission Systems
692713
https://en.wikipedia.org/wiki/The%20Honky%20Tonk%20Man
The Honky Tonk Man
Roy Wayne Farris (born January 25, 1953), better known by the ring name The Honky Tonk Man, is an American retired professional wrestler. He previously wrestled for World Championship Wrestling (WCW) and World Wrestling Federation (WWF, now WWE). He is best known for his first run with WWF, where he held the WWF Intercontinental Championship for a record 64 weeks before losing it to The Ultimate Warrior at the 1988 SummerSlam. He is the cousin of fellow professional wrestler and color commentator Jerry Lawler. Farris was inducted into the WWE Hall of Fame as part of the 2019 induction ceremony. Professional wrestling career Early career (1977–1984) Farris began his career in 1977 working in Malden, Missouri and wrestled alongside his training partner Koko B. Ware for promoter Henry Rogers. Farris then moved on to Memphis Wrestling in 1978, originally working as a jobber to the stars. He wrestled frequently in Birmingham, Dothan, Mobile, and Pensacola as "Dynamite" Wayne Farris. He achieved greater success when he teamed with Larry Latham to form The Blond Bombers, a pairing put together by Gerry Brisco in Florida Championship Wrestling. The Bombers were later given Sgt. Danny Davis as their manager when they came back to Memphis. The Blond Bombers were involved in heated feuds with several fan favorite teams across the two competing Tennessee promotions, appearing in both Nick Gulas's Nashville based territory, and Jerry Jarrett's Memphis area. Their signature moment was the now famous "Tupelo Concession Stand Brawl" against Jerry Lawler and Bill Dundee. He then had stints in the American Wrestling Association (AWA), Jim Crockett Promotions, World Wrestling Council (WWC), Southeastern Championship Wrestling, Southwest Championship Wrestling, National Wrestling Alliance and Stampede Wrestling through the early 1980s, winning multiple singles and tag team championships in each. Stampede Wrestling (1982–1986) Farris made his debut for Stampede Wrestling in Calgary in 1982, where the Honky Tonk Wayne gimmick was born. A takeoff on rock star Elvis Presley, the character sported slicked-back hair and sideburns and carried a guitar. Honky and Ron Starr won the Stampede Wrestling International Tag Team Championship in 1985 and 1986. He later teamed with Cuban Assassin to win the International Tag Team titles. On June 20, 1986 he defeated Bad News Allen for the Stampede North American Heavyweight Championship; the title was vacated when Honky left for WWF in 1986. World Wrestling Federation (1986–1991) Early run (1986–1987) Farris entered the World Wrestling Federation (WWF) in 1986 under the ring name The Honky Tonk Man. Honky made his televised WWF debut on the September 28, 1986 episode of Wrestling Challenge, defeating Terry Gibbs. Originally pushed as a fan favorite wrestler with an Elvis impersonator gimmick, Honky soon cut a series of promos with Jesse "The Body" Ventura that aired on the WWF's syndicated programming, asking fans for a "vote of confidence" while actually insulting them, in the manner of Andy Kaufman before him. The results predictably came back negative, and it was not long before Honky turned into a cocky villain and took on "Mouth of the South" Jimmy Hart (later billed "Colonel" as a reference to Elvis Presley's manager Colonel Tom Parker) as his manager. Honky's first major feud came against Jake "The Snake" Roberts, who was in the midst of a fan favorite turn. The feud intensified when Honky attacked Roberts on his talk show set, The Snake Pit.
According to Roberts, Honky was supposed to hit him with a gimmicked balsa wood guitar; he believes Farris accidentally grabbed a real, non-gimmicked fiberglass guitar and smashed it across Roberts' back, legitimately injuring him. According to Roberts, this started his dependence on prescription pain medication (in an interview, Roberts alleges that he was picking pieces of the guitar out of his back for weeks after he was hit). This has been disputed, as Roberts had been a known drug user years before this incident. However, in an interview for World Wrestling Insanity, Honky disputed Roberts' assertion saying, "That's not true and, in fact I attribute most of that to Mick Foley, who wrote about it in his book, and Jake, who lied about it", although television footage of the incident showed that the guitar did not break like a gimmicked one would have and that it took several more hits to Roberts' back for the guitar to break apart. Yet Roberts continued to wrestle regularly following this angle, bringing into question the alleged non-gimmicked guitar shot. During their feud, which culminated at WrestleMania III, Honky grabbed the ring ropes to score a tainted win; afterward, Roberts cleared the ring of Honky before he and Alice Cooper attacked Hart with Damien, Roberts' python. Intercontinental Heavyweight Champion (1987–1988) On the June 13, 1987 episode of Superstars, Honky defeated Ricky "The Dragon" Steamboat for the WWF Intercontinental Heavyweight Championship; Honky reversed Steamboat's inside cradle and grabbed onto the bottom ropes for extra leverage to get the pinfall win. Butch Reed was originally scheduled to win the title. Honky was originally meant to be a transitional champion who would hold the title for only a short period of time; however, Roberts failed several drug tests following WrestleMania, and Honky was booked to remain champion for what would be a record-setting run. In a later interview, Honky remarked that Hulk Hogan, with whom he then had a friendly, collaborative relationship outside the ring, had helped give Honky a chance at the title after a coincidental meeting between Hogan, Honky, and Vince McMahon took place. Hogan stuck up for Honky, even though McMahon had someone else in mind. To preserve his title, which could only be taken by pinfall or submission, Honky often got himself deliberately counted out or disqualified against challengers such as Steamboat, Billy Jack Haynes, Bruno Sammartino, and George "The Animal" Steele. Also during this time, Honky began using a 1950s-styled entrance theme performed by Farris (included on Piledriver: The Wrestling Album II, the WWF's second album of wrestling themes). By September 1987, "Macho Man" Randy Savage was in the midst of a fan favorite turn and began challenging Honky for the title (after Honky had made comments about himself being "the greatest Intercontinental Heavyweight Champion of all time" and making disparaging comments about former champions, particularly Savage). Although they had several matches beforehand – they had also met in 1986, when the then-villain Savage was champion and challenged by the fan favorite Honky – the first Savage-Honky match to air on national television was on the October 3, 1987 Saturday Night's Main Event XII, which was taped on September 23 in Hershey, Pennsylvania.
During that match, Savage nearly defeated Honky until Honky's allies in Jimmy Hart's stable, The Hart Foundation (who had interfered throughout the match), ran into the ring and attacked Savage, getting Honky disqualified. Savage's manager, Miss Elizabeth, attempted to stop the attack on Savage, but Honky shoved her down and she fled to the locker room; meanwhile, Honky completed his attempt to break his guitar over Savage's head. Shortly thereafter, Miss Elizabeth returned with Savage's former rival, Hulk Hogan, who aided Savage in running off the heels (leading to the formation of The Mega Powers). Honky continued his bitter feud against Savage, as Honky would frequently make advances toward Miss Elizabeth – including one such incident at the 1987 Slammy Awards – to agitate his challenger. The last high-profile Savage-Honky match, aired as part of the undercard to Hulk Hogan vs. Andre the Giant on the 1988 The Main Event I, saw Honky lose by countout after Savage rammed him into the ring post on the outside of the ring. Their feud was blown off in the weeks before WrestleMania IV through a series of tag team-style steel cage matches, involving various allies of both Honky and Savage on their respective sides and Savage usually emerging victorious. Honky retained the title in matches with Savage and Brutus "The Barber" Beefcake, Honky's next major rival. The Beefcake-Honky feud began at WrestleMania IV, where Jimmy Hart caused a disqualification by knocking out the referee with his megaphone while Beefcake had Honky in a sleeper hold; Honky retained the title, but Hart himself received a haircut from Beefcake. The feud continued during the spring and summer of 1988, with Honky vowing not to let Beefcake cut his ducktail hair, something Beefcake often said he would do in promos. In their matches, Honky was often seconded by a mysterious woman named Peggy Sue; while WWF Women's Champion Sherri Martel played the role for television tapings, more often than not, "Peggy Sue" was Jimmy Hart dressed in drag. Beefcake countered with a "woman" of his own: "Georgina" (George "The Animal" Steele in drag). Honky and Beefcake were scheduled to square off at the 1988 SummerSlam in what was billed as Beefcake's last shot at the now renamed Intercontinental Championship. However, in a storyline twist, Beefcake was thrust into a feud with "Outlaw" Ron Bass after Bass committed a sneak attack on Beefcake; the incident was aired the weekend before SummerSlam. At the event, it was announced that a "mystery opponent" would face Honky for the title. When it came time for the match, Honky grabbed the microphone and proclaimed that he did not care who his opponent was. The Ultimate Warrior then ran out and pinned his stunned opponent in just 31 seconds for the Intercontinental Championship, ending Honky's reign at 454 days. Honky had been the champion for one year, two months and 27 days – the longest Intercontinental Championship reign in history. Rhythm and Blues and departure (1989–1991) In 1989, Honky entered the Royal Rumble, where he was eliminated by Tito Santana and Bushwhacker Butch. In late 1989 and 1990, he and Greg Valentine, who was also managed by Jimmy Hart, aligned themselves as the tag team Rhythm and Blues. At WrestleMania VI, they notably rode in a pink Cadillac, with future WWE Hall of Famer Diamond Dallas Page as the driver.
After competing against such teams as The Hart Foundation at WrestleMania V and The Legion of Doom, Rhythm & Blues were part of Ted DiBiase's Million Dollar Team, along with DiBiase's "mystery" partner, the debuting Undertaker, against Dusty Rhodes' Dream Team of The Hart Foundation and Koko B. Ware at the 1990 Survivor Series, where they emerged victorious. Honky wrapped up his WWF career with a stint as a pro-villains color commentator alongside Vince McMahon and Roddy Piper on Superstars before leaving in January 1991. Independent circuit (1991–1994) After leaving the WWF, Honky worked the independent circuit. He wrestled his former partner Greg Valentine to a double disqualification for the United Wrestling Alliance on February 19, 1991. He lost to Don Muraco at Century Toyota on June 28, 1992. He then made a one-night appearance on November 11, 1993 for the United States Wrestling Association, losing to Jeff Jarrett by disqualification. World Championship Wrestling (1994) From summer to winter 1994, Honky wrestled for World Championship Wrestling, challenging Johnny B. Badd for the WCW World Television Championship until he left due to a dispute with management. In his book Controversy Creates Cash, Eric Bischoff stated that his favorite firing was that of Honky. Honky has responded by saying that it was an honor, as Bischoff had fired a number of people while in WCW until he got himself fired. Return to independent circuit (1995–1999) After an unsuccessful stint with WCW, Honky returned to the indies. In 1995 he wrestled for the National Wrestling Conference, facing former WWF stars Virgil, Ultimate Warrior, and Jake Roberts. He worked with the promotion until 1998. In 1996 he worked for the American Wrestling Federation, where he feuded with Koko B. Ware. In 1998 he worked for Elite Canadian Championship Wrestling (ECCW) in British Columbia, Canada. On May 5, 1999 he wrestled Michael Hayes in a losing effort for SSOW. Return to the WWF (1997–1998, 2001) After a brief stint in the American Wrestling Federation, Honky resurfaced in the WWF full-time in 1997 as a color commentator on Raw Is War, WWF Superstars, and Shotgun Saturday Night, and then as the manager of Billy Gunn, who had started a singles run. Under Honky's tutelage, Gunn became known as "Rockabilly", a short-lived and unsuccessful gimmick that Honky himself disliked. He then made an appearance in the 1998 Royal Rumble match, where he was eliminated by Vader. Honky returned to the WWF for a one-time appearance at the 2001 Royal Rumble, but was quickly eliminated by Kane after being hit on the head with his own guitar. Third return to World Wrestling Entertainment (2008–2013) Final matches (2008–2009) In 2008, Santino Marella announced his intention to break Honky's record for the longest Intercontinental Championship reign, usually displaying a special "Honk-a-meter" comparing Honky's 64-week record with the length of his own reign at the time. On the October 6 episode of Raw, Honky (now a fan favorite for the first time since 1986), along with Goldust and Roddy Piper, was named as one of the possible opponents for Marella's Intercontinental Championship at Cyber Sunday. He was elected by fans to challenge for the title with 35% of the vote; despite concern that his injured finger might require surgery, he did appear, winning the match by disqualification (thus failing to win the title). After the match had ended, Goldust and Piper came down to the ring and, along with Honky, attacked Marella.
On the October 27 episode of Raw, Honky appeared as a special guest commentator. After Charlie Haas performed an impersonation of Marella's on-screen girlfriend, Beth Phoenix, Haas was knocked into the announcer table, and Marella attacked Honky, prompting Piper and Goldust to block Marella's escape from the ring. Upon Goldust's entry to the ring, Marella turned around to be smashed over the head by Honky's guitar. Honky inducted Koko B. Ware into the WWE Hall of Fame on April 4, 2009. Sporadic appearances (2010–2013) In 2010, WWE offered him a place in the WWE Hall of Fame, but he rejected it. Honky made a brief appearance on Old School Raw on March 4, 2013. Following a match between the team of Brodus Clay and Tensai and 3MB, he smashed 3MB member Heath Slater over the head with a guitar. He then danced with Clay and Tensai to his signature "Cool, Cocky, Bad" theme song. Later career (2000–present) Since 2000, Honky has worked independent wrestling shows all over the world. Honky, along with Ryan Smith and a host of others, ran a series of controversial wrestling websites from 2000 to 2006. TheHonkyTonkMan.com featured frequent updates from Honky himself, a highly interactive message board community, an extensive photo gallery, audio updates, and more. Notable online feuds began between The Honky Tonk Man and Jerry Lawler, Roddy Piper, and others. These often intense online rivalries became a major drawing point for fans. The website unexpectedly closed without much explanation in December 2006. The site now forwards to various new ventures of former website manager Ryan Smith, who remains tight-lipped about the closing. Honky has wrestled for Southern Championship Wrestling in Castroville, Texas, and MSW in eastern Canada. On April 23, 2008, Honky was seen wrestling in Presque Isle, Maine for the North Atlantic American Wrestling Association promotion. He appeared on Heavy on Wrestling on June 14, 2008, in Superior, Wisconsin. He wrestled as a fan favorite, defeating Big Brody Hoofer and hitting Cameron Steele with a guitar. He also appeared at PDX Wrestling (the new-age Portland Wrestling, run by Sandy Barr's son Josh), teaming with a local fan favorite against two villains. On April 26, 2008, Honky was inducted into the XWF Hall of Fame by its creator Jack Blaze at their 2008 XWF Superbrawl event. XWF was later renamed LPW (Legends Pro Wrestling), where Honky is still honored in their Hall of Fame. On June 28, 2008 in Chicago Ridge, Illinois, he made a special guest appearance for Ring of Honor with the storyline that "Sweet N'Sour" Larry Sweeney had brought him on board with his Sweet N'Sour Inc. faction. He praised the crowd and was about to sing and dance for them until Sweeney stepped in and told him he would not be doing either until their demands were met. On July 27, 2008, Honky almost had the index finger of his right hand severed during a public appearance in Canada before an Ultimate Championship Wrestling show in Charlottetown, Prince Edward Island. He was making an appearance at Boston Pizza in Charlottetown several hours before the show when someone posed for a photo with him, the two men clashing guitars. When the guitars collided, the neck of Honky's guitar turned and sliced into his finger, almost severing it. Honky was immediately taken to Queen Elizabeth Hospital, where doctors stitched the finger and bandaged it. Honky made his appearance at the Ultimate Championship Wrestling show several hours later.
He was unable to wrestle his scheduled match due to the injury and was replaced by Trash Canyon, whom he managed from ringside. Honky, although injured and in obvious pain, sang his theme song twice in the ring. In August 2008, Honky appeared at Wrestling Supershows across Canada. Honky also made appearances in SWCW in Oklahoma City, Oklahoma. On October 24, 2008, he wrestled for Big Time Wrestling (his first match in four months), beating L'Empereur. On January 7, 2009, he appeared in a World Pro Wrestling event in Colusa, California, teaming with Doink The Clown (a new masked version) to face WPW World Tag Team Champions The First Class Express, Jerry Grey and Mighty Henrich. The match ended in a no contest as Doink turned on Honky and the three triple-teamed him. On May 7, 2009, Honky and Bushwacker Luke defeated "Kowboy" Mike Hughes and "Wildman" Gary Williams for the UCW Tag Team Championship. On January 31, 2011, Honky made his Dynamic Wrestling Alliance debut defeating Col. Jonathan James at the "Golden Opportunity II" event in Middletown, Ohio. On June 5, 2016, Honky wrestled in Impact Pro Wrestling in New Zealand, at the Armageddon Expo in Wellington. He teamed up with Brook Duncan and Britenay to defeat the team of the IPW New Zealand Heavyweight Champion Curt Chaos, Taylor Adams and Mr Burns. Honky made a cameo appearance in the first episode of season 3 of Lucha Underground, appearing as a prison warden who returns Dario Cueto's things upon his release. Hall of Fame (2019) On February 26, 2019, WWE confirmed that the Honky Tonk Man would join the WWE Hall of Fame class of 2019. He was inducted on April 7, 2019 by his former manager Jimmy Hart. Other media Honky appeared in the coin-operated arcade game WWF Superstars, which debuted in 1989. Honky appeared in an episode of the court-based show Judge Jeanine Pirro as a witness for the defendant; the episode, which aired on October 11, 2010, was also the highest-rated episode of Judge Jeanine Pirro of all time. Honky appeared in the video game WWE All Stars as a free downloadable character. He has also appeared in WWE 2K15 as part of a downloadable content pack and is in WWE 2K16 as an unlockable character from the special objectives. He was cast in John Wesley Norton's film Executive Ranks. Honky also appeared in Insane Clown Posse's music video for "How Many Times" along with The Bushwhackers and his former tag team partner Greg Valentine. Personal life Farris is a first cousin of Jerry Lawler and a first cousin once-removed of the late Brian Christopher. He is an avid golfer in his spare time. Farris' first marriage, to Judy Lynn Nuckolls, was brief, but he has been married to his current wife Tammy since 1984. He has lived in Gilbert, Arizona since June 1993. Although a kayfabe rival of Randy Savage, Farris had a professionally friendly relationship with him. Both men held the WWF Intercontinental Heavyweight Championship for over one year, with Farris' reign outlasting Savage's by a few weeks. Farris considers Harley Race to be the greatest professional wrestler of all time. Farris has stated that he donates his hair to Locks of Love once a year.
Championships and accomplishments All Pro Wrestling APW Universal Heavyweight Championship (1 time) Big Time Wrestling BTW Heavyweight Championship (1 time) Cauliflower Alley Club Men's Wrestling Award (2011) International Championship Wrestling ICW Heavyweight Championship (1 time) Legends Pro Wrestling XWF/LPW Hall of Fame (class of 2008) Mid-Eastern Wrestling Federation MEWF Heavyweight Championship (1 time) Mid-South Wrestling Association MSWA Tennessee Heavyweight Championship (1 time) NWA Mid-America/Continental Wrestling Association AWA Southern Tag Team Championship (4 times) – with Larry Latham (3) and Tojo Yamamoto (1) NWA Mid-America Tag Team Championship (3 times) – with Larry Latham North Atlantic Wrestling Association NAWA Tag Team Championship (1 time) – with Paul Hudson Northern States Wrestling Alliance NSWA Tag Team Championship (1 time) – with Greg Valentine Pro Wrestling Illustrated Ranked No. 159 of the top 500 singles wrestlers in the PWI 500 in 1992 Ring Masters Entertainment RME Heavyweight Championship (1 time) RME Tag Team Championship (1 time) - with Bobby Collins Southeastern Championship Wrestling NWA Alabama Heavyweight Championship (1 time) NWA Southeastern Heavyweight Championship (Northern Division) (1 time) NWA Southeastern Tag Team Championship (1 time) – with Ron Starr NWA Southeastern United States Junior Heavyweight Championship (1 time) Stampede Wrestling Stampede International Tag Team Championship (3 times) – with Ron Starr (2) and The Cuban Assassin (1) Stampede North American Heavyweight Championship (1 time) Ultimate Championship Wrestling UCW Tag Team Championship (1 time) – with Bushwhacker Luke Ultimate Championship Wrestling (Virginia) UCW Heavyweight Championship (1 time) Universal Wrestling Association UWA Heavyweight Championship (1 time) World Wrestling Council WWC Caribbean Heavyweight Championship (1 time) World Wrestling Federation/WWE WWF Intercontinental Championship (1 time) WWE Hall of Fame (Class of 2019) 1 During Honky Tonk Man's reign by mid-1988, the title was renamed the WWF Intercontinental Championship. References External links Tha O Show Episode 160 Honky Tonk Man Interview Honky Tonk Man's Interview with GENICKBRUCH.com Honky Tonk Man's 2nd Interview with GENICKBRUCH.com 1953 births American male professional wrestlers Elvis impersonators Living people People from Bolivar, Tennessee People from Gilbert, Arizona People from Malden, Missouri Professional wrestlers from Tennessee Professional wrestling managers and valets Stampede Wrestling alumni The First Family (professional wrestling) members WWE Hall of Fame inductees WWF/WWE Intercontinental Champions
4390324
https://en.wikipedia.org/wiki/NeuroDimension
NeuroDimension
NeuroDimension, Inc. was a software company that specialized in neural networks, adaptive systems, and genetic optimization, and made software tools for developing and implementing these artificial intelligence technologies. It was acquired by nDimensional, Inc. in 2016. NeuroSolutions is a general-purpose neural network development environment and TradingSolutions is a tool for developing trading systems based on neural networks and genetic algorithms. History Formation and NeuroSolutions Prior to its acquisition in 2016, NeuroDimension was a software development company headquartered in Gainesville, Florida, founded in 1991 by Steven Reid, MD, Jose Principe, PhD (Director of the Computational Neural Engineering Lab at the University of Florida) and Curt Lefebvre, PhD (CEO of nDimensional). Dr. Reid provided the initial capital to get the company off the ground. Dr. Principe provided the engineering staff with technical direction and helped secure research grant funding for the company. Dr. Lefebvre was the principal author of the company's core neural network technology. The company was formed around a software tool, NeuroSolutions, which enables engineers and researchers to model their data using neural networks. Financial Analysis and TradingSolutions In 1997, it became apparent that one of the most common uses of NeuroSolutions was to create neural network models to time the financial markets. Released in 2008, Trader68 handled the trading and distribution of trading signals from TradingSolutions, proprietary research, and other sources. In late 2015, Trader68 was discontinued and is no longer supported or actively developed. TradingSolutions was discontinued in 2016. nDimensional, Inc. Acquires NeuroDimension, Inc. In August 2016, nDimensional, Inc. announced the acquisition of NeuroDimension, Inc. to help bring its new web-based Platform-as-a-Service product, nD, to market more quickly. See also Artificial Neural Network Artificial Intelligence Adaptive system Embedded system Genetic algorithm Neural network software Technical Analysis Software companies based in Florida Companies established in 1991 1991 establishments in Florida Software companies of the United States
958486
https://en.wikipedia.org/wiki/VMac
VMac
vMac was an open source emulator for Mac OS on Windows, DOS, OS/2, NeXTSTEP, Linux, Unix, and other platforms. Although vMac has been abandoned, Mini vMac, an improved spinoff of vMac, is still actively developed. vMac and Mini vMac emulate a Macintosh Plus and can run Apple Macintosh System versions 1.1 to 7.5.5. vMac and Mini vMac support CPU emulation from the Motorola 68000 to the 68040, display output, sound, floppy disk insertion, HFV image files, and more. Some vMac ports include extra features such as CD-ROM support, basic serial port (SCC) support, Gemulator ROM board support, and various performance improvements. Although the website is still in operation, most vMac development slowed to a halt in 1999, and no official releases have been made since. Many of the developer e-mail addresses listed on the website are not currently working. Overview Mini vMac, vMac's spinoff, is still being maintained and developed by Paul C. Pratt. Currently Mini vMac supports the Macintosh 128K, 512K, 512Ke, Plus, SE and Classic, with active development for Macintosh II, Macintosh Portable and PowerBook 100 support. Due to complaints about the rarity of the original II, it also accepts Macintosh IIx and Macintosh SE/30 ROM files. The precompiled versions available for download at Mini vMac's SourceForge project emulate a Macintosh Plus with 4 MiB of RAM. vMac and Mini vMac require a Macintosh Plus ROM file and Macintosh system software to work. Macintosh ROM files are owned by Apple and cannot be legally distributed. However, the Windows and Unix ports of vMac (not Mini vMac) support the Gemulator ROM board from Emulators Inc., which allows users to add genuine MacPlus ROM chips to their x86 machine via an ISA expansion slot. This board can also support ROM chips from other early Macintosh systems, but the publicly released versions of vMac only supported the Macintosh Plus. Macintosh system software is available from Apple's Support Downloads Website (see External links below). As mentioned, Mini vMac also requires a specific ROM image for the computer emulation desired. A software application for these 68000 Macs may be downloaded from the Mini vMac website for retrieval of a system's ROM image, along with a complete tutorial for locating an old Mac, retrieving the ROM and working with disk images. See also Basilisk II, an emulator of later 68k Macs. Executor (software), an emulator/compatibility layer for early 68k Macs. SheepShaver, an emulator of early PowerPC Macs. PearPC, an emulator of PowerPC Macs, which can run Mac OS X and various open Unixes. External links vMac Mini vMac archive of Apple's Support Downloads Website Guide to setting up System 6 in Mini vMac for Windows Creating a Mac-on-Stick using Mini vMac Emulators, Inc. Macintosh platform emulators BeOS software Linux emulation software Classic Mac OS emulation software Windows emulation software 68k emulators Android emulation software
888572
https://en.wikipedia.org/wiki/Golden%20age%20of%20arcade%20video%20games
Golden age of arcade video games
The golden age of arcade video games was the period of rapid growth, technological development and cultural influence of arcade video games, from the late 1970s to the early 1980s. The period began with the release of Space Invaders in 1978, which led to a wave of shoot 'em up games such as Galaxian and the vector graphics-based Asteroids in 1979, made possible by new computing technology that had greater power and lower costs. Arcade video games transitioned from black-and-white to color, with titles such as Frogger and Centipede taking advantage of the visual opportunities of bright palettes. Video game arcades became a part of popular culture and a primary channel for new games. Video game genres were still being established, but included space-themed shooters such as Defender and Galaga, maze chase games which followed the design established by Pac-Man, driving and racing games which more frequently used 3D perspectives, and the beginning of what would later be called platform games touched off by Donkey Kong. Games began starring named characters, such as Pac-Man, Mario and Q*bert, and some of these characters crossed over into other media including songs, cartoons, and movies. The 1982 film Tron was closely tied to an arcade game of the same name. Relevant time period Although the exact years differ, most sources agree the period was from about the late 1970s to early 1980s. Technology journalist Jason Whittaker, in The Cyberspace Handbook, places the beginning of the golden age in 1978, with the release of Space Invaders. Video game journalist Steven L. Kent, in his book The Ultimate History of Video Games, places it at 1979 to 1983. The book pointed out that 1979 was the year that Space Invaders (which he credits for ushering in the golden age) was gaining considerable popularity in the United States, and the year that saw vector display technology, first seen in arcades in 1977 with Space Wars, rise to prominence via Atari's Asteroids. However, 1983 was the year that began "a fairly steady decline" in the coin-operated video game business and when many arcades started disappearing. Walter Day of Twin Galaxies places this period's beginning in the late 1970s, when color arcade games became more prevalent and arcade video games started appearing outside of their traditional bowling alley and bar locales, through to its ending in the mid-1980s. RePlay magazine in 1985 dated the arcade industry's "video boom" years from 1979 to 1982. The golden age of arcade games largely coincided with, and partly fueled, the second generation of game consoles and the microcomputer revolution. In contrast to most other sources, the History of Computing Project website places the golden age of video games between 1971 and 1983, covering the "mainstream appearance of video games as a consumer market" and "the rise of dedicated hardware systems and the origin of multi-game cartridge based systems". The project chose 1971 as an earlier start date for two reasons: the creator of Pong filed a pivotal patent regarding video game technology that year, and the first arcade video game machine, Computer Space, was released. Business The golden age was a time of great technical and design creativity in arcade games. The era saw the rapid spread of video arcades across North America, Europe, and Asia. The number of video game arcades in North America doubled between 1980 and 1982, reaching a peak of 10,000 video game arcades across the region (compared to 4,000 as of 1998).
Beginning with Space Invaders, video arcade games also started to appear in supermarkets, restaurants, liquor stores, gas stations and many other retail establishments looking for extra income. Video game arcades at the time became as common as convenience stores, while arcade games like Pac-Man and Space Invaders appeared in most locations across the United States, including even funeral homes. The sales of arcade video game machines increased significantly during this period, from $50 million in 1978 to $900 million in 1981, with 500,000 arcade machines sold in the United States at prices ranging as high as $3000 in 1982 alone. By 1982, there were 24,000 full arcades, 400,000 arcade street locations and 1.5 million arcade machines active in North America. The market was very competitive; the average life span of an arcade game was four to six months. Some games like Robby Roto failed because they were too complex to learn quickly, and others like Star Fire because they were too unfamiliar to the audience. Qix was briefly very popular but, Taito's Keith Egging later said, "too mystifying for gamers ... impossible to master and when the novelty wore off, the game faded". At around this time, the home video game industry (second-generation video game consoles and early home computer games) emerged as "an outgrowth of the widespread success of video arcades". In 1980, the U.S. arcade video game industry's revenue generated from quarters tripled to $2.8 billion. By 1981, the arcade video game industry in the United States was generating an annual revenue of over $5 billion, with some estimates as high as $10.5 billion for all video games (arcade and home) in the U.S. that year, which was three times the amount spent on movie tickets in 1981. The total revenue for the U.S. arcade video game industry in 1981 was estimated at more than $7 billion, though some analysts estimated the real amount may have been much higher. By 1982, video games accounted for 87% of the $8.9 billion in commercial games sales in the United States. In 1982, the arcade video game industry's revenue in quarters was estimated at $8 billion, surpassing the annual gross revenue of both pop music ($4 billion) and Hollywood films ($3 billion) combined that year. It also exceeded the revenues of all major sports combined at the time, earning three times the combined ticket and television revenues of Major League Baseball, basketball, and American football, as well as earning twice as much as all the casinos in Nevada combined. This was also more than twice as much revenue as the $3.8 billion generated by the home video game industry (during the second generation of consoles) that same year; both the arcade and home markets combined added up to a total revenue between $11.8 billion and $12.8 billion for the U.S. video game industry in 1982. In comparison, the U.S. video game industry in 2011 generated total revenues between $16.3 billion and $16.6 billion. Prior to the golden age, pinball machines were more popular than video games. The pinball industry reached a peak of 200,000 machine sales and $2.3 billion revenue in 1979, which had declined to 33,000 machines and $464 million in 1982. In comparison, the best-selling arcade games of the golden age, Space Invaders and Pac-Man, had sold over 360,000 and 400,000 cabinets, respectively, with each machine costing between $2000 and $3000 (specifically $2400 in Pac-Man's case).
In addition, Space Invaders had grossed $2 billion in quarters by 1982, while Pac-Man had grossed over $1 billion by 1981 and $2.5 billion by the late 1990s. In 1982, Space Invaders was considered the highest-grossing entertainment product of its time, with comparisons made to the then highest-grossing film Star Wars, which had grossed $486 million, while Pac-Man is today considered the highest-grossing arcade game of all time. Many other arcade games during the golden age also had hardware unit sales at least in the tens of thousands, including Ms. Pac-Man with over 115,000 units, Asteroids with 70,000, Donkey Kong with over 60,000, Defender with 55,000, Galaxian with 40,000, Donkey Kong Junior with 35,000, Mr. Do! with 30,000, and Tempest with 29,000 units. A number of arcade games also generated revenues (from quarters) in the hundreds of millions, including Defender with more than $100 million in addition to many more with revenues in the tens of millions, including Dragon's Lair with $48 million and Space Ace with $13 million. The most successful arcade game companies of this era included Taito (which ushered in the golden age with the shooter game Space Invaders and produced other successful arcade action games such as Gun Fight and Jungle King), Namco (the Japanese company that created Galaxian, Pac-Man, Pole Position and Dig Dug) and Atari (the company that introduced video games into arcades with Computer Space and Pong, and later produced Asteroids). Other companies such as Sega (who later entered the home console market against its former arch rival, Nintendo), Nintendo (whose mascot, Mario, was introduced in 1981's Donkey Kong as "Jumpman"), Bally Midway Manufacturing Company (which was later purchased by Williams), Cinematronics, Konami, Centuri, Williams and SNK also gained popularity around this era. During this period, Japanese video game manufacturers became increasingly influential in North America. By 1980, they had become very influential through licensing their games to American manufacturers. Jonathan Greenberg of Forbes predicted in early 1981 that Japanese companies would eventually dominate the North American video game industry, as American video game companies were increasingly licensing products from Japanese companies, who in turn were opening up North American branches. By 1982-1983, Japanese manufacturers had more directly captured a large share of the North American arcade market, which Gene Lipkin of Data East USA partly attributed to Japanese companies having more finances to invest in new ideas. Technology Arcades catering to video games began to gain momentum in the late 1970s, with Space Invaders (1978) followed by games such as Asteroids (1979) and Galaxian (1979). Arcades became more widespread in 1980 with Pac-Man, Missile Command and Berzerk, and in 1981 with Defender, Donkey Kong, Frogger and others. The central processing unit (CPU) microprocessors in these games allowed for more complexity than earlier transistor-transistor logic (TTL) discrete circuitry games such as Atari's Pong (1972). The arcade boom that began in the late 1970s is credited with establishing the basic techniques of interactive entertainment and for driving down hardware prices to the extent of allowing the personal computer (PC) to become a technological and economic reality. 
While color monitors had been used by several earlier racing video games (such as Indy 800 and Speed Race Twin), it was during this period that RGB color graphics became widespread, following the release of Galaxian in 1979. Galaxian introduced a tile-based video game graphics system, which reduced processing and memory requirements by up to 64 times compared with the framebuffer system previously used by Space Invaders (a rough illustration of this saving appears in the sketch at the end of this subsection). This allowed Galaxian to render multi-color sprites animated atop a scrolling starfield backdrop, and provided the basis for the hardware Nintendo developed for arcade games such as Radar Scope (1980) and Donkey Kong, and later for the Nintendo Entertainment System console.

The golden age also saw developers experimenting with vector displays, which produced crisp lines that could not be duplicated by raster displays. A few of these vector games became great hits, such as 1979's Asteroids, 1980's Battlezone and Tempest, and 1983's Star Wars from Atari. However, vector technology fell out of favor with arcade game companies due to the high cost of repairing vector displays.

Several developers were also experimenting with pseudo-3D and stereoscopic 3D using 2D sprites on raster displays. In 1979, Nintendo's Radar Scope introduced a three-dimensional third-person perspective to the shoot 'em up genre, later imitated by shooters such as Konami's Juno First and Activision's Beamrider in 1983. In 1981, Sega's Turbo became the first racing game to feature a third-person rear-view format and to use sprite scaling with full-color graphics. Namco's Pole Position featured an improved rear-view racer format in 1982 that became the standard for the genre; the game provided a perspective view of the track, with its vanishing point swaying from side to side as the player approached corners, convincingly simulating forward movement into the distance. That same year, Sega released Zaxxon, which introduced the use of isometric graphics and shadows, and SubRoc-3D, which introduced stereoscopic 3D through a special eyepiece.

This period also saw significant advances in digital audio technology. Space Invaders in 1978 was the first game to use a continuous background soundtrack, four simple chromatic descending bass notes repeating in a loop; the soundtrack was dynamic, changing tempo during stages. Rally-X in 1980 was the first game to feature continuous background music generated by a dedicated sound chip, a Namco 3-channel PSG. That same year saw the introduction of speech synthesis, first used in Stratovox, released by Sun Electronics in 1980, and soon after in Namco's King & Balloon.

Developers also experimented with laserdisc players for delivering full-motion-video games with movie-quality animation. The first laserdisc video game to exploit this technology was 1983's Astron Belt from Sega, soon followed by Dragon's Lair from Cinematronics; the latter was a sensation when it was released (indeed, the laserdisc players in many machines broke from overuse). While laserdisc games were usually either shooter games with full-motion video backdrops, like Astron Belt, or interactive movies, like Dragon's Lair, Data East's 1983 game Bega's Battle introduced a new form of video game storytelling: brief full-motion video cutscenes that developed a story between the game's shooting stages, an approach that years later became the standard for video game storytelling. By the mid-1980s the laserdisc genre had dwindled in popularity, as laserdiscs were losing out to the VHS format and the games themselves were losing their novelty.
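The "up to 64 times" figure quoted above for Galaxian's tile-based graphics can be sanity-checked with a little arithmetic: a tile map stores one index per 8×8-pixel tile instead of one value per pixel. The short Python sketch below illustrates the idea; the screen resolution, tile size and one-byte-per-pixel framebuffer are illustrative assumptions typical of hardware of the era, not sourced specifications of Galaxian's actual board.

# Illustrative comparison of framebuffer vs. tile-map memory use.
# All dimensions here are assumptions for illustration, not hardware specs.
WIDTH, HEIGHT = 256, 224            # assumed screen resolution in pixels
TILE = 8                            # assumed tile size: 8x8 pixels

# A framebuffer stores every pixel individually (assume 1 byte per pixel).
framebuffer_bytes = WIDTH * HEIGHT

# A tile map stores one index per 8x8 tile; the tile artwork itself lives
# in a small shared pattern table that is reused across the screen.
tilemap_bytes = (WIDTH // TILE) * (HEIGHT // TILE)

print(framebuffer_bytes)                   # 57344
print(tilemap_bytes)                       # 896
print(framebuffer_bytes // tilemap_bytes)  # 64

Since each tile index stands in for 64 pixels, the tile map needs roughly 1/64 of the entries of a per-pixel framebuffer, which is the source of the figure quoted above.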
16-bit processors began appearing in several arcade games during this era. Universal's Get A Way (1978) was a sit-down racing game built around a 16-bit CPU and was advertised as the first game to use a 16-bit microcomputer. Another racing game, Namco's Pole Position (1982), used the 16-bit Zilog Z8000 processor, and Atari's Food Fight (1983) was one of the earliest games to use the Motorola 68000.

3D computer graphics began appearing in several arcade games toward the end of the golden age. Funai's Interstellar, a laserdisc game introduced at Tokyo's Amusement Machine Show (AM Show) in September 1983, demonstrated pre-rendered 3D computer graphics, while Simutrek's Cube Quest, another laserdisc game introduced at the same show, combined laserdisc animation with real-time 3D computer graphics. Star Rider, introduced by Williams Electronics at the Amusement & Music Operators Association (AMOA) show in October 1983, also demonstrated pre-rendered 3D graphics. Atari's I, Robot, developed and released in 1984, was the first arcade game rendered entirely with real-time 3D computer graphics.

Gameplay

Space Invaders (1978) established the "multiple life, progressively difficult level paradigm" used by many classic arcade games. Its designer, Tomohiro Nishikado of Taito, drew inspiration from Atari's block-breaker game Breakout (1976) and from several works of science fiction. Nishikado added several interactive elements that he found lacking in earlier video games, such as enemies that reacted to the player's movement and fired back, with the game ending when the enemies killed the player (either by hitting the player's ship or by reaching the bottom of the screen) rather than when a timer ran out. In contrast to earlier arcade games, which often used a timer, Space Invaders introduced "the concept of going round after round". It also gave the player multiple lives per game and saved the high score, and it had a basic story with animated characters and a "crescendo of action and climax", which, according to Eugene Jarvis, laid the groundwork for later video games.

With the enormous success of Space Invaders, dozens of developers jumped into the development and manufacturing of arcade video games. Some simply copied the "invading alien hordes" idea of Space Invaders and turned out successful imitators such as Namco's Galaxian and Galaga, which extended the fixed shooter genre with new gameplay mechanics, more complex enemy patterns and richer graphics. Galaxian introduced a "risk-reward" concept, while Galaga was one of the first games with a bonus stage. Sega's 1980 release Space Tactics was an early first-person space combat game with multi-directional scrolling as the player moved the crosshairs on the screen.

Others tried new concepts and defined new genres. Rapidly evolving hardware enabled new kinds of games with different styles of gameplay. The term "action games" came into use in the early 1980s for a new genre of character action games emerging from Japanese arcade developers, who drew inspiration from manga and anime culture. According to Eugene Jarvis, these new character-driven Japanese action games emphasized "character development, hand-drawn animation and backgrounds, and a more deterministic, scripted, pattern-type" of play.
Terms such as "action games" and "character games" came to distinguish these new character-driven action games from the space shooters that had previously dominated the video game industry, and the emphasis on character-driven gameplay in turn enabled a wider variety of subgenres. In 1980, Namco released Pac-Man, which popularized the maze chase genre, and Rally-X, which featured a radar tracking the player's position on the map. Pioneering 1981 releases such as Donkey Kong and Qix introduced new types of games in which skill and timing mattered more than shooting as fast as possible, with Nintendo's Donkey Kong in particular setting the template for the platform game genre. The two most popular genres of the golden age were space shooters and character action games.

While Japanese developers were creating the character-driven action genre in the early 1980s, American developers largely took a different approach to game design. According to Eugene Jarvis, American arcade developers focused mainly on space shooters from the late 1970s to the early 1980s; they were greatly influenced by Japanese space shooters but took the genre in a different direction, away from the "more deterministic, scripted, pattern-type" gameplay of Japanese games and toward a "programmer-centric design culture, emphasizing algorithmic generation of backgrounds and enemy dispatch" and "an emphasis on random-event generation, particle-effect explosions and physics", as seen in arcade games such as his own Defender (1981) and Robotron: 2084 (1982), as well as Atari's Asteroids (1979).

Namco's Bosconian (1981) introduced a free-roaming style of gameplay in which the player's ship moves freely across open space, with a radar tracking player and enemy positions. As noted above, Data East's Bega's Battle (1983) introduced brief full-motion video cutscenes to develop a story between the game's shooting stages. Other examples of innovative games include Atari Games' Paperboy (1984), in which the goal is to deliver newspapers to customers, and Namco's Phozon, in which the object is to duplicate a shape shown in the middle of the screen. The theme of Exidy's Venture is dungeon exploration and treasure-gathering, while Q*bert plays upon the player's sense of depth perception to deliver a novel experience.

Popular culture

Some games of this era were so popular that they entered popular culture. The first to do so was Space Invaders. The game was so popular upon its release in 1978 that an urban legend blamed it for a national shortage of 100-yen coins in Japan, supposedly forcing a production increase of coins to meet demand (although 100-yen coin production was actually lower in 1978 and 1979 than in previous or subsequent years, and the claim does not withstand logical scrutiny: arcade operators would have emptied their machines and taken the money to the bank, keeping the coins in circulation). The game soon had a similar impact in North America, where it has appeared in or been referenced by numerous facets of popular culture. Soon after its release, hundreds of favorable articles and stories about the emerging video game medium aired on television and appeared in newspapers and magazines. The Space Invaders Tournament held by Atari in 1980 was the first video game competition; it attracted more than 10,000 participants and established video gaming as a mainstream hobby.
By 1980, 86% of the United States population aged 13–20 had played arcade video games, and by 1981 there were more than 35 million gamers visiting video game arcades in the United States.

The game that most affected popular culture in North America was Pac-Man. Released in 1980, it caused such a sensation that it initiated what is now referred to as "Pac-Mania" (which later became the title of the last coin-operated game in the series, released in 1987). Released by Namco, the game featured a yellow, circle-shaped creature eating dots in a maze while avoiding pursuing enemies. Though no one could agree on what the "hero" or the enemies represented (they were variously referred to as ghosts, goblins or monsters), the game was extremely popular. It spawned an animated television series, numerous clones, Pac-Man-branded foods, toys, and a hit pop song, "Pac-Man Fever". The game's popularity was such that President Ronald Reagan congratulated a player for setting a record score in Pac-Man. Pac-Man was also responsible for expanding the arcade game market by attracting large numbers of female players across all age groups. Though many popular games quickly entered the lexicon of popular culture, most have since left it; Pac-Man is unusual in remaining a recognized term in popular culture, along with Space Invaders, Donkey Kong, Mario and Q*bert.

Seen as an additional source of revenue, arcade video games began popping up outside dedicated arcades: in bars, restaurants, movie theaters, convenience stores, laundromats, gas stations, supermarkets, airports, and even dentists' and doctors' offices. ShowBiz Pizza and Chuck E. Cheese were founded specifically as restaurants built around featuring the latest arcade titles. In 1983, an animated television series produced for Saturday mornings, Saturday Supercade, featured video game characters of the era, such as Frogger, Donkey Kong, Q*bert, Donkey Kong Jr., Kangaroo, Space Ace, and Pitfall Harry.

Arcade games also affected the music industry, whose revenues had declined by $400 million between 1978 and 1981 (from $4.1 billion to $3.7 billion), a decrease directly credited to the rise of arcade games. Successful songs based on video games also began appearing. The pioneering electronic music band Yellow Magic Orchestra (YMO) sampled Space Invaders sounds in its 1978 self-titled album and in the hit single "Computer Game" from the same album, the latter selling over 400,000 copies in the United States. YMO in turn had a major influence on much of the video game music produced during the 8-bit and 16-bit eras. Other pop songs based on Space Invaders soon followed, including "Disco Space Invaders" (1979) by Funny Stuff, "Space Invaders" (1980) by Player One (known as Playback in the US), and the hit songs "Space Invader" (1980) by The Pretenders and "Space Invaders" (1980) by Uncle Vic; Player One's track in turn provided the bassline for Jesse Saunders' "On and On" (1984), the first Chicago house music track. The song "Pac-Man Fever" reached No. 9 on the Billboard Hot 100 and sold over a million singles in 1982, while the album Pac-Man Fever also sold over a million records, both receiving Gold certifications. That same year, R. Cade and the Video Victims produced an arcade-inspired album, Get Victimized, featuring songs such as "Donkey Kong".
In 1984, former YMO member Haruomi Hosono produced an album made entirely from Namco arcade game samples, entitled Video Game Music, an early example of a chiptune record and the first video game music album. Arcade game sounds also had a strong influence on the hip hop, pop (particularly synth-pop) and electro genres of the early 1980s. The booming success of video games led the music magazine Billboard to list the 15 top-selling video games alongside its record charts by 1982. More than a decade later, the first electroclash record, I-F's "Space Invaders Are Smoking Grass" (1997), was described as "burbling electro in a vocodered homage to Atari-era hi-jinks", particularly to Space Invaders, after which it was named.

Arcade games also influenced the film industry. Beginning with Space Invaders, arcade games began appearing in many movie theaters, and early films based on video games were produced, most notably Tron, which grossed over $33 million in 1982 and launched the Tron franchise, including a video game adaptation that grossed more than the film itself. Other films based on video games included the 1983 films WarGames (in which Matthew Broderick plays Galaga at an arcade), Nightmares and Joysticks, and the 1984 films The Last Starfighter and Cloak & Dagger (in which an Atari 5200 cartridge implausibly containing the eponymous arcade game becomes the film's MacGuffin). Arcades also appeared in many other films of the time, such as Dawn of the Dead (1978, where characters play Gun Fight and F-1), Midnight Madness (1980), Take This Job and Shove It and Puberty Blues in 1981, the 1982 releases Rocky III, Fast Times at Ridgemont High, Koyaanisqatsi and The Toy, the 1983 releases Psycho II, Spring Break, Terms of Endearment and Never Say Never Again, the 1984 releases Footloose, The Karate Kid (where Elisabeth Shue plays Pac-Man), The Terminator, Night of the Comet and The Adventures of Buckaroo Banzai Across the 8th Dimension, the 1985 releases The Goonies and The Boys Next Door, and the 1986 films Ferris Bueller's Day Off, Something Wild, The Color of Money and Psycho III (where Norman Bates stands next to a Berzerk cabinet). Over the Top and Can't Buy Me Love also showcase several arcade game cabinets.

In more recent years, critically acclaimed documentaries about the golden age of arcade games have been released, such as The King of Kong: A Fistful of Quarters (2007) and Chasing Ghosts: Beyond the Arcade (2007). Since 2010, many arcade-related features or films trading on 1980s nostalgia have been released, including Tron: Legacy (2010), Wreck-It Ralph (2012), Pixels (2015), Everybody Wants Some!! (2016), Summer of 84 (2018) and Ready Player One (2018), the last based on the novel by Ernest Cline and directed by Steven Spielberg. Television shows and streaming series have also featured arcade games, including The Goldbergs and the Netflix series Stranger Things (both of which feature Dragon's Lair among other games).

Strategy guides

The period saw the emergence of a gaming media: publications dedicated to video games, in the form of video game journalism and strategy guides. The enormous popularity of video arcade games led to the very first video game strategy guides, which discussed in detail the patterns and strategies of each game, including variations, to a degree that few guides since have matched; these guides are rare finds today. "Turning the machine over"—making the score counter overflow and reset to zero—was often the final challenge of a game for those who had mastered it, and the last obstacle to claiming the highest score.
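"Turning the machine over" is simple fixed-width arithmetic: a score counter with a fixed number of digits wraps back to zero once its maximum value is exceeded. A minimal Python sketch follows; the six-digit display is an assumption chosen for illustration, not the specification of any particular machine.

# Sketch of score rollover on a fixed-width decimal display.
# The digit count is an illustrative assumption, not a hardware spec.
DIGITS = 6                    # assume a six-digit score display
ROLLOVER = 10 ** DIGITS       # the counter wraps at 1,000,000 points

def displayed_score(true_score: int) -> int:
    """Return the score as it would appear on the fixed-width display."""
    return true_score % ROLLOVER

print(displayed_score(999_950))    # 999950: on the verge of turning over
print(displayed_score(1_000_150))  # 150: the counter has wrapped past zero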
"Turning the machine over"—making the score counter overflow and reset to zero—was often the final challenge of a game for those who mastered it, and the last obstacle to getting the highest score. Some of these strategy guides sold hundreds of thousands of copies at prices ranging from $1.95 to $3.95 in 1982 (equivalent to between $ and $ in ). That year, Ken Uston's Mastering Pac-Man sold 750,000 copies, reaching No. 5 on B. Dalton's mass-market bestseller list, while Bantam's How to Master the Video Games sold 600,000 copies, appearing on The New York Times mass-market paperback list. By 1983, 1.7 million copies of Mastering Pac-Man had been printed. List of popular arcade games The games below are some of the most popular and/or influential games of the era. List of best-selling arcade games For arcade games, success was usually judged by either the number of arcade hardware units sold to operators, or the amount of revenue generated, from the number of coins (such as quarters or 100 yen coins) inserted into machines, and/or the hardware sales (with arcade hardware prices often ranging from $1000 to $4000). This list only includes arcade games that have sold more than 10,000 hardware units. Space Invaders (750,000) Pac-Man (400,000) Donkey Kong (132,000) Ms. Pac-Man (125,000) Asteroids (100,000) Defender (70,000) Centipede (55,988) Galaxian (50,000 in the US) Hyper Olympic (Track & Field) (38,000 in Japan) Donkey Kong Jr. (30,000 in the US) Karate Champ (30,000 in the US) Mr. Do! (30,000 in the US) Tempest (29,000) Q*bert (25,000) Robotron: 2084 (23,000) Dig Dug (22,228 in the US) Pole Position (21,000 in the US) Popeye (20,000 in the US) Missile Command (20,000) Jungle Hunt (18,000 in the US) Dragon's Lair (16,000) Berzerk (15,780) Scramble (15,136 in the US) Battlezone (15,122) Champion Baseball (15,000 in Japan) Stargate (15,000) Star Wars (12,695) Super Cobra (12,337 in the US) Space Duel (12,038) Atari Football (11,306) Gee Bee (10,000) Decline and aftermath The golden age cooled around the mid-1980s as copies of popular games began to saturate the arcades. Arcade video game revenues in the United States had declined from $8 billion in 1981 to $5 billion in 1983, reaching a low of $4 billion in 1984. The arcade market had recovered by 1986, with the help of software conversion kits, the arrival of popular beat 'em up games (such as Kung-Fu Master and Renegade), and advanced motion simulator games (such as Sega's "taikan" games including Hang-On, Space Harrier, and Out Run). Arcades remained commonplace through to the 1990s as there were still new genres being explored. In 1987, arcades experienced a short resurgence with Double Dragon, which started the golden age of beat 'em up games, a genre that peaked in popularity with Final Fight two years later. In 1988, arcade game revenues in the United States rose back to $6.4 billion, largely due to the rising popularity of violent action games in the beat 'em up and run and gun shooter genres. However, the growth of home video game systems such as the Nintendo Entertainment System led to another brief arcade decline toward the end of the 1980s. In the early 1990s, the Genesis (Mega Drive outside most of North America) and Super NES (Super Famicom in Japan) greatly improved home play and some of their technology was even integrated into a few video arcade machines. 
In the early 1990s, the release of Capcom's Street Fighter II established the modern style of fighting games and led to a number of similar games, resulting in a renaissance for the arcades. Another factor was realism, including the "3D Revolution" from 2D and pseudo-3D graphics to true real-time 3D polygon graphics, largely driven by a technological arms race between Sega and Namco. By the early 2000s, however, sales of arcade machines in North America had declined, with 4,000 units sold considered a hit at the time. One cause of the decline was new generations of video game consoles and personal computers that sapped interest from arcades.

Since the 2000s, arcade games have taken different routes globally. In the United States, arcades have become niche markets competing with the home console market, and they have adopted other business models, such as providing other entertainment options or adding prize redemption. In Japan, some arcades have continued to survive into the early 21st century, with games such as Dance Dance Revolution and The House of the Dead tailored to experiences that players cannot easily have at home.

Legacy

The golden age of video arcade games spawned numerous cultural icons and even gave some companies their identity. Elements from games such as Space Invaders, Pac-Man, Donkey Kong, Frogger and Centipede are still recognized in today's popular culture, and new entries in the franchises of some golden age games continued to be released decades later. Pac-Man and Dragon's Lair joined Pong in permanent display at the Smithsonian in Washington, D.C., in recognition of their cultural impact in the United States; no other video game has been inducted since. Emulators such as the Internet Archive's Virtual Arcade can run these classic games inside a web browser on a modern computer: as computers have grown faster in line with Moore's law, JavaScript emulators can now run copies of the original ROMs without the code having to be ported to new systems.

See also

Arcade cabinet
List of arcade video games

References

Further reading

The Official Price Guide to Classic Video Games by David Ellis (2004)

External links

The KLOV Top Video Games Lists by Greg McLemore and friends
Reference to the term 'Golden Age'
The Dot Eaters, Videogame History 101
Internet Archive, Virtual Arcade

Arcade video games
History of video games