370882
https://en.wikipedia.org/wiki/Programming%20tool
Programming tool
A programming tool or software development tool is a computer program that software developers use to create, debug, maintain, or otherwise support other programs and applications. The term usually refers to relatively simple programs that can be combined to accomplish a task, much as one might use multiple hand tools to fix a physical object. The most basic tools are a source code editor and a compiler or interpreter, which are used ubiquitously and continuously. Other tools are used more or less depending on the language, development methodology, and individual engineer, and are often used for a discrete task, like a debugger or profiler. Tools may be discrete programs, executed separately – often from the command line – or may be parts of a single large program, called an integrated development environment (IDE). In many cases, particularly for simpler use, simple ad hoc techniques are used instead of a tool, such as print debugging instead of using a debugger, manual timing (of an overall program or a section of code) instead of a profiler, or tracking bugs in a text file or spreadsheet instead of a bug tracking system. The distinction between tools and applications is murky. For example, developers use simple databases (such as a file containing a list of important values) all the time as tools. However, a full-blown database is usually thought of as an application or software in its own right. For many years, computer-assisted software engineering (CASE) tools were sought after. Successful tools have proven elusive. In one sense, CASE tools emphasized design and architecture support, such as for UML. But the most successful of these tools are IDEs.
Uses of programming tools
Translating from human to computer language
Modern computers are very complex, and in order to productively program them, various abstractions are needed. For example, rather than writing down a program's binary representation, a programmer will write a program in a programming language like C, Java or Python. Programming tools like assemblers, compilers and linkers translate a program from a human-writable and human-readable source language into the bits and bytes that can be executed by a computer. Interpreters interpret the program on the fly to produce the desired behavior. These programs perform many well-defined and repetitive tasks that would nonetheless be time-consuming and error-prone when performed by a human, like laying out parts of a program in memory and fixing up the references between parts of a program, as a linker does. Optimizing compilers, on the other hand, can perform complex transformations on the source code in order to improve the execution speed or other characteristics of a program. This allows a programmer to focus more on higher-level, conceptual aspects of a program without worrying about the details of the machine it is running on.
Making program information available for humans
Because of the high complexity of software, it is not possible to understand most programs at a single glance, even for the most experienced software developer. The abstractions provided by high-level programming languages also make it harder to understand the connection between the source code written by a programmer and the actual program's behaviour. In order to find bugs in programs and to prevent creating new bugs when extending a program, a software developer uses programming tools to visualize all kinds of information about programs.
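The ad hoc techniques mentioned in the introduction above can be made concrete with a minimal C sketch (the function work and the numbers in it are placeholders invented for illustration, not something from the article): a printf of an intermediate value stands in for a debugger, and bracketing a section of code with clock() calls stands in for a profiler.

#include <stdio.h>
#include <time.h>

/* Placeholder for whatever computation is being investigated. */
static long work(void) {
    long sum = 0;
    for (long i = 0; i < 10000000L; i++)
        sum += i % 7;
    return sum;
}

int main(void) {
    clock_t start = clock();                  /* manual timing instead of a profiler */
    long result = work();
    clock_t end = clock();

    printf("work() returned %ld\n", result);  /* print debugging instead of a debugger */
    printf("elapsed CPU time: %.3f s\n", (double)(end - start) / CLOCKS_PER_SEC);
    return 0;
}

Techniques like these are quick to apply but do not scale; the dedicated tools described in the rest of this article automate the same kinds of observations.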
For example, a debugger allows a programmer to extract information about a running program in terms of the source language used to program it. The debugger can compute the value of a variable in the source program from the state of the concrete machine by using information stored by the compiler. Memory debuggers can directly point out questionable or outright wrong memory accesses of running programs, which may otherwise remain undetected and are a common source of program failures.
List of tools
Software tools come in many forms:
Binary compatibility analysis tools
Bug databases: Comparison of issue tracking systems – including bug tracking systems
Build tools: Build automation, List of build automation software
Call graph
Code coverage: Code coverage#Software code coverage tools
Code review: List of tools for code review
Code sharing sites: Freshmeat, Krugle, SourceForge, GitHub. See also Code search engines.
Compilation and linking tools: GNU toolchain, gcc, Microsoft Visual Studio, CodeWarrior, Xcode, ICC
Debuggers: Debugger#List of debuggers. See also Debugging.
Disassemblers: generally reverse-engineering tools
Documentation generators: Comparison of documentation generators, help2man, Plain Old Documentation, asciidoc
Formal methods: mathematical techniques for specification, development and verification
GUI interface generators
Library interface generators: SWIG
Integration tools
Memory debuggers: frequently used in programming languages (such as C and C++) that allow manual memory management and thus the possibility of memory leaks and other problems; they are also useful for optimizing the efficiency of memory usage. Examples: dmalloc, Electric Fence, Insure++, Valgrind (an illustrative fragment appears at the end of this article)
Parser generators: Parsing#Parser development software
Performance analysis or profiling: List of performance analysis tools
Revision control: List of revision control software, Comparison of revision control software
Scripting languages: PHP, Awk, Perl, Python, REXX, Ruby, Shell, Tcl
Search: grep, find
Source code clone/duplication finding: Duplicate code#Tools
Source code editor
Text editors: List of text editors, Comparison of text editors
Source code formatting: indent, pretty-printers, beautifiers, minifiers
Source code generation tools: Automatic programming#Implementations
Static code analysis: lint, List of tools for static code analysis
Unit testing: List of unit testing frameworks
IDEs
Integrated development environments combine the features of many tools into one package. For example, they make it easier to do specific tasks, such as searching for content only in the files of a particular project. IDEs may, for example, be used for development of enterprise-level applications. Different aspects of IDEs for specific programming languages can be found in this comparison of integrated development environments.
See also
Computer aided software engineering tools
Computer science
Configuration System
Scripting language
Software development kit
Software engineering and list of software engineering topics
Software systems
Toolkits for User Innovation
References
Software Development Tools for Petascale Computing Workshop 2007
External links
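Returning to the memory debuggers listed above: the short C fragment below (written for this article as an illustration, not taken from any tool's documentation) contains two classic defects that such tools are designed to catch. A checker in the style of Valgrind would typically report the one-byte heap overflow and the leaked allocation, even though the program appears to run normally.

#include <stdlib.h>
#include <string.h>

int main(void) {
    char *buf = malloc(8);
    if (buf == NULL)
        return 1;

    /* Heap overflow: "12345678" needs 9 bytes including the
       terminating '\0', but only 8 were allocated. */
    strcpy(buf, "12345678");

    /* Memory leak: buf is never freed before the program exits. */
    return 0;
}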
2908867
https://en.wikipedia.org/wiki/Graphical%20Data%20Display%20Manager
Graphical Data Display Manager
GDDM (Graphical Data Display Manager) is a computer graphics system for the IBM System/370 which was developed in IBM's Hursley lab, and first released in 1979. GDDM was originally designed to provide programming support for the IBM 3279 colour display terminal and the associated 3287 colour printer. The 3279 was a colour graphics terminal designed to be used in a general business environment. GDDM was extended in the early 1980s to provide graphics support for all of IBM's display terminals and printers, and ran on all of IBM's mainframe operating systems. GDDM also provided support for the (then current) international standards for interactive computer graphics: GKS and PHIGS. Both GKS and PHIGS were designed around the requirements of CAD systems. GDDM is also available on the IBM i midrange operating system, as well as its predecessor, the AS/400. GDDM comprises a number of components: Graphics primitives - lines, circles, boxes etc. Graphing - through the Presentation Graphics Feature (PGF) Language support - PL/I, REXX, COBOL etc. Conversion capabilities - for example to GIF format. Interactive Chart Utility (ICU). GDDM remains in widespread use today, embedded in many z/OS applications, as well as in system programs. GDDM and OS/2 Presentation Manager IBM and Microsoft began collaborating on the design of OS/2 in 1986. The Graphics Presentation Interface (GPI), the graphics API in the OS/2 Presentation Manager, was based on IBM's GDDM and the Graphics Control Program (GCP). GCP was originally developed in Hursley for the 3270/PC-G and 3270/PC-GX terminals. The GPI was the primary graphics API for the OS/2 operating system. At the time (1980s), the graphical user interface (GUI) was still in its early stages of popularity, but already it was clear that the foundation of a good GUI was a graphics API with strong real-time interactive capabilities. Unfortunately, the design of GDDM was closer to (at the time) traditional graphics APIs like GKS, which made it unsuited for more than the simplest interactive uses. Microsoft and IBM went their separate ways in 1991. Microsoft continued development of its Windows operating environment with Graphics Device Interface (GDI) graphics API. IBM continued with OS/2 for several more years. References Charles Petzold, Programming the OS/2 Presentation Manager, Microsoft Press, 1989. . External links announcement of 3279 and 3287. GDDM Programming Guide Graphics software OS/2 Graphical Data Display Manager IBM mainframe software
21565753
https://en.wikipedia.org/wiki/Ran%20Raz
Ran Raz
Ran Raz () is a computer scientist who works in the area of computational complexity theory. He was a professor in the faculty of mathematics and computer science at the Weizmann Institute. He is now a professor of computer science at Princeton University. Ran Raz received his Ph.D. at the Hebrew University of Jerusalem in 1992 under Avi Wigderson and Michael Ben-Or. Ran Raz is well known for his work on interactive proof systems. His two most-cited papers are on multi-prover interactive proofs and on probabilistically checkable proofs. Ran Raz received the Erdős Prize in 2002. His work has been awarded in the top conferences in theoretical computer science. In 2004, he received the best paper award in ACM Symposium on Theory of Computing (STOC) for , and the best paper award in IEEE Conference on Computational Complexity (CCC) for . In 2008, the work received the best paper award in IEEE Symposium on Foundations of Computer Science (FOCS). Selected publications . . . . . Notes Year of birth missing (living people) Living people Theoretical computer scientists Weizmann Institute of Science faculty Israeli computer scientists
31358
https://en.wikipedia.org/wiki/The%20Art%20of%20Computer%20Programming
The Art of Computer Programming
The Art of Computer Programming (TAOCP) is a comprehensive monograph written by the computer scientist Donald Knuth presenting programming algorithms and their analysis. Knuth began the project, originally conceived as a single book with twelve chapters, in 1962. The first three volumes of what was then expected to be a seven-volume set were published in 1968, 1969, and 1973. Work began in earnest on Volume 4 in 1973, but was suspended in 1977 for work on typesetting prompted by the second edition of Volume 2. Writing of the final copy of Volume 4A began in longhand in 2001, and the first online pre-fascicle, 2A, appeared later in 2001. The first published installment of Volume 4 appeared in paperback as Fascicle 2 in 2005. The hardback Volume 4A, combining Volume 4, Fascicles 0–4, was published in 2011. Volume 4, Fascicle 6 ("Satisfiability") was released in December 2015; Volume 4, Fascicle 5 ("Mathematical Preliminaries Redux; Backtracking; Dancing Links") was released in November 2019. The published Fascicles 5 and 6 are expected to make up the first two-thirds of Volume 4B. Knuth has not announced any estimated date for release of Volume 4B, although his method used for Volume 4A is to release the hardback volume sometime after release of the paperback fascicles contained in it. Near-term publisher estimates put the release date at May or June 2019, which proved to be incorrect. History After winning a Westinghouse Talent Search scholarship, Knuth enrolled at the Case Institute of Technology (now Case Western Reserve University), where his performance was so outstanding that the faculty voted to award him a master of science upon his completion of the bachelor degree. During his summer vacations, Knuth was hired by the Burroughs Corporation to write compilers, earning more in his summer months than full professors did for an entire year. Such exploits made Knuth a topic of discussion among the mathematics department, which included Richard S. Varga. In January 1962, when he was a graduate student in the mathematics department at Caltech, Knuth was approached by Addison-Wesley to write a book about compiler design, and he proposed a larger scope. He came up with a list of 12 chapter titles the same day. In the summer of 1962 he worked on a FORTRAN compiler for UNIVAC. During this time, he also came up with a mathematical analysis of linear probing, which convinced him to present the material with a quantitative approach. After receiving his PhD in June 1963, he began working on his manuscript, of which he finished his first draft in June 1965, at hand-written pages. He had assumed that about five hand-written pages would translate into one printed page, but his publisher said instead that about hand-written pages translated to one printed page. This meant he had approximately printed pages of material, which closely matches the size of the first three published volumes. The publisher was nervous about accepting such a project from a graduate student. At this point, Knuth received support from Richard S. Varga, who was the scientific adviser to the publisher. Varga was visiting Olga Taussky-Todd and John Todd at Caltech. With Varga's enthusiastic endorsement, the publisher accepted Knuth's expanded plans. In its expanded version, the book would be published in seven volumes, each with just one or two chapters. Due to the growth in Chapter 7, which was fewer than 100 pages of the 1965 manuscript, per Vol. 4A p. 
vi, the plan for Volume 4 has since expanded to include Volumes 4A, 4B, 4C, 4D, and possibly more. In 1976, Knuth prepared a second edition of Volume 2, requiring it to be typeset again, but the style of type used in the first edition (called hot type) was no longer available. In 1977, he decided to spend some time creating something more suitable. Eight years later, he returned with TeX, which is currently used for all volumes. The offer of a so-called Knuth reward check worth "one hexadecimal dollar" (100 cents in hexadecimal, i.e. 256 cents, or $2.56, in decimal) for any errors found, and the correction of these errors in subsequent printings, has contributed to the highly polished and still-authoritative nature of the work, long after its first publication. Another characteristic of the volumes is the variation in the difficulty of the exercises. Knuth even has a numerical difficulty scale for rating those exercises, varying from 0 to 50, where 0 is trivial and 50 is an open question in contemporary research. Knuth's dedication reads: This series of books is affectionately dedicated to the Type 650 computer once installed at Case Institute of Technology, with whom I have spent many pleasant evenings.
Assembly language in the book
All examples in the books use a language called "MIX assembly language", which runs on the hypothetical MIX computer. Currently, the MIX computer is being replaced by the MMIX computer, which is a RISC version. Software such as GNU MDK exists to provide emulation of the MIX architecture. Knuth considers the use of assembly language necessary for the speed and memory usage of algorithms to be judged.
Critical response
Knuth was awarded the 1974 Turing Award "for his major contributions to the analysis of algorithms […], and in particular for his contributions to the 'art of computer programming' through his well-known books in a continuous series by this title." American Scientist has included this work among "100 or so Books that shaped a Century of Science", referring to the twentieth century. Covers of the third edition of Volume 1 quote Bill Gates as saying, "If you think you're a really good programmer… read (Knuth's) Art of Computer Programming… You should definitely send me a résumé if you can read the whole thing." The New York Times referred to it as "the profession's defining treatise".
Volumes
Completed
Volume 1 – Fundamental Algorithms Chapter 1 – Basic concepts Chapter 2 – Information structures
Volume 2 – Seminumerical Algorithms Chapter 3 – Random numbers Chapter 4 – Arithmetic
Volume 3 – Sorting and Searching Chapter 5 – Sorting Chapter 6 – Searching
Volume 4A – Combinatorial Algorithms Chapter 7 – Combinatorial searching (part 1)
Planned
Volume 4B... – Combinatorial Algorithms (chapters 7 & 8 released in several subvolumes) Chapter 7 – Combinatorial searching (continued) Chapter 8 – Recursion
Volume 5 – Syntactic Algorithms Chapter 9 – Lexical scanning (also includes string search and data compression) Chapter 10 – Parsing techniques
Volume 6 – The Theory of Context-Free Languages
Volume 7 – Compiler Techniques
Chapter outlines
Completed
Volume 1 – Fundamental Algorithms Chapter 1 – Basic concepts 1.1. Algorithms 1.2. Mathematical Preliminaries 1.2.1. Mathematical Induction 1.2.2. Numbers, Powers, and Logarithms 1.2.3. Sums and Products 1.2.4. Integer Functions and Elementary Number Theory 1.2.5. Permutations and Factorials 1.2.6. Binomial Coefficients 1.2.7. Harmonic Numbers 1.2.8. Fibonacci Numbers 1.2.9. Generating Functions 1.2.10.
Analysis of an Algorithm 1.2.11. Asymptotic Representations 1.2.11.1. The O-notation 1.2.11.2. Euler's summation formula 1.2.11.3. Some asymptotic calculations 1.3 MMIX (MIX in the hardback copy but updated by fascicle 1) 1.3.1. Description of MMIX 1.3.2. The MMIX Assembly Language 1.3.3. Applications to Permutations 1.4. Some Fundamental Programming Techniques 1.4.1. Subroutines 1.4.2. Coroutines 1.4.3. Interpretive Routines 1.4.3.1. A MIX simulator 1.4.3.2. Trace routines 1.4.4. Input and Output 1.4.5. History and Bibliography Chapter 2 – Information Structures 2.1. Introduction 2.2. Linear Lists 2.2.1. Stacks, Queues, and Deques 2.2.2. Sequential Allocation 2.2.3. Linked Allocation (topological sorting) 2.2.4. Circular Lists 2.2.5. Doubly Linked Lists 2.2.6. Arrays and Orthogonal Lists 2.3. Trees 2.3.1. Traversing Binary Trees 2.3.2. Binary Tree Representation of Trees 2.3.3. Other Representations of Trees 2.3.4. Basic Mathematical Properties of Trees 2.3.4.1. Free trees 2.3.4.2. Oriented trees 2.3.4.3. The "infinity lemma" 2.3.4.4. Enumeration of trees 2.3.4.5. Path length 2.3.4.6. History and bibliography 2.3.5. Lists and Garbage Collection 2.4. Multilinked Structures 2.5. Dynamic Storage Allocation 2.6. History and Bibliography Volume 2 – Seminumerical Algorithms Chapter 3 – Random Numbers 3.1. Introduction 3.2. Generating Uniform Random Numbers 3.2.1. The Linear Congruential Method 3.2.1.1. Choice of modulus 3.2.1.2. Choice of multiplier 3.2.1.3. Potency 3.2.2. Other Methods 3.3. Statistical Tests 3.3.1. General Test Procedures for Studying Random Data 3.3.2. Empirical Tests 3.3.3. Theoretical Tests 3.3.4. The Spectral Test 3.4. Other Types of Random Quantities 3.4.1. Numerical Distributions 3.4.2. Random Sampling and Shuffling 3.5. What Is a Random Sequence? 3.6. Summary Chapter 4 – Arithmetic 4.1. Positional Number Systems 4.2. Floating Point Arithmetic 4.2.1. Single-Precision Calculations 4.2.2. Accuracy of Floating Point Arithmetic 4.2.3. Double-Precision Calculations 4.2.4. Distribution of Floating Point Numbers 4.3. Multiple Precision Arithmetic 4.3.1. The Classical Algorithms 4.3.2. Modular Arithmetic 4.3.3. How Fast Can We Multiply? 4.4. Radix Conversion 4.5. Rational Arithmetic 4.5.1. Fractions 4.5.2. The Greatest Common Divisor 4.5.3. Analysis of Euclid's Algorithm 4.5.4. Factoring into Primes 4.6. Polynomial Arithmetic 4.6.1. Division of Polynomials 4.6.2. Factorization of Polynomials 4.6.3. Evaluation of Powers (addition-chain exponentiation) 4.6.4. Evaluation of Polynomials 4.7. Manipulation of Power Series Volume 3 – Sorting and Searching Chapter 5 – Sorting 5.1. Combinatorial Properties of Permutations 5.1.1. Inversions 5.1.2. Permutations of a Multiset 5.1.3. Runs 5.1.4. Tableaux and Involutions 5.2. Internal sorting 5.2.1. Sorting by Insertion 5.2.2. Sorting by Exchanging 5.2.3. Sorting by Selection 5.2.4. Sorting by Merging 5.2.5. Sorting by Distribution 5.3. Optimum Sorting 5.3.1. Minimum-Comparison Sorting 5.3.2. Minimum-Comparison Merging 5.3.3. Minimum-Comparison Selection 5.3.4. Networks for Sorting 5.4. External Sorting 5.4.1. Multiway Merging and Replacement Selection 5.4.2. The Polyphase Merge 5.4.3. The Cascade Merge 5.4.4. Reading Tape Backwards 5.4.5. The Oscillating Sort 5.4.6. Practical Considerations for Tape Merging 5.4.7. External Radix Sorting 5.4.8. Two-Tape Sorting 5.4.9. Disks and Drums 5.5. Summary, History, and Bibliography Chapter 6 – Searching 6.1. Sequential Searching 6.2. Searching by Comparison of Keys 6.2.1. 
Searching an Ordered Table 6.2.2. Binary Tree Searching 6.2.3. Balanced Trees 6.2.4. Multiway Trees 6.3. Digital Searching 6.4. Hashing 6.5. Retrieval on Secondary Keys Volume 4A – Combinatorial Algorithms, Part 1 Chapter 7 – Combinatorial Searching 7.1. Zeros and Ones 7.1.1. Boolean Basics 7.1.2. Boolean Evaluation 7.1.3. Bitwise Tricks and Techniques 7.1.4. Binary Decision Diagrams 7.2. Generating All Possibilities 7.2.1. Generating Basic Combinatorial Patterns 7.2.1.1. Generating all n-tuples 7.2.1.2. Generating all permutations 7.2.1.3. Generating all combinations 7.2.1.4. Generating all partitions 7.2.1.5. Generating all set partitions 7.2.1.6. Generating all trees 7.2.1.7. History and further references Planned Volume 4B, 4C, 4D – Combinatorial Algorithms Chapter 7 – Combinatorial Searching (continued) 7.2. Generating all possibilities (continued) 7.2.2. Backtrack programming (published in Fascicle 5) 7.2.2.1. Dancing links (published in Fascicle 5) 7.2.2.2. Satisfiability (published in Fascicle 6) 7.2.2.3. Constraint satisfaction 7.2.2.4. Hamiltonian paths and cycles (online draft in pre-fascicle 8A) 7.2.2.5. Cliques 7.2.2.6. Covers (Vertex cover, Set cover problem, Exact cover, Clique cover) 7.2.2.7. Squares 7.2.2.8. A potpourri of puzzles (online draft in pre-fascicle 9B) (includes Perfect digital invariant) 7.2.2.9. Estimating backtrack costs (chapter 6 of "Selected Papers on Analysis of Algorithms", and Fascicle 5, pp 44−47, under the heading "Running time estimates") 7.2.3. Generating inequivalent patterns (includes discussion of Pólya enumeration theorem) (see "Techniques for Isomorph Rejection", Ch 4 of "Classification Algorithms for Codes and Designs" by Kaski and Östergård) 7.3. Shortest paths 7.4. Graph algorithms 7.4.1. Components and traversal 7.4.1.1. Union-find algorithms 7.4.1.2. Depth-first search 7.4.1.3. Vertex and edge connectivity 7.4.2. Special classes of graphs 7.4.3. Expander graphs 7.4.4. Random graphs 7.5. Graphs and optimization 7.5.1. Bipartite matching (including maximum-cardinality matching, Stable marriage problem, Mariages Stables) 7.5.2. The assignment problem 7.5.3. Network flows 7.5.4. Optimum subtrees 7.5.5. Optimum matching 7.5.6. Optimum orderings 7.6. Independence theory 7.6.1. Independence structures 7.6.2. Efficient matroid algorithms 7.7. Discrete dynamic programming (see also Transfer-matrix method) 7.8. Branch-and-bound techniques 7.9. Herculean tasks (aka NP-hard problems) 7.10. Near-optimization Chapter 8 – Recursion (chapter 22 of "Selected Papers on Analysis of Algorithms") Volume 5 – Syntactic Algorithms Chapter 9 – Lexical scanning (includes also string search and data compression) Chapter 10 – Parsing techniques Volume 6 – The Theory of Context-free Languages Volume 7 – Compiler Techniques English editions Current editions These are the current editions in order by volume number: The Art of Computer Programming, Volumes 1-4A Boxed Set. Third Edition (Reading, Massachusetts: Addison-Wesley, 2011), 3168pp. Volume 1: Fundamental Algorithms. Third Edition (Reading, Massachusetts: Addison-Wesley, 1997), xx+650pp. . Errata: (2011-01-08), (2020-03-26, 27th printing). Addenda: (2011). Volume 2: Seminumerical Algorithms. Third Edition (Reading, Massachusetts: Addison-Wesley, 1997), xiv+762pp. . Errata: (2011-01-08), (2020-03-26, 26th printing). Addenda: (2011). Volume 3: Sorting and Searching. Second Edition (Reading, Massachusetts: Addison-Wesley, 1998), xiv+780pp.+foldout. . Errata: (2011-01-08), (2020-03-26, 27th printing). 
Addenda: (2011). Volume 4A: Combinatorial Algorithms, Part 1. First Edition (Reading, Massachusetts: Addison-Wesley, 2011), xv+883pp. . Errata: (2020-03-26, ? printing). Volume 1, Fascicle 1: MMIX – A RISC Computer for the New Millennium. (Addison-Wesley, 2005-02-14) . Errata: (2020-03-16) (will be in the fourth edition of volume 1) Volume 4, Fascicle 5: Mathematical Preliminaries Redux; Backtracking; Dancing Links. (Addison-Wesley, 2019-11-22) xiii+382pp, . Errata: (2020-03-27) (will become part of volume 4B) Volume 4, Fascicle 6: Satisfiability. (Addison-Wesley, 2015-12-08) xiii+310pp, . Errata: (2020-03-26) (will become part of volume 4B) Previous editions Complete volumes These volumes were superseded by newer editions and are in order by date. Volume 1: Fundamental Algorithms. First edition, 1968, xxi+634pp, . Volume 2: Seminumerical Algorithms. First edition, 1969, xi+624pp, . Volume 3: Sorting and Searching. First edition, 1973, xi+723pp+foldout, . Errata: . Volume 1: Fundamental Algorithms. Second edition, 1973, xxi+634pp, . Errata: . Volume 2: Seminumerical Algorithms. Second edition, 1981, xiii+ 688pp, . Errata: . The Art of Computer Programming, Volumes 1-3 Boxed Set. Second Edition (Reading, Massachusetts: Addison-Wesley, 1998), pp. Fascicles Volume 4 fascicles 0–4 were revised and published as Volume 4A: Volume 4, Fascicle 0: Introduction to Combinatorial Algorithms and Boolean Functions. (Addison-Wesley Professional, 2008-04-28) vi+240pp, . Errata: (2011-01-01). Volume 4, Fascicle 1: Bitwise Tricks & Techniques; Binary Decision Diagrams. (Addison-Wesley Professional, 2009-03-27) viii+260pp, . Errata: (2011-01-01). Volume 4, Fascicle 2: Generating All Tuples and Permutations. (Addison-Wesley, 2005-02-14) v+127pp, . Errata: (2011-01-01). Volume 4, Fascicle 3: Generating All Combinations and Partitions. (Addison-Wesley, 2005-07-26) vi+150pp, . Errata: (2011-01-01). Volume 4, Fascicle 4: Generating All Trees; History of Combinatorial Generation. (Addison-Wesley, 2006-02-06) vi+120pp, . Errata: (2011-01-01). Volume 4 fascicles 5–6 will become part of Volume 4B: Volume 4, Fascicle 5: Mathematical Preliminaries Redux; Backtracking; Dancing Links. (Addison-Wesley, 2019-11-22) xiii+382pp, . Errata: (2020-03-27) Volume 4, Fascicle 6: Satisfiability. (Addison-Wesley, 2015-12-08) xiii+310pp, . Errata: (2020-03-26) Pre-fascicles Volume 4 pre-fascicles 5A, 5B, and 5C were revised and published as fascicle 5. Volume 4 pre-fascicle 6A was revised and published as fascicle 6. Volume 4, Pre-fascicle 8A: Hamiltonian Paths and Cycles Volume 4, Pre-fascicle 9B: A Potpourri of Puzzles See also Introduction to Algorithms References Notes Citations Sources External links Overview of topics (Knuth's personal homepage) Oral history interview with Donald E. Knuth at Charles Babbage Institute, University of Minnesota, Minneapolis. Knuth discusses software patenting, structured programming, collaboration and his development of TeX. The oral history discusses the writing of The Art of Computer Programming. "Robert W Floyd, In Memoriam", by Donald E. Knuth - (on the influence of Bob Floyd) TAoCP and its Influence of Computer Science (Softpanorama) 1968 non-fiction books Computer programming books Computer science books Monographs Books by Donald Knuth Analysis of algorithms Computer arithmetic algorithms American non-fiction books 1969 non-fiction books 1973 non-fiction books 1981 non-fiction books 2011 non-fiction books Addison-Wesley books
45450339
https://en.wikipedia.org/wiki/Greenway%20Health
Greenway Health
Greenway Health, LLC is a privately-owned vendor of health information technology (HIT) including integrated electronic health record (EHR), practice management, and revenue cycle management solutions. Intergy, Greenway’s cloud-based EHR and practice management solution, serves ambulatory healthcare practices. The company has offices in Tampa, Florida; Carrollton, Georgia; and Bangalore, India. History Medical Manager The Medical Manager Corporation launched the first medical practice management software, Medical Manager, developed by Michael "Mickey" Singer, in 1977. Headquartered in Gainesville, Florida, Medical Manager had one of the largest installed bases of practice management software in the United States at the time of its sale to Vista. In April 2000, the Medical Manager software was adopted into the Smithsonian National Museum of American History in Washington, D.C., under the permanent research collection on information technology. Later the same year, Medical Manager Corporation was acquired by Healtheon, now known as Emdeon. Vitera Healthcare Solutions Sage Software Healthcare, Inc., founded in 2000 after the purchase and rebranding of the Medical Manager software from Emdeon, provided EHR and medical practice management software for healthcare providers. The company’s products included Intergy, a suite of clinical, financial, reporting, and communication tools for healthcare providers. The company operated under the name Sage Software Healthcare, Inc., until November 2011, when it was acquired by Vista Equity Partners for $320 million and renamed Vitera Healthcare Solutions. Sage Group originally purchased the software from Emdeon for $565 million in 2006. In June 2013, Vitera acquired the Birmingham-based EHR company, SuccessEHS, Inc. Products absorbed into the Vitera solutions portfolio include an EHR system, an electronic dental record (EDR) system, and a revenue cycle management and practice management service. SuccessEHS SuccessEHS, Inc., was founded in 1995 in Birmingham, Alabama, as a vendor of EHR and practice management solutions with integrated medical billing services. Greenway Medical Technologies Greenway Medical Technologies, founded in 1999, was an EHR vendor offering a flagship suite of HIT products known as PrimeSUITE. Greenway Medical Technologies had an initial public offering on Feb. 2, 2012, but was taken private again in November 2013 when Vista Equity Partners fully acquired Greenway Medical and combined it with Vitera and SuccessEHS, rebranding them as Greenway Health. Ransomware Attack In April 2017, approximately 400 users were unable to access patient records for about three weeks due to a ransomware attack on Greenway's systems. The company announced that the problem had been fixed on May 12, 2017. Office Closures In October 2017, Greenway announced the closure of its offices in Atlanta, Birmingham, and Lake Mary, Florida. Numerous layoffs were made at the company's Carrollton, Georgia, office. A total of 120 employees at the Atlanta and Carrollton locations were affected by the closures, which were completed by the end of January 2018. The closures were in an effort to unify operations in the Tampa office, thus making Tampa the new headquarters location. Many of the impacted employees were given the opportunity to relocate to one of the three remaining offices. Department of Justice Settlement On Feb. 
6, 2019, Greenway was ordered to pay $57.25 million in consequence to a complaint filed by the United States under the False Claims Act alleging that Greenway caused its users to submit false claims to the government by misrepresenting the capabilities of its EHR product “Prime Suite” and bribing users to induce them to recommend Prime Suite. The government alleged that Greenway concealed information when applying for a certification that would have disqualified it. The government also alleged that Greenway violated the Anti-Kickback Statute by paying money and incentives to its client providers to recommend Prime Suite to prospective new customers. Greenway Revenue Services To help clients improve the profitability of their practices and provide the flexibility needed to meet the unique needs of practices, Greenway launched a new Greenway Revenue Services offering, GRS Select, in March 2021. GRS Select provides a customizable suite of revenue cycle services designed to simplify billing, alleviate administrative burdens, and identify new revenue opportunities, while allowing practices to maintain full control of their billing. Cloud-based data services In July 2020, Greenway announced its collaboration with Amazon Web Services (AWS) to develop a cloud-based, data services platform, Greenway Insights™. The platform will provide a regulatory analytics solution to assist clients in meeting reporting requirements and offer revenue cycle insights amid a changing RCM landscape. Moving forward, Greenway plans to work with AWS on other capabilities, including remote patient monitoring and virtual waiting rooms. Telehealth In August 2020, Greenway announced the launch of Greenway Telehealth™, a new virtual care solution, in partnership with Twilio, a cloud communications platform company. Developed to meet the demand for virtual care resulting from the COVID-19 pandemic, this HIPAA-compliant solution is available to both of Greenway’s EHRs, Intergy and Prime Suite. Awards In March 2021, Greenway was featured on a list of “Telehealth Companies to Know” by Becker’s Hospital Review, a publication for healthcare decision-makers. In February 2020, independent industry research firm Frost & Sullivan named Greenway the winner of its 2020 North American Ambulatory Revenue Cycle Management Customer Value Leadership Award, citing Greenway’s ability to “help customers achieve sustained and long-term revenue gains.” Shortly thereafter, the company was named a Gold Winner in the Company Rethinking of the Year category of the Golden Bridge Awards. Business News Daily named Intergy the Most Flexible EMR, as well as the Best Customizable Practice Management Software, in its Best Electronic Medical Record (EMR) Software of 2020 report. References Health care companies established in 2013 American companies established in 2013 Companies based in Tampa, Florida Electronic health records Health information technology companies
3750214
https://en.wikipedia.org/wiki/South%20London%20Storm
South London Storm
South London Storm is a rugby league club who play and train at Archbishop Lanfranc School in the London Borough of Croydon, they currently compete in the London and South East Merit League. Founded in 1997, Storm have been voted Rugby League Conference "Club of the Year" three times, in 2002, 2005 and 2006. In 2013 South London Storm merged with West London Sharks to form South West London Chargers. Club Details & Personnel Club honours Harry Jepson Trophy (RLC National Champions): Winners 2006 Harry Jepson Trophy Semi Finals: 2006 RLC Club Of The Year: 2002, 2005, 2006 RLC Shield: Winners 2002 Active Sports Club Of The Year Award: 2004 BBC London Amateur Sports Club of the Year: 2006 RLC Premier South Division Winners: 2005, 2006, 2009 RLC Premier South Division Runners Up: 2007, 2008 RLC Premier South Grand Final Winners: 2005, 2006 RLC Premier South Grand Final Runners Up: 2007, 2008, 2009 London Academy Final: Winners 2009 London Amateur Rugby League (2nd XIII): Winners 2006 Gordon Anderton Memorial Trophy: Runners Up 1997–98, 1998–99 London League Cup: Runners Up 2000 Rugby League Challenge Cup: 2nd Round 2005 Player Records Most Tries in a match: 6 Mark Nesbitt vs Aberavon Fighting Irish - 2003 Most Goals in a match: 17 Tom Bold vs Bedford Tigers - 2009 Most Points in a match: 38 Darren Bartley vs Kent Ravens - 2007 Most Tries in a season: 28 Louis Neethling - 2005 Most Goals in a season: 102 Louis Neethling - 2005 Most Points in a season: 316 Louis Neethling - 2005 Most First Grade Appearances: 102 Carl Zacharow - (2002–present) Club Records Most Points Scored: 102 vs Bedford Tigers - 2009 Most Points Conceded: 100 vs Crawley Jets - 2000 & West London Sharks - 2002 Biggest Home Win: (90 points) 94–4 vs London Skolars - 2005 & 102–12 vs Bedford Tigers - 2009 Biggest Away Win: (72 points) 76–4 vs Sunderland Nissan - 2005 & Greenwich Admirals - 2005 Biggest Home Defeat: (90 points) 0–90 vs London Skolars - 2001 Biggest Away Defeat: (98 points) 2–100 vs Crawley Jets - 2000 Highest Scoring Game: 114 points vs Bedford Tigers (102–12) - 2009 Lowest Scoring Game: 22 vs Ipswich Rhinos (18–4) - 2007 Longest Undefeated Run: 14 games - 24 June 2006 to 30 June 2007 Longest Run Without a Win: 9 games - 6 May 2000 to 1 July 2000 Club Awards Coaches Julian Critchley - 1997–98 Ian Curzon - 1998–99 Julian Critchley & Graeme Harker - 2000 Paul Johnstone - 2001 Andy Fleming - 2001 Julian Critchley & Graeme Harker - 2001 Anthony Lipscombe - 2002 Darryl Pitt - 2003 & 2004 Rob Powell - 2005 & 2006 Andy Gilvary & Dave Wilson - 2007 Marcus Tobin - 2008 James Massara - 2009 Paul Brown - 2011 Ben Cramant & Mick Gray - 2012 Former Players Now At Pro Clubs Will Sharp - Harlequins RL Lamont Bryan - Harlequins RL Corey Simms - London Skolars Adam Janowski - Harlequins RL Rob Powell - Harlequins RL (Assistant Coach) Alex Ingarfield - Harlequins RL Jack Kendall - England under 18 England deaf London Irish RFU - Dewsbury Rams RL. South London Storm Dream Team To mark Storm's 10th Anniversary the club announced their 1997–2007 Dream Team. 
Tane Kingi (2005–2007) Corey Simms (2002–2004) Keri Ryan (2001–2006) Carl Zacharow (2001–2007) Gavin Calloo (2001–2006) Michael Walker (2005–2007) Terry Reader (2001–2002) Gavin Hill (2005–2007) Mark Nesbitt (2002–2006) Koben Katipa (2003–2004) Alan Emerson (2006–2007) Louis Neethling (2004–2005) Paul Rice (2003–2004) Andrew Hames (2003–2007) Nick Byram (2000–2004) John Ferguson (2003–2005) Julian Critchley (1997–2000) Jack Kendall (2000 - 2002) Coach: Rob Powell (2005–2006) Manager: Steve Cook (2002–2007) First Grade Playing Record - 2000 to 2011 Up to and including 2 July 2011. Second Grade Playing Record - 2003 to 2011 Up to and including 2 July 2011. Club history The South London area has a strong rugby league tradition, and many of London’s most successful amateur clubs have come from this part of the capital. For nearly three decades clubs such as Streatham Celtic, Peckham Pumas and South London Warriors dominated the London League, and between them they won the title over twenty times. The mid-1990s heralded the demise of these once dominant clubs leaving the league without a club south of the Thames. To fill this void the current South London club was formed on 21 July 1997 by Jed Donnelly, Graeme Harker and Julian Critchley in a bar after London Broncos' World Club Championship victory against Canberra Raiders on 21 July 1997. Initially nicknamed 'the Saints', as one of the founder members was a supporter of St Helens, the fledgling club recruited many of its players from the recently defunct east London, Bexleyheath and Peckham outfits, and they approached the local rugby union club, Streatham-Croydon, about basing themselves at their Frant Road ground. Storm's original colours were red and black. London League Saints were immediately accepted into the London League, and in their debut season they finished third in the Second Division behind Kingston and St Albans Centurions. That 1997/98 season culminated in an appearance in the Gordon Anderton Memorial Trophy Final against Reading Raiders at the New River Stadium. The 24–28 was a cruel blow for a team that were considered to have enjoyed the better of the game, but two controversial Raiders’ tries in the closing two minutes sealed Saints’ fate. The 1998/99 season was one that promised much for Saints but, due to the near collapse of the league, that potential was largely unfulfilled, although South London did eventually emerge from the debris as runners-up to the London Colonials. A second successive appearance in the Gordon Anderton Memorial Trophy Final again ended in defeat (28–32), this time at the hands of a strong Metropolitan Police team. It was in February 1999 that the club launched its junior section, initially at U11 only. The bulk of the youngsters came from the neighbouring Whitehorse Manor School where Saints scrum-half Lee Mason-Ellis was a teacher. They made their competitive debut two months later against Kingston Warriors, at the time the only other junior club in the capital, losing narrowly in an exciting encounter. For the seniors, with the prospect of winter rugby league looking increasingly forlorn, South's thoughts turned to the new summer competition, the Rugby League Conference. The name of the club was changed to South London Storm as there were two other teams known as 'the Saints' in the Conference. Three months later the club was accepted into the Southern Division of the expanding competition. For the club's switch to summer in 2000 the colours were changed to maroon. 
2000 It was a real baptism of fire for Storm in the RLC, as they managed only a single win – away at Kingston – to finish bottom of their group. The season opener at home to Oxford Cavaliers (4–62) was covered by the Independent newspaper. Despite suffering a number of maulings (including a 2–100 loss at the hands of Crawley Jets), enthusiasm never waned and the club did much to raise the profile of the sport in this corner of the capital. Amazingly, Storm's season ended with an appearance in the London League Final against St Albans Centurions. But once again Storm were left frustrated as the Hertfordshire side emerged victorious from a gripping encounter. A member of Storm's team that day, and Man of the Match, was Ryan Jones who went on the play for and captain the Welsh rugby union team, and who was a member of the tour to New Zealand. The club made sporting history in October when the under-11s played their counterparts from Kingston Warriors in the curtain raiser to the England vs Australia Rugby League World Cup clash at Twickenham. It was the first ever game of rugby league at union's headquarters and Storm's Mark Cole, cousin of England footballer Joe Cole, scored the first ever try at the stadium and Rob Harker scored the first ever hat-trick of tries. South London Storm was still operating, albeit as a Masters XIII, as recently as 2014, playing 1/2 fixtures a year In 2017 they moved into Club Langley and played under the moniker of ‘Silverbacks’ for 3 seasons, during which a historic first ever Masters Tour to Canada was undertaken in June 2019 The first ever transatlantic Masters game between a UK Masters team against the Toronto Wolfpack Masters team at the Lamport stadium, followed by a further match against the Ontario Greybeards two days later As is the nomadic existence of Rugby League in south London, we move onto the next chapter of the South London Storm - the South London Clippers Masters, playing out of Greenwich from 2020 Season's Record First Grade Rugby League Conference South 06/05/2000 South London Storm 4 Oxford Cavaliers 62 13/05/2000 West London Sharks 60 South London Storm 6 20/05/2000 Crawley Jets 100 South London Storm 2 27/05/2000 South London Storm 24 Kingston Warriors 26 03/06/2000 South London Storm 8 St Albans Centurions 58 10/06/2000 South London Storm 8 North London Skolars 78 17/06/2000 Oxford Cavaliers 72 South London Storm 0 24/06/2000 South London Storm 0 West London Sharks 68 01/07/2000 South London Storm 6 Crawley Jets 90 08/07/2000 Kingston Warriors 16 South London Storm 24 15/07/2000 St Albans Centurions 50 South London Storm 10 22/07/2000 North London Skolars 70 South London Storm 14 2001 2001 was a much improved year for the club and, although they won only three of their matches, Storm were a much more competitive outfit and got better as the year progressed, as narrow losses to the West London Sharks and North London Skolars proved towards the end of the season. The trio of wins, against Bedford Swifts (22–6), Crewe Wolves (20–16) and Kingston Warriors (46–10) all came in the second half of the season, after an opening sequence of six successive losses including a 6–100 drubbing at the hands of West London. The season was notable for scrum-half Terry Reader's individual achievement of successfully kicking 29 successive conversions. 
Season's Record First Grade Rugby League Conference South 05/05/2001 Bedford Swifts 38 South London Storm 10 12/05/2001 South London Storm 0 North London Skolars 90 19/05/2001 Crewe Wolves 36 South London Storm 25 26/05/2001 Crawley Jets 66 South London Storm 12 02/06/2001 South London Storm 26 Kingston Warriors 38 09/06/2001 West London Sharks 100 South London Storm 6 16/06/2001 South London Storm 22 Bedford Swifts 6 30/06/2001 North London Skolars 42 South London Storm 12 07/07/2001 South London Storm 20 Crewe Wolves 16 14/07/2001 South London Storm 12 Crawley Jets 72 21/07/2001 Kingston Warriors 10 South London Storm 46 04/08/2001 South London Storm 16 West London Sharks 41 2002 2002 was the season when South London finally started to fulfil their potential. New Zealander Anthony Lipscombe took up the coaching reins, and brought about a steady improvement to the team's performances on the park. Storm's pre-season preparation got off to a good start with a surprise success in the prestigious St Albans 9s Festival. Using a squad made up of mainly new players, they defeated their Centurion hosts quite comfortably in the Final. The regular season saw Storm suffer a succession of frustratingly narrow defeats – most by ten points or less – to finish bottom of the South Division, but it was in the end-of-season Shield Play Offs that saw the team hit form. Group wins over Kingston Warriors (28–22 and 36–4) and Oxford Cavaliers (21–12 in both games), took South London to Cheltenham’s Prince of Wales Stadium for a semi-final clash with Crewe Wolves. It was a tough encounter that for long periods looked to be going Wolves’ way, but Storm dug in to prevail 21–14, courtesy of two late tries from Carl Zacharow and Keri Ryan. A fortnight later, also at the Prince of Wales Stadium, South London met Bedford Swifts in the Rugby League Conference Shield Final, where they treated the large crowd, and the Sky TV cameras, to an exhilarating display of running rugby. Storm ran in ten tries in a runaway 54–2 victory, Caro Wild led the way with a hat-trick, Daniel Poireaudeau grabbed two, and Terry Reader, Keri Ryan, Nathan Price-Saleh, Aaron Russell and Alun Watkins pitched in with one apiece. The final whistle sparked terrific celebrations both on the pitch and in the stand where Storm's large traveling support cheered Keri Ryan as he lifted the club's first ever major trophy. Once again Storm fielded a second team in the London League, and although wins were hard to come by, only one all season, the players showed great enthusiasm with a number graduating to the first team. The season ended with the club's first overseas tour. A party of 24 travelled to the south of France to play French National One club Realmont XIII. In front of a crowd of 750 – a quarter of the town's population – Storm put up a brave performance, but were eventually downed 18–36. To round off the club's most successful season ever, Captain Keri Ryan was named at stand-off in the 2002 Rugby League Conference Dream Team, and full-back Corey Simms was named the competition's Young Player Of The Year. Fittingly, the club was also presented with the award for Rugby League Conference Club of The Year 2002. 
Season's Record First Grade Rugby League Conference South 04/05/2002 Kingston Warriors 36 South London Storm 22 11/05/2002 South London Storm 20 West London Sharks 32 18/05/2002 North London Skolars 66 South London Storm 16 25/05/2002 Oxford Cavaliers 40 South London Storm 30 01/06/2002 South London Storm 16 Crawley Jets 48 08/06/2002 South London Storm 42 Kingston Warriors 18 22/06/2002 West London Sharks 32 South London Storm 18 29/06/2002 South London Storm 6 North London Skolars 50 06/07/2002 South London Storm 38 Oxford Cavaliers 50 13/07/2002 Crawley Jets 80 South London Storm 0 RLC Shield Play Offs 27/07/2002 Kingston Warriors 22 South London Storm 28 (Group) 03/08/2002 South London Storm 20 Oxford Cavaliers 12 (Group) 10/08/2002 South London Storm 36 Kingston Warriors 4 (Group) 17/08/2002 Oxford Cavaliers 12 South London Storm 20 (Group) 24/08/2002 Crewe Wolves 14 South London Storm 21 (Semi-Final) 31/08/2002 South London Storm 54 Bedford Swifts 2 (Final) 2003 Buoyed by their success in the RLC Shield, Storm were encouraged to apply for membership of the newly formed National League Three. The application was successful, however, following a number of internal meetings the club reluctantly decided against taking the step up and instead remain in the RLC. However, only four weeks before the start of the season local rivals Crawley Jets folded, and Storm accepted the RFL's last minute invitation to participate in NL3. The club also entered a second team in the RLC, and employed the first full-time Rugby League Development Officer in the area, accelerating the junior development program started by volunteers in 2000. Under the South London Storm “umbrella” are the three junior feeder clubs formed – the Croydon Hurricanes, Thornton Heath Tornadoes, and the Brixton Bulls. Coached by ex-London Broncos player Darryl Pitt, the club opened their league campaign with an against-the-odds 24–16 victory over Huddersfield Underbank Rangers. It was a win that was all the more remarkable for the fact that they were down to 12 men after only 5 seconds; prop Mick Smith having been sent off in the first tackle. Storm registered a further five wins in the season but missed out on the end of season play-offs. The club made a second tour to France in September, losing 22–48 against a Salses XIII line up containing three ex-French internationals. In November Storm played a charity match against an Australian Legends of League side including the likes of Jason Hetherington, Trevor Gillmeister, Craig Coleman, Andrew Farrar and Peter Tunks. Both teams served the enthusiastic crowd of three or four hundred with an exciting end-to-end contest played in a manner befitting the occasion. The result was irrelevant; although for the record the score was 24–20 in favour of the Legends. That same month Storm played their first ever Rugby League Challenge Cup game when they hosted National Conference side West Bowling in the Preliminary Round, losing 4–36. In 2003 Storm were represented at International level for the first time when U15 player Adam Janowski was selected to play for England U15s against their Welsh counterparts at Easter. 
Season's Record Rugby League Conference Cup 03/03/2003 South London Storm 24 West London Sharks 24 09/03/2003 Greenwich Admirals 6 South London Storm 16 16/03/2003 North London Skolars 34 South London Storm 20 23/03/2003 South London Storm 14 North London Skolars 15 30/03/2003 West London Sharks 16 South London Storm 30 05/04/2003 South London Storm 62 Greenwich Admirals 20 13/04/2003 Aberavon Fighting Irish 10 South London Storm 44 (Quarter-Final) 18/04/2003 North London Skolars 28 South London Storm 19 (Semi-Final) First Grade National League 3 03/05/2003 South London Storm 24 Huddersfield Underbank Rangers 16 10/05/2003 Dudley Hill 42 South London Storm 0 17/05/2003 Coventry Bears 20 South London Storm 14 31/05/2003 South London Storm 28 Sheffield Hillsborough Hawks 22 07/06/2003 Manchester Knights 2 South London Storm 42 14/06/2003 St Albans Centurions 38 South London Storm 18 21/06/2003 Sheffield Hillsborough Hawks 36 South London Storm 10 28/06/2003 South London Storm 26 Hemel Stags 8 05/07/2003 South London Storm 6 St Albans Centurions 28 12/06/2003 South London Storm 32 Teesside Steelers 36 19/07/2003 Hemel Stags 16 South London Storm 22 26/06/2003 South London Storm 4 Woolston Rovers (Warrington) 32 02/08/2003 South London Storm 34 Coventry Bears 22 09/08/2003 Woolston Rovers (Warrington) 44 South London Storm 14 Second Grade Rugby League Conference South 03/05/2003 Gosport Vikings 80 South London Storm 4 10/05/2003 South London Storm 16 Greenwich Admirals 40 17/05/2003 South London Storm 18 Hemel Stags 12 31/05/2003 South London Storm 0 Crawley Jets 88 07/06/2003 South London Storm 14 Gosport Vikings 24 14/06/2003 West London Sharks 88 South London Storm 12 28/06/2003 Greenwich Admirals 62 South London Storm 8 05/07/2003 South London Storm 16 Kingston Warriors 26 12/07/2003 North London Skolars 82 South London Storm 8 19/07/2003 Crawley Jets 88 South London Storm 6 2004 Storm again participated in National League Three and after victories in their opening three games, against Manchester, Bradford Dudley Hill and Birmingham, they topped the division for the one and only time. However, after the promising start, the season tailed off and once again Storm narrowly missed out on the play-offs. During the year Storm were awarded the Active Sports Club of the Year award from 400 participating sports clubs signed up to the Active Sports program, the biggest sports development programme in London. The club also embarked on a historic tour to Australia – the first British Rugby League team to tour Australia since 1997 – with games against Beerwah Bulldogs and Gympie Devils in Sunshine Coast, Queensland. The season closed with a second tour of the year, this time to Toulouse, where they drew 22–22 against Villeneuve Tolosane. 
Season's Record First Grade National League 3 01/05/2004 Manchester Knights 12 South London Storm 28 08/05/2004 South London Storm 18 Bradford Dudley Hill 15 22/05/2004 South London Storm 26 Birmingham Bulldogs 14 29/05/2004 St Albans Centurions 30 South London Storm 22 31/05/2004 South London Storm 54 Essex Eels 18 05/06/2004 Sheffield Hillsborough Hawks 26 South London Storm 18 12/06/2004 South London Storm 26 Bramley Buffaloes 20 26/06/2004 Coventry Bears 46 South London Storm 14 03/07/2004 Gateshead Storm 20 South London Storm 32 10/07/2004 South London Storm v Woolston Rovers (Warrington) (Match abandoned) 17/07/2004 Huddersfield Underbank Rangers 24 South London Storm 24 24/07/2004 South London Storm 36 Carlisle Centurions 16 31/07/2004 Birmingham Bulldogs 32 South London Storm 30 07/08/2004 South London Storm 20 St Albans Centurions 24 14/08/2004 Essex Eels 14 South London Storm 54 21/08/2004 South London Storm 34 Sheffield Hillsborough Hawks 35 28/08/2004 Bramley Buffaloes 32 South London Storm 18 30/08/2004 Hemel Stags 54 South London Storm 6 04/09/2004 South London Storm 12 Hemel Stags 38 11/09/2004 South London Storm 18 Coventry Bears 28 Home match versus Woolston Rovers (Warrington) abandoned due to an injury. Second Grade Rugby League Conference South 01/05/2004 Kingston Warriors 58 South London Storm 0 08/05/2004 South London Storm 20 Gosport & Fareham Vikings 22 22/05/2004 West London Sharks 54 South London Storm 16 05/06/2004 Greenwich Admirals 46 South London Storm 6 12/06/2004 South London Storm 24 Kingston Warriors 42 19/06/2004 Gosport & Fareham Vikings 46 South London Storm 12 26/06/2004 South London Storm 0 West London Sharks 88 03/07/2004 South London Storm 10 Greenwich Admirals 44 2005 As the cost of travelling to places as far afield as Carlisle and Gateshead began to spiral, Storm took the decision to apply for, and were admitted to, the newly created RLC South Premier for the 2005 season and appointed Rob Powell as Director of Coaching. The season proved to be a success with the club winning its first round Rugby League Challenge Cup match against West London Sharks (24–20) in front of a crowd of 1,000. However, the Powergen Challenge Cup run came to an end in the second round when they were beaten 50–24 at Castleford Lock Lane, despite having surprising led at half-time. During the RLC South Premier campaign the first team dominated the group and won all but one game during the season. The team lost in the national semi-final against Bridgend Blue Bulls, the competition's eventual winners, but the season ended on a high by beating the other 85 clubs to the RLC Club of the Year award for the 2nd time in 4 years. 
Season's Record First Grade Rugby League Conference Premier South 07/05/2005 South London Storm 82 Ipswich Rhinos 6 14/05/2005 London Skolars A 0 South London Storm 64 21/05/2005 South London Storm 72 Sunderland Nissan 6 28/05/2005 South London Storm 46 Greenwich Admirals 0 04/06/2005 Luton Vipers 4 South London Storm 68 11/06/2005 South London Storm 52 West London Sharks 14 18/06/2005 Sunderland Nissan 4 South London Storm 76 25/06/2005 Ipswich Rhinos 16 South London Storm 24 02/07/2005 South London Storm 94 London Skolars A 4 09/07/2005 Greenwich Admirals 4 South London Storm 76 16/07/2005 South London Storm vs Luton Vipers – Won: Walk over 23/07/2005 West London Sharks 46 South London Storm 10 Luton Vipers forfeit a fixture RLC Premier Play Offs 30/07/2005 South London Storm 70 West London Sharks 6 (Divisional Play Off) 13/08/2005 South London Storm 24 West London Sharks 8 (Divisional Final) 21/08/2005 Bridgend Blue Bulls 34 South London Storm 18 (National Semi-Final) Second Grade Rugby League Conference South 07/05/2005 South London Storm 28 Hemel Stags 36 21/05/2005 South London Storm 38 West London 52 28/05/2005 Haringey Hornets 50 South London Storm 12 04/06/2005 Kingston Warriors 41 South London Storm 18 11/06/2005 Hemel Stags 42 South London Storm 18 25/06/2005 West London Sharks 46 South London Storm 16 02/07/2005 South London Storm 18 Haringey Hornets 48 09/07/2005 South London Storm 6 Kingston Warriors 100 2006 The 2006 summer season was to be the most successful for South London Storm as a club, with both senior teams winning their leagues, successes for the 4 Storm youth clubs, and the first team being crowned RLC National Champions. Despite pressure from the Ipswich Rhinos, Storm once again won the South division of the RLC Premier. After disposing of the Bridgend team in the semi-final, they crushed the East Lancashire Lions in the final at Broadstreet RUFC by 30 points to nil. This rounded off a successful season that included the London League title for the second team who defeated Luton Vipers in the Final. 
Season's Record First Grade Rugby League Conference Premier South 29/04/2006 South London Storm 54 West London Sharks 24 06/05/2006 South London Storm 30 Coventry Bears 18 13/05/2006 South London Storm 34 Haringey Hornets 28 20/05/2006 Essex Eels 6 South London Storm 68 27/05/2006 South London Storm 46 Kingston Warriors 18 10/06/2006 Ipswich Rhinos 32 South London Storm 14 17/06/2006 West London Sharks 34 South London Storm 24 24/06/2006 Coventry Bears 28 South London Storm 32 01/07/2006 Haringey Hornets 30 South London Storm 34 08/07/2006 South London Storm 80 Kingston Warriors 12 29/07/2006 South London Storm 46 Ipswich Rhinos 8 RLC Premier Play Offs 12/08/2006 South London Storm 52 Ipswich Rhinos 10 (Divisional Final) 20/08/2006 South London Storm 32 Bridgend Blue Bulls 12 (National Semi-Final) 03/09/2006 East Lancashire Lions 0 South London Storm 30 (Jepson Trophy Final) Second Grade London League 29/04/2006 South London Storm 66 West London Sharks 22 06/05/2006 West London Sharks 36 South London Storm 20 13/05/2006 Bedford Tigers 16 South London Storm 32 20/05/2006 Southend Seaxes 14 South London Storm 40 27/05/2006 Kentish Tigers 24 South London Storm 33 17/06/2006 West London Sharks 38 South London Storm 38 24/06/2006 Luton Vipers 54 South London Storm 6 08/07/2006 Smallford Saints 40 South London Storm 38 22/07/2006 South London Storm 38 West London Sharks 26 London League Play Offs 06/08/2006 South London Storm 44 Bedford Tigers 14 (Semi-Final) 12/06/2006 South London Storm 52 Luton Vipers 20 (Final) 2007 After the success of the previous season, 2007 was always going to be a tough year. Coach Rob Powell moved on to Super League's Harlequins RL, and was replaced by Andy Gilvary and Dave Wilson. Meanwhile, ten of the Grand Final winning team moved on to pastures new. The season kicked off with a Challenge Cup First Round game away to Thornhill Trojans, but playing out of season the Londoners were no match for the National Conference League Premier Division side and lost 18–58. 
Season's Record First Grade Rugby League Conference Premier South 14/04/2007 West London Sharks 16 South London Storm 56 28/04/2007 South London Storm 26 St Albans Centurions 22 12/05/2007 South London Storm 18 Ipswich Rhinos 4 19/05/2007 West London Sharks 28 South London Storm 32 26/05/2007 London Skolars A 26 South London Storm 42 02/06/2007 South London Storm 74 Kent Ravens 2 09/06/2007 South London Storm 22 London Skolars 22 16/06/2007 South London Storm vs Kingston Warriors – Won: Walk over 30/06/2007 Ipswich Rhinos 25 South London Storm 24 07/07/2007 South London Storm 36 West London Sharks 18 15/07/2007 St Albans Centurions 32 South London Storm 16 21/07/2007 Kent Ravens 0 South London Storm 66 28/07/2007 South London Storm 32 London Skolars A 33 04/08/2007 South London Storm 56 Kingston Warriors 22 RLC Premier Play Offs 11/08/2007 South London Storm 48 London Skolars 24 (Divisional Semi-Final) 18/08/2007 South London Storm 10 St Albans Centurions 18 (Divisional Final) Second Grade London League 28/04/2007 South London Storm 22 London Griffins 38 12/05/2007 South London Storm 26 Southgate Skolars 29 19/05/2007 West London Sharks 56 South London Storm 14 02/06/2007 South London Storm 82 Kent Ravens 10 16/06/2007 South London Storm 42 Smallford Saints 18 23/06/2007 Farnborough Falcons 40 South London Storm 28 07/07/2007 South London Storm 18 West London Sharks 24 14/07/2007 Southgate Skolars 72 South London Storm 6 21/07/2007 Kent Ravens 18 South London Storm 30 London League Play Offs 04/08/2007 St Albans Centurions 40 South London Storm 22 (Quarter-Final) 2008 Storm once again reached the RLC Premier South Grand Final but were defeated 20–24 by West London Sharks, with the game-breaking try coming two minutes from the end of the match. Season's Record First Grade Rugby League Conference Premier South 19/04/2008 Ipswich Rhinos 32 South London Storm 12 03/05/2008 St Albans Centurions 32 South London Storm 12 10/05/2008 South London Storm 22 West London Sharks 48 17/05/2008 London Skolars 12 South London Storm 30 24/05/2008 US Portsmouth 22 South London Storm 16 07/06/2008 South London Storm 70 Elmbridge 8 14/06/2008 South London Storm 36 Ipswich Rhinos 16 21/06/2008 South London Storm 42 St Albans Centurions 10 28/06/2008 West London Sharks 38 South London Storm 10 05/07/2008 South London Storm 44 London Skolars 12 12/07/2008 South London Storm 58 US Portsmouth 14 26/07/2008 Elmbridge 22 South London Storm 58 RLC Premier Play Offs 09/08/2008 South London Storm 20 Ipswich Rhinos 14 (Divisional Semi-Final) 16/08/2008 West London Sharks 24 South London Storm 20 (Divisional Final) Second Grade London League 03/05/2008 St Albans Centurions 14 South London Storm 12 10/05/2008 South London Storm 24 West London Sharks 42 17/05/2008 Southgate Skolars 12 South London Storm 16 07/06/2008 Kent Ravens 54 South London Storm 42 28/06/2008 West London Sharks 62 South London Storm 14 14/07/2008 South London Storm 48 Southampton Spitfires 18 12/07/2008 Feltham YOI 52 South London Storm 64 23/07/2008 South London Storm 26 Metropolitan Police 34 26/07/2008 South London Storm 34 Kent Ravens 30 London League Play Offs 02/08/2008 Southampton Spitfires 44 South London Storm 20 (Quarter-Final) 2009 Storm will again participate in the Premier South Division of the RLC. Their opposing teams will be Bedford Tigers, Elmbridge, Hainault Bulldogs, Ipswich Rhinos, London Skolars A, St Albans Centurions, Portsmouth Navy Seahawks and West London Sharks. 
Season's Record First Grade Rugby League Conference Premier South 13/04/09 London Skolars 22 South London Storm 54 25/04/09 Elmbridge 10 South London Storm 54 02/05/09 Portsmouth Navy Seahawks 24 South London Storm 80 09/05/09 South London Storm 66 Hainault Bulldogs 4 16/05/09 South London Storm 102 Bedford Tigers 12 23/05/09 St Albans Centurions 20 South London Storm 48 30/05/09 South London Storm 56 London Skolars 18 06/06/09 West London Sharks 8 South London Storm 24 13/06/09 Ipswich Rhinos 14 South London Storm 26 27/06/09 South London Storm 73 Portsmouth Navy Seahawks 30 04/07/09 Bedford Tigers vs South London Storm – Won: Walk over 11/07/09 South London Storm 40 St Albans Centurions 10 18/07/09 South London Storm 40 West London Sharks 30 25/07/09 South London Storm vs Ipswich Rhinos – Won: Walk over Bedford Tigers and Ipswich Rhinos each forfeit a fixture. RLC Premier Play Offs 15/08/09 South London Storm 58 Ipswich Rhinos 12 (Divisional Semi-Final) 22/08/09 South London Storm 16 West London Sharks 26 (Divisional Final) Second Grade London League 25/04/09 Guildford Giants 20 South London Storm 28 16/05/09 South London Storm 6 Hammersmith Hills Hoists 54 23/05/09 St Albans Centurions 10 South London Storm 56 06/06/09 West London Sharks 34 South London Storm 36 24/06/09 Hammersmith Hills Hoists 64 South London Storm 14 27/06/09 South London Storm 48 Sussex Merlins 34 25/07/09 South London Storm 66 Hemel Stags 14 02/08/09 Sussex Merlins 40 South London Storm 34 London League Play Offs 16/08/09 Hemel Stags 24 South London Storm 16 (Semi-Final) Academy Grade 18/04/09 Kent Ravens 18 South London Storm 28 25/04/09 South London Storm 40 Medway Dragons 6 09/05/09 South London Storm 40 Greenwich Admirals 16 06/06/09 Medway Dragons 22 South London Storm 20 18/07/09 South London Storm 46 Greenwich Admirals 26 (Final at Staines RFC) 2010 Season's Record First Grade Rugby League Conference Premier South 01/05/10 West London Sharks 34 South London Storm 20 08/05/10 South London Storm vs Portsmouth Navy Seahawks – Won: Walk Over 15/05/10 Hainault Bulldogs 34 South London Storm 28 22/05/10 South London Storm 56 Eastern Rhinos 10 29/05/10 St Albans Centurions 36 South London Storm 4 05/06/10 Hammersmith Hillhoists 36 South London Storm 32 19/06/10 South London Storm 38 London Skolars 6 26/06/10 South London Storm 16 West London Sharks 58 03/07/10 South London Storm 48 Portsmouth Navy Seahawks 28 10/07/10 South London Storm 50 Hainaut Bulldogs 42 17/07/10 Eastern Rhinos 30 South London Storm 22 24/07/10 South London Storm 6 St Albans Centurions 60 31/07/10 South London Storm 18 Hammersmith Hillhoists 52 07/08/10 London Skolars vs South London Storm – Won: Walk Over Not including games forfeited by Portsmouth (h) and London Skolars (a). 
Second Grade Rugby League Conference 01/05/10 South London Storm 40 Guildford Giants 22 08/05/10 South London Storm 40 Southampton Spitfires 14 15/05/10 South London Storm 54 Sussex Merlins 22 22/05/10 Elmbridge Eagles 90 South London Storm 4 05/06/09 South London Storm 98 Swindon St George 0 09/06/10 Greenwich Admirals 14 South London Storm 16 12/06/10 Oxford Cavaliers 24 South London Storm 40 19/06/10 Guildford Giants 62 South London Storm 12 26/06/10 Southampton Spitfires vs South London Storm – Lost: Walkover 03/07/10 Sussex Merlins 50 South London Storm 20 10/07/10 South London Storm 4 Elmbridge Eagles 72 17/07/10 South London Storm 6 Greenwich Admirals 64 24/07/10 Swindon St George vs South London Storm – Lost: Walkover 31/07/10 South London Storm vs Oxford Cavaliers – Won: Walkover 2011 Season's Record First Grade Rugby League Conference Premier South 30/04/11 South London Storm 26 St Albans Centurions 32 07/05/11 Eastern Rhinos 46 South London Storm 14 14/05/11 West London Sharks 28 South London Storm 18 21/05/11 Hainault Bulldogs 14 South London Storm 40 04/06/11 Hammersmith Hills Hoists 30 South London Storm 6 11/06/11 St Albans Centurions 46 South London Storm 10 28/06/11 South London Storm 22 Eastern Rhinos 32 (@ St Albans) 25/06/11 South London Storm 24 West London Sharks 22 02/07/11 South London Storm 32 London Skolars 18 09/07/11 South London Storm 18 Hammersmith Hills Hoists 62 16/07/11 South London Storm 24 Eastern Rhinos 42 23/07/11 South London Storm 20 West London Sharks 22 30/07/11 Hammersmith Hills Hoists vs South London Storm Second Grade London League 07/05/11 London Skolars 'A' 56 South London Storm 16 21/05/11 Phantoms RL 4 South London Storm 70 04/06/11 Hammersmith Hills Hoists 'A' 60 South London Storm 20 18/06/11 St Albans Centurions 'A' 24 South London Storm 10 18/06/11 Bedford Tigers 'A' 16 South London Storm 10 (@ St Albans) 25/06/11 South London Storm 0 Mudchute Uncles 28 09/07/11 South London Storm 18 Hammersmith Hills Hoists 'A' 36 16/07/11 South London Storm vs Hemel Stags 'A' 23/07/11 South London Storm vs Greenwich Admirals 'A' 30/07/11 Hammersmith Hills Hoists 'A' vs South London Storm Challenge Cup Record 29/11/2003 South London Storm 4 West Bowling 36 (1st Round) 05/02/2005 South London Storm 24 West London Sharks 20 (1st Round) 19/02/2005 Castleford Lock Lane 50 South London Storm 24 (2nd Round) 03/02/2007 Thornhill Trojans 58 South London Storm 18 (1st Round) External links Official Website Storm Facebook Page Storm YouTube Channel London RL Rugby League Conference South London Storm vs London Skolars A 2009 on YouTube Thornhill Trojans vs South London Storm 2007 on YouTube South London Storm Development 2007 on YouTube South London Storm vs London Skolars 2007 on YouTube References Rugby League Conference teams Sport in the London Borough of Croydon Rugby league teams in London Rugby clubs established in 1997
20825633
https://en.wikipedia.org/wiki/Nokia%206301
Nokia 6301
The Nokia 6301, approved by the FCC for the US market in January 2008, (RM-323 for the North America market, RM-322 for the European market) is a triband GSM mobile phone. The North American model 6301b is equipped with 850/1800/1900 MHz bands. The European model 6301 is equipped with 900/1800/1900 MHz bands. The 6301 has SMS and MMS 1.2, and is capable of instant messaging. It has a standard 12 button numeric keypad, a five way navigation key and four additional keys on its face. It has a side volume key and a top mounted dedicated power key. The bulk of the area above the keypad is taken up with the 2.0" TFT display, 320 x 240 pixels with up to 16.7 Million colors. It is a small device, weighing 3.27 oz and is 4.20 x 1.72 x 0.52 in. Power is provided by a BL-4C 860 mAh Li-ion unit providing up to 3.5 hours of use per charge. While nearly identical in appearance to the Nokia 6300, there are significant differences between the two. Major features UMA Unlicensed Mobile Access (UMA) allows the mobile device to utilize a wireless router to make phone calls. The phone and router are recognized as being similar to a phone and GSM tower by the carrier's system. This allows for seamless handoffs between the router and the GSM tower at the point where the device is no longer in range of the router. 802.11b/g WiFi A WiFi connection for data transfer to the service provider means downloading content such as ringtones, games and wallpapers proceeds at a much faster pace than EDGE or GPRS speeds. Music capability An FM radio and multi-format music player are included in the build. Stereo output is accomplished through the 2.5 mm Nokia AV port at the bottom of the device or through the Bluetooth A2DP provision. Operation of the FM radio requires a wired device be attached to the AV port as it functions as the antenna for the radio. The music player supports MIDI, AAC, AAC+, enhanced AAC+, MP3, and WMA files. There is an equalizer to allow for some adjustment of the sound's tonal quality. Photography and videography A 2.0 megapixel CMOS sensor is mated to a fixed focus lens without auxiliary lighting. Maximum resolution is 1600 x 1200. Digital zoom allows for in camera cropping prior to taking the shot. The still image camera has High, Normal and Basic quality settings. Image size choices are 160 x 120, 320 x 240, 640 480, 800 x 600, 1280 x 960 and 1600 x 1200. Available effects are Normal, False color, Grayscale, Sepia, Negative and Solarize. White Balance choices are Auto, Daylight, Tungsten and Fluorescent. Video is shot through the same lens as still images. Video resolution choices are 176 x 144 and 128 x 96 and appear to be 15 frames per second. Video clip length choices are Maximum and Default. Default will produce a clip suitable for sending via MMS, or roughly less than 300KB. Other menu choices for video mirror those for still images. Both still and video images may be saved either to the phone memory or to the storage card. Other features Browsing The 6301 includes an xHTML (Extensible HyperText Markup Language) browser for internet browsing. It is capable of displaying many web pages as intended with some side to side scrolling. It handles mobile designed web pages without scrolling. It handles google.com both in mobile and classic form without incident. The phone's rendering of Nokia.com was different from a desktop browser's rendering, which appears to strictly be an Adobe Flash Player related issue. Connectivity Getting the phone connected is accomplished by several channels. 
There is a mini USB plug behind a cover at the bottom of the device. Nokia does not specify if this device is USB 1.0, 1.1 or 2.0 compliant. This is one avenue for connecting the 6301 to a computer. The Bluetooth radio is another avenue for connection. This device implements Bluetooth 2.0 + EDR supporting these profiles: A2DP, AVRCP, DUN, FTP, GAP, GAVDP, GOEP, HFP, HSP, OPP, SAP, SDAP, SPP. Sideloading of information may be accomplished by way of the microSD or micro SDHC card. This device supports up to 4 gigabytes. Memory Device memory is up to 30 megabytes for end user purposes. Flash memory of 64 MB for handling the device firmware is also present. Nokia 6300 The Nokia 6300 is virtually identical in exterior appearance to the Nokia 6301. There are, however, several internal and functional differences that place it in a different functional niche from the Nokia 6301. Memory The user memory available is 7.8 megabytes. The microSD storage capacity is 2 gigabytes. Connectivity Connectivity is limited to Bluetooth; there is no 802.11b/g (WiFi) support on this device. Identifying the 6300 and 6301 The battery compartment label in these devices has information about the device. This includes the model, the type and the FCC ID. The following list summarizes that information and the major differences among the various types. European/Asian 6300 Model 6300 Type: RM-217 FCC ID: PPIRM-217 900/1800/1900 MHz - no WiFi 7.8 megabytes user memory 2 gigabytes microSD maximum North American 6300 Model 6300b Type: RM-222 FCC ID: PPIRM-222 850/1800/1900 MHz - no WiFi 7.8 megabytes user memory 2 gigabytes microSD maximum European/Asian 6301 Model 6301 Type: RM-322 FCC ID: PPIRM-322 900/1800/1900 MHz + WiFi 30 megabytes user memory 4 gigabytes micro SDHC maximum North American 6301 Model 6301b Type: RM-323 FCC ID: PPIRM-323 850/1800/1900 MHz + WiFi 30 megabytes user memory 4 gigabytes micro SDHC maximum References 6301
11957006
https://en.wikipedia.org/wiki/Tekla
Tekla
Tekla is a software product family that consists of programs for analysis and design, detailing and project communication. Tekla software is produced by Trimble, the publicly listed US-based technology company. History Tekla Corporation was a software engineering company specialised in model-based software products for building, construction and infrastructure management. The company was listed on the Helsinki Stock Exchange from May 2000 until February 2012. The name Tekla is a given name, used in the Nordic countries, in Poland and in Georgia. However, in this case it is an abbreviation of the Finnish words Teknillinen laskenta, which means technical computation. In May 2011, California-based business technology specialist Trimble Navigation announced a public tender offer to acquire Tekla for $450 million. The acquisition was completed in February 2012. In January 2016, Tekla Corporation, as an organization, changed its name to Trimble. Software Tekla engineering software has been around since the late 1960s. Tekla Structures is 3D building information modeling (BIM) software used in the building and construction industries for steel and concrete detailing, both precast and cast in-situ. The software enables users to create and manage 3D structural models in concrete or steel, and guides them through the process from concept to fabrication. The creation of shop drawings is automated, along with the creation of CNC files, files for controlling reinforcement-bending machines, files for controlling precast concrete manufacturing, imports into PLM systems, and so on. Tekla Structures is available in different configurations and localised environments to suit different segment- and culture-specific needs. Tekla Structural Designer is software for the analysis and design of concrete and steel buildings. Tekla Tedds is an application for automating repetitive structural and civil calculations. The software is used in engineering for creating output such as calculations, sketches and notes. Tekla BIMsight is a software application for building information model-based construction project collaboration. It can import models from other BIM applications using the Industry Foundation Classes (IFC) format, as well as DWG and DGN files. With Tekla BIMsight, users can perform spatial co-ordination (clash or conflict checking) to avoid design and constructability issues, and communicate with others in their construction project by sharing models and notes. See also Comparison of CAD editors for CAE References Engineering companies of Finland Software companies of Finland Computer-aided design software Computer-aided engineering software Building information modeling Product lifecycle management Companies based in Espoo Design companies established in 1966 Electronics companies established in 1966 Technology companies established in 1966 Finnish brands Finnish companies established in 1966
445405
https://en.wikipedia.org/wiki/Computer%20desk
Computer desk
The computer desk and related ergonomic desk are furniture pieces designed to comfortably and aesthetically provide a working surface and house or conceal office equipment including computers, peripherals and cabling for office and home-office users. Computer desk The most common form of the computer desk is a variant of the ergonomic desk, which has an adjustable and sufficient desktop space for handwriting. Provisions for a monitor shelf and holes for routing cables are integrated in the design, making it easier to connect the computer components together. The typical armoire desk provides space for a keyboard, mouse, monitor, printer and speakers. Cubicle desk designs for business and government workplaces include a range of shelves, trays and cable-routing holes for computer systems. In some computer desks, the cabling is affixed to the modesty panel at the back of the desk, to create a neater appearance. There are a great variety of computer desk shapes and forms. Large multiple student computer desks configured in rows are designed to house dozens of computer systems while facilitating wiring, general maintenance, theft prevention and vandalism reduction. Small rolling lectern desks or computer carts with tiny desktops provide just enough room for a laptop computer and a mouse pad. Computer desks are typically mass-produced and require some self-assembly. The computer itself is normally separate from the desk, which is designed to hold a typically sized computer, monitor and accessories. Cabling must be routed through the channels and access openings by the user or installer. A small number of computers are built within a desk made specially for them, like the British i-desk. Various proposals for the "Office of the future" suggested other integrated designs, but these have not been taken up. A rolling chair table configuration offers mobility and improved access in situations where a desk is not convenient. Gyratory computer tables can be used over a bed. Modular computer tables separate user interface elements from the computing and network connection, allowing more placement flexibility. The modules are connected via wireless technology. Ergonomic desk The ergonomic desk is a modern desk form which, like the adjustable drawing table or drafting table, offers mechanical adjustments for the placement of its elements in order to maximize user comfort and efficiency. The ergonomic desk is usually a "stand-alone" piece of furniture allowing access to the adjustment mechanisms. Some ergonomic desks have a sufficiently large desktop height adjustment to create either a common "sit-down" desk or a less common standing desk, which allows the user to work while standing. The ergonomic desk is usually a close companion to the ergonomic chair. The ergonomic desk originated with the beginning of the field of human factors or ergonomics after World War II. Legislation stating minimal requirements for furniture used by office workers referred to ergonomic desk standards. The desk area should be deep enough to accommodate a monitor placed at least 20 inches away from your eyes. Health and safety Some research has indicated that the placement of computer desks in an office environment can influence workers' happiness and productivity. Having an appropriate chair increases comfort and can reduce work-related injuries and pain. See also List of desk forms and types References Durfee, Charles. Build a Computer Desk. Fine Woodworking. No. 164. July–August 2003. pp. 42–49. Lauziere, Stephen. 
A Laptop Computer Desk Doubles as a Side Table. Fine Woodworking. No. 133. July–August 2003. pp. 58–63. Grandjean, E. Ergonomics in Computerized Offices. CRC, 1986. pp. 135–149. Desks Ergonomics
4579112
https://en.wikipedia.org/wiki/Contactless%20smart%20card
Contactless smart card
A contactless smart card is a contactless credential whose dimensions are credit-card size. Its embedded integrated circuits can store (and sometimes process) data and communicate with a terminal via NFC. Commonplace uses include transit tickets, bank cards and passports. There are two broad categories of contactless smart cards. Memory cards contain non-volatile memory storage components, and perhaps some specific security logic. Contactless smart cards contain read-only RFID called CSN (Card Serial Number) or UID, and a re-writeable smart card microchip that can be transcribed via radio waves. Overview A contactless smart card is characterized as follows: Dimensions are normally credit card size. The ID-1 of ISO/IEC 7810 standard defines them as 85.60 × 53.98 × 0.76 mm (3.370 × 2.125 × 0.030 in). Contains a security system with tamper-resistant properties (e.g. a secure cryptoprocessor, secure file system, human-readable features) and is capable of providing security services (e.g. confidentiality of information in the memory). Assets managed by way of a central administration systems, or applications, which receive or interchange information with the card, such as card hotlisting and updates for application data. Card data is transferred via radio waves to the central administration system through card read-write devices, such as point of sales devices, doorway access control readers, ticket readers, ATMs, USB-connected desktop readers, etc. Benefits Contactless smart cards can be used for identification, authentication, and data storage. They also provide a means of effecting business transactions in a flexible, secure, standard way with minimal human intervention. History Contactless smart cards were first used for electronic ticketing in 1995 in Seoul, South Korea. Since then, smart cards with contactless interfaces have been increasingly popular for payment and ticketing applications such as mass transit. Globally, contactless fare collection is being employed for efficiencies in public transit. The various standards emerging are local in focus and are not compatible, though the MIFARE Classic card from Philips has a large market share in the United States and Europe. In more recent times, Visa and MasterCard have agreed to standards for general "open loop" payments on their networks, with millions of cards deployed in the U.S., in Europe and around the world. Smart cards are being introduced in personal identification and entitlement schemes at regional, national, and international levels. Citizen cards, drivers’ licenses, and patient card schemes are becoming more prevalent. In Malaysia, the compulsory national ID scheme MyKad includes 8 different applications and is rolled out for 18 million users. Contactless smart cards are being integrated into ICAO biometric passports to enhance security for international travel. With the COVID-19 pandemic demand for and usage of contactless credit and debit cards has increased with the goal of reducing spread of the virus from PIN pads, chip card readers and magnetic stripe readers. Readers Contactless smart card readers use radio waves to communicate with, and both read and write data on a smart card. When used for electronic payment, they are commonly located near PIN pads, cash registers and other places of payment. When the readers are used for public transit they are commonly located on fare boxes, ticket machines, turnstiles, and station platforms as a standalone unit. 
When used for security, readers are usually located to the side of an entry door. Technology A contactless smart card is a card in which the chip communicates with the card reader through an induction technology similar to that of an RFID (at data rates of 106 to 848 kbit/s). These cards require only close proximity to an antenna to complete a transaction. They are often used when transactions must be processed quickly or hands-free, such as on mass transit systems, where a smart card can be used without even removing it from a wallet. The standard for contactless smart card communications is ISO/IEC 14443. It defines two types of contactless cards ("A" and "B") and allows for communications at distances of up to about 10 cm (4 in). There have been proposals for ISO/IEC 14443 types C, D, E, F and G, but they were rejected by the International Organization for Standardization. An alternative standard for contactless smart cards is ISO/IEC 15693, which allows communications at greater distances, up to about 1–1.5 m. Examples of widely used contactless smart cards are Seoul's Upass (1996), Hong Kong's Octopus card, Shanghai's Public Transportation Card (1999), Paris's Navigo card, Japan Rail's Suica Card (2001), Singapore's EZ-Link, Taiwan's EasyCard, San Francisco Bay Area's Clipper Card (2002), London's Oyster card, Beijing's Municipal Administration and Communications Card (2003), South Korea's T-money, Southern Ontario's Presto card, India's More Card, Melbourne's Myki card and Sydney's Opal card, the earliest of which predate the ISO/IEC 14443 standard. The following tables list smart cards used for public transportation and other electronic purse applications. A related contactless technology is RFID (radio frequency identification). In certain cases, it can be used for applications similar to those of contactless smart cards, such as for electronic toll collection. RFID devices usually do not include writeable memory or microcontroller processing capability as contactless smart cards often do. There are dual-interface cards that implement contactless and contact interfaces on a single card with some shared storage and processing. An example is Porto's multi-application transport card, called Andante, that uses a chip in contact and contactless (ISO/IEC 14443 type B) mode. Like smart cards with contacts, contactless cards do not have a battery. Instead, they use a built-in inductor, using the principle of resonant inductive coupling, to capture some of the incident electromagnetic signal, rectify it, and use it to power the card's electronics. Communication protocols Applications Transportation Since the start of using the Seoul Transportation Card, numerous cities have moved to the introduction of contactless smart cards as the fare media in an automated fare collection system. In a number of cases these cards carry an electronic wallet as well as fare products, and can be used for low-value payments. Contactless bank cards Starting around 2005, a major application of the technology has been contactless payment credit and debit cards. Some major examples include: ExpressPay – American Express MasterCard Contactless (formerly PayPass) – MasterCard Visa Contactless (formerly payWave) – Visa QuickPass – UnionPay JCB Contactless (formerly J/Speedy), QUICPay (not compatible with EMV Contactless/ISO/IEC 14443) – JCB RuPay Contactless – RuPay Zip – Discover Roll-outs started in 2005 in the United States, and in 2006 in some parts of Europe and Asia (Singapore). In the U.S., contactless (non-PIN) transactions cover a payment range of ~$5–$100.
In general there are two classes of contactless bank cards: magnetic stripe data (MSD) and contactless EMV. Contactless MSD cards are similar to magnetic stripe cards in terms of the data they share across the contactless interface. They are only distributed in the U.S. Payment occurs in a similar fashion to mag-stripe, without a PIN and often in off-line mode (depending on parameters of the terminal). The security level of such a transaction is better than a mag-stripe card, as the chip cryptographically generates a code which can be verified by the card issuer's systems. Contactless EMV cards have two interfaces (contact and contactless) and work as a normal EMV card via their contact interface. The contactless interface provides similar data to a contact EMV transaction, but usually a subset of the capabilities (e.g. usually issuers will not allow balances to be increased via the contactless interface, instead requiring the card to be inserted into a device which uses the contact interface). EMV cards may carry an "offline balance" stored in their chip, similar to the electronic wallet or "purse" that users of transit smart cards are used to. Identification A quickly growing application is in digital identification cards. In this application, the cards are used for authentication of identity. The most common example is in conjunction with a PKI. The smart card will store an encrypted digital certificate issued from the PKI along with any other relevant or needed information about the card holder. Examples include the U.S. Department of Defense (DoD) Common Access Card (CAC), and the use of various smart cards by many governments as identification cards for their citizens. When combined with biometrics, smart cards can provide two- or three-factor authentication. Smart cards are not always a privacy-enhancing technology, for the subject carries possibly incriminating information about him all the time. By employing contactless smart cards, that can be read without having to remove the card from the wallet or even the garment it is in, one can add even more authentication value to the human carrier of the cards. Other The Malaysian government uses smart card technology in the identity cards carried by all Malaysian citizens and resident non-citizens. The personal information inside the smart card (called MyKad) can be read using special APDU commands. Security Smart cards have been advertised as suitable for personal identification tasks, because they are engineered to be tamper resistant. The embedded chip of a smart card usually implements some cryptographic algorithm. There are, however, several methods of recovering some of the algorithm's internal state. Differential power analysis Differential power analysis involves measuring the precise time and electric current required for certain encryption or decryption operations. This is most often used against public key algorithms such as RSA in order to deduce the on-chip private key, although some implementations of symmetric ciphers can be vulnerable to timing or power attacks as well. Physical disassembly Smart cards can be physically disassembled by using acid, abrasives, or some other technique to obtain direct, unrestricted access to the on-board microprocessor. Although such techniques obviously involve a fairly high risk of permanent damage to the chip, they permit much more detailed information (e.g. photomicrographs of encryption hardware) to be extracted. Eavesdrop on NFC communication Short distance (≈10 cm. 
or 4″) is required for supplying power. The radio frequency, however, can be eavesdropped within several meters once powered-up. Concerns Failure rate The plastic card in which the chip is embedded is fairly flexible, and the larger the chip, the higher the probability of breaking. Smart cards are often carried in wallets or pockets — a fairly harsh environment for a chip. However, for large banking systems, the failure-management cost can be more than offset by the fraud reduction. A card enclosure may be used as an alternative to help prevent the smart card from failing. Privacy Using a smart card for mass transit presents a risk for privacy, because such a system enables the mass transit operator, the banks, and the authorities, to track the movement of individuals. The same argument can be made for banks tracking retail payments. Such information was used in the investigation of the Myyrmanni bombing. Theft and fraud Contactless technology does not necessarily prevent use of a PIN for authentication of the user, but it is common for low value transactions (bank credit or debit card purchase, or public transport fare payment) not to require a PIN. This may make such cards more likely to be stolen, or used fraudulently by the finder of someone else's lost card. Use abroad Inland data networks quickly convey information between terminals and central banking systems, such that contactless payment limits may be monitored and managed. This may not be possible with use of such cards when abroad. Multiple cards detection When two or more contactless cards are in close proximity the system may have difficulty determining which card is intended to be used. The card-reader may charge the incorrect card or reject both. This is generally only an issue where a service provider uses a payment card to facilitate access - eg a wallet containing a parking lot access card, an apartment building entry card and various contactless payment cards can usually be used on entry to a car park or whatever - the car park entry system can detect its own card in the wallet and open the barrier. In a retail shop, however, it is advisable to remove the individual contactless card from the wallet when making a payment. At the very least this gives the cardholder the opportunity to communicate which card they intend to be used to make payment. It is an issue of the card identifying a subscription -v- payment by transaction. See also Access badge Access control Disk encryption Keycard lock Physical security Android Pay Apple Pay Biometric passport Common Access Card Contactless payment Credential Electronic money EMV Identity document Java Card List of smart cards Magnetic stripe card Microchip implant (human) MULTOS Near field communication Octopus Card Payment Card Industry Data Security Standard Proximity card Radio-frequency identification Security engineering Single sign-on Smart card SNAPI Subscriber identity module Telephone card Notes References Ubiquitous computing ISO standards Banking technology
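The article above notes that card data is exchanged with PC/SC card readers and that information on cards such as MyKad is read by sending APDU commands. As a rough, hedged illustration of that interaction (not part of the original article), the sketch below uses Python with the pyscard library and the standard PC/SC pseudo-APDU (class 0xFF, instruction 0xCA) to ask a contactless reader for the card's UID/CSN. It assumes a PC/SC-compatible contactless reader that supports this pseudo-APDU, and it only retrieves the public serial number, not any protected data.

```python
from smartcard.System import readers
from smartcard.util import toHexString

# Enumerate the PC/SC readers attached to the machine and use the first one.
available_readers = readers()
connection = available_readers[0].createConnection()
connection.connect()  # a contactless card must be within the reader's field

# Standard PC/SC pseudo-APDU "GET DATA": asks the reader for the card's UID/CSN.
GET_UID = [0xFF, 0xCA, 0x00, 0x00, 0x00]
data, sw1, sw2 = connection.transmit(GET_UID)

# Status word 90 00 indicates success.
print("UID:", toHexString(data), "- status: %02X %02X" % (sw1, sw2))
```

Reading anything beyond the UID (for example the MyKad fields mentioned above) requires card-specific APDUs and, for protected files, the card's own authentication mechanisms.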
37305349
https://en.wikipedia.org/wiki/2012%20GX17
2012 GX17
2012 GX17 is a minor body classified as a Centaur and a Trans-Neptunian object by the Minor Planet Center. The object was once considered a promising Neptune L5 trojan candidate. Discovery 2012 GX17 was discovered on 14 April 2012 by the Pan-STARRS 1 telescope, observing from Haleakala, Hawaii. Orbit 2012 GX17 follows a rather eccentric orbit (0.55) with a semi-major axis of 37.4 AU. This object also has a high orbital inclination (32.5°). Physical properties 2012 GX17 is a rather large minor body with an absolute magnitude of 7.6, which gives a characteristic diameter of 60–200 km for an assumed albedo in the range 0.5–0.05. Former Neptune trojan candidate Initially, 2012 GX17 was considered to be a promising Neptune trojan candidate, based on a very preliminary determination of 30.13 AU for its semi-major axis. However, the true value is much larger (37.4 AU) and it is now classified as a Trans-Neptunian object. References External links Four temporary Neptune co-orbitals: (148975) 2001 XA255, (310071) 2010 KR59, (316179) 2010 EN65, and 2012 GX17 by de la Fuente Marcos, C., & de la Fuente Marcos, R. 2012, Astronomy and Astrophysics, Volume 547, id.L2, 7 pp. Early discovery note data at MPC IAU list of centaurs and scattered-disk objects IAU list of trans-neptunian objects Another list of TNOs The Long Term Dynamical Stability of the Known Neptune Trojans, Jack Lang Soutter, Master of Science thesis (not a Neptune trojan) Neptune trojans Minor planet object articles (unnumbered) Co-orbital minor planets 20120414
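The 60–200 km size range quoted above follows from the standard relation between an asteroid's diameter D (in km), its absolute magnitude H, and its geometric albedo p; this is a back-of-the-envelope check added here for illustration, not part of the original article:

$$D \approx \frac{1329\,\mathrm{km}}{\sqrt{p}}\,10^{-H/5}$$

For H = 7.6 this gives D ≈ 57 km for a bright surface (p = 0.5) and D ≈ 180 km for a dark one (p = 0.05), roughly the 60–200 km range stated in the article.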
3730657
https://en.wikipedia.org/wiki/Sharp%20MZ
Sharp MZ
The Sharp MZ is a series of personal computers sold in Japan and Europe (particularly Germany and Great Britain) by Sharp beginning in 1978. History Although commonly believed to stand for "Microcomputer Z80", the term MZ actually has its roots in the MZ-40K, a home computer kit produced by Sharp in 1978 which was based on Fujitsu's 4-bit MB8843 processor and provided a simple hexadecimal keypad for input. This was soon followed by the MZ-80K, K2, C, and K2E, all of which were based on 8-bit LH0080A Sharp CPU (compatible to Zilog Z80A) with an alphanumeric keyboard. From the first Z80 processor-based model to the MZ-2200 in 1983, the MZ computers included the PC, monitor, keyboard, and tape-based recorder in a single unit, similar to Commodore's PET series. It was also notable for not including a programming language or operating system in ROM. This invited a host of third-party companies, starting with Hudson Soft, to produce many languages and OSes for the system. In an era when floppy disk drives were too expensive for most home users, the MZ's built-in cassette tape drive was faster and more reliable than the cassette storage on some competing computers; however, this meant that the MZ series was relatively slow to adopt floppy drives as a standard accessory. In 1983, after the most popular home computers appeared in the UK, the Sharp MZ-700 was briefly the 10th best selling machine out of 20 considered, beating the Apple IIe, Atari 800 and TI-99/4A. On 21 December 2012, SHARP published PDF files of manuals for the MZ-80 on their official Twitter page. It was promoted as a "Christmas present" to fans. As of 27 December 2012 the manuals had been downloaded over 1,864,525 times. By 28 December 2012, both of the manual PDF's had been downloaded 3,804,756 times. Tweets of appreciation were received and many praised the release as a wise decision. This Project Started Takafumi Horie tweeted by 15 May 2012 17:51 for SHARP GALAPAGOS together. Sharp's Twitter account said "Manual of MZ-80 ... Nostalgia for those who know those days ... What is that? I hope that those who say that will also take a look and feel the enthusiasm of those days together.". Products The MZ series is divided into several lines, including the text-based MZ-80K series, the graphics-based MZ-80B series, and the MZ-3500/5500 series, based on a completely different architecture. In 1982, Sharp's television division released the X1, a completely new computer. The X series proved to outsell Sharp's own MZ series, and in response, Sharp released the MZ-1500/2500 machines, which featured powered-up graphics and sound capabilities. However, this series saw little marketplace success, and eventually the company abandoned the line in favor of the X68000 series. The MZ name lives on as the initials of two of Sharp's most well-known products: the Mebius line of PCs, and the Zaurus line of personal digital assistants. MZ-80K group The Sharp MZ80K was one of the popular early consumer-level microcomputers, with an architecture based on the Zilog Z80 8-bit microprocessor. It was introduced into Europe in 1979. The machine had 48KB of RAM, 32KB of which was available for user programs (the actual figure was dependent on the memory configuration and the system languages being used). It could run a variety of high-level languages including BASIC, Pascal and FORTRAN, which had to be loaded into RAM before any programming could be undertaken. It could also be programmed directly in assembly code or machine code. 
The machine had an inbuilt monochrome display and a cassette tape drive. The display, keyboard and cassette drive lifted on hinges to expose the motherboard and circuitry underneath. Graphics capability was primitive, with only preset shapes and icons being available and no native hi-res capability. This was not unusual for a late-1970s vintage microcomputer. The main drawback, however, of the MZ-80K was the non-standard keyboard, which was difficult to use. The MZ-80K sold well in Europe despite its high price (it retailed at over £500 in 1980), and a large range of software was available, including some Japanese arcade games. It was superseded in 1982 by the MZ-80A machine. MZ-80K series MZ-80K (1978): An all-in-one kit with keyboard. MZ-80C: Featured an improved keyboard and 48KB of memory. MZ-80K2: The assembled version of the 80K. MZ-80K2E: A low-price version of the 80K2. MZ-80A (1982)/MZ-1200: An upgraded version of the 80K with improved keyboard, more VRAM and a green-screen VDU. MZ-700 series (MZ-80K machines with color graphics) MZ-700 (1982): The first MZ without a built-in monitor; an optional data recorder and plotter could also be installed to the machine. More-or-less fully compatible with the MZ-80K. MZ-1500 (1984): Available in Japan only. Features 320×200-pixel graphics and built-in sound capability using a Texas Instruments SN76489 sound chip. The tape recorder has been replaced with a floppy drive that reads 2.8-inch Quick Disks. MZ-800 (1985): The first MZ with a 640×200-pixel graphics mode, a Texas Instruments SN76489 sound chip and a Quick Disk drive. MZ-80B group This offshoot of the MZ-80K line was primarily marketed for business use. MZ-80B series MZ-80B (1981): 320×200-pixel graphics. (Extra VRAM optional) MZ-80B2: An 80B with extra VRAM installed. Sold alongside the MZ-2000 for most of the lineup's lifetime. MZ-2000 (1982): 640×200-pixel monochrome monitor built-in; color optional. BASIC-level compatible with the MZ-80B. MZ-2200 (1983): The only monitorless, standalone unit in the series. MZ-2500 (SuperMZ) series: Launched in 1985, the computers in this series all used a Z80B processor running at 6 MHz. They included a data recorder and at least one 3.5 internal floppy disk drive, as well as a YM2203 sound chip, hardware scrolling, and a palette of 256 colors (upgradable to 4096). This makes them among the most powerful 8-bit machines ever released for home use. Some models are also compatible with the MZ-80B and MZ-2000. MZ-2511 MZ-2520: The 2511 without a data recorder and the MZ-80B/2000 compatibility modes. MZ-2521 MZ-2531(MZ-2500V2) (1986) MZ-2800 series MZ-2861 (1987): A hybrid 16-bit machine running on an Intel 80286 and a Z80 for MZ-2500 compatibility. It could run MS-DOS in 16-bit mode, as well as a PC98 emulator. MZ-3500/5500/6500 group A line of business PCs shoehorned into the MZ lineup. All of them feature 5.25-inch floppy disk drives. MZ-3500 series (1982): Runs on two Z80A processors. MZ-3541: FDOS and EOS (CP/M compatible) MZ-5500 series (1983): An MS-DOS-based machine running on an Intel 8086 processor. MZ-6500 series (1984): A high-speed version of the MZ-5500 marketed as a CAD workstation. MZ-6500 MZ-6550: A vertically mounted machine with an 80286 processor and a 3.5-inch floppy drive. Other MZ-100: notebook / laptop with Intel 8088 processor and two 720KB DS/DD 3.5" floppy disk drives. MZ-8000 series: A line of PC/AT-compatible machines running on 80286 and 80386 processors. 
See also Sharp X1 References External links Games for MZ-800, Download Sharp MZ-800 The Sharp Users Club A dedicated resource for all Sharp MZ machines MZ-80A Comprehensive Guide Sharp MZ-800 emulator FPGA Hardware MZ Series Emulator Sharp MZ Series upgrades Sharp MZ site with many articles on the history of the series MZ Z80-based home computers Home computers Computer-related introductions in 1978 Early microcomputers
1915691
https://en.wikipedia.org/wiki/Off-the-Record%20Messaging
Off-the-Record Messaging
Off-the-Record Messaging (OTR) is a cryptographic protocol that provides encryption for instant messaging conversations. OTR uses a combination of AES symmetric-key algorithm with 128 bits key length, the Diffie–Hellman key exchange with 1536 bits group size, and the SHA-1 hash function. In addition to authentication and encryption, OTR provides forward secrecy and malleable encryption. The primary motivation behind the protocol was providing deniable authentication for the conversation participants while keeping conversations confidential, like a private conversation in real life, or off the record in journalism sourcing. This is in contrast with cryptography tools that produce output which can be later used as a verifiable record of the communication event and the identities of the participants. The initial introductory paper was named "Off-the-Record Communication, or, Why Not To Use PGP". The OTR protocol was designed by cryptographers Ian Goldberg and Nikita Borisov and released on 26 October 2004. They provide a client library to facilitate support for instant messaging client developers who want to implement the protocol. A Pidgin and Kopete plugin exists that allows OTR to be used over any IM protocol supported by Pidgin or Kopete, offering an auto-detection feature that starts the OTR session with the buddies that have it enabled, without interfering with regular, unencrypted conversations. Version 4 of the protocol is currently being designed by a team led by Sofía Celi, and reviewed by Nik Unger and Ian Goldberg. This version aims to provide online and offline deniability, to update the cryptographic primitives, and to support out-of-order delivery and asynchronous communication. History OTR was presented in 2004 by Nikita Borisov, Ian Avrum Goldberg, and Eric A. Brewer as an improvement over the OpenPGP and the S/MIME system at the "Workshop on Privacy in the Electronic Society" (WPES). The first version 0.8.0 of the reference implementation was published on 21 November 2004. In 2005 an analysis was presented by Mario Di Raimondo, Rosario Gennaro, and Hugo Krawczyk that called attention to several vulnerabilities and proposed appropriate fixes, most notably including a flaw in the key exchange. As a result, version 2 of the OTR protocol was published in 2005 which implements a variation of the proposed modification that additionally hides the public keys. Moreover, the possibility to fragment OTR messages was introduced in order to deal with chat systems that have a limited message size, and a simpler method of verification against man-in-the-middle attacks was implemented. In 2007 Olivier Goffart published mod_otr for ejabberd, making it possible to perform man-in-the-middle attacks on OTR users who don't check key fingerprints. OTR developers countered this attack by introducing socialist millionaire protocol implementation in libotr. Instead of comparing key checksums, knowledge of an arbitrary shared secret can be utilised for which relatively low entropy can be tolerated by using the socialist millionaire protocol. Version 3 of the protocol was published in 2012. As a measure against the repeated reestablishment of a session in case of several competing chat clients being signed on to the same user address at the same time, more precise identification labels for sending and receiving client instances were introduced in version 3. Moreover, an additional key is negotiated which can be used for another data channel. 
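The primitives described above, an ephemeral Diffie–Hellman exchange whose shared secret is turned into a short-lived AES key for each message, can be sketched in a few lines of Python with the cryptography package. This is only an illustration of the building blocks under assumed, modernised parameters (a freshly generated 2048-bit group and HKDF-SHA-256 rather than OTR's fixed 1536-bit group and SHA-1); it is not an implementation of the OTR protocol, and it omits authentication, MAC-key publication, and the message formats.

```python
import os

from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import dh
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

# Group parameters shared by both parties. OTR fixes a specific 1536-bit group;
# a generated 2048-bit group is used here only to keep the sketch self-contained.
# (Parameter generation is slow; real protocols use fixed, published groups.)
parameters = dh.generate_parameters(generator=2, key_size=2048)

# Each side creates an ephemeral key pair for this exchange.
alice_private = parameters.generate_private_key()
bob_private = parameters.generate_private_key()

# Both sides compute the same shared secret from their own private key
# and the peer's public key.
shared_secret = alice_private.exchange(bob_private.public_key())
assert shared_secret == bob_private.exchange(alice_private.public_key())

# Derive a short-lived 128-bit message key from the shared secret.
message_key = HKDF(
    algorithm=hashes.SHA256(), length=16, salt=None, info=b"demo message key"
).derive(shared_secret)

# Encrypt one message with AES in counter mode, as OTR does. Discarding the
# ephemeral DH keys afterwards is what provides forward secrecy.
nonce = os.urandom(16)
encryptor = Cipher(algorithms.AES(message_key), modes.CTR(nonce)).encryptor()
ciphertext = encryptor.update(b"we need to talk") + encryptor.finalize()
print(ciphertext.hex())
```

Because the key pairs are ephemeral and thrown away after use, a later compromise of a long-term identity key does not reveal past message keys, which is the forward-secrecy property the protocol is designed around.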
Several solutions have been proposed for supporting conversations with multiple participants. A method proposed in 2007 by Jiang Bian, Remzi Seker, and Umit Topaloglu uses the system of one participant as a "virtual server". The method called "Multi-party Off-the-Record Messaging" (mpOTR) which was published in 2009 works without a central management host and was introduced in Cryptocat by Ian Goldberg et al. In 2013, the Signal Protocol was introduced, which is based on OTR Messaging and the Silent Circle Instant Messaging Protocol (SCIMP). It brought about support for asynchronous communication ("offline messages") as its major new feature, as well as better resilience with distorted order of messages and simpler support for conversations with multiple participants. OMEMO, introduced in an Android XMPP client called Conversations in 2015, integrates the Double Ratchet Algorithm used in Signal into the instant messaging protocol XMPP ("Jabber") and also enables encryption of file transfers. In the autumn of 2015 it was submitted to the XMPP Standards Foundation for standardisation. Currently, version 4 of the protocol has been designed. It was presented by Sofía Celi and Ola Bini on PETS2018. Implementation In addition to providing encryption and authentication — features also provided by typical public-key cryptography suites, such as PGP, GnuPG, and X.509 (S/MIME) — OTR also offers some less common features: Forward secrecy: Messages are only encrypted with temporary per-message AES keys, negotiated using the Diffie–Hellman key exchange protocol. The compromise of any long-lived cryptographic keys does not compromise any previous conversations, even if an attacker is in possession of ciphertexts. Deniable authentication: Messages in a conversation do not have digital signatures, and after a conversation is complete, anyone is able to forge a message to appear to have come from one of the participants in the conversation, assuring that it is impossible to prove that a specific message came from a specific person. Within the conversation the recipient can be sure that a message is coming from the person they have identified. Authentication As of OTR 3.1, the protocol supports mutual authentication of users using a shared secret through the socialist millionaire protocol. This feature makes it possible for users to verify the identity of the remote party and avoid a man-in-the-middle attack without the inconvenience of manually comparing public key fingerprints through an outside channel. Limitations Due to limitations of the protocol, OTR does not support multi-user group chat but it may be implemented in the future. As of version 3 of the protocol specification, an extra symmetric key is derived during authenticated key exchanges that can be used for secure communication (e.g., encrypted file transfers) over a different channel. Support for encrypted audio or video is not planned. (SRTP with ZRTP exists for that purpose.) A project to produce a protocol for multi-party off-the-record messaging (mpOTR) has been organized by Cryptocat, eQualitie, and other contributors including Ian Goldberg. Since OTR protocol v3 (libotr 4.0.0) the plugin supports multiple OTR conversations with the same buddy who is logged in at multiple locations. Client support Native (supported by project developers) These clients support Off-the-Record Messaging out of the box. (incomplete list) Via third-party plug-in The following clients require a plug-in to use Off-the-Record Messaging. 
HexChat (including *nix versions), with a third-party plugin Miranda IM (Microsoft Windows), with a third-party plugin Pidgin (cross-platform), with a plugin available from the OTR homepage WeeChat, with a third-party plugin Confusion with Google Talk "off the record" Although Gmail's Google Talk uses the term "off the record", the feature has no connection to the Off-the-Record Messaging protocol described in this article; its chats are not encrypted in the way described above and could be logged internally by Google even if not accessible by end-users. See also References Further reading External links Protocol specification Off-the-Record Messaging: Useful Security and Privacy for IM, talk by Ian Goldberg at the University of Waterloo (video) Cross-platform free software Cryptographic protocols Cryptographic software Free security software Instant messaging Internet privacy software
2515671
https://en.wikipedia.org/wiki/CGI%20Inc.
CGI Inc.
CGI Inc., also known as CGI Group Inc., is a Canadian multinational information technology (IT) consulting and systems integration company headquartered in Montreal, Quebec, Canada. CGI originally stood for "Conseillers en gestion et informatique"(Advisors on Management and Computer Systems). More recently, in English speaking countries it is taken to stand for "Consultants to Government and Industries"). CGI went public in 1986 with a primary listing on the Toronto Stock Exchange. CGI is also a constituent of the S&P/TSX 60, and has a secondary listing on the New York Stock Exchange. After almost doubling in size with the 1998 acquisition of Bell Sygma, CGI acquired IMRGlobal in 2001 for $438 million, which added "global delivery options" for CGI. Other significant purchases include American Management Systems (AMS) for $858 million in 2004, which grew CGI's presence in the United States, Europe and Australia and led to the formation of the CGI Federal division. CGI Federal's 2010 acquisition of Stanley, Inc. for $1.07 billion almost doubled CGI's presence in the United States, and expanded CGI into defense and intelligence contracts. In 2012, CGI acquired Logica for $2.7 billion Canadian, making CGI the fifth-largest independent business processes and IT services provider in the world, and the biggest tech firm in Canada. In 2016, CGI had assets worth C$20.9 billion, annual sales of $10.7 billion, and a market value of $9.6 billion. As of 2017 CGI is based in forty countries with around 400 offices, and employs approximately 70,000 people. As of March 2015, Canada made up 15% of CGI's client base revenue, and 29% originated from the United States, while around 40% of their commissions came from Europe, and the remaining 15% derived from locales in the rest of the world. Services provided by CGI as of 2018 include application services, business consulting, business process services, IT infrastructure services, IT outsourcing services, and systems integration services, among others. CGI has customers in a wide array of industries and markets, with many in financial services. CGI also develops products and services for markets such as telecommunications, health, manufacturing, oil and gas, posts and logistics, retail and consumer services, transportation, and utilities. Clients include both private entities and central governments, state, provincial and local governments, and government departments dealing with defense, intelligence, space, health, human services, public safety, justice, tax, revenue and collections. History Founding and early years (1970s-1980s) CGI Inc. was founded as an IT consulting company on June 15, 1976, in Quebec City, Québec, by Serge Godin. Within several months he was joined by co-founder André Imbeau from Quebec City. They initially ran the business from Godin's basement with a single phone. Starting with one client, as the company grew in size the co-founders moved to Montreal, and by the end of their first year they had generated $138,000 in revenue. While CGI stands for "Conseillers en gestion et informatique" in French (which translates to "consultants in management and information technology" in English), the official English meaning would become "Consultants to Government and Industry." In later years the company began to go to market as simply CGI. Throughout the 1970s CGI grew in size and continued to focus on the information technology (IT) services market, soon offering systems integration alongside consulting. 
Near the end of the 1970s, however, the systems integration market began to shift to outsourcing, with CGI responding by branching into IT outsourcing as well. The company also secured a number of government contracts, and the UK Ministry of Defence brought in CGI around 1980 to act as a systems integrator, among other roles. CGI's annual revenue in 1986 was $25 million, and that year the company began acquiring a number of smaller IT services companies. CGI went public with an initial public offering (IPO) to fund the acquisitions, and by the late 1980s CGI was expanding further, acquiring several business processes services (BPS) companies and expanding beyond Canada. Doubling in size (1990s) The CGI Management Foundation was formed in 1992 to manage CGI's "management frameworks, policies and guidelines." CGI earned ISO 9001 certification for their "project management framework" in 1994, and in doing so became the first IT consulting firm in North America to comply with the ISO quality standard. A year later CGI's AMICUS library management software was first developed in collaboration with the National Library of Canada, and in 1997 a customized version was commissioned by the British Library. By the mid-1990s CGI had a client base both in Canada and internationally, and was establishing the company's long-term "build and buy" growth strategy. In 1995 CGI entered into a commercial alliance with the large telecommunications company Bell Canada, with Bell Canada purchasing CGI shares then valued at $18.4 million. By the end of 1996, CGI's annual revenue was $122 million. In April 1997, CGI acquired the company CDSL Holdings Limited (CDSL) for a purchase price of about $36.5 million. At the time CDSL was Canada's largest "independent provider of retail banking services and electronic commerce/switching services," and largely serviced the credit union industry in Canada. After the acquisition, CGI's employees in both Canada and internationally numbered 2,500. After various commercial relationships with Interac since the mid-1980s, in 1997 CGI became the first non-financial company in Canada to enable Interac money transfers for clients. In 1998 CGI acquired the Canadian company Bell Sygma, a Bell Canada subsidiary, which almost doubled CGI's size. The deal was one of the largest Canadian outsourcing contracts of the time. Expansion into international markets (2000s) By 2000 CGI had clients in the banking industry. CGI acquired the company IMRGlobal in 2001 for $438 million, which added "global delivery options" for CGI. In January 2003, the Canadian tech company Cognicase was bought out by CGI for US$221 million, and at the end of 2003 CGI had annual sales of $1.85 billion. In May 2004, CGI purchased the majority of American Management Systems (AMS) for $858 million, acquiring the commercial divisions and all government business not related to national defense. The defense and intelligence practice divisions were sold to CACI for $415 million. As of late 2004, CGI was the world's eighth-largest independent provider of information technology services. CGI co-founder Serge Godin stepped aside as CEO in 2006, taking the new position of executive chairman of the board and appointing as new CEO Michael Roach, who quickly focused on further company expansion. Annual revenue at CGI was $3.5 billion by the fiscal end of 2006. That same year, CGI became one of four primary Recovery Audit Contractors in the US, with responsibilities to audit region B. 
At the end of 2007, CGI had a backlog worth $12.04 billion and an annual revenue of $3.7 billion, employing around 26,500 people. Continuing to develop SaaS products, in 2008 CGI's AMS Advantage ERP system won a Best of Kentucky Award for its use by the Commonwealth of Kentucky. Second doubling in size (2010-2012) In August 2010, CGI Federal acquired Stanley, Inc. for an enterprise value of approximately $1.07 billion. The deal came close to doubling CGI's presence in the United States and expanded CGI into defense and intelligence contracts. Several years earlier, CGI had been legally unable to acquire AMS's defense division because of a lack of U.S. Department of Defense-required infrastructure. In 2010, however, the infrastructure was in place. At the time of merger, Stanley earned annual revenues of $865 million, and that amount, combined with CGI Federal's profit, brought their joint income to about $1.2 billion. In 2010, CGI was included in the Forbes Global 2000 ranking of the 2,000 largest public companies in the world. As of 2011, there were 31,000 CGI employees in 125 offices worldwide and 89% of professionals at CGI also owned company shares. That fall, the EPA awarded CGI Federal a "$207 million task order renewal over a six year period to support the EPA's Central Data Exchange (CDX)." In August 2012, CGI acquired the UK-based computer services company Logica for £1.7 billion in cash. The acquisition raised the number of CGI's staff from 31,000 to 68,000, and CGI became the fifth largest independent business processes and IT services company in the world, with clients in the Americas, Asia, and Europe. It also made CGI the biggest tech firm in Canada. In September 2012, CGI Federal won a $143 million contract to provide operational support for the Army's training elements, the Deputy Chief of Staff for Intelligence, and the United States Army Training and Doctrine Command. Also that September, it was announced that CGI Federal's "health and compliance programs business unit" had been given the highest rating possible by the Software Engineering Institute. In doing so, CGI Federal became the tenth company in the United States to receive the Level 5 rating for CMMI Development. At this point CGI was working on a number of successful health-related projects, largely in North America. However, in 2012 CGI had its $46.2-million contract to build an electronic diabetes registry for eHealth Ontario formally cancelled after it failed to meet deadlines imposed by eHealth. The work that CGI did would later be declared obsolete, and it was overtaken by more recent technology developed by other contractors. Contract work (2013-2014) In 2013, CGI won a significant contract to provide cloud computing services to the UK government, and that April, CGI began working with CIFAS on a modernized platform to visualize and analyse data from the National Fraud Database. At the time, CGI's train occupancy mobile app, iNStAPP, was being used by several train companies and institutions in Europe. In February 2013, the independent analyst firm Verdantix published a report comparing technology consulting and systems integration firms' ability to build efficient renewable energy management systems. The report named CGI as No. 4 on the "overall capabilities" score. Continuing to work in the financial sector, CGI was rated as a "major contender" by Everest Group in a 2013 PEAK Matrix study looking at IT outsourcing capital markets. 
In 2011, CGI Federal was one of several dozen contractors selected by the United States Department of Health and Human Services to establish a new federal health insurance marketplace. Delays in the two-year development process attracted widespread coverage in the media, and following the public launch of HealthCare.gov on October 1, 2013, technical issues surfaced which prevented many users from logging in. As one of the primary contractors involved, CGI Federal's contributions were scrutinized by the press and policy makers, though the Lexington Institute later concluded that "many of the early problems with [CGI's] part of the project were traceable to a front-end feature assembled by a different contractor for which CGI wasn't responsible." CGI was also contracted to help develop health insurance marketplaces for a number of state governments. Some, like the websites for Colorado and Kentucky, were launched smoothly, while the websites for Vermont, Massachusetts, and Hawaii Health Connector experienced difficulties. By the December 2013 deadline the problems had largely been fixed, and within several weeks enrollment in the federal marketplace was at 1.1 million people. Analysis of the situation by journalists, government officials, and think tanks has varied. Despite the press scrutiny over HealthCare.gov, in late 2013 and early 2014 the Centers for Medicare and Medicaid Services awarded CGI various contracts worth a total of $37 million. However, the agency did not renew CGI Federal's contract for HealthCare.gov when it ended in February 2014, citing that the firm was ineffective at fixing the website's problems. According to CGI, the agreement was mutual. As of 2016, CGI ranked No. 955 on the Forbes Global 2000. At the time CGI had assets worth C$20.9 billion, annual sales of $10.7 billion, and a market value of $9.6 billion. In 2014, CGI claimed an "$8 billion pipeline of future task orders—doubling its federal business over the period of a year." Among these contracts were $871 million with the Defense Information Systems Agency, $143 million for visa processing in China, and an "indefinite quantity" contract for the Coast Guard and Department of Homeland Security. CGI also continued to work with state governments, for example signing a $399 million contract to work on the California Enterprise Data to Revenue (EDR) Project for the California Franchise Tax Board. In October 2014, The Globe and Mail reported that CGI was operating ten security centers, from which 1,400 CGI employees monitor "data traffic for an undisclosed number of customers" that include the Canadian Payments Association, the National Bank of Canada, and about forty Canadian government departments. By 2014, CGI had been working with the European space industry for years, and had developed software that helps support the missions of over 200 individual satellites. CGI had also created the Constellation Control Facility that controls the Galileo Commercial Service's 30 satellites, and software for the first satellite in the world with an e-sail (electric solar wind sail). In November 2014 CGI was awarded a new contract by Inmarsat to "support data communications between the pilot and air traffic controller within the European airspace." Inmarsat is the safety communications provider for 98% of airlines. With 16% of CGI's revenue coming from software in 2014, other software projects that year included an app for remote elevator monitoring that uses "machine learning," as well as several high-profile smart grids. 
In November 2014, CGI was awarded a $2 billion IT contract extension from BCE, which is the parent company of Bell Canada, to continue operating Bell's IT network. A month later, PostNord, a large Nordic logistics company, also renewed its 2005 human resources contract with CGI, with CGI continuing to manage payroll processing for PostNord's 26,000 employees in Sweden. Recent developments (2014-present) In November 2014, CGI Federal was recognized by the Coalition for Government Procurement for its veteran hiring program. At that point, around a quarter of CGI Federal's new hires each year were war veterans. Also in 2014, Canadian Business named Michael Roach their Most Innovative CEO of the year. Fiscal revenue by the end of 2014 was C$10.5 billion, and in the first quarter of 2015, CGI had revenues of $2.54 billion. Concerning media speculation over new CGI acquisitions, on April 30, 2015, CEO Michael Roach was quoted as saying that "CGI will not rush into acquisitions," though the company is "open to deals if there is a strategic fit." Many of CGI's more visible projects in 2015 were related to software and municipal safety, including an emergency response system for the Estonian Rescue Board. In March 2015 CGI was awarded a contract by the UK Ministry of Defence (MOD) to provide support for the MOD's Fire Control Battlefield Information System Application (FC BISA) and the Fire Control Application (FCA) systems. In January 2016, CGI and the British Columbia Ministry of Health extended their partnership. Less than a week later, CGI won a contract with the U.S. Navy to work on its NAVSUP Business Systems Center. In March 2016, CGI secured a $61.2 million contract for support of the Swedish social insurance agency. In May 2016, CGI signed an agreement with Sears Canada for a 10-year modernization. Later in May, CGI won the Queensland government contract for debt recovery. About a week later, CGI launched an initiative with the Canadian Securities Administrators to help with IT modernization. In June, CGI signed an 8-year contract with Banque Postale. The same month, the US DISA awarded CGI a task order to provide test and evaluation of the DoD Healthcare Management System Modernization. At the end of June, the US State Department extended its global visa processing work with CGI. In August 2016, CGI secured a $34.2 million contract from the US Army Training and Doctrine Command Intelligence Directorate (TRADOC G-2). Later in the month, CGI's software platform was used for the eHealth exchange. CGI aided in the implementation of a California Franchise Tax Board IT modernization project that has generated $2.6 billion in revenue over 5 years. In September 2016, CGI was selected by 139 of Maine's state agencies to strengthen cloud security and workflow with a $39 million contract. In October 2016, George D. Schindler succeeded Michael Roach as the third CEO in CGI's history. Additionally, in October 2016, CGI was awarded an $824 million contract by the USDA. Soon after, the Department of Veterans Affairs selected the USDA as the provider for its new enterprise financial system. About a week after this news, CGI was selected by Solvay to modernize its IT applications and to support operations. In November 2016, CGI acquired Collaborative Consulting, agreed to work with GAO Financial Management, and signed a 10-year contract with Yellow Pages. A month later, CGI and iA Financial expanded their long-term partnership. In 2017, CGI acquired Affecto Plc. 
and CGI continues to acquire local and IP-based services firms like JSL and Alcyane. In January 2017, CGI was selected by the UK Environment Agency to develop a cloud-based flood forecasting platform and selected by Veterans Affairs Community Care to work on a care recovery audit. In February 2017, Swedish insurer Alecta hired CGI to modernize its IT capabilities. Also in 2017, CGI acquired Summa Technologies and Paragon Solutions. Additionally, in 2017, CGI acquired the Greenwood Village-based IT firm ECS Team, along with CTS Inc., a large technology company based in Birmingham. In March 2017, the state of Colorado selected CGI to update and modernize its payroll system. CGI also secured a contract with the National Police of the Netherlands in March 2017 to create a digital platform for community policing. In April 2017, CGI began operating the IT system for England's water and sewerage supply market. That same month, both Bisnode, a Swedish data service firm, and Aerojet Rocketdyne, an aerospace company, contracted CGI to manage their IT systems. In May 2017, CGI secured a $43 million contract from the city of Los Angeles and a multi-state cooperative contract to offer an enterprise resource planning SaaS platform. In June 2017, SEB, a Swedish financial services group, chose to implement CGI's transaction platform. The next month, CGI secured a $68 million contract from the Los Angeles County Office of Education. Additionally, in July 2017, CGI was awarded four contracts totaling $92.5 million to provide services for U.S. Army Aviation and Missile Command programs. In October 2017, CGI was awarded another contract with the U.S. Army for $37.4 million. The city of Glasgow awarded CGI an IT contract in November 2017. In February 2018, CGI renewed and expanded its IT outsourcing contract with Bombardier Aerospace, and extended its contract with Airbus Group to support its global HR system. In mid-2018, CGI was selected by Fingrid to develop and run its core Datahub for electricity information exchange, and was awarded a $530M DHS CDM contract to support the cyber security of federal agencies. In September 2018, CGI announced a merger with ckc AG, as well as an alliance agreement between CGI and GE to develop and implement electric grid software in North America. A month later, CGI worked with USAID to migrate to a hybrid cloud environment. In the same month, CGI partnered with Scotiabank on an intelligent process automation proof of concept (POC) for trade finance transactions. A few days earlier, CGI partnered with Hydro-Québec to launch MILES, a data analytics tool to address the root causes of electricity outages before they occur. In November 2018, CGI partnered with the Swedish Migration Agency to improve its immigration services. In May 2018, CGI signed two strategic application management contracts with SNCF. In the same month, CGI was selected by Meyer Werft to advance its global growth strategy through IT modernization, and CGI partnered with TD to leverage CGI's Wealth360 portfolio management software. In February 2019, CGI entered into an agreement with YIT, a Finnish construction company. In the same year, CGI and League Data extended their outsourcing agreement (originating in 2002) through 2023. The value of the extensions is $18 million. In late 2019, The Wall Street Journal reported that CGI was among the companies compromised in the Chinese APT10 group's Operation Cloud Hopper hack, which exposed companies' data from 2013 to 2017. The first known target was Rio Tinto, which was accessed through CGI's managed cloud. 
CGI Federal CGI Federal is a wholly owned subsidiary of CGI Inc. CGI Federal has partnered with U.S. federal agencies to provide IT services in defense, diplomacy, intelligence, healthcare, environment, homeland security, justice, treasury and more. CGI Federal's board of directors includes former senior U.S. federal government executives such as Dr. James B. Peake, who was Secretary of the Department of Veterans Affairs from 2007 to 2009. CGI Federal has annual revenue exceeding US$1 billion and holds positions on some of the most competitive contract vehicles for the federal government, including the Alliant 2 government-wide acquisition contract (GWAC). Awards CGI Federal is ranked 26th on the 2018 Washington Technology Top 100, and 72nd on the 2019 Bloomberg Government 200 (BGOV200). The company was also a finalist for the 2018 Greater Washington Government Contractor Awards, Contractor of the Year ($300 million+), and a 2017 ACT-IAC Igniting Innovation Award. A number of CGI Federal executives are recognized for leadership in their sectors, including: Tim Hurlebaus, President: 2019 Wash100, 2018 Fed100 and GovCon 2018 Executive of the Year finalist; Stephanie Mango, Senior Vice President: 2018 Pinnacle Awards, National Security Executive of the Year and 2018 Top 10 Executives to Watch in National Security; Malcolm Harden, Vice President: 2019 Fed100; and Steve Soussa, Senior Vice President: 2018 Top 10 Health Care Leaders to Watch. Innovation In 2018, CGI Federal opened an Innovation Center in Arlington, Virginia, to provide a dedicated collaboration space for agency and CGI experts to jointly explore the potential of new technologies. Notable projects 2019 CGI Federal wins $223M Navy Electronic Procurement System Contract 2018 CGI helps USAID adopt hybrid cloud IT infrastructure CGI awarded $530 million Continuous Diagnostics and Mitigation contract to strengthen cyber security of federal agencies 2017 Social Security Administration (SSA) taps CGI, Leidos, Northrop for $7.8B IT services contract CGI Federal to provide contract writing management software under potential $134M Army deal 2016 CGI wins position on US Navy's $809M NAVSUP Business Systems Center Contract USDA awards CGI potential $824M cloud-based financial shared services contract 2015 USDA offers CGI's Momentum as a federal shared service 2014 CGI Federal selected as prime contractor for General Services Administration (GSA) OASIS 2013 Department of Homeland Security awarded CGI Federal a position on $6 billion BPA to provide continuous diagnostic and mitigation tools 2012 The Centers for Medicare & Medicaid Services and CGI redesigned and relaunched Medicare.gov 2011 CGI was contracted to develop the federal health insurance exchange (also referred to as Healthcare.gov or Obamacare) by the Centers for Medicare and Medicaid Services Markets and corporate structure CGI has an international client base, with large institutional clients in a wide array of industries and markets. The United States made up 29% of their client base as of March 2015, while Canada was the second-highest percentage at 15%. The majority of CGI's remaining contracts are in Europe, with 15% in the rest of the world. CGI has a primary listing on the Toronto Stock Exchange and is a constituent of the S&P/TSX 60. It has a secondary listing on the New York Stock Exchange. As of March 2015, CGI made 42% of its revenue through government contracts. 
Products and services Originally CGI focused its products and services on IT consulting, and the company later branched into outsourcing, software development, and systems integration, among other industries. At the end of 2014, CGI earned 52% of its revenue from providing outsourcing services (specifically through IT services and to a lesser degree, business process services) and 48% of its revenue from systems integration and consulting. Services CGI supplies in relation to business consulting include business intelligence, business transformation, change management, cyber security, CIO advisory services, digital enterprise, as well as other industry-specific services. In relation to business process services, CGI offers customer service and billing, payment services, enterprise services, collections, engineering and logistics, document and data services, and a BPS service launch. CGI provides full IT outsourcing services. The following is an overview of services provided by CGI as of 2015: Application services Business consulting Business process services Infrastructure services IT outsourcing services Systems integration services Recent projects Secure cloud computing CGI was the first large cloud provider to receive the U.S. Federal Risk and Authorization Management Program (FedRAMP) cloud security certification. CGI has also received the Defense Information Systems Agency's (DISA) cloud security accreditation. In December 2014 the firm IDC named CGI a "leader" in cloud IaaS services for governments. Emerging technologies The company has worked on numerous projects utilizing emerging technologies, and in 2013 the World Anti-Doping Agency (WADA) launched the "whereabouts" mobile application. Developed with CGI, the app was put into use by over 25,000 athletes, allowing them to enter, check and change information on their whereabouts as part of their regulatory obligations. CGI's other emerging technology projects in 2014 included an "Internet of Things"-based predictive maintenance software for remote elevator monitoring that uses "machine learning." In January 2015, CGI worked with the Estonian Ministry of the Interior and the Estonian Rescue Board to develop an emergency response system with the intent of accelerating response time in public safety missions. In July 2014, the analyst firm IDC named CGI a "major player" in "worldwide utilities mobile field force management software." Smart meters and smart grids The DECC selected CGI to work on the DCC User Gateway, which is a network for businesses and utilities to access a central network on smart meters. As of 2014 CGI has also been commissioned to build smart grids for high-profile projects such as Low Carbon London and InovGrid in Portugal. Asset management and manufacturing CGI has long been involved in the asset management market, developing a number of related software projects such as the PragmaCAD. CGI's ARM product suite is primarily used by US distribution companies, while CGI's Renewable Management System is used by companies such as EDP Renewables to "monitor and control energy production." CGI continues to be a member of the Institute of Asset Management (IAM) in the UK, and along with the IAM and the British Standards Institution was involved in developing the ISO 55000 asset integrity standard. Logica had co-founded the IAM in 2003 before its acquisition by CGI. As a member of the board of the Manufacturing Enterprise Solutions Association (MESA), CGI is also a provider of MESA C-level training. 
CGI has developed asset management software for clients in highly regulated markets such as the oil and gas industries, and by the early 2000s CGI had developed ProSteward360 for chemical firms, which is a "point solution for chemical and regulatory compliance." CGI released the IBOR program in 2011, which is a "public space smart control and management system" used to increase energy efficiency in areas such as street lighting. Using IBOR, CGI began working with SPIE Belgium in April 2015 on a project to "modernize the remote management of highway lighting within the Flemish Region". In July 2014, the firm IDC released a report naming CGI as a "major player" in worldwide enterprise asset management software for energy and water delivery utilities. Financial management and fraud detection CGI began working with CIFAS in April 2013 on a modernized version of the fraud detection CaseLink platform, which was released in September 2014 in the UK. The platform is used to visualize and analyze data from the National Fraud Database. CGI's HotScan software scans payments and customer data as a watchlist filter, and since 2005 it has been certified by the Society for Worldwide Interbank Financial Telecommunication (SWIFT) as a plugin for the SWIFT Alliance Add-on Label. CGI worked with the Commonwealth of Virginia to develop eVA, which is Virginia's electronic purchasing software. In April 2014, CGI worked with the UK Payments Council to create Paym, an app that allows customers of major banks in the UK to send and receive funds with their mobile phone. Australia and New Zealand Banking Group renewed its contract with CGI in October 2014 for software-as-a-service (SaaS), in an effort to use their banking platform to expand their international trade program with the CGI Trade360 service. The program was at that point being used by ANZ to allow trading in 28 countries. Government and Military In March 2015, CGI won a contract with the North Wales Police for close to $27.4 million, to provide the department with "managed ICT (information and communications technology) and associated business services." Also in March, the British Ministry of Defence (MOD) selected CGI to provide training needs analysis at the Defense College of Technical Training and to provide support for the MOD's Fire Control Battlefield Information System Application (FC BISA) and the Fire Control Application (FCA) systems. Health insurance marketplaces in the United States The Patient Protection and Affordable Care Act was signed into law in 2010, and called for the creation of health insurance marketplaces for US citizens. In 2011 CGI Federal won a $93.7 million contract with the United States Department of Health and Human Services to help establish the software back-end of a new federal health insurance marketplace. For the next two years the Centers for Medicare and Medicaid Services oversaw the website's design, outsourcing to 55 federal contractors such as Experian and Quality Software Services, Inc. CGI Federal subcontracted the back-end work to other companies, as is common on large government contracts, and was also responsible for building some of the state-level healthcare exchanges. The Obama administration repeatedly modified policies pertaining to the Patient Protection and Affordable Care Act until the summer of 2013, meaning contractors had to adapt the software to changing requirements and delay aspects of development. 
Following the public launch of HealthCare.gov on October 1, 2013, technical issues surfaced which prevented many users from logging in. As one of the primary contractors involved, CGI Federal came under scrutiny for the difficulties, with the Lexington Institute later concluding that "CGI Federal was raked over the coals in congressional hearings because it was responsible for the portal — even though many of the early problems with its part of the project were traceable to a front-end feature assembled by a different contractor for which CGI wasn't responsible." By the December 2013 deadline the HealthCare.gov problems had largely been fixed, and within several weeks enrollment via the federal website was at 1.1 million people. Analysis of the situation by journalists, government officials, and think tanks has varied, and the Government Accountability Office released a non-partisan study in July 2014 concluding that the Obama administration failed to provide "effective planning or oversight practices" during development. Other analysts argued that the Centers for Medicare and Medicaid Services (CMS) was ill-suited for a systems integration role, and that US regulations pertaining to large government contracts stifled agile software development. The state governments of Vermont and Massachusetts also contracted with CGI to work on their health insurance marketplaces, and both experienced difficulties with their launches in late 2013. CGI was also responsible for developing Hawaii Health Connector, and though the site did launch as planned on October 1, 2013, underlying technical issues prevented registered users from comparing and shopping for insurance plans until October 15, 2013. The Colorado health insurance marketplace, Connect for Health Colorado, which unlike the federal website had its development led by CGI, ran relatively smoothly, and as of May 6, 2014, it was announced that 129,000 Coloradans had signed up for commercial health insurance through the state's marketplace since it opened on October 1, 2013. CGI also developed a successful exchange website for Kentucky. Mobile transport services CGI is one of 34 members of MOBiNET, a European consortium that aims to introduce mobile transport services across Europe. CGI branched into the electric car market early in the industry's growth cycle, and by 2010 CGI's Charge-Point Interactive Management System (CiMS) was in use by organizations such as Foundation ElaadNL, which was using the open communication protocol to deploy electric car charging stations throughout the Netherlands. In July 2014, CGI began working with Volvo to provide "authentication certificate services" for each new Volvo car. CGI also developed the SIGMA program for ProRail. According to the organization Esri, "the application is based on ArcGIS for Servers and enables employees to manage the condition of the rail lines and combine design and measurement-data of the railroad track in multiple dimensions." As of 2013, CGI's train occupancy mobile app, iNStAPP, had won a number of industry awards, and was being used by several train companies and institutions in Europe. Other projects relating to passenger experience include a travel journey planner CGI developed for the city of Helsinki, Finland. As of 2014 the app was used by around 150,000 people daily. 
Space and aviation In July 2014, CGI's Space, Defence and National Security division in the UK was awarded a contract by the European Commission Directorate General for Enterprise and Industry (DG ENTR) to build the "core infrastructure" for the first demonstrator of the Galileo Commercial Service, which is a new service to be created as part of the European Global Navigation Satellite System (GNSS). The service is intended for satellite navigation, and "was introduced with the goal of creating a potential revenue source to support the future maintenance of EU satellite navigation services." Wrote GPS Daily, "once complete, the demonstrator will be made available to other GNSS service providers to test across vertical markets, including transport, insurance and personal mobility." As of 2014 CGI was also involved with developing software for the ESTCube-1, which is the first satellite in the world with electric solar wind sail (e-sail). A Mission Control System is currently being developed by the students of Tartu University under the supervision of CGI. In November 2014, CGI was awarded a new contract by the British satellite telecommunications company Inmarsat to "support data communications between the pilot and air traffic controller within the European airspace." Inmarsat brought CGI in to help develop the Iris Precursor, particularly "key safety and security features needed for future European air traffic management communications." Also, "CGI will develop ground-based gateways" that will allow the SwiftBroadband system and the Single European Sky ATM Research program to interface. In 2015, CGI's contract to provide IT services to Airbus was extended. See also :Category:CGI Group Canadian industrial research and development organizations Companies listed on the Toronto Stock Exchange (C) Companies listed on the New York Stock Exchange (C) Economy of Montreal List of companies of Canada S&P/TSX Composite Index References External links 1976 establishments in Quebec Call centre companies Canadian brands Canadian companies established in 1976 Companies based in Montreal Companies listed on the New York Stock Exchange Companies listed on the Toronto Stock Exchange Consulting firms established in 1976 Information technology companies of Canada Information technology consulting firms International information technology consulting firms Management consulting firms Outsourcing companies S&P/TSX 60 Technology companies established in 1976
2027223
https://en.wikipedia.org/wiki/Max%20Palevsky
Max Palevsky
Max Palevsky (July 24, 1924 – May 5, 2010) was an American art collector, venture capitalist, philanthropist, and computer technology pioneer. He was known as a member of the Malibu Mafia – a group of wealthy American Jewish men who donated money to liberal and progressive causes and politicians. Early life Palevsky was born in Chicago, Illinois, to Jewish immigrant parents – Izchok (Isadore) Palevsky (born May 10, 1890, in Pinsk, in the Brest Region of the Russian Empire [now in Belarus], died September 27, 1969, in Los Angeles), and Sarah Greenblatt (born May 16, 1894, died December 28, 1949, in Chicago). Izchok had arrived in Baltimore from Bremen, Germany, on the S.S. Brandenburg on March 18, 1910, while Sarah immigrated around 1916. Palevsky's parents spoke Yiddish fluently, but little English. His father, a house painter, did not have a car and had to use the Chicago streetcars to transport his equipment. The youngest of three children, Palevsky grew up at 1925½ Hancock Street in Chicago. His older brother, Harry (September 16, 1919 – September 17, 1990), was a physicist who helped develop the atomic bomb at Los Alamos National Laboratory; his sister, Helen (born 1920), married Melvin M. Futterman (December 28, 1918 – March 14, 1989). After graduating from public high school in Chicago, Palevsky volunteered for the US Army Air Corps as a weatherman during World War II and served from 1943 to 1946. For his training he went for a year to the University of Chicago for basic science and mathematics, and then to Yale University for electronics. He was then sent to New Guinea, which was the Air Force's central base for electronics in the South Pacific. After the war, the GI Bill made it financially feasible for Palevsky to earn a B.S. in mathematics and a B.Ph. in philosophy from the University of Chicago in 1948. Palevsky did graduate work in philosophy at UC Berkeley and the University of Chicago. Computers After attending and resigning from a doctorate program in philosophy at UCLA, where he had served as a teaching assistant in the philosophy department, Palevsky discovered computer technology through a lecture at Caltech by John von Neumann on the advent of computers. Palevsky began working in the computer industry in 1951 for $100 a week at Northrop Aircraft, building copies of the MADDIDA, a special-purpose computer intended to solve differential equations. The MADDIDA was designed by physicist Floyd Steele, who left Northrop in 1950, a year after the MADDIDA's completion. Palevsky worked to build copies of Steele's invention between March 1950 and January 1951. MADDIDA was priced from $25,000 to $30,000. MADDIDA would prove to be the last and most sophisticated dedicated differential analyzer ever built, since from then on all attention turned to electronic computers. Two years after Palevsky joined Northrop, the division was sold to Bendix Corporation. Palevsky worked at Bendix from 1952 to 1956 designing digital differential analyzers as a project engineer, working on the logic design for the company's first computer. In March 1956, Bendix offered its first digital computer, the Bendix G-15, described by some as the first personal computer (a claim that is widely disputed). Palevsky worked on the DA-1 differential analyzer option, which connected to the G-15 and resulted in a machine similar to the MADDIDA, using the G-15 to re-wire the inputs to the analyzer instead of the custom drums and wiring of the earlier machine. 
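The digital differential analyzers mentioned above solved equations by accumulating small increments rather than by evaluating formulas. The following Python sketch is only a rough, hypothetical illustration of that incremental-integration principle; it is not based on MADDIDA's actual magnetic-drum architecture, word formats, or wiring, and the function and parameter names are invented for the example.

```python
import math

def dda_integrate(y0, rate, dt, steps):
    """Crude digital-differential-analyzer-style integrator for dy/dt = rate * y.

    Instead of evaluating a closed-form formula, the value is advanced by
    repeatedly adding a small increment (the current derivative times dt),
    which is the incremental style of computation DDA-class machines used.
    """
    y = y0
    history = []
    for _ in range(steps):
        dy = rate * y * dt      # derivative scaled by the time step
        y += dy                 # accumulate the increment
        history.append(y)
    return history

if __name__ == "__main__":
    # Integrate dy/dt = -y from y(0) = 1 over one time unit.
    result = dda_integrate(y0=1.0, rate=-1.0, dt=0.001, steps=1000)
    print(f"incremental result: {result[-1]:.4f}")    # approximately 0.3677
    print(f"exact value e**-1:  {math.exp(-1):.4f}")  # 0.3679
```

In a real machine of this class, many such integrators were interconnected so that the output increments of one fed the inputs of others, allowing whole systems of differential equations to be solved.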
In March 1957, Palevsky went on to work at Packard Bell Corporation, at a new affiliate of the company that he started, called Packard Bell Computer Corp., in a store front at 11766 W Pico Boulevard in West Los Angeles. He was vice president and director of the new division. The new facility launched a research and development program in the digital computer field, with a staff of experienced engineers and skilled technicians to implement the new development. Palevsky convinced the company that they should enter the computer business and helped develop the first silicon computer, which became the PB 250, which was modestly successful. In April 1960, Packard-Bell Computer Corp. and Bailey Meter Co. signed an agreement for the exclusive application of PB250's in the control of power plants. As vice president and general manager of Packard Bell Computer, Palevsky supervised the building of a new building at 1935 Armacost Avenue to house the firm's expanding computer activities, for consolidation of computer and systems engineering and for needed expansion of systems as well as computer manufacturing facilities. Palevsky gave many lectures during this period, including at the second international meeting on analog computation at Strasbourg, France, in September 1958. Scientific Data Systems Palevsky felt that ten percent of the market of small to medium size scientific and process control computers was being totally neglected. He started looking for venture capital to start a company to address this market, and through contacts from the University of Chicago was able to raise $1 million from Arthur Rock and the Rosenwald family of the Sears Roebuck fortune. He left Packard Bell with eleven associates from the computer division to found Scientific Data Systems of California in September 1961. Within a year they introduced the SDS 910, which made the company profitable. Initially, it targeted scientific and medical computing markets. From 1962 to 1965, the company introduced seven computers, all of them commercial successes. On March 15, 1966, they introduced the Sigma 7, the first of a family of machines that marked the full-scale entry of the company into new areas of business data processing, time sharing, and multiprocessing. The Sigma 7 had business capabilities because the once-separate disciplines of business and scientific electronic data processing had developed to the point where one machine could handle both. SDS captured a little more than two per cent of the overall digital computer market in 1966 and continued to grow with the market. Palevsky sold SDS to Xerox in May 1969 for $920 million, with Arthur Rock's assistance, at which time he became a director and Chairman of the Executive Committee of Xerox Corporation. Palevsky's initial investment of $60,000 in SDS became nearly $100 million at the sale. He retired as a director of Xerox in May 1972. Political donor In 1972 Palevsky donated $319,000 to George McGovern, and in 1973 he managed Tom Bradley's first successful campaign for mayor of Los Angeles. He made numerous friends and allies on the California political scene, including former governor Gray Davis, and was elected to serve on Common Cause's National Governing Board in 1973. Many were dismayed at Palevsky's $1 million contribution in support of California Proposition 25, a campaign-finance reform initiative. 
He said to Newsweek: "I am making this million-dollar contribution in hopes that I will never again legally be allowed to write huge checks to California political candidates." Palevsky raised funds in 2007 to help Barack Obama with the 2008 United States presidential election. Arts, culture, and venture capital As a venture capitalist, Palevsky helped to fund many companies, including Intel, which grew to become one of the nation's leading semiconductor companies and a pioneer in the development of memory chips and microprocessors. Palevsky became a director along with Arthur Rock, who helped bankroll SDS, at the company's founding, on July 18, 1968, as NM Electronics Corporation, a name later changed to Intel (August 6, 1968). Intel was funded with $2 million in venture capital assembled by Arthur Rock. Palevsky became a director emeritus in February 1998. Palevsky also became a director and chairman of Rolling Stone, which he rescued from financial ruin in 1970 by buying a substantial share of the stock. While on the board he became friends with the late Hunter S. Thompson, inventor of what came to be called Gonzo journalism. In December 1970, Cinema V, a movie-theater distribution operation, entered film production in a joint venture, Cinema X, with Palevsky. Palevsky went into independent production with Peter Bart, former production vice president of Paramount Pictures in November 1973, with a Paramount contract to produce six features in three years. Palevsky produced and bankrolled several Hollywood films, including Fun with Dick and Jane and Islands in the Stream both with Peter Bart in 1977, and Endurance in 1998. Author Albert Goldman dedicated his controversial 1988 biography The Lives of John Lennon to Palevsky. In June 1977, Palevsky was elected to the board of the American Ballet Theater. Palevsky also served as a director and Chairman of the Board of Silicon Systems Inc. of Tustin, California, from April 1983 until February 1984; as chairman and chief executive of the board of Daisy Systems Corporation, a maker of computer systems used to design electronic circuits based in Mountain View, California; and, from November 1984 to 1999, as a director of Komag Corp., a Milpitas, California, based maker of data storage media. Palevsky also collected art, particularly Japanese woodblock prints, and gave generously to establish and maintain institutions of visual art. He established the Palevsky Design Pavilion at the Israel Museum in Jerusalem. He also built an Arts & Crafts collection at the Los Angeles County Museum of Art (LACMA), and donated $1 million to help establish the Los Angeles Museum of Contemporary Art. In 2001, he promised his art holdings to LACMA, but his collection of 250 works was scheduled to be sold by Christie's in the Fall of 2010. Max Palevsky funded the American Cinematheque's refurbishment of the Aero Theater in Santa Monica. The theater re-opened in January 2005 and bears his name. The University of Chicago Palevsky served as a trustee at his alma mater from 1972 to 1982. He established the Palevsky Professorship in History and Civilization in 1972 and the Palevsky Faculty Fund in 1996. In 2000, Palevsky donated $20 million to his alma mater to enhance residential life. In 2001, the University completed construction on three large colorful dorms that are connected through tunnels and bear his name. 
A one-screen cinema at the University is also named after him, and is the home of Doc Films, the oldest continuously running student film association in the United States. Personal life Palevsky was married six times and divorced five. He had five children. He was married to his first wife, Mary Joan Yates (Joan Palevsky), from 1952 to 1968. With her huge divorce settlement, the largest at that time in California, she became a renowned philanthropist. With Max, she had two children, Madeleine and Nicholas Palevsky. Joan died in 2006. His second wife was Sara Jane Brown, whom he married on September 6, 1969. In November 1972, he married Lynda L. Edelstein, his third wife, the mother of his sons, Alexander and Jonathan Palevsky. Jodie Evans, his fourth and sixth wife and widow, is a political activist and mother of Matthew Palevsky. Palevsky owned homes notable for their architecture, furniture, and art collections. Three California Houses: The Homes of Max Palevsky featured architecture and design by Ettore Sottsass of the Memphis group, Craig Ellwood, George Washington Smith, and Coy Howard. In 1985 and 1988, Palevsky was named to the Forbes 400 list of wealthiest Americans. His estimated worth for those years was $600 million (1985) and $640 million (1988). Palevsky died at the age of 85 of heart failure on May 5, 2010, at his home in Beverly Hills, California. Notes References Reilly, Edwin D. (2003). Milestones in Computer and Science History, Greenwood Publishing Group. Further reading "Enter Max Palevsky", Time, Friday, February 24, 1967 External links 1924 births 2010 deaths American billionaires Philanthropists from Illinois Businesspeople from Chicago University of Chicago alumni University of Chicago trustees Bendix Corporation people American people of Belarusian-Jewish descent Jewish American philanthropists American art collectors American venture capitalists 20th-century American businesspeople 21st-century American Jews
16013460
https://en.wikipedia.org/wiki/National%20Program%20Office
National Program Office
The National Program Office (NPO) was an office of the United States Government, established to ensure continuity of government in the event of a national disaster. The NPO was established by a secret executive order (National Security Decision Directive 55) signed on 14 September 1982 by President Ronald Reagan during the Cold War in preparation for a nuclear war, presumably with the Soviet Union. The NPO plan was classified as Top Secret, codeword Pegasus. It was also referred to as Project 908 (also known as "Nine Naught Eight"). The only oversight was by a Project Pegasus committee chaired by then-Vice President George Herbert Walker Bush. The committee included The Chairman of the Joint Chiefs of Staff (or his deputy), FBI Director William H. Webster, Attorney General Edwin Meese III and other top cabinet officials. The action officer for the project was Marine Lieutenant Colonel Oliver North, who then worked at the National Security Council under retired Marine Lieutenant Colonel Robert McFarlane. Background On June 30, 1980, President Jimmy Carter signed Presidential Directive 58 (PD-58), which directed the establishment of a Joint Program Office to provide Continuity of Government for the Presidency. Organization Disaster preparedness The FBI played a critical role in Project 908: selection and analysis of locations throughout the United States for use during and after a crisis. Agreements were made with various businesses for leasing of space and resources (i.e. power and water) for use by the U.S. government during the crisis period. Survivable communications Most of the money was used to design and build relocatable communications vans that would be activated if there was a threat of nuclear war. The rationale for relocatable vans was that the National Military Command Center (NMCC) at the Pentagon and the Alternate National Military Command Center (ANMCC) located in the Raven Rock Mountain Complex were already targeted by the Soviet Union and therefore would not survive a nuclear strike. The same criticism could not be leveled at the Boeing E-4 aircraft that made up the National Emergency Airborne Command Posts (NEACP), but the plan for relocatable communications vans went forward nevertheless. The government agency that was the strongest advocate for relocatable vans was the Defense Communications Agency (DCA), since renamed the Defense Information Systems Agency (DISA), whose responsibility it was to plan for continuity of military communications despite the possible loss of both land and satellite-based links. This necessitated the development of alternatives that would be independent of both landlines and land-based radio systems, and also satellites, all of which were thought to be possibly subject to destruction or impairment in an all-out war. The alternatives needed to be capable of being relocated, perhaps frequently, and set up rapidly in a new location when necessary. Since the facts of nuclear warfare also seemed to indicate that High Frequency (HF) Radio propagation might be disturbed by unfamiliar nuclear effects, this led to the consideration of exotic technologies such as troposcatter and meteor burst communication links. Such systems, while effective, used relatively small antennas and could indeed be transported efficiently and economically in relocatable vans. Facilities The Federal Reserve established Mount Pony under the NPO where billions of dollars in currency was stored in a hardened bunker. 
The cash was to be used to restart the economy east of the Mississippi River in case of a nuclear war. The facility also housed the central switching center for the Federal Reserve's Fedwire system until 1988, when all money was removed, switching was decentralized, and the site was deactivated as an NPO facility. Cover The NPO was organized in the mid-1980s under a retired Army Lieutenant General, and funded with an initial amount of $2.7 billion in so-called black money. The NPO set up offices at 400 Army-Navy Drive in the Crystal City section of Arlington, Virginia. The NPO recruited communications specialists and retired military officers to do staff work. It was known as the Defense Mobilization Systems Planning Activity (DMSPA), a cover organization. A special security compartment named CHALIS was established for classified documents, which were distributed with a yellow stripe down the right border. Disestablishment President Bill Clinton attempted to dismantle the NPO during his tenure in the White House; he cancelled Project 908 and declassified it. However, those efforts proved incomplete when the legacy NPO plan for Continuity of Government was briefly activated by President George W. Bush on September 11, 2001, in response to the terrorist attacks on New York City and Washington, DC. The relocatable communications vans that had already been built were put under the command of the U.S. Army's 11th Signal Brigade at Fort Huachuca, Arizona. Similarly equipped trucks, called Multi-Radio Vans, are presently within the inventory of the Federal Emergency Management Agency (FEMA). Military counterpart The military analog was the Strategic Air Command's (SAC) Headquarters Emergency Relocation Team (HERT). Later evolving into the 55th Mobile Command and Control Squadron, the unit's purpose was to provide command and control to United States nuclear forces in the event of a national emergency (i.e. nuclear war) and the relocation or destruction of SAC Headquarters at Offutt AFB, Nebraska. See also National Audio-Visual Conservation Center at Mount Pony, Culpeper, Virginia Headquarters Emergency Relocation Team 55th Mobile Command and Control Squadron 153d Mobile Command and Control Squadron References External links 'Shadow Government' News to Congress National Program Office - Continuity of Government Project 908 on the Internet Archive Disaster preparedness in the United States Cold War history of the United States Presidency of Ronald Reagan Continuity of government in the United States
22223245
https://en.wikipedia.org/wiki/PathEngine
PathEngine
PathEngine is a software company as well as an advanced pathfinding software development kit (SDK), created under the leadership of Thomas Young. The company was founded after Young left Infogrames Sheffield, and the first commercial version of the software was offered in 2002. The software uses a technique called points-of-visibility pathfinding, in which the agent takes into account dynamic obstacles and agent shape when navigating between points. Over time the software has been optimized, with rapid bugfixes, and has been made to support platforms such as the Xbox 360. PathEngine has been used in games such as Titan Quest, among others. History The first version of the PathEngine SDK was released in early 2002. By the end of 2005, over 50 finished products had been released that used PathEngine. Features PathEngine supports personal computers running Microsoft Windows, Linux and FreeBSD, as well as the Xbox 360 and PlayStation 3 game consoles. PathEngine implements pathfinding and agent movement in three-dimensional environments with dynamic obstacles, including dynamic obstacle avoidance and automation of content processing. The technology has been applied to some very large and detailed worlds, and includes special optimization for terrain-like surfaces (or other surfaces that combine detailed obstacles with a good overview and large open spaces). License terms PathEngine is a commercial software product created solely for the purpose of being licensed by third parties. There are three types of licenses for the PathEngine SDK, each of which differs in price and level of access to the source code. In addition, each license may differ depending on which and how many platforms the final product will be released on. References External links PathEngine official website Software companies of France Video game engines Software development kits
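To give a rough idea of the points-of-visibility approach described above, the following Python sketch builds a simple 2D visibility graph from obstacle corners and searches it for a shortest path. It is a generic illustration under simplifying assumptions, not PathEngine code or its API: the function names and the obstacle data are hypothetical, and PathEngine's handling of agent shape and dynamic obstacles is considerably more sophisticated.

```python
import heapq
import math

def segments_cross(p1, p2, q1, q2):
    """True if segments p1-p2 and q1-q2 properly intersect."""
    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    d1, d2 = orient(q1, q2, p1), orient(q1, q2, p2)
    d3, d4 = orient(p1, p2, q1), orient(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible(a, b, obstacle_edges):
    """Two points 'see' each other if the line between them crosses no obstacle edge."""
    return not any(segments_cross(a, b, e1, e2) for e1, e2 in obstacle_edges)

def shortest_path(start, goal, waypoints, obstacle_edges):
    """Dijkstra over the visibility graph formed by start, goal and corner waypoints."""
    nodes = [start, goal] + waypoints
    dist, prev = {start: 0.0}, {}
    queue = [(0.0, start)]
    while queue:
        d, u = heapq.heappop(queue)
        if u == goal:
            break
        if d > dist.get(u, math.inf):
            continue                      # stale queue entry
        for v in nodes:
            if v == u or not visible(u, v, obstacle_edges):
                continue
            nd = d + math.dist(u, v)
            if nd < dist.get(v, math.inf):
                dist[v], prev[v] = nd, u
                heapq.heappush(queue, (nd, v))
    path, node = [], goal
    while node in prev:                   # walk predecessors back to the start
        path.append(node)
        node = prev[node]
    return [start] + path[::-1] if path or start == goal else []

if __name__ == "__main__":
    # One square obstacle between start and goal; its corners, placed slightly
    # outside the obstacle, serve as the candidate waypoints.
    square = [(4, 3), (7, 3), (7, 6), (4, 6)]
    edges = list(zip(square, square[1:] + square[:1]))
    corners = [(3.8, 2.8), (7.2, 2.8), (7.2, 6.2), (3.8, 6.2)]
    print(shortest_path((0, 0), (10, 10), corners, edges))
```

In this sketch, agent shape is only approximated by placing the waypoints slightly outside the obstacle corners; a production pathfinder would compute that expansion from the agent's actual shape.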
45019150
https://en.wikipedia.org/wiki/Enterprise%20Storage%20OS
Enterprise Storage OS
Enterprise Storage OS, also known as ESOS, is a Linux distribution that serves as a block-level storage server in a storage area network (SAN). ESOS is composed of the open-source software projects required for a Linux distribution, along with several proprietary build- and install-time options. The SCST project is the core component of ESOS; it provides the back-end storage functionality. Platform ESOS is a niche Linux distribution. ESOS is intended to run on a USB flash drive, or some other type of removable media, such as Secure Digital or CompactFlash. ESOS is a memory-resident operating system: at boot, a tmpfs file system is initialized as the root file system and the USB flash drive image is copied onto this file system. Configuration files and logs are written back to the USB flash drive (persistent storage) periodically, or by user intervention when configuration changes occur. Interface ESOS utilizes a text-based user interface (TUI) for system management, network configuration, and storage provisioning functions. The TUI used in ESOS is written in C; the ncurses and CDK libraries are used. Front-end connectivity ESOS supports connectivity on several different front-end storage area network technologies. These core functions are supported by SCST and third-party target drivers that vendors have developed for SCST: Fibre Channel: QLogic HBAs are natively supported, and Emulex OneConnect FC HBAs can be supported by a build-time option (requiring the Emulex OCS SDK) InfiniBand: Mellanox, QLogic, and Chelsio IB HCAs, among others, are supported Fibre Channel over Ethernet (FCoE): A software target implementation supports NICs with DCB/DCBX capabilities, or build-time options exist for supporting Emulex OneConnect FCoE CNAs (requires the Emulex OCS SDK) and Chelsio Uwire FCoE CNAs. iSCSI: works over any IP communication method supported by ESOS (Ethernet, IPoIB). Back-end storage Open-source software projects and commodity computing server hardware are used on the back-end side to provide the underlying storage utilized by the front-end target interfaces: Btrfs, XFS, and ext4 are all supported file systems for virtual disk files used with the "vdisk_fileio" device handler. Popular, modern hardware RAID controllers from LSI, Adaptec, HP, and Areca are also supported in ESOS, including install-time CLI tool integration for these adapters. Clustering and high-availability support is made possible by the Pacemaker and Corosync cluster software stack. DRBD is fully supported to facilitate replication between ESOS storage servers, and/or to create redundant ESOS storage server clusters. Virtual Tape Library (VTL) support is provided by the mhVTL project. Three SSD caching solutions are supported: EnhanceIO, bcache, and dm-cache (lvmcache). Other block storage functions include automated tiered storage via the BTIER project and Ceph RBD mapping. Installation ESOS differs from popular Linux distributions in that there is no bootable ISO image provided. ESOS consists of one archive file that is extracted on a local computer running a supported operating system (Linux, Windows, or Mac OS X). The local computer is only used for installing the ESOS image to a USB flash drive (or other removable media device). Users of ESOS extract the archive and execute the ESOS install script. The ESOS installer script prompts the user for the installation target device, writes the image, and allows users to integrate proprietary CLI RAID controller utilities into the ESOS USB flash drive. 
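As a rough illustration of the memory-resident design described above, the following Python sketch mimics the idea of running from an in-memory root file system while periodically writing configuration changes back to the USB flash drive. It is a conceptual sketch only, not ESOS code: the paths, sync interval, and function names are hypothetical assumptions, and ESOS itself implements this behaviour with its own scripts and mount layout.

```python
import filecmp
import shutil
import time
from pathlib import Path

# Hypothetical locations: the live (tmpfs) configuration tree and the
# mounted USB flash drive that provides persistent storage.
LIVE_CONF = Path("/etc")                 # part of the in-memory root file system
PERSISTENT_CONF = Path("/mnt/usb/conf")  # USB flash drive (persistent storage)
SYNC_INTERVAL = 300                      # seconds between periodic syncs

def sync_config(live: Path, persistent: Path) -> int:
    """Copy changed or new files from the in-memory tree to persistent storage.

    Returns the number of files written, so callers can log what was saved.
    """
    persistent.mkdir(parents=True, exist_ok=True)
    written = 0
    for src in live.rglob("*"):
        if not src.is_file():
            continue
        dst = persistent / src.relative_to(live)
        if dst.exists() and filecmp.cmp(src, dst, shallow=False):
            continue                         # unchanged, skip
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)               # preserve timestamps and permissions
        written += 1
    return written

if __name__ == "__main__":
    # Periodic loop; the same routine could also be invoked on demand,
    # corresponding to the user-triggered saves after configuration changes.
    while True:
        count = sync_config(LIVE_CONF, PERSISTENT_CONF)
        print(f"synced {count} changed file(s) to {PERSISTENT_CONF}")
        time.sleep(SYNC_INTERVAL)
```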
License change On January 16, 2019 (commit bfb8c55), the license of the ESOS project was changed from the GNU General Public License (GPL) to the Apache License, Version 2.0. References External links Official Website SCST Project Generic SCSI Target Subsystem for Linux X86-64 Linux distributions Free software Storage software Linux distributions
44972133
https://en.wikipedia.org/wiki/Market%20Dojo
Market Dojo
Market Dojo is an e-Procurement software company based in Stonehouse, England. The company was founded in 2010 by Nick Drewe, Alun Rafique and Nic Martin. Alun Rafique previously worked at Rolls-Royce before meeting Nick Drewe at Vendigital, whilst Nic Martin came from Attensity. All three co-founders studied at Bristol University. The company's competitors include Ariba, Curtis Fitch and Scanmarket, amongst others. Technology Market Dojo's software is based on the Ruby on Rails (RoR) platform, the same used by other e-Procurement solutions such as Coupa. Having previously been hosted with Liberata, the software now runs on Google Cloud Platform (GCP). Market Dojo's offerings include eSourcing, Opportunity Analysis, Project Management, Supplier Onboarding, SPM & SRM, Contracts Management and Savings Tracking. Funding In 2011, Market Dojo was awarded its first grant by the Technology Strategy Board, in the amount of £25,036 towards an overall project cost of £55,635. The deliverable from the project was to evolve the functionality of the Market Dojo software in order to broaden its appeal beyond the private sector and provide a low-cost, EU-compliant and fully auditable tool that could assist public sector organisations across all member states of the European Union. In 2012, Market Dojo was awarded a second grant from the Technology Strategy Board, this time of £24,949 towards an overall project cost of £55,443. This project was to develop an innovative procurement web application that would provide spend category insight, strategy and opportunity assessment for public and private sector procurement teams. In 2013, Market Dojo was awarded its third grant, by the West of England Local Enterprise Partnership, in the amount of £25,000 towards their research and development. In 2018, Market Dojo received significant investment from a large sovereign wealth fund. The investment has enabled the company's continued growth in staff, as well as the implementation of major new releases and software improvements. Products Market Dojo have released seven individually branded products: Market Dojo: their flagship eSourcing tool to enable organisations to negotiate with suppliers online. Innovation Dojo: released on the back of their 2011 Technology Strategy Board grant, it was designed to help buyers and suppliers collaborate on ideas for improving the supply chain. Category Dojo: the result of the 2012 Technology Strategy Board grant, it helps organisations better understand and prioritise their negotiation opportunities. SIM Dojo: the supplier onboarding tool, helping organisations to centrally manage supplier information and reduce administration costs while enabling accountability and auditability. Quick Quotes: a simpler version of the sourcing tool for smaller procurement needs Contract Dojo: a contract repository for maintaining documents from suppliers and ensuring that KPIs are being met SRM Dojo: a Supplier Relationship Management tool generating a database of suppliers, with a holistic view of all their data, and allowing for task management Academia and Professional Relationships Market Dojo actively collaborate with academia, forging strong ties with the University of Greenwich and the University of the West of England, where they annually give lectures on eAuctions to MBA and Business School students. Market Dojo have also given talks at events for the professional industry body CIPS on numerous occasions.
Lastly, Market Dojo is an approved supplier on the UK Government Crown Commercial Service G-Cloud framework for providing eSourcing software as a service. References External links Market Dojo's website Market Dojo's blog British companies established in 2010 Companies based in Bristol 2010 establishments in England
11955544
https://en.wikipedia.org/wiki/Trojan%20Oaks%20Golf%20Course
Trojan Oaks Golf Course
Trojan Oaks Golf Course was a 9-hole championship golf course on the campus of Troy University. It was for use by the general public, the golf team, and students. The Trojan Oaks was from the longest tee. The par for the course was 36, with a course rating of 35.5 and a slope rating of 125. The greens and fairways were both Bermuda grass. The course was built over two years and opened in 1977 under the supervision of Chancellor Ralph Wyatt Adams. The course was closed in March 2010 to make way for a new basketball arena on the grounds. The course did not attract a great deal of players from outside the county, other than faculty and students of the university. This was in part due to the presence of a course on the well-known Robert Trent Jones Golf Trail in nearby Montgomery. The remaining parts of the course left after the construction of Trojan Arena have been converted into what is now called the Troy Golf Practice Course. The $1.5 million renovation of the course was completed in 2013. It uses 40 acres of the original Trojan Oaks Golf Course and created a 9-hole, par-34 practice course plus putting and chipping greens, a wedge practice area, a full driving range, and a new golf clubhouse. The golf clubhouse has coaches' offices and an indoor/outdoor lounge area. The course's hitting bays feature FlightScope technology for swing analysis, Sam PuttLab for putting analysis, and BodiTrak monitors to measure the shifting of a player's body weight during the swing. It is the only course of its kind in the Sun Belt Conference. The Course Note: All tee distances are from the men's (white) tee box Hole 1 - (Par 4) The hole is long and doglegs about 80 degrees to the right away from the middle tee. There are trees to the right and the ninth hole is to the left. Hole 2 - (Par 3) The hole is long and rises in elevation approximately from tee to pin. There is a creek/marsh down a hill starting after the tee and ending in front of the pin, at a steep incline down and up, respectively. Hole 3 - (Par 3) The hole is long and has a relatively flat fairway between the tee box and pin. There is a large sand trap that wraps around the entire left of the green and a sharp downward slope to the right. Hole 4 - (Par 5) The hole is long and plays very straight. The hole is sometimes referred to as "The Driving Range" by local students due to the straightness and breadth of the fairway. The tee box is next to a lake that does not come into play, and the fairway is about long, starting after the box and ending before the pin. The green is separated from the fairway by a ravine approximately wide and deep. Hole 5 - (Par 5) The hole is long and is one continuous, sweeping curve to the right. The hole is bordered on its right by woods and, up a hill, the sixth hole, and on the left by the fifth hole, which is down a hill. There is a lake about wide about away, and over the lake lies the final fairway and green. The final fairway slopes up from the lake to the green. Hole 6 - (Par 4) The hole is away. There is a 45-degree dogleg to the left about from the pin, with trees blocking a view of the pin. To the right of the hole is a sharp downhill slope to the 4th hole fairway and 5th hole tee box. After the dogleg the hole is straight, with several bunkers around the green. Hole 7 - (Par 3) The hole is long and plays shorter than that. The pin is roughly lower in elevation than the tee box, with fairway from the tee box to the green. One bunker comes into play in the front right of the green.
Hole 8 - (Par 4) The hole is long and is the longest par 4 on the course. The tee box looks over a large valley, about wide, that is in between the box and the pin and has a swampy area covering half of it. A drive of at least is required to clear the swamp. Two large trees line either side of the fairway, and the green ends up being at roughly the same elevation as the tee box. Most locals consider this the toughest green to putt a long distance on. Hole 9 - (Par 5) The hole is away, with a 30-degree dogleg to the right about down the fairway. The fairway is slim in most places (less than wide), with the Troy University track and field facility lining the right side and the first hole lining the left side. The hole is level, without a single bunker, and the green is the largest on the course. Gallery References Golf clubs and courses in Alabama Buildings and structures in Pike County, Alabama College golf clubs and courses in the United States Troy University
1319970
https://en.wikipedia.org/wiki/NetFront
NetFront
NetFront Browser is a mobile browser developed by Access Company of Japan. The first version shipped in 1995. Access currently offers several browser variants, both Chromium-based and WebKit-based. Over its lifetime, various versions of NetFront have been deployed on mobile phones, multifunction printers, digital TVs, set-top boxes, PDAs, web phones, game consoles, e-mail terminals, automobile telematics systems, and other device types. This has included Sony PlayStation consoles and several Nintendo consoles. Platforms For Pocket PC devices, the browser converted web page tables to a vertical display, eliminating the need to scroll horizontally. The Nintendo 3DS Internet browser uses the WebKit-based NetFront Browser NX, according to the documentation included with the browser. The PlayStation 3 Internet web browser received a major upgrade with firmware version 4.10, upgrading to a custom version of the NetFront browser, adding limited HTML5 support and improved JavaScript speeds. The Wii U console is also equipped with NetFront NX, and GPL source code is available. The Amazon Kindle e-reader uses NetFront as its web browser. The Nintendo Switch also uses NetFront NX. Performance NetFront 3.5 had an Acid3 score of 11/100, and NetFront Browser NX v1.0 had an Acid3 score of 92/100. See also Internet Browser (Nintendo 3DS) References 1995 software Android (operating system) software Cross-platform software Mobile web browsers Palm OS software Pocket PC software Software based on WebKit Symbian software Windows Mobile software
2383524
https://en.wikipedia.org/wiki/B.%20Kevin%20Turner
B. Kevin Turner
B. Kevin Turner (born April 3, 1965) is an American businessman and investor who is currently the chairman of Zayo Group and the vice chairman of Albertsons/Safeway Inc. Turner was most recently president and CEO of Core Scientific, a technology company specializing in blockchain and artificial intelligence, from 2018 to 2021. He previously served as the COO of Microsoft from 2005 to 2016. Prior to joining Microsoft, Turner was the CEO of Sam's Club and the CIO of Walmart. He is also the former vice chairman of Citadel LLC and former CEO of Citadel Securities. As the chief operating officer of Microsoft, Turner was responsible for the strategic and operational leadership of Microsoft's worldwide sales, field marketing and services organization. He also managed support and partner channels, Microsoft stores, and corporate support functions including information technology, licensing and pricing, and operations. His organization included over 51,000 employees in more than 190 countries. Early life and education Turner grew up in Stratford, Oklahoma. In 1987, Turner earned a Bachelor of Science in business administration with a concentration in management from East Central University in Ada, Oklahoma, where he was a member of the Pi Kappa Alpha fraternity. During his college years, he worked full-time as a cashier at Walmart. Career Early career at Walmart (1985–2000) Turner worked nearly 20 years at Walmart. He began working as a cashier at Walmart in 1985 in his hometown of Ada, Oklahoma. While attending college, he rose through the store ranks to customer service manager, housewares department manager and head office cashier. After several promotions, Turner moved into the auditing department, where he came into contact with Sam Walton. On Walton's advice, Turner joined the company's information systems division, where he worked his way through a succession of jobs: business analyst, strategy manager, director, and then assistant CIO. In 1995, at the age of 29, Turner became the youngest corporate vice president and officer ever named at Walmart. In 1997, Turner became the recipient of the first "Sam M. Walton – Entrepreneur of the Year" award, which is the highest honor given at Walmart and is voted on by the Walton family. CIO of Walmart (2000–2002) In February 2000, Turner became the chief information officer of Walmart at the age of 34, when former Walmart CIO Randy Mott departed for Dell. Turner had previously been the assistant CIO under Mott. As the CIO of Walmart, Turner oversaw Walmart's information technology and worldwide data-tracking system. The division consisted of over 2,000 employees in Bentonville, Arkansas. He led the team that developed retail-specific applications such as Retail Link at Walmart. During his tenure, Turner was one of the world's largest corporate buyers of technology and directed the technology strategy of a company renowned for its deft use of computing to streamline everything from global procurement to neighborhood shopping trends. CEO of Sam's Club (2002–2005) In 2002, Turner replaced Tom Grimm as the president and chief executive officer of the Walmart-owned retailer Sam's Club, which had over 46 million members and over US$37.1 billion in annual sales. In addition to his role at Sam's Club, he was also a member of the executive committee at Walmart. Under Turner, Sam's Club focused on lowering prices to win over small-business customers.
In his last fiscal year as CEO, Sam's Club turned in 5.8 percent sales growth at stores open at least a year, nearly double the 2.9 percent sales growth at U.S. Walmart stores. During his tenure as CEO, Turner improved the performance of the warehouse clubs and closed the gap with Costco. Turner was the president and CEO of Sam's Club until he left for Microsoft in 2005. After his departure for Microsoft, Sam's Club named Doug McMillon as its CEO. COO of Microsoft (2005–2016) In 2005, Turner was approached by Microsoft co-founder Bill Gates and CEO Steve Ballmer about overseeing the company's worldwide sales, marketing, services, and internal IT operations organization. He had previously worked with Gates and Ballmer during his time as CIO of Walmart. Turner accepted the offer and moved with his wife and three children to Washington State, where, in September 2005, he became the chief operating officer of Microsoft (the previous COO, Rick Belluzzo, had left the company in 2002 and no replacement had been hired). From 2005 to 2016, Turner was responsible for the strategic and operational leadership of Microsoft's worldwide sales, field marketing and services organization. He also managed support and partner channels, Microsoft stores, and corporate support functions including information technology, licensing and pricing, and operations. His organization included over 51,000 employees in more than 190 countries. In 2009, Turner started Microsoft's entry into the retail stores business. Along with Steve Ballmer, Satya Nadella and other senior executives, Turner was on the Senior Leadership Team that set the overall strategy and direction for Microsoft. As COO, Turner introduced procedures such as a "conditions of satisfaction" document that details what Microsoft will provide each client. A mistake required a "correction of errors", in which employees dissected the error and laid out steps to ensure it did not happen again. He also created standard scorecards with 30 categories to measure each subsidiary's performance. At Microsoft, Turner was known for his speeches at partner and sales events that amped up the rivalry with competitors like Oracle, Google and IBM. When Steve Ballmer announced he was stepping down as CEO, Turner was one of three internal candidates on the CEO short-list, but ultimately lost the job to Satya Nadella. In July 2016, after eleven years as COO, Turner left Microsoft to join Citadel LLC. From 2005 to 2016, Turner helped increase Microsoft's yearly revenue from $37 billion to over $93 billion. After his departure, Microsoft CEO Satya Nadella stated that in his time as COO, Turner "built the sales force into the strategic asset it is today with incredible talent, while at the same time more than doubling our revenue and driving customer satisfaction scores to the highest in company history." After his departure for Citadel LLC, Turner's responsibilities were split across five Microsoft executives: Jean-Philippe Courtois, Amy Hood, Chris Capossela, Kurt DelBene and Judson Althoff. Citadel LLC (2016–2017) In July 2016, Turner left Microsoft to become the vice chairman of Citadel LLC and the chief executive officer of Citadel Securities. Citadel Securities is a market maker, providing liquidity and trade execution to retail and institutional clients. Turner's team included Jamil Nazarali, head of Citadel Execution Services, and Paul Hamill, global head of fixed income, currencies and commodities for Citadel Securities.
His appointment occurred after Citadel Securities purchased the designated market-maker business of KCG Holdings and the Automated Trading Desk, a computer-based market-making pioneer owned by Citigroup. On January 27, 2017, Turner left his position at Citadel Securities. President and CEO of Core Scientific (2018–2021) In 2018, Turner was appointed president and CEO of Core Scientific, an artificial intelligence, blockchain, transaction processing and application development company headquartered in Bellevue, Washington. The company was co-founded by former Myspace CTO and co-founder Aber Whitcomb. Under his leadership, Core Scientific became the largest blockchain hosting and digital mining company in North America. Turner stepped down from the role of president and CEO in May 2021. Boards and other roles From 2010 to 2020, Turner served on Nordstrom's board of directors. He was on the technology and finance committees as a part of his board role. In May 2020, Turner decided not to stand for reelection to his board seat. In 2017, Albertsons/Safeway appointed Turner as vice chairman of the board of managers of AB Acquisition, its direct parent company. He was also named senior advisor to Albertsons chairman and CEO Robert G. Miller. Alongside Brandin Cohen, Andy Cohen and Lewis Wolff, Turner invested in and served on the board of directors of Liquid IV, a California-based health-science nutrition and wellness company. The brand sells products for sleep, energy and hydration. In September 2020, Liquid IV was acquired by Unilever, a British-Dutch multinational consumer goods company. In 2020, Turner was named chairman of the board of directors at Zayo Group. Prior to his appointment, Zayo Group had been acquired by the global investment firms EQT Partners and Digital Colony in a deal valued at $14.3 billion. Awards and honors Turner was ranked #4 on Fortune magazine's "40 Under 40" in 2003. He was among Time magazine's People To Watch In International Business in 2002. In 1997, Turner became the recipient of the first "Sam M. Walton – Entrepreneur of the Year" Award, which is the highest honor given at Walmart and is voted on by the Walton family. In 2003, East Central University named him a distinguished alumnus. CIO magazine awarded Turner the 20/20 Vision Award and the CIO 100 Award, and named him to the CIO Hall of Fame in 2007. CRN magazine listed Turner as one of the Top 25 Most Innovative Executives. Business 2.0 named Turner as one of the 20 Young Execs You Need To Know. Personal life Turner lives with his wife, Shelley, in Jackson Hole, Wyoming. They have three children. References Microsoft employees 1965 births Living people People from Ada, Oklahoma American chief operating officers East Central University alumni Chief information officers Businesspeople from Oklahoma Walmart people Safeway Inc. 20th-century American businesspeople 21st-century American businesspeople
517501
https://en.wikipedia.org/wiki/Manic%20Miner
Manic Miner
Manic Miner is a platform video game originally written for the ZX Spectrum by Matthew Smith and released by Bug-Byte in 1983 (later re-released by Software Projects). It is the first game in the Miner Willy series and among the early titles in the platform game genre. The game was inspired by the Atari 8-bit family game Miner 2049er. It is considered one of the most influential platform games of all time and has been ported to numerous home computers, video game consoles and mobile phones. Original artwork was created by Les Harvey. Later Software Projects artwork was supplied by Roger Tissyman. Gameplay At the time, its stand-out features included in-game music and sound effects, high replay value, and colourful graphics, which were well designed for the graphical limitations of the ZX Spectrum. The Spectrum's video display allowed the background and foreground colours to be exchanged automatically without software attention, and the "animated" load screen appears to swap the words Manic and Miner through manipulation of this feature. On the Spectrum this was the first game with in-game music, the playing of which required constant CPU attention and had previously been thought impossible. It was achieved by constantly alternating CPU time between the music and the game, which results in the music's stuttery rhythm. The in-game music is In the Hall of the Mountain King from Edvard Grieg's music to Henrik Ibsen's play Peer Gynt. The music that plays during the title screen is an arrangement of The Blue Danube. Objective In each of the twenty caverns, each one screen in size, are several flashing objects, which the player must collect before Willy's oxygen supply runs out. Once the player has collected the objects in one cavern, they must then go to the now-flashing portal, which will take them to the next cavern. The player must avoid enemies, listed in the cassette inlay as "...Poisonous Pansies, Spiders, Slime, and worst of all, Manic Mining Robots..." which move along predefined paths at constant speeds. Willy can also be killed by falling too far, so players must time jumps and other movements precisely to prevent such falls or collisions with the enemies. Extra lives are gained every 10,000 points, and the game ends when the player has no lives left. Above the final portal is a garden. To the right is a house with a white picket fence and a red car parked in front. To the left is a slope leading to a backyard with a pond and a tree; a white animal, resembling a cat or mouse, watches the sun set behind the pond. Once Willy gains his freedom, the game restarts from the first level with no increase in difficulty. Version differences There are some differences between the Bug-Byte and Software Projects versions. The scroll-text during the attract mode is different, to reflect the new copyright, and there are also several other cosmetic changes, although gameplay remains the same: In Processing Plant, the enemy at the end of the conveyor belt is a bush in the original, whereas the Software Projects one resembles a Pac-Man ghost. In Amoebatrons' Revenge, the original Bug-Byte amoebatrons look like alien octopuses with tentacles hanging down, whereas the Software Projects amoebatrons resemble the Bug-Byte logo - smiling beetles, with little legs up their sides. In The Warehouse, the original game has threshers travelling up and down the vertical slots, rotating about the screen's X-axis. The Software Projects version has Penrose triangles (i.e.
the Software Projects logo) instead, which rotate about the screen's Z-axis. The Bug-Byte cheat code was the numerical sequence "6031769" - based on Matthew Smith's driving licence. In the Software Projects version this changed to "typewriter". The numerical sequence "6031769" was later used as a cheat code (infinite lives) for the PC version of Grand Theft Auto. Internal code changes meant that a new POKE was required for infinite lives. Reception In the UK, Manic Miner was the best selling Commodore 64 game of 1984, and the third best selling ZX Spectrum game. It won a Golden Joystick Award for best arcade style game from Computer and Video Games magazine in the March 1984 edition, and placed third in the "Game of the Year 1983" category of the same competition. In 1991, ACE magazine listed Manic Miner and its sequel Jet Set Willy - along with Hunchback, Impossible Mission and the Mario series - as the greatest platform games of all time, calling it "the first great home computer platform game". Manic Miner was placed at number 25 in the "Your Sinclair official top 100" Spectrum games of all time, and was voted number 6 in the Readers' Top 100 Games of All Time in the same issue. The game was included at #97 on Polygon's 500 best games of all time list. Ports Official ports exist for the Commodore 64, Commodore 16, Amstrad CPC, BBC Micro, Dragon 32/64, Commodore Amiga, Oric 1, Game Boy Advance, MSX, SAM Coupé, Xbox 360 and mobile phones. Unofficial ports exist for IBM PC compatibles (Windows, DOS and Linux), Apple Macintosh, Atari ST, ZX81, TRS-80 Color Computer, Sony PlayStation, Nintendo 64, Neo Geo Pocket Color, Acorn Archimedes, Orao, Z88, PMD 85, HP48, Microsoft Zune, Acorn Atom, Commodore 128 and Commodore VIC-20. SAM Coupé The SAM Coupé version, programmed by Matthew Holt, requires pixel-perfect timing like the ZX original; both the graphics and the audio (the latter by František Fuka) were greatly updated. In addition to the original twenty caverns, forty additional caverns were included in this release. Levels were designed by David Ledbury and by winners of a competition run by SAM Computers Ltd. Although the SAM Coupé was broadly a Spectrum clone, it avoided the Spectrum's original limitations on colour graphics. Spectrum pixels could be of many colours, but all pixels within the span of a character block had to be from one of only two colours. The Manic Miner port made use of the removal of this restriction, with more detailed use of colour, most visibly in the character sprites. This version scored 84% in Your Sinclair, and 88% in Crash. PMD 85 The game was ported to the Czechoslovak PMD 85 computers in 1985. The authors of the PMD 85 version are Vít Libovický and Daniel Jenne, who made the conversion as accurate as they could. BBC Micro The BBC Micro version does not have the Solar Power Generator, instead containing a completely different room called "The Meteor Shower". This has the "reflecting machines" from the Solar Power Generator, but there is no beam of light. Instead, it has meteors which descend from the top of the screen and disintegrate when they hit platforms, like the Skylabs in Skylab Landing Bay. It also has forcefields which turn on and off, and the layout is completely different. Also, the very last screen (which is still called The Final Barrier) is complex and difficult (unlike the Spectrum version, which is considered to be fairly easy) and has a completely different layout. It also features the blinking forcefields.
Amstrad CPC The Amstrad version was effectively the same as the Spectrum version by Software Projects, except that Eugene's Lair was renamed "Eugene Was Here," and the layout of The Final Barrier was again completely different (but is more similar to the Spectrum version than the BBC version). Dragon 32/64 The Dragon 32 version, programmed by Roy Coates, had two extra rooms (i.e. 22 altogether) and a cheat mode accessed by typing "P P PENGUIN". To retain the resolution of the original, the Dragon version used PMODE 4 in black/white mode. Oric/Atmos Programmed by Chris Larkin, the Oric version features 32 screens instead of 20. Z88 The Z88 port has all the functionality (and cheats) of the Bug-Byte and Software Projects versions. The levels are the same and there is even some background music. HP 48 The HP 48 series version is somewhat limited by the low-resolution screen, scrolling the play area rather than displaying each level as a whole. This makes it a very difficult port for those who have not previously mastered another version. Otherwise it is fairly loyal to the ZX Spectrum version. The sound is somewhat different and colour is omitted for obvious hardware reasons, but gameplay remains similar despite the awkward platform. Commodore 16 The Commodore 16 version was limited in a number of respects - this was mainly due to the initial lack of developer material for the C16 machine, and a two-week deadline to produce and test the game, then generate a master tape for the duplication house. Other issues related to the lack of a fast loader system for the C16 cassette deck; as a result, it took about seven minutes for the game to load, and a bug resulted in the game entering the first screen as soon as the tape had finished loading instead of waiting for the user to start the game. Further issues related to the lack of music and in-game sound, and to the way that video memory was mapped in the C16; this resulted in a number of the screens having to be removed so that load time and video mapping could be correctly handled. Orao The Orao version was made in 1987 by Nenad Mihailovic. It was made without using any original game resources or files, by watching and replicating the original Spectrum version. The Orao version therefore does not contain any secrets that were not obvious in the original game, but it does replicate most of the original levels accurately. The Orao computer had a 256x256 black-and-white display, so the game was adjusted accordingly. This version of Manic Miner is also included in the Android Orao emulator app, made by the same author, under the 'Load Game' menu. Xbox 360 A version of the game was released for the Xbox 360 as an Xbox Live Indie Game under the name Manic Miner 360 on 21 June 2012. Sequels The sequel to Manic Miner is Jet Set Willy, which was followed by Jet Set Willy II. Software Projects also released a game in the style of Manic Miner for the Commodore VIC-20 called The Perils of Willy. In addition, quite a few unofficial sequels, remakes, homages and updates have been released, even up to this day, including a ZX81 version. Influence A homage to the loading screen appeared in one episode of the 2005 British sitcom Nathan Barley.
See also Miner Willy series of games Roller Coaster Miner 2049er Blagger Sir Lancelot References External links Complete video from the C64 Version on archive.org Retro Gamer Magazine: The Making Of Manic Miner HTML5 version of Manic Miner Android Orao emulator containing Manic Miner game 1983 video games Amiga games Amstrad CPC games BBC Micro and Acorn Electron games Commodore 16 and Plus/4 games Commodore 64 games Dragon 32 games Game Boy Advance games Mobile games MSX games Oric games PMD 85 games Platform games SAM Coupé games Video games developed in the United Kingdom ZX Spectrum games Video games set in the United Kingdom
2468105
https://en.wikipedia.org/wiki/Hybrid%20drive
Hybrid drive
In computing, a hybrid drive (solid state hybrid drive – SSHD) is a logical or physical storage device that combines a faster storage medium such as a solid-state drive (SSD) with a higher-capacity hard disk drive (HDD). The intent is to add some of the speed of SSDs to the cost-effective storage capacity of traditional HDDs. The purpose of the SSD in a hybrid drive is to act as a cache for the data stored on the HDD, improving overall performance by keeping copies of the most frequently used data on the faster SSD. There are two main configurations for implementing hybrid drives: dual-drive hybrid systems and solid-state hybrid drives. In dual-drive hybrid systems, physically separate SSD and HDD devices are installed in the same computer, with data placement optimization performed either manually by the end user or automatically by the operating system through the creation of a "hybrid" logical device. In solid-state hybrid drives, SSD and HDD functionalities are built into a single piece of hardware, where data placement optimization is performed either entirely by the device (self-optimized mode) or through placement "hints" supplied by the operating system (host-hinted mode). Types There are two main "hybrid" storage technologies that combine NAND flash memory or SSDs with HDD technology: dual-drive hybrid systems and solid-state hybrid drives. Dual-drive hybrid systems Dual-drive hybrid systems combine the usage of separate SSD and HDD devices installed in the same computer. Overall performance optimizations are managed in one of three ways: By the computer user, who manually places more frequently accessed data onto the faster drive. By the computer's operating system software, which combines SSD and HDD into a single hybrid volume, providing an easier experience to the end user. Examples of hybrid volume implementations in operating systems are ZFS's "hybrid storage pools", bcache and dm-cache on Linux, and Apple's Fusion Drive and other Logical Volume Management based implementations on OS X. By chipsets external to the individual storage drives. An example is the use of flash cache modules (FCMs). FCMs combine the use of separate SSD (usually an mSATA SSD module) and HDD components, while managing performance optimizations via host software, device drivers, or a combination of both. Intel Smart Response Technology (SRT), which is implemented through a combination of certain Intel chipsets and Intel storage drivers, is the most common implementation of FCM hybrid systems today. What distinguishes this dual-drive approach from an SSHD is that each drive maintains its ability to be addressed independently by the operating system if desired. Solid-state hybrid drive Solid-state hybrid drive (also known by the initialism SSHD) refers to products that incorporate a significant amount of NAND flash memory into a hard disk drive (HDD), resulting in a single, integrated device. The term SSHD is more precise than the more general hybrid drive, which has previously been used to describe both SSHD devices and non-integrated combinations of solid-state drives (SSDs) and hard disk drives. The fundamental design principle behind SSHDs is to identify the data elements that are most directly associated with performance (frequently accessed data, boot data, etc.) and store these data elements in the NAND flash memory. This has been shown to be effective in delivering significantly improved performance over a standard HDD.
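The hot-data placement principle just described can be illustrated with a small simulation. The following Python sketch is purely illustrative and does not correspond to any vendor's firmware; the class name, capacity, and promotion policy are hypothetical simplifications of the frequency-based caching idea.

```python
from collections import Counter

class HotDataCache:
    """Toy model of the SSHD caching principle described above: count block
    accesses and keep the most frequently used blocks in a small flash cache."""

    def __init__(self, flash_blocks: int):
        self.flash_blocks = flash_blocks     # capacity of the NAND cache, in blocks
        self.access_counts = Counter()       # per-block access frequency
        self.cached = set()                  # blocks currently held in flash

    def access(self, block: int) -> str:
        self.access_counts[block] += 1
        if block in self.cached:
            return "flash hit"               # served from NAND at SSD-like speed
        # Promote the block if it is now among the hottest blocks seen so far.
        hottest = {b for b, _ in self.access_counts.most_common(self.flash_blocks)}
        if block in hottest:
            self.cached = hottest            # evict colder blocks, keep the hot set
            return "promoted to flash"
        return "HDD read"                    # cold data stays on the platters

cache = HotDataCache(flash_blocks=2)
for b in [7, 7, 3, 7, 9, 3, 3, 9]:
    print(b, cache.access(b))
```

Real SSHDs apply much richer policies (boot-data pinning, write handling, wear management), but the underlying idea of promoting frequently accessed data to NAND flash is the same.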
An example of a dual-drive system often mistaken for an SSHD is found in laptops which combine separate SSD and HDD components into the same 2.5-inch HDD-size unit while, unlike SSHDs, keeping these two components visible and accessible to the operating system as two distinct partitions. WD's Black2 drive is a typical example; the drive can either be used as a distinct SSD and HDD by partitioning it appropriately, or software can be used to automatically manage the SSD portion and present the drive to the user as a single large volume. Operation In the two forms of hybrid storage technologies (dual-drive hybrid systems and SSHDs), the goal is to combine HDD and a faster technology (often NAND flash memory) to provide a balance of improved performance and high-capacity storage availability. In general, this is achieved by placing "hot data", or data that is most directly associated with improved performance, on the "faster" part of the storage architecture. Making decisions about which data elements are prioritized for NAND flash memory is at the core of SSHD technology. Products offered by various vendors may achieve this through device firmware, through device drivers, or through software modules and device drivers. Modes of operation Self-optimized mode In this mode of operation, the SSHD works independently from the host operating system or host device drivers to make all decisions related to identifying data that will be stored in NAND flash memory. This mode results in a storage product that appears and operates to a host system exactly as a traditional hard drive would. Host-optimized mode (or host-hinted mode) In this mode of operation, the SSHD enables an extended set of SATA commands defined in the so-called Hybrid Information feature, introduced in version 3.2 of the Serial ATA International Organization (SATA-IO) standards for the SATA interface. Using these SATA commands, decisions about which data elements are placed in the NAND flash memory come from the host operating system, device drivers, file systems, or a combination of these host-level components. Some of the specific features of SSHD drives, such as the host-hinted mode, require software support from the operating system. Microsoft added support for the host-hinted operation into Windows 8.1, while patches for the Linux kernel have been available since October 2014, pending their inclusion in the mainline kernel. History Hybrid-drive technology has developed considerably since the first models appeared in 2007: In 2007, Seagate and Samsung introduced the first hybrid drives with the Seagate Momentus PSD and Samsung SpinPoint MH80 products. Both models were 2.5-inch drives, featuring 128 MB or 256 MB NAND flash memory options. Seagate's Momentus PSD emphasized power efficiency for a better mobile experience and relied on Windows Vista's ReadyDrive. The products were not widely adopted. In May 2010, Seagate introduced a new hybrid product called the Momentus XT and used the term solid-state hybrid drive. This product focused on delivering the combined benefits of hard drive capacity points with SSD-like performance. It shipped as a 500 GB HDD with 4 GB of integrated NAND flash memory. In November 2011, Seagate introduced what they referred to as their second-generation SSHD, which increased the capacity to 750 GB and pushed the integrated NAND flash memory to 8 GB.
In March 2012, Seagate introduced their third-generation laptop SSHDs with two models, a 500 GB and a 1 TB, both with 8 GB of integrated NAND flash memory. In September 2012, Toshiba announced its first SSHD, delivering SSD-like performance and responsiveness by combining 8 GB of Toshiba's own SLC NAND flash memory and self-learning algorithms with up to 1 TB of storage capacity. In September 2012, Western Digital (WD) announced a hybrid technology platform pairing cost-effective MLC NAND flash memory with magnetic disks to deliver high-performance, large-capacity integrated storage systems. In November 2012, Apple Inc. released the factory-configured dual-drive hybrid system named Fusion Drive. In October 2015, TarDisk introduced the plug-and-play dual-drive hybrid system "TarDisk Pear", with flash memory size options up to 256 GB. In August 2021, Western Digital introduced OptiNAND, a new flash-enhanced HDD architecture. It uses a new iNAND read/write cache system for performance, and to prevent data loss when power is lost during a write phase. In under a second, the System-on-a-Chip (SoC) of an OptiNAND drive will use the rotational power generated by the already spinning disk platters to power internal capacitors until the iNAND-cached data transfers to non-volatile NAND. Benchmarks Late 2011 and early 2012 benchmarks using an SSHD consisting of a 750 GB HDD and 8 GB of NAND cache found that SSHDs did not offer SSD performance on random read/write and sequential read/write, but were faster than HDDs for application startup and shutdown. The 2011 benchmark included loading an image of a system that had been used heavily, running many applications, to bypass the performance advantage of a freshly installed system; it found in real-world tests that performance was much closer to an SSD than to a mechanical HDD. Different benchmark tests found the SSHD to be between an HDD and an SSD, but usually significantly slower than an SSD. In the case of uncached random access performance (multiple 4 KB random reads and writes), the SSHD was no faster than a comparable HDD; there is an advantage only for data that is cached. See also ExpressCache Fusion Drive Hybrid array ReadyBoost Linux topics bcache dm-cache flashcache Notes References Computer peripherals Computer storage devices Computer storage media Solid-state caching Hard disk drives Non-volatile memory
1188540
https://en.wikipedia.org/wiki/ICE%20%28cipher%29
ICE (cipher)
In cryptography, ICE (Information Concealment Engine) is a symmetric-key block cipher published by Matthew Kwan in 1997. The algorithm is similar in structure to DES, but with the addition of a key-dependent bit permutation in the round function. The key-dependent bit permutation can be implemented efficiently in software. The ICE algorithm is not subject to patents, and the source code has been placed into the public domain. ICE is a Feistel network with a block size of 64 bits. The standard ICE algorithm takes a 64-bit key and has 16 rounds. A fast variant, Thin-ICE, uses only 8 rounds. An open-ended variant, ICE-n, uses 16n rounds with a 64n-bit key. Van Rompay et al. (1998) attempted to apply differential cryptanalysis to ICE. They described an attack on Thin-ICE which recovers the secret key using 2^23 chosen plaintexts with a 25% success probability. If 2^27 chosen plaintexts are used, the probability can be improved to 95%. For the standard version of ICE, an attack on 15 out of 16 rounds was found, requiring 2^56 work and at most 2^56 chosen plaintexts. Structure ICE is a 16-round Feistel network. Each round uses a 32→32 bit F function, which uses 60 bits of key material. The structure of the F function is somewhat similar to DES: the input is expanded by taking overlapping fields, the expanded input is XORed with a key, and the result is fed to a number of reducing S-boxes which undo the expansion. First, ICE divides the input into 4 overlapping 10-bit values. They are bits 0–9, 8–17, 16–25, and 24–33 of the input, where bits 32 and 33 are copies of bits 0 and 1. Second is a keyed permutation, which is unique to ICE. Using a 20-bit permutation subkey, bits are swapped between halves of the 40-bit expanded input: if subkey bit i is 1, then bits i and i+20 are swapped (a sketch of this step appears below). Third, the 40-bit value is exclusive-ORed with 40 more subkey bits. Fourth, the value is fed through 4 10-bit S-boxes, each of which produces 8 bits of output. (These are much larger than DES's 8 6→4 bit S-boxes.) Fifth, the S-box output bits are permuted so that each S-box's outputs are routed to each 4-bit field of the 32-bit output word, including 2 of the 8 "overlap" bits duplicated during the next round's expansion. Like DES, a software implementation would typically store the S-boxes pre-permuted, in 4 1024×32 bit lookup tables. References Matthew Kwan, The Design of the ICE Encryption Algorithm, Fast Software Encryption 1997, pp. 69–82. Bart van Rompay, Lars R. Knudsen and Vincent Rijmen, Differential Cryptanalysis of the ICE Encryption Algorithm, Fast Software Encryption 1998, pp. 270–283 (PDF). External links The ICE Home Page The ICE information slides Feistel ciphers
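The keyed permutation described in the Structure section can be expressed compactly in code. The following Python sketch is illustrative only and is not taken from the reference implementation; it assumes that bit 0 is the least-significant bit of the 40-bit value, a convention chosen here for clarity rather than taken from the specification.

```python
def ice_keyed_permutation(expanded: int, pkey: int) -> int:
    """Keyed permutation as described above: for each bit i of the 20-bit
    permutation subkey that is set, swap bits i and i+20 of the 40-bit value.

    Illustrative sketch; bit numbering (LSB = bit 0) is an assumption."""
    for i in range(20):
        if (pkey >> i) & 1:
            lo = (expanded >> i) & 1          # bit i in the lower half
            hi = (expanded >> (i + 20)) & 1   # bit i + 20 in the upper half
            if lo != hi:                      # swapping equal bits changes nothing
                expanded ^= (1 << i) | (1 << (i + 20))
    return expanded

# Example: a subkey of all ones exchanges the two 20-bit halves entirely.
assert ice_keyed_permutation(0xFFFFF, 0xFFFFF) == 0xFFFFF << 20
```

In the cipher this step sits between the overlapping expansion and the XOR with the 40-bit subkey, so the bit-permutation pattern depends on the key rather than being fixed as it is in DES.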
2845793
https://en.wikipedia.org/wiki/Arachne%20%28web%20browser%29
Arachne (web browser)
Arachne is a stable Internet suite containing a graphical web browser, email client, and dialer. Originally, Arachne was developed by Michal Polák under his xChaos label, a name he later changed to Arachne Labs. It was written in C and compiled using Borland C++ 3.1. Arachne has since been released under the GPL as Arachne GPL. Arachne primarily runs on DOS-based operating systems, but includes builds for Linux as well. The Linux version relies on SVGALib and therefore does not require a display server. Background Arachne supports many file formats, protocols and standards, including video modes from CGA 640×200 in monochrome to VESA 1024×768 in high color mode. It is designed for systems that do not have any windowing system installed. Arachne supports multiple image formats including JPEG, PNG, BMP and animated GIF. It supports a subset of the HTML 4.0 and CSS 1.0 standards, including full support for tables and frames. Supported protocols include FTP, NNTP for USENET forums, POP3, SMTP and Gopher. Arachne includes a full-fledged TCP/IP connection suite, which has support for some dial-up and Ethernet connections. However, Arachne has no support for JavaScript, Java or SSL. Arachne can be expanded with the use of add-ons for such tasks as watching DivX movies, playing MP3 files, IRC chat, RSS and viewing PDF documents. Arachne also supports DOS Gateway Interface (DGI), a unique feature similar to Common Gateway Interface (CGI) scripting on the client. The first version of Arachne with a known release date was 1.0 Beta 2, which was released on 22 December 1996. The final and official version by Arachne Labs was 1.70R3 for DOS (released 22 January 2001) and 1.66 beta for Linux (released 20 July 2000). While there have been several more DOS versions, Linux development lay dormant until 24 May 2008, when a beta version 1.93 for Linux was released. The current DOS version, maintained by Glenn McCorkle, is 1.99 as of 23 December 2021. In 2006, there was also an experimental DPMI port of Arachne by Udo Kuhnt, named DPMI Arachne. Support Arachne supports a limited subset of stylesheets and HTML. Known support as of version 1.93: Derivatives xChaos software licensed the source code of Arachne to Caldera UK in 1997. Caldera UK added Novell's dialer and TCP/IP stack, JavaScript, SSL, implemented their own support for frames, added support for animated GIFs, audio output, printing on a multitude of printers, an optional on-screen keyboard for mouse and touch panel usage (SoftKeyboards), and user profiles, and they completely changed the design of the browser (customizable), using Allegro for graphics. They also ported it to compile as a 32-bit protected mode extended DOS application (utilizing DPMI via DJGPP, a GNU compiler for DOS), whereas Arachne is a 16-bit application. This program was sold as DR-WebSpyder in 1998; the name was chosen to associate it with DR-DOS, which Caldera owned at the time. After Caldera transferred DR-DOS to its branch company Caldera Thin Clients, which renamed itself Lineo in 1999, the browser was referred to under the name Embrowser. Since 2000, the Linux port of the browser has been called Embedix Browser.
See also Comparison of web browsers Lynx (text-based) FreeDOS List of web browsers MINUET (graphical) List of Usenet newsreaders Comparison of Usenet newsreaders References Further reading External links Arachne Development group & list Arachne GPL Arachne Add-ons Arachne Labs homepage Mel's Arachne4DOS UK Home page Arachne HTML support at freedos.org Arachne web browser. Installing and setting up for internet connection via Ethernet network adapter 1996 software Discontinued web browsers DOS software Gopher clients Internet suites Web browsers for DOS Free web browsers Free software programmed in C
60329642
https://en.wikipedia.org/wiki/G%201/19
G 1/19
G 1/19 is a decision issued by the Enlarged Board of Appeal of the European Patent Office (EPO) on 10 March 2021, which deals with the patentability of computer-implemented simulations. Background The case, triggered by decision T 489/14 issued on 22 February 2019 by Board of Appeal 3.5.07, deals with a European patent application relating to "a computer-implemented method, computer program and apparatus for simulating the movement of a pedestrian crowd through an environment". "The main purpose of the simulation is its use in a process for designing a venue such as a railway station or a stadium". While Board 3.5.07 acknowledged the analogy with case T 1227/05 (Circuit simulation I/Infineon Technologies) (in which the specific mathematical steps involved in a computer-implemented simulation of an electrical circuit subject to noise were found to contribute to the technical character of the invention), which supported the applicant's case, the Board did not agree with the conclusion reached by the deciding Board in T 1227/05. Eventually, considering this to be a question of fundamental importance, Board 3.5.07 decided to refer three questions to the Enlarged Board of Appeal. The questions The three questions referred to the Enlarged Board of Appeal are: "In the assessment of inventive step, can the computer-implemented simulation of a technical system or process solve a technical problem by producing a technical effect which goes beyond the simulation's implementation on a computer, if the computer-implemented simulation is claimed as such? If the answer to the first question is yes, what are the relevant criteria for assessing whether a computer-implemented simulation claimed as such solves a technical problem? In particular, is it a sufficient condition that the simulation is based, at least in part, on technical principles underlying the simulated system or process? What are the answers to the first and second questions if the computer-implemented simulation is claimed as part of a design process, in particular for verifying a design?" Amicus curiae and oral proceedings Oral proceedings took place before the Enlarged Board of Appeal on July 15, 2020. The oral proceedings were live streamed over the internet. Additionally, third parties were given the opportunity to file written statements after the initial referral to the Enlarged Board of Appeal, to be considered as part of these oral proceedings, resulting in the filing of 23 amicus curiae briefs. Decision The Enlarged Board of Appeal held "that existing case law regarding computer-implemented inventions also applies to computer-implemented simulations", and it retained "its established approach in assessing inventive step, known as the COMVIK approach". See also G 3/08, referral relating to the patentability of programs for computers (referral held to be inadmissible for lack of divergent case law) List of decisions and opinions of the Enlarged Board of Appeal of the European Patent Office References Further reading External links Decision G 1/19 of the Enlarged Board of Appeal of 10 March 2021 Decision T 489/14 (Pedestrian simulation/CONNOR) of 22 February 2019 (referring decision) G 2019 1
23491050
https://en.wikipedia.org/wiki/Fred%20Davis%20%28entrepreneur%29
Fred Davis (entrepreneur)
Frederic Emery Davis (born June 17, 1955), known as Fred Davis, is a veteran US technology writer and publisher who served as editor of A+ magazine, MacUser, PC Magazine and PC Week; personal computer pioneer; technologist; and entrepreneur involved in the startups of Wired, CNET, Ask Jeeves, Lumeria, Jaduka, and Grabbit. Childhood Davis was born at the Yale New Haven Hospital while his father was enrolled at Yale. Davis's father was Dr. Donald Davis (deceased), an IBM Fellow and the creator of the "learning organization" management practice (while a professor at the University of Utrecht in the Netherlands). Davis's mother was Doris Davis (deceased), an educator, artist, and longtime director of the Upward Bound project at Bowdoin College, in Brunswick, Maine. Education Davis attended Friends Academy, in Locust Valley, New York, from kindergarten through sixth grade, while his mother was an English teacher and assistant principal. In 1966 Davis enrolled at Eaglebrook School, in Deerfield, Massachusetts, where he spent grades 7 through 9 and graduated in 1970. In 1966 Davis, then age 11, learned computer programming by participating in the testing and development of the BASIC computer language via a time-sharing hookup to Dartmouth College, where BASIC was being developed. In 1970 Davis enrolled at Northfield Mt. Hermon School, in Northfield, Massachusetts, where he spent his sophomore and junior years. He spent his senior year of high school at Collins Brook School, in Freeport, Maine. The school was modeled after and run by former educators of the progressive Summerhill School in Suffolk, England. Davis graduated in 1973 from Collins Brook, where his senior class consisted of only six people. In 1978 Davis enrolled in Antioch College and completed his B.A. in botany. The Wall St. Journal described Davis as "a teen-age prodigy who earned a B.A. in only one year." In 1979 Davis continued at Antioch College, and he earned his M.S. in Ecosystems Management in 1981. In 1980 Davis enrolled at Union Institute & University as a degree candidate for a PhD in Information Technology. His dissertation was titled "Computer Assisted Publishing," and although Davis did not complete his PhD, the dissertation formed the groundwork for a book he coauthored, Desktop Publishing, one of the first books to be published on that topic. Early business ventures In 1974 Davis bought Lougee's, a greenhouse and florist business in Belfast, Maine. He expanded the business to include North Star Orchids, an orchid nursery that was a major importer of orchids in 1975 and 1976. During that time, he designed and established a commercial plant tissue culture laboratory for North Star Orchids; the lab was one of the first commercial plant tissue culture labs to use a laminar flow hood based on millipore filters for aseptic lab work, at a time when glove boxes were the standard aseptic tissue culture work areas. Davis invented a new type of plant tissue culture vessel based on millipore filters for respiration, and that invention was published in Orchid Review, where it received international attention. It was cited in subsequent scientific papers and eventually became part of the standard design of almost all types of labware that needed to provide aseptic respiration. History with computers In the late '70s, Davis moved from Maine to San Francisco, where he became one of the early personal computer pioneers working with CP/M-based systems.
Davis bought an original Apple II in 1977 and emerged as an early Apple II programmer, due to his prior knowledge of BASIC, the language that computer used. Davis was among the first computer engineers to successfully connect microcomputers to mainframes, connecting his Apple II with Stanford's DEC PDP-10 while conducting research on database publishing at Stanford. Davis also served as a computer consultant to large corporations and venture capitalists in the early days of the industry. Magazine publishing Davis held several executive positions at technology magazines from 1983 to 2002. Davis was the editor of A+, MacUser, PC Magazine, and PC Week; a columnist and writer for Wired; and the publisher of dig_iT. ZD, A+, MacUser In 1983 Davis became one of the founders of Ziff-Davis's computer publishing division and worked on the startup of ZD's first computer publication, A+, which rapidly became the leading publication about Apple computers. Later, during Davis's tenure as editor-in-chief, A+ won the Computer Press Association award for Best Computer Magazine. From A+, Davis moved over to serve as editor-in-chief of its sister magazine MacUser, where he founded MacUser Labs. He oversaw the development of the magazine during its period of greatest growth, bringing it up to parity with Macworld. PC Magazine, PC Week Next, Davis joined PC Magazine, which, at the time, was the world's leading computer publication in terms of revenue and circulation, with an annual revenue of approximately $250 million. As editor of PC Magazine and director of PC Magazine Labs, he helped develop benchmarks and scripts for testing thousands of products under review. Later, as editor of PC Week, he founded PC Week Labs and developed industry-standard benchmark tests for corporate computing. After leaving PC Week, which was located in Massachusetts, to return to California, Davis helped launch and served as a columnist for several other Ziff-Davis publications, including Windows Sources, Computer Life, Family PC, and the ZD Personal Computing newspaper supplement. Wired In 1992 Davis worked with Wired CEO Louis Rossetto on the launch of Wired magazine and was part of the original "Wired Brain Trust." After Wired, Davis worked with CNET CEO Halsey Minor as an original member of the CNET startup team, where he helped develop both television and online strategies. dig_iT In 2001 Davis and computer publishing pioneer David Bunnell founded Prosumer Media, the publisher of dig_iT, a magazine focused on "the digital lifestyle." Internet Davis was an early Internet pioneer who launched one of the Internet's first curated search and discovery sites, Weblust.com. Early users of Weblust included Ralph Nader and Avram Miller. NetGuide Working with CMP Media, Davis led a team that developed an advanced consumer portal and Internet search site named NetGuide. It included multimedia features and games such as "Where's Barlowe?", based on the travels of John Perry Barlow. These features were developed by Marc Canter, founder of MacroMind/MacroMedia, and David Biedney, a highly acclaimed graphic artist. NetGuide was one of the first major consumer Web projects to use Java. One of the programmers on the NetGuide project was Craig Newmark, who later went on to start Craigslist. CNET In 1994 Davis joined CNET during its early startup stage. He was the first computer industry person to join the CNET team, which at the time consisted of Halsey Minor, Shelby Bonnie, and their administrative assistant.
Davis worked with Fox Network cofounder Kevin Wendle and former Disney creative associate Dan Baker to produce CNET's four pilot television programs about computers, technology, and the Internet. Ask Jeeves In 1996 Davis helped launch Ask Jeeves (now Ask.com), after venture capitalist Garrett Gruener introduced Davis to David Warthen, the founding CEO. Lumeria In 1998 Davis was the CEO and founder of Lumeria, an infomediary company involved in identity management; identity commerce; consumer privacy; and helping consumers own, control, and get value from their personal information. Lumeria's identity management business was designed to provide a secure way for individuals to protect and share their personal information, with Lumeria acting as an agent on their behalf to protect their information and extract value from that information, which was stored in Lumeria's SuperProfile distributed secure XML database. Lumeria had a controversial subsidiary, Lumeria Ad Network, that replaced ads in a user's browser with ads from that person's own ad network. PrivaTel, Jaduka In 2005 Davis founded and served as the CEO of PrivaTel, a wholly owned subsidiary of Network Enhanced Telecom, LLP (also known as Network IP), based in Dallas, Texas. While at PrivaTel, Davis developed three main services: My Private Line, a prepaid calling card with disposable phone numbers that could be used for privacy protection when people are not comfortable providing their actual phone number and providing the ability to turn individual numbers off or direct them to go directly to voicemail; Click-and-Connect, a service that enables someone to initiate a phone call from almost every Web page or application; and CallsAd, a service in which classified ads are sold with a temporary contact number, enabling the seller to keep personal numbers private and turn the contact number off after the sale. After Davis's departure, the company was renamed Jaduka and it dropped the privacy products to focus exclusively on the Click-and-Connect service. Grabbit In 2009 Davis founded Grabbit, with Lisa Padilla and Peter Karnig. Grabbit is an open source social media platform that integrates social network and other content streams into a single stream that can be filtered by social networks, content sources, keywords, users, and so on. Users can add many types of real-time streams, updates, and alerts such as RSS feeds, e-mail alerts, news alerts, blog alerts, alarms and reminders, shopping alerts, and more across a broad range of social networks, information, media, and commerce. Grabbit also gives users a powerful set of tools for managing and discovering friends, contacts, and groups of contacts across a wide variety of social and business networks. Grabbit was built with Drupal, and the open source project and source code is published on GitHub. In partnership with Cognition, Grabbit developed new semantic technology to analyze social media, blog posts, friends, locations, media consumption, product purchases, brand preferences, and other data. As a result of this work, Davis filed a patent application titled "Semantically Generating Personalized Recommendations Based on Social Feeds to a User in Real-Time and Display Methods Thereof." Other work From 1996 to mid-1997, Davis served as director of Strategic Development for CMP Media, during the period leading up to its successful IPO, in August 1997. 
While working for CMP Media's CEO, Ken Cron, on long-range business strategies, Davis also wrote articles and columns in various CMP publications, including Windows Magazine, Home PC, and Computer Reseller News. Davis was a columnist for the San Jose Mercury News and was the US columnist for EYE-COM, one of Japan's leading computer magazines. Davis was also a regular technology commentator for National Public Radio's All Things Considered and is the former cohost of the radio call-in show On Computers, with John Dvorak, Gina Smith, and Leo Laporte. Davis also served as president of the Computer Institute, a nonprofit scientific and cultural foundation involved in education research and the study of human/computer ecology. Davis was also the founder of the Festival of Computer and Multimedia Arts (CoMA) in San Francisco. Davis is the author of more than a dozen computer books, including The Complete IBM Personal Computer—the first hardware expansion guide to the IBM PC, published in the early 1980s. His 1985 book, Desktop Publishing (with coauthors John Barry and Michael Wiesenberg), helped popularize the term and received an award from the Computer Press Association. The New York Times hailed Davis's Windows 3.1 Bible as "the best" book on the topic. Davis also developed the Windows Bible CD-ROM, released in early 1994. The Windows 95 Bible was released in April 1996, and his Windows 98 Bible (with coauthor Kip Crosby) was published in April 1998. Davis has been named one of the most influential people in the tech industry by several publications in both the US and Japan and is listed in Marquis Who's Who in America. Davis has been widely quoted in publications such as Business Week, The Wall Street Journal, The New York Times, USA Today, U.S. News & World Report, and Atlantic Monthly and has appeared on many radio and television programs, including NPR's All Things Considered, CBS Evening News, and ABC News Tonight. Most recently, Davis has been exploring IPTV and Google Glass. References External links Grabbit Grabbit open source project American male journalists American magazine editors Antioch University alumni 1955 births Living people Technology evangelists American inventors American Internet celebrities Northfield Mount Hermon School alumni Berkeley Macintosh Users Group members
1160910
https://en.wikipedia.org/wiki/ProBoards
ProBoards
ProBoards is a free, remotely hosted message board service that facilitates online discussions by allowing people to create their own online communities. Ownership and service statistics ProBoards was founded and is owned by Patrick Clinger, who wrote the ProBoards software. The service hosts over 3,000,000 internet forums, which in turn have approximately 22,800,000 users worldwide. Currently, all ProBoards forums combined receive a total of over 600 million pageviews per month, making ProBoards one of the largest websites on the Internet. However, according to Techcrunch.com writer Anthony Ha, those numbers have seemingly dropped. In an interview, founder/owner Patrick Clinger stated "ProBoards has been used to create 3.5 million forums", but about 1.2 million of them are still active (i.e. resulting in the occasional page view). Software history Proboards is coded in Perl, a popular programming language with web developers. Previously, due to the remotely hosted nature of the service, users could not modify the software directly as with some forum systems, but some customisation was possible through the use of CSS or JavaScript codes. With the release of v.5, however, ProBoards gives Administrators and certain other members access to the HTML and CSS of the webpage, for easier coding purposes. The first day of business for ProBoards was January 1, 2000. At first, ProBoards originally used software created by the owner, Patrick Clinger. In late 2001, though, ProBoards switched to the YaBB system. At the same time, other changes to the service made it the first remotely hosted service to offer a subdomain with each forum (e.g. username.proboards[servernumber].com) On June 11, 2002, ProBoards Version 2 was launched. This was coded by Clinger and was a rewrite of the entire software rather than improvements to the existing YaBB based setup. The main goals of this rewrite were to improve the overall speed of the software and add new features to keep the product competitive. In February 2003, version 3 of the ProBoards software was released, again making improvements on the overall speed of the software and including over 30 new features. ProBoards upgraded to version 4 of its software on April 30, 2005. This time, the upgrade added over 100 new features and enhancements to the service. Despite this, bugs of varying levels of severity still existed. The current version of the software is v5. ProBoards' servers - physical machines running the ProBoards software - are hosted by SoftLayer. Previous to November 2010, ProBoards was hosted by ThePlanet.com, and previous to 2006, EV1 Servers. The servers are hosted in multiple SoftLayer datacenters in Texas. In 2005, Patrick Clinger was invited by EV1 Servers to take part in a commercial for their business. The commercial opened with a voiceover introducing Clinger as the owner of ProBoards.com, and he then gave a testimonial about how EV1's hosting benefited ProBoards. The commercial was shown at the 2005 Houston Bowl. Since 2007, EV1 no longer exists as a webhost, having merged with The Planet. As of March 2009, the server numbers (boardname.proboards##.com) no longer need to be used due to a recent change that allows every ProBoards forum to be accessed without a server number in the URL. (For example, boardname.proboards.com) Due to the advent of Facebook, ProBoards transitioned into a social network and forum service hybrid with the introduction of version 5. 
Hosting Although a number of subscription style features are optionally available, there is no obligation for any user to purchase anything from ProBoards. Forums are hosted for free, with no bandwidth or webspace cap, provided users allow advertisements to be displayed on their forum. Until September 2003, ProBoards was supported by popunders, but these were discontinued in favor of less intrusive methods of advertising. Currently a typical forum will contain a Google AdSense banner ad and some small text links on every page. ProBoards also sells advertising directly to users through a selfserve system. ProBoards also has an agreement with a third party chatroom provider, addonInteractive, to provide Java-based chatrooms to users. Each forum admin can activate a free version of the chat on their forum, with paid upgrades available for busy forums. The chats integrate fully with forum accounts. Policies ProBoards users are bound by a number of Terms of Service, restricting the type of content which may appear on a ProBoards forum. ProBoards prohibits illegal or adult content. Formerly, only English forums were allowed, but in May 2010 this policy was changed, allowing boards to be created in any language. In addition to content policies, Proboards terms also seemingly prohibit the use of "ad-blocking" technology when accessing its services. User privacy is protected by a Privacy Policy outlining the use of logged information, as well as cookie policy, forum monitoring, and publicly available information. The US COPPA law is enforced by requiring all users to enter their date of birth on registration. Users aged under 13 are not permitted to register at any ProBoards forum. According to the Terms of Service, any user under the age of 18 also requires parental permission to register, but this is taken as implied when they accept the registration agreement and not verified. ProBoards allows users to apply affiliate marketing practices to monetize their communities via a partnership with VigLink announced January 2014. This partnership allows any ProBoards forum managers or creators to generate revenue from traffic with VigLink Insert. References External links ProBoards official website ProBoards Blog Internet forum hosting
17316652
https://en.wikipedia.org/wiki/Misuse%20case
Misuse case
Misuse case is a business process modeling tool used in the software development industry. The term Misuse Case or mis-use case is derived from and is the inverse of use case. The term was first used in the 1990s by Guttorm Sindre of the Norwegian University of Science and Technology, and Andreas L. Opdahl of the University of Bergen, Norway. It describes the process of executing a malicious act against a system, while a use case can be used to describe any action taken by the system. Overview Use cases specify required behaviour of software and other products under development, and are essentially structured stories or scenarios detailing the normal behavior and usage of the software. A misuse case, on the other hand, highlights something that should not happen (i.e. a negative scenario); the threats thus identified help in defining new requirements, which are expressed as new use cases. This modeling tool has several strengths: It gives equal weight to functional and non-functional requirements (e.g. security requirements, platform requirements, etc.), which may not be possible with other tools. It emphasises security from the beginning of the design process and helps to avoid premature design decisions. It is a tool for improving communication between developers and stakeholders and is valuable in ensuring that both agree on critical system solutions and trade-off analysis. Creating misuse cases often triggers a chain reaction that eases the identification of functional and non-functional requirements. The discovery of a misuse case often leads to the creation of a new use case that acts as a countermeasure, and this in turn might be the subject of a new misuse case. Compared to other tools, it relates better to use cases and UML, which makes the model straightforward to adopt alongside them. Its biggest weakness is its simplicity: it needs to be combined with more powerful tools to establish an adequate plan for the execution of a project. One other weakness is its lack of structure and semantics. From use to misuse case In industry it is important to describe a system's behavior when it responds to a request that originates from outside the system. Use cases have become popular for capturing requirements among engineers because of features such as visual modeling: they describe a system from an actor's viewpoint, and their format explicitly conveys each actor's goals and the flows the system must implement to accomplish them. The level of abstraction of a use case model makes it an appropriate starting point for design activities, thanks to the use of UML use case diagrams and the end user's or domain expert's language. But for software security analyses, developers should pay attention to negative scenarios and understand them. That is why, in the 1990s, the concept of the "inverse of a use case" was born in Norway. The contrast between the misuse case and the use case is the goal: the misuse case describes potential system behaviors that a system's stakeholders consider unacceptable or, as Guttorm Sindre and Andreas L. Opdahl said, "a function that the system should not allow". The difference also shows in the scenarios: a "positive" scenario is a sequence of actions leading to a goal desired by a person or organization, while a "negative" one is a scenario whose goal the organization in question does not want to occur, or whose goal is desired by a hostile agent (not necessarily human). 
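As a concrete illustration of this contrast, the following sketch models a hypothetical online banking scenario as plain data structures. The scenario, the actor and misuser names, and the field layout are invented for illustration and are not drawn from Sindre and Opdahl's papers; the point is only to show how a misuse case mirrors a use case and how its discovery produces a new use case as a countermeasure.

from dataclasses import dataclass, field
from typing import List

@dataclass
class UseCase:
    name: str
    actor: str
    goal: str

@dataclass
class MisuseCase:
    name: str
    misuser: str                                            # hostile or careless agent
    goal: str                                               # outcome the organization does not want
    threatens: List[str] = field(default_factory=list)      # use cases put at risk
    mitigated_by: List[str] = field(default_factory=list)   # countermeasure use cases

# A normal ("positive") scenario.
transfer = UseCase(name="Transfer funds", actor="Customer",
                   goal="Move money between accounts")

# Its inverse: a scenario the organization wants never to succeed.
hijack = MisuseCase(name="Hijack customer session", misuser="External attacker",
                    goal="Transfer funds from a stolen session",
                    threatens=[transfer.name])

# Discovering the misuse case suggests a new use case that acts as a countermeasure
# and joins the requirements alongside the original use cases.
reauthenticate = UseCase(name="Re-authenticate before transfer", actor="Customer",
                         goal="Confirm identity before sensitive operations")
hijack.mitigated_by.append(reauthenticate.name)

print(hijack.name, "threatens", hijack.threatens, "and is mitigated by", hijack.mitigated_by)

The threatens and mitigated_by links recorded here correspond to the two new relation types, threatens and mitigates, that the misuse case diagram notation introduces, as described under Basic concepts below.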
Another way to describe the difference: if a use case is defined as a completed sequence of actions which gives increased value to the user, one could define a misuse case as a completed sequence of actions which results in loss for the organization or some specific stakeholder. The "good" and the "bad" cases share a common language for representing the scenario: use case diagrams are formally included in two modeling languages defined by the OMG, the Unified Modeling Language (UML) and the Systems Modeling Language (SysML), and drawing the agents and misuse cases of the scenario explicitly helps focus attention on them. Area of use Misuse cases are most commonly used in the field of security. With the ever-growing importance of IT systems, it has become vital for every company to develop the capability to protect its data. Hence, for example, a misuse case might be used to define what a hacker would want to do with the system and to capture his or her requirements. A developer or designer can then define the requirements of the user and the hacker in the same UML diagram, which in turn helps identify the security risks of the system. Basic concepts A misuse case diagram is created together with a corresponding use case diagram. The model introduces two new important entities (in addition to those from the traditional use case model, use case and actor): Misuse case: A sequence of actions that can be performed by any person or entity in order to harm the system. Misuser: The actor that initiates the misuse case. This can be done either intentionally or inadvertently. Diagrams The misuse case model makes use of the relation types found in the use case model: include, extend, generalize and association. In addition, it introduces two new relations to be used in the diagram: mitigates A use case can mitigate the chance that a misuse case will complete successfully. threatens A misuse case can threaten a use case, e.g. by exploiting it or hindering it from achieving its goals. These new concepts, together with the existing ones from use cases, give a meta model, which is found as fig. 2 in Sindre and Opdahl (2004). Descriptions There are two different ways of describing a misuse case textually. One is embedded in a use case description template, where an extra description field called Threats can be added. This is the field where misuse case steps (and alternate steps) can be filled in. This is referred to as the lightweight mode of describing a misuse case. The other way of describing a misuse case is to use a separate template for this purpose only. It is suggested to inherit some of the fields from the use case description (Name, Summary, Author and Date). It also adapts the fields Basic path and Alternative path, where they now describe the paths of the misuse cases instead of the use cases. In addition to these, it is proposed to use several other fields too: Misuse case name, Summary, Author, Date, Basic path, Alternative paths, Mitigation points, Extension points, Triggers, Preconditions, Assumptions, Mitigation guarantee, Related business rules, Potential misuser profile, Stakeholders and threats, Terminology and explanations, Scope, Abstraction level, and Precision level. As one might understand, the list above is too comprehensive to be completely filled out every time. Not all the fields are required to be filled in at the beginning, and the description should thus be viewed as a living document. There has also been some debate about whether to start with diagrams or with descriptions. 
The recommendation given by Sindre and Opdahl on that matter is that it should be done as with use cases. Sindre and Opdahl propose the following five steps for using misuse cases to identify security requirements: Identify critical assets in the system Define security goals for each asset Identify threats to each of these security goals by identifying the stakeholders that may want to cause harm to the system Identify and analyze risks for the threats, using techniques like risk assessment Define security requirements for the risks. It is suggested to use a repository of reusable misuse cases as a support in this five-step process. Research Current field of research Current research on misuse cases is primarily focused on the security improvements they can bring to a project, software projects in particular. Ways to increase the widespread adoption of the practice of misuse case development during earlier phases of application development are being considered: the sooner a flaw is found, the easier it is to find a patch and the lower the impact is on the final cost of the project. Other research focuses on improving the misuse case to achieve its final goal; according to one critique, "there is a lack on the application process, and the results are too general and can cause a under-definition or misinterpretation of their concepts". The same authors suggest furthermore "to see the misuse case in the light of a reference model for information system security risk management (ISSRM)" to obtain a security risk management process. Future improvement Misuse cases are well known among researchers, and the body of research on the subject demonstrates that familiarity, but beyond the academic world the misuse case has not been broadly adopted. As Sindre and Opdahl (the parents of the misuse case concept) suggest: "Another important goal for further work is to facilitate broader industrial adoption of misuse cases". They propose, in the same article, to embed the misuse case in a use case modeling tool and to create a "database" of standard misuse cases to assist software architects. System stakeholders should create their own misuse case charts for requirements that are specific to their own problem domains. Once developed, such a knowledge database could reduce the number of standard security flaws exploited by ordinary hackers. Other research has focused on the concrete solutions the misuse case may lack; as one paper wrote, "While this approach can help in a high level elicitation of security requirements, it does not show how to associate the misuse cases to legitimate behavior and concrete assets; therefore, it is not clear what misuse case should be considered, nor in what context". These criticisms might be addressed with the suggestions and improvements presented in the preceding section. Standardization of the misuse case as part of the UML notation might allow it to become a mandatory part of project development. "It might be useful to create a specific notation for security functionality, or countermeasures that have been added to mitigate vulnerabilities and threats." See also Use case diagram Steps for Business Analyst To Gather Security Requirements from Misuse Cases Exception handling Threat model (software) References Business process Software project management Software requirements
24523966
https://en.wikipedia.org/wiki/Comparison%20of%20OLAP%20servers
Comparison of OLAP servers
The following tables compare general and technical information for a number of online analytical processing (OLAP) servers. Please see the individual products' articles for further information. The comparison tables themselves are not reproduced here; they cover: general information; data storage modes; the APIs and query languages the OLAP servers support; distinctive OLAP features, i.e. features that are not supported by all vendors (all vendors support features such as parent-child, multilevel hierarchy, and drilldown), including data processing, management and performance related features and data modeling features; system limits; security; the operating systems the OLAP servers can run on (for Java-based servers, availability depends on the Java Virtual Machine rather than on the operating system); and support information. See also Cubes (light-weight open-source OLAP server) ClickHouse Apache Pinot Apache Druid icCube Oracle Retail Predictive Application Server (RPAS), a retail specific MOLAP/OLAP server using Berkeley DB for persistence Palo (OLAP database) References OLAP Servers Data management Data warehousing products
8952081
https://en.wikipedia.org/wiki/CricketPaint
CricketPaint
CricketPaint was a second generation 1-bit (black and white) painting software program for the Apple Macintosh by Cricket Software. It followed MacPaint and was a competitor to Silicon Beach Software's SuperPaint. Like SuperPaint it was an early attempt to combine the separate graphic methods of bitmap and vector graphics. Cricket Software already had a vector-only package called CricketDraw. The way it achieved this dualism was with a feature called WetPaint. This allowed the user to draw vector graphics and modify them in an object-oriented way like in Apple's MacDraw, for example, changing the size, stroke and fill. When satisfied, the user could click outside the object and CricketPaint would convert the vector graphic into a bitmap and place it on the canvas, in a destructive edit. This package had some extra tools not found in MacPaint or MacDraw, such as the Spiral and Starburst, which drew radial lines. It was also released for Microsoft Windows. See also MacPaint SuperPaint (Macintosh) CricketDraw List of old Macintosh software References Notes Infoworld 1992 Raster graphics editors Classic Mac OS software Discontinued software
36623915
https://en.wikipedia.org/wiki/Kensington%20College%20of%20Business
Kensington College of Business
Kensington College of Business (KCB) is an independent higher education institution located in Oxford Circus, London. Background KCB was established in 1982 and celebrated its Silver Jubilee in 2007. It specialises primarily in Business and Information Technology. KCB concentrates on three areas of education: the delivery of pre-university, undergraduate and postgraduate studies; providing specialist training to major corporate clients (including Lloyds TSB and Capita Registrars); and preparing students for the qualifications of leading chartered professional bodies such as the Institute of Chartered Secretaries and Administrators (ICSA) and the Chartered Institute of Marketing (CIM). Associations Kensington College of Business operates as a London campus of the University of Chester, offering MBA, MSc, BA and BSc programs. Kensington College of Business is a registered centre with the University of London and the University of Wales, delivering programs validated by those universities. KCB also offers courses in collaboration with the University of South Wales, the University of Hertfordshire, and the University of Portsmouth. KCB created a subsidiary institution named the Laksamana College of Business in Brunei, allowing students there to receive a foundation degree accredited by KCB. These students may then continue their education at KCB in London to receive their university degree. Validations All undergraduate and postgraduate programs offered by KCB are validated by the University of Wales, with the exception of the Law degree (LLB), which is awarded by the University of London. KCB itself is accredited to offer HNC and HND courses from BTEC Pearson in Business, while students seeking university degrees must sit for exams at the University of Chester. Academics KCB courses validated by the University of Wales include BA-level study in Business Studies, Marketing, Information Technology, and Business Accounting and Finance; MBA-level studies in Health Care Management, Tourism and Hospitality Management, Travel Management, Information Technology Management, Finance, Human Resource Management, Marketing, International Management, Security Management, Banks and Financial Institutions Management, as well as General Studies; and MSc-level studies in Computing. KCB courses validated by the University of London include LLB studies. Accreditation Accredited by the Accreditation Service for International Colleges. Recognised under the United Kingdom Government's Department for Innovation, Universities and Skills List 2 as a Degree Teaching Institution. Accredited Teaching Centre of the Chartered Institute of Marketing (CIM). Approved Teaching Centre of the Institute of Chartered Secretaries and Administrators (The ICSA). References External links Kensington College of Business, United Kingdom Business schools in England Higher education colleges in London Educational institutions established in 1982 1982 establishments in England
1232264
https://en.wikipedia.org/wiki/XORP
XORP
XORP is an open-source Internet Protocol routing software suite originally designed at the International Computer Science Institute in Berkeley, California. The name is derived from eXtensible Open Router Platform. It supports OSPF, BGP, RIP, PIM, IGMP, OLSR. The product is designed from principles of software modularity and extensibility and aims at exhibiting stability and providing feature requirements for production use while also supporting networking research. The development project was founded by Mark Handley in 2000. Receiving funding from Intel, Microsoft, and the National Science Foundation, it released its first production software in July 2004. The project was then run by Atanu Ghosh of the International Computer Science Institute, in Berkeley, California. In July 2008, the International Computer Science Institute transferred the XORP technology to a new entity, XORP Inc., a commercial startup founded by the leaders of the opensource project team and backed by Onset Ventures and Highland Capital Partners. In February 2010, XORP Inc. was wound up, a victim of the recession. However the open source project continued, with the servers based at University College London. In March 2011, Ben Greear became the project maintainer and the www.xorp.org server is now hosted by Candela Technologies. The XORP codebase consists of around 670,000 lines of C++ and is developed primarily on Linux, but supported on FreeBSD, OpenBSD, DragonFlyBSD, NetBSD. Support for XORP on Microsoft Windows was recently re-added to the development tree. XORP is available for download as a Live CD or as source code via the project's homepage. The software suite was selected commercially as the routing platform for the Vyatta line of products in its early releases, but later has been replaced with quagga. Routing features As of 2009, the project supports the following routing protocols: Static routing Routing Information Protocol (RIP and RIPng): (RIP version 2) (RIP-2 MD5 Authentication) (RIPng for IPv6) Border Gateway Protocol: (A Border Gateway Protocol 4 (BGP-4)) (Capabilities Advertisement with BGP-4) (Multiprotocol Extensions for BGP-4) (Use of BGP-4 Multiprotocol Extensions for IPv6 Inter-Domain Routing) (BGP Communities Attribute) (BGP Route Reflection - An Alternative to Full Mesh IBGP) (Autonomous System Confederations for BGP) (BGP Route Flap Damping) (BGP Support for Four-octet AS Number Space) (Definitions of Managed Objects for the Fourth Version of the Border Gateway Protocol (BGP-4) using SMIv2) Open Shortest Path First version 2 (OSPFv2) and version 3 (OSPFv3): (OSPF Version 2) (The OSPF Not-So-Stubby Area (NSSA) Option) (OSPF for IPv6) PIM Sparse Mode (PIM-SM): IGMP v1, v2, and v3: (Internet Group Management Protocol, Version 2) (Internet Group Management Protocol, Version 3) Multicast Listener Discovery (MLD v1 and v2): (Multicast Listener Discovery (MLD) for IPv6) (Multicast Listener Discovery Version 2 (MLDv2) for IPv6) Virtual Router Redundancy Protocol (VRRP v2): User interface XORP provides a command line interface for interactive configuration and operation monitoring. The interface is implemented as a distinct application called xorpsh, that may be invoked by multiple users simultaneously. It interacts via interprocess communication with the router core modules. The command line language is modelled after that of Juniper Networks's JunOS platform. 
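The xorpsh configuration language follows the same nested, brace-delimited style as JunOS, as noted above. The fragment below is only an illustration of that general shape, defining one interface address and a static default route; the specific statement and attribute names are assumptions based on typical XORP examples, not a verified configuration, and should be checked against the documentation for the XORP release in use.

/* Illustrative sketch only; statement names are assumptions, not verified syntax. */
interfaces {
    interface eth0 {
        vif eth0 {
            address 192.0.2.1 {
                prefix-length: 24
            }
        }
    }
}
protocols {
    static {
        route 0.0.0.0/0 {
            next-hop: 192.0.2.254
        }
    }
}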
See also List of open source routing platforms References External links Official website SourceForge website Release Notes Slashdot discussion Free routing software
39643325
https://en.wikipedia.org/wiki/Crowdsourcing%20software%20development
Crowdsourcing software development
Crowdsourcing software development or software crowdsourcing is an emerging area of software engineering. It is an open call for participation in any task of software development, including documentation, design, coding and testing. These tasks are normally conducted by either members of a software enterprise or people contracted by the enterprise. But in software crowdsourcing, all the tasks can be assigned to or are addressed by members of the general public. Individuals and teams may also participate in crowdsourcing contests. Goals Software crowdsourcing may have multiple goals. Quality software: Crowdsourcing organizers need to define specific software quality goals and their evaluation criteria. Quality software often comes from competent contestants who can submit good solutions for rigorous evaluation. Rapid acquisition: Instead of waiting for software to be developed, crowdsourcing organizers may post a competition hoping that something identical or similar has been developed already. This is to reduce software acquisition time. Talent identification: A crowdsourcing organizer may be mainly interested in identifying talents as demonstrated by their performance in the competition. Cost reduction: A crowdsourcing organizer may acquire software at low cost by paying only a small fraction of the development cost as prize money, since the award may also include non-monetary recognition. Solution diversity: As teams will turn in different solutions for the same problem, the diversity in these solutions will be useful for fault-tolerant computing. Idea creation: One goal is to get new ideas from contestants, and these ideas may lead to new directions. Broadening participation: One goal is to recruit as many participants as possible to get the best solution or to spread relevant knowledge. Participant education: Organizers are interested in teaching participants new knowledge. One example is nonamesite.com, sponsored by DARPA to teach STEM (Science, Technology, Engineering, and Mathematics) subjects. Fund leveraging: The goal is to stimulate other organizations to sponsor similar projects to leverage funds. Marketing: Crowdsourcing projects can be used for brand recognition among participants. Ecosystem Architecture support A crowdsourcing support system needs to include 1) Software development tools: requirement tools, design tools, coding tools, compilers, debuggers, IDEs, performance analysis tools, testing tools, and maintenance tools. 2) Project management tools: ranking, reputation, and award systems for products and participants. 3) Social network tools: tools that allow participants to communicate and support each other. 4) Collaboration tools: for example, a blackboard platform where participants can see a common area and suggest ideas to improve the solutions presented in the common area. Social networks Social networks can provide communication, documentation, blogs, Twitter feeds, wikis, comments, feedback, and indexing. Organization Processes Any phase of software development can be crowdsourced, and that phase can be requirements (functional, user interface, performance), design (algorithm, architecture), coding (modules and components), testing (including security testing, user interface testing, user experience testing), maintenance, user experience, or any combination of these. 
Existing software development processes can be modified to include crowdsourcing: 1) Waterfall model; 2) Agile processes; 3) Model-driven approach; 4) Open-Sourced approach; 5) Software-as-a-Service (SaaS) approach where service components can be published, discovered, composed, customized, simulated, and tested; 6) formal methods: formal methods can be crowdsourced. The crowdsourcing can be competitive or non-competitive. In competitive crowdsourcing, only selected participants will win, and in highly competitive projects, many contestants will compete but few will win. In non-competitive manner, either single individuals will participate in crowdsourcing or multiple individuals can collaborate to create software. Products produced can be cross evaluated to ensure the consistency and quality of products and to identify talents, and the cross evaluation can be evaluated by crowdsourcing. Items developed by crowdsourcing can be evaluated by crowdsourcing to determine the work produced, and evaluation of evaluation can be crowdsourced to determine the quality of evaluation. Notable crowdsourcing processes include AppStori and Topcoder processes. Pre-selection of participants is important for quality software crowdsourcing. In competitive crowdsourcing, a low-ranked participant should not compete against a high-ranked participant. Platforms Software crowdsourcing platforms including Apple Inc.'s App Store, Topcoder, and uTest demonstrate the advantage of crowdsourcing in terms of software ecosystem expansion and product quality improvement. Apple’s App Store is an online iOS application market, where developers can directly deliver their creative designs and products to smartphone customers. These developers are motivated to contribute innovative designs for both reputation and payment by the micro-payment mechanism of the App Store. Within less than four years, Apple's App Store has become a huge mobile application ecosystem with 150,000 active publishers, and generated over 700,000 IOS applications. Around the App Store, there are many community-based, collaborative platforms for the smart-phone applications incubators. For example, AppStori introduces a crowd funding approach to build an online community for developing promising ideas about new iPhone applications. IdeaScale is another platform for software crowdsourcing. Another crowdsourcing example—Topcoder—creates a software contest model where programming tasks are posted as contests and the developer of the best solution wins the top prize. Following this model, Topcoder has established an online platform to support its ecosystem and gathered a virtual global workforce with more than 1 million registered members and nearly 50,000 active participants. All these Topcoder members compete against each other in software development tasks such as requirement analysis, algorithm design, coding, and testing. Sample processes The Topcoder Software Development Process consists of a number of different phases, and within each phase there can be different competition types: Architecture; Component Production; Application Assembly; Deployment Each step can be a crowdsourcing competition. BugFinders testing process: Engage BugFinders; Define Projects; Managed by BugFinders; Review Bugs; Get Bugs Fixed; and Release Software. Theoretical issues Game theory has been used in the analysis of various software crowdsourcing projects. Information theory can be a basis for metrics. 
Economic models can provide incentives for participation in crowdsourcing efforts. Reference architecture Crowdsourcing software development may follow different software engineering methodologies using different process models, techniques, and tools. It also has specific crowdsourcing processes involving unique activities such as bidding tasks, allocating experts, evaluating quality, and integrating software. To support outsourcing process and facilitate community collaboration, a platform is usually built to provide necessary resources and services. For example, Topcoder follows the traditional software development process with competition rules embedded, and AppStori allow flexible processes and crowd may be involved in almost all aspects of software development including funding, project concepts, design, coding, testing, and evaluation. The reference architecture hence defines umbrella activities and structure for crowd-based software development by unifying best practices and research achievements. In general, the reference architecture will address the following needs: Customizable to support typical process models; Configurable to compose different functional components; Scalable to facilitate problem solution of varied size. Particularly, crowdsourcing is used to develop large and complex software in a virtualized, decentralized manner. Cloud computing is a colloquial expression used to describe a variety of different types of computing concepts that involve a large number of computers connected through a real-time communication network (typically the Internet). Many advantages are to be found when moving crowdsourcing applications to the cloud: focus on project development rather than on the infrastructure that supports this process, foster the collaboration between geographically distributed teams, scale resources to the size of the projects, work in a virtualized, distributed, and collaborative environment. The demands on software crowdsourcing systems are ever evolving as new development philosophies and technologies gain favor. The reference architecture presented above is designed to encompass generality in many dimensions including, for example different software development methodologies, incentive schemes, and competitive/collaborative approaches. There are several clear research directions that could be investigated to enhance the architecture such as data analytics, service based delivery, and framework generalization. As systems grow understanding the use of the platform is an important consideration, data regarding users, projects, and interaction between the two can all be explored to investigate performance. These data may also provide helpful insights when developing tasks or selecting participants. Many of the components designed in the architecture are general purpose and could be delivered as hosted services. By hosting these services the barriers for entry would be significantly reduced. Finally, through deployments of this architecture there is potential to derive a general purpose framework that could be used for different software development crowdsourcing projects or more widely for other crowdsourcing applications. The creation of such frameworks has had transformative effects in other domains for instance the predominant use of BOINC in volunteer computing. Aspects and metrics Crowdsourcing in general is a multifaceted research topic. 
The use of crowdsourcing in software development is associated with a number of key tension points, or facets, which should be considered (see the figure below). At the same time, research can be conducted from the perspective of the three key players in crowdsourcing: the customer, the worker, and the platform. Task decomposition: Coordination and communication: Planning and scheduling: Quality assurance: A software crowdsourcing process can be described in a game process, where one party tries to minimize an objective function, yet the other party tries to maximize the same objective function as though both parties compete with each other in the game. For example, a specification team needs to produce quality specifications for the coding team to develop the code; the specification team will minimize the software bugs in the specification, while the coding team will identify as many bugs as possible in the specification before coding. The min-max process is important as it is a quality assurance mechanism and often a team needs to perform both. For example, the coding team needs to maximize the identification of bugs in the specification, but it also needs to minimize the number of bugs in the code it produces. Bugcrowd showed that participants will follow the prisoner's dilemma to identify bugs for security testing. Knowledge and Intellectual Property: Motivation and Remuneration: Levels There are the following levels of crowdsourcing: Level 1: single persons, well-defined modules, small size, limited time span (less than 2 months), quality products, current development processes such as the one by Topcoder and uTest. At this level, coders are ranked, websites contains online repository crowdsourcing materials, software can be ranked by participants, have communication tools such as wiki, blogs, comments, software development tools such as IDE, testing, compilers, simulation, modeling, and program analysis. Level 2: teams of people (< 10), well-defined systems, medium large, medium time span (3 to 4 months), adaptive development processes with intelligent feedback in a blackboard architecture. At this level, a crowdsourcing website may support adaptive development process and even concurrent development processes with intelligent feedback with the blackboard architecture; intelligent analysis of coders, software products, and comments; multi-phase software testing and evaluation; Big Data analytics, automated wrapping software services into SaaS (Software-as-a-Service), annotate with ontology, cross reference to DBpedia, and Wikipedia; automated analysis and classification of software services; ontology annotation and reasoning such as linking those service with compatible input/output. Level 3: teams of people (< 100 and > 10), well-defined system, large systems, long time span (< 2 years), automated cross verification and cross comparison among contributions. A crowdsourcing website at this level may contain automated matching of requirements to existing components including matching of specification, services, and tests; automated regression testing. Level 4: multinational collaboration of large and adaptive systems. A crowdsourcing website at this level may contain domain-oriented crowdsourcing with ontology, reasoning, and annotation; automated cross verification and test generation processes; automated configuration of crowdsourcing platform; and may restructure the platform as SaaS with tenant customization. Significant events Microsoft crowdsourcing Windows 8 development. 
In 2011, Microsoft started blogs to encourage discussions among developers and general public. In 2013, Microsoft also started crowdsourcing their mobile devices for Windows 8. In June 2013, Microsoft also announced crowdsourcing software testing by offering $100K for innovative techniques to identify security bugs, and $50K for a solution to the problem identified. In 2011 the United States Patent and Trademark Office launching a crowdsourcing challenge under the America COMPETES Act on the Topcoder platform to develop for image processing algorithms and software to recognize figure and part labels in patent documents with a prize pool of $50,000 USD. The contest resulted in 70 teams collectively making 1,797 code submissions. The solution of the contest winner achieved high accuracy in terms of recall and precision for the recognition of figure regions and part labels. Oracle uses crowdsourcing in their CRM projects. Conferences and workshops A software crowdsourcing workshop was held at Dagstuhl, Germany in September 2013. See also Collaborative software development model Commons-based peer production Crowdsourcing Open-source software Open-source software development References External links Crowdsourced WikiMedia development Finding open source projects on GitHub Further reading Karim R. Lakhani, David A. Garvin, Eric Logstein, "TopCoder: Developing Software through Crowdsourcing," Harvard Business School Case 610-032, 2010. Software Software development process
60203108
https://en.wikipedia.org/wiki/Maneuvering%20Characteristics%20Augmentation%20System
Maneuvering Characteristics Augmentation System
The Maneuvering Characteristics Augmentation System (MCAS) is a flight stabilizing program developed by Boeing which became notorious for its role in two fatal accidents of the 737 MAX, which killed all passengers and crew on both flights, 346 people in total. MCAS was first used on Boeing KC-46 Pegasus military air tanker to balance fuel loads, but the aircraft, which was based on the Boeing 767, allowed pilots to assume control of the aircraft. On the MAX, MCAS was intended to mimic flight behavior of the previous generation of the series, the Boeing 737 NG. During MAX flight tests, Boeing discovered that the position and larger size of the engines tended to push the nose up during certain maneuvers. Engineers decided to use MCAS to counter that tendency, since major structural redesign would have been prohibitively expensive and time-consuming. Boeing's goal was to have the MAX certified as another 737 version, which would appeal to airlines for the reduced cost of pilot training. The Federal Aviation Administration (FAA) approved Boeing's request to remove a description of MCAS from the aircraft manual, leaving pilots unaware of the system when the airplane entered service in 2017. After the Lion Air crash in 2018, Boeing and the FAA, still not revealing MCAS, referred pilots to a revised checklist procedure that must be performed in case of a malfunction. Boeing then received many requests for more information and revealed MCAS in another message, and that it can intervene without pilot input. According to Boeing, MCAS was supposed to compensate for an excessive nose up angle by adjusting horizontal stabilizer before the aircraft would potentially stall. Boeing denied that MCAS was an anti-stall system, and stressed it was intended to improve the handling of the aircraft. After the second crash, Ethiopian Airlines Flight 302 in 2019, Ethiopian authorities stated that the procedure did not enable the crew to prevent the accident, which occurred while a fix to MCAS was under development. Boeing admitted MCAS played a role in both accidents, when it acted on false data from a single angle of attack (AoA) sensor. In early 2020, the FAA, Transport Canada, and European Union Aviation Safety Agency (EASA) evaluated flight test results with MCAS disabled, and suggested that the MAX might not have needed MCAS at all. In late 2020, an FAA Airworthiness Directive approved design changes for each MAX aircraft, which would prevent MCAS activation unless both AoA sensors register similar readings, eliminate MCAS's ability to repeatedly activate, and allow pilots to override the system if necessary. The FAA began requiring all MAX pilots to undergo MCAS-related training in flight simulators by 2021. Prior to the 737 MAX In the 1960s, a basic pitch control system to avoid stalling was installed in the Boeing 707. A modern software-implemented MCAS was deployed on the Boeing KC-46 Air Force tanker. On the 737 MAX The (MCAS) flight control law was implemented on the 737 MAX to mitigate the aircraft's tendency to pitch up because of the aerodynamic effect of its larger, heavier, and more powerful CFM LEAP-1B engines and nacelles. The stated goal of MCAS, according to Boeing, was to provide consistent aircraft handling characteristics at elevated angles of attack in certain unusual flight conditions only and hence make the 737 MAX perform similarly to its immediate predecessor, the 737NG. 
This was necessary to meet Boeing's internal objective of minimizing training requirements for pilots already qualified on the 737NG. However, the MAX would have been stable even without MCAS, according to both the FAA and EASA. Role in accidents In flights ET 302 and JT 610, investigators determined that MCAS was triggered by falsely high angle of attack (AoA) inputs, as if the plane had pitched up excessively. On both flights, shortly after takeoff, MCAS repeatedly actuated the horizontal stabilizer trim motor to push down the airplane nose. Satellite data for the flights showed that the planes struggled to gain altitude. Pilots reported difficulty controlling the airplane and asked to return to the airport. The implementation of MCAS has been found to disrupt autopilot operations. On March 11, 2019, after China had grounded the aircraft, Boeing published some details of new system requirements for the MCAS software and for the cockpit displays, which it began implementing in the wake of the prior accident five months earlier: If the two AoA sensors disagree with the flaps retracted, MCAS will not activate and an indicator will alert the pilots. If MCAS is activated in non-normal conditions, it will only "provide one input for each elevated AoA event." Flight crew will be able to counteract MCAS by pulling back on the column. On March 27, Daniel Elwell, the acting administrator of the FAA, testified before the Senate Committee on Commerce, Science, and Transportation, saying that on January 21, "Boeing submitted a proposed MCAS software enhancement to the FAA for certification. ... the FAA has tested this enhancement to the 737 MAX flight control system in both the simulator and the aircraft. The testing, which was conducted by FAA flight test engineers and flight test pilots, included aerodynamic stall situations and recovery procedures." After a series of delays, the updated MCAS software was released to the FAA in May 2019. On May 16, Boeing announced that the completed software update was awaiting approval from the FAA. The flight software underwent 360 hours of testing on 207 flights. Boeing also updated existing crew procedures. On April 4, 2019, Boeing publicly acknowledged that MCAS played a role in both accidents. Purpose of MCAS and the stabilizer trim system The FAA and Boeing both refuted media reports describing MCAS as an anti-stall system, which Boeing asserted it is distinctly not. The aircraft had to perform well in a low-speed stall test. The JATR "considers that the MCAS and elevator feel shift (EFS) functions could be considered as stall identification systems or stall protection systems, depending on the natural (unaugmented) stall characteristics of the aircraft". The JATR said, "MCAS used the stabilizer to change the column force feel, not trim the aircraft. This is a case of using the control surface in a new way that the regulations never accounted for and should have required an issue paper for further analysis by the FAA. If the FAA technical staff had been fully aware of the details of the MCAS function, the JATR team believes the agency likely would have required an issue paper for using the stabilizer in a way that it had not previously been used; this [might have] identified the potential for the stabilizer to overpower the elevator." 
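The March 2019 requirements listed above describe MCAS activation in terms of a few conditions: sensor agreement with flaps retracted, a single input per elevated-AoA event, and the crew's ability to counteract the system through the control column. The sketch below restates only those publicly described conditions as schematic pseudocode; it is not Boeing's implementation or actual avionics code, and the disagreement threshold, names, and data layout are illustrative assumptions.

# Schematic restatement of the publicly described post-update MCAS conditions.
# Not avionics code: the threshold value and all names are illustrative assumptions.

AOA_DISAGREE_LIMIT_DEG = 5.5   # hypothetical disagreement threshold

def mcas_may_command_trim(aoa_left_deg, aoa_right_deg, flaps_retracted,
                          already_commanded_this_event):
    """Return True only if the conditions described in the 2019 update would
    allow a single automatic nose-down stabilizer input for this AoA event."""
    # 1. With flaps retracted, the two AoA sensors must agree; a disagreement
    #    inhibits MCAS and instead raises a cockpit indication (not modeled here).
    if flaps_retracted and abs(aoa_left_deg - aoa_right_deg) > AOA_DISAGREE_LIMIT_DEG:
        return False
    # 2. Only one input is provided for each elevated-AoA event, removing the
    #    repeated activations seen in the accident flights.
    if already_commanded_this_event:
        return False
    # 3. Any command is limited so the crew can counteract it by pulling back
    #    on the control column (the authority limit itself is not modeled here).
    return True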
Description Background The Maneuvering Characteristics Augmentation System (MCAS) is a flight control law built into the Boeing 737 MAX's flight control computer, designed to help the aircraft emulate the handling characteristics of the earlier Boeing 737 Next Generation. According to an international Civil Aviation Authorities team review (JATR) commissioned by the FAA, MCAS may be a stall identification or protection system, depending on the natural (unaugmented) stall characteristics of the aircraft. Boeing considered MCAS part of the flight control system, and elected to not describe it in the flight manual or in training materials, based on the fundamental design philosophy of retaining commonality with the 737NG. Minimizing the functional differences between the Boeing 737 MAX and Next Generation aircraft variants allowed both variants to share the same type rating. Thus, airlines can save money by employing and training one pool of pilots to fly both variants of the Boeing 737 interchangeably. When activated, MCAS directly engages the horizontal stabilizer, thus is distinct from an anti-stall device, such as stick pusher, which physically moves the pilot's control column forward and engages the airplane's elevators when the airplane is approaching a stall. Boeing's former CEO Dennis Muilenburg said "[MCAS] has been reported or described as an anti-stall system, which it is not. It's a system that's designed to provide handling qualities for the pilot that meet pilot preferences." The 737 MAX's larger CFM LEAP-1B engines are fitted farther forward and higher up than in previous models. The aerodynamic effect of its nacelles contributes to the aircraft's tendency to pitch up at high angles of attack (AOA). The MCAS is intended to compensate in such cases, modeling the pitching behavior of previous models, and meet a certain certification requirement, in order to enhance handling characteristics and thus minimizing the need for significant pilot retraining. The software code for the MCAS function and the computer for executing the software are built to Boeing's specifications by Collins Aerospace, formerly Rockwell Collins. As an automated corrective measure, the MCAS was given full authority to bring the aircraft nose down, and could not be overridden by pilot resistance against the control wheel as on previous versions of the 737. Following the Lion Air accident, Boeing issued an Operations Manual Bulletin (OMB) on November 6, 2019 to outline the many indications and effects resulting from erroneous AOA data and provided instructions to turn off the motorized trim system for the remainder of the flight, and trim manually instead. Until Boeing supplemented the manuals and training, pilots were unaware of the existence of MCAS due to its omission from the crew manual and no coverage in training. Boeing first publicly named and revealed the existence of MCAS on the 737 MAX in a message to airline operators and other aviation interests on November 10, 2018, twelve days after the Lion Air crash. Safety engineering and human factors As with any other equipment on board an aircraft, the FAA approves a functional "design assurance level" corresponding to the consequences of a failure, using the SAE International standards ARP4754 and ARP4761. MCAS was designated a "hazardous failure" system. This classification corresponds to failures causing "a large reduction in safety margins" or "serious or fatal injury to a relatively small number of the occupants", but nothing "catastrophic". 
The MCAS was designed with the assumption, approved by FAA, that pilots would react to an unexpected activation within three seconds. Technology readiness The MCAS design parameters originally envisioned automated corrective actions to be taken in cases of high AoA and g-forces beyond normal flight conditions. Test pilots routinely push aircraft to such extremes, as the FAA requires airplanes to perform as expected. Before the MCAS, test pilot Ray Craig determined the plane did not fly smoothly, in part due to the larger engines. Craig would have preferred an aerodynamic solution, but Boeing decided to implement a control law in software. According to a news report in the Wall Street Journal, engineers who had worked on the KC-46A Pegasus tanker, which includes an MCAS function, suggested MCAS to the design team. With the MCAS implemented, new test pilot Ed Wilson said the "MAX wasn't handling well when nearing stalls at low speeds" and recommended MCAS to apply across a broader range of flight conditions. This required the MCAS to function under normal g-forces and, at stalling speeds, deflect the vertical trim more rapidly and to a greater extent—but now it reads a single AoA sensor, creating a single point of failure that allowed false data to trigger MCAS to pitch the nose downward and force the aircraft into a dive. "Inadvertently, the door was now opened to serious system misbehavior during the busy and stressful moments right after takeoff", said Jenkins of The Wall Street Journal. The FAA did not conduct a safety analysis on the changes. It had already approved the previous version of MCAS, and the agency's rules did not require it to take a second look because the changes did not affect how the plane operated in extreme situations. The Joint Authorities Technical Review found the technology unprecedented: "If the FAA technical staff had been fully aware of the details of the MCAS function, the JATR team believes the agency likely would have required an issue paper for using the stabilizer in a way that it had not previously been used. MCAS used the stabilizer to change the column force feel, not trim the aircraft. This is a case of using the control surface in a new way that the regulations never accounted for and should have required an issue paper for further analysis by the FAA. If an issue paper had been required, the JATR team believes it likely would have identified the potential for the stabilizer to overpower the elevator." In November 2019, Jim Marko, a manager of aircraft integration and safety assessment at Transport Canada aviation regulator's National Aircraft Certification Branch questioned the readiness of MCAS. Because new problems kept emerging, he suggested to his peers at FAA, ANAC and EASA to consider the safety benefits of removing MCAS from the MAX. Scrutiny The MCAS came under scrutiny following the fatal crashes of Lion Air Flight 610 and Ethiopian Airlines Flight 302 soon after takeoff. The Boeing 737 MAX global fleet was grounded by all airlines and operators, and a number of functional issues were raised. The MCAS deflects the horizontal stabilizer four times farther than was stated in the initial safety analysis document. Due to the amount of trim the system applies to the horizontal stabilizer, aerodynamic forces resist pilot control effort to raise the nose. As long as the faulty AOA readings persist, a human pilot "can quickly become exhausted trying to pull the column back". 
In addition, switches for the horizontal stabilizer trim assist now serve a shared purpose of turning off the MCAS. In simulator sessions, pilots were stunned by the substantial effort needed to manually crank the trim wheel out of its nose down setting. Boeing CEO Dennis Muilenburg has stated that there was "no surprise, or gap, or unknown here or something that somehow slipped through a certification process." On April 29, 2019 he stated the design of the aircraft was not flawed and reiterated that it was designed per Boeing's standards. In a May 29 interview with CBS, Boeing admitted that it had botched the software implementation and lamented the poor communications. On September 26, the National Transportation Safety Board criticized Boeing's inadequate testing of the 737 MAX, and pointed out that Boeing made erroneous assumptions on pilots' response to alerts in 737 MAX, triggered by activation of MCAS due to a faulty signal from an angle-of-attack sensor. The Joint Authorities Technical Review (JATR), a team commissioned by the FAA for 737 MAX investigation, concluded that FAA failed to properly review MCAS. Boeing failed to provide adequate and updated technical information regarding the MCAS system to FAA during Boeing 737 Max certification process, and had not carried out a thorough verification by stress-testing of the MCAS system. On October 18, Boeing turned over a discussion from 2016 between two employees which revealed prior issues with the MCAS system. Boeing's own internal design guidelines related to the 737 MAX's development stated that the system should "not have any objectionable interaction with the piloting of the airplane" and "not interfere with dive recovery". The operation of MCAS violated those. National Transportation Safety Board On September 26, 2019, the National Transportation Safety Board (NTSB) released the results of its review of potential lapses in the design and approval of the 737 MAX. The NTSB report concludes that assumptions "that Boeing used in its functional hazard assessment of uncommanded MCAS function for the 737 MAX did not adequately consider and account for the impact that multiple flight deck alerts and indications could have on pilots' responses to the hazard". When Boeing induced a stabilizer trim input that simulated the stabilizer moving consistent with the MCAS function, the specific failure modes that could lead to unintended MCAS activation (such as an erroneous high AOA input to the MCAS) were not simulated as part of these functional hazard assessment validation tests. As a result, additional flight deck effects (such as IAS DISAGREE and ALT DISAGREE alerts and stick shaker activation) resulting from the same underlying failure (for example, erroneous AOA) were not simulated and were not in the stabilizer trim safety assessment report reviewed by the NTSB." The NTSB questioned the long-held industry and FAA practice of assuming the nearly instantaneous responses of highly trained test pilots as opposed to pilots of all levels of experience to verify human factors in aircraft safety. The NTSB expressed concerns that the process used to evaluate the original design needs improvement because that process is still in use to certify current and future aircraft and system designs. The FAA could, for example, randomly sample pools from the worldwide pilot community to obtain a more representative assessment of cockpit situations. Supporting systems The updates proposed by Boeing focus mostly on MCAS software. 
In particular, there have been no public statements regarding reverting the functionality of the stabilizer trim cutout switches to pre-MAX configuration. A veteran software engineer and experienced pilot suggested that software changes may not be enough to counter the 737 MAX's engine placement. The Seattle Times noted that while the new software fix Boeing proposed "will likely prevent this situation recurring, if the preliminary investigation confirms that the Ethiopian pilots did cut off the automatic flight-control system, this is still a nightmarish outcome for Boeing and the FAA. It would suggest the emergency procedure laid out by Boeing and passed along by the FAA after the Lion Air crash is wholly inadequate and failed the Ethiopian flight crew." Boeing and the FAA decided that the AoA display and an AoA disagree light, which signals if the sensors give different readings, were not critical features for safe operation. Boeing charged extra for the addition of the AoA indicator to the primary display. In November 2017, Boeing engineers discovered that the standard AoA disagree light cannot independently function without the optional AoA indicator software, a problem affecting 80% of the global fleet which had not ordered the option. The software remedy was scheduled to coincide with the roll out of the elongated 737 MAX 10 in 2020, only to be accelerated by the Lion Air accident. Furthermore, the problem had not been disclosed to the FAA until 13 months after the fact. Although it is unclear whether the indicator could have changed the outcome for the ill-fated flights, American Airlines said the disagree indicator provided the assurance in continued operations of the airplane. "As it turned out, that wasn't true." Runaway stabilizer and manual trim In February 2016, the EASA certified the MAX with the expectation that pilot procedures and training would clearly explain unusual situations in which the seldom used manual trim wheel would be required to trim the plane, i.e. adjust the angle of the nose; however, the original flight manual did not mention those situations. The EASA certification document referred to simulations whereby the electric thumb switches were ineffective to properly trim the MAX under certain conditions. The EASA document said that after flight testing, because the thumb switches could not always control trim on their own, the FAA was concerned by whether the 737 MAX system complied with regulations. The American Airlines flight manual contains a similar notice regarding the thumb switches but does not specify conditions where the manual wheel may be needed. Boeing's CEO Muilenburg, when asked about the non-disclosure of MCAS, cited the "runaway stabilizer trim" procedure as part of the training manual. He added that Boeing's bulletin pointed to that existing flight procedure. Boeing views the "runaway stabilizer trim" checklist as a memory item for pilots. Mike Sinnett, vice president and general manager for the Boeing New Mid-Market Airplane (NMA) since July 2019, repeatedly described the procedure as a "memory item". However, some airlines view it as an item for the quick reference card. The FAA issued a recommendation about memory items in an Advisory Circular, Standard Operating Procedures and Pilot Monitoring Duties for Flight Deck Crewmembers: "Memory items should be avoided whenever possible. 
If the procedure must include memory items, they should be clearly identified, emphasized in training, less than three items, and should not contain conditional decision steps." In November 2018, Boeing told airlines that MCAS could not be overcome by pulling back on the control column to stop a runaway trim as on previous generation 737s. Nevertheless, confusion continued: the safety committee of a major U.S. airline misled its pilots by telling them that the MCAS could be overcome by "applying opposite control-column input to activate the column cutout switches". Former pilot and CBS aviation & safety expert Chesley Sullenberger testified, "The logic was that if MCAS activated, it had to be because it was needed, and pulling back on the control wheel shouldn't stop it." In October, Sullenberger wrote, "These emergencies did not present as a classic runaway stabilizer problem, but initially as ambiguous unreliable airspeed and altitude situations, masking MCAS." In a legal complaint against Boeing, the Southwest Airlines Pilot Association states: An MCAS failure is not like a runaway stabilizer. A runaway stabilizer has continuous un-commanded movement of the tail, whereas MCAS is not continuous and pilots (theoretically) can counter the nose-down movement, after which MCAS would move the aircraft tail down again. Moreover, unlike a runaway stabilizer, MCAS disables the control column response that 737 pilots have grown accustomed to and relied upon in earlier generations of 737 aircraft. Stabilizer cutoff switches re-wiring In May 2019, The Seattle Times reported that the two stabilizer cutoff switches, located on the center console, operate differently on the MAX than on the earlier 737 NG. On previous aircraft, one cutoff switch deactivates the thumb buttons on the control yoke that pilots use to move the horizontal stabilizer; the other cutoff switch disables automatic control of the horizontal stabilizer by the autopilot or MCAS. On the MAX, both switches are wired in parallel and perform the same function: they cut off all electric power to the stabilizer, both from the yoke buttons and from an automatic system. Thus, on previous aircraft it is possible to disable automatic control of the stabilizer yet still employ electric power assist by operating the yoke switches. On the MAX, with all power to the stabilizer cut, pilots have no choice but to use the mechanical trim wheel in the center console. However, as pilots pull on the 737 controls to raise the nose of the aircraft, aerodynamic forces on the elevator create an opposing force, effectively paralyzing the jackscrew mechanism that moves the stabilizer. It becomes very difficult for pilots to hand-crank the trim wheel. The problem was encountered on earlier 737 versions, and a "roller coaster" emergency technique for handling the flight condition was documented in 1982 for the 737-200, but did not appear in training documentation for later versions (including the MAX). Manual trim stiffness In the early 1980s a problem was found with the 737-200 model. When the elevator operated to raise or lower the nose, it set up a strong force on the trim jackscrew which opposed any corrective force from the control systems. When attempting to correct an unwanted deflection using the manual trim wheel, exerting enough hand force to overcome the force exerted by the elevator became increasingly difficult as speed and deflection increased, and the jackscrew effectively jammed in place. A workaround was developed called the "roller coaster" technique. 
Counter-intuitively, to correct an excessive deflection causing a dive the pilot first pushes the nose down further, before easing back to gently raise the nose again. During this easing back period, the elevator deflection reduces or even reverses, its force on the jackscrew does likewise and the manual trim eases up. The workaround was included in the pilot's emergency procedures and in the training schedule. However while the 737 MAX has a similar jackscrew mechanism the "roller coaster" technique has been dropped from the pilot information. During the events leading to the two MAX crashes, the stiffness of the manual trim wheel repeatedly prevented manual trim adjustment to correct the MCAS-induced nose-down pitching. The issue has been brought to the notice of the DoJ criminal inquiry into the 737 MAX crashes. In simulator tests of Ethiopian Airlines Flight 302 flight scenario, the trim wheel was "impossible" to move when one of the pilots would instinctively pull up from the nosedives. It takes 15 turns to manually trim the aircraft one degree, and up to 40 turns to bring the trim back to neutral from the nose down position caused by MCAS. Horizontal stabilizer actuator The horizontal stabilizer is fitted with a conventional elevator for flight control. However, it is itself all-moving about a single pivot and can be trimmed to adjust its angle. The trim is actuated via a jackscrew mechanism. Slippage concern Sylvain Alarie and Gilles Primeau, experts on the horizontal stabilizers, observed anomalies in the data from the aircraft data recorders: a progressive shift of 0.2 degrees of the horizontal stabilizer, before the crash. "It may not seem like much, but it is an order of magnitude higher than what is normally allowed when designing systems like these", says Gilles Primeau. They say that the movements are easily observable, and disallowed according to Regulation 395A. These anomalies raise fundamental questions about this jack screw, which controls the horizontal stabilizer since the beginning of the 737 models, first certified in 1967. These slips are particularly visible on flight ET302: "While there is no MCAS command, and no control of the pilots, we see a movement of the jack screw which controls the horizontal stabilizer, we see a slip. And at the very end of the flight, the jack screw starts to slide again with an increase in the speed of the plane and its dive," says Alarie. Since its original design, the 737 has become 61% heavier, 24% longer, and 40% wider, and its engines twice as powerful. These experts are concerned that the loads on the jack screw have potentially increased since the creation of the 737. By regulations, the controls must be designed for 125% of the foreseeable loads. These experts have raised concerns about the motors possibly overheating in April 2019. MCAS circumvention for ferry flights During the groundings, special flights to reposition MAX aircraft to storage locations, as per 14 CFR § 21.197, flew at lower altitude and with flaps extended to circumvent MCAS activation, rather than using the recovery procedure after the fact. Such flights required a certain pilot qualification as well as permission from corresponding regulators, and with no other cabin crew or passengers. Angle of Attack (AoA) As per Boeing technical description: "the Angle of Attack (AoA) is an aerodynamic parameter that is key to understanding the limits of airplane performance. 
Recent accidents and incidents have resulted in new flight crew training programs, which in turn have raised interest in AoA in commercial aviation. Awareness of AOA is vitally important as the airplane nears stall." Chesley Sullenberger said AoA indicators might have helped in these two crashes. "It is ironic that most modern aircraft measure (angle of attack) and that information is often used in many aircraft systems, but it is not displayed to pilots. Instead, pilots must infer (angle of attack) from other parameters, deducing it indirectly." AoA sensors Though there are two sensors on the MAX only one of them is used at a time to trigger MCAS activation on the 737 MAX. Any fault in this sensor, perhaps due to physical damage, creates a single point failure: the flight control system lacks any basis for rejecting its input as faulty information. Reports of a single point of failure were not always acknowledged by Boeing. Addressing American Airlines pilots, Boeing vice-president Mike Sinnett contradicted reports that the MCAS had a single-point failure, because the pilots themselves are the backup. Reporter Useem said in The Atlantic it was "showing both a misunderstanding of the term and a sharp break from Boeing's long-standing practice of having multiple backups for every flight system". Problems with the AoA sensor had been reported in over 200 incident reports submitted to the FAA; however, Boeing did not flight test a scenario in which it malfunctioned. The sensors themselves are under scrutiny. Sensors on the Lion air aircraft were supplied by United Technologies' Rosemount Aerospace. In September 2019, the EASA said it prefers triple-redundant AoA sensors rather than the dual redundancy in Boeing's proposed upgrade to the MAX. Installation of a third sensor could be expensive and take a long time. The change, if mandated, could be extended to thousands of older model 737s in service around the world. A former professor at Embry-Riddle Aeronautical University, Andrew Kornecki, who is an expert in redundancy systems, said operating with one or two sensors "would be fine if all the pilots were sufficiently trained in how to assess and handle the plane in the event of a problem". But, he would much prefer building the plane with three sensors, as Airbus does. AoA Disagree alert In November 2017, after several months of MAX deliveries, Boeing discovered that the AoA Disagree message, which is indicative of potential sensor mismatch on the primary flight display, was unintentionally disabled. Clint Balog, a professor at Embry-Riddle Aeronautical University, said after the Lion Air crash: "In retrospect, clearly it would have been wise to include the warning as standard equipment and fully inform and train operators on MCAS". According to Bjorn Fehrm, Aeronautical and Economic Analyst at Leeham News and Analysis, "A major contributor to the ultimate loss of JT610 is the missing AoA DISAGREE display on the pilots' displays." The software depended on the presence of the visual indicator software, a paid option that was not selected by most airlines. For example, Air Canada, American Airlines and Westjet had purchased the disagree alert, while Air Canada and American Airlines also purchased, in addition, the AoA value indicator, and Lion Air had neither. Boeing had determined that the defect was not critical to aircraft safety or operation, and an internal safety review board (SRB) corroborated Boeing's prior assessment and its initial plan to update the aircraft in 2020. 
Boeing did not disclose the defect to the FAA until November 2018, in the wake of the Lion Air crash. Consequently, Southwest announced to pilots that its entire fleet of MAX 8 aircraft would receive the optional upgrades. In March 2019, after the second accident of Ethiopian Airlines Flight 302, a Boeing representative told Inc. magazine, "Customers have been informed that AoA Disagree alert will become a standard feature on the 737 MAX. It can be retrofitted on previously delivered airplanes." On May 5, 2019, The Wall Street Journal reported that Boeing had known of existing problems with the flight control system a year before the Lion Air accident. Boeing maintained that "Neither the angle of attack indicator nor the AoA Disagree alert are necessary for the safe operation of the airplane." Boeing recognized that the defective software was not implemented to its specifications as a "standard, standalone feature." Boeing stated, "...MAX production aircraft will have an activated and operable AoA Disagree alert and an optional angle of attack indicator. All customers with previously delivered MAX airplanes will have the ability to activate the AoA Disagree alert." Boeing CEO Muilenburg said the company's communication about the alert "was not consistent. And that's unacceptable." Visual AoA indicator Boeing published an article in Aero magazine about AoA systems, "Operational use of Angle of Attack on modern commercial jet planes". Boeing announced a change in policy in a Frequently Asked Questions (FAQ) document about the MAX corrective work: "With the software update, customers are not charged for the AoA Disagree feature or their selection of the AoA indicator option." In 1996, the NTSB issued Safety Recommendation A-96-094. TO THE FEDERAL AVIATION ADMINISTRATION (FAA): Require that all transport-category aircraft present pilots with angle-of-attack info in a visual format, and that all air carriers train their pilots to use the info to obtain maximum possible airplane climb performance. Regarding another accident in 1997, the NTSB stated that "a display of angle of attack on the flight deck would have maintained the flightcrew's awareness of the stall condition and it would have provided direct indication of the pitch attitudes required for recovery throughout the attempted stall recovery sequence." The NTSB also believed that the accident may have been prevented if a direct indication of AoA had been presented to the flightcrew (NTSB, 1997). Flight computer architecture In early April 2019, Boeing reported a problem with software affecting flaps and other flight-control hardware, unrelated to MCAS; the problem was classified as critical to flight safety, and the FAA ordered Boeing to fix it. In October 2019, the EASA suggested conducting more testing on the proposed revisions to the flight-control computers, due to its concerns about portions of the proposed fixes to MCAS. The necessary changes to improve redundancy between the two flight control computers have proved more complex and time-consuming than the fixes for the original MCAS issue, delaying any re-introduction to service beyond the date originally envisaged. In January 2020, new software issues were discovered, affecting monitoring of the flight computer start-up process and verification of readiness for flight. In April 2020, Boeing identified new risks where the trim system might unintentionally command nose down during flight or prematurely disconnect the autopilot. 
Microprocessor stress testing The MAX systems are integrated in the "e-cab" test flight deck, a simulator built for developing the MAX. In June 2019, "in a special Boeing simulator that is designed for engineering reviews," FAA pilots performed a stress-testing scenario (an abnormal condition identified through FMEA after the MCAS update was implemented) for evaluating the effect of a fault in a microprocessor: as expected from the scenario, the horizontal stabilizer pointed the nose downward. Although the test pilot ultimately recovered control, the system was slow to respond to the proper runaway stabilizer checklist steps. Boeing initially classified this as a "major" hazard, and the FAA upgraded it to a much more severe "catastrophic" rating. Boeing stated that the issue could be fixed in software, but that the software change would not be ready for evaluation until at least September 2019. EASA director Patrick Ky said that retrofitting additional hardware was an option to be considered. The test scenario simulated an event toggling five bits in the flight control computer. The bits represent status flags such as whether MCAS is active, or whether the tail trim motor is energized. Engineers were able to simulate single event upsets and artificially induce MCAS activation by manipulating these signals. Such a fault occurs when memory bits change from 0 to 1 or vice versa, which can be caused by cosmic rays striking the microprocessor. The failure scenario was known before the MAX entered service in 2017: it had been assessed in a safety analysis when the plane was certified, and Boeing had concluded that pilots could perform a procedure to shut off the motor driving the stabilizer to overcome the nose-down movement. The scenario also affects 737NG aircraft, though it presents less risk than on the MAX; on the NG, moving the yoke counters any uncommanded stabilizer input, but this function is bypassed on the MAX to avoid negating the purpose of MCAS. Boeing also said that it agreed with the additional requirements that the FAA required it to fulfill, added that it was working toward resolving the safety risk, and said it would not offer the MAX for certification until all requirements had been satisfied. Early news reports were inaccurate in attributing the problem to an 80286 microprocessor overwhelmed with data, though as of April 2020 the concern remained that the MCAS software was overloading the 737 MAX's computers. Computer redundancy Before the post-grounding redesign, the two flight control computers of the Boeing 737 never cross-checked each other's operations; i.e., each was a single non-redundant channel. This lack of robustness had existed since the early implementation and persisted for decades. The updated flight control system will use both flight control computers and compare their outputs. This switch to a fail-safe two-channel redundant system, with each computer using an independent set of sensors, is a radical change from the architecture used on 737s since its introduction on the older model 737-300 in the 1980s. Up to and including the MAX in its pre-grounding version, the system alternated between computers after each flight. The two-computer architecture allowed switching in flight if the operating computer failed, thus increasing availability. In the revised architecture, Boeing required the two computers to monitor each other so that each one can vet the other. 
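To make the revised two-channel architecture more concrete, the following C sketch shows one way a per-frame cross-check between two flight control computers could latch a disagreement and inhibit automatic trim. It is a minimal illustration only, not Boeing's implementation: the function and type names, the 0.5-degree tolerance and the persistence count are assumptions.

/*
 * Illustrative two-channel cross-check (hypothetical names and values).
 * Each flight control computer computes a stabilizer trim command and
 * compares it against the other channel; a sustained disagreement
 * latches a fault and inhibits automatic trim for the rest of the flight.
 */
#include <math.h>
#include <stdbool.h>

#define TRIM_DISAGREE_LIMIT_DEG  0.5   /* assumed tolerance */
#define TRIM_DISAGREE_TICKS      25    /* assumed persistence, in computation frames */

typedef struct {
    int  disagree_ticks;          /* consecutive frames of disagreement */
    bool auto_trim_inhibited;     /* latched for the remainder of the flight */
} cross_check_state;

/* Called every frame with this channel's and the other channel's commanded
 * stabilizer trim (degrees). Returns the command actually allowed to reach
 * the trim motor (0.0 once automatic trim has been inhibited). */
double cross_check_trim(cross_check_state *s,
                        double own_cmd_deg, double other_cmd_deg)
{
    if (fabs(own_cmd_deg - other_cmd_deg) > TRIM_DISAGREE_LIMIT_DEG)
        s->disagree_ticks++;
    else
        s->disagree_ticks = 0;

    if (s->disagree_ticks >= TRIM_DISAGREE_TICKS)
        s->auto_trim_inhibited = true;

    return s->auto_trim_inhibited ? 0.0 : own_cmd_deg;
}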
Trim system malfunction indicator In January 2020, during flight testing, Boeing discovered a problem with an indicator light; the defect was traced to the "redesign of the two flight computers that control the 737 MAX to make them more resilient to failure". The indicator, which signals a problem with the trim system, can remain on longer than intended by design. Updates for return to service In November 2020, an Airworthiness Directive required corrective actions to the airplane's flight control laws (embodied in the Speed Trim System software): The new flight control laws now require inputs from both AOA sensors in order to activate MCAS. They also compare the inputs from the two sensors, and if those inputs differ significantly (greater than 5.5 degrees for a specified period of time), will disable the Speed Trim System (STS), which includes MCAS, for the remainder of the flight and provide a corresponding indication of that deactivation on the flight deck. The new flight control laws now permit only one activation of MCAS per sensed high-AOA event, and limit the magnitude of any MCAS command to move the horizontal stabilizer such that the resulting position of the stabilizer will preserve the flightcrew's ability to control the airplane's pitch by using only the control column. This means the pilot will have sufficient control authority without the need to make electric or manual stabilizer trim inputs. The new flight control laws also include Flight Control Computer (FCC) integrity monitoring of each FCC's performance and cross-FCC monitoring, which detects and stops erroneous FCC-generated stabilizer trim commands (including MCAS) References External links Further reading Design of a pitch stability augmentation system. Boeing Aircraft controls Engineering failures Software bugs
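The airworthiness directive quoted above lends itself to a schematic restatement in code. The C sketch below is an interpretation for illustration only: the 5.5-degree threshold is taken from the directive, while the persistence count, the sampling scheme and all names are hypothetical and do not describe Boeing's actual software.

#include <math.h>
#include <stdbool.h>

#define AOA_DISAGREE_LIMIT_DEG 5.5   /* threshold quoted in the airworthiness directive */
#define DISAGREE_PERSIST_TICKS 50    /* "specified period of time": value assumed */

typedef struct {
    bool sts_disabled;            /* Speed Trim System (incl. MCAS) off for the flight */
    bool mcas_fired_this_event;   /* enforces one activation per high-AOA event */
    int  disagree_ticks;
} mcas_law_state;

/* Returns true if an MCAS trim increment may be commanded this frame. */
bool mcas_activation_allowed(mcas_law_state *s,
                             double aoa_left_deg, double aoa_right_deg,
                             bool high_aoa_event)
{
    /* Compare the two AOA sources; a sustained disagreement disables STS
     * for the remainder of the flight and is annunciated on the flight deck. */
    if (fabs(aoa_left_deg - aoa_right_deg) > AOA_DISAGREE_LIMIT_DEG) {
        if (++s->disagree_ticks >= DISAGREE_PERSIST_TICKS)
            s->sts_disabled = true;
    } else {
        s->disagree_ticks = 0;
    }

    if (s->sts_disabled)
        return false;

    /* Only one MCAS activation per sensed high-AOA event. */
    if (!high_aoa_event) {
        s->mcas_fired_this_event = false;   /* event over; re-arm */
        return false;
    }
    if (s->mcas_fired_this_event)
        return false;

    s->mcas_fired_this_event = true;
    return true;   /* magnitude of the resulting command is limited elsewhere */
}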
2806833
https://en.wikipedia.org/wiki/Michael%20Berlyn
Michael Berlyn
Michael Berlyn (born 1949) is an American video game designer and writer. He is best known as an implementer at Infocom, part of the text adventure game design team. Brainwave Creations was a small game programming company started by Michael Berlyn. The company was founded in the mid-1980s, and is probably best known for co-creating Tass Times in Tonetown along with Interplay's Rebecca Heineman. Berlyn joined Marc Blank in founding the game company Eidetic, which later became Bend Studio. In the midst of working on the company's second game, Syphon Filter, Berlyn left the video game industry. He later explained, "I did not like what the game business had become, the people who were driving it, or the nature of the product. I left before it was done and said, 'Do not put my name on the product.' I walked away from my own company. When you tell me you want to put a monk or a nun in my game and have them standing there holding guns so I can justify having the players shoot them, I think that crosses the boundaries of good taste. It doesn't offend ME, but it's got to be in bad taste, and you have to know that." In 1998, Berlyn started Cascade Mountain Publishing, whose goals were to publish ebooks and interactive fiction. Cascade Mountain Publishing went out of business in 2000. After his business ventures collapsed, Berlyn return to the video game industry, with a focus on casual games. Berlyn created a "light-jazz" band called Hot Mustard, made up entirely of his own music and performances. Berlyn was diagnosed with cancer in September 2014, after which he underwent chemotherapy and radiation treatment until at least mid-2015. Games Oo-Topos, 1981, Sentient Software and Polarware/Penguin Software Cyborg, 1981, Sentient Software Gold Rush, 1982, Sentient Software Congo, 1982, Sentient Software Suspended, 1983, Infocom Infidel, 1983, Infocom Cutthroats, 1984, Infocom Fooblitzky, 1985, co-designer, Infocom Tass Times in Tonetown, 1986, Activision Dr. Dumont's Wild P.A.R.T.I., 1988, First Row Software Publishing Keef the Thief, 1989, Electronic Arts Altered Destiny, 1990, Accolade Les Manley in: Search for the King, 1990, Accolade Snoopy's Game Club, 1992, Accolade (with former Intellivision programmer Gene Smith) Bubsy in Claws Encounters of the Furred Kind, 1993, Accolade Bubsy 3D, 1996, Accolade Zork: The Undiscovered Underground, 1997, Activision (with Marc Blank) Dr. Dumont's Wild P.A.R.T.I., 1999, Cascade Mountain Publishing Syphon Filter, 1999, contributor, producer, 989 Studios Zen Ball, Quick Click Software The Art of Murder (with Muffy Berlyn), iOS, Windows, OS X, Flexible Tales Grok the Monkey (aka Carnival of Death) (with Muffy Berlyn), iOS, Windows, Flexible Tales A Taste for Murder (with Muffy Berlyn), iOS, Windows, Flexible Tales Reconstructing Remy (an interactive novel with Muffy Berlyn), iOS, Windows, Flexible Tales Ogg!, iOS, OS X, Flexible Tales Novels The Integrated Man, Bantam Books, (1980) Crystal Phoenix, Bantam Books, (1980) Blight as Mark Sonders, Ace Books, (1981) Eternal Enemy, Wm. Morrow, (1990) References External links Interview with Mike Berlyn Hot Mustard (virtual jazz band by Berlyn) Keeping Warm - A Berlyn Jazz creation (as Hot Mustard) 1949 births American video game designers Infocom Interactive fiction writers Living people
3755562
https://en.wikipedia.org/wiki/Tagged%20pointer
Tagged pointer
In computer science, a tagged pointer is a pointer (concretely a memory address) with additional data associated with it, such as an indirection bit or reference count. This additional data is often "folded" into the pointer, meaning stored inline in the data representing the address, taking advantage of certain properties of memory addressing. The name comes from "tagged architecture" systems, which reserved bits at the hardware level to indicate the significance of each word; the additional data is called a "tag" or "tags", though strictly speaking "tag" refers to data specifying a type, not other data; however, the usage "tagged pointer" is ubiquitous. Folding tags into the pointer There are various techniques for folding tags into a pointer. Most architectures are byte-addressable (the smallest addressable unit is a byte), but certain types of data will often be aligned to the size of the data, often a word or multiple thereof. This discrepancy leaves a few of the least significant bits of the pointer unused, which can be used for tags – most often as a bit field (each bit a separate tag) – as long as code that uses the pointer masks out these bits before accessing memory. E.g., on a 32-bit architecture (for both addresses and word size), a word is 32 bits = 4 bytes, so word-aligned addresses are always a multiple of 4, hence end in 00, leaving the last 2 bits available; while on a 64-bit architecture, a word is 64 bits = 8 bytes, so word-aligned addresses end in 000, leaving the last 3 bits available. In cases where data is aligned at a multiple of word size, further bits are available. In case of word-addressable architectures, word-aligned data does not leave any bits available, as there is no discrepancy between alignment and addressing, but data aligned at a multiple of word size does. Conversely, in some operating systems, virtual addresses are narrower than the overall architecture width, which leaves the most significant bits available for tags; this can be combined with the previous technique in case of aligned addresses. This is particularly the case on 64-bit architectures, as 64 bits of address space are far above the data requirements of all but the largest applications, and thus many practical 64-bit processors have narrower addresses. Note that the virtual address width may be narrower than the physical address width, which in turn may be narrower than the architecture width; for tagging of pointers in user space, the virtual address space provided by the operating system (in turn provided by the memory management unit) is the relevant width. In fact, some processors specifically forbid use of such tagged pointers at the processor level, notably x86-64, which requires the use of canonical form addresses by the operating system, with most significant bits all 0s or all 1s. Lastly, the virtual memory system in most modern operating systems reserves a block of logical memory around address 0 as unusable. This means that, for example, a pointer to 0 is never a valid pointer and can be used as a special null pointer value. Unlike the previously mentioned techniques, this only allows a single special pointer value, not extra data for pointers generally. Examples One of the earliest examples of hardware support for tagged pointers in a commercial platform was the IBM System/38. IBM later added tagged pointer support to the PowerPC architecture to support the IBM i operating system, which is an evolution of the System/38 platform. 
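The low-bit folding technique described above usually comes down to a few mask operations. The following C sketch shows one common way to pack a small tag into the spare low bits of an aligned pointer; the helper names are illustrative rather than taken from any particular library.

#include <assert.h>
#include <stdint.h>

/* Assume objects are allocated with (at least) 8-byte alignment,
 * so the low 3 bits of their addresses are always zero. */
#define TAG_BITS  3u
#define TAG_MASK  ((uintptr_t)((1u << TAG_BITS) - 1u))

/* Fold a small tag (0..7) into the spare low bits of an aligned pointer. */
static inline void *tag_pointer(void *p, unsigned tag)
{
    assert(((uintptr_t)p & TAG_MASK) == 0);   /* pointer must be aligned */
    assert(tag <= TAG_MASK);
    return (void *)((uintptr_t)p | tag);
}

/* Recover the tag stored in the low bits. */
static inline unsigned pointer_tag(void *tagged)
{
    return (unsigned)((uintptr_t)tagged & TAG_MASK);
}

/* Strip the tag before dereferencing the pointer. */
static inline void *untag_pointer(void *tagged)
{
    return (void *)((uintptr_t)tagged & ~TAG_MASK);
}

Code that dereferences such a pointer must always strip the tag first, as untag_pointer does here; otherwise the masked-in bits would corrupt the address.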
A significant example of the use of tagged pointers is the Objective-C runtime on iOS 7 on ARM64, notably used on the iPhone 5S. In iOS 7, virtual addresses only contain 33 bits of address information but are 64 bits long, leaving 31 bits for tags. Objective-C class pointers are 8-byte aligned, freeing up an additional 3 bits of address space, and the tag fields are used for many purposes, such as storing a reference count and whether the object has a destructor. Early versions of MacOS used tagged addresses called Handles to store references to data objects. The high bits of the address indicated whether the data object was locked, purgeable, and/or originated from a resource file, respectively. This caused compatibility problems when MacOS addressing advanced from 24 bits to 32 bits in System 7. Null versus aligned pointer Use of zero to represent a null pointer is extremely common, with many programming languages (such as Ada) explicitly relying on this behavior. In theory, other values in an operating system-reserved block of logical memory could be used to tag conditions other than a null pointer, but these uses appear to be rare, perhaps because they are at best non-portable. It is generally accepted practice in software design that if a special pointer value distinct from null (such as a sentinel in certain data structures) is needed, the programmer should explicitly provide for it. Taking advantage of the alignment of pointers provides more flexibility than null pointers/sentinels because it allows pointers to be tagged with information about the type of data pointed to, conditions under which it may be accessed, or other similar information about the pointer's use. This information can be provided along with every valid pointer. In contrast, null pointers/sentinels provide only a finite number of tagged values distinct from valid pointers. In a tagged architecture, a number of bits in every word of memory are reserved to act as a tag. Tagged architectures, such as the Lisp machines, often have hardware support for interpreting and processing tagged pointers. GNU libc malloc() provides 8-byte aligned memory addresses for 32-bit platforms, and 16-byte alignment for 64-bit platforms. Larger alignment values can be obtained with posix_memalign(). Examples Example 1 In the following C code, the value of zero is used to indicate a null pointer:

void optionally_return_a_value (int* optional_return_value_pointer)
{
    /* ... */
    int value_to_return = 1;

    /* is it non-NULL? (note that NULL, logical false, and zero compare equally in C) */
    if (optional_return_value_pointer)
        /* if so, use it to pass a value to the calling function */
        *optional_return_value_pointer = value_to_return;
    /* otherwise, the pointer is never dereferenced */
}

Example 2 Here, the programmer has provided a global variable, whose address is then used as a sentinel:

#define SENTINEL (&sentinel_s)
node_t sentinel_s;

void do_something_to_a_node (node_t * p)
{
    if (NULL == p) {
        /* do something */
    } else if (SENTINEL == p) {
        /* do something else */
    } else {
        /* treat p as a valid pointer to a node */
    }
}

Example 3 Assume we have a data structure table_entry that is always aligned to a 16 byte boundary. In other words, the least significant 4 bits of a table entry's address are always 0. We could use these 4 bits to mark the table entry with extra information. For example, bit 0 might mean read only, bit 1 might mean dirty (the table entry needs to be updated), and so on. 
If pointers are 16-bit values, then:
0x3421 is a read-only pointer to the table_entry at address 0x3420
0xf472 is a pointer to a dirty table_entry at address 0xf470
Advantages The major advantage of tagged pointers is that they take up less space than a pointer along with a separate tag field. This can be especially important when a pointer is a return value from a function. It can also be important in large tables of pointers. A more subtle advantage is that by storing a tag in the same place as the pointer, it is often possible to guarantee the atomicity of an operation that updates both the pointer and its tag without external synchronization mechanisms. This can be an extremely large performance gain, especially in operating systems. Disadvantages Tagged pointers have some of the same difficulties as XOR linked lists, although to a lesser extent. For example, not all debuggers will be able to properly follow tagged pointers; however, this is not an issue for a debugger that is designed with tagged pointers in mind. The use of zero to represent a null pointer does not suffer from these disadvantages: it is pervasive, most programming languages treat zero as a special null value, and it has thoroughly proven its robustness. An exception is the way that zero participates in overload resolution in C++, where zero is treated as an integer rather than a pointer; for this reason the special value nullptr is preferred over the integer zero. However, with tagged pointers, zeros are usually not used to represent null pointers. References Programming constructs
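As a companion to Example 3 and the atomicity point made under Advantages, the following C sketch stores the read-only and dirty flags in the spare low bits of a 16-byte-aligned table_entry pointer; the type and helper names are hypothetical, not part of any existing API.

#include <stdbool.h>
#include <stdint.h>

/* A table entry assumed to be allocated on a 16-byte boundary,
 * leaving the low 4 bits of its address free for flags. */
typedef struct table_entry table_entry;

enum {
    ENTRY_READ_ONLY = 1u << 0,   /* bit 0: entry may not be modified */
    ENTRY_DIRTY     = 1u << 1,   /* bit 1: entry needs to be written back */
};
#define ENTRY_FLAG_MASK ((uintptr_t)0xF)

/* Strip the flags to get the real address of the entry. */
static inline table_entry *entry_pointer(uintptr_t tagged)
{
    return (table_entry *)(tagged & ~ENTRY_FLAG_MASK);
}

static inline bool entry_is_dirty(uintptr_t tagged)
{
    return (tagged & ENTRY_DIRTY) != 0;
}

/* Setting a flag and updating the address happen in one word-sized store,
 * which is what makes the single-word atomic update mentioned above possible. */
static inline uintptr_t entry_mark_dirty(uintptr_t tagged)
{
    return tagged | ENTRY_DIRTY;
}

/* With the values from the article: 0x3421 is a read-only entry at 0x3420,
 * and 0xf472 is a dirty entry at 0xf470. */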
48959870
https://en.wikipedia.org/wiki/SPC-1000
SPC-1000
The SPC-1000 is a Z80-based personal computer produced by Samsung. It was the first computer created by the brand. Developed in South Korea, it features built-in HuBASIC, a BASIC written by Hudson Soft in Japan. The computer features a 4 MHz processor and 64 KB of RAM. History Launched in 1983, the SPC-1000 was the first personal computer produced by Samsung. The machine was mainly used in education. Description The main unit included the keyboard and a built-in tape recorder. External disk drives, a gamepad, and a dedicated CRT monitor could be connected to this unit. A user manual explained how to use the computer. Software was available on tapes, with more than one hundred titles released, including games and programs. Some games were conversions of popular early-1980s arcade games, adapted to the computer's limitations. Features The computer uses a Zilog Z80 CPU running at 4 MHz and offers 64 KB of RAM. Sound is produced by a General Instrument AY-3-8910 chip, providing 3 voices with 8 octaves each. Video is generated by an AMI S68047 chip (quite similar to the Motorola 6847), offering semigraphics in 9 colors, a 128 x 192 mode in 4 colors, or a 256 x 192 mode in 2 colors. Gallery References Microcomputers Products introduced in 1983 Personal computers Z80-based home computers Samsung computers
23814823
https://en.wikipedia.org/wiki/98th%20Operations%20Group
98th Operations Group
The 98th Operations Group is a component unit of the Nevada Test and Training Range, assigned to the United States Air Force Air Combat Command. The group is stationed at Nellis Air Force Base, Nevada. It provides day-to-day control of the Nevada Test and Training Range (NTTR) and directly supports Air Force, joint and multi-national test and training activities; and operates two Air Combat Command bombing ranges; the NTTR and Leach Lake Tactics Range, near Barstow, California. During World War II, the group's predecessor unit, the 98th Bombardment Group was a Consolidated B-24 Liberator heavy bomb group that fought in North Africa and Italy. Two of its members, Colonel John R. (Killer) Kane and First Lieutenant Donald Pucket were awarded the Medal of Honor for their actions in combat. The group flew a total of 417 missions, earning a total of 15 battle streamers as well as two Presidential Unit Citations. In the postwar era, the 98th Bombardment Group was one of the first United States Army Air Forces units assigned to Strategic Air Command (SAC) on 1 July 1947, prior to the establishment of the USAF. Equipped with low-hour Boeing B-29 Superfortress World War II aircraft, it was deployed to Far East Air Force in 1950 and flew combat missions over North Korea early in the Korean War. The group was inactivated in 1952 when the parent wing adopted the Tri-Deputate organization and assigned all of the group's squadrons directly to the wing. It was reactivated in 1987 as the 98th Air Refueling Group, Heavy; as an Air Force Reserve associate unit of the 434th Air Refueling Wing. History See 98th Range Wing for related history and lineage World War II The 98th trained for bombardment missions with Consolidated B-24 Liberators during the first half of 1942. The group was alerted and departed for the Middle East on 15 July 1942, arriving in Palestine in late July 1942. The 98th was initially assigned to the USMEAF (United States Middle East Air Force). However, the USMEAF was dissolved on 12 November 1942. At that time, the 98th came under Ninth Air Force. It flew its first mission to Mersa Matruh, Libya on 1 August 1942, with the aircraft being serviced by Royal Air Force personnel until 98th maintenance personnel arrived in mid-August 1942. It supported the British Eighth Army in its westward advance from Egypt into Libya and Tunisia. It bombed shipping and harbor installations in North Africa, Sicily, Italy, Crete, and Greece to cut enemy supply lines to Africa and to prepare for the Allied invasion of Italy. The 98th earned a Distinguished Unit Citation (DUC) for action against the enemy in the Middle East, North Africa, and Sicily from August 1942 to August 1943. It received a second DUC for participation in a low-level bombing raid on enemy-held oil refineries at Ploesti, Romania, on 1 August 1943. On this raid, of 47 B-24s launched, only 21 returned safely. One crashed on takeoff with the loss of all crewmembers except two. Six aborted before reaching the target. Seventeen went down in enemy territory. Two went down at sea. The Group Commander, Col. John R. (Killer) Kane was awarded the Medal of Honor for his leadership. The 98th was under the command of the Twelfth Air Force in September and October 1943. From 1 November 1943 it was under the Fifteenth Air Force and moved to Italy. It flew many long-range missions to France, Germany, Austria, Czechoslovakia, Hungary, and Romania to bomb enemy heavy industries, airdromes, harbors, oil fields, and communication centers. 
On another raid on Ploesti on 9 July 1944, Lt. Donald Pucket sacrificed his life trying to save three of his crewmembers who could not or would not bail out of their doomed B-24. Donald Pucket was awarded the Medal of Honor posthumously for his sacrifice. In the summer of 1944, the 98th participated in the invasion of southern France, assisted in the Soviet advance into the Balkans, and supported the partisans and guerrillas in Yugoslavia and neighboring countries. It flew a total of 417 missions and earned a total of 15 battle streamers as well as two Presidential Unit Citations. The group returned to the United States as the war was ending in Europe, where it trained in preparation for movement to the Pacific Theater. It was re-designated the 98th Bombardment Group, Very Heavy, and equipped with Boeing B-29 Superfortresses, but the war with Japan ended before redeployment. The 98th was inactivated as a group on 10 November 1945. However, its 343rd, 344th, and 345th Squadrons were reassigned to other B-29 groups. The 343d Squadron was assigned to the 40th Bombardment Group at March Air Force Base, California and inactivated on 27 November 1946. The 344th was assigned to the 444th Bombardment Group at Davis-Monthan Field, Arizona and inactivated on 1 October 1946. The 345th was assigned to the 462nd Bombardment Group at MacDill Field, Florida and inactivated on 31 March 1946. Postwar era and Korean War The 98th was reactivated on 1 July 1947 and equipped with B-29 Superfortresses at Spokane Army Air Field, Washington. In 1948, it carried out a 90-day deployment to Kadena Air Base, Okinawa. During this period, the 98th lost two B-29s, and a Douglas C-54 Skymaster returning to the US with 98th personnel ditched in the Pacific. Another 90-day deployment was conducted in the summer of 1949 to RAF Sculthorpe, England. During the training phase of the years 1947–1950, the 98th recorded six B-29 losses. During the deployment to England, the 98th practiced high-level (35,000 ft) bombing missions on the German island of Helgoland. The aircraft were challenged by RAF and USAF fighters. The gunners were evaluated on gun camera film, and the bombardiers were rated on their performance, as were other aircrew members. As a result of the exercise, the 98th was rated very highly and combat ready. In early 1950, the 98th was alerted for a permanent change of station to Ramey Air Force Base, Puerto Rico. However, before the move was completed, the Korean War broke out and the 98th arrived at Yokota Air Base, Japan in the first week of August 1950, and was placed under the operational control of the Far East Air Forces Bomber Command (Provisional). The first planes arrived at Yokota on 5 August 1950. It flew its first combat mission on 7 August, striking marshalling yards at Pyongyang, capital of North Korea. The Group attacked enemy communication lines and supported United Nations ground forces during the war. Targets included rail facilities, oil centers, bridges, roads, troop concentrations, airfields, and military installations. There were 34 known losses. It became an administrative unit in 1951 when its operational squadrons were assigned directly to the wing as a result of the SAC dual deputate reorganization. Reserve refueling operations The unit was reactivated in the Air Force Reserve on 1 October 1987 as the 98th Air Refueling Group, Heavy, at Barksdale Air Force Base, Louisiana with McDonnell Douglas KC-10 Extender aircraft. 
It consisted of the 78th Air Refueling Squadron and the 98th Consolidated Maintenance Squadron under the command of the 452d Air Refueling Wing at March Air Force Base. On 12–14 May 1989, the group was tasked to support USAF transport aircraft airlifting troops into Panama, which was the prelude to Operation Just Cause. In early August 1990 aircraft and crews of the 98th again were called on to support operations in the Gulf War. Following that operation, the 98th was involved with President Bush's code name Sinbad, a secret plan to monitor drug trafficking in South America. Yet again the 98th flew mercy missions into Mogadishu, Somalia delivering 491,610 pounds of supplies to try to alleviate the humanitarian disaster. Still operating in Operation Southern Watch the group flew missions along the southern border of Iraq in January 1993 until inactivated on 30 September 1994. Nevada range It was redesignated the 98th Operations Group and reactivated in October 2001, supporting the 98th Range Wing in its operations at Nellis Air Force Base, Nevada. It is now a non-flying unit that commands two squadrons with 55 military and civil service personnel and has functional responsibility for approximately 300 contract personnel. It prioritizes and schedules all range activities for all range users, provides ground control intercept operations, flight-following safety deconfliction, simulated threat command and control operations, communications, data link operations, and range access control. It also assists test customers by coordinating support activities, and coordinates airspace issues with military and federal agencies. The 98th Operations Support Squadron is the scheduling, command and control and project support authority for NTTR operations. The Weapons and Tactics Flight provides qualified ground control intercept and Link 16 operations for more than 5,000 test and training sorties per year on the NTTR. The Current Operations Flight is responsible for range scheduling, range monitoring and advisory control (Blackjack), and provides a comprehensive debrief tool for combat air forces aircrews. The Operations Plans Flight coordinates all exercise, test and experimentation customer assistance. The 98th Range Squadron is responsible for technical support of NTTR Air Force, joint and multinational aircrew training. The Communications Flight provides small computer hardware and software support and all communications. The Operations and Maintenance Flight provides operation, maintenance and deployment of threat systems, mission control and debriefing systems, time-space-position indicator/scoring systems and Roulette (Red Forces Command and Control). The Engineering Flight conducts research, engineers, develops and manages hardware and software projects. Lineage Established as the 98th Bombardment Group (Heavy) on 28 January 1942 Activated on 3 February 1942 Redesignated: 98th Bombardment Group, Heavy on 1 July 1943 Redesignated: 98th Bombardment Group, Very Heavy on 12 July 1945 Inactivated on 10 November 1945 Activated on 1 July 1947 Redesignated 98th Bombardment Group, Medium on 12 July 1948 Inactivated on 16 June 1952 Redesignated 98th Air Refueling Group, Heavy on 12 May 1987 Activated in the reserve on 1 October 1987 Redesignated: 98th Air Refueling Group on 1 February 1992 Inactivated on 30 September 1994 Redesignated: 98th Operations Group on 21 September 2001 Activated on 29 October 2001 Assignments Third Air Force, 3 February 1942 US Army Middle East Air Force, c. 
25 July 1942 Ninth Air Force, 12 November 1942 Twelfth Air Force, 13 September 1943 XII Bomber Command, 19 September 1943 47th Bombardment Wing, 24 September 1943 5th Bombardment Wing, 1 November 1943 47th Bombardment Wing, 17 November 1943 Second Air Force, c. 29 April – 10 November 1945 Strategic Air Command, 1 July 1947 Fifteenth Air Force, 24 September 1947 98th Bombardment Wing, 10 November 1947 – 16 June 1952 (attached to 92d Bombardment Wing, 10 November 1947 – 24 August 1948, 10 December 1948 – 16 May 1949, 18 August 1949 – 15 April 1950; 32d Composite Wing, c. 25 August – 10 December 1948; 3d Air Division, 17 May – 17 August 1949; Far East Air Forces Bomber Command [Provisional], 7 August 1950 – 31 March 1951) 434th Air Refueling Wing, 1 October 1987 452d Air Refueling Wing, 1 August 1992 514th Airlift Wing, 1 October 1993 – 30 September 1994 98th Range Wing, 29 October 2001 – present Components 25th Reconnaissance Squadron (later 415th Bombardment Squadron): 3 February 1942 – 3 July 1945 78th Air Refueling Squadron: 1 October 1987 – 1 August 1992 98th Air Refueling Squadron: 16 August 1950 – 16 June 1952 (attached to 98th Bombardment Wing) 98th Operations Support Squadron (circa 2017) 98th Range Squadron (circa 2017) 343d Bombardment Squadron: 3 February 1942 – 10 November 1945; 1 July 1947 – 16 June 1952 (attached to 98th Bombardment Wing after c. 1 April 1951) 344th Bombardment Squadron: 3 February 1942 – 10 November 1945; 1 July 1947 – 16 June 1952 (attached to 98th Bombardment Wing after c. 1 April 1951) 345th Bombardment Squadron: 3 February 1942 – 10 November 1945; 1 July 1947 – 16 June 1952 (attached to 98th Bombardment Wing after c. 1 April 1951) Stations MacDill Field, Florida, 3 February 1942 Barksdale Field, Louisiana, 9 February 1942 Fort Myers Army Air Field, Florida, 30 March 1942 Drane Field, Florida, 17 May– July 1942 RAF Ramat David, Palestine, 25 July 1942 (air echelon), 21 August 1942 (ground echelon) RAF Fayid, Egypt, c. 11 November 1942 Baheira Airfield, Libya, 29 January 1943 Benina Airfield, Libya, c. 14 February – 26 March 1943; 4 Ap4 – 25 September 1943 Berca Airfield, Libya, 26 March – 4 April 1943 Hergla Airfield, Tunisia, c. 25 September 1943 Brindisi Airfield, Italy, c. 22 November 1943 Manduria Airfield, Italy, 19 December 1943 Lecce Airfield, Italy, 17 January 1944 – 19 April 1945 Fairmont Army Air Field, Nebraska, 8 May 1945 McCook Army Air Field, Nebraska, 25 June – 10 November 1945 Andrews Field, Maryland, 1 July 1947 Spokane Army Air Field (later Spokane Air Force Base, Fairchild Air Force Base), Washington, 24 September 1947 – 16 June 1952 Deployed to Kadena Air Base, Okinawa, c. 25 August – 10 December 1948 Deployed to RAF Sculthorpe, England, 17 May – 17 August 1949 Deployed to Yokota Air Base, Japan, c. 5 August 1950 – 16 June 1952 Barksdale Air Force Base, Louisiana, 1 October 1987 – 30 September 1994 Nellis Air Force Base, Nevada, 29 October 2001 – present Aircraft Consolidated B-24 Liberator, 1942–1945 Boeing B-29 Superfortress, 1945; 1947–1953 McDonnell Douglas KC-10 Extender, 1987–1994. References Notes Explanatory notes Citations Bibliography 098
44171593
https://en.wikipedia.org/wiki/Nuria%20Oliver
Nuria Oliver
Nuria Oliver is a computer scientist. She is Chief Scientific Adviser at the Vodafone Institute, Chief Data Scientist at DataPop Alliance, an independent director on the board of directors of Bankia, and Commissioner of the Presidency of Valencia for AI and COVID-19. Previously, she was Director of Data Science Research at Vodafone, Scientific Director at Telefónica and a researcher at Microsoft Research. She holds a PhD from the Media Lab at MIT, and is an IEEE Fellow, an ACM Fellow, a member of the board of ELLIS, and an elected permanent member of the Royal Academy of Engineering of Spain. She is one of the most cited female computer scientists in Spain, with her research having been cited by more than 19,000 publications. She is well known for her work in computational models of human behavior, human-computer interaction, mobile computing and big data for social good. Biography Nuria graduated with a degree in Telecommunications Engineering from the Universidad Politécnica de Madrid in 1994. She was awarded the Spanish First National Prize of Telecommunication Engineers in 1994. In 1995 she received a La Caixa fellowship to study at MIT, where she received her doctorate at the Media Lab in the area of perceptual intelligence. In 2000, she joined Microsoft Research in Redmond, USA, as a researcher in the area of human-computer interfaces, and worked there until 2007. In 2007 she moved to Spain to work at Telefónica R&D in Barcelona as Director of Multimedia Research, the only female director hired at Telefónica R&D at the time. Her work focused on the use of the mobile phone as a sensor of human activity, and she worked there until 2016. In 2017 she joined Vodafone as Director of Data Science Research, and was also named the first Chief Data Scientist at DataPop Alliance, an international non-profit organization created by the Harvard Humanitarian Initiative, MIT Media Lab and Overseas Development Institute devoted to leveraging Big Data to improve the world. In 2018 she was elected a permanent member of the Spanish Royal Academy of Engineering. She is a member of the external advisory board of the ETIC department at the Pompeu Fabra University, the LASIGE department at the University of Lisbon, the Informatics Department at King's College London, the eHealth Center at the Open University of Catalonia and Mahindra Comviva. She is the spokesperson and a member of the High Level Advisory Committee to the Spanish Government on Artificial Intelligence and Big Data. She is also a member of the Strategic Advisory Board to the Innovation Agency of Valencia. In 2019, she launched a successful bid for Alicante to host a research unit of ELLIS, a network of European AI research laboratories. In 2020, during the COVID-19 pandemic, she was named Commissioner of the Presidency of Valencia for AI and COVID-19, and led the data-science team for the Valencian Government during the crisis. She was responsible for designing and launching covid19impactsurvey, one of the largest citizen-science surveys in Spain, with over 500,000 participants. She was co-leader of ValenciaIA4COVID, the winning team of the $500,000 XPRIZE Pandemic Response Challenge, sponsored by Cognizant. This was the first Spanish team to win an XPRIZE competition. Awards and honors Spanish First National Prize of Telecommunication Engineers in 1994 Top 100 innovators under 35 (TR100, today TR35) by MIT Technology Review, based on her work in intelligent human-computer interfaces. 
"100 future leaders who will design Spain in the next decades" by El Capital Magazine in 2009. Best paper award in ACM Multimedia 2009 for her research on duplicate video detection. Best paper award of ACM MobileHCI 2009 for her research on comparing speech and text on mobile phones. Rising Talent by the Women's Forum for the Economy and Society. ACM RecSys 2012 best paper award on collaborative filtering. Profiled as one of nine female Spanish leaders in technology in 2012 by the Spanish newspaper El País. ACM ICMI Ten Year Technical Impact Award as one of the authors of a paper on layered graphical models of human behavior. The paper described a system that was able to discern the activity of a user based on evidence from video, acoustic and computer interactions. ACM Ubicomp 2014 best paper award for her work on economic value of personal data. Distinguished Scientist Award by Association for Computing Machinery (ACM), being the first Spanish female computer scientist to receive this award. Best paper award in ACM Ubicomp 2015 for her research on boredom detection using mobile phones. Fellow of the European Association of Artificial Intelligence (ECCAI). IEEE Fellow, recognizing her work in probabilistic models of human behavior and design of interactive intelligent systems. Winner of the 2016 European Digital Woman of the Year Award Top 100 female leader in Spain by Mujeres&Cia "Salvà i Campillo" prize by the Catalan association of telecommunication engineers Ada Byron prize from the University of Deusto, a Spanish prize at the national level which highlights the work of women who bring progress to new areas of technology. It recognized her work in artificial intelligence, big data, human-machine interaction, computational models of human behavior and mobile computing. Gaudí Gresol award 2016 Ángela Ruiz Robles Spanish National Computer Science Award Honorary doctorate by the Universidad Miguel Hernández of Elche Distinction of the Government of the Valencian Community in 2017 Named ACM Fellow in 2017 for her contributions in probabilistic multimodal models of human behavior and uses in intelligent, interactive systems. Named elected academic of the Spanish Royal Academy of Engineering (2018). Elected to Academia Europaea, 2018 Winner of European DatSci & AI 2019 Data Scientist of the Year Winner of the Esri Data Scientist of the Year 2020 Winner of the "Women to Follow" award 2020 (technology section) Winner of the 500k XPRIZE Pandemic Response Challenge sponsored by Cognizant Keynotes and scientific talks (selected) IJCAI 2001: Live demo with Eric Horvitz of a context aware office activity recognition system during Bill Gates keynote speech at IJCAI 2001. TTI Vanguard 2006: Invited Speaker. 
CICV 2010 IEEE workshop: Plenary speaker, "Research Challenges and Opportunities in Multimedia: a Human Centric Perspective" UCVP 2011: Keynote speaker EUSIPCO 2011: Keynote speaker, "Urban Computing and Smart Cities: Opportunities and Challenges" NIPS 2011 Big Learning - Algorithms, Systems & Tools Workshop: Keynote speaker, "Towards Human Behavior Understanding from Pervasive Data: Opportunities and Challenges Ahead" European Wireless 2014: Keynote speaker, "Small devices for big impact" ACM/IEEE Models 2014: Keynote speaker, "Towards data-driven models of human behavior" NTTS 2015 (New Techniques and Technologies for Statistics): Keynote speaker, "Big Mobile Data for Official Statistics" IEEE Int Conf on Data Science and Advanced Analytics 2015 (IEEE DSAA): Keynote speaker, "Towards data-driven models of human behavior" ACM Intelligent Environments 2016 (ACM IE): Keynote speaker, "Towards human behavior modeling from data" IAPP Europe Data Protection Congress 2016 (IAPP DPC): Visionary keynote speaker, "Big data for social good" Media appearances 1998: Presentation to Spanish Senate 2001: Interview with El País: People never associate Spain with 'high tech' 2004: Interview with El Pais: A Spaniard at the digital peak 2005: Featured on La 2 (Spain) TV program "De cerca" [Close up]: "Conversation with Alicante native Nuria Oliver, research at Microsoft about the latest advances in artificial intelligence". 2006: Interview with El País: Voyage to the center of Microsoft. 2008: Speech in front of the King and Queen of Spain 2010: Featured on TV program "Para todos" [For everyone] on La 2 (Spain) 2012: TEDxRamblas Talk: The Invisible Army. 2012: Radio Interview as an expert on Artificial Intelligence 2013: TEDxBarcelona Talk: My cellphone, my partner. 2013: Talk at WIRED 2013: What big data and the Mexican pandemic taught us. 2014: Personal Data Monetization at MIT Technology Review, the Washington Post and other media. 2014: Predicting crime using mobile data. 2014: Interview by BBC: Ebola: Can big data analytics help contain its spread? 
2015: Interview with ARA: "We look at our phone because we are constantly rewarded" 2015: Interview with Glamour magazine: "Women with success" 2015: El Mundo: Predicting a crime using 'big data' 2015: Interview for Barcelona Metropolis Magazine: Nothing in excess, including technology 2015: Article for El Pais newspaper: The mobile phone sheds its skin 2015: NBC news: Next thing your phone may detect: boredom 2015: MIT Technology Review: Your smartphone can tell if you're bored 2015: Fortune magazine: Your phone can tell when you're bored 2015: RNE Radio 3: Interview to Nuria Oliver on Artificial Intelligence 2015: Featured article for El Pais Sunday magazine: Nuria Oliver 2016: RTVE 2: Interview with Nuria Oliver on Mobile Computing in "El cazador de cerebros" program 2017: El Periódico de Catalunya: Nuria Oliver: Questioning the status quo 2019: La Vanguardia: Nuria Oliver: a brilliant mind in artificial intelligence 2020: El Mundo: The women of the new decade - Nuria Oliver - She knows (almost) everything about the relationship between humans and computers 2020: Politico Europe: How AI is helping fight a pandemic - Europe's coronavirus app - Insights from Valencia References External links Year of birth missing (living people) Living people Massachusetts Institute of Technology alumni Spanish computer scientists Spanish women computer scientists People from Alicante Fellows of the Association for Computing Machinery Fellow Members of the IEEE Fellows of the European Association for Artificial Intelligence Members of Academia Europaea
3834804
https://en.wikipedia.org/wiki/Sabayon%20Linux
Sabayon Linux
Sabayon Linux or Sabayon (formerly RR4 Linux and RR64 Linux) was a Gentoo-based Italian Linux distribution created by Fabio Erculiani and the Sabayon development team. Sabayon followed the "out of the box" philosophy, aiming to give the user a wide range of applications ready to use and a self-configured operating system. Sabayon Linux featured a rolling release cycle, its own software repository and a package management system called Entropy. Sabayon was available in both x86 and AMD64 distributions, and support for ARMv7 on the BeagleBone was in development. It was named after an Italian dessert, zabaione, which is made from eggs. Sabayon's logo was an impression of a chicken foot. In November 2020 it was announced that future Sabayon Linux versions would be based on Funtoo instead of Gentoo Linux and that Sabayon Linux would hence be rebranded as MocaccinoOS. Editions Since version 4.1, Sabayon had been released in two different flavors featuring either the GNOME or KDE desktop environments, with the ultralight Fluxbox environment included as well. (In previous versions, all three environments were included in a single DVD ISO image.) Since Sabayon's initial release, additional versions of Sabayon added other X environments, including Xfce and LXDE. A CoreCD edition, which featured a minimal install of Sabayon, was released to allow the creation of spins of the Sabayon operating system; however, this was later discontinued and replaced first by CoreCDX (Fluxbox window manager) and Spinbase (no X environment) and later by "Sabayon Minimal". A ServerBase edition was released which featured a server-optimized kernel and a small footprint, but this was later discontinued and integrated into the "Sabayon Minimal". Daily build images were available to Sabayon testers, but were released weekly to the public on the system mirrors containing stable releases. Official releases were simply DAILY versions which had received deeper testing. The adoption of Molecule led the team to change the naming system for releases. Currently available versions are: Derivatives Additional X window managers could also be installed from the Sabayon repositories, such as Cinnamon and Razor-qt. Configuration Sabayon used the same core components as the Gentoo Linux distribution, which meant it used systemd. All of the Gentoo configuration tools, such as etc-update and eselect, were fully functional. Sabayon also included additional tools for automatic configuration of various system components such as OpenGL. Sabayon provided proprietary video drivers for both nVidia and ATI hardware. These were enabled if compatible hardware was found; otherwise, the default open-source drivers were used. Because of the automatic driver configuration, the compositing window managers Compiz Fusion and KWin were used for the GNOME and KDE editions, respectively. The discovery and configuration of network cards, wireless cards, and webcams were similarly automatic. Most printers were detected automatically but required specific manual configuration through the CUPS interface.
Entropy clients then pulled these tarballs and performed the various post- and pre-compilation calls of the Gentoo ebuild to set up a package correctly. This meant the system was completely binary-compatible with a Gentoo system using the same build configuration. The adoption of two package managers allowed expert users to access the full flexibility of the Gentoo system and others to easily and quickly manage software applications and updates. The Entropy software also allowed users to help generate relevant content by voting and by attaching images, files and web links to a package. The Rigo application browser was a GUI front-end to Entropy and the successor to Sulfur (also known as the Entropy Store). Taking a "less is more" approach, Rigo was designed to be simple and fast. In an interview, Fabio Erculiani described Rigo as a "Google-like" applications management UI. Rigo handled system updates, package searching, installation and removal of packages, up/down voting of packages, and many other common Entropy tasks. Applications The number of applications installed by default was higher for DVD editions than for editions small enough to fit on a CD. Their selection was also tailored to the choice between GNOME, KDE, Xfce, and MATE. The XBMC environment could be run without loading the full desktop environment. The following table summarizes the software included in the GNOME, KDE, Xfce, and MATE versions: Considerable software was also available in the main repository. Many Microsoft Windows executables were automatically run in Wine. Other applications included Adobe Reader, Audacity, Clementine, aMSN, Celestia, Eclipse, FileZilla, GnuCash, Google Earth, Inkscape, Kdenlive, Mozilla Firefox, Mozilla Sunbird, Mozilla Thunderbird, Nero Burning ROM, Opera, Picasa, Skype, Teamviewer, VirtualBox, Vuze and Wireshark. Games (open-source and proprietary) included Doom 3, Eternal Lands, Nexuiz, OpenArena, Quake, Quake 2, Quake 3, Quake 4, Sauerbraten, The Battle for Wesnoth, Tremulous, Unreal, Unreal Tournament, Urban Terror, Vendetta Online, Warsow, Warzone 2100, Wolfenstein: Enemy Territory, World of Padman and Xonotic. Installation Gentoo's installation was generally not recommended for beginners because its package management system required users to compile source code to install packages (most distributions rely on precompiled binaries). Compiling larger programs and the base operating system could take several hours. Sabayon was considered easier to install than "pure Gentoo" because it used both the Portage package management system and its own Entropy package management, which gave the user the option of using precompiled binary files during installation. Although the distribution was a LiveDVD (or a LiveCD for LXDE, CoreCDX, SpinBase and ServerBase), it could be installed on a hard disk once the system was fully booted. Sabayon Linux used the Calamares installer. In previous releases, Anaconda and the Gentoo Linux Installer were used. Installation was designed to be simpler than was typical for Gentoo, which required more extensive knowledge of the operating system (particularly for the compilation of the Linux kernel). Installation took up to 30 minutes depending on the speed of the DVD drive. Those without a DVD drive could install the GNOME and KDE versions from a USB drive, which could be created with UNetbootin. A program played music during the boot process. System requirements i686-compatible processor (e.g.
Intel Pentium II, Pentium III, Celeron, AMD Athlon, AMD Duron) 512 MB of RAM (1 GB recommended) OpenGL capable 3D graphics card (mostly Nvidia, ATI (brand), Intel GMA, VIA Technologies) Display Data Channel capable monitor Mouse and keyboard DVD drive or USB flash drive for installation Internet connection Recommended Minimum of 12 GB of free hard disk space for KDE and GNOME. Minimum of 5 GB for the others. Recommended at least 40 GB for KDE or GNOME installations, and 15 GB for the others. Releases Reception Sabayon Linux was reviewed by Tux Machines in 2005, by Dedoimedo in 2008, by Linux.com (Sabayon 3.4), by LWN.net (Sabayon 4.0), by DistroWatch Weekly in 2009, and by LinuxBSDos (Sabayon 5, 2009). References External links Sabayon Linux on DistroWatch Sabayon Linux on OpenSourceFeed Gallery Gentoo Linux derivatives KDE Live USB Operating system distributions bootable from read-only media X86-64 Linux distributions Rolling Release Linux distributions Linux distributions
2074703
https://en.wikipedia.org/wiki/Trojan%20Horse%20%28disambiguation%29
Trojan Horse (disambiguation)
The Trojan Horse, according to legend, was a giant hollow horse in which Greeks hid to gain entrance to Troy. It is used metaphorically to mean any trick or strategy that causes a target to invite a foe into a securely protected place, or to deceive by appearance, hiding malevolent intent in an outwardly benign exterior. Trojan Horse may also refer to: Trojan horse (business), a business offer that appears to be a good deal but is not Trojan horse (computing), a computer program that appears harmless but is harmful Art, entertainment, and media Fictional entities White Base or The Trojan Horse, a fictional battleship from Mobile Suit Gundam Literature Caballo de Troya or Trojan Horse, a 1984 science fiction novel by Juan José Benitez Creationism's Trojan Horse, a 2004 book on the origins of the intelligent design movement, by Barbara Forrest and Paul R. Gross The Trojan Horse, a 1940 novel by Hammond Innes The Trojan Horse (Morley novel), a novel by Christopher Morley Trojan Horse, a 2012 novel by Mark Russinovich Music "Trojan Horse" (song), a 1978 song by Dutch girl group Luv' "Trojan Horse", a song by Bloc Party from the 2008 album Intimacy "Trojan Horse", a song by Agnes Obel from the 2016 album Citizen of Glass Television The Trojan Horse (miniseries), a 2008 Canadian miniseries "Trojan Horse" (NCIS), an episode of NCIS "Trojan Horse" (The Avengers), an episode of the British TV series See also Operation Trojan Horse (book), a 1970 book by John Keel Trojan Horse scandal, a 2014 scandal involving claims of a plot by Muslims to take over schools in Birmingham The Trojan Horse (film), the American title of the 1961 film La guerra di Troia Trojan Horse Incident, a 1985 incident in Athlone, Cape Town, in which police officers opened fire on stone-throwing protesters
69130946
https://en.wikipedia.org/wiki/Freedom%20Finger
Freedom Finger
Freedom Finger is a 2019 side-scrolling shooter developed and published by Wide Right Interactive. The game was originally released for Microsoft Windows and Nintendo Switch, and was later ported to PlayStation 4, Xbox One and macOS. Gameplay Freedom Finger is a shoot 'em up in which players assume the role of a rookie space pilot, Gamma Ray, who has to rescue a group of kidnapped scientists. Unlike most shmups, the game offers options for melee combat. There is also an option to grab enemies and either use them as a shield or use their guns as power-ups. Development and release The game was announced for Microsoft Windows and macOS on March 4, 2019. On August 29, 2019, the game received its definitive release date and initial platforms: September 27, 2019, for Microsoft Windows and Nintendo Switch, with the Switch replacing macOS. On March 10, 2020, PlayStation 4 and Xbox One ports were announced for a March 24, 2020 release. On August 3, 2020, a macOS port was released. On December 3, 2020, Limited Run Games announced a physical PlayStation 4 release with a manual that includes the entire source code for the game under the BSD-4-Clause license, according to its programmer Mark Zorn. Reception Freedom Finger for Nintendo Switch received "mixed" reviews according to the review aggregation website Metacritic. The PlayStation 4 and Xbox One ports received "generally favorable" reviews. Both Nintendo Life and Shacknews rated it 8/10. Footnotes Notes References External links 2019 video games Commercial video games with freely available source code Horizontally scrolling shooters Indie video games macOS games Nintendo Switch games Open-source video games PlayStation 4 games Shooter video games Single-player video games Software using the BSD license Video games developed in the United States Xbox One games Windows games
11384090
https://en.wikipedia.org/wiki/Data%20warehouse%20appliance
Data warehouse appliance
In computing, the term data warehouse appliance (DWA) was coined by Foster Hinshaw for a computer architecture for data warehouses (DW) specifically marketed for big data analysis and discovery that is simple to use (not a pre-configuration) and delivers high performance for the workload. A DWA includes an integrated set of servers, storage, operating systems, and databases. In marketing, the term evolved to include pre-installed and pre-optimized hardware and software, as well as similar software-only systems promoted as easy to install on specific recommended hardware configurations or preconfigured as a complete system. These are marketing uses of the term and do not reflect the technical definition. A DWA is designed specifically for high-performance big data analytics and is delivered as an easy-to-use packaged system. DW appliances are marketed for data volumes in the terabyte to petabyte range. Technology The data warehouse appliance (DWA) has several characteristics which differentiate that architecture from similar machines in a data center, such as an enterprise data warehouse (EDW). A DWA has a very tight integration of its internal components, which are optimized for "data-centric" operations in contrast to "compute-centric" operations. The latter tend to emphasize the number of CPUs, cores and network bandwidth. A DWA is trivial to use and install. In contrast to a "pre-configuration" of components, a DWA has very few configuration switches or options. The elimination of such options significantly reduces configuration error – the number one cause of failure in large systems. A DWA is optimized for analytics on big data. In contrast, preceding architectures (including parallel ones) focused on the "enterprise data warehouse" being a general-purpose repository for data and supporting analytics as an ancillary task. Most DW appliances use massively parallel processing (MPP) architectures to provide high query performance and platform scalability. MPP architectures consist of independent processors or servers executing in parallel. Most MPP architectures implement a "shared-nothing architecture" where each server operates self-sufficiently and controls its own memory and disk. DW appliances distribute data onto dedicated disk storage units connected to each server in the appliance. This distribution allows DW appliances to resolve a relational query by scanning data on each server in parallel. The divide-and-conquer approach delivers high performance and scales linearly as new servers are added into the architecture. History "Data warehouse appliance" is a term coined by Foster Hinshaw, the founder of Netezza. In creating the first data warehouse appliance, Hinshaw and Netezza used the foundations developed by Model 204, Teradata, and others to pioneer a new category to address consumer analytics efficiently by providing a modular, scalable, easy-to-manage database system that is cost-effective. MPP database architectures have a long pedigree. Some consider Teradata's initial product, or Britton-Lee's, to be the first DW appliance. Teradata acquired Britton Lee (renamed ShareBase) in June 1990. Others disagree, considering appliances a "disruptive technology" for Teradata. Additional vendors, including Tandem Computers and Sequent Computer Systems, also offered MPP architectures in the 1980s. Open source and commodity computing components aided a re-emergence of MPP data warehouse appliances.
Advances in technology reduced costs and improved performance in storage devices, multi-core CPUs and networking components. Open-source RDBMS products, such as Ingres and PostgreSQL, reduce software-license costs and allow DW-appliance vendors to focus on optimization rather than providing basic database functionality. Open-source Linux became a common operating system for DW appliances. Other DW appliance vendors use specialized hardware and advanced software, instead of MPP architectures. Netezza announced a "data appliance" in 2003, and used specialized field-programmable gate array hardware. Kickfire followed in 2008 with what they called a dataflow "sql chip". In 2009 more DW appliances emerged. IBM integrated its InfoSphere warehouse (formerly DB2 Warehouse) with its own servers and storage to create the IBM InfoSphere Balanced Warehouse. Netezza introduced its TwinFin platform based on commodity IBM hardware. Other DW appliance vendors have also partnered with major hardware vendors. DATAllegro, prior to acquisition by Microsoft, partnered with EMC Corporation and Dell and implemented open-source Ingres on Linux. Greenplum had a partnership with Sun Microsystems and implements Greenplum Database (based on PostgreSQL) on Solaris using the ZFS file system. HP Neoview uses HP NonStop SQL. The market has also seen the emergence of data-warehouse bundles where vendors combine their hardware and database software together as a data warehouse platform. The Oracle Optimized Warehouse Initiative combines the Oracle Database with hardware from various computer manufacturers (Dell, EMC, HP, IBM, SGI and Sun Microsystems). Oracle's Optimized Warehouses offer pre-validated configurations and the database software comes pre-installed. In September 2008 Oracle began offering a more classic appliance offering, the HP Oracle Database Machine, a jointly developed and co-branded platform that Oracle sold and supported and HP built in configurations specifically for Oracle. In September 2009, Oracle released a second-generation Exadata system, based on their acquired Sun Microsystems hardware. See also Business Intelligence (BI) Data mining Data mart Data warehouse References External links DBMS2 - Positioning the data warehouse appliances Business intelligence Data warehousing Information technology management
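To make the shared-nothing, divide-and-conquer query pattern described under Technology concrete, the following toy Python sketch treats worker processes as stand-ins for appliance nodes, each scanning only its own partition before the partial results are merged; the data, predicate, and pool size are invented for illustration and are not taken from any vendor's product.

```python
# Toy scatter-gather sketch of a shared-nothing parallel scan.
# Each worker process plays the role of one appliance node that owns its own
# slice of the table; per-node partial aggregates are merged at the end.
from concurrent.futures import ProcessPoolExecutor

def scan_partition(partition):
    """Local scan plus partial aggregate on one node's slice of the data."""
    return sum(amount for amount in partition if amount > 100)

if __name__ == "__main__":
    # Hypothetical per-node slices of a sales column.
    partitions = [[120, 80, 300], [50, 500, 110], [75, 130, 90]]
    with ProcessPoolExecutor(max_workers=len(partitions)) as pool:
        partial_sums = pool.map(scan_partition, partitions)
    print(sum(partial_sums))  # global result assembled from per-node partials: 1160
```

Adding another partition (node) adds scan capacity without the existing nodes sharing memory or disk, which is the property the shared-nothing design relies on for its near-linear scaling.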
364494
https://en.wikipedia.org/wiki/OpenMSX
OpenMSX
openMSX is a free software emulator for the MSX architecture. It is available for multiple platforms, including Microsoft Windows and POSIX systems such as Linux. For copyright reasons, the emulator cannot be distributed with original MSX-BIOS ROM images. Instead, openMSX includes C-BIOS, a minimal implementation of the MSX BIOS, allowing some games to be played without the original ROM image. Users can replace C-BIOS with a native BIOS if they prefer. openMSX emulates a large number of MSX systems and MSX-related hardware, including: MSXturboR Moonsound IDE Controller by Sunrise GFX9000 Pioneer Palcom LaserDisc Notable features include: Hardware and software scalers Debugging Tcl script support Cheat Finder (through Tcl) Game trainers (through Tcl) Audio/video recording Reverse support (go back in emulated time to correct mistakes or debug what happened) openMSX has an open communication protocol through which external programs can control the emulator. Utilizing this communication protocol enables developers to write versatile add-ons for openMSX. Projects making use of this protocol include the following applications: openMSX Catapult (by the openMSX team) openMSX Debugger (by the openMSX team) openMSXControl plugin NekoLauncher openMSX openMSX Peashooter openMSX Control Plugin for Gedit Currently Catapult, a GUI for the emulator that is developed as part of the project, is being redeveloped using Python and the Qt toolkit. The openMSX Debugger, written in C++, is also under development and likewise uses the Qt toolkit. References Sources Project Homepage Project Forum C-BIOS Compatibility Page openMSX 0.5.1 review (2005) NekoLauncher openMSX openMSX Peashooter openMSX Control Plugin for Gedit openMSX development builds for Mac, Windows, Android & Dingux Free emulation software Free software programmed in Tcl Free software projects Linux emulation software MSX emulators Software that uses wxWidgets Unix emulation software Windows emulation software Android emulation software Free and open-source Android software
24565549
https://en.wikipedia.org/wiki/Maciej%20Stachowiak
Maciej Stachowiak
Maciej Stachowiak (; born June 6, 1976) is a Polish American software developer currently employed by Apple Inc., where he is a leader of the development team responsible for the WebKit Framework. A longtime proponent of open source software, Stachowiak was involved with the SCWM, GNOME and Nautilus projects for Linux before joining Apple. He is actively involved in the development of web standards, served as a co-chair of the World Wide Web Consortium's HTML 5 working group, and is a member of the Web Hypertext Application Technology Working Group steering committee. Education After graduating from East High School (Rochester, New York) in 1994, Stachowiak was accepted into MIT, where he completed Course 6 - Electrical Engineering and Computer Science and received both his S.B. and M.Eng. in 1998. While at MIT, Stachowiak worked on the Rethinking CS101 project, and in 1997 he began the Scheme Constraints Window Manager project with Greg J. Badros. He also contributed to a paper with the Cognitive & Neural Sciences Office of Naval Research. Stachowiak's MIT M.Eng. thesis on "Automated Extraction of Structured data from HTML Documents" was indicative of his early interest in web standards and development. Career Eazel From 1999 to 2001, Stachowiak contributed to various Linux software projects and was employed by Eazel as one of their lead developers, along with Andy Hertzfeld and Darin Adler, to create the Nautilus file manager. He was also a developer on the Object Activation Framework (OAF) for the GNOME desktop environment from 1999 to 2001. In 1999, he became a maintainer of the Guile Scheme interpreter. During his employment at Eazel, Stachowiak also contributed to Eye of GNOME, GNOME Libs, Gravevine, GnoP, and was a developer on Medusa, Bonobo, and GNOME VFS. Stachowiak was also a member of the GNOME Foundation board of directors. He told Fortune magazine, "[Eazel's] seemed like a borderline-crazy business plan ... But I said, 'Sure, I'll work on it.'" while his colleagues "fidgeted uncomfortably". Two months later, Eazel ceased operations, laying off its entire staff. Apple Inc. After the closure of Eazel, most of the remaining senior engineers (including Bud Tribble, Don Melton, Darin Adler, John Sullivan, Ken Kocienda, and Stachowiak) joined Apple's Safari team in June 2001 and were later joined by Netscape/Mozilla Firefox alumnus David Hyatt. On June 13, 2002, Stachowiak announced on a mailing list that Apple was releasing JavaScriptCore, a framework for Mac OS X that was based on KDE's JavaScript engine. Through the WebKit project, JavaScriptCore has since evolved into SquirrelFish Extreme, a JavaScript engine that compiles JavaScript into native machine code. On June 6, 2005, WebKit was made open source (coincidentally, Stachowiak's birthday). Web standards participation Stachowiak wrote on behalf of Apple, along with members of the Mozilla Foundation and Opera Software, in a proposal that the new HTML working group of the W3C adopt the Web Hypertext Application Technology Working Group's HTML5 as the starting point of its work. On 9 May 2007, the new HTML working group resolved to do that. In May 2009, Stachowiak co-authored the W3C HTML Design Principles for HTML5, one of his first major documentation projects for the W3C. As of 27 August 2009, Stachowiak has co-chaired the World Wide Web Consortium's HTML Working Group along with IBM's Sam Ruby and Microsoft's Paul Cotton. WebKit, the underpinnings of Safari, was published as open-source software on June 6, 2005.
When Safari was run with this latest version of WebKit, it passed the Web Standards Project's Acid2 test. Stachowiak reported on the WebKit blog on March 26, 2008, that the software had passed 100/100 on the Acid3 test, making Safari the first browser to pass. References External links Surfin' Safari - a weblog dedicated to discussing WebKit development, by various members of the WebKit team Apple Inc. employees Free software programmers World Wide Web Consortium GNOME developers Living people MIT School of Engineering alumni American people of Polish descent People from Koszalin 1976 births
2911654
https://en.wikipedia.org/wiki/Data%20conversion
Data conversion
Data conversion is the conversion of computer data from one format to another. Throughout a computer environment, data is encoded in a variety of ways. For example, computer hardware is built on the basis of certain standards, which requires that data contains, for example, parity bit checks. Similarly, the operating system is predicated on certain standards for data and file handling. Furthermore, each computer program handles data in a different manner. Whenever any one of these variables is changed, data must be converted in some way before it can be used by a different computer, operating system or program. Even different versions of these elements usually involve different data structures. For example, the changing of bits from one format to another, usually for the purpose of application interoperability or of the capability of using new features, is merely a data conversion. Data conversions may be as simple as the conversion of a text file from one character encoding system to another; or more complex, such as the conversion of office file formats, or the conversion of image formats and audio file formats. There are many ways in which data is converted within the computer environment. This may be seamless, as in the case of upgrading to a newer version of a computer program. Alternatively, the conversion may require processing by the use of a special conversion program, or it may involve a complex process of going through intermediary stages, or involving complex "exporting" and "importing" procedures, which may include converting to and from a tab-delimited or comma-separated text file. In some cases, a program may recognize several data file formats at the data input stage and then is also capable of storing the output data in several different formats. Such a program may be used to convert a file format. If the source format or target format is not recognized, then at times a third program may be available which permits the conversion to an intermediate format, which can then be reformatted using the first program. There are many possible scenarios. Information basics Before any data conversion is carried out, the user or application programmer should keep a few basics of computing and information theory in mind. These include: Information can easily be discarded by the computer, but adding information takes effort. The computer can add information only in a rule-based fashion. Upsampling the data or converting to a more feature-rich format does not add information; it merely makes room for that addition, which usually a human must do. Data stored in an electronic format can be quickly modified and analyzed. For example, a true color image can easily be converted to grayscale, while the opposite conversion is a painstaking process. Converting a Unix text file to a Microsoft (DOS/Windows) text file involves adding characters, but this does not increase the entropy since it is rule-based; whereas the addition of color information to a grayscale image cannot be done programmatically, since only a human knows which colors are needed for each section of the picture–there are no rules that can be used to automate that process. Converting a 24-bit PNG to a 48-bit one does not add information to it, it only pads existing RGB pixel values with zeroes, so that a pixel with a value of FF C3 56, for example, becomes FF00 C300 5600. 
The conversion makes it possible to change a pixel to have a value of, for instance, FF80 C340 56A0, but the conversion itself does not do that, only further manipulation of the image can. Converting an image or audio file in a lossy format (like JPEG or Vorbis) to a lossless (like PNG or FLAC) or uncompressed (like BMP or WAV) format only wastes space, since the same image with its loss of original information (the artifacts of lossy compression) becomes the target. A JPEG image can never be restored to the quality of the original image from which it was made, no matter how much the user tries the "JPEG Artifact Removal" feature of his or her image manipulation program. Automatic restoration of information that was lost through a lossy compression process would probably require important advances in artificial intelligence. Because of these realities of computing and information theory, data conversion is often a complex and error-prone process that requires the help of experts. Pivotal conversion Data conversion can occur directly from one format to another, but many applications that convert between multiple formats use an intermediate representation by way of which any source format is converted to its target. For example, it is possible to convert Cyrillic text from KOI8-R to Windows-1251 using a lookup table between the two encodings, but the modern approach is to convert the KOI8-R file to Unicode first and from that to Windows-1251. This is a more manageable approach; rather than needing lookup tables for all possible pairs of character encodings, an application needs only one lookup table for each character set, which it uses to convert to and from Unicode, thereby scaling the number of tables down from hundreds to a few tens. Pivotal conversion is similarly used in other areas. Office applications, when employed to convert between office file formats, use their internal, default file format as a pivot. For example, a word processor may convert an RTF file to a WordPerfect file by converting the RTF to OpenDocument and then that to WordPerfect format. An image conversion program does not convert a PCX image to PNG directly; instead, when loading the PCX image, it decodes it to a simple bitmap format for internal use in memory, and when commanded to convert to PNG, that memory image is converted to the target format. An audio converter that converts from FLAC to AAC decodes the source file to raw PCM data in memory first, and then performs the lossy AAC compression on that memory image to produce the target file. Lost and inexact data conversion The objective of data conversion is to maintain all of the data, and as much of the embedded information as possible. This can only be done if the target format supports the same features and data structures present in the source file. Conversion of a word processing document to a plain text file necessarily involves loss of formatting information, because plain text format does not support word processing constructs such as marking a word as boldface. For this reason, conversion from one format to another which does not support a feature that is important to the user is rarely carried out, though it may be necessary for interoperability, e.g. converting a file from one version of Microsoft Word to an earlier version to enable transfer and use by other users who do not have the same later version of Word installed on their computer. Loss of information can be mitigated by approximation in the target format. 
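Both the pivot-through-Unicode approach and rule-based approximation can be sketched with Python's standard library codecs; the strings and encodings below are arbitrary examples chosen for illustration, and the diacritic-stripping rule is a deliberately simpler stand-in for the language-aware ä-to-ae mapping discussed next.

```python
# Minimal sketch of pivotal conversion and lossy approximation (illustrative only).
import unicodedata

koi8_bytes = "Привет, мир".encode("koi8_r")   # stand-in for a KOI8-R source file

# Pivot through Unicode instead of mapping KOI8-R to Windows-1251 directly.
text = koi8_bytes.decode("koi8_r")            # KOI8-R  -> Unicode
cp1251_bytes = text.encode("cp1251")          # Unicode -> Windows-1251

# Rule-based, lossy approximation: decompose characters and drop combining
# marks, so "ä" degrades to "a" (a language-aware rule would prefer "ae").
def to_ascii(s: str) -> bytes:
    decomposed = unicodedata.normalize("NFKD", s)
    return decomposed.encode("ascii", errors="ignore")

print(to_ascii("Häagen-Dazs"))                # b'Haagen-Dazs'
```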
There is no way of converting a character like ä to ASCII, since the ASCII standard lacks it, but the information may be retained by approximating the character as ae. Of course, this is not an optimal solution, and can impact operations like searching and copying; and if a language makes a distinction between ä and ae, then that approximation does involve loss of information. Data conversion can also suffer from inexactitude, the result of converting between formats that are conceptually different. The WYSIWYG paradigm, extant in word processors and desktop publishing applications, versus the structural-descriptive paradigm, found in SGML, XML and many applications derived therefrom, like HTML and MathML, is one example. Using a WYSIWYG HTML editor conflates the two paradigms, and the result is HTML files with suboptimal, if not nonstandard, code. In the WYSIWYG paradigm a double linebreak signifies a new paragraph, as that is the visual cue for such a construct, but a WYSIWYG HTML editor will usually convert such a sequence to <BR><BR>, which is structurally no new paragraph at all. As another example, converting from PDF to an editable word processor format is a tough chore, because PDF records the textual information like engraving on stone, with each character given a fixed position and linebreaks hard-coded, whereas word processor formats accommodate text reflow. PDF does not know of a word space character—the space between two letters and the space between two words differ only in quantity. Therefore, a title with ample letter-spacing for effect will usually end up with spaces in the word processor file, for example INTRODUCTION with spacing of 1 em as I N T R O D U C T I O N on the word processor. Open vs. secret specifications Successful data conversion requires thorough knowledge of the workings of both source and target formats. In the case where the specification of a format is unknown, reverse engineering will be needed to carry out conversion. Reverse engineering can achieve close approximation of the original specifications, but errors and missing features can still result. Electronics Data format conversion can also occur at the physical layer of an electronic communication system. Conversion between line codes such as NRZ and RZ can be accomplished when necessary. See also Character encoding Comparison of programming languages (basic instructions)#Data conversions Data migration Data transformation Data wrangling Transcoding Distributed Data Management Architecture (DDM) Code conversion (computing) Source-to-source translation Presentation layer References Computer data
261925
https://en.wikipedia.org/wiki/Health%20care
Health care
Health care is the maintenance or improvement of health via the prevention, diagnosis, treatment, amelioration, or cure of disease, illness, injury, and other physical and mental impairments in people. Health care is delivered by health professionals and allied health fields. Medicine, dentistry, pharmacy, midwifery, nursing, optometry, audiology, psychology, occupational therapy, physical therapy, athletic training, and other health professions are all part of health care. It includes work done in providing primary care, secondary care, and tertiary care, as well as in public health. Access to health care may vary across countries, communities, and individuals, influenced by social and economic conditions as well as health policies. Providing health care services means "the timely use of personal health services to achieve the best possible health outcomes". Factors to consider in terms of health care access include financial limitations (such as insurance coverage), geographic barriers (such as additional transportation costs, the possibility to take paid time off of work to use such services), and personal limitations (lack of ability to communicate with health care providers, poor health literacy, low income). Limitations to health care services negatively affect the use of medical services, the efficacy of treatments, and overall outcomes (well-being, mortality rates). Health care systems are organizations established to meet the health needs of targeted populations. According to the World Health Organization (WHO), a well-functioning health care system requires a financing mechanism, a well-trained and adequately paid workforce, reliable information on which to base decisions and policies, and well-maintained health facilities to deliver quality medicines and technologies. An efficient health care system can contribute to a significant part of a country's economy, development, and industrialization. Health care is conventionally regarded as an important determinant in promoting the general physical and mental health and well-being of people around the world. An example of this was the worldwide eradication of smallpox in 1980, declared by the WHO as the first disease in human history to be eliminated by deliberate health care interventions. Delivery The delivery of modern health care depends on groups of trained professionals and paraprofessionals coming together as interdisciplinary teams. This includes professionals in medicine, psychology, physiotherapy, nursing, dentistry, midwifery and allied health, along with many others such as public health practitioners, community health workers and assistive personnel, who systematically provide personal and population-based preventive, curative and rehabilitative care services. While the definitions of the various types of health care vary depending on the different cultural, political, organizational, and disciplinary perspectives, there appears to be some consensus that primary care constitutes the first element of a continuing health care process and may also include the provision of secondary and tertiary levels of care. Health care can be defined as either public or private. Primary care Primary care refers to the work of health professionals who act as a first point of consultation for all patients within the health care system. Such a professional would usually be a primary care physician, such as a general practitioner or family physician.
Another professional would be a licensed independent practitioner such as a physiotherapist, or a non-physician primary care provider such as a physician assistant or nurse practitioner. Depending on the locality and health system organization, the patient may see another health care professional first, such as a pharmacist or nurse. Depending on the nature of the health condition, patients may be referred for secondary or tertiary care. Primary care is often used as the term for the health care services that play a role in the local community. It can be provided in different settings, such as urgent care centers that provide same-day appointments or services on a walk-in basis. Primary care involves the widest scope of health care, including all ages of patients, patients of all socioeconomic and geographic origins, patients seeking to maintain optimal health, and patients with all types of acute and chronic physical, mental and social health issues, including multiple chronic diseases. Consequently, a primary care practitioner must possess a wide breadth of knowledge in many areas. Continuity is a key characteristic of primary care, as patients usually prefer to consult the same practitioner for routine check-ups and preventive care, health education, and every time they require an initial consultation about a new health problem. The International Classification of Primary Care (ICPC) is a standardized tool for understanding and analyzing information on interventions in primary care based on the reason for the patient's visit. Common chronic illnesses usually treated in primary care may include, for example, hypertension, diabetes, asthma, COPD, depression and anxiety, back pain, arthritis or thyroid dysfunction. Primary care also includes many basic maternal and child health care services, such as family planning services and vaccinations. In the United States, the 2013 National Health Interview Survey found that skin disorders (42.7%), osteoarthritis and joint disorders (33.6%), back problems (23.9%), disorders of lipid metabolism (22.4%), and upper respiratory tract disease (22.1%, excluding asthma) were the most common reasons for accessing a physician. In the United States, primary care physicians have begun to deliver primary care outside of the managed care (insurance-billing) system through direct primary care, which is a subset of the more familiar concierge medicine. Physicians in this model bill patients directly for services, either on a pre-paid monthly, quarterly, or annual basis, or bill for each service in the office. Examples of direct primary care practices include Foundation Health in Colorado and Qliance in Washington. In the context of global population aging, with increasing numbers of older adults at greater risk of chronic non-communicable diseases, rapidly increasing demand for primary care services is expected in both developed and developing countries. The World Health Organization attributes the provision of essential primary care as an integral component of an inclusive primary health care strategy. Secondary care Secondary care includes acute care: necessary treatment for a short period of time for a brief but serious illness, injury, or other health condition. This care is often found in a hospital emergency department. Secondary care also includes skilled attendance during childbirth, intensive care, and medical imaging services. The term "secondary care" is sometimes used synonymously with "hospital care".
However, many secondary care providers, such as psychiatrists, clinical psychologists, occupational therapists, most dental specialties or physiotherapists, do not necessarily work in hospitals. Some primary care services are delivered within hospitals. Depending on the organization and policies of the national health system, patients may be required to see a primary care provider for a referral before they can access secondary care. In countries that operate under a mixed market health care system, some physicians limit their practice to secondary care by requiring patients to see a primary care provider first. This restriction may be imposed under the terms of the payment agreements in private or group health insurance plans. In other cases, medical specialists may see patients without a referral, and patients may decide whether self-referral is preferred. In other countries patient self-referral to a medical specialist for secondary care is rare as prior referral from another physician (either a primary care physician or another specialist) is considered necessary, regardless of whether the funding is from private insurance schemes or national health insurance. Allied health professionals, such as physical therapists, respiratory therapists, occupational therapists, speech therapists, and dietitians, also generally work in secondary care, accessed through either patient self-referral or through physician referral. Tertiary care Tertiary care is specialized consultative health care, usually for inpatients and on referral from a primary or secondary health professional, in a facility that has personnel and facilities for advanced medical investigation and treatment, such as a tertiary referral hospital. Examples of tertiary care services are cancer management, neurosurgery, cardiac surgery, plastic surgery, treatment for severe burns, advanced neonatology services, palliative, and other complex medical and surgical interventions. Quaternary care The term quaternary care is sometimes used as an extension of tertiary care in reference to advanced levels of medicine which are highly specialized and not widely accessed. Experimental medicine and some types of uncommon diagnostic or surgical procedures are considered quaternary care. These services are usually only offered in a limited number of regional or national health care centers. Home and community care Many types of health care interventions are delivered outside of health facilities. They include many interventions of public health interest, such as food safety surveillance, distribution of condoms and needle-exchange programs for the prevention of transmissible diseases. They also include the services of professionals in residential and community settings in support of self-care, home care, long-term care, assisted living, treatment for substance use disorders among other types of health and social care services. Community rehabilitation services can assist with mobility and independence after the loss of limbs or loss of function. This can include prostheses, orthotics, or wheelchairs. Many countries, especially in the west, are dealing with aging populations, so one of the priorities of the health care system is to help seniors live full, independent lives in the comfort of their own homes. 
There is an entire section of health care geared to providing seniors with help in day-to-day activities at home, such as transportation to and from doctor's appointments, along with many other activities that are essential for their health and well-being. Although they provide home care for older adults in cooperation, family members and care workers may harbor diverging attitudes and values towards their joint efforts. This state of affairs presents a challenge for the design of ICT (information and communication technology) for home care. Because statistics show that over 80 million Americans have taken time off of their primary employment to care for a loved one, many countries have begun offering programs such as the Consumer Directed Personal Assistant Program to allow family members to take care of their loved ones without giving up their entire income. With obesity in children rapidly becoming a major concern, health services often set up programs in schools aimed at educating children about nutritional eating habits, making physical education a requirement and teaching young adolescents to have a positive self-image. Ratings Health care ratings are ratings or evaluations of health care used to evaluate the process of care and health care structures and/or outcomes of health care services. This information is translated into report cards that are generated by quality organizations, nonprofits, consumer groups and media. This evaluation of quality is based on measures of hospital quality, health plan quality, physician quality, health care treatment quality for other health professionals, and patient experience. Related sectors Health care extends beyond the delivery of services to patients, encompassing many related sectors, and is set within a bigger picture of financing and governance structures. Health system A health system, also sometimes referred to as health care system or healthcare system, is the organization of people, institutions, and resources that deliver health care services to populations in need. Healthcare industry The healthcare industry incorporates several sectors that are dedicated to providing health care services and products. As a basic framework for defining the sector, the United Nations' International Standard Industrial Classification categorizes health care as generally consisting of hospital activities, medical and dental practice activities, and "other human health activities." The last class involves activities of, or under the supervision of, nurses, midwives, physiotherapists, scientific or diagnostic laboratories, pathology clinics, residential health facilities, patient advocates or other allied health professions. In addition, according to industry and market classifications, such as the Global Industry Classification Standard and the Industry Classification Benchmark, health care includes many categories of medical equipment, instruments and services including biotechnology, diagnostic laboratories and substances, drug manufacturing and delivery. For example, pharmaceuticals and other medical devices are the leading high technology exports of Europe and the United States. The United States dominates the biopharmaceutical field, accounting for three-quarters of the world's biotechnology revenues.
Health care research The quantity and quality of many health care interventions are improved through the results of science, such as that advanced through the medical model of health, which focuses on the eradication of illness through diagnosis and effective treatment. Many important advances have been made through health research, biomedical research and pharmaceutical research, which form the basis for evidence-based medicine and evidence-based practice in health care delivery. Health care research frequently engages directly with patients, and as such, issues of whom to engage and how to engage with them become important to consider when seeking to actively include them in studies. While no single best practice exists, the results of a systematic review on patient engagement suggest that research methods for patient selection need to account for both patient availability and willingness to engage. Health services research can lead to greater efficiency and equitable delivery of health care interventions, as advanced through the social model of health and disability, which emphasizes the societal changes that can be made to make populations healthier. Results from health services research often form the basis of evidence-based policy in health care systems. Health services research is also aided by initiatives in the field of artificial intelligence for the development of systems of health assessment that are clinically useful, timely, sensitive to change, culturally sensitive, low-burden, low-cost, built into standard procedures, and involve the patient. Health care financing There are generally five primary methods of funding health care systems: General taxation to the state, county or municipality Social health insurance Voluntary or private health insurance Out-of-pocket payments Donations to health charities In most countries, there is a mix of all five models, but this varies across countries and over time within countries. Aside from financing mechanisms, an important question should always be how much to spend on health care. For the purposes of comparison, this is often expressed as the percentage of GDP spent on health care. In OECD countries, for every extra $1,000 spent on health care, life expectancy falls by 0.4 years. A similar correlation is seen from the analysis carried out each year by Bloomberg. Clearly this kind of analysis is flawed in that life expectancy is only one measure of a health system's performance, but equally, the notion that more funding is better is not supported. In 2011, the health care industry consumed an average of 9.3 percent of GDP, or US$3,322 (PPP-adjusted) per capita, across the 34 OECD member countries. The US (17.7%, or US$ PPP 8,508), the Netherlands (11.9%, 5,099), France (11.6%, 4,118), Germany (11.3%, 4,495), Canada (11.2%, 5,669), and Switzerland (11%, 5,634) were the top spenders; however, life expectancy in the total population at birth was highest in Switzerland (82.8 years), Japan and Italy (82.7), Spain and Iceland (82.4), France (82.2) and Australia (82.0), while the OECD average exceeded 80 years for the first time in 2011: 80.1 years, a gain of 10 years since 1970. The US (78.7 years) ranked only 26th among the 34 OECD member countries, but had the highest costs by far. All OECD countries have achieved universal (or almost universal) health coverage, except the US and Mexico. (See also international comparisons.)
In the United States, where around 18% of GDP is spent on health care, the Commonwealth Fund analysis of spend and quality shows a clear correlation between worse quality and higher spending. Administration and regulation The management and administration of health care is vital to the delivery of health care services. In particular, the practice of health professionals and the operation of health care institutions is typically regulated by national or state/provincial authorities through appropriate regulatory bodies for purposes of quality assurance. Most countries have credentialing staff in regulatory boards or health departments who document the certification or licensing of health workers and their work history. Health information technology Health information technology (HIT) is "the application of information processing involving both computer hardware and software that deals with the storage, retrieval, sharing, and use of health care information, data, and knowledge for communication and decision making." Health information technology components: Electronic Health Record (EHR) - An EHR contains a patient's comprehensive medical history, and may include records from multiple providers. Electronic Medical Record (EMR) - An EMR contains the standard medical and clinical data gathered in one's provider's office. Personal Health Record (PHR) - A PHR is a patient's medical history that is maintained privately, for personal use. Medical Practice Management software (MPM) - is designed to streamline the day-to-day tasks of operating a medical facility. Also known as practice management software or practice management system (PMS). Health Information Exchange (HIE) - Health Information Exchange allows health care professionals and patients to appropriately access and securely share a patient's vital medical information electronically. See also :Category:Health care by country Healthcare system / Health professionals Health equity Health policy Tobacco control laws Universal health care By country: References External links Primary care Public services
149848
https://en.wikipedia.org/wiki/Combinatory%20logic
Combinatory logic
Combinatory logic is a notation to eliminate the need for quantified variables in mathematical logic. It was introduced by Moses Schönfinkel and Haskell Curry, and has more recently been used in computer science as a theoretical model of computation and also as a basis for the design of functional programming languages. It is based on combinators which were introduced by Schönfinkel in 1920 with the idea of providing an analogous way to build up functions—and to remove any mention of variables—particularly in predicate logic. A combinator is a higher-order function that uses only function application and earlier defined combinators to define a result from its arguments. In mathematics Combinatory logic was originally intended as a 'pre-logic' that would clarify the role of quantified variables in logic, essentially by eliminating them. Another way of eliminating quantified variables is Quine's predicate functor logic. While the expressive power of combinatory logic typically exceeds that of first-order logic, the expressive power of predicate functor logic is identical to that of first order logic (Quine 1960, 1966, 1976). The original inventor of combinatory logic, Moses Schönfinkel, published nothing on combinatory logic after his original 1924 paper. Haskell Curry rediscovered the combinators while working as an instructor at Princeton University in late 1927. In the late 1930s, Alonzo Church and his students at Princeton invented a rival formalism for functional abstraction, the lambda calculus, which proved more popular than combinatory logic. The upshot of these historical contingencies was that until theoretical computer science began taking an interest in combinatory logic in the 1960s and 1970s, nearly all work on the subject was by Haskell Curry and his students, or by Robert Feys in Belgium. Curry and Feys (1958), and Curry et al. (1972) survey the early history of combinatory logic. For a more modern treatment of combinatory logic and the lambda calculus together, see the book by Barendregt, which reviews the models Dana Scott devised for combinatory logic in the 1960s and 1970s. In computing In computer science, combinatory logic is used as a simplified model of computation, used in computability theory and proof theory. Despite its simplicity, combinatory logic captures many essential features of computation. Combinatory logic can be viewed as a variant of the lambda calculus, in which lambda expressions (representing functional abstraction) are replaced by a limited set of combinators, primitive functions without free variables. It is easy to transform lambda expressions into combinator expressions, and combinator reduction is much simpler than lambda reduction. Hence combinatory logic has been used to model some non-strict functional programming languages and hardware. The purest form of this view is the programming language Unlambda, whose sole primitives are the S and K combinators augmented with character input/output. Although not a practical programming language, Unlambda is of some theoretical interest. Combinatory logic can be given a variety of interpretations. Many early papers by Curry showed how to translate axiom sets for conventional logic into combinatory logic equations (Hindley and Meredith 1990). Dana Scott in the 1960s and 1970s showed how to marry model theory and combinatory logic. 
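As a rough illustration (not part of the original article text), the S and K primitives mentioned above can be written as curried higher-order functions in an ordinary programming language. The Python below is only a sketch; the reduction behavior it relies on, (K x y) = x and (S x y z) = (x z (y z)), is spelled out under "Examples of combinators" later in the article.

```python
# Illustrative sketch: the S and K combinators as curried Python functions.
# Python is just a convenient host language here; the equations come from the article.
K = lambda x: lambda y: x                        # constant-function maker: (K x y) = x
S = lambda x: lambda y: lambda z: x(z)(y(z))     # generalized application: (S x y z) = (x z (y z))

identity = S(K)(K)        # S K K behaves like the identity combinator I
assert identity(42) == 42
assert K("a")("b") == "a"
```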
Summary of lambda calculus Lambda calculus is concerned with objects called lambda-terms, which can be represented by the following three forms of strings: v, λv.E₁, and (E₁ E₂), where v is a variable name drawn from a predefined infinite set of variable names, and E₁ and E₂ are lambda-terms. Terms of the form λv.E₁ are called abstractions. The variable v is called the formal parameter of the abstraction, and E₁ is the body of the abstraction. The term λv.E₁ represents the function which, applied to an argument, binds the formal parameter v to the argument and then computes the resulting value of E₁; that is, it returns E₁, with every occurrence of v replaced by the argument. Terms of the form (E₁ E₂) are called applications. Applications model function invocation or execution: the function represented by E₁ is to be invoked, with E₂ as its argument, and the result is computed. If E₁ (sometimes called the applicand) is an abstraction, the term may be reduced: E₂, the argument, may be substituted into the body of E₁ in place of the formal parameter of E₁, and the result is a new lambda term which is equivalent to the old one. If a lambda term contains no subterms of the form (λv.E₁ E₂) then it cannot be reduced, and is said to be in normal form. The expression E[v := a] represents the result of taking the term E and replacing all free occurrences of v in it with a. Thus we write (λv.E a) = E[v := a]. By convention, we take (a b c d ... z) as shorthand for (...(((a b) c) d) ... z); that is, application is left associative. The motivation for this definition of reduction is that it captures the essential behavior of all mathematical functions. For example, consider the function that computes the square of a number. We might write: The square of x is x*x (using "*" to indicate multiplication). Here x is the formal parameter of the function. To evaluate the square for a particular argument, say 3, we insert it into the definition in place of the formal parameter: The square of 3 is 3*3. To evaluate the resulting expression 3*3, we would have to resort to our knowledge of multiplication and the number 3. Since any computation is simply a composition of the evaluation of suitable functions on suitable primitive arguments, this simple substitution principle suffices to capture the essential mechanism of computation. Moreover, in lambda calculus, notions such as '3' and '*' can be represented without any need for externally defined primitive operators or constants. It is possible to identify terms in lambda calculus which, when suitably interpreted, behave like the number 3 and like the multiplication operator, q.v. Church encoding. Lambda calculus is known to be computationally equivalent in power to many other plausible models for computation (including Turing machines); that is, any calculation that can be accomplished in any of these other models can be expressed in lambda calculus, and vice versa. According to the Church-Turing thesis, both models can express any possible computation. It is perhaps surprising that lambda-calculus can represent any conceivable computation using only the simple notions of function abstraction and application based on simple textual substitution of terms for variables. But even more remarkable is that abstraction is not even required. Combinatory logic is a model of computation equivalent to lambda calculus, but without abstraction. The advantage of this is that evaluating expressions in lambda calculus is quite complicated because the semantics of substitution must be specified with great care to avoid variable capture problems. In contrast, evaluating expressions in combinatory logic is much simpler, because there is no notion of substitution.
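To make the substitution machinery just described concrete, here is a minimal sketch. The encoding of terms as nested Python tuples is an assumption of this sketch, and it deliberately performs naive substitution with no alpha-renaming, which is exactly the variable-capture complication noted above.

```python
# Minimal sketch of lambda-terms and one step of beta-reduction.
# Encoding (an assumption of this sketch): a variable is a string,
# ("lam", v, body) is an abstraction, ("app", f, a) is an application.

def subst(term, v, arg):
    """Replace free occurrences of variable v in term with arg (no alpha-renaming, so capture can occur)."""
    if isinstance(term, str):
        return arg if term == v else term
    if term[0] == "lam":
        _, w, body = term
        return term if w == v else ("lam", w, subst(body, v, arg))
    _, f, a = term
    return ("app", subst(f, v, arg), subst(a, v, arg))

def beta(term):
    """Reduce ((lambda v. body) arg) at the root, if the term has that shape."""
    if isinstance(term, tuple) and term[0] == "app" and isinstance(term[1], tuple) and term[1][0] == "lam":
        _, (_, v, body), arg = term
        return subst(body, v, arg)
    return term

# ((lambda v. (v v)) y)  reduces to  (y y)
print(beta(("app", ("lam", "v", ("app", "v", "v")), "y")))   # ('app', 'y', 'y')
```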
Combinatory calculi Since abstraction is the only way to manufacture functions in the lambda calculus, something must replace it in the combinatory calculus. Instead of abstraction, combinatory calculus provides a limited set of primitive functions out of which other functions may be built. Combinatory terms A combinatory term has one of the following forms: x, P, or (E₁ E₂), where x is a variable, P is one of the primitive functions, and E₁ and E₂ are combinatory terms. The primitive functions are combinators, or functions that, when seen as lambda terms, contain no free variables. To shorten the notations, a general convention is that (E₁ E₂ E₃ ... Eₙ), or even E₁ E₂ E₃ ... Eₙ, denotes the term (...((E₁ E₂) E₃) ... Eₙ). This is the same general convention (left-associativity) as for multiple application in lambda calculus. Reduction in combinatory logic In combinatory logic, each primitive combinator comes with a reduction rule of the form (P x₁ ... xₙ) = E, where E is a term mentioning only variables from the set {x₁, ..., xₙ}. It is in this way that primitive combinators behave as functions. Examples of combinators The simplest example of a combinator is I, the identity combinator, defined by (I x) = x for all terms x. Another simple combinator is K, which manufactures constant functions: (K x) is the function which, for any argument, returns x, so we say ((K x) y) = x for all terms x and y. Or, following the convention for multiple application, (K x y) = x. A third combinator is S, which is a generalized version of application: (S x y z) = (x z (y z)). S applies x to y after first substituting z into each of them. Or put another way, x is applied to y inside the environment z. Given S and K, I itself is unnecessary, since it can be built from the other two: ((S K K) x) = (S K K x) = (K x (K x)) = x for any term x. Note that although ((S K K) x) = (I x) for any x, (S K K) itself is not equal to I. We say the terms are extensionally equal. Extensional equality captures the mathematical notion of the equality of functions: that two functions are equal if they always produce the same results for the same arguments. In contrast, the terms themselves, together with the reduction of primitive combinators, capture the notion of intensional equality of functions: that two functions are equal only if they have identical implementations up to the expansion of primitive combinators. There are many ways to implement an identity function; (S K K) and I are among these ways. (S K S) is yet another. We will use the word equivalent to indicate extensional equality, reserving equal for identical combinatorial terms. A more interesting combinator is the fixed point combinator or Y combinator, which can be used to implement recursion. Completeness of the S-K basis S and K can be composed to produce combinators that are extensionally equal to any lambda term, and therefore, by Church's thesis, to any computable function whatsoever. The proof is to present a transformation, T[ ], which converts an arbitrary lambda term into an equivalent combinator. T[ ] may be defined as follows: (1) T[x] => x (2) T[(E₁ E₂)] => (T[E₁] T[E₂]) (3) T[λx.E] => (K T[E]) (if x does not occur free in E) (4) T[λx.x] => I (5) T[λx.λy.E] => T[λx.T[λy.E]] (if x occurs free in E) (6) T[λx.(E₁ E₂)] => (S T[λx.E₁] T[λx.E₂]) (if x occurs free in E₁ or E₂) Note that T[ ] as given is not a well-typed mathematical function, but rather a term rewriter: Although it eventually yields a combinator, the transformation may generate intermediary expressions that are neither lambda terms nor combinators, via rule (5).
This process is also known as abstraction elimination. This definition is exhaustive: any lambda expression will be subject to exactly one of these rules (see Summary of lambda calculus above). It is related to the process of bracket abstraction, which takes an expression E built from variables and application and produces a combinator expression [x]E in which the variable x is not free, such that [x]E x = E holds. A very simple algorithm for bracket abstraction is defined by induction on the structure of expressions as follows: [x]y := K y [x]x := I [x](E₁ E₂) := S([x]E₁)([x]E₂) Bracket abstraction induces a translation from lambda terms to combinator expressions, by interpreting lambda-abstractions using the bracket abstraction algorithm. Conversion of a lambda term to an equivalent combinatorial term For example, we will convert the lambda term λx.λy.(y x) to a combinatorial term: T[λx.λy.(y x)] = Tλx.Tλy.(y x) (by 5) = T[λx.(S T[λy.y] T[λy.x])] (by 6) = T[λx.(S I T[λy.x])] (by 4) = T[λx.(S I (K T[x]))] (by 3) = T[λx.(S I (K x))] (by 1) = (S T[λx.(S I)] T[λx.(K x)]) (by 6) = (S (K (S I)) T[λx.(K x)]) (by 3) = (S (K (S I)) (S T[λx.K] T[λx.x])) (by 6) = (S (K (S I)) (S (K K) T[λx.x])) (by 3) = (S (K (S I)) (S (K K) I)) (by 4) If we apply this combinatorial term to any two terms x and y (by feeding them in a queue-like fashion into the combinator 'from the right'), it reduces as follows: (S (K (S I)) (S (K K) I) x y) = (K (S I) x (S (K K) I x) y) = (S I (S (K K) I x) y) = (I y (S (K K) I x y)) = (y (S (K K) I x y)) = (y (K K x (I x) y)) = (y (K (I x) y)) = (y (I x)) = (y x) The combinatory representation, (S (K (S I)) (S (K K) I)) is much longer than the representation as a lambda term, λx.λy.(y x). This is typical. In general, the T[ ] construction may expand a lambda term of length n to a combinatorial term of length Θ(n3). Explanation of the T[ ] transformation The T[ ] transformation is motivated by a desire to eliminate abstraction. Two special cases, rules 3 and 4, are trivial: λx.x is clearly equivalent to I, and λx.E is clearly equivalent to (K T[E]) if x does not appear free in E. The first two rules are also simple: Variables convert to themselves, and applications, which are allowed in combinatory terms, are converted to combinators simply by converting the applicand and the argument to combinators. It is rules 5 and 6 that are of interest. Rule 5 simply says that to convert a complex abstraction to a combinator, we must first convert its body to a combinator, and then eliminate the abstraction. Rule 6 actually eliminates the abstraction. λx.(E₁ E₂) is a function which takes an argument, say a, and substitutes it into the lambda term (E₁ E₂) in place of x, yielding (E₁ E₂)[x : = a]. But substituting a into (E₁ E₂) in place of x is just the same as substituting it into both E₁ and E₂, so (E₁ E₂)[x := a] = (E₁[x := a] E₂[x := a]) (λx.(E₁ E₂) a) = ((λx.E₁ a) (λx.E₂ a)) = (S λx.E₁ λx.E₂ a) = ((S λx.E₁ λx.E₂) a) By extensional equality, λx.(E₁ E₂) = (S λx.E₁ λx.E₂) Therefore, to find a combinator equivalent to λx.(E₁ E₂), it is sufficient to find a combinator equivalent to (S λx.E₁ λx.E₂), and (S T[λx.E₁] T[λx.E₂]) evidently fits the bill. E₁ and E₂ each contain strictly fewer applications than (E₁ E₂), so the recursion must terminate in a lambda term with no applications at all—either a variable, or a term of the form λx.E. 
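The bracket-abstraction algorithm above, extended with the "x not free" shortcut corresponding to rule (3) of T[ ], is small enough to sketch directly. The encoding of terms as nested Python pairs is an assumption of this sketch, not part of the article.

```python
# Sketch of bracket abstraction with the "x not free" shortcut.
# Encoding assumption: a term is either a string (a variable or one of the primitives
# "S", "K", "I") or a pair (f, a) meaning the application (f a).

def free_in(x, term):
    if isinstance(term, tuple):
        return free_in(x, term[0]) or free_in(x, term[1])
    return term == x

def abstract(x, term):
    """Return [x]term, a term in which x no longer occurs free."""
    if term == x:
        return "I"                                      # [x]x = I
    if not free_in(x, term):
        return ("K", term)                              # [x]E = K E when x is not free in E
    f, a = term
    return (("S", abstract(x, f)), abstract(x, a))      # [x](E1 E2) = S ([x]E1) ([x]E2)

# T[lambda x. lambda y. (y x)] computed as [x]([y](y x)):
result = abstract("x", abstract("y", ("y", "x")))
print(result)   # (('S', ('K', ('S', 'I'))), (('S', ('K', 'K')), 'I'))
                # i.e. (S (K (S I)) (S (K K) I)), matching the worked example above
```

Without the "x not free" shortcut, the same three-rule algorithm still terminates but produces a larger term, which illustrates why the simplifications discussed in the next section matter in practice.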
Simplifications of the transformation η-reduction The combinators generated by the T[ ] transformation can be made smaller if we take into account the η-reduction rule: T[λx.(E x)] = T[E] (if x is not free in E) λx.(E x) is the function which takes an argument, x, and applies the function E to it; this is extensionally equal to the function E itself. It is therefore sufficient to convert E to combinatorial form. Taking this simplification into account, the example above becomes:   T[λx.λy.(y x)] = ... = (S (K (S I)) T[λx.(K x)]) = (S (K (S I)) K) (by η-reduction) This combinator is equivalent to the earlier, longer one:   (S (K (S I)) K x y) = (K (S I) x (K x) y) = (S I (K x) y) = (I y (K x y)) = (y (K x y)) = (y x) Similarly, the original version of the T[ ] transformation transformed the identity function λf.λx.(f x) into (S (S (K S) (S (K K) I)) (K I)). With the η-reduction rule, λf.λx.(f x) is transformed into I. One-point basis There are one-point bases from which every combinator can be composed extensionally equal to any lambda term. The simplest example of such a basis is {X} where: X ≡ λx.((xS)K) It is not difficult to verify that: X (X (X X)) =β K and X (X (X (X X))) =β S. Since {K, S} is a basis, it follows that {X} is a basis too. The Iota programming language uses X as its sole combinator. Another simple example of a one-point basis is: X' ≡ λx.(x K S K) with (X' X') X' =β K and X' (X' X') =β S In fact, there exist infinitely many such bases. Combinators B, C In addition to S and K, Schönfinkel's paper included two combinators which are now called B and C, with the following reductions: (C f g x) = ((f x) g) (B f g x) = (f (g x)) He also explains how they in turn can be expressed using only S and K: B = (S (K S) K) C = (S (S (K (S (K S) K)) S) (K K)) These combinators are extremely useful when translating predicate logic or lambda calculus into combinator expressions. They were also used by Curry, and much later by David Turner, whose name has been associated with their computational use. Using them, we can extend the rules for the transformation as follows: T[x] ⇒ x T[(E₁ E₂)] ⇒ (T[E₁] T[E₂]) T[λx.E] ⇒ (K T[E]) (if x is not free in E) T[λx.x] ⇒ I T[λx.λy.E] ⇒ Tλx.Tλy.E (if x is free in E) T[λx.(E₁ E₂)] ⇒ (S T[λx.E₁] T[λx.E₂]) (if x is free in both E₁ and E₂) T[λx.(E₁ E₂)] ⇒ (C T[λx.E₁] T[E₂]) (if x is free in E₁ but not E₂) T[λx.(E₁ E₂)] ⇒ (B T[E₁] T[λx.E₂]) (if x is free in E₂ but not E₁) Using B and C combinators, the transformation of λx.λy.(y x) looks like this:    T[λx.λy.(y x)] = Tλx.Tλy.(y x) = T[λx.(C T[λy.y] x)] (by rule 7) = T[λx.(C I x)] = (C I) (η-reduction) = (traditional canonical notation : ) = (traditional canonical notation: ) And indeed, (C I x y) does reduce to (y x):    (C I x y) = (I y x) = (y x) The motivation here is that B and C are limited versions of S. Whereas S takes a value and substitutes it into both the applicand and its argument before performing the application, C performs the substitution only in the applicand, and B only in the argument. The modern names for the combinators come from Haskell Curry's doctoral thesis of 1930 (see B, C, K, W System). In Schönfinkel's original paper, what we now call S, K, I, B and C were called S, C, I, Z, and T respectively. The reduction in combinator size that results from the new transformation rules can also be achieved without introducing B and C, as demonstrated in Section 3.2 of. CLK versus CLI calculus A distinction must be made between the CLK as described in this article and the CLI calculus. 
The distinction corresponds to that between the λK and the λI calculus. Unlike the λK calculus, the λI calculus restricts abstractions to: λx.E where x has at least one free occurrence in E. As a consequence, combinator K is not present in the λI calculus nor in the CLI calculus. The constants of CLI are: I, B, C and S, which form a basis from which all CLI terms can be composed (modulo equality). Every λI term can be converted into an equal CLI combinator according to rules similar to those presented above for the conversion of λK terms into CLK combinators. See chapter 9 in Barendregt (1984). Reverse conversion The conversion L[ ] from combinatorial terms to lambda terms is trivial: L[I] = λx.x L[K] = λx.λy.x L[C] = λx.λy.λz.(x z y) L[B] = λx.λy.λz.(x (y z)) L[S] = λx.λy.λz.(x z (y z)) L[(E₁ E₂)] = (L[E₁] L[E₂]) Note, however, that this transformation is not the inverse transformation of any of the versions of T[ ] that we have seen. Undecidability of combinatorial calculus A normal form is any combinatory term in which the primitive combinators that occur, if any, are not applied to enough arguments to be simplified. It is undecidable whether a general combinatory term has a normal form, whether two combinatory terms are equivalent, and so on. This is equivalent to the undecidability of the corresponding problems for lambda terms. However, a direct proof is as follows: First, the term Ω = (S I I (S I I)) has no normal form, because it reduces to itself after three steps, as follows: (S I I (S I I)) = (I (S I I) (I (S I I))) = (S I I (I (S I I))) = (S I I (S I I)) and clearly no other reduction order can make the expression shorter. Now, suppose N were a combinator for detecting normal forms, such that (N x) = T if x has a normal form, and (N x) = F if it does not. (Here T and F represent the conventional Church encodings of true and false, λx.λy.x and λx.λy.y, transformed into combinatory logic; the combinatory versions are T = K and F = (K I).) Now let Z = (C (C (B N (S I I)) Ω) I) and consider the term (S I I Z). Does (S I I Z) have a normal form? It does if and only if the following do also: (S I I Z) = (I Z (I Z)) = (Z (I Z)) = (Z Z) = (C (C (B N (S I I)) Ω) I Z) (definition of Z) = (C (B N (S I I)) Ω Z I) = (B N (S I I) Z Ω I) = (N (S I I Z) Ω I) Now we need to apply N to (S I I Z). Either (S I I Z) has a normal form, or it does not. If it does have a normal form, then the foregoing reduces as follows: (N (S I I Z) Ω I) = (K Ω I) (definition of N) = Ω but Ω does not have a normal form, so we have a contradiction. But if (S I I Z) does not have a normal form, the foregoing reduces as follows: (N (S I I Z) Ω I) = (K I Ω I) (definition of N) = (I I) = I which means that the normal form of (S I I Z) is simply I, another contradiction. Therefore, the hypothetical normal-form combinator N cannot exist. The combinatory logic analogue of Rice's theorem says that there is no complete nontrivial predicate. A predicate is a combinator that, when applied, returns either T or F. A predicate N is nontrivial if there are two arguments A and B such that N A = T and N B = F. A combinator N is complete if and only if NM has a normal form for every argument M. The analogue of Rice's theorem then says that every complete predicate is trivial. The proof of this theorem is rather simple. Proof: By reductio ad absurdum. Suppose there is a complete nontrivial predicate, say N. Because N is supposed to be nontrivial there are combinators A and B such that (N A) = T and (N B) = F.
Define NEGATION ≡ λx.(if (N x) then B else A) ≡ λx.((N x) B A) Define ABSURDUM ≡ (Y NEGATION) Fixed point theorem gives: ABSURDUM = (NEGATION ABSURDUM), for ABSURDUM ≡ (Y NEGATION) = (NEGATION (Y NEGATION)) ≡ (NEGATION ABSURDUM). Because N is supposed to be complete either: (N ABSURDUM) = F or (N ABSURDUM) = T Case 1: F = (N ABSURDUM) = N (NEGATION ABSURDUM) = (N A) = T, a contradiction. Case 2: T = (N ABSURDUM) = N (NEGATION ABSURDUM) = (N B) = F, again a contradiction. Hence (N ABSURDUM) is neither T nor F, which contradicts the presupposition that N would be a complete non trivial predicate. Q.E.D. From this undecidability theorem it immediately follows that there is no complete predicate that can discriminate between terms that have a normal form and terms that do not have a normal form. It also follows that there is no complete predicate, say EQUAL, such that: (EQUAL A B) = T if A = B and (EQUAL A B) = F if A ≠ B. If EQUAL would exist, then for all A, λx.(EQUAL x A) would have to be a complete non trivial predicate. Applications Compilation of functional languages David Turner used his combinators to implement the SASL programming language. Kenneth E. Iverson used primitives based on Curry's combinators in his J programming language, a successor to APL. This enabled what Iverson called tacit programming, that is, programming in functional expressions containing no variables, along with powerful tools for working with such programs. It turns out that tacit programming is possible in any APL-like language with user-defined operators. Logic The Curry–Howard isomorphism implies a connection between logic and programming: every proof of a theorem of intuitionistic logic corresponds to a reduction of a typed lambda term, and conversely. Moreover, theorems can be identified with function type signatures. Specifically, a typed combinatory logic corresponds to a Hilbert system in proof theory. The K and S combinators correspond to the axioms AK: A → (B → A), AS: (A → (B → C)) → ((A → B) → (A → C)), and function application corresponds to the detachment (modus ponens) rule MP: from A and A → B infer B. The calculus consisting of AK, AS, and MP is complete for the implicational fragment of the intuitionistic logic, which can be seen as follows. Consider the set W of all deductively closed sets of formulas, ordered by inclusion. Then is an intuitionistic Kripke frame, and we define a model in this frame by This definition obeys the conditions on satisfaction of →: on one hand, if , and is such that and , then by modus ponens. On the other hand, if , then by the deduction theorem, thus the deductive closure of is an element such that , , and . Let A be any formula which is not provable in the calculus. Then A does not belong to the deductive closure X of the empty set, thus , and A is not intuitionistically valid. See also Applicative computing systems B, C, K, W system Categorical abstract machine Combinatory categorial grammar Explicit substitution Fixed point combinator Graph reduction machine Lambda calculus and Cylindric algebra, other approaches to modelling quantification and eliminating variables SKI combinator calculus Supercombinator To Mock a Mockingbird References Further reading Reprinted as Chapter 23 of Quine's Selected Logic Papers (1966), pp. 227–235 Schönfinkel, Moses, 1924, "Über die Bausteine der mathematischen Logik," translated as "On the Building Blocks of Mathematical Logic" in From Frege to Gödel: a source book in mathematical logic, 1879–1931, Jean van Heijenoort, ed. 
Harvard University Press, 1967. . The article that founded combinatory logic. Smullyan, Raymond, 1985. To Mock a Mockingbird. Knopf. . A gentle introduction to combinatory logic, presented as a series of recreational puzzles using bird watching metaphors. __, 1994. Diagonalization and Self-Reference. Oxford University Press. Chapters 17–20 are a more formal introduction to combinatory logic, with a special emphasis on fixed point results. Sørensen, Morten Heine B. and Paweł Urzyczyn, 1999. Lectures on the Curry–Howard Isomorphism. University of Copenhagen and University of Warsaw, 1999. e. A celebration of the development of combinators, a hundred years after they were introduced by Moses Schönfinkel in 1920. External links Stanford Encyclopedia of Philosophy: "Combinatory Logic" by Katalin Bimbó. 1920–1931 Curry's block notes. Keenan, David C. (2001) "To Dissect a Mockingbird: A Graphical Notation for the Lambda Calculus with Animated Reduction." Rathman, Chris, "Combinator Birds." A table distilling much of the essence of Smullyan (1985). Drag 'n' Drop Combinators. (Java Applet) Binary Lambda Calculus and Combinatory Logic. Combinatory logic reduction web server Combinatory logic Lambda calculus Logic in computer science
32682855
https://en.wikipedia.org/wiki/2004%20KV18
2004 KV18
2004 KV18 is an eccentric Neptune trojan trailing Neptune's orbit in the outer Solar System, approximately 70 kilometers in diameter. It was first observed on 24 May 2004, by astronomers at the Mauna Kea Observatories in Hawaii, United States. It was the eighth Neptune trojan identified and the second found at Neptune's trailing Lagrangian point (L5). Orbit and classification Neptune trojans are resonant trans-Neptunian objects (TNOs) in a 1:1 mean-motion orbital resonance with Neptune. These trojans have a semi-major axis and an orbital period very similar to Neptune's (30.10 AU; 164.8 years). 2004 KV18 belongs to the trailing group, which follows 60° behind Neptune in its orbit. It orbits the Sun with a semi-major axis of 30.370 AU at a distance of 24.7–36.1 AU once every 167 years and 4 months (61,132 days). Its orbit has a notably high eccentricity of 0.19 and an inclination of 14° with respect to the ecliptic. Orbital instability 2004 KV18 is not a primordial Neptune trojan and will leave the region on a relatively short time scale. The orbit of a Neptune trojan can only be stable when the eccentricity is less than 0.12. Its lifetime as a trailing Neptune trojan is on the order of 100,000 years into the future. Physical properties Diameter and albedo Based on a generic magnitude-to-diameter conversion, it measures approximately 71 kilometers in diameter using an absolute magnitude of 8.9 and an assumed albedo of 0.10. It is one of the smaller bodies among the first 17 Neptune trojans discovered so far, which measure between 60 and 200 kilometers (for absolute magnitudes of 9.3–6.6 and an assumed albedo of 0.10). Other estimates, implying an albedo higher than 0.10, gave a diameter of approximately 56 kilometers. Numbering and naming Due to its orbital uncertainty, this minor planet has not been numbered and its official discoverers have not been determined. If named, it will follow the naming scheme already established with 385571 Otrera, which is to name these objects after figures related to the Amazons, an all-female warrior tribe that fought in the Trojan War on the side of the Trojans against the Greeks. References External links MPEC 2011-O47 : 2004 KV18, MPEC – Minor Planet Electronic Circular Neptune trojans Minor planet object articles (unnumbered)
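As a worked illustration of the generic magnitude-to-diameter conversion cited above, the commonly used form is D [km] = 1329 / sqrt(albedo) × 10^(−H/5); the 1329 km constant is the standard value, though the exact constant used by the source estimate may differ slightly.

```python
from math import sqrt

# Generic conversion from absolute magnitude H and assumed geometric albedo to diameter.
# Shown only to reproduce the figure quoted in the article; constants are the common defaults.
H, albedo = 8.9, 0.10
diameter_km = 1329 / sqrt(albedo) * 10 ** (-H / 5)
print(round(diameter_km))   # ~70 km, consistent with the 70-71 km quoted above
```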
896946
https://en.wikipedia.org/wiki/Joe%20Born
Joe Born
Joseph Born (born May 5, 1969), better known as Joe Born, is an American Inventor and businessman. As CEO of Aiwa and previously Neuros Technology and contributing inventor to many of its products, Joe Born has advocated on behalf of open source hardware, digital rights, and generally on the subject of the global maker movement, and open source. Professional Accomplishments In 1995, after receiving a patent for a CD repair device, later named the SkipDoctor, Joe Born founded Digital Innovations, LLC. With an original investment of $15,000, the company was created to commercialize the skipdoctor invention. Joined in 1996 by Collin Anderson, they brought the invention to market in 1999, and, as of December 2013, had sold 10 million units of the SkipDoctor globally. In September 2001, the Neuros division was started within Digital Innovations to develop open digital media products. In December 2003, Neuros was spun off into a separate entity, Neuros Technology, LLC. Influenced by its developer community, Joe Born, as Chief of Neuros, has become a pioneer in the field of open source hardware, helping to influence many of its partners to become more open, including successfully lobbying Texas Instruments to release a free compiler for a previously closed Digital Signal Processor In March 2011, Joe Born and David W. Phillips founded Hale Devices (previously called Sonr Labs, Inc,) as a provider of Android audio peripherals. After helping to bring his then 10-year-old daughter, Lily Born's Kangaroo Cup invention to market in October 2012, Born has become an advocate for young inventors including advising the SEE/Dig-8 program at Nettlehorst Elementary School, a program created to teach product development and entrepreneurship to middle school students Speaking engagements are typically focused on student entrepreneurship, open source and the maker movement and include: Ohio Linux Fest, LinuxWorld, LugRadioLive, Various Linux User Groups & University of Chicago China Immersion program and Kellogg's Masters of Management in Manufacturing Program. In addition to consumer electronics, Born has also received patents in areas ranging from internal combustion engine components to cosmetic accessories. External links Joe Born's blog "Open Sesame" -(The Economist Story on open source hardware) Newsforge Article on Neuros Open Source Approach to Hardware and Software Development Linux Link Tech Show interview (audio), 2006 "What This Gadget Can Do Is Up to You" -(New York Times review of the Neuros OSD) "The Ultimate Music Buff" -(Business 2.0 coverage of Digital Innovations) The Kangaroo Cup References 1969 births Living people
16789051
https://en.wikipedia.org/wiki/Hardware-based%20full%20disk%20encryption
Hardware-based full disk encryption
Hardware-based full disk encryption (FDE) is available from many hard disk drive (HDD/SSD) vendors, including: ClevX, Hitachi, Integral Memory, iStorage Limited, Micron, Seagate Technology, Samsung, Toshiba, Viasat UK, Western Digital. The symmetric encryption key is maintained independently from the computer's CPU, thus allowing the complete data store to be encrypted and removing computer memory as a potential attack vector. Hardware-FDE has two major components: the hardware encryptor and the data store. There are currently four varieties of hardware-FDE in common use: Hard disk drive (HDD) FDE (self-encrypting drive) Enclosed hard disk drive FDE Removable Hard Drive FDE Bridge and Chipset (BC) FDE Hardware designed for a particular purpose can often achieve better performance than disk encryption software, and disk encryption hardware can be made more transparent to software than encryption done in software. As soon as the key has been initialised, the hardware should in principle be completely transparent to the OS and thus work with any OS. If the disk encryption hardware is integrated with the media itself the media may be designed for better integration. One example of such design would be through the use of physical sectors slightly larger than the logical sectors. Hardware-based full disk encryption Types Hard disk drive FDE Usually referred to as self-encrypting drive (SED). HDD FDE is made by HDD vendors using the OPAL and Enterprise standards developed by the Trusted Computing Group. Key management takes place within the hard disk controller and encryption keys are 128 or 256 bit Advanced Encryption Standard (AES) keys. Authentication on power up of the drive must still take place within the CPU via either a software pre-boot authentication environment (i.e., with a software-based full disk encryption component - hybrid full disk encryption) or with a BIOS password. Hitachi, Micron, Seagate, Samsung, and Toshiba are the disk drive manufacturers offering TCG OPAL SATA drives. HDDs have become a commodity so SED allow drive manufacturers to maintain revenue. Older technologies include the proprietary Seagate DriveTrust, and the older, and less secure, PATA Security command standard shipped by all drive makers including Western Digital. Enterprise SAS versions of the TCG standard are called "TCG Enterprise" drives. Enclosed hard disk drive FDE Within a standard hard drive form factor case the encryptor (BC), key store and a smaller form factor, commercially available, hard disk drive is enclosed. The enclosed hard disk drive's case can be tamper-evident, so when retrieved the user can be assured that the data has not been compromised. The encryptors electronics including the key store and integral hard drive (if it is solid-state) can be protected by other tamper respondent measures. The key can be purged, allowing a user to prevent his authentication parameters being used without destroying the encrypted data. Later the same key can be re-loaded into the Enclosed hard disk drive FDE, to retrieve this data. Tampering is not an issue for SEDs as they cannot be read without the decryption key, regardless of access to the internal electronics . For example: Viasat UK (formerly Stonewood Electronics) with their FlagStone and Eclypt drives or GuardDisk with an RFID token. Removable Hard Drive FDE The Inserted Hard Drive FDE allows a standard form factor hard disk drive to be inserted into it. 
The concept can be seen on This is an improvement on removing [unencrypted] hard drives from a computer and storing them in a safe when not in use. This design can be used to encrypt multiple drives using the same key. Generally they are not securely locked so the drive's interface is open to attack. Chipset FDE The encryptor bridge and chipset (BC) is placed between the computer and the standard hard disk drive, encrypting every sector written to it. Intel announced the release of the Danbury chipset but has since abandoned this approach. Characteristics Hardware-based encryption when built into the drive or within the drive enclosure is notably transparent to the user. The drive, except for bootup authentication, operates just like any drive, with no degradation in performance. There is no complication or performance overhead, unlike disk encryption software, since all the encryption is invisible to the operating system and the host computer's processor. The two main use cases are Data at Rest protection, and Cryptographic Disk Erasure. For Data at Rest protection a computer or laptop is simply powered off. The disk now self-protects all the data on it. The data is safe because all of it, even the OS, is now encrypted, with a secure mode of AES, and locked from reading and writing. The drive requires an authentication code which can be as strong as 32 bytes (2^256) to unlock. Disk sanitisation Crypto-shredding is the practice of 'deleting' data by (only) deleting or overwriting the encryption keys. When a cryptographic disk erasure (or crypto erase) command is given (with proper authentication credentials), the drive self-generates a new media encryption key and goes into a 'new drive' state. Without the old key, the old data becomes irretrievable and therefore an efficient means of providing disk sanitisation which can be a lengthy (and costly) process. For example, an unencrypted and unclassified computer hard drive that requires sanitising to conform with Department of Defense Standards must be overwritten 3+ times; a one Terabyte Enterprise SATA3 disk would take many hours to complete this process. Although the use of faster solid-state drives (SSD) technologies improves this situation, the take up by enterprise has so far been slow. The problem will worsen as disk sizes increase every year. With encrypted drives a complete and secure data erasure action takes just a few milliseconds with a simple key change, so a drive can be safely repurposed very quickly. This sanitisation activity is protected in SEDs by the drive's own key management system built into the firmware in order to prevent accidental data erasure with confirmation passwords and secure authentications related to the original key required. When keys are self-generated randomly, generally there is no method to store a copy to allow data recovery. In this case protecting this data from accidental loss or theft is achieved through a consistent and comprehensive data backup policy. The other method is for user-defined keys, for some Enclosed hard disk drive FDE, to be generated externally and then loaded into the FDE. Protection from alternative boot methods Recent hardware models circumvents booting from other devices and allowing access by using a dual Master Boot Record (MBR) system whereby the MBR for the operating system and data files is all encrypted along with a special MBR which is required to boot the operating system. 
In SEDs, all data requests are intercepted by their firmware, that does not allow decryption to take place unless the system has been booted from the special SED operating system which then loads the MBR of the encrypted part of the drive. This works by having a separate partition, hidden from view, which contains the proprietary operating system for the encryption management system. This means no other boot methods will allow access to the drive. Vulnerabilities Typically FDE, once unlocked, will remain unlocked as long as power is provided. Researchers at Universität Erlangen-Nürnberg have demonstrated a number of attacks based on moving the drive to another computer without cutting power. Additionally, it may be possible to reboot the computer into an attacker-controlled operating system without cutting power to the drive. When a computer with a self-encrypting drive is put into sleep mode, the drive is powered down, but the encryption password is retained in memory so that the drive can be quickly resumed without requesting the password. An attacker can take advantage of this to gain easier physical access to the drive, for instance, by inserting extension cables. The firmware of the drive may be compromised and so any data that is sent to it may be at risk. Even if the data is encrypted on the physical medium of the drive, the fact that the firmware is controlled by a malicious third-party means that it can be decrypted by that third-party. If data is encrypted by the operating system, and it is sent in a scrambled form to the drive, then it would not matter if the firmware is malicious or not. Criticism Hardware solutions have also been criticised for being poorly documented. Many aspects of how the encryption is done are not published by the vendor. This leaves the user with little possibility to judge the security of the product and potential attack methods. It also increases the risk of a vendor lock-in. In addition, implementing system wide hardware-based full disk encryption is prohibitive for many companies due to the high cost of replacing existing hardware. This makes migrating to hardware encryption technologies more difficult and would generally require a clear migration and central management solution for both hardware- and software-based full disk encryption solutions. however Enclosed hard disk drive FDE and Removable Hard Drive FDE are often installed on a single drive basis. See also Disk encryption hardware Disk encryption software Crypto-shredding Opal Storage Specification Yubikey Full disk encryption IBM Secure Blue References Disk encryption Cryptographic hardware
3723923
https://en.wikipedia.org/wiki/Active%20networking
Active networking
Active networking is a communication pattern that allows packets flowing through a telecommunications network to dynamically modify the operation of the network. Active network architecture is composed of execution environments (similar to a unix shell that can execute active packets), a node operating system capable of supporting one or more execution environments. It also consists of active hardware, capable of routing or switching as well as executing code within active packets. This differs from the traditional network architecture which seeks robustness and stability by attempting to remove complexity and the ability to change its fundamental operation from underlying network components. Network processors are one means of implementing active networking concepts. Active networks have also been implemented as overlay networks. What does it offer? Active networking allows the possibility of highly tailored and rapid "real-time" changes to the underlying network operation. This enables such ideas as sending code along with packets of information allowing the data to change its form (code) to match the channel characteristics. The smallest program that can generate a sequence of data can be found in the definition of Kolmogorov complexity. The use of real-time genetic algorithms within the network to compose network services is also enabled by active networking. How it relates to other networking paradigms Active networking relates to other networking paradigms primarily based upon how computing and communication are partitioned in the architecture. Active networking and software-defined networking Active networking is an approach to network architecture with in-network programmability. The name derives from a comparison with network approaches advocating minimization of in-network processing, based on design advice such as the "end-to-end argument". Two major approaches were conceived: programmable network elements ("switches") and capsules, a programmability approach that places computation within packets traveling through the network. Treating packets as programs later became known as "active packets". Software-defined networking decouples the system that makes decisions about where traffic is sent (the control plane) from the underlying systems that forward traffic to the selected destination (the data plane). The concept of a programmable control plane originated at the University of Cambridge in the Systems Research Group, where (using virtual circuit identifiers available in Asynchronous Transfer Mode switches) multiple virtual control planes were made available on a single physical switch. Control Plane Technologies (CPT) was founded to commercialize this concept. Fundamental challenges Active network research addresses the nature of how best to incorporate extremely dynamic capability within networks. In order to do this, active network research must address the problem of optimally allocating computation versus communication within communication networks. A similar problem related to the compression of code as a measure of complexity is addressed via algorithmic information theory. One of the challenges of active networking has been the inability of information theory to mathematically model the active network paradigm and enable active network engineering. This is due to the active nature of the network in which communication packets contain code that dynamically change the operation of the network. 
Fundamental advances in information theory are required in order to understand such networks. Nanoscale active networks As the limit in reduction of transistor size is reached with current technology, active networking concepts are being explored as a more efficient means of accomplishing computation and communication.<ref>Patwardhan, J. P.; Dwyer, C. L.; Lebeck, A. R.; Sorin, D. J., "NANA: A Nanoscale Active Network Architecture", ACM Journal on Emerging Technologies in Computing Systems (JETC), Vol. 2, No. 1, pages 1–30, January 2006.</ref> More on this can be found in nanoscale networking. See also Nanoscale networking Network processing Software-defined networking (SDN) Communication complexity Kolmogorov complexity References Further reading Towards an Active Network Architecture (1996), David L. Tennenhouse, et al., Computer Communication Review Active Networks and Active Network Management: A Proactive Management Framework by Stephen F. Bush and Amit Kulkarni, Kluwer Academic/Plenum Publishers, New York, Boston, Dordrecht, London, Moscow, 2001, 196 pp. Hardbound. Programmable Networks for IP Service Deployment by Galis, A., Denazis, S., Brou, C., Klein, C., Artech House Books, London, June 2004, 450 pp. External links Introduction to Active Networks (video) Network architecture
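As a toy illustration of the "capsule" idea described in this article, in which packets carry code that each node executes, the following hypothetical sketch shows a packet whose program is applied to its own payload at every hop. It is purely illustrative and is not based on any real active-network implementation.

```python
# Toy sketch of an "active packet": the packet carries a small program that each
# node applies to the payload before forwarding. Names and structure are invented.

def make_packet(payload, program):
    return {"payload": payload, "program": program}

def node_forward(packet):
    """A node executes the packet's program on its payload, then forwards the result."""
    packet["payload"] = packet["program"](packet["payload"])
    return packet

# A capsule whose code deduplicates its own data at every hop it traverses.
pkt = make_packet([1, 1, 2, 2, 3], lambda data: sorted(set(data)))
for _ in range(2):              # traverse two active nodes
    pkt = node_forward(pkt)
print(pkt["payload"])           # [1, 2, 3]
```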
62468339
https://en.wikipedia.org/wiki/November%20Story
November Story
November Story is an Indian Tamil-language crime thriller web series for Hotstar Specials, directed by Indhra Subramanian. Produced by Vikatan Televistas the series stars Tamannaah in the lead role along with Pasupathy M., G. M. Kumar and Namita Krishnamurthy. The series is a classic murder mystery where the quest to find the truth behind the crime unveils a series of hidden truths. It was released on Disney+ Hotstar on 20 May 2021. Synopsis Anuradha Ganesan is an Ethical Hacker who works in an contract with the Crime Records Bureau with her best friend Malarmannan. Plot Anu, an ethical hacker lives with her old father who is suffering from Alzheimer's and her maid. Cast Tamannaah as Anuradha Ganesan - Ganesan's elder daughter, Ethical Hacker by Profession G. M. Kumar as Ganesan - Anuradha and Mathi's father, a crime novel writer Pasupathy M. as Kulandhai Yesu - Mathi's foster father. The main antagonist of the series Johnny as vicenarian Kulandhai Yesu Ashwanth as young Kulandhai Yesu Namita Krishnamurthy as Mathi - Ganesan's biological younger daughter and Anuradha's younger sister; Kulandhai Yesu's adopted daughter Vivek Prasanna as Malarmannan - Anuradha's best friend Myna Nandhini as Chithra - Ganesan's caretaker Aruldass as Inspector Sudalai Janaki Suresh as Savitri - Mathi's caretaker Pujitha Devaraju as Neeta Ramchandhani Tharani Suresh Kumar as Sudar Ganesan - Ganesan's wife Arshath Feras as Binu K. Pooranesh as Ahmed Nishanth Naidu as Sandeep Supergood Subramani Episodes Production Development On November 2019, Vikatan Televistas, the television arm of the Tamil magazine Ananda Vikatan announced a web series for the streaming platform Disney+ Hotstar. Tentatively titled The November Story, the makers announced Indhra Subramanian as the director, while Tamannaah, Pasupathy M. and G. M. Kumar in prominent roles. Tamannaah stated about her digital debut stating that "the streaming platforms are also the new playground for accomplished actors looking to break grounds with more challenging roles outside the two-hour cinematic time-frame." She further added that about the character in the series "I love to get under the skin of characters I essay, and hence the longer web series format is the perfect medium to showcase my skills as it is almost like doing five films at one go. There are lots of detailing and one can explore the character in depth." On October 2020, Disney+ Hotstar announced the title of the series as November Story. Filming The shooting of the series began in early November 2019 and the first schedule of the series were completed within the end of the month. The team was able to film most of the series by March 2020 before the nationwide COVID-19 lockdown took place, although production and post-production of the series were affected by COVID-19 restrictions. The remaining portions were shot after lockdown and completed in January 2021. Release Disney+ Hotstar released the teaser of the series on 24 October 2020, during the announcement of their original contents in Tamil language for the platform. The trailer of the series was released on 6 May 2021, through the YouTube channel of Cinema Vikatan, along with its dubbed Hindi and Telugu versions. The entire show comprising seven episodes, was broadcast exclusively on the streaming service on in Tamil and also dubbed in Telugu and Hindi languages. Reception M. 
Suganth, editor-in-chief of The Times of India reviewed "The solid performances and production values of the series makes November Story engaging, despite its predictable arc." Ranjani Krishnakumar of Firstpost reviewed "November Story is an excellent entry into the pure-play murder mystery genre, but fails to deliver a satisfying pay-off." Haricharan Pudipeddi of Hindustan Times wrote "November Story unravels slowly, and its pace is an issue at times. But what keeps one engaged is the gripping screenplay that beautifully weaves together a web of incidents to produce something worthwhile. For the most part of the show, the writing is highly competent and one can’t find fault with until the climax which is needlessly long drawn." Avinash Ramachandran of The New Indian Express wrote "November Story does score high on the engagement factor. Barring the final pay-off that is not exactly an organic culmination, the series largely works. There is a lot of activity, even if they don’t necessarily add up. The forced humour, in particular, fails to add any flavour." Nandini Ramanath of Scroll.in stated "Always slick but equally slippery, the severely overstretched and needlessly complicated series benefits from rich atmospherics and sharp performances." Manoj Kumar R. in his review for The Indian Express praised the series as a "significant improvement compared to the current Tamil daily soaps on television", but labelled it as a "colossal disappointment" by the analysing the standards of web series. India Today's chief critic Janani K. reviewed it as "If not for its length and some logical loopholes, November Story could have been a great murder mystery." Film critic Srinivasa Ramanujam also gave a mixed verdict, in the review for The Hindu stating that the series "needed to pack in more punch in its core narrative." References External links November Story at Disney+ Hotstar Tamil-language Hotstar original programming 2021 web series debuts Indian web series Tamil-language web series
45102490
https://en.wikipedia.org/wiki/Stochastic%20empirical%20loading%20and%20dilution%20model
Stochastic empirical loading and dilution model
The stochastic empirical loading and dilution model (SELDM) is a stormwater quality model. SELDM is designed to transform complex scientific data into meaningful information about the risk of adverse effects of runoff on receiving waters, the potential need for mitigation measures, and the potential effectiveness of such management measures for reducing these risks. The U.S. Geological Survey developed SELDM in cooperation with the Federal Highway Administration to provide planning-level estimates of event mean concentrations, flows, and loads in stormwater from a site of interest and from an upstream basin. SELDM uses information about a highway site, the associated receiving-water basin, precipitation events, stormflow, water quality, and the performance of mitigation measures to produce a stochastic population of runoff-quality variables. Although SELDM is, nominally, a highway runoff model, it can be used to estimate flows, concentrations, and loads of runoff-quality constituents from other land use areas as well. SELDM was developed by the U.S. Geological Survey, so the model, source code, and all related documentation are provided free of any copyright restrictions according to U.S. copyright laws and the USGS Software User Rights Notice. SELDM is widely used to assess the potential effect of runoff from highways, bridges, and developed areas on receiving-water quality with and without the use of mitigation measures. Stormwater practitioners evaluating highway runoff commonly use data from the Highway Runoff Database (HRDB) with SELDM to assess the risks for adverse effects of runoff on receiving waters. SELDM is a stochastic mass-balance model A mass-balance approach is commonly applied to estimate the concentrations and loads of water-quality constituents in receiving waters downstream of an urban or highway-runoff outfall. In a mass-balance model, the loads from the upstream basin and the runoff source area are added to calculate the discharge, concentration, and load in the receiving water downstream of the discharge point. SELDM can do a stream-basin analysis and a lake-basin analysis. The stream-basin analysis uses a stochastic mass-balance analysis based on multi-year simulations including hundreds to thousands of runoff events. SELDM generates storm-event values for the site of interest (the highway site) and the upstream receiving stream to calculate flows, concentrations, and loads in the receiving stream downstream of the stormwater outfall. The lake-basin analysis also is a stochastic multi-year mass-balance analysis. The lake-basin analysis uses the highway loads that occur during runoff periods and the total annual loads from the lake basin to calculate annual loads to and from the lake. The lake-basin analysis uses the volume of the lake and pollutant-specific attenuation factors to calculate a population of average-annual lake concentrations. The annual flows and loads SELDM calculates for the stream and lake analyses also can be used to estimate total maximum daily loads (TMDLs) for the site of interest and the upstream lake basin. The TMDL can be based on the average of annual loads because the product of the average load times the number of years of record will be the sum-total load for that (simulated) period of record.
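To make the stochastic mass-balance idea concrete, the sketch below mixes a hypothetical population of upstream and runoff events using flow-weighted averaging. The lognormal distributions and parameter values are invented placeholders, not SELDM's actual statistics or algorithm.

```python
import random

# Illustrative Monte Carlo mass balance for one downstream constituent.
# All distributions and parameters are placeholders chosen for the example.
random.seed(1)

def downstream_emc(q_up, c_up, q_ro, c_ro):
    """Flow-weighted mixing of upstream water and highway runoff."""
    return (q_up * c_up + q_ro * c_ro) / (q_up + q_ro)

events = []
for _ in range(5000):                                   # synthetic storm events
    q_up = random.lognormvariate(2.0, 0.8)              # upstream streamflow
    c_up = random.lognormvariate(0.5, 0.6)              # upstream concentration
    q_ro = random.lognormvariate(0.5, 1.0)              # highway runoff flow
    c_ro = random.lognormvariate(2.5, 0.7)              # runoff concentration
    events.append(downstream_emc(q_up, c_up, q_ro, c_ro))

events.sort()
print("median downstream EMC:", round(events[len(events) // 2], 2))
print("90th-percentile EMC:  ", round(events[int(0.9 * len(events))], 2))
```

Ranking the simulated events, as in the last two lines, is what allows risk statements such as "the downstream concentration exceeds a criterion in roughly one storm in ten."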
The variability in annual values can be used to estimate the risk of exceedance and the margin of safety for the TMDL analysis Model description SELDM is a stochastic model because it uses Monte Carlo methods to produce the random combinations of input variable values needed to generate the stochastic population of values for each component variable. SELDM calculates the dilution of runoff in the receiving waters and the resulting downstream event mean concentrations and annual average lake concentrations. Results are ranked, and plotting positions are calculated, to indicate the level of risk of adverse effects caused by runoff concentrations, flows, and loads on receiving waters by storm and by year. Unlike deterministic hydrologic models, SELDM is not calibrated by changing values of input variables to match a historical record of values. Instead, input values for SELDM are based on site characteristics and representative statistics for each hydrologic variable. Thus, SELDM is an empirical model based on data and statistics rather than theoretical physicochemical equations. SELDM is a lumped parameter model because the highway site, the upstream basin, and the lake basin each are represented as a single homogeneous unit. Each of these source areas is represented by average basin properties, and results from SELDM are calculated as point estimates for the site of interest. Use of the lumped parameter approach facilitates rapid specification of model parameters to develop planning-level estimates with available data. The approach allows for parsimony in the required inputs to and outputs from the model and flexibility in the use of the model. For example, SELDM can be used to model runoff from various land covers or land uses by using the highway-site definition as long as representative water quality and impervious-fraction data are available. SELDM is easy to use because it has a simple graphical user interface and because much of the information and data needed to run SELDM are embedded in the model. SELDM provides input statistics for precipitation, prestorm flow, runoff coefficients, and concentrations of selected water-quality constituents from National datasets. Input statistics may be selected on the basis of the latitude, longitude, and physical characteristics of the site of interest and the upstream basin. The user also may derive and input statistics for each variable that are specific to a given site of interest or a given area. Information and data from hundreds to thousands of sites across the country were compiled to facilitate use of SELDM. Most of the necessary input data are obtained by defining the location of the site of interest and five simple basin properties. These basin properties are the drainage area, the basin length, the basin slope, the impervious fraction, and the basin development factor SELDM models the potential effect of mitigation measures by using Monte Carlo methods with statistics that approximate the net effects of structural and nonstructural best management practices (BMPs). Structural BMPs are defined as the components of the drainage pathway between the source of runoff and a stormwater discharge location that affect the volume, timing, or quality of runoff. SELDM uses a simple stochastic statistical model of BMP performance to develop planning-level estimates of runoff-event characteristics. This statistical approach can be used to represent a single BMP or an assemblage of BMPs. 
The SELDM BMP-treatment module has provisions for stochastic modeling of three stormwater treatments: volume reduction, hydrograph extension, and water-quality treatment. In SELDM, these three treatment variables are modeled by using the trapezoidal distribution and the rank correlation with the associated highway-runoff variables. The SELDM documentation describes methods for calculating the trapezoidal-distribution statistics and rank correlation coefficients for stochastic modeling of volume reduction, hydrograph extension, and water-quality treatment by structural stormwater BMPs and provides the calculated values for these variables. These statistics are different from the statistics commonly used to characterize or compare BMPs. They are designed to provide a stochastic transfer function to approximate the quantity, duration, and quality of BMP effluent given the associated inflow values for a population of storm events. Model interface SELDM was developed as a Microsoft Access® database software application to facilitate storage, handling, and use of the hydrologic dataset with a simple graphical user interface (GUI). The program's menu-driven GUI uses standard Microsoft Visual Basic for Applications® (VBA) interface controls to facilitate entry, processing, and output of data. Appendix 4 of the SELDM manual has detailed instructions for using the GUI. The SELDM user interface has one or more GUI forms that are used to enter four categories of input data, which include documentation, site and region information, hydrologic statistics, and water-quality data. The documentation data include information about the analyst, the project, and the analysis. The site and region data include the highway-site characteristics, the ecoregions, the upstream-basin characteristics, and, if a lake analysis is selected, the lake-basin characteristics. The hydrologic data include precipitation, streamflow, and runoff-coefficient statistics. The water-quality data include highway-runoff-quality statistics, upstream-water-quality statistics, downstream-water-quality definitions, and BMP-performance statistics. There also is a GUI form for running the model and accessing the distinct set of output files. The SELDM interface is designed to populate the database with data and statistics for the analysis and to specify index variables that are used by the program to query the database when SELDM is run. It is necessary to step through the input forms each time an analysis is run. Model output The results of each SELDM analysis are written to 5–10 output files, depending on the options that were selected during the analysis-specification process. The five output files that are created for every model run are the output documentation, highway-runoff quality, annual highway runoff, precipitation events, and stormflow files. If the Stream Basin or Stream and Lake Basin output options are selected, then the prestorm streamflow and dilution factor files also are created. If these same two output options are selected and, in addition, one or more downstream water-quality pairs are defined by using the water-quality menu, then the upstream water-quality and downstream water-quality output files also are created by SELDM. If the Stream and Lake Basin Output or Lake Basin Output option is selected, and one or more downstream water-quality pairs are defined by using the water-quality menu, then the Lake Analysis output file is created when the Lake Basin Analysis is run.
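A minimal sketch of drawing values from a trapezoidal distribution, the distribution SELDM uses for the BMP treatment variables described at the start of this section, may be helpful. The parameters below are illustrative, the function is not SELDM code, and the rank-correlation step with the associated highway-runoff variables is omitted.

```python
import math
import random

def sample_trapezoidal(a, b, c, d, rng=random):
    """Draw one value from a trapezoidal distribution with minimum a,
    lower mode b, upper mode c, and maximum d (a <= b <= c <= d)."""
    h = 2.0 / (d + c - a - b)          # height of the density plateau
    u = rng.random()
    f_b = h * (b - a) / 2.0            # CDF at the start of the plateau
    f_c = f_b + h * (c - b)            # CDF at the end of the plateau
    if u <= f_b:                       # rising limb
        return a + math.sqrt(2.0 * u * (b - a) / h)
    if u <= f_c:                       # flat top
        return b + (u - f_b) / h
    return d - math.sqrt(2.0 * (1.0 - u) * (d - c) / h)  # falling limb

# Hypothetical BMP volume-reduction ratios (fraction of inflow leaving the BMP)
samples = [sample_trapezoidal(0.1, 0.4, 0.7, 1.0) for _ in range(10000)]
print(sum(samples) / len(samples))  # ~0.55 for these illustrative parameters
```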
The output files are written as tab-delimited ASCII text files in a relational database (RDB) format that can be imported into many software packages. This output is designed to facilitate post-modeling analysis and presentation of results. The benefit of the Monte Carlo analysis is not to decrease uncertainty in the input statistics, but to represent the different combinations of the variables that determine potential risks of water-quality excursions. SELDM provides a method for rapid assessment of information that is otherwise difficult or impossible to obtain because it models the interactions among hydrologic variables (with different probability distributions) that result in a population of values that represent likely long-term outcomes from runoff processes and the potential effects of different mitigation measures. SELDM also provides the means for rapidly doing sensitivity analyses to determine the potential effects of different input assumptions on the risks for water-quality excursions. SELDM produces a population of storm-event and annual values to address the questions about the potential frequency, magnitude, and duration of water-quality excursions. The output represents a collection of random events rather than a time series. Each storm that is generated in SELDM is identified by sequence number and annual-load accounting year. The model generates each storm randomly; there is no serial correlation, and the order of storms does not reflect seasonal patterns. The annual-load accounting years, which are random collections of events generated with the sum of storm interevent times less than or equal to a year, are used to generate annual highway flows and loads for TMDL analysis and the lake basin analysis. In 2019, the USGS developed a model post-processor for SELDM to facilitate analysis and graphing of results from SELDM simulations; that software, known as InterpretSELDM, is available in the public domain on a USGS ScienceBase site. History SELDM was developed between 2010 and 2013 and was published as version 1.0.0 in March 2013. A small problem with the algorithm used to calculate upstream and lake-basin transport curves was discovered, and version 1.0.1 was released in July 2013. Version 1.0.2 was released in June 2016 to use the Cunnane plotting position formula for all output files. Version 1.0.3 was released in July 2018 to address issues with load calculations for constituents with concentrations of nanograms per liter or picograms per liter and to address other minor issues. Version 1.1.0 was released in May 2021 to add batch processing, change the highway runoff duration used for upstream transport curves from the discharge duration, which could vary from BMP to BMP, to the runoff-concurrent duration and volume, and fix a problem that allowed users to simulate a dependent variable in a lake analysis without the explanatory variable, which caused an error. The code for SELDM is open-source, public-domain code that can be downloaded from the SELDM software support page.
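Returning to the annual-load accounting years described above, the following sketch shows one possible reading of that bookkeeping: storms are accumulated until their interevent times sum to about a year. The interevent times and the exact grouping rule here are assumptions for illustration, not SELDM code.

```python
import random

HOURS_PER_YEAR = 365.25 * 24

def group_into_accounting_years(interevent_hours):
    """Group a random sequence of storms into annual-load accounting years by
    accumulating interevent times until a year of simulated time has elapsed."""
    years, current, elapsed = [], [], 0.0
    for storm_id, gap in enumerate(interevent_hours, start=1):
        if current and elapsed + gap > HOURS_PER_YEAR:   # start a new year
            years.append(current)
            current, elapsed = [], 0.0
        current.append(storm_id)
        elapsed += gap
    if current:
        years.append(current)
    return years

# Hypothetical interevent times (hours between successive simulated storms)
gaps = [random.expovariate(1.0 / 200.0) for _ in range(200)]
years = group_into_accounting_years(gaps)
print(len(years), [len(y) for y in years[:3]])  # number of years, storms per year
```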
See also Computer simulation Drainage basin Monte Carlo method Hydrology Stochastic Stormwater Surface runoff Surface-water hydrology Water pollution Water quality Water quality modelling References External links SELDM Documentation Page SELDM Software Support Page SELDM Software Archive Stormwater YouTube Page Environmental engineering Federal Highway Administration Stormwater management Environmental issues with water Hydrology models Hydrology and urban planning Water and the environment Water resource management in the United States United States Geological Survey
25679436
https://en.wikipedia.org/wiki/John%20R.%20Womersley
John R. Womersley
John Ronald Womersley (20 June 1907 – 7 March 1958) was a British mathematician and computer scientist who made important contributions to computer development and hemodynamics. Nowadays he is principally remembered for his contribution to blood flow, fluid dynamics and the eponymous Womersley number, a dimensionless parameter characterising unsteady flow. Biography Early life and education Womersley was born on 20 June 1907 in Morley, near Leeds in the West Riding of Yorkshire. He was the only child of George William and Ruth Womersley; his father managed a grocery store in Morley. He was educated at Morley Grammar School from 1917 to 1925. In 1925 he was awarded an Open Scholarship to the University of Cambridge and the Royal Scholarship in Physics at Imperial College of Science and Technology, but he chose to read mathematics at Imperial College. His courses included Pure and Applied Mathematics, Physics, Hydrodynamics and the Kinetic Theory of Gases. He was awarded a BSc degree with first-class honours in mathematics in 1929 and became an associate of the Royal College of Science. He remained at Imperial College for another two years and was awarded the Diploma of Imperial College (D.I.C.) in 1930. Work In 1930 Womersley left Imperial College to take up a position as a junior research officer at the Shirley Institute (British Cotton Industry Research Institute), Manchester. There he applied mathematical techniques to problems in textile manufacture, including research on cotton spinning, drafting fibrous materials, and, through L. H. C. Tippett, the use of mathematical statistics in industrial production and quality control. While at the Shirley Institute he also met Leslie Comrie and became interested in computational techniques. As a result, he spent a month at HM Nautical Almanac Office, London, learning Comrie's numerical approaches. In 1936 he collaborated with Douglas Hartree, who had built a Differential Analyser at the University of Manchester; together they devised a much-cited method for the numerical integration of partial differential equations. In 1937, with war looming, he joined the armaments research department at Woolwich as a scientific officer and worked on statistical techniques applied to ballistics and ammunition proofing. In 1942, after the outbreak of World War II, he was appointed assistant director of scientific research at the Ministry of Supply and asked to set up and head the Advisory Service on Statistical Methods (later known as SR17). This organization was responsible for advice and research into ammunition supply, engineering factories and the investigations of a range of Government Inspectorates. It was particularly important in ensuring quality control and promoting sample inspection methods to British industry during wartime. In 1944 he joined the British Association mathematical tables committee and in the same year he was appointed as the first superintendent of the Mathematics Division of the National Physical Laboratory (NPL). In addition to being responsible for statistical quality control, NPL was tasked with building an electronic computer, for which Womersley coined the name Automatic Computing Engine (ACE), echoing Babbage's Analytical Engine. Womersley was therefore responsible for the setup and operation of the first national computing centre in the UK.
One of his first actions was to visit the US for a fact-finding tour, where he learned about ENIAC (Electronic Numerical Integrator and Computer), Howard Aiken's Harvard machine, George Stibitz's machines and von Neumann's plans for the binary computer, EDVAC (Electronic Discrete Variable Automatic Computer). On his return, he recruited Alan Turing to work on the ACE section and backed him strongly. Womersley also recruited Donald Davies in 1947. However, progress on the ACE project was delayed, and Turing, who developed a dislike of Womersley and disdain for his abilities, left the project in 1948. Davies then took over from Turing, and a small experimental model, the Pilot ACE, was produced in 1950. Whether Womersley led NPL successfully has been questioned, although the consensus seems to be that he did a good job in difficult circumstances. He himself left the project in 1950, before the prototype Pilot ACE was completed, to join the British Tabulating Machine Company (BTM), a forerunner of International Computers Limited (ICL). There he recognised that the computers previously developed by academia or governments were too large and expensive to be commercially viable, and he recruited Andrew Booth, who had developed the All Purpose Electronic Computer at Birkbeck College, as a consultant to develop a smaller, inexpensive computer. The computer, copied from Booth's original design by Ray Bird, was named the Hollerith Electronic Computer (HEC1) and was Britain's first mass-produced business computer. In 1954 Womersley left BTM and joined a research team led by Donald McDonald at St Bartholomew's Hospital that was studying blood flow in arteries. This change seems to have been a temporary arrangement to 'fill in time whilst awaiting completion of arrangements to come to WADC' (the Wright Air Development Center, Dayton, Ohio, USA). His collaboration with McDonald may have been prompted by his eldest daughter, Barbara, who was studying medicine at St Bartholomew's. Whatever the reason, this move led to a new and highly productive period in his research, as he applied mathematical and computational techniques to the analysis of blood flow and hemodynamics. Most notably, in 1955 he published an article describing a dimensionless parameter (α) that characterised the nature of unsteady flow; this parameter has subsequently been called the Womersley number. In July 1955, as planned, he moved to WADC to take a post as acting chief of the Analysis Section, System Dynamics Branch, Aeronautical Research Laboratory. In 1956, he was promoted to Supervisory Mathematician and then Supervisory Aeronautical Research Engineer (Flight Systems), although he continued to publish on mathematical aspects of blood flow until his early death in 1958. His 1957 monograph on 'An elastic tube theory of pulse transmission and oscillatory flow in mammalian arteries' is widely regarded as a major influence in the field. In 1957 he returned to Britain for treatment of cancer. He underwent a number of operations in London and returned to Ohio in 1957, but never fully recovered and died at Ohio State University Hospital, Columbus, on 7 March 1958. Personal life Womersley married Jean Isobel Jordan in Hammersmith, London, in 1931. The couple had three daughters, Barbara, Ruth and Marion. Womersley's wife, Jean, lived in Dayton until 1996, and as of 2014, they were survived by two daughters and six grandchildren living in the US and Canada. Selected publications Womersley, JR (1957).
An Elastic Tube Theory of Pulse Transmission and Oscillatory Flow in Mammalian Arteries, Wright Air Development Center Technical Report 56-614, (sometimes referred to as WADC TR56-614 and sometimes cited as 1958) A complete bibliography was compiled by Brian E. Carpenter and Robert W. Doran References 1907 births 1958 deaths 20th-century British mathematicians Fluid dynamicists History of computing in the United Kingdom
21109338
https://en.wikipedia.org/wiki/Oklahoma%20City%20Air%20Logistics%20Complex
Oklahoma City Air Logistics Complex
The Oklahoma City Air Logistics Complex (OC-ALC), located at Tinker Air Force Base, Oklahoma, is one of the largest units in the Air Force Materiel Command. The complex performs programmed depot maintenance on the C/KC-135, B-1B, B-52 and E-3 aircraft; expanded phase maintenance on the Navy E-6 aircraft; and maintenance, repair and overhaul of F100, F101, F108, F110, F117, F118, F119, F135, and TF33 engines for the Air Force, Air Force Reserve, Air National Guard, Navy and foreign military sales. Additionally, the complex is responsible for the maintenance, repair and overhaul of a myriad of Air Force and Navy airborne accessory components, and the development and sustainment of a diverse portfolio of operational flight programs, test program sets, automatic test equipment, and industrial automation software. It was established as the Oklahoma Air Depot Control Area Command on 19 January 1943 and activated on 1 February 1943. It was subsequently redesignated the Oklahoma City Air Service Command on 17 May 1943, the Oklahoma City Air Technical Service Command on 14 November 1944, the Oklahoma Air Materiel Area on 2 July 1946, the Oklahoma City Air Logistics Center on 1 April 1974, and the Oklahoma City Air Logistics Complex on 10 July 2012. On 15 January 1988, the 2871st Test Squadron was activated at Tinker Air Force Base, assigned to the Oklahoma City Air Logistics Center. It was redesignated the 10th Test Squadron on 1 October 1992 and was later inactivated, with its resources and personnel absorbed by another unit. Structure In the late 2010s, the Oklahoma City Air Logistics Complex comprised five groups and eight staff offices providing USAF maintenance, repair, and overhaul support: The 76th Aircraft Maintenance Group directs, manages and accomplishes organic depot-level maintenance, repair, modification, overhaul, functional check flights and reclamation of B-1, B-52, C/KC/EC-135, E-3, KC-10, C-130 and E-6 aircraft. The group conducts depot support operations on a fleet of Air Force, Air Force Reserve, Air National Guard, Navy and Foreign Military Sales aircraft, as well as expeditionary combat-logistics depot maintenance and distribution support. The 76th Propulsion Maintenance Group is responsible for operation of the only Air Force depot-level maintenance facility supporting Air Force and Navy aircraft engines. The group performs repairs on engines and major engine assemblies for F-15, F-16, E-3, E-6, E-8, B-52, B-1, B-2, C-17, C-18, KC/RC-135, and F/A-22 aircraft. The group has been identified as the depot source of repair for the F-35 engine workload. The 76th Commodities Maintenance Group directs, manages, and operates organic depot-level maintenance facilities in the restoration of Air Force and Navy aircraft and engine parts to serviceable condition. These systems include the A-10, B-1, B-2, B-52, C-5, C-17, C-130, C-135, C-141, E-3, F-4, F-5, F-15, F-16, F-22, T-37 and T-38 aircraft. The group is also the Air Force Technology Repair Center for air and fuel accessories, constant-speed drives, and oxygen-related components. The 76th Software Maintenance Group is responsible for the development, modernization, and sustainment of embedded software in the Air Force's mission-critical weapon systems and associated with depot, acquisition, and logistics activities.
The group's multi-skilled, highly trained and motivated workforce is focused on the continuous process improvement (CPI) of software and systems engineering, including: Operational Flight Programs, Automatic Test Equipment, Test Program Sets, Jet Engine Test, Modeling and Simulation, Industrial Automation, Software Information Assurance, and multiple weapon systems software. The group also provides engineering support to the depot sustainment and acquisition communities. The 76th Maintenance Support Group manages industrial services, physical sciences laboratories, precision measurement equipment laboratories and tools for the Oklahoma City Air Logistics Complex. It provides engineering, installation, maintenance and management support for the complex's industrial plant equipment and facilities. In addition, the group provides environmental and occupational health support and serves as the focal point for energy reduction and point-of-use technology. List of commanders Brig Gen Donald Kirkland, September 2012 Brig Gen Mark K. Johnson, March 2015 Brig Gen Tom D. Miller, June 2017 Brig Gen Christopher D. Hill, June 2018 Maj Gen Jeffrey R. King, July 2020 References External links https://www.tinker.af.mil/About-Us/Biographies/Display/Article/1562017/brigadier-general-christopher-d-hill/ Brigadier General Christopher D. Hill Oklahoma City Air Logistics Complex Fact Sheet Military units and formations in Oklahoma Centers of the United States Air Force Logistics units and formations of the United States Air Force Buildings and structures in Oklahoma City Military in Oklahoma City 1943 establishments in Oklahoma
2367846
https://en.wikipedia.org/wiki/Oracle%20Financial%20Services%20Software
Oracle Financial Services Software
Oracle Financial Services Software Limited (OFSS) is a subsidiary of Oracle Corporation. It is a retail banking, corporate banking, and insurance technology solutions provider for the banking industry. It also provides risk and compliance management and performance measurement applications, as well as accounting, business process management, human resources and procurement tools. The company claims to have more than 900 customers in over 145 countries. Oracle Financial Services Software Limited was ranked No. 9 among IT companies in India and No. 253 overall in the Fortune India 500 list in 2011. History Part of Citicorp (I-Flex Solutions) Citicorp Overseas Software Ltd. ("COSL") was owned by Citibank in 1990 and later went on to be known as I-FLEX in the world market. After some time, it merged with another company, and a new company was formed, namely Citicorp Information Technology Industries Ltd ("CITIL"). Rajesh Hukku was named to head CITIL. While COSL's mandate was to serve Citicorp's internal needs globally and be a cost center, CITIL's mandate was to be profitable by serving not only Citicorp but the whole global financial software market. Largely known as I-FLEX, it was eventually renamed as Oracle Financial Services Software. Early history CITIL was started with a universal banking product called MicroBanker (which became successful in some English-speaking parts of Africa and other developing regions over the next 3–4 years) and the retail banking product Finware. In the mid-1990s, the firm developed Flexcube (stylized FLEXCUBE) at its Bangalore Centre. After the launch of Flexcube, all of CITIL's transnational banking products were brought under a common brand umbrella. Subsequently, the company's name was changed to I-FLEX Solutions India Ltd. Oracle Corporation Oracle purchased Citigroup's 41% stake in I-FLEX solutions for US$593 million in August 2005, a further 7.52% in March and April 2006, and 3.2% in an open-market purchase in mid-April 2006. On 14 August 2006, Oracle Financial Services announced it would acquire Mantas, a US-based anti-money laundering and compliance software company, for US$122.6 million. The company part-funded the transaction through a preferential share allotment to the majority shareholder, Oracle Corporation. On 12 January 2007, after an open offer to minority shareholders, Oracle increased its stake in I-FLEX to around 83%. On 4 April 2008, Oracle changed the name of the company to Oracle Financial Services Software Limited. On 24 October 2010, Oracle announced the appointment of Chaitanya M Kamat as Managing Director and CEO of Oracle Financial Services Software Limited. The outgoing CEO and MD, N.R.K. Raman, retired from these posts after 25 years of service. Oracle Financial Services Software Limited is now a major part of the Oracle Financial Services Global Business Unit (FSGBU) under Harinderjit (Sonny) Singh, who is the Vice President and Group Head of Oracle FSGBU worldwide. Products and services Oracle Financial Services Software Limited has two main streams of business: the products division (formerly called BPD – Banking Products Division) and PrimeSourcing. The company's offerings cover retail, corporate and investment banking, funds, cash management, trade, treasury, payments, lending, private wealth management, asset management, and business analytics. The company undertook a re-branding exercise in the latter half of 2008. As part of this, the corporate website was integrated with Oracle's website.
Various divisions, services, and products were renamed to reflect the new identity after the alignment with Oracle. Oracle Financial Services has since launched products for the Internal Capital Adequacy Assessment Process, exposure management, enterprise performance management, and energy and commodity trading compliance. The company promotes its business process outsourcing (BPO) business via its subsidiary Equinox Corporation, which is based in Irvine, California. Portfolio In 2002, DotEx International, a joint venture of NSE.IT and i-flex solutions Ltd, signed a memorandum of understanding (MoU) with BgSE Financials Ltd to provide Internet trading services. The company also opened its first overseas software development center in Singapore. See also List of IT consulting firms Fortune India 500 List of companies of India TCS BaNCS Finacle References Banking software companies Oracle acquisitions Software companies based in Mumbai Banks established in 1991 Financial services companies established in 1991 Indian companies established in 1991 Software companies established in 1991 1991 establishments in Maharashtra Companies listed on the National Stock Exchange of India Companies listed on the Bombay Stock Exchange
7978
https://en.wikipedia.org/wiki/Data%20Encryption%20Standard
Data Encryption Standard
The Data Encryption Standard (DES ) is a symmetric-key algorithm for the encryption of digital data. Although its short key length of 56 bits makes it too insecure for applications, it has been highly influential in the advancement of cryptography. Developed in the early 1970s at IBM and based on an earlier design by Horst Feistel, the algorithm was submitted to the National Bureau of Standards (NBS) following the agency's invitation to propose a candidate for the protection of sensitive, unclassified electronic government data. In 1976, after consultation with the National Security Agency (NSA), the NBS selected a slightly modified version (strengthened against differential cryptanalysis, but weakened against brute-force attacks), which was published as an official Federal Information Processing Standard (FIPS) for the United States in 1977. The publication of an NSA-approved encryption standard led to its quick international adoption and widespread academic scrutiny. Controversies arose from classified design elements, a relatively short key length of the symmetric-key block cipher design, and the involvement of the NSA, raising suspicions about a backdoor. The S-boxes that had prompted those suspicions were designed by the NSA to remove a backdoor they secretly knew (differential cryptanalysis). However, the NSA also ensured that the key size was drastically reduced so that they could break the cipher by brute force attack. The intense academic scrutiny the algorithm received over time led to the modern understanding of block ciphers and their cryptanalysis. DES is insecure due to the relatively short 56-bit key size. In January 1999, distributed.net and the Electronic Frontier Foundation collaborated to publicly break a DES key in 22 hours and 15 minutes (see chronology). There are also some analytical results which demonstrate theoretical weaknesses in the cipher, although they are infeasible in practice. The algorithm is believed to be practically secure in the form of Triple DES, although there are theoretical attacks. This cipher has been superseded by the Advanced Encryption Standard (AES). DES has been withdrawn as a standard by the National Institute of Standards and Technology. Some documents distinguish between the DES standard and its algorithm, referring to the algorithm as the DEA (Data Encryption Algorithm). History The origins of DES date to 1972, when a National Bureau of Standards study of US government computer security identified a need for a government-wide standard for encrypting unclassified, sensitive information. Around the same time, engineer Mohamed Atalla in 1972 founded Atalla Corporation and developed the first hardware security module (HSM), the so-called "Atalla Box" which was commercialized in 1973. It protected offline devices with a secure PIN generating key, and was a commercial success. Banks and credit card companies were fearful that Atalla would dominate the market, which spurred the development of an international encryption standard. Atalla was an early competitor to IBM in the banking market, and was cited as an influence by IBM employees who worked on the DES standard. The IBM 3624 later adopted a similar PIN verification system to the earlier Atalla system. On 15 May 1973, after consulting with the NSA, NBS solicited proposals for a cipher that would meet rigorous design criteria. None of the submissions was suitable. A second request was issued on 27 August 1974. 
This time, IBM submitted a candidate which was deemed acceptable—a cipher developed during the period 1973–1974 based on an earlier algorithm, Horst Feistel's Lucifer cipher. The team at IBM involved in cipher design and analysis included Feistel, Walter Tuchman, Don Coppersmith, Alan Konheim, Carl Meyer, Mike Matyas, Roy Adler, Edna Grossman, Bill Notz, Lynn Smith, and Bryant Tuckerman. NSA's involvement in the design On 17 March 1975, the proposed DES was published in the Federal Register. Public comments were requested, and in the following year two open workshops were held to discuss the proposed standard. There was criticism received from public-key cryptography pioneers Martin Hellman and Whitfield Diffie, citing a shortened key length and the mysterious "S-boxes" as evidence of improper interference from the NSA. The suspicion was that the algorithm had been covertly weakened by the intelligence agency so that they—but no one else—could easily read encrypted messages. Alan Konheim (one of the designers of DES) commented, "We sent the S-boxes off to Washington. They came back and were all different." The United States Senate Select Committee on Intelligence reviewed the NSA's actions to determine whether there had been any improper involvement. In the unclassified summary of their findings, published in 1978, the Committee wrote: However, it also found that Another member of the DES team, Walter Tuchman, stated "We developed the DES algorithm entirely within IBM using IBMers. The NSA did not dictate a single wire!" In contrast, a declassified NSA book on cryptologic history states: and Some of the suspicions about hidden weaknesses in the S-boxes were allayed in 1990, with the independent discovery and open publication by Eli Biham and Adi Shamir of differential cryptanalysis, a general method for breaking block ciphers. The S-boxes of DES were much more resistant to the attack than if they had been chosen at random, strongly suggesting that IBM knew about the technique in the 1970s. This was indeed the case; in 1994, Don Coppersmith published some of the original design criteria for the S-boxes. According to Steven Levy, IBM Watson researchers discovered differential cryptanalytic attacks in 1974 and were asked by the NSA to keep the technique secret. Coppersmith explains IBM's secrecy decision by saying, "that was because [differential cryptanalysis] can be a very powerful tool, used against many schemes, and there was concern that such information in the public domain could adversely affect national security." Levy quotes Walter Tuchman: "[t]hey asked us to stamp all our documents confidential... We actually put a number on each one and locked them up in safes, because they were considered U.S. government classified. They said do it. So I did it". Bruce Schneier observed that "It took the academic community two decades to figure out that the NSA 'tweaks' actually improved the security of DES." The algorithm as a standard Despite the criticisms, DES was approved as a federal standard in November 1976, and published on 15 January 1977 as FIPS PUB 46, authorized for use on all unclassified data. It was subsequently reaffirmed as the standard in 1983, 1988 (revised as FIPS-46-1), 1993 (FIPS-46-2), and again in 1999 (FIPS-46-3), the latter prescribing "Triple DES" (see below). On 26 May 2002, DES was finally superseded by the Advanced Encryption Standard (AES), following a public competition. 
On 19 May 2005, FIPS 46-3 was officially withdrawn, but NIST has approved Triple DES through the year 2030 for sensitive government information. The algorithm is also specified in ANSI X3.92 (Today X3 is known as INCITS and ANSI X3.92 as ANSI INCITS 92), NIST SP 800-67 and ISO/IEC 18033-3 (as a component of TDEA). Another theoretical attack, linear cryptanalysis, was published in 1994, but it was the Electronic Frontier Foundation's DES cracker in 1998 that demonstrated that DES could be attacked very practically, and highlighted the need for a replacement algorithm. These and other methods of cryptanalysis are discussed in more detail later in this article. The introduction of DES is considered to have been a catalyst for the academic study of cryptography, particularly of methods to crack block ciphers. According to a NIST retrospective about DES, The DES can be said to have "jump-started" the nonmilitary study and development of encryption algorithms. In the 1970s there were very few cryptographers, except for those in military or intelligence organizations, and little academic study of cryptography. There are now many active academic cryptologists, mathematics departments with strong programs in cryptography, and commercial information security companies and consultants. A generation of cryptanalysts has cut its teeth analyzing (that is, trying to "crack") the DES algorithm. In the words of cryptographer Bruce Schneier, "DES did more to galvanize the field of cryptanalysis than anything else. Now there was an algorithm to study." An astonishing share of the open literature in cryptography in the 1970s and 1980s dealt with the DES, and the DES is the standard against which every symmetric key algorithm since has been compared. Chronology Description DES is the archetypal block cipher—an algorithm that takes a fixed-length string of plaintext bits and transforms it through a series of complicated operations into another ciphertext bitstring of the same length. In the case of DES, the block size is 64 bits. DES also uses a key to customize the transformation, so that decryption can supposedly only be performed by those who know the particular key used to encrypt. The key ostensibly consists of 64 bits; however, only 56 of these are actually used by the algorithm. Eight bits are used solely for checking parity, and are thereafter discarded. Hence the effective key length is 56 bits. The key is nominally stored or transmitted as 8 bytes, each with odd parity. According to ANSI X3.92-1981 (Now, known as ANSI INCITS 92-1981), section 3.5: Like other block ciphers, DES by itself is not a secure means of encryption, but must instead be used in a mode of operation. FIPS-81 specifies several modes for use with DES. Further comments on the usage of DES are contained in FIPS-74. Decryption uses the same structure as encryption, but with the keys used in reverse order. (This has the advantage that the same hardware or software can be used in both directions.) Overall structure The algorithm's overall structure is shown in Figure 1: there are 16 identical stages of processing, termed rounds. There is also an initial and final permutation, termed IP and FP, which are inverses (IP "undoes" the action of FP, and vice versa). IP and FP have no cryptographic significance, but were included in order to facilitate loading blocks in and out of mid-1970s 8-bit based hardware. 
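The overall structure just described, 16 Feistel rounds with decryption differing only in the order of the subkeys, can be sketched compactly. In the sketch below the round function f is a placeholder (real DES uses the E expansion, subkey mixing, S-boxes, and P permutation described next), and the subkeys are made up; the point of the example is the criss-crossing of the halves and the reversed-subkey decryption, not DES itself.

```python
# Skeleton of a 16-round Feistel cipher; f and the subkeys are illustrative.
MASK32 = 0xFFFFFFFF

def f(half_block: int, subkey: int) -> int:
    """Placeholder round function; any function of (half block, subkey) works."""
    return ((half_block * 0x9E3779B1) ^ subkey) & MASK32   # not DES's F

def feistel(block64: int, subkeys, rounds=16) -> int:
    left, right = (block64 >> 32) & MASK32, block64 & MASK32
    for k in subkeys[:rounds]:
        left, right = right, left ^ f(right, k)   # criss-cross halves each round
    # swap halves at the end so the same loop with reversed subkeys inverts it
    return (right << 32) | left

subkeys = [(i * 0x12345) & MASK32 for i in range(16)]      # made-up subkeys
ct = feistel(0x0123456789ABCDEF, subkeys)
pt = feistel(ct, list(reversed(subkeys)))                  # decrypt = reverse keys
print(hex(pt))  # 0x123456789abcdef -- the original block is recovered
```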
Before the main rounds, the block is divided into two 32-bit halves and processed alternately; this criss-crossing is known as the Feistel scheme. The Feistel structure ensures that decryption and encryption are very similar processes—the only difference is that the subkeys are applied in the reverse order when decrypting. The rest of the algorithm is identical. This greatly simplifies implementation, particularly in hardware, as there is no need for separate encryption and decryption algorithms. The ⊕ symbol denotes the exclusive-OR (XOR) operation. The F-function scrambles half a block together with some of the key. The output from the F-function is then combined with the other half of the block, and the halves are swapped before the next round. After the final round, the halves are swapped; this is a feature of the Feistel structure which makes encryption and decryption similar processes. The Feistel (F) function The F-function, depicted in Figure 2, operates on half a block (32 bits) at a time and consists of four stages: Expansion: the 32-bit half-block is expanded to 48 bits using the expansion permutation, denoted E in the diagram, by duplicating half of the bits. The output consists of eight 6-bit (8 × 6 = 48 bits) pieces, each containing a copy of 4 corresponding input bits, plus a copy of the immediately adjacent bit from each of the input pieces to either side. Key mixing: the result is combined with a subkey using an XOR operation. Sixteen 48-bit subkeys—one for each round—are derived from the main key using the key schedule (described below). Substitution: after mixing in the subkey, the block is divided into eight 6-bit pieces before processing by the S-boxes, or substitution boxes. Each of the eight S-boxes replaces its six input bits with four output bits according to a non-linear transformation, provided in the form of a lookup table. The S-boxes provide the core of the security of DES—without them, the cipher would be linear, and trivially breakable. Permutation: finally, the 32 outputs from the S-boxes are rearranged according to a fixed permutation, the P-box. This is designed so that, after permutation, the bits from the output of each S-box in this round are spread across four different S-boxes in the next round. The alternation of substitution from the S-boxes, and permutation of bits from the P-box and E-expansion provides so-called "confusion and diffusion" respectively, a concept identified by Claude Shannon in the 1940s as a necessary condition for a secure yet practical cipher. Key schedule Figure 3 illustrates the key schedule for encryption—the algorithm which generates the subkeys. Initially, 56 bits of the key are selected from the initial 64 by Permuted Choice 1 (PC-1)—the remaining eight bits are either discarded or used as parity check bits. The 56 bits are then divided into two 28-bit halves; each half is thereafter treated separately. In successive rounds, both halves are rotated left by one or two bits (specified for each round), and then 48 subkey bits are selected by Permuted Choice 2 (PC-2)—24 bits from the left half, and 24 from the right. The rotations (denoted by "<<<" in the diagram) mean that a different set of bits is used in each subkey; each bit is used in approximately 14 out of the 16 subkeys. The key schedule for decryption is similar—the subkeys are in reverse order compared to encryption. Apart from that change, the process is the same as for encryption. The same 28 bits are passed to all rotation boxes. 
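A small sketch of the rotation schedule just described may be useful: each 28-bit half of the key is rotated left by one or two positions before each of the 16 rounds, for a total of 28 positions, so the halves return to their starting values after the last round. The PC-1 and PC-2 bit-selection tables are omitted here, and the starting halves are arbitrary illustrative values.

```python
# Rotation schedule used by the DES key schedule: 1 or 2 left-rotations per round.
ROTATIONS = [1, 1, 2, 2, 2, 2, 2, 2, 1, 2, 2, 2, 2, 2, 2, 1]

def rotate_left_28(half: int, n: int) -> int:
    """Rotate a 28-bit value left by n positions."""
    mask = (1 << 28) - 1
    return ((half << n) | (half >> (28 - n))) & mask

c, d = 0x0F0CCAF, 0x556678F   # illustrative 28-bit halves (normally from PC-1)
for n in ROTATIONS:
    c, d = rotate_left_28(c, n), rotate_left_28(d, n)
    # In real DES, 48 subkey bits would now be selected from (c, d) by PC-2.

print(sum(ROTATIONS))          # 28: total rotation over the 16 rounds
print(hex(c), hex(d))          # the original half values are restored
```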
Security and cryptanalysis Although more information has been published on the cryptanalysis of DES than any other block cipher, the most practical attack to date is still a brute-force approach. Various minor cryptanalytic properties are known, and three theoretical attacks are possible which, while having a theoretical complexity less than a brute-force attack, require an unrealistic number of known or chosen plaintexts to carry out, and are not a concern in practice. Brute-force attack For any cipher, the most basic method of attack is brute force—trying every possible key in turn. The length of the key determines the number of possible keys, and hence the feasibility of this approach. For DES, questions were raised about the adequacy of its key size early on, even before it was adopted as a standard, and it was the small key size, rather than theoretical cryptanalysis, which dictated a need for a replacement algorithm. As a result of discussions involving external consultants including the NSA, the key size was reduced from 128 bits to 56 bits to fit on a single chip. In academia, various proposals for a DES-cracking machine were advanced. In 1977, Diffie and Hellman proposed a machine costing an estimated US$20 million which could find a DES key in a single day. By 1993, Wiener had proposed a key-search machine costing US$1 million which would find a key within 7 hours. However, none of these early proposals were ever implemented—or, at least, no implementations were publicly acknowledged. The vulnerability of DES was practically demonstrated in the late 1990s. In 1997, RSA Security sponsored a series of contests, offering a $10,000 prize to the first team that broke a message encrypted with DES for the contest. That contest was won by the DESCHALL Project, led by Rocke Verser, Matt Curtin, and Justin Dolske, using idle cycles of thousands of computers across the Internet. The feasibility of cracking DES quickly was demonstrated in 1998 when a custom DES-cracker was built by the Electronic Frontier Foundation (EFF), a cyberspace civil rights group, at the cost of approximately US$250,000 (see EFF DES cracker). Their motivation was to show that DES was breakable in practice as well as in theory: "There are many people who will not believe a truth until they can see it with their own eyes. Showing them a physical machine that can crack DES in a few days is the only way to convince some people that they really cannot trust their security to DES." The machine brute-forced a key in a little more than 2 days' worth of searching. The next confirmed DES cracker was the COPACOBANA machine built in 2006 by teams of the Universities of Bochum and Kiel, both in Germany. Unlike the EFF machine, COPACOBANA consists of commercially available, reconfigurable integrated circuits. 120 of these field-programmable gate arrays (FPGAs) of type XILINX Spartan-3 1000 run in parallel. They are grouped in 20 DIMM modules, each containing 6 FPGAs. The use of reconfigurable hardware makes the machine applicable to other code breaking tasks as well. One of the more interesting aspects of COPACOBANA is its cost factor. One machine can be built for approximately $10,000. The cost decrease by roughly a factor of 25 over the EFF machine is an example of the continuous improvement of digital hardware—see Moore's law. Adjusting for inflation over 8 years yields an even higher improvement of about 30x. 
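The idea behind the brute-force attacks described above can be illustrated with a toy known-plaintext search. The sketch below, which assumes the third-party PyCryptodome package, restricts the search to a deliberately tiny keyspace (only the last key byte) so it completes instantly; a real attack must cover all 2^56 effective keys.

```python
# Toy known-plaintext brute force over a deliberately tiny keyspace (the last
# key byte only); a real DES search must cover all 2**56 effective keys.
# Assumes the third-party PyCryptodome package (Crypto.Cipher.DES).
from Crypto.Cipher import DES

secret_key = bytes(7) + bytes([0x42])      # pretend this is unknown
plaintext = b"8 bytes!"                    # a known 8-byte plaintext block
ciphertext = DES.new(secret_key, DES.MODE_ECB).encrypt(plaintext)

for guess in range(256):                   # 2**8 candidates instead of 2**56
    key = bytes(7) + bytes([guess])
    if DES.new(key, DES.MODE_ECB).encrypt(plaintext) == ciphertext:
        # Keys differing only in the parity bits are equivalent, so a match
        # identifies the 56 effective key bits rather than all 64 key bits.
        print(f"matching key byte: {guess:#04x}")
        break
```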
Since 2007, SciEngines GmbH, a spin-off company of the two project partners of COPACOBANA, has enhanced and developed successors of COPACOBANA. In 2008 their COPACOBANA RIVYERA reduced the time to break DES to less than one day, using 128 Spartan-3 5000s. SciEngines RIVYERA held the record in brute-force breaking DES, having utilized 128 Spartan-3 5000 FPGAs. Their 256 Spartan-6 LX150 model has further lowered this time. In 2012, David Hulton and Moxie Marlinspike announced a system with 48 Xilinx Virtex-6 LX240T FPGAs, each FPGA containing 40 fully pipelined DES cores running at 400 MHz, for a total capacity of 768 gigakeys/sec. The system can exhaustively search the entire 56-bit DES key space in about 26 hours, and this service is offered for a fee online. Attacks faster than brute force There are three attacks known that can break the full 16 rounds of DES with less complexity than a brute-force search: differential cryptanalysis (DC), linear cryptanalysis (LC), and Davies' attack. However, the attacks are theoretical and are generally considered infeasible to mount in practice; these types of attack are sometimes termed certificational weaknesses. Differential cryptanalysis was rediscovered in the late 1980s by Eli Biham and Adi Shamir; it was known earlier to both IBM and the NSA and kept secret. To break the full 16 rounds, differential cryptanalysis requires 2^47 chosen plaintexts. DES was designed to be resistant to DC. Linear cryptanalysis was discovered by Mitsuru Matsui, and needs 2^43 known plaintexts (Matsui, 1993); the method was implemented (Matsui, 1994), and was the first experimental cryptanalysis of DES to be reported. There is no evidence that DES was tailored to be resistant to this type of attack. A generalization of LC—multiple linear cryptanalysis—was suggested in 1994 (Kaliski and Robshaw), and was further refined by Biryukov and others (2004); their analysis suggests that multiple linear approximations could be used to reduce the data requirements of the attack by at least a factor of 4 (that is, 2^41 instead of 2^43). A similar reduction in data complexity can be obtained in a chosen-plaintext variant of linear cryptanalysis (Knudsen and Mathiassen, 2000). Junod (2001) performed several experiments to determine the actual time complexity of linear cryptanalysis, and reported that it was somewhat faster than predicted, requiring time equivalent to 2^39–2^41 DES evaluations. Improved Davies' attack: while linear and differential cryptanalysis are general techniques and can be applied to a number of schemes, Davies' attack is a specialized technique for DES, first suggested by Donald Davies in the 1980s, and improved by Biham and Biryukov (1997). The most powerful form of the attack requires 2^50 known plaintexts, has a computational complexity of 2^50, and has a 51% success rate. There have also been attacks proposed against reduced-round versions of the cipher, that is, versions of DES with fewer than 16 rounds. Such analysis gives an insight into how many rounds are needed for safety, and how much of a "security margin" the full version retains. Differential-linear cryptanalysis was proposed by Langford and Hellman in 1994, and combines differential and linear cryptanalysis into a single attack. An enhanced version of the attack can break 9-round DES with 2^15.8 chosen plaintexts and has a 2^29.2 time complexity (Biham and others, 2002).
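The 26-hour figure quoted earlier for the 2012 FPGA system follows directly from the size of the key space; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the 2012 figure: searching the 56-bit DES key
# space at 768 gigakeys per second.
keys = 2 ** 56                 # 72,057,594,037,927,936 possible keys
rate = 768e9                   # keys tested per second
hours = keys / rate / 3600
print(f"{hours:.1f} hours")    # about 26.1 hours
```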
Minor cryptanalytic properties DES exhibits the complementation property, namely that if y = E_K(x) then ~y = E_{~K}(~x), where ~v denotes the bitwise complement of v, E_K denotes encryption with key K, and x and y denote plaintext and ciphertext blocks respectively. The complementation property means that the work for a brute-force attack could be reduced by a factor of 2 (or a single bit) under a chosen-plaintext assumption. By definition, this property also applies to the TDES cipher. DES also has four so-called weak keys. Encryption (E) and decryption (D) under a weak key K have the same effect (see involution): E_K(E_K(x)) = x for all x, or equivalently, E_K = D_K. There are also six pairs of semi-weak keys. Encryption with one of the pair of semiweak keys, K_1, operates identically to decryption with the other, K_2: E_{K_1}(E_{K_2}(x)) = x for all x, or equivalently, E_{K_2} = D_{K_1}. It is easy enough to avoid the weak and semiweak keys in an implementation, either by testing for them explicitly, or simply by choosing keys randomly; the odds of picking a weak or semiweak key by chance are negligible. The keys are not really any weaker than any other keys anyway, as they do not give an attack any advantage. DES has also been proved not to be a group, or more precisely, the set {E_K} (for all possible keys K) under functional composition is not a group, nor "close" to being a group. This was an open question for some time, and if it had been the case, it would have been possible to break DES, and multiple encryption modes such as Triple DES would not increase the security, because repeated encryption (and decryptions) under different keys would be equivalent to encryption under another, single key. Simplified DES Simplified DES (SDES) was designed for educational purposes only, to help students learn about modern cryptanalytic techniques. SDES has similar structure and properties to DES, but has been simplified to make it much easier to perform encryption and decryption by hand with pencil and paper. Some people feel that learning SDES gives insight into DES and other block ciphers, and insight into various cryptanalytic attacks against them. Replacement algorithms Concerns about security and the relatively slow operation of DES in software motivated researchers to propose a variety of alternative block cipher designs, which started to appear in the late 1980s and early 1990s: examples include RC5, Blowfish, IDEA, NewDES, SAFER, CAST5 and FEAL. Most of these designs kept the 64-bit block size of DES, and could act as a "drop-in" replacement, although they typically used a 64-bit or 128-bit key. In the Soviet Union the GOST 28147-89 algorithm was introduced, with a 64-bit block size and a 256-bit key, which was also used in Russia later. DES itself can be adapted and reused in a more secure scheme. Many former DES users now use Triple DES (TDES), which was described and analysed by one of DES's patentees (see FIPS Pub 46-3); it involves applying DES three times with two (2TDES) or three (3TDES) different keys. TDES is regarded as adequately secure, although it is quite slow. A less computationally expensive alternative is DES-X, which increases the key size by XORing extra key material before and after DES. GDES was a DES variant proposed as a way to speed up encryption, but it was shown to be susceptible to differential cryptanalysis. On January 2, 1997, NIST announced that it wished to choose a successor to DES. In 2001, after an international competition, NIST selected a new cipher, the Advanced Encryption Standard (AES), as a replacement. The algorithm which was selected as the AES was submitted by its designers under the name Rijndael.
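A quick empirical check of the complementation property noted at the start of this section, ~E_K(x) = E_{~K}(~x), can be sketched with the third-party PyCryptodome package; the key below is a common textbook example value, and the snippet is illustrative rather than part of any standard.

```python
# Demonstration of the DES complementation property using PyCryptodome (assumed
# installed): complementing both the key and the plaintext complements the
# ciphertext.
from Crypto.Cipher import DES

def complement(data: bytes) -> bytes:
    return bytes(b ^ 0xFF for b in data)

key = bytes.fromhex("133457799BBCDFF1")    # textbook example DES key
plaintext = b"ABCDEFGH"                    # one 8-byte block

ct = DES.new(key, DES.MODE_ECB).encrypt(plaintext)
ct_comp = DES.new(complement(key), DES.MODE_ECB).encrypt(complement(plaintext))

print(ct_comp == complement(ct))           # True
```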
Other finalists in the NIST AES competition included RC6, Serpent, MARS, and Twofish. See also Brute Force: Cracking the Data Encryption Standard DES supplementary material Skipjack (cipher) Triple DES Notes References (preprint) Biham, Eli and Shamir, Adi, Differential Cryptanalysis of the Data Encryption Standard, Springer Verlag, 1993. , . Biham, Eli and Alex Biryukov: An Improvement of Davies' Attack on DES. J. Cryptology 10(3): 195–206 (1997) Biham, Eli, Orr Dunkelman, Nathan Keller: Enhancing Differential-Linear Cryptanalysis. ASIACRYPT 2002: pp254–266 Biham, Eli: A Fast New DES Implementation in Software Cracking DES: Secrets of Encryption Research, Wiretap Politics, and Chip Design, Electronic Frontier Foundation (preprint). Campbell, Keith W., Michael J. Wiener: DES is not a Group. CRYPTO 1992: pp512–520 Coppersmith, Don. (1994). . IBM Journal of Research and Development, 38(3), 243–250. Diffie, Whitfield and Martin Hellman, "Exhaustive Cryptanalysis of the NBS Data Encryption Standard" IEEE Computer 10(6), June 1977, pp74–84 Ehrsam and others., Product Block Cipher System for Data Security, , Filed February 24, 1975 Gilmore, John, "Cracking DES: Secrets of Encryption Research, Wiretap Politics and Chip Design", 1998, O'Reilly, . Junod, Pascal. "On the Complexity of Matsui's Attack." Selected Areas in Cryptography, 2001, pp199–211. Kaliski, Burton S., Matt Robshaw: Linear Cryptanalysis Using Multiple Approximations. CRYPTO 1994: pp26–39 Knudsen, Lars, John Erik Mathiassen: A Chosen-Plaintext Linear Attack on DES. Fast Software Encryption - FSE 2000: pp262–272 Langford, Susan K., Martin E. Hellman: Differential-Linear Cryptanalysis. CRYPTO 1994: 17–25 Levy, Steven, Crypto: How the Code Rebels Beat the Government—Saving Privacy in the Digital Age, 2001, . National Bureau of Standards, Data Encryption Standard, FIPS-Pub.46. National Bureau of Standards, U.S. Department of Commerce, Washington D.C., January 1977. Christof Paar, Jan Pelzl, "The Data Encryption Standard (DES) and Alternatives", free online lectures on Chapter 3 of "Understanding Cryptography, A Textbook for Students and Practitioners". Springer, 2009. External links FIPS 46-3: The official document describing the DES standard (PDF) COPACOBANA, a $10,000 DES cracker based on FPGAs by the Universities of Bochum and Kiel DES step-by-step presentation and reliable message encoding application A Fast New DES Implementation in Software - Biham On Multiple Linear Approximations RFC4772 : Security Implications of Using the Data Encryption Standard (DES) Broken block ciphers
6212351
https://en.wikipedia.org/wiki/Reynolds%20and%20Reynolds
Reynolds and Reynolds
The Reynolds and Reynolds Company is a private corporation based in Dayton, Ohio. Its primary business is providing business forms, management software and professional services to car dealerships. Its software is used to manage sales logistics at dealerships. It also produces forms used in medicine and insurance. Reynolds and Reynolds was founded in 1866 as a printer of standardized business forms. It began developing and marketing digital products in the 1960s. This was followed by a major downsizing of the printing division and subsequent advancements in its software products. By the 1980s, Reynolds and Reynolds had won contracts with all of the Big Three automotive manufacturers, as well as some insurance businesses. The company went public in 1961, but was re-formed as a private company in 2006, when it was merged with Universal Computer Systems, resulting in a culture clash between the two companies. History Early history Reynolds and Reynolds was founded by Lucius Reynolds and his brother-in-law, James Gardner, in June 1866 in Dayton, Ohio. It was a small printing shop founded with $500 in capital and originally named Reynolds and Gardner. It made standardized business documents using carbon copy paper. A year after Reynolds was founded, Gardner sold his interest in the company to co-founder Lucius' father, Ira Reynolds, and the company was renamed to its current namesake, Reynolds and Reynolds. Co-owners Ira and Lucius died in 1880 and 1913 respectively. The youngest of the Reynolds family, Edwin Stanton Reynolds, took over. In 1927, Reynolds and Reynolds won a contract to provide all of the business forms for Chevrolet dealerships. The company opened new offices throughout the U.S. in the 1930s, and had 19 sales offices by the end of the decade. A controlling interest in the company was acquired in 1939 by Richard Hallam Grant, Sr., ending the Reynolds family ownership. He became the company's president in 1941. A new printing facility was built in 1948 in Celina, Ohio, and another in 1953 in Dallas, Texas, in addition to the one built in Los Angeles in 1928. Reynolds became a public company in 1961. In the 1960s, Reynolds opened new printing facilities in North Hollywood, Los Angeles, New Jersey and Canada. In 1963, Reynolds expanded into Canada through the acquisition of the automotive business unit of Windsor Office Supply, forming Reynolds and Reynolds (Canada) Ltd. By the end of the decade it had about $50 million in revenues. Establishment of software business Reynolds and Reynolds first entered into the electronic accounting market with the acquisition of a Boston-based accounting software developer in 1960. The software division was doing well in the 1970s, but its products were out-of-date by the 1980s. At the time, data could not be shared between departments and only one user was allowed on the system at a time. Each computer came at a cost of more than $100,000. Even as the computer division grew, the company's overall business revenue declined due to paper business forms becoming obsolete. In 1986, the company acquired Arnold Corporation, which increased Reynolds' revenues 50 percent and expanded its market presence to other industries besides car dealerships. The head of the computers division, David Holmes, was appointed CEO in 1989. He led the company's first large-scale lay-off in the printing division, cutting headcount and manufacturing space in half. The employees resisted the changes he incorporated. 
According to Forbes, this move was necessary and led to increases in profit and revenues. After Holmes retired, he was replaced by former IBM executive Lloyd G. "Buzz" Waterhouse, who created an eBusiness department to focus on internet technologies. In 2000, Reynolds also acquired the HAC Group, a learning, customer relationship management and web services company for retailers and manufacturers. The following year CarsDirect.com and Reynolds and Reynolds introduced a car shopping website called CarsDirect Connect. In November 2002, it acquired Networkcar Inc. (now Verizon Networkfleet) and further developed its telematics device, CAReader. This product communicates a car's mechanical status to a dealer. Reynolds sold Networkcar to Hughes Telematics for $17.7 million in 2006. Acquisitions and growth In the 1980s, Reynolds and Reynolds signed agreements with the rest of the Big Three automotive manufacturers, several major insurers, General Electric and others. In 1986 the company acquired National Medical Computer Services and a business forms company called Arnold Corporation. By the end of that year, Reynolds had more than $200 million in annual revenue, 42% of which came from business forms. Reynolds acquired several smaller technology companies in the 1990s and further developed its software products. Reynolds and Reynolds acquired PD Medical Systems in 1994, forming Reynolds Healthcare Systems. Reynolds Healthcare Systems later acquired a business document company, Fiscal Information, which serves radiologists. From 1994 to 1996, David Holmes led the acquisition of several other business forms and computer businesses outside the automotive industry for a total of $155 million. By 2000, Reynolds and Reynolds had revenues of $800 million and more than one-third of its users were General Motors dealerships. It sold the Information Solutions Group (ISG), which primarily sold business forms and supplies to non-automotive companies, that year to the Carlyle Group for $360 million. On August 8, 2006, Reynolds and Reynolds announced it was becoming a private company through a $2.8 billion acquisition by Houston-based Universal Computer Systems (UCS). The combined organization had a 40% market share in the dealer management systems sector. According to Automotive News, there was a "major culture clash" between the two companies. For example, the new CEO would not hire smokers and required annual physicals to maintain health insurance. Recent history After the merger with UCS, Robert Brockman became CEO of the combined entity. He introduced more discipline to the company's software development, resulting in more modern software products and a greater breadth of features. A series of legal disputes between Reynolds and General Motors (GM) began in 2007. Through a GM program called the Integrated Dealership Management System (GMIDMS), Reynolds provided software to GM dealerships through GM. When Reynolds would not make changes to its software requested by GM, GM alleged it was a breach of contract. A settlement was reached in 2008, which ended Reynolds' participation in GM's program. In 2008, Reynolds acquired DiversiForm, a Beaverton, Oregon-based printer of forms and business documents for car dealerships. The terms of the deal were not disclosed. In August 2013, it acquired the newsletter company IMN. This was followed by an acquisition that November of the customer retention software vendor XtreamService, also for a non-disclosed sum. 
It acquired AddOnAuto in May 2014, which was the company's fifth acquisition in a little over twelve months. AddOnAuto develops software for shopping for car accessories. In 2017, Reynolds acquired Xpressdocs, a Fort Worth, Texas-based marketing solutions provider for franchise organizations. In October 2020, federal court documents were unsealed showing that Brockman had been indicted for charges of money laundering, evidence tampering, destruction of evidence, and wire fraud. Brockman is accused of using "a family charitable trust based in Bermuda and other offshore entities to hide assets from the Internal Revenue Service while failing to pay taxes", totaling $2 billion in untaxed income. Brockman pleaded not guilty and was released on a $1 million bond. On June 8, 2021, Reynolds announced that it was acquiring Gubagoo, a leading provider of conversational commerce and digital retail tools for both automotive dealerships and OEMs in North America. Current software and services Reynolds and Reynolds is a software and document printing company that primarily serves the automotive industry. It develops and markets the ERA and POWER suites of dealer management systems. Its software is used for inventory, accounting, contract documents and other business logistics. For example, one Reynolds application called AddOnAuto can visualize what a car will look like with accessories, while docuPAD adds a touch-screen on top of a desk that customers use to go through vehicle sales paperwork and interact with options. Reynolds also provides paper business forms, consulting and training. It provides some software and services to other industries, like medical and insurance. Its customer service has been recognized with awards like the STAR (Software Technical Assistance Recognition) Award from The Help Desk Institute. It is one of the three largest vendors in the dealership management software segment. Product history Reynolds and Reynolds started as a printer of standardized business forms on carbon paper. By the 1940s, Reynolds' business was divided into four main areas: automotive, medical, custom forms and Post-Rite Peg Boards. Reynolds' first electronic accounting service was introduced in 1963. Its parts inventory software product, called Electronic Parts Inventory Control (EPIC), was released in beta in 1966. It was renamed RAPIC upon full release the following year. This was followed by the accounting and management software called LEASe and an accounts receivable product. At first, clients sent hole-punched accounting records to a Reynolds processing center, which would print a complete accounting that was sent back to the client by mail. The development of modems and internet technology in the 1970s led to several advancements. Reynolds provided 3,600 specialized modems to dealerships between 1974 and 1978. The modems communicated with Reynolds' VIM-brand minicomputers at 80 Reynolds locations, which provided computing power and printed forms. This eliminated the need for clients to ship data to Reynolds on tapes and allowed daily access to online services. By the end of the 1970s, batch processing and computer processing centers were being phased out in favor of personal computers kept at the dealership. In 1978 and 1982, Reynolds introduced VIM-brand computer systems that were kept at dealerships. By 1986, the VIM-based dealer management system had helped Reynolds acquire a 45 percent market share and was on its fifth generation with 9,000 installations. 
In 1987 Reynolds moved to a software model with its first release of the ERA dealer management software, which was a complete rewrite of its prior programming. ERA allows users to manage logistics for sales, finance, service and parts across departments. That same year Reynolds developed a digital, graphical parts catalog program for selecting and ordering automotive parts. This was followed by the Vehicle Locators and Marketing Network sales toolsets. By 1997, Reynolds and Reynolds had more than 30 applications for various functions of a car dealership. In February 2000, Reynolds formed a joint venture with Automatic Data Processing, Inc. and CCC Information Services, Inc. to create a web-based dealer-to-dealer parts network called ChoiceParts. In January 2002, Reynolds and Reynolds announced it was switching from a Unix to a Windows-based system for its core software. This caused "a flurry of discussion in the automobile market." According to Automotive News, the Unix-based system could support more users, but the Microsoft software was compatible with more of the newer applications being used by dealerships. Reynolds also developed the Reynolds Generations Series Suite in collaboration with Microsoft, but the product was not successful in the marketplace. It was discontinued in 2005. In 2011 Reynolds and Reynolds introduced the current version of its dealer management software, called ERA-IGNITE, which reduced the number of screens needed to perform tasks by two-thirds. Reynolds and Reynolds offers Dealer Management Systems, Document Services, Consulting and Training, and Data Management. References Notes External links Business services companies established in 1866 1866 establishments in Ohio Companies based in Dayton, Ohio Service companies of the United States Software companies based in Ohio Software companies of the United States
6195854
https://en.wikipedia.org/wiki/James%20Turnbull
James Turnbull
James Turnbull is an Australian free software and open source author, security specialist, and software developer. He lives in Brooklyn, New York, where he is VP of Engineering at Sotheby's and an advisor at Access Now. Prior to that he was co-chair of the Velocity conference, led startup advocacy at Microsoft, was founder and CTO at Empatico, CTO at Kickstarter, VP of Engineering at Venmo and VP of Service at Docker. He was also VP of Technology Operations for the open-source company Puppet Labs. Career Turnbull has been involved in technology and the open-source community since the early 1990s. He has written eleven books on engineering, operations, security and open-source software: Monitoring with Prometheus The Packer Book The Terraform Book The Art of Monitoring The Docker Book The Logstash Book Pro Puppet (Apress 2011) Pro Linux System Administration (Apress 2009) Pulling Strings with Puppet (Apress 2008) Pro Nagios 2.0 (Apress 2006) Hardening Linux (Apress 2004) He has also published numerous articles on Linux and open source technology. Free Software involvement Turnbull is a contributor to Docker, the open source logging tool Logstash, Riemann, Prometheus, the qpsmtpd SMTP daemon, and the Puppet configuration management tool. At linux.conf.au 2008, Turnbull was the Treasurer, a member of the papers committee, and the coordinator of the mini-conference program. He is a member of Linux Australia, serving as its President in 2010 and on its Executive Council in 2008. He has also previously served on the committee of Linux Users of Victoria. References External links Blog Homepage Riemann Living people Free software programmers Australian computer specialists Year of birth missing (living people) Writers from Brooklyn
25380195
https://en.wikipedia.org/wiki/List%20of%20people%20from%20Bakersfield%2C%20California
List of people from Bakersfield, California
This is a list of notable people from Bakersfield, California. To be included, a person must have an article and a clear connection to the city. Notable people Arts Designers Marc Davis – animator and one of Disney's Nine Old Men, born in Bakersfield. Poy Gum Lee – architect, died in Bakersfield Visual artists Greg Colson – visual artist Tyrus Wong – calligrapher, artist, animator, married in 1937 in Bakersfield Business, entrepreneur Ric Drasin – designer of Gold's Gym and World Gym logos, retired professional wrestler, actor, author Crime Vincent Brothers – convicted murderer; shot and stabbed five members of his family to death Rodolfo Cadena – Rudy "Cheyenne" Cadena, one of the founders of the Mexican Mafia, basis of character played by Edward James Olmos in the film American Me Entertainment Film, television, actors, models Ana Lily Amirpour – filmmaker, director, writer Noah Beery – actor Robert Beltran – actor Justin Berry – former teenage webcam pornographer and public speaker Kelli Garner – actress Frank Gifford – television sportscaster, college and professional football player Justin Gordon – actor, producer, artist Fay Helm – actress Brian Hooks – actor, Soul Plane, 3 Strikes Nathan Jung – actor, martial artist, stunt coordinator Dalene Kurtis – model, former Playboy Playmate of the Year Joanne Linville – actress Guy Madison – actor Roger Mathey – theatrical director, actor, playwright, producer Derek Mears – actor and stuntman Michelle St. John – actress, singer, director, producer Sigrid Valdis – actress Musicians David Benoit - jazz pianist Jo Ann Castle - pianist, The Lawrence Welk Show Brandon Cruz - punk musician and former child actor Merle Haggard - singer, Country Music Hall of Fame inductee Michael Lockwood - guitarist and music producer Kareem Lopez - musician Mary Osborne - jazz guitarist Buck Owens - singer, musician, Country Music Hall of Fame inductee Gregory Porter - singer Sheléa - singer-songwriter Lawrence Tibbett - baritone of the New York Metropolitan Opera Grant Whitson - drummer for Arlington Bands Adema - rock band (Tim Fluckey, Dave DeRoo, Kris Kohls, Mark Chavez, Mike Ransom) Burning Image - deathrock band (Moe Adame, Tony Bonanno, Paul Burch, Anthony Leyva) The Def Dames - female rap duo, hip hop musicians Korn - Grammy Award-winning metal band Reginald "Fieldy" Arvizu Jonathan Davis James "Munky" Shaffer David Silveria Brian "Head" Welch Government, law, politics Leonard L. Alvarado - Medal of Honor recipient, Specialist 4th Class, United States Army - Republic of Vietnam 1969 General Edward Fitzgerald Beale - Superintendent of Indian Affairs for California and Nevada (1850s), Surveyor General of California and Nevada (1860s), U.S. Ambassador to Austria-Hungary (1870s), founder of Tejon Ranch Vince Fong - politician in the California State Assembly Harvey Hall - past Mayor of Bakersfield (2001-2016) Kevin McCarthy - California Congressman, House Republican Leader Ken Mettler - past president (2008-2010) of the California Republican Assembly Erik Paulsen - Minnesota Congressman Walter W. Stiern - California Democratic State Senator Earl Warren - Chief Justice of the United States Supreme Court, former governor of California Science, medicine, academia Hans Einstein - world's foremost authority on Valley Fever Carver Mead - pioneer in the field of VLSI design, inventor of the concept of neuromorphic computing Sports Olympics Jake Varner - Olympic Gold Medalist wrestler, 2012 London Games (2x NCAA Div. 
1 Champion) Baseball Larry Barnes - California Angels first baseman Corbin Burnes - Milwaukee Brewers pitcher Johnny Callison - Philadelphia Phillies right fielder Phil Dumatrait - Pittsburgh Pirates pitcher, first-round draft pick Jack Hiatt - catcher Leon Lee - Nippon Professional Baseball player and manager Colby Lewis - Texas Rangers pitcher, first-round draft pick William "Buckshot" May - Pittsburgh Pirates pitcher Brent Morel - Chicago White Sox third baseman Kurt Miller - pitcher for Florida Marlins and Chicago Cubs Steve Ontiveros - infielder for San Francisco Giants and Chicago Cubs Dave Rader - catcher Rick Sawyer - pitcher for the New York Yankees and San Diego Padres Todd Walker - second baseman Bruce Walton - pitcher for Montreal Expos, pitching coach for Toronto Blue Jays Jake Woods - Seattle Mariners pitcher Basketball Nikki Blue - New York Liberty guard (WNBA) Fred Boyd - NBA player Chris Childs - NBA guard J. R. Sakuragi (formerly J.R. Henderson) - player for Memphis Grizzlies Lonnie Shelton - Seattle SuperSonics all-star Robert Swift - Tokyo Apache (Japan) center Tyrone Wallace - Los Angeles Clippers guard Football Mike Ariey - offensive tackle for Green Bay Packers Jon Baker - NFL and CFL player Theo Bell - wide receiver for Pittsburgh Steelers, earned Super Bowl rings in 1979 and 1980 Jeff Buckey - starting offensive lineman for Miami Dolphins Vern Burke - tight end for San Francisco 49ers, New Orleans Saints, and Atlanta Falcons David Carr - quarterback, first overall selection in 2002 NFL Draft (Houston Texans), won Super Bowl with New York Giants Derek Carr - quarterback for Oakland/Las Vegas Raiders, Mountain West Conference Player of the Year for Fresno State Frank Gifford - Pro Football Hall of Fame inductee, broadcaster Cory Hall - safety for Cincinnati Bengals, Atlanta Falcons, and Washington Redskins Joe Hawley - center for Tampa Bay Buccaneers A.J. Jefferson - free safety for Arizona Cardinals Cody Kessler - quarterback for Cleveland Browns and USC Trojans Rodney Leisle - defensive tackle for New Orleans Saints, member of 2009 Super Bowl-winning team Jordan Love- backup quarterback for the Green bay packers Bob McCaffrey - center for USC Trojans and Green Bay Packers Brent McClanahan - running back for Minnesota Vikings Ray Mansfield - center for Pittsburgh Steelers Brock Marion - Dallas Cowboys Super Bowl champion and Pro Bowl player Jerry Marion - wide receiver for Pittsburgh Steelers, father of Brock Ryan Mathews - running back for Philadelphia Eagles and former Fresno State All-American Aaron Merz - offensive lineman for University of California and Buffalo Bills Stephen Neal - lineman for New England Patriots, Super Bowl champion, NCAA wrestling champion, world gold medalist, Olympian Mark Nichols - wide receiver for Detroit Lions Jared Norris - NFL player Larry Parker - wide receiver for USC Trojans and Kansas City Chiefs Joey Porter - All-Pro and Pro Bowl outside linebacker, member of Pittsburgh Steelers Super Bowl champion team in 2006 Rocky Rasley - guard for Detroit Lions D. J. Reed - free safety for Seattle Seahawks Randy Rich - defensive back for Detroit Lions, Denver Broncos, and Cleveland Browns Greg Robinson - defensive coordinator for Denver Broncos, University of Michigan Ken Ruettgers - offensive tackle for Green Bay Packers Colton Schmidt - punter for Buffalo Bills Rashaan Shehee - running back for Kansas City Chiefs L. J. 
Shelton - offensive tackle for Arizona Cardinals Jeff Siemon - Pro Bowl linebacker for Minnesota Vikings, inducted to College Football Hall of Fame in 2006 Kevin Smith - tight end for UCLA, Oakland Raiders, and Green Bay Packers Jeremy Staat - defensive lineman for Pittsburgh Steelers, United States Marine Jason Stewart - defensive tackle for Indianapolis Colts in 1994 Michael Stewart - safety for Miami Dolphins John Tarver - running back for New England Patriots in 1970s Leonard Williams - defensive lineman for USC Trojans Dick Witcher - wide receiver for San Francisco 49ers Louis Wright - All-Pro defensive back for Denver Broncos, member of 1970s NFL all-decade team Rodney Wright - wide receiver for Fresno State and Buffalo Bills Motorsports Kevin Harvick - NASCAR driver, 2007 Daytona 500 winner and 2014 Sprint Cup Series champion Casey Mears - NASCAR driver Rick Mears - 4-time Indianapolis 500 winner Roger Mears - Baja 1000 winner Blaine Perkins - NASCAR driver Ryan Reed - NASCAR driver Bruce Sarver - NHRA champion George Snider - 22-time competitor Indianapolis 500 Boxing Ruben Castillo - WBO and NABO lightweight champion, WBC featherweight and super featherweight contender Michael Dallas, Jr. - Golden Gloves silver medalist, light welterweight contender Jack Johnson - first African-American heavyweight champion, member of World Boxing Hall of Fame Jerry Quarry - national Golden Gloves champion, heavyweight professional boxer, fought Muhammad Ali and Joe Frazier Mike Quarry - light-heavyweight professional boxer Joey Guillen - light-heavyweight professional boxer Track & Field Lonnie Spurrier - middle-distance runner, Olympian (1956), set world's record in the half-mile run in 1955 Soccer Cami Privett - Former NWSL Soccer Player for the Houston Dash Omar Madrid-Reyes - La Liga Soccer Player for the Real Madrid FC Alexis Rivas - Premier League Soccer Player for the Manchester City FC Tony Espinoza - Water-boy for the Soccer club Bakersfield FC Thursday League Writers, poets, journalists Frank Bidart - poet James Chapman - novelist and publisher Robert Duncan - poet (lived in Bakersfield 1927-32) Gerald Haslam - author Lawrence Kimble - Hollywood screenwriter References External links City of Bakersfield official website Bakersfield Convention & Visitors Bureau Bakersfield Downtown Business and Property Owner's Association Bakersfield Bakersfield, California
4431048
https://en.wikipedia.org/wiki/RedFox
RedFox
RedFox (formerly SlySoft) is a software development company based in Belize. The company is most prominently known for its software AnyDVD, which can be used to bypass copy protection measures on optical media, including DVD and Blu-ray Disc media, as well as CloneCD, which is used to back up the contents of optical discs. The company formerly operated as the St. John's, Antigua and Barbuda-based SlySoft. At some point in February 2016, SlySoft shut down, with its home page replaced by a message citing "recent regulatory requirements". On or around 16 February 2016, AACS LA had requested that the Office of the United States Trade Representative place Antigua and Barbuda on its Priority Watch List of countries that fail to prevent intellectual property violations, with specific reference to SlySoft. However, the company's online forum remained online, and had replaced the brand SlySoft with "RedFox". SlySoft developers also revealed that none of the company's staff were actually based in Antigua, that the company was not involved in legal settlements from AACS LA, and that key staff members still had access to SlySoft's technical infrastructure—including build systems and licensing servers—feasibly allowing development of AnyDVD to continue. On 2 March 2016, SlySoft reformed as RedFox, under a top-level domain based in Belize, and released a new version of AnyDVD. Products AnyDVD to remove/disable DRM restrictions and user prohibited operations on DVD films, and to fix structure protections and mastering errors AnyDVD HD – to remove DRM, lock-outs, and UOPs on DVD films and additionally High Definition media, specifically Blu-ray Disc and HD DVD AnyStream – to download and remove DRM from streaming video on Amazon Prime Video and Netflix CloneCD – to copy optical discs in raw format CloneDVD mobile – to convert DVD files to mobile video players like the iPod or the PlayStation Portable Game Jackal – to create CD profiles so a disc isn't required when starting the game Game Jackal Enterprise – extended version of Game Jackal with additional features such as automatic distribution of game profiles to client machines Some products are now supported by Elaborate Bytes such as Clone DVD and Virtual CloneDrive. AACS and BD+ SlySoft was the first to offer AACS circumvention that worked for any disc available; previous programs only cracked "compatible" discs using a database of known keys. On 8 November 2007, SlySoft claimed to have completely cracked BD+. However, this turned out to be incorrect, as subsequent versions of BD+ security code have caused SlySoft to re-design its software. On 3 March 2008, SlySoft updated AnyDVD HD allowing the full decryption of BD+, allowing for not only the viewing of the film itself but also playing and copying disks with third-party software. A third iteration of BD+ was released in November 2008, and was announced to be cracked by SlySoft with the release of AnyDVD HD 6.5.0.2 on 29 December 2008. A fourth version of BD+ security code was discovered with the movie Australia on 17 February 2009, thwarting the effectiveness of SlySoft's software. However, on 19 March 2009, SlySoft updated AnyDVD HD to version 6.5.3.1 which allowed the decryption of the new version of BD+ used by Australia. Licensing On 1 December 2008 SlySoft announced it would for the first time begin charging its customers for updates to its software. In November 2010, SlySoft initially announced the discontinuation of the lifetime licensing option beginning January 2011. 
An e-mail announcing the change ahead of time was sent to all registered customers, allowing everyone the chance to purchase the lifetime option "while it is still possible", and the notice was posted to their official forums. In January 2011, all announcements regarding the change were deleted without comment and a new structured licensing plan was put into place, including the lifetime licensing option at the highest-priced tier. SlySoft was able to balance its internal costs and licensing strategy in such a way as to allow continuation of the "lifetime" license option. The 'update service' will only allow updates with a valid license, and users are warned if they attempt to install an update beyond the license expiry date. If the user cancels the update, the current paid-up license will continue to work. If the user continues, a "renewal" is required. Following the collapse of SlySoft, holders of a SlySoft AnyDVD/AnyDVD HD Lifetime License are required to purchase a new license for use with the RedFox software beginning with Version 8.0.2.0. Version 7.6.9.5 will be able to use the prior license indefinitely; however, it will not receive updates to decrypt the latest copy protection. See also AnyDVD CloneCD CloneDVD DVD ripper software with similar features DVD Shrink HandBrake K9Copy References Further reading "Appapalooza", Computer Power User (CPU), October 2009 • Vol.9 Issue 10, Pages 60–70 "What's Happening", Computer Power User (CPU), January 2008 • Vol.8 Issue 1, Pages 6–11 "What's Happening", Computer Power User (CPU), March 2009 • Vol.9 Issue 3, Pages 9–16 "The Bleeding Edge of Software", Computer Power User (CPU), February 2009 • Vol.9 Issue 2, Page 72 External links Elaborate Bytes Software companies of Belize Cryptography companies Disk image emulators Notorious markets Belizean brands
517183
https://en.wikipedia.org/wiki/Mindset%20%28computer%29
Mindset (computer)
The Mindset, released in spring 1984, is an Intel 80186-based MS-DOS personal computer. Unlike other IBM PC compatibles of the time, it has custom graphics hardware supporting 16 simultaneous colors (chosen from a 512-shade palette), and hardware-accelerated drawing capabilities including a blitter which allows it to update the screen 50 times as fast as a CGA adapter in a standard PC. The basic unit was priced at US$1,099. It is conceptually similar to the more successful Commodore Amiga released over a year later. The system never sold well and disappeared from the market after about a year. This was lamented by industry commentators, who saw this event as the first clear evidence of the end of innovation in favor of compatibility. Its distinctive case remains in the permanent collection of the Museum of Modern Art in New York. History Design In most computer systems of the era, the CPU is used to create graphics by drawing bit patterns directly into memory. Separate hardware then reads these patterns and produces the actual video signal for the display. The Mindset was designed by ex-Atari engineers and added a new custom-designed VLSI vector processor to handle many common drawing tasks, like lines or filling areas. Instead of the CPU doing all of this work by changing memory directly, in the Mindset the CPU sets up those instructions and then hands off the actual bit fiddling to the separate processor. Mindset's president compared the chipset to the Intel 8087 floating point processor, running alongside the Intel 80186 on which the machine is based. There are a number of parallels between the Mindset and the Amiga 1000, another computer designed by ex-Atari engineers that offered advanced graphics. As development continued and it became clear that the machine would be ready before the MS-DOS-based Microsoft Windows 1.0 was, Bill Gates became personally involved in the project to assist Mindset in emulating IBM character graphics without losing performance. Once Mindset officials determined that most of the desirable software was compatible, development was frozen and the OS burned to ROM in late 1983. The ROM does not run about 20% of the PC software base, including Microsoft Flight Simulator. WordStar is one of the PC applications reported to run, and Mindset publicized a list of 60 applications that run unmodified. The software base was expected to increase dramatically once a final version of Windows was released. Before its release, in early 1984, Jack Tramiel was rumored to have tried to buy Mindset's technology. He would also do the same with Amiga, before ultimately buying Atari and designing a new machine from off-the-shelf parts, the Atari ST. Release The Mindset was released on 2 May 1984. The base model with 64K RAM and no floppy disk drive sold for US$1,099, a 128K model with a single disk drive was available for $1,798, and a 256K dual-disk version cost $2,398. The disk-less version of the machine was still usable, as the system also included two ROM cartridge ports on the front of the machine that could be used for the operating system and another program. The canonical cartridge is an extended version of GW-BASIC. The machine is packaged in a unique enclosure designed by GVO of Menlo Park, visually separated into two sections with the ROM slots in the lower half and the optional diskettes on the upper half. It was sold complete with a custom nylon carrying case. Mindset's president said its graphics capabilities were unmatched except on US$50,000 workstations. 
At the time it garnered critical acclaim, with reviewers universally praising its graphics and overall performance, which was much faster than contemporary PCs, although in many cases with the caveat that the market was rapidly standardizing. Disappearance By the summer of 1984, it was clear the system was not selling as expected, and the company re-purposed it for the video production and graphics design markets. That was followed in August by a round of layoffs, and another in January 1985, in which half the employees were let go. The company filed for Chapter 11 protection on 28 August 1985, and never emerged. By 1985, when it was clear the system was not living up to its promise and Windows 1.0 was a flop in general, John J. Anderson published a review of the system decrying that the personal computer market was beginning to value compatibility over technology. He wrote: Mindset II The Base System Unit is referred to as Model M1001; later a "Mindset II" computer was released, a badge-engineered version of the M1001, with an adhesive label designating "II" under the embossed name. Internally the Video Processor Board is a separate mini-daughterboard. Its enhanced functionality is not fully understood, but the "Mindset II Advanced Professional Videographics System" user's guide makes mention of "chaining" two Mindsets: The Mindset II is referred to on the front of the user guide as Model# M1500; however, other internal pages reference it as an M1000-II and also mention a Mindset Video Production Module, Model# M1011. Description The system architecture is based on the Intel 80186, with proprietary VLSI chips that enhance and speed up the graphics. Although it is disk compatible with the IBM PC's DOS, its enhanced graphics capabilities make achieving full IBM compatibility more difficult than for its competitors. Bill Gates became involved with development, assisting Mindset in emulating IBM character graphics without losing performance. Once Mindset officials determined that most of the desirable software was compatible, development was frozen and the OS burned to ROM, which locked out 20% of the PC software base, including Microsoft Flight Simulator. WordStar is one of the PC applications reported to run, and Mindset publicized a list of 60 applications that run unmodified. The software base was expected to increase dramatically once a final version of Windows was released. Mindset's design is modular in many aspects. The top of the case has an opening to access its system bus; this allows for the expansion module to plug into the main computer module to add memory and one or two disk drives. The Mindset was designed by several ex-Atari engineers, like the Amiga 1000, another computer of the era with an advanced graphics subsystem and modular expandability. Jack Tramiel (forming Tramel Technology, Ltd.) tried to buy Mindset's technology in the spring of 1984. A dual 5.25" floppy drive module that sits above the main unit was available and part of the common sales configuration for the system. The module also includes expansion memory. Mindset has dual front-mounted ROM cartridge ports with a locking knob on the left side of the main computer module to lock the ROM modules into place. The Mindset has the option (through its System Configuration Utility) to select whether the system boots from the left or right ROM cartridge or the disk drive. Cartridges can also contain CMOS RAM, which is retained by a battery in the cartridge case when the cartridge is unplugged. 
Cartridges were envisioned to be a primary medium for software distribution on the Mindset, but sales of the system were too low for cartridges to be economical, and software was distributed on disk instead. While released in 1984, models of the M1001 Mindset computer with BIOS ROM code 1.07 and earlier show a copyright notice of (c) 1983 Mindset Computer Corp. Rear ports The rear of the computer is equipped with the following ports: Audio left Composite out TV/RF Channel 3/4 select switch RGB video EXT sync Aux in Aux out The rear of the main computer module also has 3× 36 Pin Expansion bus slots. The Dual Disk/Memory Expansion Unit adds an additional 3 36 Pin Expansion bus slots to the system. Expansion Modules Dual Disk Drive / Memory Expansion Module (Note: While no noticeable internal or external differences, some Dual Disk Drive/Memory Expansion modules are marked Model # M1003 and others have been found to be marked M1004) Parallel "Cartridge Module" Serial "Cartridge Module" Modem "Cartridge Module" 128 kb memory "Cartridge Module" Hard Drive System, consisting of an Interface "Cartridge Module" and HD loader on NVRAM cartridge Stereo "Cartridge Module" Peripherals Mouse Analog joystick Touch Tablet Video Fader References Citations Bibliography External links Specs, photos, and commentary regarding the Mindset Computer DigiBarn Computer Museum photos and commentary regarding the Mindset Computer Atari Museum entry regarding the Mindset Computer Computer workstations IBM PC compatibles Computer-related introductions in 1984
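To make the Design section's CPU-versus-coprocessor distinction concrete, the following is a minimal, purely illustrative Python sketch. The FakeBlitter class, its command format, and the framebuffer layout are invented for this example and do not reflect the real Mindset chipset's registers or programming interface.

```python
# Illustrative contrast between CPU-side drawing and delegating to a blitter-style
# coprocessor. The "blitter" here is a plain Python object standing in for
# dedicated hardware; its command format is hypothetical.

WIDTH, HEIGHT = 320, 200
framebuffer = bytearray(WIDTH * HEIGHT)  # one byte per pixel

def cpu_fill_rect(x, y, w, h, color):
    """The conventional approach: the CPU writes every pixel itself."""
    for row in range(y, y + h):
        start = row * WIDTH + x
        framebuffer[start:start + w] = bytes([color]) * w

class FakeBlitter:
    """Stand-in for a drawing coprocessor: the CPU only queues a small command."""
    def __init__(self, memory):
        self.memory = memory
        self.queue = []

    def submit_fill(self, x, y, w, h, color):
        # The CPU's work ends here; the hardware would perform the per-pixel writes.
        self.queue.append(("FILL", x, y, w, h, color))

    def run(self):
        for _, x, y, w, h, color in self.queue:
            for row in range(y, y + h):
                start = row * WIDTH + x
                self.memory[start:start + w] = bytes([color]) * w
        self.queue.clear()

if __name__ == "__main__":
    cpu_fill_rect(10, 10, 50, 30, 3)          # CPU touches 1,500 pixels directly
    blitter = FakeBlitter(framebuffer)
    blitter.submit_fill(100, 50, 50, 30, 5)   # CPU only issues one short command
    blitter.run()
    print(sum(1 for b in framebuffer if b))   # 3,000 pixels set in total
```

The only point of the contrast is that the CPU's work shrinks to queuing a short command while the drawing hardware performs the per-pixel writes, which is the division of labour the Design section describes.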
21463702
https://en.wikipedia.org/wiki/Aptean
Aptean
Enterprise application software providers CDC Software Corporation and Consona Corporation, with a total of 1,500 employees and 5,000 customers, merged in August 2012 to form Aptean Corporation. Aptean sells industry-specific software for the financial, manufacturing and supply chain industries integrating enterprise resource planning (ERP), supply chain management (SCM), and complaint management. History Consona Corporation completed its initial public offering (IPO) in 1997. In 2003, the venture capital firm Battery Ventures acquired Made2Manage Systems for $30 million in cash. Originally named China.com when founded in 1997, CDC corporation was based in Hong Kong. On August 6, 2009 CDC Software Corporation completed its IPO on the NASDAQ exchange under the symbol CDCS. In November 2009, Consona Corporation acquired Canadian-based Activplant Corporation (formerly Executive Manufacturing Technologies, Inc.), a provider of manufacturing business intelligence software for manufacturing operations management. In July 2010, Consona acquired open source ERP software provider Compiere with 130 customers. The CDC-Consona merger was billed as a merger, although most of the management team of the surviving company was connected with CDC. Aptean named TVN Reddy as CEO in July 2018. Vista Equity Partners and TA Associates announced a joint investment in the company in February 2019. One of Aptean's owners, Vista Equity Partners, tends to take a hands-on approach to management and implements standardized business processes and standard systems and tools. On March 20, 2020, Aptean announced the acquisition of UK-based Paragon Software Systems, a provider of transportation management software solutions serving the food and beverage, distribution and retail industries. Products Aptean provides mission-critical, industry-specific ERP, Supply Chain and compliance software. Industries served include food and beverage, discrete and process manufacturing, financial services, consumer goods, wholesale distribution and third-party logistics. Some of Aptean's most prominent product offerings include JustFood ERP for food and beverage, Respond for complaints and feedback management, Ross ERP for process management, Factory MES for manufacturing intelligence, Apprise ERP for consumer goods and Made2Manage for manufacturing ERP. References Business software companies Companies based in Atlanta Companies formerly listed on the Nasdaq Software companies of the United States 2012 establishments in Georgia (U.S. state) Software companies established in 2012 American companies established in 2012
700160
https://en.wikipedia.org/wiki/Remaster
Remaster
Remaster refers to changing the quality of the sound or of the image, or both, of previously created recordings, either audiophonic, cinematic, or videographic. The terms digital remastering and digitally remastered are also used. Mastering A master is the definitive recording version that will be replicated for the end user, commonly into other formats (e.g. LP records, CDs, DVDs, Blu-rays). A batch of copies is often made from a single original master recording, which might itself be based on previous recordings. For example, sound effects (e.g. a door opening, punching sounds, falling down the stairs, a bell ringing) might have been added from copies of sound effect tapes similar to modern sampling to make a radio play for broadcast. Problematically, several different levels of masters often exist for any one audio release. As an example, examine the way a typical music album from the 1960s was created. Musicians and vocalists were recorded on multi-track tape. This tape was mixed to create a stereo or mono master. A further master tape would likely be created from this original master recording consisting of equalization and other adjustments and improvements to the audio to make it sound better on record players for example. More master recordings would be duplicated from the equalized master for regional copying purposes (for example to send to several pressing plants). Pressing masters for vinyl recordings would be created. Often these interim recordings were referred to as Mother Tapes. All vinyl records would derive from one of the master recordings. Thus, mastering refers to the process of creating a master. This might be as simple as copying a tape for further duplication purposes, or might include the actual equalization and processing steps used to fine-tune material for release. The latter example usually requires the work of mastering engineers. With the advent of digital recording in the late 1970s, many mastering ideas changed. Previously, creating new masters meant incurring an analog generational loss; in other words, copying a tape to a tape meant reducing the signal-to-noise ratio. This means how much of the original intended "good" information is recorded against faults added to the recording as a result of the technical limitations of the equipment used (noise, e.g. tape hiss, static, etc.). Although noise reduction techniques exist, they also increase other audio distortions such as azimuth shift, wow and flutter, print-through and stereo image shift. With digital recording, masters could be created and duplicated without incurring the usual generational loss. As CDs were a digital format, digital masters created from original analog recordings became a necessity. Remastering Remastering is the process of making a new master for an album, film, or any other creation. It tends to refer to the process of porting a recording from an analog medium to a digital one, but this is not always the case. For example, a vinyl LP – originally pressed from a worn-out pressing master many tape generations removed from the "original" master recording – could be remastered and re-pressed from a better-condition tape. All CDs created from analog sources are technically digitally remastered. The process of creating a digital transfer of an analog tape remasters the material in the digital domain, even if no equalization, compression, or other processing is done to the material. 
Ideally, because of their higher resolution, a CD or DVD (or even higher quality like high-resolution audio or hi-def video) release should come from the best source possible, with the most care taken during its transfer. Additionally, the earliest days of the CD era found digital technology in its infancy, which sometimes resulted in poor-sounding digital transfers. The early DVD era was not much different, with copies of films frequently being produced from worn prints, with low bitrates and muffled audio. When the first CD remasters turned out to be bestsellers, companies soon realized that new editions of back-catalog items could compete with new releases as a source of revenue. Back-catalog values skyrocketed, and today it is not unusual to see expanded and remastered editions of relatively modern albums. Master tapes, or something close to them, can be used to make CD releases. Better processing choices can be used. Better prints can be utilized, with sound elements remixed to 5.1 surround sound and obvious print flaws digitally corrected. The modern era gives publishers almost unlimited ways to touch up, doctor, and "improve" their media, and as each release promises improved sound, video, extras and others, producers hope these upgrades will entice people into making a purchase. Music Remastering music for CD or even digital distribution first starts from locating the original analog version. The next step involves digitising the track or tracks so it can be edited using a computer. Then the track order is chosen. This is something engineers often worry about because if the track order is not right, it may seem sonically unbalanced. When the remastering starts, engineers use software tools such as a limiter, an equaliser, and a compressor. The compressor and limiters are ways of controlling the loudness of a track. This is not to be confused with the volume of a track, which is controlled by the listener during playback. The dynamic range of an audio track is measured by calculating the variation between the loudest and the quietest part of a track. In recording studios the loudness is measured with negative decibels, zero designating the loudest recordable sound. A limiter works by having a certain cap on the loudest parts and if that cap is exceeded, it is automatically lowered by a ratio preset by the engineer. Criticism Remastered audio has been the subject of criticism. Many remastered CDs from the late 1990s onwards have been affected by the "loudness war", where the average volume of the recording is increased and dynamic range is compressed at the expense of clarity, making the remastered version sound louder at regular listening volume and more distorted than an uncompressed version. Some have also criticized the overuse of noise reduction in the remastering process, as it affects not only the noise, but the signal too, and can leave audible artifacts. Equalisation can change the character of a recording noticeably. As EQ decisions are a matter of taste to some degree, they are often the subject of criticism. Mastering engineers such as Steve Hoffman have noted that using flat EQ on a mastering allows listeners to adjust the EQ on their equipment to their own preference, but mastering a release with a certain EQ means that it may not be possible to get a recording to sound right on high-end equipment. Additionally, from an artistic point of view, original mastering involved the original artist, but remastering often does not. 
Therefore, a remastered record may not sound how the artist originally intended. Film and television To remaster a movie digitally for DVD and Blu-ray, digital restoration operators must scan in the film frame by frame at a resolution of at least 2,048 pixels across (referred to as 2K resolution). Some films are scanned at 4K, 6K, or even 8K resolution to be ready for higher resolution devices. Scanning a film at 4K—a resolution of 4096 × 3092 for a full frame of film—generates at least 12 terabytes of data before any editing is done. Digital restoration operators then use specialist software such as MTI's Digital Restoration System (DRS) to remove scratches and dust from damaged film. Restoring the film to its original color is also included in this process. As well as remastering the video aspect, the audio is also remastered using such software as Pro Tools to remove background noise and boost dialogue volumes so when actors are speaking they are easier to understand and hear. Audio effects are also added or enhanced, as well as surround sound, which allows the soundtrack elements to be spread among multiple speakers for a more immersive experience. An example of a restored film is the 1939 film The Wizard of Oz. The color portions of Oz were shot in the three-strip Technicolor process, which in the 1930s yielded three black and white negatives created from red, green and blue light filters which were used to print the cyan, magenta and yellow portions of the final printed color film answer print. These three negatives were scanned individually into a computer system, where the digital images were tinted and combined using proprietary software. The cyan, magenta, and yellow records had suffered from shrinkage over the decades, and the software used in the restoration morphed all three records into the correct alignment. The software was also used to remove dust and scratches from the film by copying data, for example, from the cyan and yellow records to fix a blemish in the magenta record. Restoring the movie made it possible to see precise visual details not visible on earlier home releases: for example, when the Scarecrow says "I have a brain", burlap is noticeable on his cheeks. It was also not possible to see a rivet between the Tin Man's eyes prior to the restoration. Shows that were shot and edited entirely on film, such as Star Trek: The Original Series, are able to be re-released in HD through re-scanning the original film negatives; the remastering process for the show additionally enabled Paramount to digitally update certain special effects. Shows that were made between the early 1980s and the early 2000s were generally shot on film, then transferred to and edited on standard-definition videotape, making high-definition transfers impossible without re-editing the product from scratch, such as with the HD release of Star Trek: The Next Generation, which cost Paramount over $12 million to produce. Because of this release's commercial failure, Paramount chose not to give Deep Space Nine or Voyager the same treatment. Criticism Remastered movies have been the subject of criticism. When the Arnold Schwarzenegger film Predator was remastered, it was felt that the process was overdone, resulting in Schwarzenegger's skin looking waxy. As well as complaints about the way the picture looks, there have been other complaints about digital fixing. One notable complaint is from the 2002 remastered version of E.T. 
the Extra-Terrestrial (1982), where director Steven Spielberg replaced guns in the hands of police and federal agents with walkie-talkies. A later 30th anniversary edition released in 2012 saw the return of the original scene. Canadian animator John Kricfalusi (of The Ren & Stimpy Show fame) has become a prominent critic of digital remastering, particularly in regards to its effects on Western animation. In his blog "John K. Stuff," he has criticized remasters for over-saturating colors and sharpening lines to the point of color bleeding (among other criticisms). He has gone on record in his blog to describe remastering as "digital ruination" and "digital destruction." Video games Remastering a video game is more difficult than remastering a film or music recording because the video game's graphics show their age. This can be due to a number of factors, notably lower resolutions and less complicated rendering engines at the time of release. A video game remaster typically has ambience and design updated to the capabilities of a more powerful console, while a video game remake is also updated but with recreated models. Modern computer monitors and high-definition televisions tend to have higher display resolutions and different aspect ratios than the monitors/televisions available when the video game was released. Because of this, classic games that are remastered typically have their graphics re-rendered at higher resolutions. An example of a game that has had its original graphics re-rendered at higher resolutions is Hitman HD Trilogy, which contains two games with high resolution graphics: Hitman 2: Silent Assassin and Hitman: Contracts. Both were originally released on PC, PlayStation 2, and Xbox. The original resolution was 480p on Xbox, while the remastered resolution is displayed at 720p on Xbox 360. There is some debate regarding whether graphics of an older game at higher resolutions make a video game look better or worse than the original artwork, with comparisons made to colorizing black-and-white movies. More significant than low resolution is the age of the original game engine and the simplicity of the original 3D models. Older computers and video game consoles had limited 3D rendering speed, which required simple 3D object geometry, such as human hands modeled like a mitten rather than with individual fingers, and maps with a distinctly chunky appearance and no smoothly curving surfaces. Older computers also had less texture memory for 3D environments, requiring low resolution bitmap images that look visibly pixelated or blurry when viewed at high resolution. (Some early 3D games such as the 1993 version of DOOM also just used an animated two-dimensional image that is rotated to always face the player character, rather than attempt to render highly complex scenery objects or enemies in full 3D.) As a result, depending on the age of the original game, if the original assets are not compatible with the new technology for a remaster, it is often considered necessary to remake or remodel the graphical assets. An example of a game that has had its graphics redesigned is Halo: Combat Evolved Anniversary, while the core character and level information is exactly the same as in Halo: Combat Evolved. See also Special edition Remake and Video game remake Audio mastering Mastering engineer References Audio engineering
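As a concrete companion to the Music section's description of dynamic range and limiting, here is a minimal Python sketch. It assumes a mono track represented as floating-point samples in the range -1.0 to 1.0; the threshold, ratio, and window size are arbitrary example values rather than settings drawn from any real mastering session.

```python
import numpy as np

def dbfs(x, eps=1e-12):
    """Convert a linear magnitude to decibels relative to full scale (0 dBFS = loudest)."""
    return 20.0 * np.log10(np.maximum(np.abs(x), eps))

def dynamic_range_db(samples, window=1024):
    """Rough dynamic range: difference between loudest and quietest windowed RMS level, in dB."""
    n = len(samples) // window * window
    frames = samples[:n].reshape(-1, window)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    levels = dbfs(rms)
    return levels.max() - levels.min()

def limit(samples, threshold_db=-1.0, ratio=10.0):
    """Simple static limiter: above the threshold, the excess is reduced by `ratio`."""
    threshold = 10 ** (threshold_db / 20.0)
    out = samples.copy()
    over = np.abs(out) > threshold
    excess = np.abs(out[over]) - threshold
    out[over] = np.sign(out[over]) * (threshold + excess / ratio)
    return out

if __name__ == "__main__":
    t = np.linspace(0, 1.0, 44100)
    quiet = 0.05 * np.sin(2 * np.pi * 220 * t)   # quiet passage
    loud = 0.9 * np.sin(2 * np.pi * 220 * t)     # loud passage
    track = np.concatenate([quiet, loud])
    print("before:", round(dynamic_range_db(track), 1), "dB")
    print("after: ", round(dynamic_range_db(limit(track, -6.0)), 1), "dB")
```

A real mastering limiter also models attack, release, and look-ahead behaviour; this static version only illustrates the "cap plus ratio" idea described above, and shows how aggressive limiting shrinks the measured dynamic range.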
11289322
https://en.wikipedia.org/wiki/Software%20product%20line
Software product line
Software product lines (SPLs), or software product line development, refers to software engineering methods, tools and techniques for creating a collection of similar software systems from a shared set of software assets using a common means of production. The Carnegie Mellon Software Engineering Institute defines a software product line as "a set of software-intensive systems that share a common, managed set of features satisfying the specific needs of a particular market segment or mission and that are developed from a common set of core assets in a prescribed way." Description Manufacturers have long employed analogous engineering techniques to create a product line of similar products using a common factory that assembles and configures parts designed to be reused across the product line. For example, automotive manufacturers can create unique variations of one car model using a single pool of carefully designed parts and a factory specifically designed to configure and assemble those parts. The characteristic that distinguishes software product lines from previous efforts is predictive versus opportunistic software reuse. Rather than put general software components into a library in the hope that opportunities for reuse will arise, software product lines only call for software artifacts to be created when reuse is predicted in one or more products in a well defined product line. Recent advances in the software product line field have demonstrated that narrow and strategic application of these concepts can yield order of magnitude improvements in software engineering capability. The result is often a discontinuous jump in competitive business advantage, similar to that seen when manufacturers adopt mass production and mass customization paradigms. Development While early software product line methods at the genesis of the field provided the best software engineering improvement metrics seen in four decades, the latest generation of software product line methods and tools are exhibiting even greater improvements. New generation methods are extending benefits beyond product creation into maintenance and evolution, lowering the overall complexity of product line development, increasing the scalability of product line portfolios, and enabling organizations to make the transition to software product line practice with orders of magnitude less time, cost and effort. Recently the concepts of software product lines have been extended to cover systems and software engineering holistically. This is reflected by the emergence of industry standard families like ISO 265xx on systems and software engineering practices for product lines. See also Software factory Domain engineering Feature model Feature-oriented programming – a paradigm for software product line development Product Family Engineering References External links Software Product Lines Essentials, page 19. Carnegie Mellon Software Engineering Institute Web Site Software Products Lines Community Web Site and Discussion Forums Introduction to the Emerging Practice of Software Product Line Development AMPLE Project Software Product Line Engineering Course, B. Tekinerdogan, Bilkent University Software project management Software industry
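As an illustration of the predictive-reuse idea described above (deriving many products from one managed set of assets under explicit constraints), here is a small, hypothetical Python sketch of a feature model with a configuration check. The feature names, constraints, and products are invented for the example and are not taken from any real product line or tool.

```python
from dataclasses import dataclass, field

# A toy feature model: each feature may require or exclude other features.
# All names and constraints here are illustrative only.
FEATURES = {
    "engine_control": {"requires": set(), "excludes": set()},
    "radar": {"requires": set(), "excludes": set()},
    "cruise_control": {"requires": {"engine_control"}, "excludes": set()},
    "adaptive_cruise": {"requires": {"cruise_control", "radar"}, "excludes": set()},
    "manual_transmission": {"requires": set(), "excludes": {"adaptive_cruise"}},
}

@dataclass
class Product:
    name: str
    features: set = field(default_factory=set)

def validate(product: Product) -> list:
    """Check a product configuration against the feature model; return a list of violations."""
    errors = []
    for f in product.features:
        spec = FEATURES.get(f)
        if spec is None:
            errors.append(f"unknown feature: {f}")
            continue
        for req in spec["requires"] - product.features:
            errors.append(f"{f} requires {req}")
        for exc in spec["excludes"] & product.features:
            errors.append(f"{f} excludes {exc}")
    return errors

if __name__ == "__main__":
    base = Product("base", {"engine_control", "manual_transmission"})
    premium = Product("premium", {"engine_control", "cruise_control", "adaptive_cruise", "radar"})
    broken = Product("broken", {"engine_control", "adaptive_cruise", "manual_transmission"})
    for p in (base, premium, broken):
        issues = validate(p)
        print(p.name, "OK" if not issues else issues)
```

Dedicated feature-modelling and product-line tools handle much richer constraints and automate product derivation; the sketch only shows the shape of the consistency check that makes planned, rather than opportunistic, reuse possible.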
43794419
https://en.wikipedia.org/wiki/Fran%C3%A7ois%20Benjamin%20Courvoisier
François Benjamin Courvoisier
François Benjamin Courvoisier (August 1816 – 6 July 1840) was a Swiss-born valet who was convicted of murdering his employer Lord William Russell in London, England, and hanged outside Newgate Prison on 6 July 1840. A crowd of around 40,000 witnessed his death, including Charles Dickens and William Makepeace Thackeray. Early life and career Courvoisier was born in the small village of Mont-la-Ville, Switzerland, in August 1816, the son of Abraham Courvoisier, a farmer. He was educated at the local village school, after which he assisted his father in farm duties, before moving to England in 1836. Based on evidence given at his trial in 1840, upon first arriving in England, for around a month, Courvoisier worked as a waiter at Madame Piolaine's Hotel du Port de Dippe in Leicester Square, London. Through the assistance of his uncle, who was employed as a butler by an English baronet, he secured a position as a footman in the household of Lady Julia Lockwood. After seven months, in March 1837, he moved to the home of John Minet Fector, Conservative M.P. for Dover, also as a footman, where he remained for three years. He left Fector for employment with Lord William on 31 March 1840. Valet to Lord William In conversation with Lady Julia Lockwood, Lord William mentioned that his valet, Ellis, was leaving his employ. Lockwood recommended Courvoisier and, upon inquiry to Fector, was assured Courvoisier was of the 'highest character... conduct and competency to fill the situation'. He was employed on terms of £45 per annum, to be reviewed after six months and, if he had proved suitable, raised to £50. Lord William's principal London residence was at No.14 Norfolk Street, Park Lane, now Dunraven Street, Mayfair, although the building, along with others on the east side, has since been demolished. Described as 'a small house', it had two rooms per floor, with a kitchen and pantry in the basement, two parlours (used for dining) on the ground floor, a drawing room and library on the first floor, Russell's bedroom and dressing room on the third floor and servants' rooms on the floor above. Aside from Courvoisier, Mary Hannell, the cook, and Sarah Mancer, the housemaid, were the only resident staff; a groom and coachman lived elsewhere. Crime and trial Russell was a member of Brooks's, a gentlemen's club in St James's Street, London, and was in the habit of spending much of his day there. On the day before his murder he left Courvoisier with a number of tasks, one of which was to advise his coachman to collect him from Brooks's at five o'clock in his private carriage. Courvoisier, said to have been confused by the number of tasks he had been given, forgot to inform the coachman and Russell returned home by hired cab, showing 'some dissatisfaction at the neglect of his servant; but it does not appear that he exhibited any such anger as could well excite a feeling of hatred or ill-will' regarding the incident. The evening continued 'in the usual way' with Russell taking his dinner, alone, at seven o'clock and then retiring to his library. Hannell had to go out, and on her return Courvoisier was observed securely fastening the gate and kitchen door. When Courvoisier later went out to obtain beer, Hannell secured the gate and door on his return. At ten o'clock, Courvoisier had his dinner with the two female servants, and the women retired to their rooms half an hour later. Courvoisier remained on duty to attend to Russell until he was ready to retire at half past twelve. 
The following morning, at 6.30am, Sarah Mancer found the main rooms of the house in disarray, as if they had been ransacked by thieves. She alerted Hannell and Courvoisier and it was agreed they would check on Lord William. They found him in bed with his throat cut and his pillow soaked in blood. Mancer went for the Police and a doctor and, when they returned, found Courvoisier 'in a stupefied state' still in Russell's room. Courvoisier appeared to regain his senses and suggested Russell's son should be contacted before taking the Police downstairs, where he pointed to marks on the pantry door and said 'It was here that they entered'. The Police detained all three staff and, for two days, examined the house. Concealed in various places in the pantry, which was also Courvoisier's bedroom, they found various valuable items belonging to Russell. Another item, a gold locket belonging to Russell, was also found in Courvoisier's possession and, on 8 May, he was arrested for murder. Courvoisier's trial began at the Old Bailey before Chief Justice Nicholas Conyngham Tindal and Justice Baron James Parke, and took three days. It was widely reported in the London newspapers and, among the many titled spectators, was Prince Augustus Frederick, Duke of Sussex, who was seated in a position normally reserved for the Lord Mayor of London. The defence benefited from the generosity of Sir George Beaumont, a longtime employer of Courvoisier's uncle, who donated £50 for the employment of a solicitor and barrister, while the prosecution was led by John Adolphus. Courvoisier pleaded not guilty when the indictment was read. Adolphus began his cross-examination of witnesses, including Hannell and Mancer, by admitting to the court that it appeared that Courvoisier had no apparent motive to murder Russell, and that the evidence against him was circumstantial, but he also suggested that the accused, 'as a foreigner', might behave differently to an Englishman. That night, a new witness, Madame Charlotte Piolaine, came forward to the Police and, during cross-examination the following day, she testified that Courvoisier had left a parcel with her prior to the murder and, when it was opened by Police, it was found to contain items belonging to Russell. It was later reported that after Madame Piolaine's evidence, Courvoisier summoned his defence counsel and said 'I have committed the murder' but added that he did not want to change his plea. Courvoisier's barrister, Charles Phillips, addressed the court on the third day, continuing to argue that the evidence against his client was circumstantial. The jury found Courvoisier guilty of murder, and Chief Justice Tindal gave a sentence of death by hanging, to take place on 6 July 1840. The day after the sentence was passed, Courvoisier made a confession, stating: 'His lordship was very cross with me and told me I must quit his service. As I was coming upstairs from the kitchen I thought it was all up with me; my character was gone, and I thought it was the only way I could cover my faults by murdering him. This was the first moment of any idea of the sort entering my head. I went into the dining-room and took a knife from the sideboard. I do not remember whether it was a carving-knife or not. I then went upstairs, I opened his bedroom door and heard him snoring in his sleep; there was a rushlight in his room burning at this time. I went near the bed by the side of the window, and then I murdered him. He just moved his arm a little; he never spoke a word.' 
Execution Courvoisier's confession was accompanied by a religious fervour and he spent most of his final days at Newgate Prison in prayer. His last appearance, the day before his death, was at a service in the prison chapel, where he stood before his own coffin, watched by the many spectators who had applied to attend. Returned to his cell, he was visited by the Swiss consul, who gave him a letter from his mother in which she wrote that she forgave him, and he was permitted to write a short reply. On the day of Courvoisier's death the usual protocols were followed, with the scaffold erected outside the debtor's door of the prison ready for the execution time of 8am. At the appointed time, Courvoisier was led to the scaffold, a noose placed around his neck and a hood over his eyes, and he was hanged. A crowd estimated at 40,000 people witnessed the execution, including British aristocrats, Members of Parliament, and 'distinguished Russian noblemen'. Elsewhere, watching separately in the crowd, were the writers and death penalty opponents Charles Dickens and William Makepeace Thackeray. Both would later write about the events of the morning, with Thackeray, in his essay On going to see a man hanged, stating 'I feel myself shamed and degraded at the brutal curiosity that took me to that spot.' At the end of January 2017, his plaster death mask was sold for £20,000 by the Thomson Roddick auction house. The name of the buyer is unknown. References 1816 births 1840 deaths People from Morges District Murder in 1840 Swiss emigrants to the United Kingdom Executed people from London 19th-century executions by England and Wales English criminal law 1840s murders in the United Kingdom
22756468
https://en.wikipedia.org/wiki/Inner%20Worlds%20%28video%20game%29
Inner Worlds (video game)
Inner Worlds is a 1996 fantasy platform game by Sleepless Software Inc. incorporating some RPG elements. The game centers on a werewolf character called Nikita who travels through a magical world fighting monsters and learning spells. Gameplay Although the game appears to be a typical platform game with levels grouped into three episodes, there are many RPG elements, which make the game much more complex. While episodes must be played in proper order, the level structure of the game is not strictly linear, as it is possible to skip some levels and return to them later in the episode. In addition to jumping, running and fighting monsters, Nikita is able to shapeshift into a wolf at any time, if she has enough mana, which gives her access to otherwise inaccessible locations. She can also collect many kinds of weapons and other special items such as keys, scrolls and potions, some of which give her special abilities. It is even possible to enchant the weapon chosen by the player to dramatically change its power and behavior. Character development system Unlike many other games of the time, in Inner Worlds the main character grows stronger over the course of the game in addition to collecting new items and weapons, as would be expected in a role playing game. On almost every level the player is able to find an amulet which increases maximum mana or health. Killing unusual monsters allows the player to learn spells that create magical arrows or fireballs. Story The story is revealed to the player by long text introductions before each level. In the first episode Wizard's World, Nikita travels into the Castle Drofanayrb (whose name is the name of programmer Bryan A. Ford spelled backwards) to defeat the powerful monster called Gralob, which was created by a powerful mage and now is the scourge of the land. In the second episode World of Change, she returns to her homeland to discover further horrors – and to fight them. In the third episode Heart of the World, she descends into the large volcano in order to engage in a final confrontation with evil forces. Music The music tracks played during the game were written by different people who won a contest held for that purpose on the Internet. The game's creators offered $100 for all the songs they chose to put in the game, and $1,000 as the first prize for the contest winner. The winner was Daniel Hansson from Sweden for the track called Unplugged. The authors of other songs used in the game were from locations as disparate as Croatia, Netherlands, Slovakia, Australia, US and Finland. Development, release and reception The Sleepless Software team originally consisted of three people from Salt Lake City, with Inner Worlds being their first project. The team grew much larger through collaboration over the internet, eventually consisting of 27 people from 9 countries. Creating the game took three years instead of the one year originally planned. In 1996 Inner Worlds was released on DOS and Linux – its first episode Wizard's World was distributed as shareware. Despite this, the game did not sell well, though it was decently received. Around 2001 the developers released the game as freely redistributable freeware on their website. References External links Official website Linux version at Internet Archive DOS version at Internet Archive 1996 video games DOS games Linux games Freeware games Video games developed in the United States Video games featuring female protagonists
42835394
https://en.wikipedia.org/wiki/Morgan%20Marquis-Boire
Morgan Marquis-Boire
Morgan Marquis-Boire is a New Zealand-born hacker, journalist, and security researcher. In late 2017 he was accused of at least ten sexual assaults. He was the Director of Security at First Look Media and a contributing writer at The Intercept. His research on security, surveillance and censorship has been on the front pages of The New York Times and The Washington Post, and covered in news media including the BBC News, Bloomberg, The Wall Street Journal, and Der Spiegel. His work tracking the digital component of the ongoing Syrian Civil War is in the book Black Code: Inside the Battle for Cyberspace. Marquis-Boire previously served as an advisor to the Freedom of the Press Foundation. He was a Special Advisor to the Electronic Frontier Foundation (EFF) and advisor to the United Nations Interregional Crime and Justice Research Institute. Marquis-Boire resigned in September 2017 from his position on the technical advisory group at Citizen Lab, a multi-disciplinary advanced research laboratory at the University of Toronto. Citizen Lab later disclosed that, after his resignation, it received an allegation of a 2014 sexual assault involving Marquis-Boire. He has been profiled by Wired, CNN, Süddeutsche Zeitung, and Tages Anzeiger. He was one of Wired Italy 's Top 50 people of 2014. In March 2015 he was named a Young Global Leader. Early life Marquis-Boire was born in New Zealand. He began hacking as a teenager under the name headhntr. He holds a bachelor's degree in political science from the University of Auckland. Internet censorship research Marquis-Boire conducted research into Blue Coat Systems, a Palo Alto company which provides Internet blocking and monitoring solutions. Reports include Some Devices Wander by Mistake: Planet Blue Coat Redux (2013), and Planet Blue Coat: Mapping Global Censorship and Surveillance Tools (2013). This research has been covered in news media including the front page of the Washington Post, the New York Times, the Globe and Mail, and the Jakarta Post. Following the publication of these reports, Blue Coat Systems announced that it would no longer provide “support, updates, or other services” to software in Syria. In April 2013, the US government's Bureau of Industry and Security imposed a fine of USD 2.8 million on the Emirati company responsible for purchasing filtering products from Blue Coat and exporting them to Syria without a license. Internet surveillance research Marquis-Boire has conducted research on the global proliferation of targeted surveillance software and toolkits, including FinFisher and Hacking Team. FinFisher is a suite of remote intrusion and surveillance software developed by Munich-based Gamma International GmbH, marketed and sold exclusively to law enforcement and intelligence agencies by the UK-based Gamma Group. In 2012, Morgan Marquis-Boire and Bill Marczak provided the first public identification of FinFisher's software. Marquis-Boire and collaborators have done investigations into FinFisher including: revealing its use against Bahraini activists, analyzing variants of the FinFisher suite that target mobile phone operating systems, uncovering targeted spying campaigns against political dissidents in Malaysia and Ethiopia, and documenting FinFisher command and control servers in 36 countries. This research has informed responses from civil society organizations in Pakistan, Mexico, and the United Kingdom. 
In Mexico, local activists and politicians collaborated to demand an investigation into the state’s acquisition of surveillance technologies. In the UK, it led to a crackdown on the sale of the software over worries of misuse by repressive regimes. Hacking Team is a Milan, Italy-based company that provides intrusion and surveillance software called Remote Control System (RCS) to law enforcement and intelligence agencies. Marquis-Boire and collaborators have mapped out RCS network endpoints in 21 countries, and have provided evidence of RCS being used to target a human rights activist in the United Arab Emirates, a Moroccan media organization, and an independent news agency run by members of the Ethiopian diaspora. Following the publication of these reports, the EFF and Privacy International took legal action related to allegations that the Ethiopian government had compromised the computers of Ethiopian expatriates in the US and the UK. At the 23rd USENIX Security Symposium, Marquis-Boire and other researchers released the paper When Governments Hack Opponents: A Look at Actors and Technology, examining the government targeting of activists, opposition members, and NGOs observed in Bahrain, Syria, and the United Arab Emirates. Digital campaigns in the Syrian Civil War From 2012 to 2017, Marquis-Boire reported on digital campaigns targeting Syrian activists with the EFF and Citizen Lab. Many of these findings were translated into Arabic and disseminated along with recommendations for detecting and removing malware. This work has been on the cover of BusinessWeek, and covered in The New York Times, Al Jazeera, and Wired. On December 31, 2013, Marquis-Boire gave an interview covering this work on the NPR radio show All Things Considered. Other work In 2012, he gave a presentation on the use of targeted malware attacks during the Arab Spring at the Black Hat Briefings in Las Vegas which covered the use of malware campaigns for the purposes of digital surveillance and espionage in Libya, Syria, Bahrain, Morocco, and Iran. He released a paper with Eva Galperin of the EFF on the targeting of the Vietnamese diaspora with malware attacks. This detailed an ongoing state-sponsored hacking campaign targeting prominent bloggers, academics, and journalists. Marquis-Boire has given interviews in the wake of the global surveillance disclosures with Die Zeit, International Business Times, and Dazed. He was named in Al Jazeera's "Media Trends to Watch in 2015". Shane Huntley and Marquis-Boire co-authored a paper on government targeting of journalists and media organizations presented at Black Hat Singapore 2014. This paper reported that 21 of the world's top 25 media organizations had been targeted by state-sponsored hacking. In April, 2015, Marquis-Boire spoke at the Western Regional Conference of the Society of Professional Journalists in San Francisco, California and presented a paper entitled "Data Security for Beginners". At Black Hat USA 2015, held in Las Vegas in August, Marquis-Boire presented a paper entitled "Big Game Hunting: The Peculiarities of Nation-State Malware Research". Marquis-Boire presented a paper entitled "Security for Humans: Privacy and Coercion Resistant Design" at the Strange Loop Conference in St. Louis, Missouri, in September 2015. In May 2016, he was in the "State of Surveillance" episode of the HBO series Vice, along with Edward Snowden and Ron Wyden. 
Resignation and sexual assault allegations In September 2017, Marquis-Boire resigned from his position as a senior researcher at Citizen Lab. In October, the organization cut all ties with him after it had been informed that he had been accused of sexually assaulting an individual at the 2014 Cyber Dialogue event. The EFF also released a statement saying that Marquis-Boire was no longer associated with them. In November, The Verge published a report of specific claims of assault and rape, and a second article contained more claims, including alleged quotes and chat extracts where Marquis-Boire admits to having "drunkenly sexually assaulted or raped women — the exact number of which I am currently determining." The number of women quoted in the articles as having been sexually assaulted or raped is at least ten. References External links Morgan Marquis-Boire at Citizen Lab Computer security specialists Living people University of Auckland alumni 1980 births New Zealand computer scientists
6701333
https://en.wikipedia.org/wiki/AOS
AOS
AOS, Aos or AoS may refer to: Military, police and government Armed Offenders Squad, branch of the New Zealand Police Armed Offenders Squad (Victoria), disbanded branch of Victoria Police Amook Bay Seaplane Base (IATA: AOS) Adjustment of status, immigration concept in the United States Schools and education Academy of the Sierras, boarding schools devoted to weight loss The Alice Ottley School in Worcester, England AO Springfield School in Worcester, England Associate of Occupational Studies, a type of two-year college degree Annunciation Orthodox School, a Greek Orthodox private school in Houston, Texas Loudoun Academy of Science, part-time alternative school program for high school students enrolled in Loudoun County Public Schools Science and academia Accessory olfactory system, sensory system often responsible for the detection of pheromones Accounting, Organizations and Society, an academic journal Acquisition of signal, in spacecraft communications Agricultural Ontology Service American Oriental Society, a learned society American Ornithological Society Angle of sideslip Apraxia of speech, an oral motor speech disorder Area of Search, geographical areas used in the selection of Sites of Special Scientific Interest in the UK Adams–Oliver syndrome, a rare congenital disorder Popular culture Agents of S.H.I.E.L.D., an American television series set in the Marvel Cinematic Universe Ace of Spades (video game) Ace of Spades HQ, a blog Aeon of Strife, an early multiplayer online battle arena (MOBA) Age of Sail (computer game) and its sequel Age of Sail II Castlevania: Aria of Sorrow, a game in the Castlevania series for the Game Boy Advance Star Trek "Alternate Original Series", films created post-Enterprise "AOS (song)", a song on Yoko Ono's 1970 album Plastic Ono Band featuring Ornette Coleman All-Out Sundays, a Philippine Sunday variety show on GMA Network Warhammer Age of Sigmar, a fantasy-themed tabletop wargame Technology Academic Operating System, IBM's version of 4.3 BSD Unix for the IBM RT Algebraic Operating System, expression evaluation rules with operator priorities and infix notation, as used on many Texas Instruments calculators AmigaOS Apple Online Store, an online store of Apple Inc. Array of structures, interleaved data format Data General AOS, Data General's Advanced Operating System Active Object System (AOS), computer software, renamed A2 (operating system) in 2008 Fedora AOS (Appliance Operating System), a small version of the Fedora project for use in software appliance system images Application Object Server, a service that controls all aspects of Microsoft Dynamics AX's operation People Austin Osman Spare (1886–1956), English occultist and artist See also AO (disambiguation) OS (disambiguation) ADOS (disambiguation) Aeos (disambiguation) EOS (disambiguation)
33894169
https://en.wikipedia.org/wiki/FinFisher
FinFisher
FinFisher, also known as FinSpy, is surveillance software sold by Lench IT Solutions plc, which markets the spyware through law enforcement channels. FinFisher can be covertly installed on targets' computers by exploiting security lapses in the update procedures of non-suspect software. The company has been criticized by human rights organizations for selling these capabilities to repressive or non-democratic states known for monitoring and imprisoning political dissidents. Egyptian dissidents who ransacked the offices of Egypt's secret police following the overthrow of Egyptian President Hosni Mubarak reported that they had discovered a contract with Gamma International for €287,000 for a license to run the FinFisher software. In 2014, an American citizen sued the Ethiopian government for surreptitiously installing FinSpy onto his computer in America and using it to wiretap his private Skype calls and monitor his entire family's every use of the computer for a period of months. Lench IT Solutions plc has a UK-based branch, Gamma International Ltd in Andover, England, and a Germany-based branch, Gamma International GmbH in Munich. Gamma International is a subsidiary of the Gamma Group, specializing in surveillance and monitoring, including equipment, software, and training services. It was reportedly owned by William Louthean Nelson through a shell corporation in the British Virgin Islands. The shell corporation was fronted by a nominee director in order to withhold the identity of the ultimate beneficiary, which was Nelson, a common arrangement for companies established offshore. On August 6, 2014, FinFisher source code, pricing, support history, and other related data were retrieved from the Gamma International internal network and made available on the Internet. FinFisher GmbH opened insolvency proceedings at the Munich Local Court on 2 December 2021; however, this was only a restructuring, and the company was to continue as Vilicius Holding GmbH. Elements of the FinFisher suite In addition to spyware, the FinFisher suite offered by Gamma to the intelligence community includes monitoring of ongoing developments and updating of solutions and techniques which complement those developed by intelligence agencies. The software suite, which the company calls "Remote Monitoring and Deployment Solutions", has the ability to take control of target computers and to capture even encrypted data and communications. Using "enhanced remote deployment methods" it can install software on target computers. An "IT Intrusion Training Program" is offered which includes training in methods and techniques and in the use of the company-supplied software. The suite is marketed in Arabic, English, German, French, Portuguese, and Russian and offered worldwide at trade shows offering an intelligence support system, ISS, training, and products to law enforcement and intelligence agencies. Method of infection FinFisher malware is installed in various ways, including fake software updates, emails with fake attachments, and security flaws in popular software. Sometimes the surveillance suite is installed after the target accepts installation of a fake update to commonly used software. Code which will install the malware has also been detected in emails. The software, which is designed to evade detection by antivirus software, has versions which work on mobile phones of all major brands. 
A security flaw in Apple's iTunes allowed unauthorized third parties to use iTunes online update procedures to install unauthorized programs. Gamma International offered presentations to government security officials at security software trade shows where they described how to covertly install the FinFisher spy software on suspects' computers using iTunes' update procedures. The security flaw in iTunes that FinFisher is reported to have exploited was first described in 2008 by security software commentator Brian Krebs. Apple did not patch the security flaw for more than three years, until November 2011. Apple officials have not offered an explanation as to why the flaw took so long to patch. Promotional videos used by the firm at trade shows which illustrate how to infect a computer with the surveillance suite were released by WikiLeaks in December 2011. In 2014, the Ethiopian government was found to have installed FinSpy on the computer of an American citizen via a fake email attachment that appeared to be a Microsoft Word document. FinFisher has also been found to engage in politically motivated targeting. In Ethiopia, for instance, photos of a political opposition group are used to "bait" and infect users. Technical analysis of the malware, its methods of infection and its persistence techniques has been published in the Code And Security blog in four parts. Use by repressive regimes FinFisher's wide use by governments facing political resistance was reported in March 2011 after Egyptian protesters raided the State Security Investigations Service and found letters from Gamma International UK Ltd., confirming that SSI had been using a trial version for five months. A similar report in August 2012 concerned e-mails received by Bahraini activists and passed on (via a Bloomberg News reporter) to University of Toronto computer researchers Bill Marczak and Morgan Marquis-Boire in May 2012. Analysis of the e-mails revealed code (FinSpy) designed to install spyware on the recipient's computer. A spokesman for Gamma claims no software was sold to Bahrain and that the software detected by the researchers was not a legitimate copy but perhaps a stolen, reverse-engineered or modified demonstration copy. In August 2014 Bahrain Watch claimed that the leak of FinFisher data contained evidence suggesting that the Bahraini government was using the software to spy on opposition figures, highlighting communications between Gamma International support staff and a customer in Bahrain, and identifying a number of human rights lawyers, politicians, activists and journalists who had apparently been targeted. According to a document dated 7 December 2012 from the Federal Ministry of the Interior to members of the Finance Committee of the German Parliament, the German "Bundesnachrichtendienst", the Federal Intelligence Service, has licensed FinFisher/FinSpy, even though its legality in Germany is uncertain. In 2014, an American citizen sued the Ethiopian government for installing and using FinSpy to record a vast array of activities conducted by users of the machine, all whilst in America. Traces of the spyware inadvertently left on his computer show that information – including recordings of dozens of Skype phone calls – was surreptitiously sent to a secret control server located in Ethiopia and controlled by the Ethiopian government. FinSpy was downloaded on the plaintiff's computer when he opened an email with a Microsoft Word document attached. The attachment contained hidden malware that infected his computer. 
In March 2017, the United States Court of Appeals for the District of Columbia Circuit found that the Ethiopian government's conduct was protected from liability by the Foreign Sovereign Immunities Act. In 2015, FinFisher was reported to have been in use since 2012 for the 'Fungua Macho' surveillance programme of Uganda's President Museveni, spying upon the Ugandan opposition party, the Forum for Democratic Change. Reporters Without Borders On 12 March 2013, Reporters Without Borders named Gamma International as one of five "Corporate Enemies of the Internet" and "digital era mercenaries" for selling products that have been or are being used by governments to violate human rights and freedom of information. FinFisher technology was used in Bahrain, and Reporters Without Borders, together with Privacy International, the European Center for Constitutional and Human Rights (ECCHR), the Bahrain Centre for Human Rights, and Bahrain Watch, filed an Organisation for Economic Co-operation and Development (OECD) complaint, asking the National Contact Point in the United Kingdom to further investigate Gamma's possible involvement in Bahrain. Since then, research has shown that FinFisher technology was used in Australia, Austria, Bahrain, Bangladesh, Britain, Brunei, Bulgaria, Canada, the Czech Republic, Estonia, Ethiopia, Germany, Hungary, India, Indonesia, Japan, Latvia, Lithuania, North Macedonia, Malaysia, Mexico, Mongolia, Netherlands, Nigeria, Pakistan, Panama, Qatar, Romania, Serbia, Singapore, South Africa, Turkey, Turkmenistan, the United Arab Emirates, the United States, Venezuela and Vietnam. Firefox masquerading FinFisher is capable of masquerading as other, more legitimate programs, such as Mozilla Firefox. On April 30, 2013, Mozilla announced that they had sent Gamma a cease-and-desist letter for trademark infringement. Gamma had created an espionage program entitled firefox.exe, and had even provided a version number and trademark claims so that it appeared to be legitimate Firefox software. Detection In a PC Magazine article, Bill Marczak (a member of Bahrain Watch and a computer science PhD student at the University of California, Berkeley, researching FinFisher) said of FinSpy Mobile (Gamma's mobile spyware): "As we saw with respect to the desktop version of FinFisher, antivirus alone isn't enough, as it bypassed antivirus scans". The article's author Sara Yin, an analyst at PC Magazine, predicted that antivirus providers are likely to have updated their signatures to detect FinSpy Mobile. According to announcements from ESET, FinFisher and FinSpy are detected by ESET antivirus software as the "Win32/Belesak.D" trojan. Other security vendors claim that their products will block any spyware they know about and can detect (regardless of who may have launched it), and Eugene Kaspersky, head of IT security company Kaspersky Lab, stated, "We detect all malware regardless its purpose and origin". Two years after Eugene Kaspersky's 2012 statement, a description of the technique used by FinFisher to evade Kaspersky protection was published in Part 2 of the relevant blog series at Code And Security. References External links Computer security software Spyware Computer surveillance Trojan horses Espionage techniques Espionage devices Malware toolkits 2012 in computing Computer access control Cyberwarfare Espionage scandals and incidents Content-control software
1081527
https://en.wikipedia.org/wiki/SPMD
SPMD
In computing, SPMD (single program, multiple data) is a technique employed to achieve parallelism; it is a subcategory of MIMD. Tasks are split up and run simultaneously on multiple processors with different input in order to obtain results faster. SPMD is the most common style of parallel programming. It is also a prerequisite for research concepts such as active messages and distributed shared memory. SPMD vs SIMD In SPMD, multiple autonomous processors simultaneously execute the same program at independent points, rather than in the lockstep that SIMD or SIMT imposes on different data. With SPMD, tasks can be executed on general purpose CPUs; SIMD requires vector processors to manipulate data streams. Note that the two are not mutually exclusive. Distributed memory SPMD usually refers to message passing programming on distributed memory computer architectures. A distributed memory computer consists of a collection of independent computers, called nodes. Each node starts its own program and communicates with other nodes by sending and receiving messages, calling send/receive routines for that purpose. Barrier synchronization may also be implemented by messages. The messages can be sent by a number of communication mechanisms, such as TCP/IP over Ethernet, or specialized high-speed interconnects such as Myrinet and Supercomputer Interconnect. Serial sections of the program are implemented by identical computation on all nodes rather than computing the result on one node and sending it to the others. Nowadays, the programmer is isolated from the details of the message passing by standard interfaces, such as PVM and MPI. Distributed memory is the programming style used on parallel supercomputers from homegrown Beowulf clusters to the largest clusters on the Teragrid. Shared memory On a shared memory machine (a computer with several CPUs that access the same memory space), messages can be sent by depositing their contents in a shared memory area. This is often the most efficient way to program shared memory computers with a large number of processors, especially on NUMA machines, where memory is local to processors and accessing the memory of another processor takes longer. SPMD on a shared memory machine is usually implemented by standard (heavyweight) processes. Unlike SPMD, shared memory multiprocessing (both symmetric multiprocessing, SMP, and non-uniform memory access, NUMA) presents the programmer with a common memory space and the possibility of parallelizing execution by having the program take different paths on different processors. The program starts executing on one processor and the execution splits into a parallel region, which is started when parallel directives are encountered. In a parallel region, the processors execute a single program on different data. A typical example is the parallel DO loop, where different processors work on separate parts of the arrays involved in the loop. At the end of the loop, execution is synchronized, only one processor continues, and the others wait. The current standard interface for shared memory multiprocessing is OpenMP. It is usually implemented by lightweight processes, called threads. Combination of levels of parallelism Current computers allow many parallel modes to be exploited at the same time for maximum combined effect. A distributed memory program using MPI may run on a collection of nodes. Each node may be a shared memory computer and execute in parallel on multiple CPUs using OpenMP. 
Within each CPU, SIMD vector instructions (usually generated automatically by the compiler) and superscalar instruction execution (usually handled transparently by the CPU itself), such as pipelining and the use of multiple parallel functional units, are used for maximum single CPU speed. History SPMD was proposed first in 1983 by Michel Auguin (University of Nice Sophia-Antipolis) and François Larbey (Thomson/Sintra) in the OPSILA parallel computer and next in 1984 by Frederica Darema at IBM for highly parallel machines like the RP3 (the IBM Research Parallel Processor Prototype), in an unpublished IBM memo. By the late 1980s, there were many distributed computers with proprietary message passing libraries. The first SPMD standard was PVM. The current de facto standard is MPI. The Cray parallel directives were a direct predecessor of OpenMP. References External links Parallel job management and message passing Single Program Multiple Data stream SPMD Distributed-memory programming Parallel computing Flynn's taxonomy
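The hybrid arrangement described in the SPMD article above, message passing between nodes with MPI combined with loop-level threading within each node using OpenMP, can be illustrated with a short C sketch. This is a minimal illustrative example rather than anything taken from the article: the file name, the harmonic-sum workload, and the per-rank problem size are assumptions, while the MPI and OpenMP calls shown are standard API usage.

/*
 * Minimal hybrid SPMD sketch (assumed file name: hybrid_sum.c).
 * Every MPI rank runs this same program on its own slice of data
 * (SPMD); within each rank, OpenMP threads share the node's memory.
 *
 * Typical build and run commands (may vary by installation):
 *   mpicc -fopenmp hybrid_sum.c -o hybrid_sum
 *   mpirun -np 4 ./hybrid_sum
 */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

#define N_PER_RANK 1000000

int main(int argc, char **argv)
{
    MPI_Init(&argc, &argv);

    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* which copy of the program am I? */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* how many copies are running?    */

    /* Each rank works on its own block of indices: same program, different data. */
    double local_sum = 0.0;
    long start = (long)rank * N_PER_RANK;

    /* Shared-memory parallelism inside the rank: threads split the loop. */
    #pragma omp parallel for reduction(+:local_sum)
    for (long i = start; i < start + N_PER_RANK; i++)
        local_sum += 1.0 / (double)(i + 1);

    /* Message passing combines the per-rank results on rank 0. */
    double global_sum = 0.0;
    MPI_Reduce(&local_sum, &global_sum, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD);

    if (rank == 0)
        printf("harmonic partial sum over %ld terms = %f\n",
               (long)size * N_PER_RANK, global_sum);

    MPI_Finalize();
    return 0;
}

Each MPI rank executes the same program on its own block of indices, which is the essence of SPMD; the OpenMP directive then splits that block across the threads of the node, and a single MPI_Reduce gathers the partial results, mirroring the treatment of serial sections and synchronization described above.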
4994343
https://en.wikipedia.org/wiki/Mobipocket
Mobipocket
Mobipocket SA was a French company incorporated in March 2000 that created the .mobi e-book file format and produced the Mobipocket Reader software for mobile phones, personal digital assistants (PDA) and desktop operating systems. The Mobipocket software package was free and consisted of various publishing and reading tools for PDAs, smartphones, mobile phones, the e-readers Kindle and iLiad, and applications on devices using Symbian, Windows, Palm OS, Java ME and Psion. Amazon.com bought Mobipocket.com in 2005 and kept it running until October 2016, when it permanently shut down the Mobipocket website and servers. History Amazon.com bought Mobipocket.com in 2005. Amazon's acquisition was believed to be a result of Adobe Systems' announcement that it would no longer sell its eBook-packaging and -serving software. An alpha release of the Java-based version of the Mobipocket reader became available for cellphones on June 30, 2008. There is also a reader for desktop computers running Microsoft Windows, which also works with computers running Mac OS X or Linux using Wine. It has been widely reported that software support, user support, and platform growth ended after Amazon's acquisition of Mobipocket. In December 2011, Amazon reportedly gave book publishers official notice that it was ending support for Mobipocket. The status of Mobipocket digital rights management (DRM) content previously purchased by users remains unclear, since no other e-book reader supports its proprietary DRM method. On October 31, 2016, Amazon permanently shut down the Mobipocket website and servers. Design The software provides: A personalized press review using the Mobipocket Web Companion, an automated content extraction tool dedicated to press articles. eBooks, including for each book a biography of the writer. Each downloaded eBook is registered in the My Mobipocket personal virtual library, from which a user has access to any previously downloaded eBook. A secure reading system, based on the encryption of eBooks using DRM and a unique signature, a timestamp added to each book at the time of purchase. Depending on the device, different functions are available. These usually include managing books and their metadata, assigning books to arbitrary categories, auto-scroll, rotation by 90° or 180°, bookmarks, custom hyperlinks within one document or between different documents, highlighting, comments and sketches. When transferring documents to other device types, functions that are not supported on the device will be ignored, but the information one is reading will not be altered or deleted. Each book has one or two language attribute(s); in the latter case it is meant to be a dictionary. As a typical example, when reading a book in French, a word may be selected and translated with a Fr → En dictionary, provided the appropriate dictionary is installed on the reading device. Dictionaries are always unidirectional, so a Fr → En dictionary cannot be used in reverse; a separate En → Fr dictionary is needed for that. 
Third party tools exist to decrypt encrypted Mobipocket books, allowing them to be read using software that does not support encryption. A user can thus create documents in the Mobipocket format .mobi (which is the same as the Palm OS format .PRC) and use personal comments, bookmarks, and more on all devices supporting those features. Additionally, Amazon offers a free program called KindleGen that can convert or create documents in the Mobipocket format. User-added information, such as annotations and bookmarks, are kept in separate ".mbp" files by the official Mobipocket Reader and Kindle applications. In October 2012, Amazon also introduced an encrypted variant of the file (".smbp"), preventing access to the information by third-party applications. Mobipocket has not released a version for Android. Owners of Android devices can download Amazon's Kindle application from the Android App store, which can read .mobi files, though no official Mobipocket reader for the Android platform has been released. Long term plans for the Mobipocket platform are in question in the wake of Amazon's announcement of the Kindle Format 8, which moves in the direction of HTML5 and CSS3. As one of the most popular e-readers, the Kindle has great sway in the popularity of e-reader formats. Legacy The Amazon Kindle's AZW format (a.k.a. Kindle File Format) is basically just the Mobipocket format with a slightly different serial number scheme (it uses an asterisk instead of a dollar sign). In late 2011, the Kindle Fire introduced "Kindle Format 8" (KF8), also known as AZW3 file format that supports a subset of HTML5 and CSS3 features, while acting as a container for a backwards-compatible MOBI content document. See also Comparison of e-book formats List of e-book readers References External links Last available Internet Archive save of the Mobipocket Website. Amazon (company) acquisitions French literature websites French companies established in 2000 Computer file formats Symbian software E-books Software companies established in 2000
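Because a .mobi file is, as noted above, the same container as the Palm OS .PRC format, it can be recognized programmatically from its Palm Database header. The following minimal C sketch checks the 8-byte type/creator field at byte offset 60 for the value "BOOKMOBI"; that offset and magic value come from the publicly documented Palm Database layout and are assumptions not stated in the article itself, and the file name is illustrative.

/*
 * Minimal sketch (assumed file name: is_mobi.c): check whether a file
 * looks like a Mobipocket/Kindle book by reading its Palm Database
 * header. In that layout the 8-byte type/creator field at byte offset
 * 60 reads "BOOKMOBI" for Mobipocket (.mobi/.prc/.azw) files.
 */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    if (argc != 2) {
        fprintf(stderr, "usage: %s file\n", argv[0]);
        return 2;
    }

    FILE *f = fopen(argv[1], "rb");
    if (!f) {
        perror("fopen");
        return 2;
    }

    char magic[8] = {0};
    int ok = fseek(f, 60L, SEEK_SET) == 0 &&        /* seek to type/creator field */
             fread(magic, 1, 8, f) == 8 &&          /* read the 8-byte signature  */
             memcmp(magic, "BOOKMOBI", 8) == 0;     /* compare against the magic  */
    fclose(f);

    printf("%s: %s\n", argv[1], ok ? "looks like a Mobipocket book"
                                   : "not a Mobipocket book");
    return ok ? 0 : 1;
}

Compiled with any C compiler (for example, cc is_mobi.c -o is_mobi), the program exits with status 0 when the signature matches, which makes it usable from shell scripts that sort mixed e-book collections.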
907890
https://en.wikipedia.org/wiki/Dune%20%28franchise%29
Dune (franchise)
Dune, also known as the Dune Chronicles, is a science fiction media franchise that originated with the 1965 novel Dune by Frank Herbert and has continued to add new publications. Dune is frequently described as the best selling science fiction novel in history. It won the inaugural Nebula Award for Best Novel in 1965 and the 1966 Hugo Award, and was later adapted into a 1984 film, a 2000 television miniseries, and a 2021 film. Herbert wrote five sequels, the first two of which were adapted as a miniseries called Frank Herbert's Children of Dune in 2003. Dune has also inspired some traditional games and a series of video games. Since 2009, the names of planets from the Dune novels have been adopted for the real-world nomenclature of plains and other features on Saturn's moon Titan. Frank Herbert died in 1986. Beginning in 1999, his son Brian Herbert and science fiction author Kevin J. Anderson published a number of prequel novels, as well as two sequels that complete the original Dune series (Hunters of Dune in 2006 and Sandworms of Dune in 2007), partially based on Frank Herbert's notes discovered a decade after his death. The political, scientific, and social fictional setting of Herbert's novels and derivative works is known as the Dune universe or Duniverse. Set tens of thousands of years in the future, the saga chronicles a civilization that has banned all "thinking machines", which include computers, robots and artificial intelligence. In their place, civilization has developed advanced mental and physical disciplines as well as advanced technologies that adhere to the ban on computers. Vital to this empire is the harsh desert planet Arrakis, the only known source of the spice melange, the most valuable substance in the universe. Due to the similarities between some of Herbert's terms and ideas and actual words and concepts in the Arabic language, as well as the series' "Islamic undertones" and themes, a Middle Eastern influence in Herbert's works has been widely noted. Premise The Dune saga is set thousands of years in humanity's future. Faster-than-light travel has been developed, and humans have colonized a vast number of worlds, but a great reaction against computers has resulted in a ban on any kind of “thinking machine”, with the creation or possession of such punishable by immediate death. Despite this prohibition, humanity continues to develop and advance other branches of technology, including ESP and instruments of war. At the time of the first book's setting, humanity has formed a feudal interstellar empire known as the Imperium, run by several Great Houses that oversee various planets. Of key interest is the planet Arrakis, known as "Dune". A desert planet with nearly no precipitation, it is the only planet where a special life-extending drug, melange or "the spice", can be found. In addition to life extension, melange enhances the mental capacity of humans: it enables humans known as Mentats to perform complex calculations without the aid of computers; allows for the mutated Spacing Guild pilots to navigate folded space and travel the distances between planets; and triggers some of the powers of the Bene Gesserit, a religious group that secretly seeks to control the direction humanity takes. Melange is difficult to acquire, not only due to the harsh environment of Arrakis, but also the presence of giant sandworms that are drawn towards any rhythmic sounds on the sands of the desert. 
Control of Arrakis, its spice production, and the impact on humanity's development become the centerpoints of a millennia-long conflict that develops through the series. Plot arc The Dune universe, set in the distant future of humanity, has a history that stretches thousands of years (some 15,000 years in total) and covers considerable changes in political, social, and religious structure as well as technology. Creative works set in the Dune universe can be said to fall into five general time periods: The Butlerian Jihad: Legends of Dune prequel trilogy (2002–2004) by Brian Herbert and Kevin J. Anderson; Great Schools of Dune (2014–2016) by Brian Herbert and Anderson The Corrino-led Imperium: Prelude to Dune prequel trilogy (1999–2001) by Brian Herbert and Anderson; Heroes of Dune series (2008–2009) by Brian Herbert and Anderson The rise of the Atreides: Dune (1965), Dune Messiah (1969), and Children of Dune (1976) by Frank Herbert; Heroes of Dune series (2008–2009) by Brian Herbert and Anderson The reign and fall of the God Emperor: God Emperor of Dune (1981) by Frank Herbert The return from the Scattering: Heretics of Dune (1984) and Chapterhouse: Dune (1985) by Frank Herbert; Hunters of Dune (2006) and Sandworms of Dune (2007) by Brian Herbert and Anderson The Butlerian Jihad As explained in Dune, the Butlerian Jihad is a conflict taking place over 11,000 years in the future (and over 10,000 years before the events of Dune), which results in the total destruction of virtually all forms of "computers, thinking machines, and conscious robots". With the prohibition "Thou shalt not make a machine in the likeness of a human mind," the creation of even the simplest thinking machines is outlawed and made taboo, which has a profound influence on the socio-political and technological development of humanity in the Dune series. Herbert refers to the Jihad several times in the novels, but does not give much detail on how he imagined the causes and nature of the conflict. Critical analysis has often associated the term with Samuel Butler and his 1863 essay "Darwin among the Machines", which advocated the destruction of all advanced machines. In Herbert's God Emperor of Dune (1981), Leto II Atreides indicates that the Jihad had been a semi-religious social upheaval initiated by humans who felt repulsed by how guided and controlled they had become by machines. This technological reversal leads to the creation of the universal Orange Catholic Bible and the rise of a new feudal pan-galactic empire that lasts for over 10,000 years before Herbert's series begins. Several secret societies also develop, using eugenics programs, intensive mental and physical training, and pharmaceutical enhancements to hone human skills to an astonishing degree. Artificial insemination is also prohibited, as explained in Dune Messiah (1969), when Paul Atreides negotiates with the Reverend Mother Gaius Helen Mohiam, who is appalled by Paul's suggestion that he impregnate his consort in this manner. Herbert died in 1986, leaving his vision of the actual events of the Butlerian Jihad unexplored and open to speculation. The Legends of Dune prequel trilogy (2002–2004) by Brian Herbert and Kevin J. Anderson presents the Jihad as a war between humans and the sentient machines they had created, who rise up and nearly destroy humanity. 
The series explains that humanity had become entirely complacent and dependent upon thinking machines; recognizing this weakness, a group of ambitious, militant humans calling themselves the Titans use this widespread reliance on machine intelligence to seize control of the entire universe. Their reign lasts for a century; eventually they give too much access and power to the AI program Omnius, which usurps control from the Titans themselves. Seeing no value in human life, the thinking machines—now including armies of robot soldiers and other aggressive machines—dominate and enslave nearly all of humanity in the universe for 900 years, until a jihad is ignited. This crusade against the machines lasts for nearly a century, with much loss of human life but ultimately ending in human victory. The Corrino-led Imperium The ancient Battle of Corrin—occurring 20 years after the end of the Butlerian Jihad—spawns the Padishah Emperors of House Corrino, who rule the known universe for millennia by controlling the Sardaukar, a brutally efficient military force. Ten thousand years later, Imperial power is balanced by the assembly of noble houses called the Landsraad, which enforces the Great Convention's ban on the use of atomics against human targets. Though the power of the Corrinos is unrivaled by any individual House, they are in constant competition with each other for political power and stakes in the omnipresent CHOAM company, a directorship that controls the wealth of the entire Empire. The third primary power in the universe is the Spacing Guild, which monopolizes interstellar travel and banking. Mutated Guild Navigators use the spice drug melange to successfully navigate "folded space" and safely guide enormous heighliner starships from planet to planet instantaneously. The matriarchal Bene Gesserit possess almost superhuman physical, sensory, and deductive powers developed through years of physical and mental conditioning. While positioning themselves to "serve" humanity, the Bene Gesserit pursue their goal to better the human race by subtly and secretly guiding and manipulating the affairs of others to serve their own purposes. By the time of Dune, they have secured a level of control over the current emperor, Shaddam IV, by marrying him to one of their own who intentionally bears him only daughters. The Bene Gesserit also have a secret, millennia-long selective breeding program to bolster and preserve valuable skills and bloodlines as well as to produce a theoretical superhuman male they call the Kwisatz Haderach. When Dune begins, the Sisterhood are only one generation away from their desired individual, having manipulated the threads of genes and power for thousands of years to produce the required confluence of events. But Lady Jessica, ordered by the Bene Gesserit to produce a daughter who would breed with the appropriate male to produce the Kwisatz Haderach, instead bears a son—unintentionally producing the Kwisatz Haderach a generation early. "Human computers" known as Mentats have been developed and perfected to replace the capacity for logical analysis lost through the prohibition of computers. Through specific training, they learn to enter a heightened mental state in which they can perform complex logical computations that are superior to those of the ancient thinking machines. The Bene Tleilax are amoral merchants who traffic in biological and genetically engineered products such as artificial eyes, "twisted" Mentats, and gholas. 
Finally, the Ixians produce cutting-edge technology that seemingly complies with (but pushes the boundaries of) the prohibitions against thinking machines. The Ixians are very secretive, not only to protect their valuable hold on the industry but also to hide any methods or inventions that may breach the anti-thinking machine protocols. Against this backdrop, the Prelude to Dune prequel trilogy (1999–2001) chronicles the return from obscurity of House Atreides, whose role in the Butlerian Jihad is all but forgotten. The Imperial House schemes to gain full control of the Empire through the control of melange, precisely at the time that the Bene Gesserit breeding program is nearing fruition. The rise of the Atreides As Frank Herbert's Dune (1965) begins, Duke Leto Atreides finds himself in a dangerous position. The 81st Padishah Emperor Shaddam IV has put him in control of the desert planet Arrakis, known as Dune, which is the only source of the all-important spice melange. The most valuable commodity in the known universe, the spice not only makes safe and reliable interstellar travel possible, but also prolongs life, protects against disease, and is used by the Bene Gesserit to enhance their abilities. The potential financial gains for House Atreides are mitigated by the fact that mining melange from the desert surface of Arrakis is an expensive and hazardous undertaking, thanks to the treacherous environment and constant threat of giant sandworms that protect the spice. In addition, Leto is aware that Shaddam, feeling threatened by the rising power and influence of the Atreides, has sent him into a trap. Failure to meet or exceed the production volume of his predecessor, the villainous Baron Vladimir Harkonnen, will negatively affect the position of House Atreides in CHOAM, which relies on spice profits. Further, the very presence of the Atreides on Arrakis inflames the long-simmering War of Assassins between House Atreides and House Harkonnen, a feud ignited 10,000 years before when an Atreides had a Harkonnen banished for cowardice after the Butlerian Jihad. The little-understood native population of Arrakis are the Fremen, long overlooked by the Imperium. Considered backward savages, the Fremen are an extremely hardy people and exist in large numbers, their culture built around the commodity of water, which is extremely scarce on Arrakis. The Fremen await the coming of a prophesied messiah, not suspecting that this prophecy had been planted in their legends by the Missionaria Protectiva, an arm of the Bene Gesserit dedicated to religious manipulation to ease the path of the Sisterhood when necessary. In Dune, the so-called "Arrakis Affair" puts unexpected Kwisatz Haderach Paul Atreides in control of first the Fremen people and then Arrakis itself. Absolute control over the spice supply allows Paul to depose Shaddam and become ruler of the known universe, with Shaddam's eldest daughter Princess Irulan as his consort. With a bloody jihad subsequently unleashed across the universe in Paul's name but out of his control, the Bene Gesserit, Tleilaxu, Spacing Guild, and House Corrino conspire to dethrone him in Dune Messiah (1969). Though the plot fails, the Atreides Empire continues to devolve in Children of Dune (1976) as the religion built around Paul falters, Irulan's sister Wensicia conspires to place her son Farad'n on the throne, and Paul's twin heirs Leto II and Ghanima rise to power. The Heroes of Dune series (2008–2009) by Brian Herbert and Kevin J. 
Anderson chronicles the major events that take place between Dune: House Corrino (2001) and Dune: The Duke of Caladan (2020), between Dune (1965) and Dune Messiah (1969), and between Dune Messiah and Children of Dune (1976). The reign and fall of the God Emperor At the time of God Emperor of Dune (1981), Paul's son, the God Emperor Leto II Atreides, has ruled the Empire for 3,500 years from the verdant face of a transformed Arrakis; melange production has ceased. Leto has forced the sandworms into extinction, except for the larval sandtrout with which he had forged a symbiosis, transforming him into a human-sandworm hybrid. Human civilization before his rule had suffered from twin weaknesses: that it could be controlled by a single authority, and that it was totally dependent upon melange, found on only one planet in the known universe. Leto's prescient visions had shown that humanity would be threatened by extinction in any number of ways; his solution was to place humanity on his "Golden Path," a plan for humanity's survival. Leto governs as a benevolent tyrant, providing for his people's physical needs, but denying them any spiritual outlets other than his own compulsory religion (as well as maintaining a monopoly on spice and thus total control of its use). Personal violence of any kind is banned, as is nearly all space travel, creating a pent-up demand for freedom and travel. The Bene Gesserit, Ixians, and Tleilaxu find themselves seeking ways to regain some of their former power or unseat Leto altogether. Leto also conducts his own selective breeding program among the descendants of his twin sister Ghanima, finally arriving at Siona, daughter of Moneo, whose actions are hidden from prescient vision. Leto engineers his own assassination, knowing it will result in rebellion and revolt but also in an explosion in travel and colonization. The death of Leto's body also produces new sandtrout, which will eventually give rise to a population of sandworms and a new cycle of spice production. The return from the Scattering In the aftermath of the fall of the God Emperor, chaos and severe famine on many worlds cause trillions of humans to set off into the freedom of unknown space and spread out across the universe. This diaspora is later called the Scattering and, combined with the invisibility of Atreides descendants to prescient vision, assures that humanity has forever escaped the threat of total extinction. At the time of Heretics of Dune (1984) and Chapterhouse: Dune (1985)—1500 years after Leto's death—the turmoil is settling into a new pattern; the balance of power in the, as it is now called, Old Empire rests among the Ixians, the Bene Gesserit, and the Tleilaxu. The Spacing Guild has been forever weakened by the development of Ixian machines capable of navigation in foldspace, practically replacing Guild Navigators. The Bene Gesserit, through manipulation of the Priesthood of the Divided God, control the sandworms and their planet, now called Rakis, but the Tleilaxu have discovered how to produce melange using their axlotl tanks in quantities that greatly exceed natural melange harvests. This balance of power is shattered by a large influx of people from the Scattering, some fleeing persecution by an as-yet unknown enemy. Among the returning people, the Bene Gesserit finds its match in a violent and corrupt matriarchal society known as the Honored Matres, whom they suspect may be descended from some of their own sent out in the Scattering. 
As a bitter and bloody war erupts between the orders, it ultimately becomes clear that joining the two organizations into a single New Sisterhood with shared abilities is their best chance to fight the approaching enemy. Development and publication Original series Herbert's interest in the desert setting of Dune and its challenges is attributed to research he began in 1957 for a never-completed article about a United States Department of Agriculture experiment using poverty grasses to stabilize damaging sand dunes, which could "swallow whole cities, lakes, rivers, and highways." Herbert spent the next five years researching, writing, and revising what would eventually become the novel Dune, which was initially serialized in Analog magazine as two shorter works, Dune World (1963) and The Prophet of Dune (1965). The serialized version was expanded and reworked—and rejected by more than 20 publishers—before being published by Chilton Books, a printing house best known for its auto repair manuals, in 1965. Dune won the inaugural Nebula Award for Best Novel in 1965, and the 1966 Hugo Award. The novel has been translated into dozens of languages, and has sold almost 20 million copies. Dune has been regularly cited as one of the world's best-selling science fiction novels. A sequel, Dune Messiah, followed in 1969. A third novel called Children of Dune was published in 1976, and was later nominated for a Hugo Award. Children of Dune became the first hardcover best-seller ever in the science fiction field. Parts of these two first sequels were written before Dune was completed. In 1978, Berkley Books published The Illustrated Dune, an edition of Dune with 33 black-and-white sketch drawings and eight full color paintings by John Schoenherr, who had done the cover art for the first printing of Dune and had illustrated the Analog serializations of Dune and Children of Dune. Herbert wrote in 1980 that though he had not spoken to Schoenherr prior to the artist creating the paintings, the author was surprised to find that the artwork appeared exactly as he had imagined its fictional subjects, including sandworms, Baron Harkonnen and the Sardaukar. In 1981, Herbert released God Emperor of Dune, which was ranked as the #11 hardcover fiction best seller of 1981 by Publishers Weekly. Heretics of Dune, the 1984 New York Times #13 hardcover fiction best seller, was followed in quick succession by Chapterhouse: Dune in 1985. Herbert died on February 11, 1986. Brian Herbert and Kevin J. Anderson Over a decade after Herbert's death, his son Brian Herbert enlisted science fiction author Kevin J. Anderson to coauthor a trilogy of Dune prequel novels that would come to be called the Prelude to Dune series. Using some of Frank Herbert's own notes, the duo wrote Dune: House Atreides (1999), Dune: House Harkonnen (2000), and Dune: House Corrino (2001). The series is set in the years immediately prior to the events of Dune. This was followed with a second prequel trilogy called the Legends of Dune, consisting of Dune: The Butlerian Jihad (2002), Dune: The Machine Crusade (2003), and Dune: The Battle of Corrin (2004). These were set during the Butlerian Jihad, an element of backstory that Frank Herbert had previously established as occurring 10,000 years before the events chronicled in Dune. Herbert's brief description of humanity's "crusade against computers, thinking machines, and conscious robots" was expanded by Brian Herbert and Anderson in this series. 
With an outline for the first book of Prelude to Dune series written and a proposal sent to publishers, Brian Herbert had discovered his father's 30-page outline for a sequel to Chapterhouse Dune, which the elder Herbert had dubbed Dune 7. After publishing their six prequel novels, Brian Herbert and Anderson released Hunters of Dune (2006) and Sandworms of Dune (2007), which complete the original series and wrap up storylines that began with Frank Herbert's Heretics of Dune. The Heroes of Dune series followed, focusing on the time periods between Frank Herbert's original novels. The first book, Paul of Dune, was published in 2008, followed by The Winds of Dune in 2009. The next two installments were to be called The Throne of Dune and Leto of Dune (possibly changing to The Golden Path of Dune), but were postponed due to plans to publish a trilogy about "the formation of the Bene Gesserit, the Mentats, the Suk Doctors, the Spacing Guild and the Navigators, as well as the solidifying of the Corrino imperium." Sisterhood of Dune was released in 2012, followed by Mentats of Dune in 2014. In a 2009 interview, Anderson stated that the third and final novel would be titled The Swordmasters of Dune, but by 2014 it had been renamed Navigators of Dune and was published in 2016. In July 2020, Herbert and Anderson announced a new trilogy of prequel novels called The Caladan Trilogy. The first novel in the series, Dune: The Duke of Caladan, was published in October 2020, and the second, Dune: The Lady of Caladan, was released in September 2021. The third novel, Dune: The Heir of Caladan, is scheduled to be published in October 2022. Jon Michaud of The New Yorker wrote in 2013, "The conversion of Dune into a franchise, while pleasing readers and earning royalties for the Herbert estate, has gone a long way toward obscuring the power of the original novel." Short stories In 1985, Frank Herbert wrote an illustrated short work called "The Road to Dune", set sometime between the events of Dune and Dune Messiah. Published in Herbert's short story collection Eye, it takes the form of a guidebook for pilgrims to Arrakis and features images (with descriptions) of some of the devices and characters presented in the novels. Brian Herbert and Anderson have written eight Dune short stories and three Dune novellas, most of them related to and published around their novels. The eight short stories include "Dune: A Whisper of Caladan Seas" (2001), "Dune: Hunting Harkonnens" (2002), "Dune: Whipping Mek" (2003), "Dune: The Faces of a Martyr" (2004), "Dune: Sea Child" (2006), "Dune: Treasure in the Sand" (2006), "Dune: Wedding Silk" (2008), and "Dune: Red Plague" (2016). The three novellas include "Dune: The Waters of Kanly" (2017), "Dune: Blood of the Sardaukar" (2019), and a forthcoming untitled origin story for the Shadout Mapes. The three novellas will be collected in the forthcoming collection Sands of Dune, which will release on July 28, 2022. By other authors In 1984, Herbert's publisher Putnam released The Dune Encyclopedia under its Berkley Books imprint. Approved by Herbert but not written by him, this collection of essays by 43 contributors describes in invented detail many aspects of the Dune universe not found in the novels themselves. Herbert's estate later confirmed its non-canonical status after Brian Herbert and Kevin J. Anderson had begun publishing prequel novels that directly contradict The Dune Encyclopedia. 
The 1984 Dune film spawned The Dune Storybook (September 1984), a novelization written by Joan D. Vinge, and The Making of Dune (December 1984), a making-of book by Ed Naha. In November 1984, Pocket Books published National Lampoon's Doon by Ellis Weiner, a parody novel. In May 1992, Ace Books published Songs of Muad'dib, a collection of Dune-related poems written by Frank Herbert and edited by his son Brian. Brian Herbert and Kevin J. Anderson released The Road to Dune on August 11, 2005. The book contains a novelette called Spice Planet (an alternative version of Dune based on an outline by Frank Herbert), a number of the Brian Herbert/Anderson short stories, and letters and unused chapters written by Frank Herbert. In the gazetteer The Stars and Planets of Frank Herbert's Dune: A Gazetteer (1999), Joseph M. Daniels estimates the distance from Earth in light-years (ly) for many Dune planets, based on the real-life distances of the stars and planetary systems referenced by Frank Herbert when discussing these planets in the glossary of the novel Dune. Though Herbert used the names of actual stars and planetary systems in his work, there is no documentation supporting or disputing the assumption that he was, in fact, referring to these real-life stars or systems. The Science of Dune (2008) analyzes and deconstructs many of Herbert's concepts and fictional inventions. Themes and influences The Dune series is a landmark of soft science fiction. Herbert deliberately suppressed technology in his Dune universe so he could address the politics of humanity, rather than the future of humanity's technology. Dune considers the way humans and their institutions might change over time. Jon Michaud of The New Yorker called the originating novel Dune "an epic of political betrayal, ecological brinkmanship, and messianic deliverance." Director John Harrison, who adapted Dune for Syfy's 2000 miniseries, called the novel a universal and timeless reflection of "the human condition and its moral dilemmas". Novelist Brian Herbert, Frank Herbert's son and biographer, explained that "Frank Herbert drew parallels, used spectacular metaphors, and extrapolated present conditions into world systems that seem entirely alien at first blush. But close examination reveals they aren't so different from systems we know". He wrote that the invaluable drug melange "represents, among other things, the finite resource of oil". Michaud explained, "Imagine a substance with the combined worldwide value of cocaine and petroleum and you will have some idea of the power of melange." Each chapter of Dune begins with an epigraph excerpted from the fictional writings of the character Princess Irulan. In forms such as diary entries, historical commentary, biography, quotations and philosophy, these writings set the tone and provide exposition, context, and other details intended by Herbert to enhance understanding of his complex fictional universe and themes. Michaud wrote in 2013, "With daily reminders of the intensifying effects of global warming, the spectre of a worldwide water shortage, and continued political upheaval in the oil-rich Middle East, it is possible that Dune is even more relevant now than when it was first published." Praising Herbert's "clever authorial decision" to excise robots and computers ("two staples of the genre") from his fictional universe, he suggested that "This de-emphasis on technology throws the focus back on people. 
It also allows for the presence of a religious mysticism uncommon in science fiction." Environmentalism and ecology The originating novel Dune has been called the "first planetary ecology novel on a grand scale". After the publication of Silent Spring by Rachel Carson in 1962, science fiction writers began treating the subject of ecological change and its consequences. Dune responded in 1965 with its complex descriptions of life on Arrakis, from giant sandworms (for whom water is deadly) to smaller, mouse-like life forms adapted to live with limited water. Dune was followed in its creation of complex and unique ecologies by other science fiction books such as A Door into Ocean (1986) and Red Mars (1992). Environmentalists have pointed out that Dune's popularity as a novel depicting a planet as a complex, almost living, thing, in combination with the first images of Earth from space being published in the same time period, strongly influenced environmental movements such as the establishment of the international Earth Day. Declining empires Lorenzo DiTommaso compared Dune's portrayal of the downfall of a galactic empire to Edward Gibbon's Decline and Fall of the Roman Empire, which argues that Christianity allied with the profligacy of the Roman elite led to the fall of Ancient Rome. In "History and Historical Effect in Frank Herbert's Dune" (1992), DiTommaso outlines similarities between the two works by highlighting the excesses of Padishah Emperor Shaddam IV on his home planet of Kaitain and of the Baron Harkonnen in his palace. The Emperor loses his effectiveness as a ruler through an excess of ceremony and pomp. The hairdressers and attendants he brings with him to Arrakis are even referred to as "parasites". The Baron Harkonnen is similarly corrupt, materially indulgent, and a sexual degenerate. Gibbon's Decline and Fall partly blames the fall of Rome on the rise of Christianity. Gibbon claimed that this exotic import from a conquered province weakened the soldiers of Rome and left it open to attack. Similarly, the Emperor's Sardaukar fighters are little match for the Fremen of Arrakis because of the Sardaukar's overconfidence and the Fremen's capacity for self-sacrifice. The Fremen put the community before themselves in every instance, while the world outside wallows in luxury at the expense of others. The decline and long peace of the Empire sets the stage for revolution and renewal by genetic mixing of successful and unsuccessful groups through war, a process culminating in the Jihad led by Paul Atreides, described by Herbert as depicting "war as a collective orgasm" (drawing on Norman Walter's 1950 The Sexual Cycle of Human Warfare). These themes reappear in God Emperor of Dune's Scattering and Leto II's all-female Fish Speaker army. Heroism Brian Herbert wrote that "Dune is a modern-day conglomeration of familiar myths, a tale in which great sandworms guard a precious treasure of melange...[that] resembles the myth described by an unknown English poet in Beowulf, the compelling tale of a fearsome fire dragon who guarded a great treasure hoard in a lair under cliffs". Paul's rise to superhuman status follows the hero's journey template; after unfortunate circumstances are forced onto him, he suffers a long period of hardship and exile, and finally confronts and defeats the source of evil in his tale. As such, Dune is representative of a general trend beginning in 1960s American science fiction in that it features a character who attains godlike status through scientific means. 
Frank Herbert said in 1979, "The bottom line of the Dune trilogy is: beware of heroes. Much better [to] rely on your own judgment, and your own mistakes." He wrote in 1985, "Dune was aimed at this whole idea of the infallible leader because my view of history says that mistakes made by a leader (or made in a leader's name) are amplified by the numbers who follow without question." Juan A. Prieto-Pablos says Herbert achieves a new typology with Paul's superpowers, differentiating the heroes of Dune from earlier heroes such as Superman, van Vogt's Gilbert Gosseyn and Henry Kuttner's telepaths. Unlike previous superheroes who acquire their powers suddenly and accidentally, Paul's are the result of "painful and slow personal progress." And unlike other superheroes of the 1960s—who are the exception among ordinary people in their respective worlds—Herbert's characters grow their powers through "the application of mystical philosophies and techniques." For Herbert, the ordinary person can develop incredible fighting skills (Fremen, Swordmasters of Ginaz and Sardaukar) or mental abilities (Bene Gesserit, Mentats, Spacing Guild Navigators). Middle-Eastern and Islamic influences Due to the similarities between some of Herbert's terms and ideas and actual words and concepts in the Arabic language, as well as the series' "Islamic undertones" and themes, a Middle Eastern influence on Herbert's works has been noted repeatedly. As a foreigner who adopts the ways of a desert-dwelling people and then leads them in a military capacity, Paul Atreides' character bears many similarities to the historical T. E. Lawrence, whose 1962 biopic Lawrence of Arabia has also been identified as an influence. Lesley Blanch's novel The Sabres of Paradise (1960) about Muslim resistance to the Russian Empire in the Caucasus, has also been identified as a major influence upon Dune, with its depiction of Imam Shamil and the Islamic culture of the Caucasus inspiring some of the themes, characters, events and terminology of Dune. Multiple proverbs recorded by Blanch's The Sabres as originating from the Caucasus Mountains are included in Dune, such as “polish comes from the city, wisdom from the hills,” becoming “polish comes from the cities, wisdom from the desert” for Arrakis. The environment of the desert planet Arrakis is similar to the Middle East, particularly the Arabian Peninsula and Persian Gulf, as well as Mexico. The novel also contains references to the petroleum industries in the Arab states of the Persian Gulf as well as Mexico. The Fremen people of Arrakis were influenced by the Bedouin tribes of Arabia, and the Mahdi (messiah) prophecy originates from Islamic eschatology. Inspiration is also adopted from medieval historian Ibn Khaldun's cyclical history and his dynastic concept in North Africa, hinted by Herbert's reference to Khaldun's book Kitāb al-ʿibar ("The Book of Lessons") as known among the Fremen. Additional linguistic and historic influences In addition to Arabic, Dune derives words and names from multiple other languages, including Hebrew, Navajo, Latin, Chakobsa, the Nahuatl language of the Aztecs, Greek, Persian, East Indian, Russian, Turkish, Finnish, Dutch and Old English. Through the inspiration from Lesley Blanch's The Sabres of Paradise, there are also allusions to the tsarist-era Russian nobility and Cossacks. 
Frank Herbert stated that a bureaucracy that lasted long enough would become a hereditary nobility, and a significant theme behind the aristocratic families in Dune was "aristocratic bureaucracy", which he saw as analogous to the Soviet Union. Zen and religion Early in his newspaper career, Herbert was introduced to Zen by two Jungian psychologists, Ralph and Irene Slattery, who "gave a crucial boost to his thinking". Zen teachings ultimately had "a profound and continuing influence on [Herbert's] work". Throughout the Dune series and particularly in Dune, Herbert employs concepts and forms borrowed from Zen Buddhism; the Fremen are Zensunni adherents, and many of Herbert's epigraphs are Zen-spirited, influences he explored further in his essay "Dune Genesis". Brian Herbert called the Dune universe "a spiritual melting pot", noting that his father incorporated elements of a variety of religions, including Buddhism, Sufi mysticism and other Islamic belief systems, Catholicism, Protestantism, Judaism, and Hinduism. He added that Frank Herbert's fictional future in which "religious beliefs have combined into interesting forms" represents the author's solution to eliminating arguments between religions, each of which claims to have "the one and only revelation." Frank Herbert writes that, in the aftermath of the technology-purging Butlerian Jihad, the Bene Gesserit composed the Azhar Book, which "preserves the great secrets of the most ancient faiths". Soon after, an ecumenical council created a syncretic religion defined by the Orange Catholic Bible, which would become the primary orthodox religious text in the universe, as described in the glossary of Dune. Its title suggests a merging of Protestantism (Orange Order) and Catholicism. The Bene Gesserit also practice "religious engineering" through the Missionaria Protectiva, which spreads contrived myths, prophecies and superstition on primitive worlds so that the Sisterhood may later exploit those regions. Herbert suggests that the Fremen religion on Arrakis has been thus influenced, allowing Paul to embody their prophesied messiah. The name Bene Gesserit is derived from the Latin for "it will have been well borne", symbolizing the order's doctrine in the story. Legacy The political, scientific, and social fictional setting of Herbert's novels and derivative works is known as the Dune universe or Duniverse. Dune has been widely influential, inspiring numerous novels, music, films, television, games, and comic books. It is considered one of the greatest and most influential science fiction novels of all time, with numerous modern science fiction works such as Star Wars owing their existence to Dune. Dune has also been referenced in numerous other works of popular culture, such as Star Trek, The Chronicles of Riddick, The Kingkiller Chronicle, and Futurama. Dune was cited as a major source of inspiration for Hayao Miyazaki's anime film Nausicaä of the Valley of the Wind (1984). Jon Michaud noted in 2013 in The New Yorker, "what's curious about Dune's stature is that it has not penetrated popular culture in the way that The Lord of the Rings and Star Wars have." He praised Herbert's "clever authorial decision" to excise robots and computers ("two staples of the genre") from his fictional universe, but suggested that this may be one explanation why Dune lacks "true fandom among science-fiction fans". 
Since 2009, the names of planets from the Dune novels have been adopted for the real-world nomenclature of plains (planitiae) and complexes of valleys (labyrinthi) on Saturn's moon Titan. Planet names used to date include Arrakis, Caladan, Giedi Prime, Kaitain, Salusa Secundus, and Tleilax. The Hagal dune field and other sites on Mars are informally named after planets mentioned in the Dune series. The city of Tacoma, Washington, Herbert's birthplace, dedicated part of Point Defiance Park as the "Dune Peninsula" to honor the writer and the series. In other media Film In 1973, director and writer Alejandro Jodorowsky set about creating a cinematic adaptation, taking over the option on the film adaptation rights that producer Arthur P. Jacobs had secured shortly before his death. Jodorowsky approached, among others, Peter Gabriel and the prog rock groups Pink Floyd and Magma for some of the music, artists H. R. Giger and Jean Giraud for set and character design, and Dan O'Bannon for special effects. Jodorowsky cast his own son Brontis Jodorowsky in the lead role of Paul Atreides, Salvador Dalí as Shaddam IV, Padishah Emperor, Amanda Lear as Princess Irulan, Orson Welles as Baron Vladimir Harkonnen, Gloria Swanson as Reverend Mother Gaius Helen Mohiam, David Carradine as Duke Leto Atreides, Geraldine Chaplin as Lady Jessica, Alain Delon as Duncan Idaho, Hervé Villechaize as Gurney Halleck, Udo Kier as Piter De Vries, and Mick Jagger as Feyd-Rautha. He began writing a vast script, so expansive that the movie was thought to potentially last 14 hours. The project, nevertheless, was scrapped for financial reasons, leaving Jodorowsky's unfinished handwritten script in a notebook that was partially published as a facsimile in 2012 as part of the 100 Notes – 100 Thoughts catalog of the 13th documenta exhibition. Frank Pavich directed a documentary about this unrealized project entitled Jodorowsky's Dune, which premiered at the Cannes Film Festival in May 2013 and was released theatrically in March 2014. In 1984, Dino De Laurentiis and Universal Pictures released Dune, a feature film adaptation of the novel by director and writer David Lynch. The film stars Kyle MacLachlan as Paul Atreides, Jürgen Prochnow as Duke Leto Atreides, Francesca Annis as Lady Jessica, Sean Young as Chani, Kenneth McMillan as Baron Vladimir Harkonnen, Siân Phillips as Reverend Mother Gaius Helen Mohiam, Max von Sydow as Doctor Kynes, Sting as Feyd-Rautha, Freddie Jones as Thufir Hawat, Richard Jordan as Duncan Idaho, Everett McGill as Stilgar, Patrick Stewart as Gurney Halleck, Dean Stockwell as Doctor Wellington Yueh, and José Ferrer as Padishah Emperor Shaddam IV. Although the film was a commercial and critical failure upon release, Frank Herbert himself was reportedly pleased with it, as it stayed more faithful to the book than earlier movie adaptation attempts. However, he had reservations about its failures at the time, citing the lack of "imagination" in its marketing and estimated costs, and some of the filmmaker's production techniques. In 2008, Paramount Pictures announced that it had a new feature film adaptation of Dune in development, with Peter Berg set to direct; Berg dropped out of the project in October 2009, and director Pierre Morel was signed in January 2010. Paramount dropped the project in March 2011. In November 2016, Legendary Entertainment acquired the film and TV rights for Dune. 
Variety reported in December 2016 that Denis Villeneuve was in negotiations to direct the project, which was confirmed in February 2017. In early 2018, Villeneuve stated that his goal was to adapt the novel into a two-part film series. He said in May 2018 that the first draft of the script had been finished. Villeneuve said, "Most of the main ideas of Star Wars are coming from Dune so it's going to be a challenge to [tackle] this. The ambition is to do the Star Wars movie I never saw. In a way, it's Star Wars for adults." In July 2018, Brian Herbert confirmed that the latest draft of the screenplay covered "approximately half of the novel Dune." Timothée Chalamet was cast as Paul Atreides. Greig Fraser joined the project as cinematographer in December 2018. In September 2018, it was reported that Rebecca Ferguson was in talks to play Jessica Atreides. In January 2019, Dave Bautista and Stellan Skarsgård joined the production, playing Glossu Rabban and Vladimir Harkonnen, respectively. It was reported later that month that Charlotte Rampling had been cast as Reverend Mother Mohiam, Oscar Isaac as Duke Leto, Zendaya as Chani, and Javier Bardem as Stilgar. In February 2019, Josh Brolin was cast as Gurney Halleck, Jason Momoa as Duncan Idaho, and David Dastmalchian as Piter De Vries. Filming began March 18, 2019, and the film was shot on location in Budapest, Hungary, and in Jordan. Warner Bros. distributed the film, which was released on October 22, 2021. Dune performed very well at the box office at its opening, leading Legendary Pictures to greenlight Dune: Part Two within that week, with a planned release date in October 2023. Television The Sci-Fi Channel (now branded as Syfy) premiered a three-part miniseries adaptation called Frank Herbert's Dune on December 3, 2000. Its March 16, 2003 sequel, Frank Herbert's Children of Dune, combined both Dune Messiah and Children of Dune. As of 2004, the two miniseries were among the three highest-rated programs ever to be broadcast on Syfy. Frank Herbert's Dune won two Primetime Emmy Awards in 2001, for Outstanding Cinematography for a Miniseries or Movie and Outstanding Special Visual Effects for a Miniseries, Movie or a Special. The miniseries was also nominated for an Emmy for Outstanding Sound Editing for a Miniseries, Movie or a Special. Frank Herbert's Children of Dune won the Primetime Emmy Award for Outstanding Special Visual Effects for a Miniseries, Movie or a Special in 2003. The miniseries was also nominated for Emmys for Outstanding Sound Editing for a Miniseries, Movie or a Special, Outstanding Hairstyling for a Limited Series or Movie, and Outstanding Makeup for a Limited Series or Movie (Non-Prosthetic). In June 2019 it was announced that Legendary Television would be producing a spin-off television series, Dune: The Sisterhood, for WarnerMedia's streaming service, HBO Max. The series would focus on the Bene Gesserit and serve as a prequel to the 2021 film, with Villeneuve directing the series' pilot, Jon Spaihts writing the screenplay, and both serving as executive producers alongside Brian Herbert. Though Spaihts initially served as showrunner, The Hollywood Reporter reported on November 5, 2019, that he had stepped down from this position to focus more on the sequel to the 2021 film. Diane Ademu-John had been hired as the new showrunner by July 2021. Comics and graphic novels On December 1, 1984, Marvel Comics and Berkley published Dune: The Official Comic Book, a comic adaptation of David Lynch's film Dune. 
Marvel Super Special #36: Dune featuring an adaptation of the film by writer Ralph Macchio and artist Bill Sienkiewicz was released on April 1, 1985, as well as a three-issue limited comic series from Marvel entitled Dune from April to June 1985. In January 2020, Entertainment Weekly reported that Abrams Books was developing a three-part graphic novel adaptation of Dune, which will be the first time the novel has been published in this format. The graphic novel will be written by Brian Herbert and Anderson and illustrated by Raúl Allén and Patricia Martín, with covers by Bill Sienkiewicz. In May 2020, Boom! Studios was announced to have acquired the comic and graphic novel rights to the 1999 prequel novel Dune: House Atreides, with the intent of doing a 12-issue comic adaptation written by the original authors Brian Herbert and Anderson. In 2021 they announced another 12-issue comic series based on Brian Herbert and Kevin J. Anderson's 2019 Dune short story "Blood of the Sardaukar." Video games To date, there have been five licensed Dune computer and video games released. The first was Dune (1992) from Cryo Interactive/Virgin Interactive. Another game developed at the same time, Westwood Studios' Dune II (1992), is generally credited for popularizing and setting the template for the real-time strategy genre of computer games. Dune II is considered to be among the most influential video games of all time. Dune 2000 (1998) is a remake of Dune II from Intelligent Games/Westwood Studios/Virgin Interactive. Its sequel was the 3D video game Emperor: Battle for Dune (2001) by Intelligent Games/Westwood Studios/Electronic Arts. The 3D game Frank Herbert's Dune (2001) by Cryo Interactive/DreamCatcher Interactive is based on the 2000 Sci Fi Channel miniseries of the same name. On February 26, 2019, Funcom announced that it was entering into an exclusive partnership with Legendary Entertainment to develop games related to the upcoming Dune films. Other games The board game Dune was released by Avalon Hill in 1979, followed by a Parker Brothers game Dune in 1984. A 1997 collectible card game called Dune was followed by the role-playing game Dune: Chronicles of the Imperium in 2000. The 1979 Avalon Hill game was republished by Gale Force Nine in 2019. The board game Dune: Imperium was published by Dire Wolf in 2020. Merchandising A line of Dune action figures from toy company LJN was released to lackluster sales in 1984. Styled after David Lynch's film, the collection featured figures of Paul Atreides, Baron Harkonnen, Feyd, Rabban, Stilgar, and a Sardaukar warrior, plus a poseable sandworm, several vehicles and weapons, and a set of View-Master stereoscope reels. Figures of Gurney and Lady Jessica previewed in LJN's catalog were never produced. In 2006, SOTA Toys produced a Baron Harkonnen action figure for their "Now Playing Presents" line. In October 2019, Funko announced a "Dune Classic" line of POP! vinyl figures, the first of which are Paul in a stillsuit and Feyd in a blue jumpsuit, styled after Lynch's film. An alternate version of Feyd in his blue loincloth was released for the 2019 New York Comic Con. Soundtrack albums have been released for the 1984 film, the 2000 TV miniseries, and the 2003 Children of Dune miniseries, as well as the 1992 video game, the 2001 computer game Emperor: Battle for Dune, and select tracks from the entire series of Dune video games. See also Hydraulic empire References External links of the Dune novel series Dune (series) at the Science Fiction Encyclopedia. 
Book series introduced in 1965 Family saga novels Mining in fiction Planetary romances Science fiction book series Soft science fiction Fiction set in the 7th millennium or beyond Space opera Science fiction Space warfare in fiction Wars in fiction
56788383
https://en.wikipedia.org/wiki/Electronics%20right%20to%20repair
Electronics right to repair
Electronics right to repair is proposed legislation that would provide the practical means for equipment owners to repair their devices, rather than creating a new legal right. Advocates observe that while repair is legal under copyright law and patent law, owners are often prevented from making their own repairs, or from hiring technicians they trust to help, by manufacturer limitations on access to repair materials such as parts, tools, diagnostics, documentation and firmware. Proposed legislation draws on the power of US state governments, under unfair and deceptive acts and practices ("UDAP") law and general business law, to require fair and reasonable contracts and to make specific requirements of businesses seeking to do business within their borders. Additionally, under US law, the Federal Trade Commission has the specific authority to restrict UDAP violations. While the issue is a global concern, the primary debate has centered on the United States and the European Union. Additional efforts are now ongoing in Canada and Australia. Bloggers, activists and volunteer groups such as Louis Rossmann and the Repair Cafe movement started by Martine Postma are also active promoters of repair rights. Definition The right to repair for electronics refers to the concept of allowing end users, consumers as well as businesses, to repair electronic devices they own or service without any manufacturer or technical restrictions. The aim is to make electronics easier and cheaper to repair, prolonging the lifecycle of such devices and reducing electronic waste caused by broken or unused devices. Four requirements for electronic devices are of particular importance: the device should be constructed and designed in a manner that allows repairs to be made easily; end users and independent repair providers should be able to access original spare parts and tools (software as well as physical tools) needed to repair the device at fair market conditions; repairs should be possible by design and not hindered by software programming; and the repairability of a device should be clearly communicated by the manufacturer. While initially driven mainly by automotive consumer protection agencies and the automotive after-sales service industry, the discussion of establishing a right to repair not only for vehicles but for any kind of electronic product gained traction as consumer electronics such as smartphones and computers became universally available, causing broken and used electronics to become the fastest-growing waste stream. Today it is estimated that more than half of the population of the western world has one or more used or broken electronic devices at home that are not introduced back into the market due to a lack of affordable repair. The right to repair movement tries to address these issues by proposing legislation obligating manufacturers to allow access to spare parts and repair tools at fair market prices and to design devices in a manner that allows easy repair, with the goal of favoring repair over replacement. Factors that made independent repair more difficult Product design Many right to repair advocates claim that modern electronic devices have components that are glued in place or attached in a way that makes them difficult to remove. However, the motivations behind such designs are not always clear-cut. 
For example, the Google Pixel 6 Pro has a glued battery but includes a plastic tab to aid removal, suggesting that the adhesive was meant for a purpose other than to stymie repair. Part pairing New ways of locking devices, such as part pairing (components of a device are serialized and cannot be swapped for others), have become increasingly popular among manufacturers. Even the most common repairs, such as the replacement of a smartphone display, can cause malfunctions due to locks implemented in the software. For example, Apple has gradually restricted the replacement of iPhone displays, going from warning messages to disabling security features such as Face ID if the display was not swapped by a manufacturer-authorized repair facility. While this trend was started in the agricultural sector by tractor manufacturer John Deere, it has become a widespread phenomenon in consumer electronics over the past five years. Right to repair by jurisdiction European Union In the 2010s the trend of making one's own repairs to devices spread from the east into Western Europe. In July 2017, the European Parliament approved recommendations that member states should pass laws giving consumers the right to repair their electronics, as part of a larger update to its previous Ecodesign Directive from 2009, which called for manufacturers to produce more energy-efficient and cleaner consumer devices. These recommendations treat the ability to repair devices as a means of reducing waste to the environment. With these recommendations, work began on establishing a legal EU Directive to support them, from which member states would then pass laws to meet the Directive. One of the first areas of focus was consumer appliances such as refrigerators and washing machines. Some were assembled using adhesives instead of mechanical fasteners, which made it impossible for consumers or repair technicians to make non-destructive repairs. The right-to-repair facets of appliances were a point of contention between consumer groups and appliance manufacturers in Europe, the latter of which lobbied the various national governments to gain favorable language in the Directive. Ultimately, the EU passed legislation in October 2019 that, after 2021, required manufacturers of these appliances to be able to supply replacement parts to professional repairmen for ten years from manufacture. The legislation did not address other facets related to right-to-repair, and activists noted that this still limited the consumer's ability to perform their own repairs. The EU also has directives toward a circular economy, aimed at reducing greenhouse gas emissions and other excessive waste through recycling and other programs. A new "Circular Economy Action Plan" draft introduced in 2020 includes the electronics right to repair for EU citizens, as this would allow device owners to replace only malfunctioning parts rather than the entire device, reducing electronics waste. The Action Plan also includes additional standardization that would aid the right to repair, such as common power ports on mobile devices. 
The law gave companies a two-year grace period to come into compliance. France France has taken a somewhat different approach from the EU in general and has adopted a requirement that manufacturers contribute to a repairability scoring system. The scope of the system is limited at this time and the results are not yet audited, although both auditing and expansion are contemplated. The goal is to allow consumers to consider repairability as a buying criterion before making a purchase. This has already resulted in the disclosure of repair documentation that had previously not been widely available - at least in the case of Samsung. United States General background The right to repair concept has generally come from the United States. The earliest known published reference using the phrase comes from the auto industry and dates back to 2003, followed by repeated attempts in the US Congress to pass legislation. Within the automotive industry, Massachusetts passed the United States' first Motor Vehicle Owners' Right to Repair Act in 2012, which required automobile manufacturers to sell consumers and independent mechanics the same service materials and diagnostics that they had provided exclusively to their dealerships. The Massachusetts statute was the first to pass among several states; New Jersey, for example, had also passed a similar bill through its Assembly. Facing the potential of a variety of slightly different requirements, major automobile trade organizations signed a Memorandum of Understanding in January 2014 using the Massachusetts law as the basis of their agreement for all 50 states starting in the 2018 automotive year. A similar agreement was reached by the Commercial Vehicle Solutions Network to apply to over-the-road trucks. Digital Right to Repair Coalition Officially founded in 2013, the Digital Right to Repair Coalition, also known as The Repair Association (repair.org), has led nearly all state legislative efforts in the United States and has influenced the formation of similarly focused advocacy groups around the world. The Coalition is a 501(c)(6) trade association incorporated in New Jersey and funded entirely by membership dues. The goal of the Coalition is to support the aftermarket for technology products by advocating for repair-friendly laws, standards, regulations and policies. As such, its members are engaged in repairs, resale, refurbishment, reconfiguration and recycling regardless of industry. Members of the Coalition Advisory Board include industry experts in repair, cyber-security, copyright law, medicine, agriculture, international trade, consumer rights, contracts, e-waste, eco-design standards, software engineering and legislative advocacy. The Coalition filed its first legislative action in South Dakota in January 2014 as SB.136 (Latterell). Four states followed in 2015 - New York (S.3998 Boyle/A.6068 Morelle), Minnesota (SF 873 Osmek/HF 1048 Hertaus), Massachusetts (H.3383 Cronin/S. Kennedy), and Nebraska (LB 1072 Haar). Tennessee (SB888/H1382 Jernigan) and Wyoming (HB 0091 Hunt) were added in 2016. The following year - 2017 - new bills were filed in North Carolina (HB663 Richardson), Kansas (HB2122 Barker), Illinois (HB3030 Harris), Iowa (HF556 and SF2028), Missouri (HB1178 McCreery), New Hampshire (HB1733 Luneau) and New Jersey (A4934 Moriarty). 2018 added Oklahoma, Hawaii, Georgia, Virginia, Vermont and Washington. 2019 added Oregon, Nevada, Indiana and Montana. 
2020 was shortened by the pandemic but added Maine, Idaho, Alabama, Maryland, Pennsylvania and Colorado. 2021 added Florida, Delaware, Texas, and South Carolina, for a total of 27 states involved that year. Legislative focus for state legislation State legislation intended to use the power of general business law to enable repair of devices that include a digital electronic part is based on the automotive MOU from 2014. Template legislation avoided any requirements to change the format of documentation or the method of delivery of existing parts, tools, diagnostics or information, and any requirements to disclose trade secrets. Manufacturers are permitted to charge fair and reasonable prices for physical parts and tools, and are limited in their charges for information that is already posted online. The Coalition suggests that state legislation will broadly enable many repairs, but that federal copyright law needs revision with respect to limitations posed by Section 1201 specific to Digital Rights Management ("DRM") and software locks. Opposition Lobbying in opposition has been consistent across four major industries - consumer technology, agriculture, home appliances, and medical equipment. The tech industry has lobbied in opposition through groups including TechNet, the Consumer Technology Association ("CTA"), the Entertainment Software Association ("ESA"), and the Security Innovation Center (now no longer active). Large equipment manufacturers for agriculture and construction have lobbied through the Association of Equipment Manufacturers ("AEM") and their dealership counterpart, the Equipment Dealers Association. Their joint 2018 Statement of Principles became the subject of media backlash when, by January 2021, the promised means to make complete repairs were still not visibly available. In Nebraska, state senators behind LB543 - Right to Repair - held their bill in committee pending negotiations with equipment manufacturers over these promises. Pending the outcome of negotiations, LB543 may be pushed ahead in 2022. Medical device manufacturers have opposed legislation through the associations AdvaMed and MITA. Several states - such as New York, Massachusetts and Minnesota - have exempted medical devices from their legislation, while others have specifically targeted medical equipment, as in Arkansas (SB332, passed in the Senate) and California (SB605), as the pandemic raised questions about how effectively hospitals can control repairs of critical infrastructure. The California bill was approved by two committees of reference before being blocked in the Appropriations Committee. The home appliance industry has been represented by the Association of Home Appliance Manufacturers ("AHAM"). Smaller groups such as the Toy Industry Association and the Outdoor Power Equipment Institute ("OPEI") have also been in official opposition as part of the Security Innovation Center group. New Hampshire is the only state so far to consider a bill specific to home appliances in 2021. Federal action Led by Coalition members iFixit and the Electronic Frontier Foundation ("EFF"), the Coalition has been regularly engaged in triennial requests for copyright exemptions to Section 1201 ("anti-circumvention") since 2015, including requests for exemptions for tinkering with tractors, computers and cell phones. 
The US Congress requested from the Librarian of Congress an analysis of limitations on repair caused by software-enabled products in 2016 and a report on the impacts of Section 1201 in 2017, in which members of the Coalition were extensively engaged and widely quoted. In the 2018 round, the exemption for making software modifications to "land-based motor vehicles" was expanded to allow equipment owners to engage the services of third parties to assist with making changes. These changes were endorsed by the American Farm Bureau Federation. US Senator Ron Wyden (D-OR) and Congresswoman Yvette Clarke (D-NY) filed the first Medical Right to Repair bill in August 2020 in response to the pandemic crisis and the availability of ventilators in particular. In addition to requiring access to manuals and service materials, the bill also lifts the provisions of patent law governing the production of spare parts for the duration of the pandemic. Congressman Joseph Morelle (D-NY) filed his Fair Repair Act in Congress on June 21, 2021, drawing on his experience as Majority Leader of the New York State Assembly and prime sponsor of the Digital Fair Repair bill there. The federal bill closely resembles the state version. In July 2019 the Federal Trade Commission ("FTC") hosted a workshop titled "Nixing the Fix". The Commissioners invited multiple panels of experts to provide testimony in person, and invited all interested parties to provide evidence of harms, or justification for actions, directly to the FTC. After nearly two years of study, the FTC's Report to Congress of May 6, 2021 found that the study had provided "scant evidence" that restrictions on repair were to the benefit of consumers. The Biden Administration issued an Executive Order to the FTC and the Department of Agriculture on July 6, 2021, to widely improve access to repair for both consumers and farmers. Subsequently, FTC Chair Lina Khan held two public commission events at which commissioners voted to advance right to repair as a policy objective. Additional considerations In addition to the work of the Coalition, various individuals have stepped forward to drive action directly, such as by starting ballot initiatives. A ballot initiative was filed in Missouri and certified for inclusion on the 2022 ballot. In March 2021, Louis Rossmann started a crowdfunding campaign to raise $6 million using the GoFundMe platform in order to start a direct ballot initiative to protect the consumer right to repair in the Commonwealth of Massachusetts, citing previous similar successes in the automotive industry. The outcome of these efforts is not yet known. Companies like Apple, John Deere, and AT&T lobbied against these bills, creating a number of "strange bedfellows" from the high-tech and agricultural sectors on both sides of the issue, according to Time. In late 2017, users of Apple Inc.'s older iPhone models discovered evidence that recent updates to the phone's operating system, iOS, were purposely throttling the speed of the phone. This led many to accuse Apple of deliberately sabotaging the performance of older iPhones to compel customers to buy new models more frequently. Apple disputed this assumed intention, stating instead that the goal of the software was to prevent overtaxing older lithium-ion batteries, which had degraded over time, to avoid unexpected shutdowns of the phone. 
Furthermore, Apple allowed users to disable the battery throttling feature in an iOS update but maintained that it would not be advisable to do so, since the throttling feature only kicked in when a battery had significantly degraded. Additionally, Apple allowed users of affected iPhones to obtain service to replace batteries in their phones at a reduced cost for the next six months. However, the "right to repair" movement pointed out that such a scenario could have been handled if Apple had allowed consumers to purchase third-party batteries and the instructions to replace them at lower cost to the consumer. In April 2018, the Federal Trade Commission sent notice to six automobile, consumer electronics, and video game console manufacturers, later revealed through a Freedom of Information Act request to be Hyundai, Asus, HTC, Microsoft, Sony, and Nintendo, stating that their warranty practices may violate the Magnuson-Moss Warranty Act. The FTC specifically identified that informing consumers that warranties are voided if they break a warranty sticker or seal on the unit's packaging, use third-party replacement parts, or use third-party repair services is a deceptive practice, as these terms are only valid if the manufacturer provides free warranty service or replacement parts. Both Sony and Nintendo released updated warranty statements following this notice. In April 2018, the US Public Interest Research Group issued a statement defending Eric Lundgren over his sentencing for creating "restore disks" to extend the life of computers. The Library of Congress, as part of its three-year review of exemptions to the DMCA, approved an exemption in October 2018 that allows one to bypass copyright-protection mechanisms used in land vehicles, smartphones and home appliances in order to maintain ("to make it work in accordance with its original specifications and any changes to those specifications authorized for that device or system") or repair ("restoring of the device or system to the state of working in accordance with its original specifications and any changes to those specifications authorized for that device or system") the device. In its 2021 recommendations, the Library of Congress further extended the exemption, with favorable right-to-repair considerations for automobiles, boats, agricultural vehicles, and medical equipment, as well as modifying prior rules related to other consumer goods. Senator Elizabeth Warren, as part of her campaign for president, laid out plans for legislation related to agriculture in March 2019 and stated her intent to introduce legislation to affirm the right to repair farm equipment, potentially expanding this to other electronic devices. In August 2019, Apple announced a program under which independent repair shops may buy official replacement parts for Apple products. Several operators became authorized under the "IRP" program, but many smaller repair operators avoided the option due to onerous legal requirements. A list of authorized IRP providers is not available on the website, making it difficult to assess the level of adoption. In the midst of the COVID-19 pandemic, when medical equipment became critical for many hospitals, iFixit and a team of volunteers worked to publish the largest known collection of manuals and service guides for medical equipment, using information crowdsourced from hospitals and medical institutions. 
They incorporated all the materials found on Frank's Hospital Workshop and expanded on them with a more intuitive search tool. iFixit had found that, as with consumer electronics, some of the more expensive medical equipment used means to make non-routine servicing difficult for end users and to require authorized repair processes, which was not acceptable under the emergency conditions of the pandemic. On August 6, 2020, Senator Ron Wyden and Representative Yvette Clarke introduced the Critical Medical Infrastructure Right-to-Repair Act of 2020, which focuses on preventing health professionals from being liable under copyright law when attempting to repair devices, in order to make it easier to provide "COVID-19 aid." The Federal Trade Commission (FTC) issued its report "Nixing the Fix" to Congress in May 2021, outlining issues around corporate policies that limit repairs on consumer goods, which it considered in violation of trade laws, and outlining steps that could be taken for better enforcement. These included self-regulation by the industries involved, as well as expansion of existing laws such as the Magnuson-Moss Warranty Act, or new laws to give the FTC better enforcement powers to protect consumers from overzealous repair restrictions. On July 9, 2021, President Joe Biden signed Executive Order 14036, "Promoting Competition in the American Economy", a sweeping array of initiatives across the executive branch. Among them were instructions to the FTC to craft rules to prevent manufacturers from blocking repairs performed by owners or independent repair shops. About two weeks after the EO was issued, the FTC voted unanimously to enforce the right to repair as policy and to look to take action against companies that limit the type of repair work that can be done at independent repair shops. Apple announced in November 2021 that it would allow consumers to order parts and make repairs on Apple products, initially with iPhone 12 and 13 devices but eventually rolling out to include Mac computers. The service was to be available in the US in early 2022 and expand to additional countries throughout 2022. According to Apple, "Customers join more than 5,000 Apple Authorized Service Providers (AASPs) and 2,800 Independent Repair Providers who have access to these parts, tools, and manuals." See also Repair café Louis Rossmann Repairability References Right to Repair Consumer electronics
3151313
https://en.wikipedia.org/wiki/LocationFree%20Player
LocationFree Player
Sony's LocationFree is the marketing name for a group of products and technologies for timeshifting and placeshifting streaming video. The LocationFree Player is an Internet-based multifunctional device used to stream live television broadcasts (including digital cable and satellite), DVDs and DVR content over a home network or the Internet. It is in essence a remote video streaming server product (similar to the Slingbox). It was first announced by Sony in Q1 2004 and launched early in Q4 2004 alongside a co-branded wireless tablet TV. The last LocationFree product was the LF-V30 released in 2007. The LocationFree base station connects to a home network via a wired Ethernet cable, or for newer models, via a wireless connection. Up to three attached video sources can stream content through the network to local content provision devices or across the internet to remote devices. A remote user can connect to the internet at a wireless hotspot or any other internet connection anywhere in the world and receive streamed content. Content may only be streamed to one computer at a time. In addition, the original LocationFree Player software contained a license for only one client computer. Additional (paid) licenses were required to connect to the base station for up to a total of four clients. On November 29, 2007 Sony modified its LocationFree Player policy to provide free access to the latest LocationFree Player LFA-PC30 software for Windows XP/Vista. In addition, the software no longer requires a unique serial number in order to pair it with a LocationFree base station. In December 2007, Sony dropped the $30 license fee for the LocationFree client. However, the software still requires registration to Sony's servers after 30 days. Note (2016): when attempting to register online, users are presented with a failure message stating that there is no internet connection. This may be related to the operating system version (Windows 7, 8 or 10), as the software was written for older Windows versions. The "free" version of the player, which can still be downloaded, works for a time (reportedly 30 days) but then demands registration or a key. There does not seem to be a key available for the free version, so the software must be removed and reinstalled to continue using it. There is a workaround for this that may or may not work, depending on the operating system: after installing, open the Windows registry, go to HKEY_CURRENT_USER/Software/Sony Corporation/LFXLF-PC3US/App/ and look for StartDate, a Unix timestamp; change this to a date in the distant future. This is based on the free version (LFA-PC30US-40353.exe), which was available for download from Sony. Many links no longer work. New units came with a CD of the software (usually an older version) along with a key, which cannot be used with the free version. Two devices using the same key could not be registered to the same base station; the same limitation is not present with the "free" software. 
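As an illustration only, the following is a minimal sketch of the StartDate tweak described above, using Python's built-in winreg module on Windows. The registry path and value name are taken from the note above; the stored value type is not documented here, so the sketch reads the existing value and reuses its type, and the chosen future timestamp is arbitrary.

```python
# Hypothetical sketch of the StartDate registry workaround described above.
# Assumes the key path and value name from the note; the value's registry type
# is unknown, so the existing type is read back and reused when writing.
import winreg

KEY_PATH = r"Software\Sony Corporation\LFXLF-PC3US\App"

with winreg.OpenKey(winreg.HKEY_CURRENT_USER, KEY_PATH, 0,
                    winreg.KEY_READ | winreg.KEY_SET_VALUE) as key:
    value, value_type = winreg.QueryValueEx(key, "StartDate")
    print("Current StartDate:", value)

    future = 2000000000  # Unix timestamp for 18 May 2033, i.e. "the distant future"
    new_value = str(future) if value_type == winreg.REG_SZ else future
    winreg.SetValueEx(key, "StartDate", 0, value_type, new_value)
```

Whether this works depends on the operating system and software version, as noted above; reading the value first also confirms that the key exists before anything is changed.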
Clients The player (server) can stream content to the following (client) devices: Windows or Mac computer - requires additional software Mobile/cellular phones - coming later in 2007 Pocket PCs running Windows Mobile Smartphones/tablets running Android 2.2+ Televisions - requires a Sony adapter Sony (Client) Products: Sony wireless Tablets (listed below) PlayStation Portable (PSP) - system software version 2.50 or later (version 3.11 or later recommended due to inclusion of AVC support) Sony Ericsson P990i - European Base Stations only Sony VAIO laptops - starting Summer 2007, these laptops included an LF Vaio branded compatible client These products do not act as DVRs, since they do not allow content to be recorded to a hard drive. A user can also access and control from anywhere in the world any device connected to the unit, and switch between multiple devices. BASE Station Models Base stations packaged with LocationFree Player installation disc and instructional DVD LF-PK1 ("PK" stands for "Pack" or "Package" as it is a package of the LF-B1 base station, LFA-PC2 LocationFree player software for the PC and instructional DVD) First standalone model sold without a tablet Only model in the North American market to ever come equipped with an RF coaxial input. However the European model did not have an RF coaxial input. Wireless 11a/b/g. Can also be used as a conventional Wi-fi access point if connected to a wired router via Ethernet. Three firmware versions released. Version 1.000 was the original release version for Japan and North America. Version 2.000 added support for the Sony PSP (with PSP firmware version 2.50 or higher). An update was made available for Japanese and North American owners. Version 3.000 was the original release version for the European version of the LF-PK1 meaning all European models shipped with this latest firmware version. It increased the maximum number of clients that can be registered from 4 to 8. It also enhanced the way settings were changed through the web interface. Previously, whenever a setting was changed, the base station would have to be rebooted. With the 3.000, setting changes no longer created a reboot. An update program was offered for Japanese owners of the LF-PK1, however no update to firmware 3.000 has ever been made available for the North American model, nor can the Japanese 3.000 update be installed on the North American model. LF-B10 Bundled with LFA-PC20 LocationFree player for the PC and instructional DVD Wired 10/100 Two Infrared Ports LF-B20 Bundled with LFA-PC20 LocationFree player for the PC and instructional DVD Wireless 11a/b/g. Can also be used as a conventional Wi-fi access point if connected to a wired router via Ethernet. Two Infrared Ports LF-V30 Bundled with LFA-PC30 LocationFree player for the PC and instructional DVD Wireless 11a/b/g. Can also be used as a conventional Wi-fi access point if connected to a wired router via Ethernet. Component Support Possibly known as LF-W1HD in Japan Notes: - Wired models can be used via normal wireless routers. Access via internet via firewall provided necessary ports are opened. Can be used with DDNS. Client box - enables users to watch streamed content on a television set, without the need for a PC or laptop. LF-Box 1 LocationFree wireless tablet TV In October 2004 Sony unveiled a portable, wireless and rechargeable SVGA 12.1" LCD tablet screen with dualband Wi-Fi technology (IEEE 802.11a/b/g) which can receive pictures from the LocationFree player up to 100 feet from the source signal. 
The TV also has web-browsing and email functions, a Memory Stick Duo slot and an on-screen hand-drawing function for use as a drawing tablet. The screen can also be used as an intelligent universal AV remote control. These tablets were bundled with Base Stations. Three versions have been released: LF-X1 Original 12" Model, Aspect ratio 4:3 (LF-X1M is monitor only) Besides included tablet, base station ONLY compatible with LFA-PC1 LocationFree player for the PC, sold separately. LFA-PC2 or later, as well as all other software players and the PSP are NOT compatible with this base station. LF-X5 7" Model, Aspect ratio 16:9 Besides included tablet, base station ONLY compatible with LFA-PC1 LocationFree player for the PC, sold separately. LFA-PC2 or later, as well as all other software players and the PSP are NOT compatible with this base station. LF-X11 Bundled with same LF-B1 base station as LF-PK1. This means the base station also can be paired with other devices just like the LF-PK1 such as a PSP. However the LF-X11 tablet cannot be paired with another LF-B1 or other LocationFree base station, it is permanently bonded with the included LF-B1 base station. Bundled with LFA-PC2 LocationFree player for the PC Wireless 11a/b/g. Base station can also be used as a conventional Wi-fi access point if connected to a wired router via Ethernet. Please read LF-PK1 description above for more information and details about the LF-B1 base station and its different firmware versions, as they are the same base station. Software LFA-PC1 LocationFree Player Software for Windows (Only compatible with LF-X1 and LF-X5 base stations. No other software is compatible with these base stations, including all later versions of LocationFree Player for Windows, and players for Macintosh and all other platforms) LFA-PC2 LocationFree Player Software for Windows (Only compatible with LF-PK1 and LF-X11) LFA-PC20 LocationFree Player Software for Windows (Compatible with LF-PK1, LF-X11, LF-B10 and LF-B20) LFA-PC30 LocationFree Player Software for Windows (Latest Windows Version & available as a one week trial. Compatible with all LocationFree base stations except LFA-PC1 specific base stations. Version 4.0.3.53 maintains Windows XP & Vista support. NOTE: This software is NOT YET AVAILABLE for UK LocationFree boxes - the software does region checks on the base station and refuses to install.) TLF-MAC/J is a retail software package for Mac OS X (Only compatible with Japanese base stations) TLF-MAC/E is a retail software package for Mac OS X (Only compatible with North American base stations) Miglia software package for Mac OS X (Only compatible with European base stations) NetFront LocationFree Player for Pocket PC (See link below) ThereTV free Android client for the LocationFree Protocol (See link below) See also Slingbox HDHomeRun Monsoon HAVA Dreambox DBox2 Unibox References External links LocationFree Player and TV at Sony.com Mac Version of Software 3rd party? PocketPC aka Windows mobile software New base station hardware unveiled at Engadget.com LocationFree LF-B20 Review at SpicyGadget.com Time Magazine Bakeoff of Sling box and LocationFree Rumor of Licence fee drop on Gizmodo Review of LF-X5 7" Screen CNET review of V30 model. PC World review of V30 model Download the LFA-PC30 player software Android client for LocationFree Protocol Television technology Television placeshifting technology Consumer electronics Sony products
29407701
https://en.wikipedia.org/wiki/HeliumV
HeliumV
Helium V is an open-source ERP suite. It has been developed by Helium V IT-Solutions GmbH in Austria starting in 2005. The industry of initial focus has been electronic manufacturing. The targeted customers are small and medium-sized companies (SMEs). In this SME context, the evaluation of KPIs is of great importance, so this is very high on the developer's agenda. The software is available as open-source since October 2010. It is an integral part of the Lisog open-source stack initiative. Features Helium V covers the whole process cycle of a company, providing modules for: Quotation Purchase Production planning Procurement Capacity planning Merchandise management (ERP) Item master Time tracking Production Delivery Post calculation Accounting Payment management In practice, the ERP shows a high degree of flexibility and customization opportunities, allowing solutions to be tailored to fit the needs of each company. Industries As a result of constant development the ERP-suite has managed to cover all major industries: Electrical Electronics Metal processing Mechanical engineering Plastics technology Food Advertising agencies Services Architecture The system is based on a J2EE architecture. Therefore, it is written in Java, using JBoss as applications server and Postgres as default database. MS-SQL or Oracle are supported on special request. The GUI is written with the Java SWING toolkit. As a result, Helium V runs on any platform which supports Java (Linux, Mac OS X, Windows). A key feature of the suite is its customizability: For example, the administrator can define the screens, tabs or elements visible for users. The time tracking functionality can already be accessed via an API; more APIs are planned and in development. Helium V provides multilingual support through Unicode. Languages available as of today are German (Germany/Austria/Switzerland) and English. Business model The business model of Helium V is based on the dual licensing model. The open-source edition comes with no warranty or professional support; such services are provided by Helium V IT Systems GmbH (Austria) for the enterprise edition. The developer does, however, provide an open platform to nurture open source engagement. Hence, community contributions are accepted and appreciated. The contributor has to sign a contribution agreement or provide the source code under a liberal open source license like the MIT license. Integration The Helium V development team is interested in a seamless integration with other (open source) components. The first official integration partner is agorum with the DMS system agorum core. Co-operations The developers of Helium V are working closely together with academic institutions in order to improve the quality of the software. The following universities are supporting the further development of Helium V at the moment: Technische Universität Darmstadt, Germany University of Applied Sciences Kufstein, Austria See also Agorum Core Lisog References External links Helium V Company Profile on LinkedIn (German) Helium V Company Profile on Xing (German) Official site for the Business Edition Community site for the Open Source Edition (German) Community site for the Open Source Edition (English) Lisog Homepage Free ERP software ERP software Free business software Enterprise resource planning software for Linux Software using the GNU AGPL license
2071878
https://en.wikipedia.org/wiki/H.323%20Gatekeeper
H.323 Gatekeeper
An H.323 Gatekeeper provides Call Admission Control and translation services from E.164 IDs (commonly a phone number) to IP addresses in an H.323 telephony network. Gatekeepers can be combined with a gateway function to proxy H.323 calls and are sometimes referred to as Session Border Controllers. A gatekeeper can also deny access or limit the number of simultaneous connections to prevent network congestion. H.323 endpoints are not required to register with a gatekeeper to place point-to-point calls, but a gatekeeper is essential for any serious H.323 network in order to control call prefix routing and link capacities, among other functions. A typical gatekeeper call flow for a successful call between Endpoint A (extension 1234) and Endpoint B (extension 1123) looks like this:
Endpoint A dials 1123 from the system.
Endpoint A sends an ARQ (Admission Request) to the Gatekeeper.
The Gatekeeper returns an ACF (Admission Confirmation) with the IP address of Endpoint B.
Endpoint A sends Q.931 call setup messages to Endpoint B.
Endpoint B sends the Gatekeeper an ARQ, asking if it can answer the call.
The Gatekeeper returns an ACF with the IP address of Endpoint A.
Endpoint B answers and sends Q.931 call setup messages to Endpoint A.
An IRR (Information Request Response) is sent to the Gatekeeper from both endpoints.
Either endpoint disconnects the call by sending a DRQ (Disengage Request) to the Gatekeeper.
The Gatekeeper sends a DCF (Disengage Confirmation) to both endpoints.
The gatekeeper allows calls to be placed either directly between endpoints (Direct Endpoint Model) or with the call signaling routed through the gatekeeper itself (Gatekeeper Routed Model). See also GNU Gatekeeper (GnuGK) Sources Cisco Technotes: Understanding H.323 Gatekeepers Microsoft TechNet: H.323 Gatekeeper Packetizer: A Primer on the H.323 Series Standard ITU-T recommendations Videotelephony
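The admission and teardown exchange described above can be sketched in code. The following toy model — hypothetical class and message names only, not an implementation of any real H.323 stack — walks two registered endpoints and a gatekeeper through the ARQ/ACF admission, Q.931 setup and DRQ/DCF teardown sequence:

    # Toy model of the H.323 RAS call flow described above; not a real H.323 stack.
    class Gatekeeper:
        def __init__(self):
            self.registry = {}                     # E.164 alias -> IP address

        def register(self, alias, ip):
            self.registry[alias] = ip              # RRQ/RCF registration, simplified

        def admission_request(self, callee_alias):
            # ARQ -> ACF with the resolved address, or ARJ if the alias is unknown
            if callee_alias in self.registry:
                return ("ACF", self.registry[callee_alias])
            return ("ARJ", None)

        def disengage_request(self):
            return "DCF"                           # DRQ -> DCF

    class Endpoint:
        def __init__(self, alias, ip, gk):
            self.alias, self.ip, self.gk = alias, ip, gk
            gk.register(alias, ip)

        def call(self, callee_alias):
            verdict, remote_ip = self.gk.admission_request(callee_alias)
            if verdict != "ACF":
                return "call rejected"
            return "Q.931 SETUP sent to " + remote_ip   # Direct Endpoint Model

    gk = Gatekeeper()
    a = Endpoint("1234", "192.0.2.10", gk)
    b = Endpoint("1123", "192.0.2.20", gk)
    print(a.call("1123"))              # -> Q.931 SETUP sent to 192.0.2.20
    print(gk.disengage_request())      # -> DCF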
2635003
https://en.wikipedia.org/wiki/Common%20Image%20Generator%20Interface
Common Image Generator Interface
The Common Image Generator Interface (CIGI) (pronounced sig-ee) is an on-the-wire data protocol that allows communication between an Image Generator and its host simulation. The interface is designed to promote a standard way for a host device to communicate with an image generator (IG) within the industry. CIGI enables plug-and-play by standard-compliant image generator vendors and reduces integration costs when upgrading visual systems. Background Most high-end simulators do not have everything running on a single machine in the way popular home flight simulator software is currently implemented. The airplane model is run on one machine, normally referred to as the host, and the out-the-window visuals or scene graph program is run on another, usually referred to as an Image Generator (IG). Frequently, multiple IGs are required to display the surrounding environment created by a host. CIGI is the interface between the host and the IGs. The main goal of CIGI is to capitalize on previous investments through the use of a common interface. CIGI is designed to assist suppliers and integrators of IG systems with ease of integration, code reuse, and overall cost reduction. In the past most image generators provided their own proprietary interface; every host had to implement that interface, making a change of image generator a costly ordeal. CIGI was created to standardize the interface between the host and the image generator so that little modification would be needed to switch image generators. The CIGI initiative was largely spearheaded by The Boeing Company during the early 21st century. The latest version of CIGI (CIGI 4.0) was developed by the Simulation Interoperability Standards Organization (SISO) in the form of SISO-STD-013-2014, Standard for Common Image Generator Interface (CIGI), Version 4.0, dated 22 August 2014. SISO-STD-013-2014 is freely available from SISO. Definitions Image generator – In this context an image generator consists of one or more rendering channels that produce an image that can be used to visualize an “Out-The-Window” scene, or images produced by various sensor simulations such as Infra-red, Day TV, Electro-Optical, and Night Vision. Host simulation – In this context a “Host” is the computational system that provides information about the device being simulated so that the image generator can portray the correct scenery to the user. This information is passed via CIGI to the image generator. Maturation CIGI 4 is the latest version of the standard, approved by the Simulation Interoperability Standards Organization on August 22, 2014. CIGI became an international SISO standard known as SISO-STD-013-2014, which contains the CIGI version 4.0 Interface Control Document (ICD). CIGI 4.0 is the official standard, published by SISO. Previous versions of CIGI, spearheaded by Boeing, include v3.3 (November 2008), v3.2 (April 2006), v3.1 (June 2004), v3 (November 2003), v2 (March 2002), and the original v1 (March 2001). Protocol dependencies UDP: Typically, CIGI uses UDP as its transport protocol, but CIGI does not require a specific transport mechanism, only packet definition conformance. CIGI traffic does not have a well-known port; however, the use of ports 8004-8005 has been widely adopted by commercial image generator vendors' implementations. Development Tools Host Emulator The Host Emulator can be used as a surrogate to manipulate the interface when a simulation Host is not available. 
It is a Windows-based image generator Host application used to develop, integrate and test image generators that use the CIGI protocol. It provides a graphical user interface (GUI) for the creation, modification and deletion of entities; manipulation of views; control of environmental attributes and phenomena; and other host functions. The Host Emulator has several features that are useful for integration and testing. A free-flight mode allows for fixed-wing and rotorcraft flight, movement along entity axes and free rotation using a joystick or a joystick-like widget. Scripting and record/playback features support regression testing, demonstrations and other tasks needing exact reproduction of certain sequences of events. A packet-level snoop feature allows the user to examine the contents of CIGI messages, image generator response times and latencies. A Heartbeat Monitor Window shows a graphical timing history of the Image Generator's data frame rate. Other features include explicit packet creation, animation control, missile flyouts and a situation display window (Host Emulator 3.x only). Multi-Purpose Viewer The Multi-Purpose Viewer (MPV) provides the basic functionality expected of an Image Generator, such as loading and displaying a terrain database, displaying entities and so forth. The Multi-Purpose Viewer can be used as a surrogate to manipulate the interface when a real Image Generator is not available. The MPV is capable of operating with both the Windows and Linux operating systems. CIGI Class Library The CCL is an object-oriented software interface that automatically handles message composition and decomposition (i.e. packing, unpacking and byte swapping to the ICD specification) on both the Host and Image Generator sides of the interface. The CCL interprets Host or Image Generator messages based on compile time parameters. It also performs error handling and translation between different versions of CIGI. Each packet type has its own class. The individual packet members are accessed through packet class accessors. Outgoing messages are constructed by placing each packet into the outgoing buffer using a streaming operator. Incoming messages are parsed using callback or event-based mechanisms that supply the using program with fully populated packet objects. Current tool suite A set of CIGI development tools are managed and maintained by the SISO CIGI Product Support Group. The latest packages are available on SourceForge. Comments/Suggestions to the package can be directed to the SISO discussion board at: https://discussions.sisostds.org/index.htm?A0=SAC-PSG-CIGI Wireshark Wireshark is a free and open source packet analyzer. It is used for network troubleshooting, analysis, software and communications protocol development, and education. Wireshark provides a dissector for CIGI packets. As of October 2016, “The CIGI dissector is fully functional for CIGI version 2 and 3. Version 1 is not yet implemented.” Older versions of CIGI A CIGI Interface Control Document (ICD) and development suite is available in open source format. The tools, ICD, and accompanying user documentation can be found and downloaded from the CIGI sourceforge web site. The SourceForge version of the MPV is limited in its support of CIGI data packets and is intended to grow as needs arise. The MPV uses CIGI 3 as its interface, but the MPV is backward-compatible with earlier CIGI versions through the use of the CCL. The MPV uses the Open Scene Graph library to render a scene. 
The scene graph is manipulated according to the CIGI commands received from the Host via the CCL. The MPV itself is an application layer consisting of a small kernel that leans heavily on a plug-in architecture for maintainability and flexibility. An implementer can implement the interface from scratch; however, a full suite of integration tools is available, consisting of three elements: the Host Emulator (HE), the Multi-Purpose Viewer (MPV), and the CIGI Class Library (CCL). References External links Simulation Interoperability Standards Organization website Sourceforge website AAcuity PC-IG (CIGI compliant Image Generator) Computer graphics
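As a rough illustration of the packet-oriented, transport-agnostic style that the CIGI Class Library description above implies — and not a reproduction of the real CCL API or of the actual CIGI packet layouts, both of which are defined by the SISO ICD — the following sketch packs a hypothetical "entity position" packet into bytes and sends it over UDP to the port range mentioned in the article:

    import socket
    import struct

    IG_ADDR = ("192.168.0.50", 8004)    # example image generator address; port per the article

    def pack_entity_position(entity_id, lat, lon, alt):
        # Hypothetical layout (id, size, entity, lat/lon/alt) purely for illustration;
        # the real CIGI packet formats are specified by the ICD, not by this sketch.
        packet_id = 4
        packet_size = 28                # 1 + 1 + 2 + 3*8 bytes
        return struct.pack("!BBH3d", packet_id, packet_size, entity_id, lat, lon, alt)

    def send_frame(packets):
        # CIGI does not mandate a transport; UDP is simply the typical choice.
        message = b"".join(packets)
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(message, IG_ADDR)

    send_frame([pack_entity_position(1, 37.615, -122.389, 1200.0)])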
67441727
https://en.wikipedia.org/wiki/Arthur%20M.%20Langer
Arthur M. Langer
Arthur M. Langer is an American academic whose work focuses on the effect of technology on organizational structure, behavior and workforce development. Langer is a Professor of Professional Practice at Columbia University, as well as the director of Columbia University's Center for Technology Management and Academic Director of the Master of Science program in Technology Management. Additionally, he is a faculty member in the Department of Organization and Leadership at Teachers College Graduate School of Education. In 2005, Langer founded Workforce Opportunity Services (WOS), a nonprofit organization that trains and places underserved and Veteran job seekers into long-term careers. Early life and education Arthur Langer was born to Eastern European immigrant parents in the Bronx, New York. He did not plan to go to college until he was mentored by a local businessman who offered him a scholarship on condition that he apply to the city's selective high schools. Langer attended The High School of Music and Art, followed by night school at Queens College, where he received his bachelor's degree in Computer Science. His MBA is from Iona College, and he earned his Doctorate in Education at Columbia University. Career Prior to joining the full-time faculty at Columbia, Langer was Executive Director of Computer Support Services at Coopers and Lybrand, General Manager and Partner of Software Plus, and President and founder of Macco Software. Langer teaches courses in information technology, human development, leadership, management, and higher education at Columbia University. He also consults for corporations and universities on these topics as well as on staff development, management transformation, and curriculum. In his teaching, Langer developed a “theory-to-practice-to-theory” (TPT) approach in order to help adult students engage in transformative learning. TPT underscores that adults learn best when they can relate theory to their experiences, and then revisit theory for its applicability to their personal situations. Students then determine through personal critical reflection whether and how to modify their existing belief systems. By consistently providing students with critical feedback and multiple opportunities to apply new concepts as well as revise their work, TPT helps them gradually examine where their new knowledge can be integrated with their existing ideologies. Notable publications Books Analysis and Design of Next-Generation Software Architectures (2020) Information Technology and Organizational Learning (3rd Edition, 2018), Guide to Software Development: Designing & Managing the Life Cycle (2nd Edition, 2016), Strategic IT: Best Practices for Managers and Executives (2013 with Lyle Yorks) Analysis and Design of Information Systems (2008), Applied Ecommerce (2002), The Art of Analysis (1997). Articles “Designing the digital organization” (with C. Snow and O. Feljstad in Journal of Organizational Design 2017), “Cyber security: The new business opportunity facing executives” (Cyber Security Review 2016), "Employing Young Talent from Underserved Populations: Designing a Flexible Organizational Process for Assimiliation and Productivity." in the Journal of Organization Design. Workforce Opportunity Services In 2001, as part of the Workplace Literacy Program, Langer launched The Inner-City Workplace Literacy Study at Columbia University. 
The study included more than 40 low-income adults from Harlem, and it investigated how to prepare them for roles in information technology given their lack of experience in this field. The project identified challenges they faced as they trained to compete in the job market and showed that in order to successfully integrate underrepresented talent into the workforce, programs must merge technical training with teaching interpersonal and self-esteem building skills. As he carried out the Literacy Study, Langer developed the Langer Workforce Maturity Arc (LWMA), a tool designed to measure the job readiness of adult learners from underserved communities. Inspired by his own coming of age in the Bronx, Langer founded Workforce Opportunity Services (WOS) in 2005 to provide mentoring and workforce training opportunities to people from underserved communities and underrepresented groups. Today Workforce Opportunity Services creates custom training programs – along with mentorship and material as well as social supports - that deliver a pipeline of early-career talent to employers from under-represented populations, including military Veterans and spouses. As of April 2021, WOS has served over 5,300 individuals through partnerships with more than 65 corporations in 60+ locations worldwide. References 1953 births Columbia University faculty Columbia University alumni 21st-century American non-fiction writers People from the Bronx Queens College, City University of New York alumni Living people
14702580
https://en.wikipedia.org/wiki/Vector%20Graphic
Vector Graphic
Vector Graphic was an early microcomputer company founded in 1976, the same year as Apple Computer, during the pre-IBM PC era, along with the NorthStar Horizon, IMSAI, and MITS Altair. History The first product was a memory card for the S-100 bus. A full microcomputer using the Z80 microprocessor, the Vector 1, was introduced in 1977. Several Vector Graphic models were produced. The Vector 1+ had a floppy disk drive. The Vector Graphic 3 had a fixed keyboard attached to a case that combined the screen terminal and CPU. The Vector Graphic 4 was a transitional 8-bit and 16-bit hybrid model. Although primarily used with the CP/M operating system, the Vector 3 ran several others, including OASIS, Micropolis Disk Operating System (MDOS), and Micropolis Z80 Operating System (MZOS). Early Vector Graphic models used the Micropolis floppy disk controller and Micropolis floppy disk drives. Later models were designed with an integrated floppy drive and hard drive controller and used Tandon floppy drives. Almost all used unusual 100-track-per-inch 5 ¼-inch floppy drives and 16-sector hard-sectored media. Some models included 8-inch floppy drives and hard disk drives. Vector Graphic sales peaked at $36 million in 1982, by which time the company was publicly traded. It faltered soon after due to several factors. The introduction of the IBM PC in August 1981 shifted the market, and smaller players lost momentum. The Vector 4 was accidentally pre-announced in April 1982, the same month that founder and chief hardware designer Robert Harp left the company after a dispute with co-founder (and wife) Lore Harp over control of the company. The early announcement of the Vector 4, which had a separate keyboard tethered to the computer (as opposed to a combined keyboard and terminal), resulted in a sharp decrease in sales of the Vector 3 as customers delayed purchases up to six months until the new product was available. In addition, the company had decided to use the CP/M operating system in the Vector 4, which it considered superior to MDOS; management recognized the nature of the gamble, as IBM would move the market in a different direction if it elected to use the DOS operating system for its competing product, the IBM PC. The gamble did not pay off, and by the end of 1984 Lore Harp was gone and venture capital investors took over. By summer 1985 only three dozen employees remained, down from a peak of 425 workers in 1982. Ultimately, the Vector Graphic headquarters and assembly factory, across from a 17-person company (Amgen) and next to the 101 freeway, was converted into a Home Depot store. Chapter 11 bankruptcy followed in December 1985. A sought-for merger partner was not found, and Chapter 7 liquidation of the remaining assets followed in October 1987. Vector Graphic computers had many innovations, such as the Flashwriter integrated video and keyboard controller. Vector Graphic was known for its Memorite word processing application. When combined with the Flashwriter, the Memorite software gave low-cost word processing capability, which had previously been available only with dedicated word processors. As of 2007, Vector Graphic still had a small but active user community. See also Corona Data Systems - founded in 1982 by Robert Harp References Further reading Vector Graphic S-100 Documentation and Tech Info. By Herbert R. 
Johnson Inspection of Tandon TM100 designs for 96TPI vs 100TPI operation, By Herb Johnson Micropolis/Vector Graphic S-100 FDC, S100 Computers External links old-computers.com computersgh.com retrotechnology.com www.classiccmp.org www.vintage-computer.com lagidesain.com American companies established in 1976 American companies disestablished in 1987 Computer companies established in 1976 Computer companies disestablished in 1987 Defunct computer companies of the United States Home computer hardware companies
5550368
https://en.wikipedia.org/wiki/Pixel%20aspect%20ratio
Pixel aspect ratio
Pixel aspect ratio (often abbreviated PAR) is a mathematical ratio that describes how the width of a pixel in a digital image compares to the height of that pixel. Most digital imaging systems display an image as a grid of tiny, square pixels. However, some imaging systems, especially those that must be compatible with standard-definition television motion pictures, display an image as a grid of rectangular pixels, in which the pixel width and height are different. Pixel aspect ratio describes this difference. Use of pixel aspect ratio mostly involves pictures pertaining to standard-definition television and some other exceptional cases. Most other imaging systems, including those that comply with SMPTE standards and practices, use square pixels. Introduction The ratio of the width to the height of an image is known as the aspect ratio, or more precisely the display aspect ratio (DAR) – the aspect ratio of the image as displayed; for TV, DAR was traditionally 4:3 (a.k.a. fullscreen), with 16:9 (a.k.a. widescreen) now the standard for HDTV. In digital images, there is a distinction with the storage aspect ratio (SAR), which is the ratio of pixel dimensions. If an image is displayed with square pixels, then these ratios agree; if not, then non-square, "rectangular" pixels are used, and these ratios disagree. The aspect ratio of the pixels themselves is known as the pixel aspect ratio (PAR) – for square pixels this is 1:1 – and these are related by the identity: SAR × PAR = DAR. Rearranging (solving for PAR) yields: PAR = DAR/SAR. For example, a 640 × 480 VGA image has a SAR of 640/480 = 4:3, and if displayed on a 4:3 display (DAR = 4:3) has square pixels, hence a PAR of 1:1. By contrast, a 720 × 576 D-1 PAL image has a SAR of 720/576 = 5:4, but is displayed on a 4:3 display (DAR = 4:3). In analog images such as film there is no notion of pixel, nor notion of SAR or PAR, but in the digitization of analog images the resulting digital image has pixels, hence SAR (and accordingly PAR, if displayed at the same aspect ratio as the original). Non-square pixels arise often in early digital TV standards, related to digitalization of analog TV signals – whose vertical and "effective" horizontal resolutions differ and are thus best described by non-square pixels – and also in some digital video cameras and computer display modes, such as Color Graphics Adapter (CGA). Today they arise also in transcoding between resolutions with different SARs. Actual displays do not generally have non-square pixels, though digital sensors might; they are rather a mathematical abstraction used in resampling images to convert between resolutions. There are several complicating factors in understanding PAR, particularly as it pertains to digitization of analog video: First, analog video does not have pixels, but rather a raster scan, and thus has a well-defined vertical resolution (the lines of the raster), but not a well-defined horizontal resolution, since each line is an analog signal. However, by a standardized sampling rate, the effective horizontal resolution can be determined by the sampling theorem, as is done below. Second, due to overscan, some of the lines at the top and bottom of the raster are not visible, as are some of the possible image on the left and right – see Overscan: Analog to digital resolution issues. Also, the resolution may be rounded (DV NTSC uses 480 lines, rather than the 486 that are possible). 
Third, analog video signals are interlaced – each image (frame) is sent as two "fields", each with half the lines. Thus either the pixels are twice as tall as they would be without interlacing, or the image is deinterlaced. Background Video is presented as a sequential series of images called video frames. Historically, video frames were created and recorded in analog form. Because digital display technology, digital broadcast technology, and digital video compression evolved separately, the result was video frame differences that must be addressed using pixel aspect ratio. Digital video frames are generally defined as a grid of pixels used to present each sequential image. The horizontal component is defined by pixels (or samples), and is known as a video line. The vertical component is defined by the number of lines, as in 480 lines. Standard-definition television standards and practices were developed as broadcast technologies intended for terrestrial broadcasting, and were therefore not designed for digital video presentation. Such standards define an image as an array of well-defined horizontal "Lines", a well-defined vertical "Line Duration" and a well-defined picture center. However, no standard-definition television standard properly defines image edges or explicitly demands a certain number of picture elements per line. Furthermore, analog video systems such as NTSC 480i and PAL 576i, instead of employing progressively displayed frames, employ fields, or interlaced half-frames, displayed in an interwoven manner to reduce flicker and double the image rate for smoother motion. Analog-to-digital conversion As computers became powerful enough to serve as video editing tools, video digital-to-analog converters and analog-to-digital converters were made to overcome this incompatibility. To convert analog video lines into a series of square pixels, the industry adopted a default sampling rate at which luma values were extracted into pixels. The luma sampling rate for 480i pictures was 12 3/11 MHz (approximately 12.27 MHz) and for 576i pictures was 14.75 MHz. The term pixel aspect ratio was first coined when ITU-R BT.601 (commonly known as "Rec. 601") specified that standard-definition television pictures are made of lines of exactly 720 non-square pixels. ITU-R BT.601 did not define the exact pixel aspect ratio but did provide enough information to calculate it based on industry practices: the standard luma sampling rate of precisely 13.5 MHz. Based on this information: The pixel aspect ratio for 480i would be 10:11, since (12 3/11 MHz) / (13.5 MHz) = 10/11. The pixel aspect ratio for 576i would be 59:54, since (14.75 MHz) / (13.5 MHz) = 59/54. SMPTE RP 187 further attempted to standardize the pixel aspect ratio values for 480i and 576i. It designated 177:160 for 480i and 1035:1132 for 576i. However, due to significant differences from the practices in effect in the industry and the computational load they imposed upon the involved hardware, SMPTE RP 187 was simply ignored. SMPTE RP 187 information annex A.4 further suggested the use of 10:11 for 480i. As of this writing, ITU-R BT.601-6, which is the latest edition of ITU-R BT.601, still implies that the pixel aspect ratios mentioned above are correct. 
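The ratios above can be checked mechanically with the identity DAR = SAR × PAR. A short script using Python's fractions module — illustrative only, with the sampling rates taken from the text above — reproduces the 10:11 and 59:54 values and confirms that a 704 × 480 active picture with 10:11 pixels displays at exactly 4:3:

    from fractions import Fraction

    REC601_RATE = Fraction(27, 2)              # 13.5 MHz Rec. 601 luma sampling rate
    SQUARE_RATE = {"480i": Fraction(135, 11),  # 12 3/11 MHz square-pixel sampling rate
                   "576i": Fraction(59, 4)}    # 14.75 MHz square-pixel sampling rate

    for system, square in SQUARE_RATE.items():
        print(system, "PAR =", square / REC601_RATE)   # -> 10/11 and 59/54

    # DAR = SAR * PAR for a 704x480 active picture with 10:11 pixels:
    print(Fraction(704, 480) * Fraction(10, 11))       # -> 4/3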
A simple mathematical calculation reveals that a 704-pixel width would be enough to contain a 480i or 576i standard 4:3 picture: A 4:3 480-line picture, digitized with the Rec. 601-recommended sampling rate, would be 704 non-square pixels wide. A 4:3 576-line picture, digitized with the Rec. 601-recommended sampling rate, would be 702.915254 non-square pixels wide. Unfortunately, not all standard TV pictures are exactly 4:3: As mentioned earlier, in analog video the center of a picture is well defined but the edges of the picture are not standardized. As a result, some analog devices (mostly PAL devices but also some NTSC devices) generated motion pictures that were horizontally (slightly) wider. This also applies proportionately to anamorphic widescreen (16:9) pictures. Therefore, to maintain a safe margin of error, ITU-R BT.601 required sampling 16 more non-square pixels per line (8 more at each edge) to ensure saving all video data near the margins. This requirement, however, had implications for PAL motion pictures. The PAL pixel aspect ratios for standard (4:3) and anamorphic widescreen (16:9) pictures, respectively 59:54 and 118:81, were awkward for digital image processing, especially for mixing PAL and NTSC video clips. Therefore, video editing products chose the almost equivalent values 12:11 and 16:11 respectively, which were more elegant and could create PAL digital images at exactly 704 pixels wide, as illustrated: For PAL 4:3: 576 × (4/3) ÷ (12/11) = 704 pixels per line. For PAL 16:9: 576 × (16/9) ÷ (16/11) = 704 pixels per line. Inconsistency in defined pixel aspect ratio values Numerous sources found on the Internet and in various other published media introduce different and highly incompatible values as the pixel aspect ratios of various video pictures and video systems. (See the Supplementary sources section.) To judge the accuracy and feasibility of these sources, note that because digital motion pictures were invented years after traditional motion pictures, all video pictures targeted at standard-definition television and compatible media, digital or otherwise, have (and must have) specifications compatible with standard-definition television. Therefore, the pixel aspect ratio of digital video must be calculated from the specification of common traditional equipment rather than from the specifications of digital video. Otherwise, any pixel aspect ratio calculated from a digital video source is only usable in certain cases for the same kind of video sources and cannot be considered or used as a general pixel aspect ratio of any standard-definition television system. In addition, unlike digital video, which has well-defined picture edges, traditional video systems have never standardized a well-defined edge for the picture. Therefore, the pixel aspect ratio of common standard television systems cannot be calculated from the edges of pictures. Such a calculated aspect ratio value would not be entirely wrong, but it cannot be considered the general pixel aspect ratio of any specific video system; the use of such values would be restricted to certain cases only. Modern standards and practices In modern digital imaging systems and high-definition televisions, especially those that comply with SMPTE standards and practices, only square pixels are used for broadcast and display. However, some formats (e.g., HDV, DVCPRO HD) use non-square pixels internally for image storage, as a way to reduce the amount of data that must be processed, thus limiting the necessary transfer rates and maintaining compatibility with existing interfaces. 
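The width calculations in this section follow directly from the relationship between display aspect ratio, line count and pixel aspect ratio, and the same relationship governs how video software resamples a non-square-pixel frame for display on square pixels. The helper below is an illustrative sketch only; the function names are made up and are not part of any standard or product:

    from fractions import Fraction

    def stored_width(lines, dar, par):
        # Stored pixels per line for a picture with `lines` lines,
        # display aspect ratio `dar` and pixel aspect ratio `par`.
        return lines * dar / par

    def square_pixel_size(width, height, par):
        # Size to resample to so the frame looks correct on square pixels
        # (here the width is scaled; the height could be scaled instead).
        return (round(width * par), height)

    print(stored_width(576, Fraction(4, 3), Fraction(12, 11)))    # -> 704
    print(stored_width(576, Fraction(16, 9), Fraction(16, 11)))   # -> 704
    print(square_pixel_size(704, 576, Fraction(12, 11)))          # -> (768, 576)
    print(square_pixel_size(704, 576, Fraction(16, 11)))          # -> (1024, 576)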
Issues of non-square pixels Directly mapping an image with a certain pixel aspect ratio onto a device whose pixel aspect ratio is different makes the image look unnaturally stretched or squashed in either the horizontal or vertical direction. For example, a circle generated for a computer display with square pixels looks like a vertical ellipse on a standard-definition NTSC television that uses vertically rectangular pixels. This issue is more evident on wide-screen TVs. Pixel aspect ratio must be taken into consideration by video editing software products that edit video files with non-square pixels, especially when mixing video clips with different pixel aspect ratios. This would be the case when creating a video montage from various cameras employing different video standards (a relatively rare situation). Special effects software products must also take the pixel aspect ratio into consideration, since some special effects require calculation of the distances from a certain point so that they look visually correct. Examples of such effects are radial blur, motion blur, or even a simple image rotation. Use of pixel aspect ratio The pixel aspect ratio value is used mainly in digital video software, where motion pictures must be converted or reconditioned for video systems other than the original. Video player software may use pixel aspect ratio to properly render digital video on screen. Video editing software uses pixel aspect ratio to properly scale and render a video into a new format. Pixel aspect ratio support is also required to display, without distortion, legacy digital images from the computer standards and video games of the 1980s. In that generation, square pixels were too expensive to produce, so machines and video cards such as the SNES, CGA, EGA, Hercules, C64, MSX, PC-88 and X68000 had non-square pixels. Confusion with display aspect ratio Pixel aspect ratio is often confused with other types of image aspect ratio: the ratio of the image width to its height. Due to the non-squareness of pixels in standard-definition TV, there are two types of such aspect ratios: storage aspect ratio (SAR) and display aspect ratio (abbreviated DAR, also known as image aspect ratio and picture aspect ratio; note that "picture aspect ratio" reuses the abbreviation PAR). This article uses only the terms pixel aspect ratio and display aspect ratio to avoid ambiguity. Storage aspect ratio is the ratio of the image width to height in pixels, and can be easily calculated from the video file. Display aspect ratio is the ratio of image width to height (in a unit of length such as centimeters or inches) when displayed on screen, and is calculated from the combination of pixel aspect ratio and storage aspect ratio. However, even users who know the definitions of these concepts may get confused, and poorly crafted user interfaces or poorly written documentation can easily cause such confusion: some video-editing applications ask users to specify an "aspect ratio" for their video file, presenting them with the choices of "4:3" and "16:9". Sometimes these choices may be "PAL 4:3", "NTSC 4:3", "PAL 16:9" and "NTSC 16:9". In such situations, the video editing program is implicitly asking for the pixel aspect ratio of the video file by asking for information about the video system from which the video file originated. The program then uses a table (similar to the one below) to determine the correct pixel aspect ratio value. 
Generally speaking, to avoid confusion, it can be assumed that video editing products never ask for the storage aspect ratio as they can directly retrieve or calculate it. Non-square-pixel–aware applications also need only to ask for either pixel aspect ratio or display aspect ratio, from either of which they can calculate the other. Pixel aspect ratios of common video formats Pixel aspect ratio values for common standard-definition video formats are listed below. Note that for PAL video formats, two different types of pixel aspect ratio values are listed: Rec.601, a Rec.601-compliant value, which is considered the real pixel aspect ratio of standard-definition video of that type. Digital, which is roughly equivalent to Rec.601 and is more suitable to use in Digital Video Editing software. Note that sources differ on PARs for common formats – for example, 576 lines (PAL) displayed at 4:3 (DAR) corresponds to either PAR of 12:11 (if 704×576, SAR = 11:9), or a PAR of 16:15 (if 720×576, SAR = 5:4). See references for sources giving both, and SDTV: Resolution for a table of storage, display and pixel aspect ratios. Also note that CRT televisions do not have pixels, but scanlines. References Main sources As of the retrieval date, a free membership of ITU Online Bookstore would allow free download of up to three ITU-R Recommendations. This standard, which is the basis for HDMI, specifies 16:15 (1.0666) as the pixel aspect ratio of 4:3 576i/p and 8:9 (0.888) as the pixel aspect ratio of 4:3 480i/p. Supplementary sources A PDF version of Adobe Premiere Pro CS4 Documentations is also available from Adobe web site. This source specifies 12:11 (1.09) as the pixel aspect ratio of 576i. A PDF version of Adobe After Effects CS4 Documentations is also available from Adobe web site. This source specifies 12:11 (1.09) as the pixel aspect ratio of 576i. This source calculates different pixel aspect ratio values for 480i and 576i pictures. An in depth analysis on the discrepancies of the pixel aspect ratios provided in various specifications. Creative Commons Attribution, Noncommercial-Share Alike 3.0 Germany (CC by-sa) English translation: Notes External links The Pixel Aspect Ratio Acid Test Pixel Calculator Engineering ratios Digital television Film and video technology Computer graphics data structures Image processing Digital geometry Digital imaging
4488561
https://en.wikipedia.org/wiki/Ilija%20Trojanow
Ilija Trojanow
Ilija Trojanow (Bulgarian: Илия Троянов, also transliterated as Ilya Troyanov; born 23 August 1965 in Sofia) is a Bulgarian–German writer, translator and publisher. Life and literary career Trojanow was born in Sofia, Bulgaria in 1965. In 1971 his family fled Bulgaria through Yugoslavia and Italy to Germany, where they received political asylum. In 1972 the family travelled on to Kenya, where Ilija's father had obtained a job as engineer. With one interruption from 1977–1981, Ilija Trojanow lived in Nairobi until 1984, and attended the German School Nairobi. After a stay in Paris, he studied law and ethnology at Munich University from 1985 to 1989. He interrupted these studies to found Kyrill-und-Method-Verlag in 1989, and after that Marino-Verlag in 1992, both of which specialised in African literature. In 1999 Trojanow moved to Mumbai and became intensely involved with Indian life and culture. He has lived in Cape Town, returned to Germany (Mainz), and then to Austria, where he currently resides in Vienna. In the 1990s Trojanow wrote several non-fiction and travel books about Africa, published an anthology of contemporary African literature and translated African authors into German. His first novel, "Die Welt ist groß und Rettung lauert überall", appeared in 1996. In it he recounts his family's experiences as political refugees and asylum seekers. After that appeared the science fiction novel "Autopol", created on the Internet as a "novel in progress," "Hundezeiten", a travel account of a visit to his Bulgarian homeland, and books dealing with his experiences in India. His reportage "Zu den heiligen Quellen des Islam" describes a pilgrimage to Mecca. Since 2002 Ilija Trojanow has been member of the PEN centre of the Federal Republic of Germany. Among other awards he received the Bertelsmann Literature Prize at the Ingeborg Bachmann competition in Klagenfurt in 1995, the Marburg Literature Prize in 1996, the Thomas Valentin Prize in 1997, the Adelbert von Chamisso Prize in 2000 and the Leipzig Book Fair Prize in the category of fiction for his novel "Der Weltensammler". Published in English as The Collector of Worlds in 2006, this novel was inspired by the biography and travel writings of British colonial officer Richard Francis Burton, some of whose travels Trojanow followed to places in present-day India, Saudi-Arabia or Tanzania. In 2014, Trojanow participated in the writer in residence programme of the one world foundation in Sri Lanka. Miscellaneous In 2013, Trojanow, who has also written on freedom of expression and surveillance of citizens by government agencies in Germany, had criticized the National Security Agency (NSA). In the same year, he was denied entry into the USA for undisclosed reasons. He planned to attend an academic conference. Upon intervention by representatives of the P.E.N. and the German cultural institution Goethe-Institut, he could finally travel to New York at the end of 2013. Works In Afrika, Munich, 1993 (with Michael Martin) Naturwunder Ostafrika, Munich, 1994 (with Michael Martin) Hüter der Sonne (Custodians of the Sun), Munich, 1996 (with Chenjerai Hove) Kenia mit Nordtansania, Munich, 1996 Die Welt ist groß und Rettung lauert überall, Munich, 1996 Autopol, Munich, 1997 Zimbabwe, Munich, 1998 Hundezeiten, Munich, 1999 Der Sadhu an der Teufelswand, Munich, 2001 An den inneren Ufern Indiens (Along the Ganges), Munich, 2003 Zu den heiligen Quellen des Islam (Mumbai to Mecca), Munich, 2004 Der Weltensammler (The Collector of Worlds), Munich, 2006 Indien. 
Land des kleinen Glücks, Cadolzburg, 2006 Gebrauchsanweisung für Indien, Munich, 2006 Die fingierte Revolution. Bulgarien, eine exemplarische Geschichte, Munich, 2006 Nomade auf vier Kontinenten, Frankfurt, 2007 Kampfabsage. Kulturen bekämpfen sich nicht – sie fließen zusammen, Munich, 2007 (with Ranjit Hoskote) Der entfesselte Globus, Munich, 2008 Sehnsucht, Freiburg, 2008 (edited by Fatma Sagir) Kumbh Mela. Das größte Fest der Welt, München 2008 (photographs by Thomas Dorn) Angriff auf die Freiheit. Sicherheitswahn, Überwachungsstaat und der Abbau bürgerlicher Rechte, Munich 2009 (with Juli Zeh) EisTau, Munich, 2011 (novel) Die Versuchungen der Fremde: Unterwegs in Arabien, Indien und Afrika, Munich, 2011 Confluences: Forgotten Histories From East And West (co-authored with Ranjit Hoskote), New Delhi, Yoda Press 2012 Der überflüssige Mensch (), Salzburg, 2013 Macht und Widerstand, Frankfurt am Main 2015 Meine Olympiade, Frankfurt am Main, 2016 English translations Custodians of the Sun Along the Ganges, translation by Ranjit Hoskote, Penguin Books India & Haus Publishing, 2005 Mumbai to Mecca, London, 2007, Haus Publishing The Collector of Worlds, London, 2008 The Lamentations of Zeno, translation by Philip Boehm of Eis Tau, Verso Books, New York, 2016 Publishing Afrikanissimo, Wuppertal, 1991 (with Peter Ripken) Das Huhn das schreit gehört dem Fremden (The screaming Chicken belongs to the Stranger), Munich, 1998 Döner in Walhalla (Doner in Valhalla), Cologne, 2000 Die Welt des Ryszard Kapuściński. Seine besten Geschichten und Reportagen, Frankfurt, 2007 Egon Erwin Kisch. Die schönsten Geschichten und Reportagen, Berlin, 2008 Translations into German Sobornost. Kirche, Bibel, Tradition (Bible, church, tradition) by Georgij V. Florovskij, München 1989 Der Berg am Rande des Himmels (The Mountain on the Edge of the Sky) by Timothy Wangusa, Munich, 1989 Knochen (Bones) by Chenjerai Hove, Munich, 1990 Der Preis der Freiheit (The Price of Freedom) by Tsitsi Dangarembga, Reinbek bei Hamburg, 1991 Buckingham Palace (Buckingham Palace, district six) by Richard Rive, München 1994 Die Sklaverei der Gewürze by Shafi Adam Shafi, München 1997 Der letzte Ausweis (Identity card) by F. M. Esfandiary, Frankfurt, 2009 Film adaptations The World is Big and Salvation Lurks Around the Corner, 2007, directed by Stefan Komandarev with Miki Manojlovic as Bai Dan and Carlo Ljubek as Alexander References External links Literature by and about Ilija Trojanow (in German) in the catalogue of the DDB, or German National Library Biography, Lettre Ulysses Award jury member (in English) The collector of worlds, on Trojanov's novel of the same name, at signandsight.com Ilija Trojanow on F.M. Esfandiary: Searching for Identity in Iran's Labyrinthine Bureaucracy 1965 births Living people Bulgarian emigrants to Germany Translators to German Writers from Sofia Ludwig Maximilian University of Munich alumni Bulgarian translators People denied entry to the United States German male writers
52589974
https://en.wikipedia.org/wiki/Parallax%20%28video%20game%29
Parallax (video game)
Parallax is a shoot 'em up video game developed by British company Sensible Software for the Commodore 64. It was released in 1986 by Ocean Software in Europe and Mindscape in North America. The game was named after its primary graphical feature, parallax scrolling, which gives the illusion of depth to side-scrolling video games. On release, reviews praised the game's mix of traditional side-scrolling action and adventure game-inspired puzzles. Gameplay On a routine exploratory mission, five astronauts discover a friendly-seeming planet run by an artificial intelligence. The inhabitants drop their pretense of friendship after the astronauts uncover a plan to invade Earth. Four of the astronauts are captured, and the player takes control of the fifth, who must free his companions and stop the invasion. Gameplay is split between two modes. The main part of the game is a side-scrolling shoot 'em up aboard a spaceship. The player scores points by destroying enemy ships and turrets. At hangars, the player can land and exit the spaceship. In this action-adventure mode, the player drugs enemy scientists and retrieves keycards to unlock the password to advance to the next of five zones (attempting to leave the zone without disabling the system results in instant death). The first scientist drugged in each zone also counts as a rescued astronaut. Once the password is unlocked in the fifth zone, the computer controlling the invasion shuts down, and the player wins after a final escape. Development Parallax was Sensible Software's first game. It was designed after signing an agreement with publisher Ocean Software; Ocean was the first publisher Sensible approached. The founders, programmer Chris Yates and artist Jon Hare were 19 years old at the time. Yates came up with the initial concept of a shoot 'em up game where players could fly above and below platforms. Hare designed the levels and graphics, and Yates added additional effects, such as sliding walls. The puzzle elements were planned to be more complex, but the Commodore 64's limited memory did not allow it. The ending of the game, which simply outputs "System Off", was all they could fit in the remaining memory. Programming the game took six months, and it was released in October 1986. The game's score was inspired by Jean-Michel Jarre's album Rendez-Vous, which composer Martin Galway had been listening to during development. Reception Contemporary reviews were positive and highlighted Parallaxs combination of shoot 'em up action and adventure-inspired puzzles. Zzap!64 rated the game 93/100 and called it "a neat mix between shoot em up and an arcade adventure, with a few other things thrown in for good measure". Lee Noel, Jr. of Compute!'s Gazette wrote that the game has "excellent graphics" and simulates depth and perspective well. Describing the gameplay, Noel said it initially seems like "just basic components of a fairly good shoot 'em up" but later incorporates elements of adventure games, though stripped of their characterization and complex interactions. Noel concluded, "Although it's not particularly deep or complex, Parallax and its arcadelike graphics present an entertaining and incredibly challenging puzzle." In a 1988 roundup of space combat games, David W. Wilson of Computer Gaming World praised the mix of genres, calling it the game's "cleverest aspect". The Australian Commodore and Amiga Review, in a 1990 roundup of shoot 'em up games, wrote that Parallax "hasn't mellowed with age and still impresses as much as it did then". 
The review called it a "special blend of strategy and action", rating it 90/100. Eurogamer's retrospective review from 2006 rated it 7/10 stars and said that it has "intriguing gameplay variety" and "neat parallax effects", though it is not the most technically advanced Commodore 64 game. The reviewer, Dan Pearson, criticized the game's ending, writing that it must have seemed anticlimactic to anyone who won it. Author Roberto Dillon wrote that the game's chiptune score differentiated it from other games and has become popular with retrogamers. Pearson called the main theme "truly demented but utterly mesmerising". References External links 1986 video games Commodore 64 games Commodore 64-only games Horizontally scrolling shooters Ocean Software games Sensible Software Video games scored by Martin Galway Video games developed in the United Kingdom Mindscape games
60259563
https://en.wikipedia.org/wiki/Shumin%20Zhai
Shumin Zhai
Shumin Zhai (simplified Chinese: 翟树民) (born 1961) is an American-Canadian-Chinese human-computer interaction (HCI) research scientist and inventor. He is known for his research specifically on input devices and interaction methods, swipe-gesture-based touchscreen keyboards, eye-tracking interfaces, and models of human performance in human-computer interaction. His studies have contributed both to foundational models and understandings of HCI and to practical user interface designs and flagship products. He previously worked at IBM, where he invented the ShapeWriter text entry method for smartphones, a predecessor to the modern Swype keyboard. Dr. Zhai's publications have won the ACM UIST Lasting Impact Award and the IEEE Computer Society Best Paper Award, among others. Dr. Zhai is currently a Principal Scientist at Google, where he leads and directs research, design, and development of human-device input methods and haptics systems. Education Born in Harbin, China, in 1961, Dr. Zhai received his bachelor's degree in Electrical Engineering in 1982 and his master's degree in Computer Science in 1984 from Xidian University. After that, he served on the faculty of the Northwest Institute of Telecommunication Engineering (now Xidian University) in Xi'an, China, where he taught and conducted research in computer control systems until 1989. In 1995, he received his PhD degree in Human Factors Engineering at the University of Toronto. Career From 2001 to 2007, Dr. Zhai was a visiting adjunct professor in the Department of Computer and Information Science (IDA) at Linköping University, where he also supervised graduate research. He was a consultant at Autodesk in 1995 before joining IBM Almaden Research Center in 1996. From 1996 to 2011, he worked at the IBM Almaden Research Center. In January 2007, he originated and led the SHARK/ShapeWriter project at IBM Research and a start-up company that pioneered the touchscreen word-gesture keyboard paradigm, filing the first patents of this paradigm and publishing the first generation of scientific papers. In 2010, ShapeWriter was acquired by Nuance Communications and taken off the market. During his tenure at IBM, Dr. Zhai also worked with a team of engineers from IBM and IBM vendors to bring the ScrollPoint mouse from research to market, where it received a CES award and millions of users. From 2009 to 2015, Dr. Zhai was also the editor-in-chief of the ACM Transactions on Computer-Human Interaction. At the time he had been deeply involved in both the conference side and the journal side of publishing HCI research as an author, reviewer, editor, committee member, and papers chair. Since 2011, Dr. Zhai has been working at Google as a Principal Scientist, where he leads and directs research, design, and development of human-device input methods and haptics systems. Specifically, he has led research and design of Google's keyboard products, Pixel phone haptics, and novel Google Assistant invocation methods. Notably, Dr. Zhai led the design of Active Edge, a headline feature of the Google Pixel 2, which enables the user to reach Google Assistant faster and more intuitively using a gentle device squeeze rather than the touch screen. Work Dr. 
Zhai researches primarily in human-computer interaction, and is currently working on the research, design and development of manual and text input methods and haptics systems. Besides text input and haptics, his other research interests include system user interface design, human-performance modeling, multi-modal interaction, computer input devices and methods, and theories of human-computer interaction. He has published over 200 research papers and received 30 patents. Word-gesture keyboard In 2003, Dr. Zhai and Per Ola Kristensson proposed a method of speed-writing for pen-based computing, SHARK (shorthand aided rapid keyboarding), which augments stylus keyboarding with shorthand gesturing. SHARK defines a shorthand symbol for each word according to its movement pattern on an optimized stylus keyboard. In 2004, they presented SHARK2 that increased recognition accuracy and relaxed precision requirements by using the shape and location of gestures in addition to context based language models. In doing so, Dr. Zhai and Kristensson delivered a paradigm of touch screen gesture typing as an efficient method for text entry that has continued to drive the development of mobile text entry across the industry. One of the most important rationales of gesture keyboards is facilitating transition from primarily visual-guidance drive letter-to-letter tracing to memory-recall driven gesturing. By releasing the first word-gesture keyboard in 2004 through IBM AlphaWorks and a top ranked iPhone app called ShapeWriter WritingPad in 2008, Dr. Zhai and his colleagues were able to facilitate this transition and brought the invention from the laboratory to real world users. Laws and models of action One of Dr. Zhai's main HCI research threads is Fitts’ law type of human performance models. From 1996, Dr. Zhai, alongside his colleagues, has pursued research on “Laws of Action” that attempted to carry the spirit of Fitts' law forward. In the HCI context, Fitts' law can be considered the “Law of Pointing”, while they believe there are other robust human performance regularities in action. The two new classes of action relevant to user interface design and evaluation that they have explored are crossing and steering. “Law of Pointing”: Refining Fitts’ law models for bivariate pointing, 2003 “Law of Steering”: Human Action Laws in Electronic Virtual Worlds - an empirical study of path steering performance in VR, 2004 “Law of Crossing”: Foundations for designing and evaluating user interfaces based on the crossing paradigm, 2010 Modeling human performance of pen stroke gestures, 2007 FFitts' law: modeling finger touch with Fitts' law, 2013 Modeling Gesture-Typing Movements, 2018 Manipulation and navigation in 3D interfaces Dr. Zhai started working on multiple degrees of freedom (DOF) input during his graduate years at the University of Toronto. In his Ph.D. thesis, he systematically examined human performance as a function of design variations of a 6 DOF control device, such as control resistance (isometric, elastic, and isotonic), transfer function (position vs. rate control), muscle groups used, and display format. He investigated people's ability to coordinate multiple degrees of freedom, based on three ways of quantification: simultaneous time-on-target, error correlation, and efficiency. Eye-tracking augmented user interfaces Dr. Zhai has been involved in two applications about eye-tracking augmented user interfaces, MAGIC pointing and RealTourist. 
Eye-tracking augmented user interfaces Dr. Zhai has been involved in two applications that use eye tracking to augment user interfaces, MAGIC pointing and RealTourist. In 1999, he worked together with his colleagues Carlos Morimoto and Steven Ihde at the IBM Almaden Research Center and published the paper "Manual and gaze input cascaded (MAGIC) pointing". This work explored a new direction in utilizing eye gaze for computer input, showing that the MAGIC pointing techniques might offer many advantages, including less physical effort and fatigue than traditional manual pointing, greater accuracy and naturalness than traditional gaze pointing, and possibly faster speed than manual pointing. In 2005, he developed and studied an experimental system, RealTourist, with Pernilla Qvarfordt and David Beymer. RealTourist lets a user plan a conference trip with the help of a remote tourist consultant who can view the tourist's eye gaze superimposed onto a shared map. Data collected from the experiment were analyzed in conjunction with a literature review on speech and eye-gaze patterns. This exploratory research identified various functions of gaze overlay on shared spatial material, including: accurate and direct display of the partner's eye gaze, implicit deictic referencing, interest detection, common focus and topic switching, increased redundancy and ambiguity reduction, and an increase of assurance, confidence, and understanding. The study identified patterns that can serve as a basis for designing multimodal human-computer dialogue systems with eye-gaze locus as a contributing channel, and investigated how computer-mediated communication can be supported by the display of the partner's eye gaze. FonePal FonePal is a system developed to improve the experience of accessing call centers or help desks. Known as "touchtone hell", voice menu navigation has long been recognized as a frustrating user experience due to the nature of voice presentation. In contrast, FonePal allows a user to scan and select from a visual menu at the user's own pace, typically much faster than waiting for the voice menus to be spoken. FonePal uses the Internet infrastructure, specifically instant messaging, to deliver a visual menu on a nearby computer screen simultaneously with the voice menu over the phone. In 2005 and 2006, Dr. Zhai and his colleague Min Yin at the IBM Almaden Research Center published two papers about this project. Their studies show that FonePal enables easier navigation of IVR phone trees, higher navigation speed, fewer routing errors, and greater satisfaction. FonePal can also seamlessly bridge the caller to a searchable web knowledge base, promoting relevant self-help and reducing call center operating costs. Awards and honors Dr. Zhai is a Fellow of the Association for Computing Machinery (ACM) and a Member of the CHI Academy. He has received many awards and honors, among them: IEEE Computer Society Best Paper Award; one of ACM's inaugural class of Distinguished Scientists (2006); Member of the CHI Academy (2010); Fellow of the ACM (2010); ACM UIST Lasting Impact Award (2014). References External links Shumin Zhai's page on Google Google employees Fellows of the Association for Computing Machinery Living people 1961 births
14089329
https://en.wikipedia.org/wiki/Open%20Handset%20Alliance
Open Handset Alliance
The Open Handset Alliance (OHA) is a consortium of 84 firms to develop open standards for mobile devices. Member firms include HTC, Sony, Dell, Intel, Motorola, Qualcomm, Texas Instruments, Google, Samsung Electronics, LG Electronics, T-Mobile, Sprint Corporation (now merged with T-Mobile US), Nvidia, and Wind River Systems. The OHA was established on 5 November 2007, led by Google with 34 members, including mobile handset makers, application developers, some mobile network operators and chip makers. Android, the flagship software of the alliance (first developed by Google in 2007), is based on an open-source license and has competed against mobile platforms from Apple (iOS), Microsoft (Windows Phone), Nokia (Symbian), HP (formerly Palm), Samsung Electronics / Intel (Tizen, bada), and BlackBerry (BlackBerry OS). As part of its efforts to promote a unified Android platform, OHA members are contractually forbidden from producing devices that are based on competing forks of Android. Products At the same time as the announcement of the formation of the Open Handset Alliance on November 5, 2007, the OHA also unveiled Android, an open-source mobile phone platform based on the Linux kernel. An early look at the SDK was released to developers on 12 November 2007. The first commercially available phone running Android was the HTC Dream (also known as the T-Mobile G1). It was approved by the Federal Communications Commission (FCC) on 18 August 2008, and became available on 22 October of that year. Members The members of the Open Handset Alliance are: See also Google Nexus Symbian Foundation LiMo Foundation Open Mobile Alliance Automotive Grade Linux References Citations Sources Google enters the wireless world Google's wireless initiatives go beyond Android External links Open Handset Alliance official website Automotive Grade Linux official website 2007 establishments in California Android (operating system) Business organizations based in the United States Mobile technology Mobile telecommunications standards Open standards Organizations based in Santa Clara County, California Organizations established in 2007 Technology consortia Telecommunications organizations Mountain View, California
1298881
https://en.wikipedia.org/wiki/Jason%20Rohrer
Jason Rohrer
Jason Rohrer (born November 14, 1977) is an American computer programmer, writer, musician, and game designer. He releases most of his software into the public domain and charges for versions of his games distributed on commercial platforms, such as the iPhone App Store or Steam. He is a graduate of Cornell University. From 2004 until 2011 he practiced simple living, stating in 2009 that his family of four had an annual budget of less than $14,500. They have since relocated from Las Cruces, New Mexico to Davis, California. In 2005 Jason Rohrer worked on a local currency, called North Country Notes (NCN), for Potsdam, New York. In 2016 Rohrer became the first videogame artist to have a solo retrospective in an art museum. His exhibition, The Game Worlds of Jason Rohrer, was on view at The Davis Museum at Wellesley College until June 2016. Games Rohrer has placed most of his creative work, such as video game source code and assets, into the public domain, as he is a supporter of a copyright-less free distribution economy. Many of his projects are hosted on SourceForge. Transcend – Rohrer's first game, released in 2005. Transcend is "an abstract 2D shooting game that doubles as a multimedia sculpture." Cultivation – Rohrer's second game, released in 2007, is "a social simulation about a community of gardeners." Passage – Rohrer's third game, which was released in 2007 and garnered much attention from the mainstream and independent gaming communities. The game lasts exactly five minutes and focuses on life, mortality and the costs and benefits of marriage. It was featured in Kokoromi's curated GAMMA 256 event. In 2012 Passage became part of the permanent collection at the Museum of Modern Art. Gravitation – Rohrer's fourth game, released in 2008. That same year, it won the Jury award at IndieCade. Between – Rohrer's fifth game, released in 2008. It is hosted by Esquire Magazine as an adjunct to Rohrer's profile in the December 2008 issue and was the recipient of the 2009 Independent Games Festival's Innovation Award. Primrose – Rohrer's sixth game, designed for the iPhone (although released for home computers as well). It was released on February 19, 2009. It is a departure from the art-game theme and is a simple puzzle game. Sleep is Death – Adventure-game-making software, released April 16, 2010. Sleep is Death games require the creator to be present to respond to the player's actions in near real-time. It has received favorable reviews from a number of mainstream game review sites. Game Design Sketchbook – In 2008 Rohrer created a number of games for The Escapist. These were usually unpolished prototype games that explore a single theme, with an accompanying article by Rohrer describing the creative process of making games. Inside a Star-filled Sky – An "infinite, recursive tactical shooter" released in February 2011 to favorable reviews. It was selected for presentation at the 2011 Tokyo Game Show's Sense of Wonder Night and, like many of Rohrer's other games, was placed in the public domain. Diamond Trust of London – A 2012 crowdfunded two-player strategy game for the Nintendo DS, released into the public domain. The Castle Doctrine – An MMO burglary and home-defense video game, sold on Steam while remaining public-domain software. Cordial Minuet – A two-player online gambling strategy game played anonymously for real money. One Hour One Life – A multiplayer survival game of parenting and civilization building, released in February 2018 and sold exclusively via the developer's website. 
Like his earlier games, it is public-domain software and is hosted on GitHub. GDC 2011 Game Design Challenge At the 2011 Game Developers Conference Rohrer won the annual Game Design Challenge by proposing a game that could only be played once by a single player and then passed on to another. This idea was based on stories of his late grandfather that had been passed down. He stated, "We become like gods to those who come after us." With this in mind he created a Minecraft mod, Chain World, that was put on a single USB flash drive, which he then passed to an audience member. The rules of the game were simple: no text signs are allowed in the game; players may play until they die once; upon respawning they must quit the game; and the game must then be passed on to someone who is interested and willing to respect the rules. GDC 2013 Game Design Challenge In March 2013 the Game Design Challenge was held at the Game Developers Conference for the final time. Its theme was "Humanity's Final Game." Rohrer was among the six contestants and won with his entry A Game For Someone, a physical game constructed of titanium. After its completion Rohrer buried it in an undisclosed location in the Nevada desert. At the challenge he released lists containing over one million discrete GPS coordinates, one of which was the actual burial spot. He estimated that with coordinated searching it would take at least 2,700 years to locate the game. The Game Worlds of Jason Rohrer In February 2016, the Davis Museum at Wellesley College exhibited The Game Worlds of Jason Rohrer, the first museum retrospective dedicated to the work of a single video game maker. The museum stated "Rohrer's exhibited work is deft, engaging, and often surprisingly moving. It refers to a diverse set of cultural influences ranging from the fiction of Borges to Black Magic; at the same time, it also engages pressing emotional, intellectual, philosophical, and social issues. Rohrer's substantial recognition, which has included feature coverage in Wired, Esquire and The Wall Street Journal, as well as inclusion in MoMA's initial videogame acquisition, has been built on a singularly fascinating body of games. These range from the elegantly simple—such as Gravitation (2008), a game about flights of creative mania and melancholy—to others of Byzantine complexity. The exhibition featured four large build-outs that translate Rohrer’s games into unique spatial experiences, alongside a section dedicated to exploring a large body of his work." The exhibit was designed by IKD, a Boston-based design firm. Other projects konspire2b, a pseudonymous channel-based distributed file system; token word, a Xanadu-style text editing system; tangle, a proxy server which attempts to find relationships between websites and user visits; MUTE, a file sharing network designed with anonymity in mind; Monolith, a thought experiment that might be relevant to digital copyright, which has since been expanded into a computer program implementing his ideas; seedBlogs, a modular building block that lets users add PHP- and MySQL-backed dynamic content to any website; silk, a web-based hypertext system to simplify web page linking, similar to wiki markup; hyperlit, a literary hypertext authoring system; subreal, a distributed evolution system; Project December, an online conversation AI using GPT-2 and GPT-3 technology. 
References External links Jason Rohrer official website , with Jason Rohrer and Chris Crawford (2009) American computer programmers American video game designers 1977 births Cornell University alumni Living people Video game developers Open content activists Free software programmers Independent video game developers
59876550
https://en.wikipedia.org/wiki/Antonio%20Zamora
Antonio Zamora
Antonio Zamora is a consultant in the fields of computer programming, chemical information science, and computational linguistics who worked on chemical search systems and automatic spelling correction algorithms. Career Zamora studied chemistry at the University of Texas (B.S. 1962), and served in the U.S. Army during the Vietnam era from 1962 to 1965. He studied medical technology at the Medical Field Service School (MFSS) in Fort Sam Houston and worked in hematology at Brooke Army Medical Center. After concluding his military service, he worked at Chemical Abstracts Service (CAS) in Columbus, Ohio as an editor of one of the first computer-produced publications in the United States. While working for CAS, he gained a master's degree in Computer Science from Ohio State University (M.S. 1969), and began working in their programming department; eventually he transferred to the research department, where he was able to combine his chemical background with programming. He contributed to the development of a chemical registry system and chemical structure input systems, devised an algorithm for determining the Smallest Set of Smallest Rings (SSSR), a cheminformatics term for the minimal cycle basis of a molecular graph, and worked on experimental automatic abstracting, indexing programs, and spelling aid algorithms. In 1982 he joined IBM Corporation as a senior programmer working on spell checkers and multilingual information retrieval tools. After his retirement from IBM in 1996, Zamora established Zamora Consulting, LLC and worked as a consultant for the American Chemical Society (ACS), the National Library of Medicine (NLM), and the US Department of Energy (DOE) to support semantic enhancements for search engines. Post-retirement In his retirement Zamora has also self-published a science fiction book, and several small books while investigating the Carolina Bays; in his 2017 paper "A model for the geomorphology of the Carolina Bays" he proposed that the "Carolina Bays are the remodeled remains of oblique conical craters formed on ground liquefied by the seismic shock waves of secondary impacts of glacier ice boulders ejected by an extraterrestrial impact on the Laurentide Ice Sheet". His research was based on geometrical analysis of the Carolina Bays using Google Earth in combination with LiDAR data. The theory is not widely accepted; many other theories have been proposed to account for the bays' formation. SPEEDCOP project Zamora carried out pioneering research on the SPEEDCOP (SPElling Error Detection COrrection Project) automatic spelling correction project; the project was supported by the National Science Foundation (NSF) at Chemical Abstracts Service (CAS) and extracted over 50,000 misspellings from approximately 25,000,000 words of text from seven scientific and scholarly databases. The purpose of the project was to automatically correct spelling errors, predominantly typing errors, in a database of scientific abstracts. For each word in a dictionary, a key is computed consisting of the first letter, followed by the consonant letters in order of occurrence, followed by the vowel letters in order of occurrence, with each letter recorded once only; for example, "inoculation" produces the key INCLTOUA. The keys are sorted in order. The key of each word in the text is compared with the dictionary keys, and if no exact match is found it is compared with keys on either side to find a probable match. The use of the key reduces the portion of the dictionary that has to be searched. 
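The key computation is simple enough to express in a few lines of code. The following C sketch is an illustration of the published description only, not Zamora's original implementation; for the word "inoculation" it prints the key INCLTOUA:

    /* Minimal sketch of the SPEEDCOP similarity key described above. */
    #include <ctype.h>
    #include <stdio.h>
    #include <string.h>

    static int is_vowel(int c) { return strchr("AEIOU", c) != NULL; }

    static void speedcop_key(const char *word, char *key, size_t keysize)
    {
        int seen[26] = {0};
        size_t n = 0;

        if (keysize < 2 || word[0] == '\0') { if (keysize) key[0] = '\0'; return; }

        int first = toupper((unsigned char)word[0]);   /* key starts with the first letter */
        key[n++] = (char)first;
        if (isalpha(first)) seen[first - 'A'] = 1;

        /* Pass 0 collects consonants, pass 1 collects vowels, each in order
           of occurrence; a letter already in the key is never repeated. */
        for (int pass = 0; pass < 2; pass++) {
            for (const char *p = word + 1; *p != '\0'; p++) {
                int c = toupper((unsigned char)*p);
                if (!isalpha(c) || seen[c - 'A']) continue;
                if (is_vowel(c) != pass) continue;     /* wrong class for this pass */
                if (n + 1 < keysize) { key[n++] = (char)c; seen[c - 'A'] = 1; }
            }
        }
        key[n] = '\0';
    }

    int main(void)
    {
        char key[32];
        speedcop_key("inoculation", key, sizeof key);
        printf("%s\n", key);   /* prints INCLTOUA */
        return 0;
    }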
Awards 1971: the best paper of the year award of the Journal of the American Society for Information Science was awarded to James E. Rush, R. Salvador, and Zamora, for the paper "Automatic Abstracting and Indexing". 2011: winner of the National Library of Medicine's award "Show off Your Apps: Innovative Uses of NLM Information". Publications Papers Antonio Zamora, 2017, A model for the geomorphology of the Carolina Bays. Geomorphology. 282: 209–216. Rudolf Frisch, Antonio Zamora, 1988, Spelling Assistance for Compound Words. IBM Journal of Research and Development 32(2): 195-200 Joseph J. Pollock, Antonio Zamora, 1984 Automatic Spelling Correction in Scientific and Scholarly Text. Commun. ACM 27(4): 358-368 Joseph J. Pollock, Antonio Zamora, 1984 System design for detection and correction of spelling errors in scientific and scholarly text. JASIS 35(2): 104-109 Joseph J. Pollock, Antonio Zamora, 1983 Collection and characterization of spelling errors in scientific and scholarly text. JASIS 34(1): 51-58 E. M. Zamora, Joseph J. Pollock, Antonio Zamora, 1981 The use of trigram analysis for spelling error detection. Inf. Process. Manage. 17(6): 305-316 Antonio Zamora, 1980 Automatic detection and correction of spelling errors in a large data base. JASIS 31(1): 51-57 Karen A. Hamill, Antonio Zamora, 1980 The use of titles for automatic document classification. JASIS 31(6): 396-402 David L. Dayton, M. J. Fletcher, Charles W. Moulton, Joseph J. Pollock, Antonio Zamora, 1977 Comparison of the Retrieval Effectiveness of CA Condensates (CACon) and CA Subject Index Alert (CASIA). Journal of Chemical Information and Computer Sciences 17(1): 20-28 Ronald G. Dunn, William Fisanick, Antonio Zamora, 1977 A Chemical Substructure Search System Based on Chemical Abstracts Index Nomenclature. Journal of Chemical Information and Computer Sciences 17(4): 212-219 Tommy Ebe, Antonio Zamora, 1976 Wiswesser Line Notation Processing at Chemical Abstracts Service. Journal of Chemical Information and Computer Sciences 16(1): 33-35 Antonio Zamora, David L. Dayton, 1976. The Chemical Abstracts Service Chemical Registry System. V. Structure Input and Editing. Journal of Chemical Information and Computer Sciences 16(4): 219-222 Joseph J. Pollock, Antonio Zamora, 1975. Automatic Abstracting Research at Chemical Abstracts Service. Journal of Chemical Information and Computer Sciences 15(4): 226-232 Available also as PDF: https://cosmictusk.com/wp-content/uploads/A-model-for-the-geomorphology-of-the-Carolina-Bays.pdf Books Further reading Su Zhang, University of Kent, Spell Checking using the Google Web API (contains details of SPEEDCOP algorithm) Bakar, Z. A., Sembok, T. M. and Yusoff, M. (2000), An evaluation of retrieval effectiveness using spelling‐correction and string‐similarity matching methods on Malay texts. J. Am. Soc. Inf. Sci., 51: 691-706. (Evaluates SPEEDCOP and other algorithms) References Living people Information retrieval researchers Year of birth missing (living people) University of Texas alumni Ohio State University alumni
30581559
https://en.wikipedia.org/wiki/POSIX%20terminal%20interface
POSIX terminal interface
The POSIX terminal interface is the generalized abstraction, comprising both an Application Programming Interface for programs, and a set of behavioural expectations for users of a terminal, as defined by the POSIX standard and the Single Unix Specification. It is a historical development from the terminal interfaces of BSD version 4 and Seventh Edition Unix. General underlying concepts Hardware A multiplicity of I/O devices are regarded as "terminals" in Unix systems. These include: serial devices connected by a serial port such as printers/teleprinters, teletypewriters, modems supporting remote terminals via dial-up access, and directly-connected local terminals display adapter and keyboard hardware directly incorporated into the system unit, taken together to form a local "console", which may be presented to users and to programs as a single CRT terminal or as multiple virtual terminals software terminal emulators, such as the xterm, Konsole, GNOME Terminal, and Terminal programs, and network servers such as the rlogin daemon and the SSH daemon, which make use of pseudoterminals Terminal intelligence and capabilities Intelligence: terminals are dumb, not intelligent Unlike its mainframe and minicomputer contemporaries, the original Unix system was developed solely for dumb terminals, and that remains the case today. A terminal is a character-oriented device, comprising streams of characters received from and sent to the device. Although the streams of characters are structured, incorporating control characters, escape codes, and special characters, the I/O protocol is not structured as would be the I/O protocol of smart, or intelligent, terminals. There are no field format specifications. There's no block transmission of entire screens (input forms) of input data. By contrast mainframes and minicomputers in closed architectures commonly use Block-oriented terminals. Capabilities: terminfo, termcap, curses, et al. The "capabilities" of a terminal comprise various dumb terminal features that are above and beyond what is available from a pure teletypewriter, which programs can make use of. They (mainly) comprise escape codes that can be sent to or received from the terminal. The escape codes sent to the terminal perform various functions that a CRT terminal (or software terminal emulator) is capable of that a teletypewriter is not, such as moving the terminal's cursor to positions on the screen, clearing and scrolling all or parts of the screen, turning on and off attached printer devices, programmable function keys, changing display colours and attributes (such as reverse video), and setting display title strings. The escape codes received from the terminal signify things such as function key, arrow key, and other special keystrokes (home key, end key, help key, PgUp key, PgDn key, insert key, delete key, and so forth). These capabilities are encoded in databases that are configured by a system administrator and accessed from programs via the terminfo library (which supersedes the older termcap library), upon which in turn are built libraries such as the curses and ncurses libraries. Application programs use the terminal capabilities to provide textual user interfaces with windows, dialogue boxes, buttons, labels, input fields, menus, and so forth. Controlling environment variables: TERM et al. 
The particular set of capabilities for the terminal that a (terminal-aware) program's input and output uses is obtained from the database rather than hardwired into programs and libraries, and is controlled by the TERM environment variable (and, optionally for the termcap and terminfo libraries, the TERMCAP and TERMINFO environment variables, respectively). This variable is set by whatever terminal monitor program spawns the programs that then use that terminal for its input and output, or sometimes explicitly. For example: The getty program (or equivalent) sets the TERM environment variable according to a system database (variously inittab or the configuration files for the ttymon or launchd programs) defining what local terminals are attached to what serial ports and what terminal types are provided by local virtual terminals or the local system console. A dial-up user on a remote terminal is not using the type of terminal that the system commonly expects on that dial-up line, and so manually sets the TERM environment variable immediately after login to the correct type. (More usually, the terminal type set by the getty program for the dial-up line, that the system administrator has determined to be used most often by dial-up users with remote terminals, matches the one used by the dial-up user and that user has no need to override the terminal type.) The SSH server daemon (or equivalent such as the rlogin daemon) sets the TERM environment variable to the same terminal type as the SSH client. The software terminal emulator, using a pseudoterminal, sets the TERM environment variable to specify the type of terminal that it is emulating. Emulated terminals often do not exactly match real terminal hardware, and terminal emulators have type names dedicated for their use. The xterm program (by default) sets xterm as the terminal type, for example. The GNU Screen program sets screen as the terminal type. Job control Terminals provide job control facilities. Interactively, the user at the terminal can send control characters that suspend the currently running job, reverting to the interactive job control shell that spawned the job, and can run commands that place jobs in the "background" or that switch another, background, job into the foreground (unsuspending it if necessary). Line disciplines Strictly speaking, in Unices a terminal device comprises the underlying tty device driver, responsible for the physical control of the device hardware via I/O instructions and handling device interrupt requests for character input and output, and the line discipline. A line discipline is independent of the actual device hardware, and the same line discipline can be used for a terminal concentrator device responsible for multiple controlling terminals as for a pseudoterminal. In fact, the line discipline (or, in the case of BSD, AIX, and other systems, line disciplines) are the same across all terminal devices. It is the line discipline that is responsible for local echo, line editing, processing of input modes, processing of output modes, and character mapping. All these things are independent of the actual hardware, dealing as they do in the simple abstractions provided by tty device drivers: transmit a character, receive a character, set various hardware states. In Seventh Edition Unix, BSD systems and derivatives including macOS, and Linux, each terminal device can be switched amongst multiple line disciplines. 
In the AT&T STREAMS system, line disciplines are STREAMS modules that may be pushed onto and popped off a STREAMS I/O stack. History The POSIX terminal interface is derived from the terminal interfaces of various Unix systems. Early Unices: Seventh Edition Unix The terminal interface provided by Unix 32V and Seventh Edition Unix, and also presented by BSD version 4 as the old terminal driver, was a simple one, largely geared towards teletypewriters as terminals. Input was entered a line at a time, with the terminal driver in the operating system (and not the terminals themselves) providing simple line editing capabilities. A buffer was maintained by the kernel in which editing took place. Applications reading terminal input would receive the contents of the buffer only when the return key was pressed on the terminal to end line editing. The "kill" character sent from the terminal to the system (by default '@') would erase the entire current contents of the editing buffer, and would normally be displayed as an '@' symbol followed by a newline sequence to move the print position to a fresh blank line. The "erase" character (by default '#') would erase the last character from the end of the editing buffer, and would normally be displayed as a '#' symbol, which users would have to recognize as denoting a "rubout" of the preceding character (teletypewriters not being physically capable of erasing characters once they have been printed on the paper). From a programming point of view, a terminal device had transmit and receive baud rates, "erase" and "kill" characters (that performed line editing, as explained), "interrupt" and "quit" characters (generating signals to all of the processes for which the terminal was a controlling terminal), "start" and "stop" characters (used for modem flow control), an "end of file" character (acting like a carriage return except discarded from the buffer by the read() system call and therefore potentially causing a zero-length result to be returned), and various basic mode flags determining whether local echo was emulated by the kernel's terminal driver, whether modem flow control was enabled, the lengths of various output delays, mapping for the carriage return character, and the three input modes. The three input modes were: line mode (also called "cooked" mode) In line mode the line discipline performs all line editing functions and recognizes the "interrupt" and "quit" control characters and transforms them into signals sent to processes. Applications programs reading from the terminal receive entire lines, after line editing has been completed by the user pressing return. cbreak mode cbreak mode is one of two character-at-a-time modes. (Stephen R. Bourne jokingly referred to it as a "half-cooked" and therefore "rare" mode.) The line discipline performs no line editing, and the control sequences for line editing functions are treated as normal character input. Applications programs reading from the terminal receive characters immediately, as soon as they are available in the input queue to be read. However, the "interrupt" and "quit" control characters, as well as modem flow control characters, are still handled specially and stripped from the input stream. raw mode raw mode is the other of the two character-at-a-time modes. The line discipline performs no line editing, and the control sequences for both line editing functions and the various special characters ("interrupt", "quit", and flow control) are treated as normal character input. 
Applications programs reading from the terminal receive characters immediately, and receive the entire character stream unaltered, just as it came from the terminal device itself. The programmatic interface for querying and modifying all of these modes and control characters was the ioctl() system call. (This replaced the stty() and gtty() system calls of Sixth Edition Unix.) Although the "erase" and "kill" characters were modifiable from their defaults of '#' and '@', for many years those remained the pre-set defaults in the terminal device drivers, and on many Unix systems, which only altered terminal device settings as part of the login process in system login scripts that ran after the user had entered a username and password, any mistakes at the login and password prompts had to be corrected using these historical editing characters inherited from teletypewriter terminals. BSD: the advent of job control With the BSD Unices came job control, and a new terminal driver with extended capabilities. These extensions comprised additional (again programmatically modifiable) special characters: The "suspend" and "delayed suspend" characters (by default Control+Z and Control+Y, the ASCII SUB and EM characters) caused the generation of a new SIGTSTP signal to processes in the terminal's controlling process group. The "word erase", "literal next", and "reprint" characters (by default Control+W, Control+V, and Control+R, the ASCII ETB, SYN, and DC2 characters) performed additional line editing functions. "word erase" erased the last word at the end of the line editing buffer. "literal next" allowed any special character to be entered into the line editing buffer (a function available, somewhat inconveniently, in Seventh Edition Unix via the backslash character). "reprint" caused the line discipline to reprint the current contents of the line editing buffer on a new line (useful for when another, background, process had generated output that had intermingled with line editing). The programmatic interface for querying and modifying all of these extra modes and control characters was still the ioctl() system call, which its creators described as a "rather cluttered interface". All of the original Seventh Edition Unix functionality was retained, and the new functionality was added via additional ioctl() operation codes, resulting in a programmatic interface that had clearly grown, and that presented some duplication of functionality. System III and System V System III introduced a new programming interface that combined Seventh Edition's separate ioctl() operations to get and set flags and to get and set control characters into calls that used a termio structure to hold both flags and control characters and that could get them in a single operation and set them in another single operation. It also split some of the flags from the Seventh Edition interface into multiple separate flags, and added some additional capabilities, although it did not support job control or the cooked-mode enhancements of 4BSD. For example, it replaced the "cooked", "cbreak", and "raw" modes of Seventh Edition with different abstractions. The recognition of signal-generating characters is independent of input mode, and there are only the two input modes: canonical and non-canonical. (This allows a terminal input mode not present in Seventh Edition and BSD: canonical mode with signal generation disabled.) System III's successors, including System V, used the same interface. 
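The canonical/non-canonical distinction introduced by System III carries over unchanged into the POSIX interface described in the following sections. As a minimal, hedged sketch (standard POSIX termios calls only; error handling abbreviated, and the particular settings are only an example), the following C program switches its terminal into non-canonical, no-echo input and then restores the saved settings:

    /* Minimal sketch: put the terminal on standard input into non-canonical,
       no-echo mode (read() returns after each byte), then restore it. */
    #include <stdio.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
        struct termios saved, raw;

        if (tcgetattr(STDIN_FILENO, &saved) == -1) { perror("tcgetattr"); return 1; }

        raw = saved;
        raw.c_lflag &= ~(ICANON | ECHO);   /* no line editing, no local echo */
        raw.c_cc[VMIN]  = 1;               /* read() blocks until at least one byte */
        raw.c_cc[VTIME] = 0;               /* no inter-byte timeout */
        if (tcsetattr(STDIN_FILENO, TCSAFLUSH, &raw) == -1) { perror("tcsetattr"); return 1; }

        char c;
        if (read(STDIN_FILENO, &c, 1) == 1)
            printf("read byte 0x%02x\r\n", (unsigned char)c);

        tcsetattr(STDIN_FILENO, TCSAFLUSH, &saved);   /* restore the previous mode */
        return 0;
    }

Because the sketch leaves the other local-mode flags alone, the "interrupt" and "quit" characters still generate signals, reflecting System III's separation of signal-generating character recognition from the choice of input mode.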
POSIX: Consolidation and abstraction One of the major problems that the POSIX standard addressed with its definition of a general terminal interface was the plethora of programmatic interfaces. Although by the time of the standard the behaviour of terminals was fairly uniform from system to system, most Unices having adopted the notions of line disciplines and the BSD job control capabilities, the programmatic interface to terminals via the ioctl() system call was a mess. Different Unices supplied different ioctl() operations, with different (symbolic) names, and different flags. Portable source code had to contain a significant amount of conditional compilation to accommodate the differences across software platforms, even though they were all notionally Unix. The POSIX standard replaces the ioctl() system entirely, with a set of library functions (which, of course, may be implemented under the covers via platform-specific ioctl() operations) with standardized names and parameters. The termio data structure of System V Unix was used as a template for the POSIX termios data structure, whose fields were largely unchanged except that they now used alias data types for specifying the fields, allowing them to be easily ported across multiple processor architectures by implementors, rather than explicitly requiring the unsigned short and char data types of the C and C++ programming languages (which might be inconvenient sizes on some processor architectures). POSIX also introduced support for job control, with the termios structure containing suspend and delayed-suspend characters in addition to the control characters supported by System III and System V. It did not add any of the cooked-mode extensions from BSD, although SunOS 4.x, System V Release 4, Solaris, HP-UX, AIX, newer BSDs, macOS, and Linux have implemented them as extensions to termios. What the standard defines Controlling terminals and process groups Each process in the system has either a single controlling terminal, or no controlling terminal at all. A process inherits its controlling terminal from its parent, and the only operations upon a process are acquiring a controlling terminal, by a process that has no controlling terminal, and relinquishing it, by a process that has a controlling terminal. No portable way of acquiring a controlling terminal is defined, the method being implementation defined. The standard defines the O_NOCTTY flag for the open() system call, which is the way of preventing what is otherwise the conventional way of acquiring a controlling terminal (a process with no controlling terminal open()s a terminal device file that isn't already the controlling terminal for some other process, without specifying the O_NOCTTY flag) but leaves its conventional semantics optional. Each process also is a member of a process group. Each terminal device records a process group that is termed its foreground process group. The process groups control terminal access and signal delivery. Signals generated at the terminal are sent to all processes that are members of the terminal's foreground process group. read() and write() I/O operations on a terminal by a process that is not a member of the terminal's foreground process group will and may optionally (respectively) cause signals (SIGTTIN and SIGTTOU respectively) to be sent to the invoking process. 
Various terminal-mode-altering library functions have the same behaviour as write(), except that they always generate the signals, even if that functionality is turned off for write() itself. The termios data structure The data structure used by all of the terminal library calls is the termios structure, whose C and C++ programming language definition is as follows:

    struct termios {
        tcflag_t c_iflag;    // Input modes
        tcflag_t c_oflag;    // Output modes
        tcflag_t c_cflag;    // Control modes
        tcflag_t c_lflag;    // Local modes
        cc_t     c_cc[NCCS]; // Control characters
    };

The order of the fields within the termios structure is not defined, and implementations are allowed to add non-standard fields. Indeed, implementations have to add non-standard fields for recording input and output baud rates. These are recorded in the structure, in an implementation-defined form, and accessed via accessor functions, rather than by direct manipulation of the field values, as is the case for the standardized structure fields. The data type aliases tcflag_t and cc_t, as well as the symbolic constant NCCS and symbolic constants for the various mode flags, control character names, and baud rates, are all defined in a standard header termios.h. (This is not to be confused with the similarly named header termio.h from System III and System V, which defines a similar termio structure and a lot of similarly named symbolic constants. This interface is specific to System III and System V, and code that uses it will not necessarily be portable to other systems.) The structure's fields are (in summary, for details see the main article):

c_iflag - input mode flags for controlling input parity, input newline translation, modem flow control, 8-bit cleanliness, and response to a (serial port's) "break" condition
c_oflag - output mode flags for controlling implementation-defined output postprocessing, output newline translation, and output delays after various control characters have been sent
c_cflag - terminal hardware control flags for controlling the actual terminal device rather than the line discipline: the number of bits in a character, parity type, hangup control, and serial line flow control
c_lflag - local control flags for controlling the line discipline rather than the terminal hardware: canonical mode, echo modes, signal-generation character recognition and handling, and enabling the generation of the SIGTTOU signal by the write() system call

The library functions are (in summary, for details see the main article):

tcgetattr() - query a terminal device's current attribute settings into a termios structure
tcsetattr() - set a terminal device's current attribute settings from a termios structure, optionally waiting for queued output to drain and flushing queued input
cfgetispeed() - query the input baud rate from the implementation-defined fields in a termios structure
cfgetospeed() - query the output baud rate from the implementation-defined fields in a termios structure
cfsetispeed() - set the input baud rate in the implementation-defined fields in a termios structure
cfsetospeed() - set the output baud rate in the implementation-defined fields in a termios structure
tcsendbreak() - send a modem "break" signal on a serial device terminal
tcdrain() - wait for queued output to drain
tcflush() - discard queued input
tcflow() - change flow control
tcgetpgrp() - query the terminal's foreground process group
tcsetpgrp() - set the terminal's foreground process group

Special characters The c_cc[] array member of the termios data structure specifies all of the 
(programmatically modifiable) special characters. The indexes into the array are symbolic constants, one for each special character type, as in the table at right. (Two further entries in the array are relevant to non-canonical mode input processing and are discussed below.) Non-programmatically modifiable special characters are linefeed (ASCII LF) and carriage return (ASCII CR). Input processing Input processing determines the behaviour of the read() system call on a terminal device and the line editing and signal-generation characteristics of the line discipline. Unlike the case of Seventh Edition Unix and BSD version 4, and like the case of System III and System V, line editing operates in one of just two modes: canonical mode and non-canonical mode. The basic difference between them is when, from the point of view of the blocking/non-blocking requirements of the read() system call (specified with the O_NONBLOCK flag on the file descriptor via open() or fcntl()), data "are available for reading". Canonical mode processing In canonical mode, data are accumulated in a line editing buffer, and do not become "available for reading" until line editing has been terminated by the user (at the terminal) sending a line delimiter character. Line delimiter characters are special characters, and they are end of file, end of line, and linefeed (ASCII LF). The former two are settable programmatically, whilst the latter is fixed. The latter two are included in the line editing buffer, whilst the former one is not. More strictly, zero or more lines are accumulated in the line editing buffer, separated by line delimiters (which may or may not be discarded once read() comes around to reading them), and line editing operates upon the part of the line editing buffer that follows the last (if any) line delimiter in the buffer. So, for example, the "erase" character (whatever that has been programmed to be) will erase the last character in the line buffer only up to (but not including) a preceding line delimiter. Non-canonical mode processing In non-canonical mode, data are accumulated in a buffer (which may or may not be the line editing buffer — some implementations having separate "processed input" and "raw input" queues) and become "available for reading" according to the values of two input control parameters, the c_cc[MIN] and c_cc[TIME] members of the termios data structure. Both are unsigned quantities (because cc_t is required to be an alias for an unsigned type). The former specifies a minimum number of characters, and the latter specifies a timeout in tenths of a second. There are four possibilities: c_cc[TIME] and c_cc[MIN] are both zero In this case, the data in the buffer are "available for reading" immediately, and read() returns immediately with whatever data are in the buffer (potentially returning zero if there are zero data available). c_cc[TIME] is non-zero and c_cc[MIN] is zero In this case, the data in the buffer are "available for reading" after the specified timeout has elapsed, the timer being triggered by the start of the read() system call, or if a single character is received. In other words, read() waits for a maximum specified total time, and may return zero data, and returns any data as soon as they are received. c_cc[TIME] is zero and c_cc[MIN] is non-zero In this case, the data in the buffer are "available for reading" after the specified number of characters have been received in the buffer. 
In other words, read() waits for a minimum amount of data (which may be larger than what the caller is prepared to read in the system call), will not return zero data, and may wait indefinitely. c_cc[TIME] and c_cc[MIN] are both non-zero In this case, the data in the buffer are "available for reading" after the specified number of characters have been received in the buffer or the timeout has expired since the last character was entered. There is no timeout for the very first character. In other words, read() waits for a minimum amount of data (which may be larger than what the caller is prepared to read in the system call), will not return zero data, may wait indefinitely, but won't wait longer than the specified timeout if at least one character is in the buffer to be read. Output processing Output processing is largely unchanged from its System III/System V roots. Output mode control flags determine various options: Carriage returns may be inserted before each linefeed character, to translate Unix newline semantics to the ASCII semantics that many terminals expect. Terminals may be given time to exercise various control codes that would (on a teletypewriter or similar) result in physical movements of the carriage that may take significant (from the computer's point of view) amounts of time, such as backspaces, horizontal tabs, carriage returns, form feeds, and line feeds. Notes Sources Further reading Computer terminals POSIX
50587571
https://en.wikipedia.org/wiki/ProWorkflow
ProWorkflow
ProWorkflow is a web-based project management application designed for managers and staff to plan, track and collaborate to improve project delivery. ProWorkflow is now on its 8th iteration. History ProWorkflow was founded in 2002 by CEO Julian Stone. The idea for ProWorkflow was to assist with internal workflow, but early sales of the product suggested that there was an opportunity to expand to assist external companies. In 2003, ProActive Software Limited acquired ProWorkflow. Upon acquisition, it was offered as a download (and is still available as a download in some instances); however, due to the evolution of technology, the preferred method of supply is now software-as-a-service. The idea originated from the creation of a basic PalmPilot job tracking app, with time tracking ability added later. Realizing that it could also benefit other businesses, Stone worked with software developer Alan Barlow to build it as a web app that would become ProWorkflow v1. The first release was a code download, and the first sale, a $70 one-time code download fee, occurred only an hour after launch. In 2003, John Walley joined ProActive Software as Director/Chairman to provide mentorship and guidance to support business growth and strategy. According to Merchant Maverick, by July 2014 ProWorkflow had been used by companies to create over 1,171,315 projects; most of these companies were creative agencies and technology companies. ProWorkflow is a project management tool that helps businesses manage projects and workflow internally. It has many built-in reporting features and is one of five cloud-based project management tools listed by TechRepublic for a wide range of projects. ProWorkflow offers month-to-month and short-term contract options. PCMag lists ProWorkflow as a strong contender against Zoho Projects and Teamwork Projects because of its collaboration features for businesses of different sizes, including freelancers, start-ups and young entrepreneurs. ProWorkflow offers further apps for its software, available in its app store, which extend the functionality of ProWorkflow. Companies also use ProWorkflow for time tracking, invoicing and reporting as a separate solution. ProWorkflow's mobile app provides the ability to manage projects from anywhere. Versions ProWorkflow V3 launched in 2003. ProWorkflow V4 launched in 2004. ProWorkflow V8: a ground-up re-build to ensure a high level of security and scalability and to provide a platform for apps. Infrastructure ProWorkflow is built on a ColdFusion backend that powers all the API endpoints, but the majority of the work is done on the client side with heavy use of native JavaScript and jQuery libraries. Infrastructure is hosted by Datapipe in Chicago, Illinois. See also Project management Project management software Web 2.0 References Project management software
151593
https://en.wikipedia.org/wiki/NewTek
NewTek
NewTek, Inc. is a San Antonio, Texas–based hardware and software company that produces live and post-production video tools and visual imaging software for personal computers. The company was founded in 1985 in Topeka, Kansas, United States, by Tim Jenison and Paul Montgomery. On 1 April 2019, it was announced that NewTek would be wholly acquired by Vizrt. Products In 2005, NewTek introduced TriCaster, a product that merges live video switching, broadcast graphics, virtual sets, special effects, audio mixing, recording, social media publishing and web streaming into an integrated, portable and compact appliance. TriCaster was announced at DEMO@15 and then launched at NAB 2005. At NAB 2006, NewTek announced TriCaster PRO, which introduced professional video and audio connections and virtual sets (using proprietary NewTek LiveSet technology) to the TriCaster line. At NAB 2007, NewTek introduced TriCaster STUDIO, the first TriCaster to support six cameras. At NAB 2008, NewTek introduced TriCaster BROADCAST, the first model to deliver SDI video and audio support. In early 2009, NewTek introduced 3PLAY, a portable multi-channel HD/SD slow motion replay system. At NAB 2009, NewTek introduced TriCaster TCXD300, the first high definition TriCaster. At NAB 2010, NewTek introduced TriCaster TCXD850, a 22-channel high definition model in a rack mount form factor. The TCXD850 won four industry awards: the Winners Circle Award, STAR, Vidy and Black Diamond awards from EventDV, TV Technology, Videography and DV magazines, respectively, at NAB 2010. In 2004, NewTek released the source code to some of its Amiga platform products through DiscreetFX. In 2015, NewTek announced the Network Device Interface (NDI) protocol, which allows applications and devices to transport high-quality, low-latency video over gigabit Ethernet networks. The protocol became available in public products starting in early 2016. In 2017, version 3 of the protocol was released, which adds multicast support, a high-efficiency mode called NDI-HX and other new features. Company history The company's first products included DigiView in 1986 and DigiPaint, both for the Commodore Amiga personal computer. DigiView was the first full-color video digitizer, and added slow-scan digitizing capabilities to the Amiga platform, allowing images to be imported at low cost, before modern image scanning technology was widely available. The product consisted of an input module that allowed the connection of a standard black-and-white video camera (security cameras were popularly used), with which greyscale images could be captured to the Amiga. With the addition of a color wheel, full-color images could be captured by rotating the wheel's red, green, and blue segments in front of the lens and capturing the same image three times, once through each filter. This could be done manually, or with a further motorized accessory. The software combined the color information from the three images into one color image. According to the company, DigiView sold over 100,000 units. The Amiga hardware included the ability to display 4096 colors on the screen simultaneously, and DigiPaint allowed graphic artists to draw with a variety of tools in that full-color space at a time when IBM PCs were typically limited to between 4 and 16 colors. At release, DigiPaint offered the unique capability of editing and painting on images in the Amiga's Hold-And-Modify high-color mode in real time. 
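The colour-wheel technique amounts to treating each filtered greyscale capture as one colour channel of the final picture. The following C sketch is a schematic illustration of that idea only, not DigiView code; the 8-bit buffers and interleaved RGB output layout are assumptions made for the example:

    /* Schematic illustration of DigiView-style colour capture: three greyscale
       frames, shot through red, green and blue filters, merged into one
       interleaved RGB image.  Buffer format is an assumption for the example. */
    #include <stddef.h>
    #include <stdint.h>

    void combine_rgb(const uint8_t *red, const uint8_t *green, const uint8_t *blue,
                     uint8_t *rgb_out, size_t width, size_t height)
    {
        for (size_t i = 0; i < width * height; i++) {
            rgb_out[3 * i + 0] = red[i];     /* capture taken through the red filter   */
            rgb_out[3 * i + 1] = green[i];   /* capture taken through the green filter */
            rgb_out[3 * i + 2] = blue[i];    /* capture taken through the blue filter  */
        }
    }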
The company found widespread fame and started the desktop-video revolution with the release of the Video Toaster, an innovative system for low-cost video switching and post production. The company was featured in magazine articles in such mainstream publications as Rolling Stone and was featured on the NBC Nightly News. In the early 1990s, a proliferation of video effects in television shows is directly attributable to the Video Toaster's effect of lowering the cost of video-processing hardware from the $100K range into the $4K range. One specific example is the television show Home Improvement, which used a video toaster transition for every cut between scenes—beginning with black-and-white transitions in the early 1990s, and upgrading to color and 3D transition effects as later versions of the Video Toaster were released. In addition, the company developed LightWave 3D, a 3D modeling, rendering, and animation system, which has been used extensively in television and film, with early adoption by the television series Babylon 5, which eschewed models for space scenes, and was 100% CGI from the first episode using the NewTek software. The fame of Video Toaster extended beyond the product; the company's founder Tim Jenison and its Vice President Paul Montgomery also were presented as new types of entrepreneurs running a new and different kind of company. Jenison and Montgomery eventually split, with Montgomery leaving to help form a new company called Play, Inc., which ceased operations after Montgomery's untimely death. In 1997 the company moved to San Antonio, Texas, U.S.A. In 2005, NewTek founder, Tim Jenison was inducted into the San Antonio Inventors Hall of Fame as the "Father of Desktop Video". In April 2019, NewTek was acquired by Vizrt for an undisclosed sum. Notable personalities Tim Jenison The founder of the company, Tim Jenison is well recognized in the Amiga Computer community and at SIGGRAPH. In addition to his efforts at NewTek, a personal interest in the artwork and skill of the Dutch painter Johannes Vermeer led to an investigation of the artist's technique, and a feature film documentary entitled Tim's Vermeer. The movie was released in early 2014, and was directed by Teller, and executive produced by Penn & Teller, with distribution by Sony Pictures Classics. Kiki Stockhammer A spokesperson for the NewTek products, Kiki Stockhammer provided many demonstration images that were used in introductory videos, as well as providing her silhouette for a number of transition effects included with the Video Toaster. References External links Behind the scenes at NewTek Amiga Hardware Database - Descriptions and photos of NewTek's Amiga products. Software companies based in Kansas Manufacturing companies based in San Antonio Amiga Film and video technology Video equipment manufacturers Software companies of the United States Software companies established in 1985 Manufacturing companies established in 1985 American companies established in 1985 1985 establishments in Kansas
20556944
https://en.wikipedia.org/wiki/Tor%20%28network%29
Tor (network)
Tor, short for The Onion Router, is free and open-source software for enabling anonymous communication. It directs Internet traffic through a free, worldwide, volunteer overlay network, consisting of more than six thousand relays, for concealing a user's location and usage from anyone conducting network surveillance or traffic analysis. Using Tor makes it more difficult to trace the Internet activity to the user. Tor's intended use is to protect the personal privacy of its users, as well as their freedom and ability to conduct confidential communication by keeping their Internet activities unmonitored. History The core principle of Tor, Onion routing, was developed in the mid-1990s by United States Naval Research Laboratory employees, mathematician Paul Syverson, and computer scientists Michael G. Reed and David Goldschlag, to protect U.S. intelligence communications online. Onion routing is implemented by encryption in the application layer of the communication protocol stack, nested like the layers of an onion. The alpha version of Tor, developed by Syverson and computer scientists Roger Dingledine and Nick Mathewson and then called The Onion Routing project (which later simply became "Tor", as an acronym for the former name), was launched on 20 September 2002. The first public release occurred a year later. In 2004, the Naval Research Laboratory released the code for Tor under a free license, and the Electronic Frontier Foundation (EFF) began funding Dingledine and Mathewson to continue its development. In 2006, Dingledine, Mathewson, and five others founded The Tor Project, a Massachusetts-based 501(c)(3) research-education nonprofit organization responsible for maintaining Tor. The EFF acted as The Tor Project's fiscal sponsor in its early years, and early financial supporters of The Tor Project included the U.S. Bureau of Democracy, Human Rights, and Labor and International Broadcasting Bureau, Internews, Human Rights Watch, the University of Cambridge, Google, and Netherlands-based Stichting NLnet. Over the course of its existence, various Tor attacks and weaknesses have been discovered and occasionally used. Attacks against Tor are an active area of academic research which is welcomed by the Tor Project itself. Usage Tor enables its users to surf the Internet, chat and send instant messages anonymously, and is used by a wide variety of people for both licit and illicit purposes. Tor has, for example, been used by criminal enterprises, hacktivism groups, and law enforcement agencies at cross purposes, sometimes simultaneously; likewise, agencies within the U.S. government variously fund Tor (the U.S. State Department, the National Science Foundation, and – through the Broadcasting Board of Governors, which itself partially funded Tor until October 2012 – Radio Free Asia) and seek to subvert it. Tor is not meant to completely solve the issue of anonymity on the web. Tor is not designed to completely erase tracking but instead to reduce the likelihood for sites to trace actions and data back to the user. Tor is also used for illegal activities. These can include privacy protection or censorship circumvention, as well as distribution of child abuse content, drug sales, or malware distribution. According to one estimate, "overall, on an average country/day, ∼6.7% of Tor network users connect to Onion/Hidden Services that are disproportionately used for illicit purposes." Tor has been described by The Economist, in relation to Bitcoin and Silk Road, as being "a dark corner of the web". 
It has been targeted by the American National Security Agency and the British GCHQ signals intelligence agencies, albeit with marginal success, and more successfully by the British National Crime Agency in its Operation Notarise. At the same time, GCHQ has been using a tool named "Shadowcat" for "end-to-end encrypted access to VPS over SSH using the Tor network". Tor can be used for anonymous defamation, unauthorized news leaks of sensitive information, copyright infringement, distribution of illegal sexual content, selling controlled substances, weapons, and stolen credit card numbers, money laundering, bank fraud, credit card fraud, identity theft and the exchange of counterfeit currency; the black market utilizes the Tor infrastructure, at least in part, in conjunction with Bitcoin. It has also been used to brick IoT devices. In its complaint against Ross William Ulbricht of Silk Road, the US Federal Bureau of Investigation acknowledged that Tor has "known legitimate uses". According to CNET, Tor's anonymity function is "endorsed by the Electronic Frontier Foundation (EFF) and other civil liberties groups as a method for whistleblowers and human rights workers to communicate with journalists". EFF's Surveillance Self-Defense guide includes a description of where Tor fits in a larger strategy for protecting privacy and anonymity. In 2014, the EFF's Eva Galperin told Businessweek that "Tor’s biggest problem is press. No one hears about that time someone wasn't stalked by their abuser. They hear how somebody got away with downloading child porn." The Tor Project states that Tor users include "normal people" who wish to keep their Internet activities private from websites and advertisers, people concerned about cyber-spying, and users who are evading censorship such as activists, journalists, and military professionals. , Tor had about four million users. According to the Wall Street Journal, in 2012 about 14% of Tor's traffic connected from the United States, with people in "Internet-censoring countries" as its second-largest user base. Tor is increasingly used by victims of domestic violence and the social workers and agencies that assist them, even though shelter workers may or may not have had professional training on cybersecurity matters. Properly deployed, however, it precludes digital stalking, which has increased due to the prevalence of digital media in contemporary online life. Along with SecureDrop, Tor is used by news organizations such as The Guardian, The New Yorker, ProPublica and The Intercept to protect the privacy of whistleblowers. In March 2015, the Parliamentary Office of Science and Technology released a briefing which stated that "There is widespread agreement that banning online anonymity systems altogether is not seen as an acceptable policy option in the U.K." and that "Even if it were, there would be technical challenges." The report further noted that Tor "plays only a minor role in the online viewing and distribution of indecent images of children" (due in part to its inherent latency); its usage by the Internet Watch Foundation, the utility of its onion services for whistleblowers, and its circumvention of the Great Firewall of China were touted. Tor's executive director, Andrew Lewman, also said in August 2014 that agents of the NSA and the GCHQ have anonymously provided Tor with bug reports. 
The Tor Project's FAQ offers supporting reasons for the EFF's endorsement: Operation Tor aims to conceal its users' identities and their online activity from surveillance and traffic analysis by separating identification and routing. It is an implementation of onion routing, which encrypts and then randomly bounces communications through a network of relays run by volunteers around the globe. These onion routers employ encryption in a multi-layered manner (hence the onion metaphor) to ensure perfect forward secrecy between relays, thereby providing users with anonymity in a network location. That anonymity extends to the hosting of censorship-resistant content by Tor's anonymous onion service feature. Furthermore, by keeping some of the entry relays (bridge relays) secret, users can evade Internet censorship that relies upon blocking public Tor relays. Because the IP address of the sender and the recipient are not both in cleartext at any hop along the way, anyone eavesdropping at any point along the communication channel cannot directly identify both ends. Furthermore, to the recipient it appears that the last Tor node (called the exit node), rather than the sender, is the originator of the communication. Originating traffic A Tor user's SOCKS-aware applications can be configured to direct their network traffic through a Tor instance's SOCKS interface, which is listening on TCP port 9050 (for standalone Tor) or 9150 (for Tor Browser bundle) at localhost. Tor periodically creates virtual circuits through the Tor network through which it can multiplex and onion-route that traffic to its destination. Once inside a Tor network, the traffic is sent from router to router along the circuit, ultimately reaching an exit node at which point the cleartext packet is available and is forwarded on to its original destination. Viewed from the destination, the traffic appears to originate at the Tor exit node. Tor's application independence sets it apart from most other anonymity networks: it works at the Transmission Control Protocol (TCP) stream level. Applications whose traffic is commonly anonymized using Tor include Internet Relay Chat (IRC), instant messaging, and World Wide Web browsing. Onion services Tor can also provide anonymity to websites and other servers. Servers configured to receive inbound connections only through Tor are called onion services (formerly, hidden services). Rather than revealing a server's IP address (and thus its network location), an onion service is accessed through its onion address, usually via the Tor Browser. The Tor network understands these addresses by looking up their corresponding public keys and introduction points from a distributed hash table within the network. It can route data to and from onion services, even those hosted behind firewalls or network address translators (NAT), while preserving the anonymity of both parties. Tor is necessary to access these onion services. Onion services were first specified in 2003 and have been deployed on the Tor network since 2004. Other than the database that stores the onion service descriptors, Tor is decentralized by design; there is no direct readable list of all onion services, although a number of onion services catalog publicly known onion addresses. Because onion services route their traffic entirely through the Tor network, connection to an onion service is encrypted end-to-end and not subject to eavesdropping. There are, however, security issues involving Tor onion services. 
For example, services that are reachable through Tor onion services and the public Internet are susceptible to correlation attacks and thus not perfectly hidden. Other pitfalls include poorly configured services (e.g. identifying information included by default in web server error responses), uptime and downtime statistics, intersection attacks, and user error. The open source OnionScan program, written by independent security researcher Sarah Jamie Lewis, comprehensively examines onion services for numerous flaws and vulnerabilities. (Lewis has also pioneered the field of onion dildonics, inasmuch as sex toys can be insecurely connected over the Internet.) Onion services can also be accessed from a standard web browser without client-side connection to the Tor network, using services like Tor2web. Popular sources of dark web .onion links include Pastebin, Twitter, Reddit, and other Internet forums. Nyx status monitor Nyx (formerly ARM) is a command-line status monitor written in Python for Tor. This functions much like top does for system usage, providing real time statistics for: resource usage (bandwidth, CPU, and memory usage) general relaying information (nickname, fingerprint, flags, or/dir/controlports) event log with optional regex filtering and deduplication connections correlated against Tor's consensus data (IP address, connection types, relay details, etc.) torrc configuration file with syntax highlighting and validation Most of Nyx's attributes are configurable through an optional configuration file. It runs on any platform supported by curses including Linux, macOS, and other Unix-like variants. The project began in the summer of 2009, and since 18 July 2010 it has been an official part of the Tor Project. It is free software, available under the GNU General Public License. Weaknesses Like all current low-latency anonymity networks, Tor cannot and does not attempt to protect against monitoring of traffic at the boundaries of the Tor network (i.e., the traffic entering and exiting the network). While Tor does provide protection against traffic analysis, it cannot prevent traffic confirmation (also called end-to-end correlation). In spite of known weaknesses and attacks listed here, a 2009 study revealed Tor and the alternative network system JonDonym (Java Anon Proxy, JAP) are considered more resilient to website fingerprinting techniques than other tunneling protocols. The reason for this is conventional single-hop VPN protocols do not need to reconstruct packet data nearly as much as a multi-hop service like Tor or JonDonym. Website fingerprinting yielded greater than 90% accuracy for identifying HTTP packets on conventional VPN protocols versus Tor which yielded only 2.96% accuracy. However, some protocols like OpenSSH and OpenVPN required a large amount of data before HTTP packets were identified. Researchers from the University of Michigan developed a network scanner allowing identification of 86% of live Tor "bridges" with a single scan. Consensus blocking Like many decentralized systems, Tor relies on a consensus mechanism to periodically update its current operating parameters, which for Tor are network parameters like which nodes are good/bad relays, exits, guards, and how much traffic each can handle. Tor's architecture for deciding the consensus relies on a small number of directory authority nodes voting on current network parameters. Currently, there are ten directory authority nodes, and their health is publicly monitored. 
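Nyx and similar tooling obtain this sort of relay information over the Tor control protocol, typically via the Tor Project's Python library stem; the published consensus itself can also be inspected directly. The following is a minimal, illustrative sketch (not code from Nyx or Tor) that downloads the current consensus from a directory mirror and tallies relay flags. It assumes the stem package is installed and that a directory mirror is reachable.

```python
# Minimal sketch: inspect the current Tor network status consensus.
# Assumption: the "stem" package is installed and a directory mirror is reachable.
from collections import Counter

import stem.descriptor.remote

def summarize_consensus():
    """Download the consensus and tally relay flags such as Guard or Exit."""
    flags = Counter()
    relays = 0
    # get_consensus() yields one router status entry per relay in the consensus.
    for entry in stem.descriptor.remote.get_consensus():
        relays += 1
        flags.update(entry.flags)  # e.g. 'Guard', 'Exit', 'Fast', 'Stable'
    print(f"relays in consensus: {relays}")
    for flag, count in flags.most_common(5):
        print(f"  {flag}: {count}")

if __name__ == "__main__":
    summarize_consensus()
```

A client running its own Tor instance could retrieve the same information over the local control port instead of contacting a directory mirror, which is how status monitors like Nyx operate.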
The IP addresses of the authority nodes are hard-coded into each Tor client. The authority nodes vote every hour to update the consensus, and clients download the most recent consensus on startup. A network congestion attack, such as a DDoS, can prevent the consensus nodes from communicating and thus prevent voting to update the consensus. Eavesdropping Autonomous system (AS) eavesdropping If an autonomous system (AS) exists on both path segments from a client to an entry relay and from an exit relay to the destination, such an AS can statistically correlate traffic on the entry and exit segments of the path and potentially infer the destination with which the client communicated. In 2012, LASTor proposed a method to predict a set of potential ASes on these two segments and then avoid choosing this path during the path selection algorithm on the client side. In this paper, the authors also improve latency by choosing shorter geographical paths between a client and destination. Exit node eavesdropping In September 2007, Dan Egerstad, a Swedish security consultant, revealed he had intercepted usernames and passwords for email accounts by operating and monitoring Tor exit nodes. As Tor cannot encrypt the traffic between an exit node and the target server, any exit node is in a position to capture traffic passing through it that does not use end-to-end encryption such as Secure Sockets Layer (SSL) or Transport Layer Security (TLS). While this may not inherently breach the anonymity of the source, traffic intercepted in this way by self-selected third parties can expose information about the source in either or both of payload and protocol data. Furthermore, Egerstad was circumspect about the possible subversion of Tor by intelligence agencies. In October 2019, a Tor researcher revealed that since at least 2017 more than one hundred highly suspicious relay nodes had been run on an unprecedented scale by an unknown group. It was alleged that this number of servers could pose the risk of a Sybil attack, as it could allow the operator to map Tor users' routes inside the network, increasing the risk of deanonymization. At some point there were about 900 such relay nodes running, and by November 2021 about 600 of them had been purged. There was no evidence of compromise, however. Internal communication attack In October 2011, a research team from ESIEA claimed to have discovered a way to compromise the Tor network by decrypting communication passing over it. The technique they describe requires creating a map of Tor network nodes, controlling one-third of them, and then acquiring their encryption keys and algorithm seeds. Then, using these known keys and seeds, they claim the ability to decrypt two encryption layers out of three. They claim to break the third key by a statistical attack. In order to redirect Tor traffic to the nodes they controlled, they used a denial-of-service attack. A response to this claim was published on the official Tor Blog, stating that these rumors of Tor's compromise were greatly exaggerated. Traffic-analysis attack There are two methods of traffic-analysis attack, passive and active. In the passive traffic-analysis method, the attacker extracts features from the traffic of a specific flow on one side of the network and looks for those features on the other side of the network. 
In the active traffic-analysis method, the attacker alters the timings of the packets of a flow according to a specific pattern and looks for that pattern on the other side of the network; therefore, the attacker can link the flows on one side of the network to those on the other side and break their anonymity. It has been shown that, although timing noise is added to the packets, there are active traffic analysis methods that are robust against such noise. Steven Murdoch and George Danezis from the University of Cambridge presented an article at the 2005 IEEE Symposium on Security and Privacy on traffic-analysis techniques that allow adversaries with only a partial view of the network to infer which nodes are being used to relay the anonymous streams. These techniques greatly reduce the anonymity provided by Tor. Murdoch and Danezis have also shown that otherwise unrelated streams can be linked back to the same initiator. This attack, however, fails to reveal the identity of the original user. Murdoch has been working with and has been funded by Tor since 2006. Tor exit node block Operators of Internet sites have the ability to prevent traffic from Tor exit nodes or to offer reduced functionality for Tor users. For example, it is not generally possible to edit Wikipedia when using Tor or when using an IP address also used by a Tor exit node. The BBC blocks the IP addresses of all known Tor guards and exit nodes from its iPlayer service, although relays and bridges are not blocked. Bad apple attack In March 2011, researchers with the Rocquencourt French Institute for Research in Computer Science and Automation (Institut national de recherche en informatique et en automatique, INRIA) documented an attack that is capable of revealing the IP addresses of BitTorrent users on the Tor network. The "bad apple attack" exploits Tor's design and takes advantage of insecure application use to associate the simultaneous use of a secure application with the IP address of the Tor user in question. One method of attack depends on control of an exit node or hijacking tracker responses, while a secondary attack method is based in part on the statistical exploitation of distributed hash table tracking. According to the study, the results presented in the bad apple attack research paper are based on an attack launched against the Tor network by the authors of the study. The attack targeted six exit nodes, lasted for twenty-three days, and revealed a total of 10,000 IP addresses of active Tor users. This study is significant because it is the first documented attack designed to target P2P file-sharing applications on Tor. BitTorrent may generate as much as 40% of all traffic on Tor. Furthermore, the bad apple attack is effective against insecure use of any application over Tor, not just BitTorrent. Some protocols exposing IP addresses Researchers from the French Institute for Research in Computer Science and Automation (INRIA) showed that the Tor dissimulation technique in BitTorrent can be bypassed by attackers controlling a Tor exit node. The study was conducted by monitoring six exit nodes for a period of twenty-three days. Researchers used three attack vectors: Inspection of BitTorrent control messages Tracker announces and extension protocol handshakes may optionally contain a client IP address. Analysis of collected data revealed that 35% and 33% of messages, respectively, contained addresses of clients. 
Hijacking trackers' responses Due to lack of encryption or authentication in communication between the tracker and peer, typical man-in-the-middle attacks allow attackers to determine peer IP addresses and even verify the distribution of content. Such attacks work when Tor is used only for tracker communication. Exploiting distributed hash tables (DHT) This attack exploits the fact that distributed hash table (DHT) connections through Tor are impossible, so an attacker is able to reveal a target's IP address by looking it up in the DHT even if the target uses Tor to connect to other peers. With this technique, researchers were able to identify other streams initiated by users, whose IP addresses were revealed. Sniper attack Jansen et al., describes a DDoS attack targeted at the Tor node software, as well as defenses against that attack and its variants. The attack works using a colluding client and server, and filling the queues of the exit node until the node runs out of memory, and hence can serve no other (genuine) clients. By attacking a significant proportion of the exit nodes this way, an attacker can degrade the network and increase the chance of targets using nodes controlled by the attacker. Heartbleed bug The Heartbleed OpenSSL bug disrupted the Tor network for several days in April 2014 while private keys were renewed. The Tor Project recommended Tor relay operators and onion service operators revoke and generate fresh keys after patching OpenSSL, but noted Tor relays use two sets of keys and Tor's multi-hop design minimizes the impact of exploiting a single relay. Five hundred eighty-six relays later found to be susceptible to the Heartbleed bug were taken offline as a precautionary measure. Relay early traffic confirmation attack On 30 July 2014, the Tor Project issued the security advisory "relay early traffic confirmation attack" in which the project discovered a group of relays that tried to de-anonymize onion service users and operators. In summary, the attacking onion service directory node changed the headers of cells being relayed tagging them as "relay" or "relay early" cells differently to encode additional information and sent them back to the requesting user/operator. If the user's/operator's guard/entry node was also part of the attacking relays, the attacking relays might be able to capture the IP address of the user/operator along with the onion service information that the user/operator was requesting. The attacking relays were stable enough to be designated as "suitable as hidden service directory" and "suitable as entry guard"; therefore, both the onion service users and the onion services might have used those relays as guards and hidden service directory nodes. The attacking nodes joined the network early in the year on 30 January and the project removed them on 4 July. Although when the attack began was unclear, the project implied that between February and July, onion service users' and operators' IP addresses might be exposed. The project mentioned the following mitigations besides removing the attacking relays from the network: patched relay software to prevent relays from relaying cells with "relay early" headers that were not intended. 
planned update for users' proxy software so that they could inspect if they received "relay early" cells from the relays (as they are not supposed to), along with the settings to connect to just one guard node instead of selecting randomly from 3 to reduce the probability of connecting to an attacking relay recommended that onion services should consider changing their locations reminded users and onion service operators that Tor could not prevent de-anonymization if the attacker controlled or could listen to both ends of the Tor circuit, like in this attack. In November 2014 there was speculation in the aftermath of Operation Onymous, resulting in 17 arrests internationally, that a Tor weakness had been exploited. A representative of Europol was secretive about the method used, saying: "This is something we want to keep for ourselves. The way we do this, we can’t share with the whole world, because we want to do it again and again and again." A BBC source cited a "technical breakthrough" that allowed tracking physical locations of servers, and the initial number of infiltrated sites led to the exploit speculation. Andrew Lewman—a Tor Project representative—downplayed this possibility, suggesting that execution of more traditional police work was more likely. In November 2015 court documents on the matter addressed concerns about security research ethics and the right of not being unreasonably searched as guaranteed by the US Fourth Amendment. Moreover, the documents, along with expert opinions, may also show the connection between the network attack and the law enforcement operation including: the search warrant for an administrator of Silkroad 2.0 indicated that from January 2014 until July, the FBI received information from a "university-based research institute" with the information being "reliable IP addresses for Tor and onion services such as SR2" that led to the identification of "at least another seventeen black markets on Tor" and "approximately 78 IP addresses that accessed a vendor .onion address." One of these IP addresses led to the arrest of the administrator the chronology and nature of the attack fitted well with the operation a senior researcher of International Computer Science Institute, part of University of California, Berkeley, said in an interview that the institute which worked with the FBI was "almost certainly" Carnegie Mellon University (CMU), and this concurred with the Tor Project's assessment and with an earlier analysis of Edward Felten, a computer security professor at Princeton University, about researchers from CMU's CERT/CC being involved In his analysis published on 31 July, besides raising ethical issues, Felten also questioned the fulfillment of CERT/CC's purposes which were to prevent attacks, inform the implementers of vulnerabilities, and eventually inform the public. Because in this case, CERT/CC's staff did the opposite which was to carry out a large-scale long-lasting attack, withhold vulnerability information from the implementers, and withhold the same information from the public. CERT/CC is a non-profit, computer security research organization publicly funded through the US federal government. 
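The traffic-confirmation idea underlying this attack, and the end-to-end correlation attacks that Tor explicitly does not try to prevent, can be illustrated with a deliberately simplified sketch. The code below is not a real attack tool and is not drawn from any published work; it assumes an observer who already holds packet timestamps captured at both ends of a suspected circuit, and it merely measures how similar the two timing patterns are, which is far cruder than the attacks described above.

```python
# Toy illustration of end-to-end timing correlation (not a real attack tool).
# Assumption: the observer has packet timestamps captured at the client side
# and at the destination side of a suspected circuit.
import numpy as np

def interarrival(timestamps):
    """Convert a list of packet timestamps into inter-arrival times."""
    t = np.asarray(sorted(timestamps), dtype=float)
    return np.diff(t)

def timing_similarity(entry_times, exit_times):
    """Pearson correlation of inter-arrival times, truncated to equal length.

    Values near 1.0 suggest the two observation points may be watching the
    same flow; published attacks use far more robust statistics than this.
    """
    a, b = interarrival(entry_times), interarrival(exit_times)
    n = min(len(a), len(b))
    if n < 2:
        return 0.0
    return float(np.corrcoef(a[:n], b[:n])[0, 1])

# Made-up example: the exit-side flow mirrors the entry-side flow shifted by
# a network delay, so the timing correlation is high.
entry = [0.00, 0.10, 0.35, 0.40, 0.90, 1.00]
exit_ = [0.20, 0.30, 0.55, 0.60, 1.10, 1.20]
print(round(timing_similarity(entry, exit_), 3))
```

Real correlation and watermarking attacks add active manipulation of timings and much richer statistics, but the basic signal being exploited is the same.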
Mouse fingerprinting In March 2016, a security researcher based in Barcelona demonstrated laboratory techniques using time measurement via JavaScript at the 1-millisecond level which could potentially identify and correlate a user's unique mouse movements, provided the user has visited the same "fingerprinting" website with both the Tor browser and a regular browser. This proof of concept exploits the "time measurement via JavaScript" issue, which had been an open ticket on the Tor Project for ten months. Circuit fingerprinting attack In 2015, the administrators of Agora, a darknet market, announced they were taking the site offline in response to a recently discovered security vulnerability in Tor. They did not say what the vulnerability was, but Wired speculated it was the "Circuit Fingerprinting Attack" presented at the Usenix security conference. Volume information A study showed "anonymization solutions protect only partially against target selection that may lead to efficient surveillance" as they typically "do not hide the volume information necessary to do target selection". Implementations The main implementation of Tor is written primarily in C, along with Python, JavaScript, and several other programming languages, and consists of 505,034 lines of code. Tor Browser The Tor Browser is the flagship product of the Tor Project. It was created as the Tor Browser Bundle by Steven J. Murdoch and announced in January 2008. The Tor Browser consists of a modified Mozilla Firefox ESR web browser, the TorButton, TorLauncher, NoScript, and HTTPS Everywhere Firefox extensions and the Tor proxy. Users can run the Tor Browser from removable media. It can operate under Microsoft Windows, macOS, and Linux. The default search engine is DuckDuckGo (prior to version 4.5, Startpage.com was the default). The Tor Browser automatically starts Tor background processes and routes traffic through the Tor network. Upon termination of a session, the browser deletes privacy-sensitive data such as HTTP cookies and the browsing history, in the manner of a private browsing mode. This is effective in reducing web tracking and canvas fingerprinting, and it also helps to prevent a filter bubble. To allow downloads from places where accessing the Tor Project URL may be risky or blocked, a GitHub repository is maintained with links for releases hosted in other domains. Firefox/Tor browser attack In 2011, the Dutch authority investigating child pornography discovered the IP address of a Tor onion service site called "Pedoboard" from an unprotected administrator's account and gave it to the FBI, who traced it to Aaron McGrath. After a year of surveillance, the FBI launched "Operation Torpedo", which resulted in McGrath's arrest and allowed them to install their Network Investigative Technique (NIT) malware on the servers for retrieving information from the users of the three onion service sites that McGrath controlled. The technique, which exploited a Firefox/Tor browser vulnerability that had been patched and targeted users who had not updated, used a Flash application that pinged a user's IP address directly back to an FBI server, and resulted in revealing at least 25 US users as well as numerous users from other countries. McGrath was sentenced to 20 years in prison in early 2014, with at least 18 other users, including a former Acting HHS Cyber Security Director, being sentenced in subsequent cases. 
In August 2013 it was discovered that the Firefox browsers in many older versions of the Tor Browser Bundle were vulnerable to a JavaScript-deployed shellcode attack, as NoScript was not enabled by default. Attackers used this vulnerability to extract users' MAC and IP addresses and Windows computer names. News reports linked this to a Federal Bureau of Investigation (FBI) operation targeting Freedom Hosting's owner, Eric Eoin Marques, who was arrested on a provisional extradition warrant issued by a United States court on 29 July. The FBI extradited Marques from Ireland to the state of Maryland on four charges: distributing, conspiring to distribute, and advertising child pornography, as well as aiding and abetting the advertising of child pornography. The warrant alleged that Marques was "the largest facilitator of child porn on the planet". The FBI acknowledged the attack in a 12 September 2013 court filing in Dublin; further technical details from a training presentation leaked by Edward Snowden revealed the codename for the exploit as "EgotisticalGiraffe". Tor Messenger On 29 October 2015, the Tor Project released Tor Messenger Beta, an instant messaging program based on Instantbird with Tor and OTR built in and used by default. Like Pidgin and Adium, Tor Messenger supports multiple different instant messaging protocols; however, it accomplishes this without relying on libpurple, implementing all chat protocols in the memory-safe language JavaScript instead. In April 2018, the Tor Project shut down the Tor Messenger project because the developers of Instantbird discontinued support for their own software. The Tor Messenger developers explained that overcoming any vulnerabilities discovered in the future would be impossible due to the project relying on outdated software dependencies. Third-party applications The Vuze (formerly Azureus) BitTorrent client, Bitmessage anonymous messaging system, and TorChat instant messenger include Tor support. OnionShare allows users to share files using Tor. The Guardian Project is actively developing a free and open-source suite of applications and firmware for the Android operating system to improve the security of mobile communications. The applications include the ChatSecure instant messaging client, Orbot Tor implementation, Orweb (discontinued) privacy-enhanced mobile browser, Orfox, the mobile counterpart of the Tor Browser, ProxyMob Firefox add-on, and ObscuraCam. Onion Browser is an open-source, privacy-enhancing web browser for iOS that uses Tor. It is available in the iOS App Store, and source code is available on GitHub. Brave added support for Tor in its desktop browser's private-browsing mode. Users can switch to Tor-enabled browsing by clicking on the hamburger menu in the top right corner of the browser. Security-focused operating systems Several security-focused operating systems make extensive use of Tor. These include the Tails live operating system, Hardened Linux From Scratch, Incognito, Liberté Linux, Qubes OS, Subgraph, Tor-ramdisk, and Whonix. Reception, impact, and legislation Tor has been praised for providing privacy and anonymity to vulnerable Internet users such as political activists fearing surveillance and arrest, ordinary web users seeking to circumvent censorship, and people who have been threatened with violence or abuse by stalkers. The U.S. 
National Security Agency (NSA) has called Tor "the king of high-secure, low-latency Internet anonymity", and BusinessWeek magazine has described it as "perhaps the most effective means of defeating the online surveillance efforts of intelligence agencies around the world". Other media have described Tor as "a sophisticated privacy tool", "easy to use" and "so secure that even the world's most sophisticated electronic spies haven't figured out how to crack it". Advocates for Tor say it supports freedom of expression, including in countries where the Internet is censored, by protecting the privacy and anonymity of users. The mathematical underpinnings of Tor lead it to be characterized as acting "like a piece of infrastructure, and governments naturally fall into paying for infrastructure they want to use". The project was originally developed on behalf of the U.S. intelligence community and continues to receive U.S. government funding, and has been criticized as "more resembl[ing] a spook project than a tool designed by a culture that values accountability or transparency". , 80% of The Tor Project's $2M annual budget came from the United States government, with the U.S. State Department, the Broadcasting Board of Governors, and the National Science Foundation as major contributors, aiming "to aid democracy advocates in authoritarian states". Other public sources of funding include DARPA, the U.S. Naval Research Laboratory, and the Government of Sweden. Some have proposed that the government values Tor's commitment to free speech, and uses the darknet to gather intelligence. Tor also receives funding from NGOs including Human Rights Watch, and private sponsors including Reddit and Google. Dingledine said that the United States Department of Defense funds are more similar to a research grant than a procurement contract. Tor executive director Andrew Lewman said that even though it accepts funds from the U.S. federal government, the Tor service did not collaborate with the NSA to reveal identities of users. Critics say that Tor is not as secure as it claims, pointing to U.S. law enforcement's investigations and shutdowns of Tor-using sites such as web-hosting company Freedom Hosting and online marketplace Silk Road. In October 2013, after analyzing documents leaked by Edward Snowden, The Guardian reported that the NSA had repeatedly tried to crack Tor and had failed to break its core security, although it had had some success attacking the computers of individual Tor users. The Guardian also published a 2012 NSA classified slide deck, entitled "Tor Stinks", which said: "We will never be able to de-anonymize all Tor users all the time", but "with manual analysis we can de-anonymize a very small fraction of Tor users". When Tor users are arrested, it is typically due to human error, not to the core technology being hacked or cracked. On 7 November 2014, for example, a joint operation by the FBI, ICE Homeland Security investigations and European Law enforcement agencies led to 17 arrests and the seizure of 27 sites containing 400 pages. A late 2014 report by Der Spiegel using a new cache of Snowden leaks revealed, however, that the NSA deemed Tor on its own as a "major threat" to its mission, and when used in conjunction with other privacy tools such as OTR, Cspace, ZRTP, RedPhone, Tails, and TrueCrypt was ranked as "catastrophic," leading to a "near-total loss/lack of insight to target communications, presence..." 
2011 In March 2011, The Tor Project received the Free Software Foundation's 2010 Award for Projects of Social Benefit. The citation read, "Using free software, Tor has enabled roughly 36 million people around the world to experience freedom of access and expression on the Internet while keeping them in control of their privacy and anonymity. Its network has proved pivotal in dissident movements in both Iran and more recently Egypt." 2012 In 2012, Foreign Policy magazine named Dingledine, Mathewson, and Syverson among its Top 100 Global Thinkers "for making the web safe for whistleblowers". 2013 In 2013, Jacob Appelbaum described Tor as a "part of an ecosystem of software that helps people regain and reclaim their autonomy. It helps to enable people to have agency of all kinds; it helps others to help each other and it helps you to help yourself. It runs, it is open and it is supported by a large community spread across all walks of life." In June 2013, whistleblower Edward Snowden used Tor to send information about PRISM to The Washington Post and The Guardian. 2014 In 2014, the Russian government offered a $111,000 contract to "study the possibility of obtaining technical information about users and users' equipment on the Tor anonymous network". In September 2014, in response to reports that Comcast had been discouraging customers from using the Tor Browser, Comcast issued a public statement that "We have no policy against Tor, or any other browser or software." In October 2014, The Tor Project hired the public relations firm Thomson Communications to improve its public image (particularly regarding the terms "Dark Net" and "hidden services," which are widely viewed as being problematic) and to educate journalists about the technical aspects of Tor. 2015 In June 2015, the special rapporteur from the United Nations' Office of the High Commissioner for Human Rights specifically mentioned Tor in the context of the debate in the U.S. about allowing so-called backdoors in encryption programs for law enforcement purposes in an interview for The Washington Post. In July 2015, the Tor Project announced an alliance with the Library Freedom Project to establish exit nodes in public libraries. The pilot program, which established a middle relay running on the excess bandwidth afforded by the Kilton Library in Lebanon, New Hampshire, making it the first library in the U.S. to host a Tor node, was briefly put on hold when the local city manager and deputy sheriff voiced concerns over the cost of defending search warrants for information passed through the Tor exit node. Although the DHS had alerted New Hampshire authorities to the fact that Tor is sometimes used by criminals, the Lebanon Deputy Police Chief and the Deputy City Manager averred that no pressure to strong-arm the library was applied, and the service was re-established on 15 September 2015. U.S. Rep. Zoe Lofgren (D-Calif) released a letter on 10 December 2015, in which she asked the DHS to clarify its procedures, stating that “While the Kilton Public Library’s board ultimately voted to restore their Tor relay, I am no less disturbed by the possibility that DHS employees are pressuring or persuading public and private entities to discontinue or degrade services that protect the privacy and anonymity of U.S. 
citizens.” In a 2016 interview, Kilton Library IT Manager Chuck McAndrew stressed the importance of getting libraries involved with Tor: "Librarians have always cared deeply about protecting privacy, intellectual freedom, and access to information (the freedom to read). Surveillance has a very well-documented chilling effect on intellectual freedom. It is the job of librarians to remove barriers to information." The second library to host a Tor node was the Las Naves Public Library in Valencia, Spain, implemented in the first months of 2016. In August 2015, an IBM security research group, called "X-Force", put out a quarterly report that advised companies to block Tor on security grounds, citing a "steady increase" in attacks from Tor exit nodes as well as botnet traffic. In September 2015, Luke Millanta created OnionView, a web service that plots the location of active Tor relay nodes onto an interactive map of the world. The project's purpose was to detail the network's size and escalating growth rate. In December 2015, Daniel Ellsberg (of the Pentagon Papers), Cory Doctorow (of Boing Boing), Edward Snowden, and artist-activist Molly Crabapple, amongst others, announced their support of Tor. 2016 In March 2016, New Hampshire state representative Keith Ammon introduced a bill allowing public libraries to run privacy software. The bill specifically referenced Tor. The text was crafted with extensive input from Alison Macrina, the director of the Library Freedom Project. The bill was passed by the House 268–62. Also in March 2016, the first Tor node, specifically a middle relay, was established at a library in Canada, the Graduate Resource Centre (GRC) in the Faculty of Information and Media Studies (FIMS) at the University of Western Ontario. Given that the running of a Tor exit node is an unsettled area of Canadian law, and that in general institutions are more capable than individuals to cope with legal pressures, Alison Macrina of the Library Freedom Project has opined that in some ways she would like to see intelligence agencies and law enforcement attempt to intervene in the event that an exit node were established. On 16 May 2016, CNN reported on the case of core Tor developer isis agora lovecruft, who had fled to Germany under the threat of a subpoena by the FBI during the Thanksgiving break of the previous year. The Electronic Frontier Foundation legally represented lovecruft. On 2 December 2016, The New Yorker reported on burgeoning digital privacy and security workshops in the San Francisco Bay Area, particularly at the hackerspace Noisebridge, in the wake of the 2016 United States presidential election; downloading the Tor browser was mentioned. Also, in December 2016, Turkey has blocked the usage of Tor, together with ten of the most used VPN services in Turkey, which were popular ways of accessing banned social media sites and services. Tor (and Bitcoin) was fundamental to the operation of the darkweb marketplace AlphaBay, which was taken down in an international law enforcement operation in July 2017. Despite federal claims that Tor would not shield a user, however, elementary operational security errors outside of the ambit of the Tor network led to the site's downfall. 2017 In June 2017 the Democratic Socialists of America recommended intermittent Tor usage. 
And in August 2017, according to reportage cybersecurity firms which specialize in monitoring and researching the dark web (which rely on Tor as its infrastructure) on behalf of banks and retailers routinely share their findings with the FBI and with other law enforcement agencies "when possible and necessary" regarding illegal content. The Russian-speaking underground offering a crime-as-a-service model is regarded as being particularly robust. 2018 In June 2018, Venezuela blocked access to the Tor network. The block affected both direct connections to the network and connections being made via bridge relays. On 20 June 2018, Bavarian police raided the homes of the board members of the non-profit Zwiebelfreunde, a member of torservers.net, which handles the European financial transactions of riseup.net in connection with a blog post there which apparently promised violence against the upcoming Alternative for Germany convention. Tor came out strongly against the raid against its support organization, which provides legal and financial aid for the setting up and maintenance of high-speed relays and exit nodes. According to Torservers.net, on 23 August 2018 the German court at Landgericht München ruled that the raid and seizures were illegal. The hardware and documentation seized had been kept under seal, and purportedly were neither analyzed nor evaluated by the Bavarian police. Since October 2018, Chinese online communities within Tor have begun to dwindle due to increased efforts to stop them by the Chinese government. 2019 In November 2019, Edward Snowden called for a full, unabridged simplified Chinese translation of his autobiography, Permanent Record, as the Chinese publisher had violated their agreement by expurgating all mentions of Tor and other matters deemed politically sensitive by the Communist Party of China. 2021 On 8 December 2021, the Russian government agency Roskomnadzor announced it has banned Tor and six VPN services for failing to abide by the Russian Internet blacklist. Tor's main website as well as several bridges were blocked by Russian ISPs beginning on 1 December 2021. Improved security Tor responded to earlier vulnerabilities listed above by patching them and improving security. In one way or another, human (user) errors can lead to detection. The Tor Project website provides the best practices (instructions) on how to properly use the Tor browser. When improperly used, Tor is not secure. For example, Tor warns its users that not all traffic is protected; only the traffic routed through the Tor browser is protected. Users are also warned to use HTTPS versions of websites, not to torrent with Tor, not to enable browser plugins, not to open documents downloaded through Tor while online, and to use safe bridges. Users are also warned that they cannot provide their name or other revealing information in web forums over Tor and stay anonymous at the same time. Despite intelligence agencies' claims that 80% of Tor users would be de-anonymized within 6 months in the year 2013, that has still not happened. In fact, as late as September 2016, the FBI could not locate, de-anonymize and identify the Tor user who hacked into the email account of a staffer on Hillary Clinton's email server. The best tactic of law enforcement agencies to de-anonymize users appears to remain with Tor-relay adversaries running poisoned nodes, as well as counting on the users themselves using the Tor browser improperly. 
For example, downloading a video through the Tor browser and then opening the same file on an unprotected hard drive while online can make the users' real IP addresses available to authorities. Odds of detection When properly used, odds of being de-anonymized through Tor are said to be extremely low. Tor project's co-founder Nick Mathewson recently explained that the problem of "Tor-relay adversaries" running poisoned nodes means that a theoretical adversary of this kind is not the network's greatest threat: Tor does not provide protection against end-to-end timing attacks: if an attacker can watch the traffic coming out of the target computer, and also the traffic arriving at the target's chosen destination (e.g. a server hosting a .onion site), that attacker can use statistical analysis to discover that they are part of the same circuit. Levels of security Depending on individual user needs, Tor browser offers three levels of security located under the Security Level (the small gray shield at the top-right of the screen) icon > Advanced Security Settings. In addition to encrypting the data, including constantly changing an IP address through a virtual circuit comprising successive, randomly selected Tor relays, several other layers of security are at a user's disposal: Standard (default) – at this security level, all browser features are enabled. This level provides the most usable experience, and the lowest level of security. Safer – at this security level, the following changes apply: JavaScript is disabled on non-HTTPS sites. On sites where JavaScript is enabled, performance optimizations are disabled. Scripts on some sites may run slower. Some mechanisms of displaying math equations are disabled. Audio and video (HTML5 media), and WebGL are click-to-play. Safest – at this security level, these additional changes apply: JavaScript is disabled by default on all sites. Some fonts, icons, math symbols, and images are disabled. Audio and video (HTML5 media), and WebGL are click-to-play. 
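As described above under "Originating traffic", applications gain these protections only when their traffic actually reaches Tor's local SOCKS interface; anything sent outside it bypasses Tor entirely. The following is a minimal sketch of routing an ordinary HTTP request through a locally running Tor client; it assumes Tor (or the Tor Browser) is listening for SOCKS connections on port 9050 or 9150 and that the requests and PySocks packages are installed.

```python
# Minimal sketch: send an HTTP request through a locally running Tor client.
# Assumptions: Tor exposes a SOCKS listener on 127.0.0.1:9050 (9150 for the
# Tor Browser bundle) and requests[socks]/PySocks is installed.
import requests

TOR_SOCKS = "socks5h://127.0.0.1:9050"  # "socks5h" resolves DNS through Tor

def fetch_via_tor(url):
    """Fetch a URL with both the TCP stream and DNS resolution routed over Tor."""
    session = requests.Session()
    session.proxies = {"http": TOR_SOCKS, "https": TOR_SOCKS}
    return session.get(url, timeout=60)

if __name__ == "__main__":
    # check.torproject.org reports whether a request arrived via the Tor network.
    response = fetch_via_tor("https://check.torproject.org/")
    print(response.status_code)
```

The "socks5h" scheme matters here: it causes DNS resolution to happen inside Tor as well, so the name lookup itself does not leak the destination to the local network.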
See also .onion Anonymous P2P Anonymous web browsing Briar: messaging app on Tor network Crypto-anarchism Darknet Dark web Deep web Freedom of information Freenet GNUnet I2P Internet censorship Internet censorship circumvention Internet privacy Privoxy Proxy server Psiphon Tor2web Tor Phone torservers.net Footnotes References External links Anonymity Bibliography Old website Archived: Official List of mirror websites Animated introduction Tor: Hidden Services and Deanonymisation presentation at the 31st Chaos Computer Conference TorFlow, a dynamic visualization of data flowing over the Tor network Tor onion services: more useful than you think in a 2016 presentation at the 32nd Annual Chaos Communication Congress A core Tor developer lectures at the Radboud University Nijmegen in The Netherlands on anonymity systems in 2016 A technical presentation given at the University of Waterloo in Canada: Tor's Circuit-Layer Cryptography: Attacks, Hacks, and Improvements A Presentation at the March 2017 BSides Vancouver Conference on security practices on Tor's hidden services given by Sarah Jamie Lewis Anonymity networks Application layer protocols Hash based data structures File sharing Cross-platform free software Cross-platform software Cryptographic software Cryptographic protocols Cryptography Dark web Free network-related software Free routing software Free software programmed in C Free software programmed in Rust Internet privacy software Internet protocols Internet security Internet Standards Proxy servers Secure communication Software using the BSD license . 2002 software Internet properties established in 2002 Computer networking Overlay networks Onion routing Key-based routing Mix networks
51795367
https://en.wikipedia.org/wiki/Nidhogg%202
Nidhogg 2
Nidhogg 2 is a fighting game and sequel to Nidhogg by indie developer Messhof. It was released for Microsoft Windows, macOS and PlayStation 4 in 2017. An Xbox One version was released in July 2018 and the Nintendo Switch version was released in November 2018. The game received generally positive reviews from critics upon release. Gameplay Nidhogg 2 is a fighting game in which two players duel against each other. The player has to reach the end of their opponent's side first to win. Players have a variety of moves, including sliding and leaping. They can deflect their opponents' attacks. New weapons, such as daggers, throwing knives and bows, are introduced, along with a character creator. Development The title was announced in September 2016 and demonstrated at the 2016 TwitchCon and 2017 Electronic Entertainment Expo Indie Megabooth. It was released on August 15, 2017, on macOS, PlayStation 4, and Microsoft Windows platforms. A Nintendo Switch version was released on November 22, 2018. In contrast to the original game's minimalistic art style, the game's creator Mark Essen and artist Toby Dixon decided to use a more detailed graphical style with a higher resolution so as to reflect the game's increased depth. Another reason for the change is that Essen considered Nidhogg 2 a "spectator game"; the art style gave spectators more to look at while watching the gameplay. Reception The game received "generally favorable" reviews, according to review aggregator Metacritic. GamesMaster said it was "An excellent sword-fighting game that at times has trouble remembering its brilliant roots." Game Informer said, "Though it doesn’t add much for players looking to play around with its improvements solo, Nidhogg 2 adds layers of depth to a simple formula without breaking what made it so appealing in the first place." Destructoid said, "It has expanded on the wonderful mechanics of the original and has one of the best soundtracks in recent memory. There isn't much content here for the solo player, but if you've got friends coming over for some friendly competition, the night would not be complete without Nidhogg 2." PlayStation Official Magazine – UK said it had "wicked local multiplayer appeal", while GamesTM said it was "Fast, intense, but flawed". The game was nominated for "Best Fighting Game" and "Best Multiplayer" in IGN's Best of 2017 Awards. Accolades See also List of GameMaker Studio games References Further reading External links 2017 video games Fighting games Indie video games MacOS games PlayStation 4 games Windows games Multiplayer and single-player video games Video games developed in the United States Nintendo Switch games Video game sequels Xbox One games GameMaker Studio games
18311349
https://en.wikipedia.org/wiki/Royal%20Signals%20trades
Royal Signals trades
The Royal Signals trades are the employment specialisations of the Royal Corps of Signals in the British Army. Every soldier in the Corps is trained both as a field soldier and a tradesman. There are currently seven different trades, each of which is open to both men and women: Communication Systems Operator: an expert in military radio communications. Communication Systems Engineer: an expert in data communications and computer networks. Royal Signals Electrician: an expert in maintaining and repairing generators and providing electrical power. Driver Lineman: an expert in driving, laying line and installing cabling. Installation Technician: an expert in installing and repairing fibreoptics and telephone systems. Electronic Warfare Systems Operator: an expert in intercepting and jamming enemy communications. Technical Supply Specialist: an expert in managing and accounting for communications equipment. Initial training common to all trades Every tradesman trains first as a soldier at the Army Foundation College at Harrogate or the Army Training Regiment at Winchester or at the Army Training Centre Pirbright, in Surrey. Recruits complete a 14-week course which teaches basic military skills such as military foot drill, how to handle and fire a weapon, how to live and work outdoors and how to tackle an assault course. In addition they develop their stamina and fitness. On completing his or her initial training every soldier then moves to 11th Signal Regiment at Blandford Camp in Dorset to commence their trade training. Trade skills and training Communication Systems Operator Former Data Telegraphists and Radio Operators Communication Systems Operators form the largest trade in the Royal Signals and are trained to operate secure digital radio systems, satellite communications and wide-area computer networks. Their course at the Royal School of Signals lasts 28 weeks and covers the following disciplines: Area communications systems. Mobile multi-channel microwave radio relay. Civilian and military satellite communications. Public switched telephone networks. Information systems training. Keyboard skills and elements of the European Computer Driving Licence. Driving cars (with and without trailers) and, in some cases, Large Goods Vehicles. At the rank of sergeant selected Communication Systems Operator go on to become Yeomen of Signals. Communication Systems Engineer Communication Systems Engineers are the technical experts of the Royal Signals. They install, maintain and repair the British Army's battlefield communication networks and information systems. Their course at the Royal School of Signals lasts 36 weeks. Their training includes the following elements: Basic computer software systems: how to install computer workstations into a systems network and then maintain, engineer and control these systems. Information systems: computer systems training and other skills required to complete elements of the European Computer Driving Licence to Level 2 standard. Radio: HF to UHF and Bowman radio equipment, satellite and theatre-wide area digital communications networks. Teleconferencing: operating video teleconferencing equipment and digital telephone exchanges. Service management: providing help-desk support and troubleshooting. Driving: cars (with and without trailers). At the rank of corporal Communication Systems Engineers can be highlighted for potential supervisory roles. These roles are Foreman of Signals or Foreman of Signals (Information Systems). 
With the merging of the two trade groups, Systems Engineering Technicians and Information Systems Engineers these supervisory roles are still being scrutinized to match the new trade group. At present the Foreman of Signals deals with the technical aspect of Squadron life, working with the squadron technical workshops, dealing with 1st and 2nd line inspections and holding and maintaining the sqn master works register and technical inventory. The Foreman of Signals (Information Systems) deals with the information systems aspect of life within a squadron, arranging for information systems courses relevant to the Squadron assets, network management and the deployment and tracking of squadron Information system equipments. Whilst both supervisory trade groups act independently, there is a need for the two to interact and exchange information on a regular basis. Royal Signals Electrician Royal Signals Electricians install, maintain and repair field-distribution power supplies and lighting. They are responsible for the mechanical and electrical repair of the Army's field generator systems. Their course at the Royal School of Signals lasts 25 weeks. It covers the following disciplines: Electrical engineering theory and practice: the skills required to repair and maintain power distribution equipment, generator and battery systems. Safety practices and procedures: preventing electrocution. Driving: cars and large goods vehicles with or without trailers. Driver Lineman Linemen drive, maintain and service vehicles from cars to Large Goods Vehicles. Their role includes the movement of hazardous materials, constructing field cable routes and laying fibre-optic cabling. Their course at the Royal School of Signals lasts 6 weeks. It covers the following disciplines: Installing and testing different types of field communication cables and telephones. Antenna rigging skills: working on high masts in difficult conditions. First aid and safety: the correct and safe way to wear a harness and climb telegraph poles. Computer skills: elements of the European Computer Driving Licence. The final stage of training for Driver Linemen is 14 weeks at the Defence School of Transport at Leconfield, East Yorkshire, learning how to drive cars and Large Goods Vehicles both with and without trailers. Installation Technician Installation Technicians install, maintain and repair the Army's telephone systems and fibre-optic networks, including cable infrastructures, local area networks, closed circuit television and video conferencing systems. Their course at the Royal School of Signals lasts 40 weeks and covers the following disciplines: Repairing, installing and maintaining telephone networks. Copper and fibre-optic cabling skills. Working at heights Elements of the European Computer Driving Licence Driving cars with and without trailers Electronic Warfare Systems Operator Electronic Warfare Systems Operators are responsible for intercepting and disrupting enemy radio transmissions. They deploy alongside Intelligence Corps linguists, and some work with bomb disposal teams. They train alongside Communication Systems Operators on a 23-week course at the Royal School of Signals, followed by a five-week aptitude course and a 17-week Communications Exploitation course at the Defence College of Intelligence, Chicksands in Bedfordshire. Their training covers the following disciplines: Operating communications equipment. Learning to use HF, VHF, UHF and SHF radio equipment. Computer skills. 
Keyboard skills and completion of parts of the European Computer Driving Licence. Special message handling and intercept skills. Driving cars (with and without trailers) and Large Goods Vehicles. Technical Supply Specialist Technical Supply Specialists are responsible for the storage and distribution of technical supplies, both on base and when deployed on operations. Managing technical stores is the core responsibility of this trade, but Supply Specialists must have a thorough understanding of the communications equipment used by Royal Signals units. Their course at the Royal School of Signals lasts 13 weeks and covers the following disciplines: Manual accounting systems. Computer-based accounting systems. Driving cars and military vehicles including Large Goods Vehicles. Elements of the European Computer Driving Licence. Supervisory trades Staff sergeants and warrant officers work in one of five supervisory rosters: Yeoman of Signals (YofS) Yeoman of Signals (Electronic Warfare) Foreman of Signals (FofS) Foreman of Signals (Information Systems) Regimental Duty Candidates for YofS and FofS are selected from the Operator and Technician trades for training to first degree and Honours degree respectively. Both are obtained through the Royal School of Signals Blandford whilst being validated by Bournemouth University. Subsequent employment After basic and trade training most Royal Signals tradesmen are posted to the Field Army as Class 3 trained soldiers in the rank of signaller. Communication Systems Engineers and Electronic Warfare Operators, however, leave training as lance corporals. After a year's experience all tradesmen become eligible for upgrading to Class 2 and a pay rise. Throughout their careers tradesmen attend further training courses (including upgrading to Class 1). Promotion is based on experience, ability and merit. Depending on their trade, upon reaching the rank of sergeant, soldiers may apply to join one of the supervisory rosters, which brings extra responsibility and qualifications. Alternatively, soldiers from any trade may choose to follow a career path at Regimental Duty, in which they specialise in delivering military training and, if successful, fill roles such as squadron sergeant major, regimental quartermaster sergeant (RQMS) and regimental sergeant major (RSM). Soldiers from any trade can volunteer for service with airborne forces or as a Special Forces Communicator, a small number may undertake All Arms Commando Course for service with 3 Commando Brigade. Signallers of all trades could previously apply to join the Royal Signals Motorcycle Display Team, better known as the White Helmets, however the White Helmets were disbanded at the end of 2017. Commissioning Signallers may apply for commissioning, either as a Direct Entry officer undertaking the complete training package at Royal Military Academy Sandhurst, or as a Late Entry officer, undertaking a short commissioning course at Sandhurst. LE Officers are employed as Traffic Officers, Technical Officer (Telecommunications) or General Duties based on experience as a Yeoman of Signals, Foreman of Signals or Regimental Duty. References Intelligence, IT and Comms British Army specialisms Royal Corps of Signals
1962599
https://en.wikipedia.org/wiki/Tux%20Paint
Tux Paint
Tux Paint is a raster graphics editor (a program for creating and processing raster graphics) geared towards young children. The project was started in 2002 by Bill Kendrick who continues to maintain and improve it, with help from numerous volunteers. Tux Paint is seen by many as a free software alternative to Kid Pix, a similar proprietary educational software product. History Tux Paint was initially created for the Linux operating system, as there was no suitable drawing program for young children available for Linux at that time. It is written in the C programming language and uses various free and open source helper libraries, including the Simple DirectMedia Layer (SDL), and has since been made available for Microsoft Windows, Apple macOS, Android, Haiku, and other platforms. Selected milestone releases: 2002.06.16 (June 16, 2002) - Initial release (brushes, stamps, lines, eraser), two days after coding started 2002.06.30 (June 30, 2002) - First Magic tools added (blur, blocks, negative) 2002.07.31 (July 31, 2002) - Localization support added 0.9.11 (June 17, 2003) - Right-to-left support, UTF-8 support in Text tool 0.9.14 (October 12, 2004) - Tux Paint Config. configuration tool released, Starter image support 0.9.16 (October 21, 2006) - Slideshow feature, animated and directional brushes 0.9.17 (July 1, 2007) - Arbitrary screen size and orientation support, SVG support, input method support 0.9.18 (November 21, 2007) - Magic Tools turned into plug-ins, Pango text rendering 0.9.25 (December 20, 2020) - Support for exporting individual drawings and slideshows (as animated GIFs) Features Tux Paint stands apart from typical graphics editing software (such as GIMP or Photoshop) that it was designed to be usable by children as young as 3 years of age. The user interface is meant to be intuitive, and utilizes icons, audible feedback and textual hints to help explain how the software works. The brightly colored interface, sound effects and cartoon mascot (Tux, the mascot of the Linux kernel) are meant to engage children. Tux Paint's normal interface is split into five sections: Toolbox, containing the various basic tools (see below) and application controls (undo, save, new, print) Canvas, where the images are drawn and edited Color palette, where colors can be chosen (when applicable to the current tool) Selector, providing various selectable objects (e.g., brushes, fonts or sub-tools, depending on the current tool) Information area, where instructions, tips and encouragement are provided A simple slideshow feature allows previously saved images to be displayed as a basic flip-book animation or as a slide presentation. Basic drawing tools Like most popular graphics editing and composition tools, Tux Paint includes a paintbrush, an eraser, and tools to draw lines, polygonal shapes and text. Tux Paint provides multiple levels of undo and redo, allowing accidental or unwanted changes to be removed while editing a picture. Files and printing Tux Paint was designed in such a way that the user does not need to understand the underlying operating system or how to deal with files. The "Save" and "Open" commands were designed to mimic those of software for personal digital assistant devices, such as the Palm handheld. When one saves a picture in Tux Paint, they do not need to provide a file name or browse for where to place it. When one goes to open a previously saved picture, a collection of thumbnails of saved images is shown. 
Similarly, printing is typically a 'no questions asked' process, as well. Beginning with version 0.9.25, Tux Paint offers the ability to export individual drawings, as well as slideshow animations (in animated GIF format), to the user's home folder (e.g., "`$HOME/Pictures`" on [Freedesktop.org] environments). Advanced drawing tools Tux Paint includes a number of 'filters' and 'special effects' which can be applied to a drawing, such as blurring, fading, and making the picture look as though it was drawn in chalk on pavement. These are available through the 'Magic' tool in Tux Paint. Starting with version 0.9.18, Tux Paint's 'Magic' tools are built as plugins that are loaded at runtime and use a C API specifically for creating such tools. A large collection of artwork and photographic imagery are also available (under a license allowing free redistribution), and may be placed inside drawings using Tux Paint's "Rubber Stamp" tool. Stamps can be in either raster (bitmap) format (in PNG format, supporting 24bpp and full alpha transparency), or as vector graphics (in SVG format) on many platforms Tux Paint supports. As of mid-2008, over 800 stamps are included in the stamps collection. Parental and teacher controls As features are added to Tux Paint, configuration options have been added that allow parents and teachers to disable features and alter the behavior to better suit their children's or students' needs, or to better integrate the software in their home or school computing environment. Typical options, such as enabling or disabling sound effects and full-screen mode are available. There are also options that help make Tux Paint suitable for younger or disabled children, such as displaying text using only uppercase letters or ignoring the distinction between buttons on the mouse. Localization Tux Paint has been translated into numerous languages, and has support for the display of text in languages that use non-Latin character sets, such as Japanese, Greek, or Telugu. As of November 2021, 130 languages are supported. Correct support for complex languages requires Pango. Sound effects and descriptive sounds for stamp imagery can also be localized. Tux Paint includes its own form of input method support, allowing entry of non-Latin characters using the 'Text' tool. Japanese (Romanized Hiragana and Romanized Katakana), Korean (Hangul 2-bul) and Traditional Chinese are currently supported. Accessibility Tux Paint offers built-in accessibility features, including a on-screen keyboard for use with the text entry tools, keyboard and joystick/gamepad control of the pointer, options to increase the size of UI elements (useful for coarse assistive technology, such as eye gaze trackers), and an option to play sounds monaurally. See also GCompris List of raster graphics editors Comparison of raster graphics editors Tux Typing Tux, of Math Command MyPaint References External links Tux4Kids Free raster graphics editors Free software programmed in C Software for children Tux paint Cross-platform software Educational video games Open-source video games Portable software Free educational software GNOME Kids Free and open-source Android software Raster graphics editors
307119
https://en.wikipedia.org/wiki/Music%20workstation
Music workstation
A music workstation is an electronic musical instrument providing the facilities of a sound module, a music sequencer and (usually) a musical keyboard. It enables a musician to compose electronic music using just one piece of equipment. Origin of concept The concept of a music sequencer combined with a synthesizer originated in the late 1970s, when microprocessors, mini-computers, digital synthesis, disk-based storage, and control devices such as musical keyboards became feasible to combine into a single piece of equipment that was affordable to high-end studios and producers, as well as portable for performers. Prior to this, the integration between sequencing and synthesis was generally a manual function based on the wiring of components in large modular synthesizers, and the storage of notes was simply based on potentiometer settings in an analog sequencer. Multitimbrality Polyphonic synthesizers such as the Sequential Circuits Prophet-5 and Yamaha DX7 were capable of playing only one patch at a time (the DX7II could play two patches on two separate MIDI channels), and while some keyboards offered limited sequencing ability, it was not MIDI sequencing. In the mid-to-late 1980s, manufacturers increasingly produced workstation synthesizers rather than single-patch keyboards. A workstation such as the Korg M1 could play eight different patches on eight different MIDI channels, as well as a drum track, and had an onboard MIDI sequencer. The patches were often samples, but users could not record their own samples as they could on a Fairlight. Having samples as the sound source is what made it possible to have various drum sounds in one patch; in contrast, a DX7 or a JX-3P did not have the synthesis features to create all the sounds of a drum kit. First generation music workstations Examples of early music workstations included the New England Digital Synclavier and the Fairlight CMI. Key technologies for the first generation Low-cost computer hardware: Leveraging the technology of personal computers, adding a microprocessor enabled complex control functions to be expressed in software rather than in wiring. In 1977, the Sequential Circuits Prophet-5 and other polyphonic synthesizers had used microprocessors to control patch storage and recall, and the music workstations applied the same approach to sequence storage and recall as well. The Fairlight used a dual Motorola 6800 configuration, while the Synclavier used a mini-computer called the ABLE. Digital synthesis: While it was possible to create a music workstation with digitally controlled analog synthesis modules, few companies did this, instead seeking to produce new sounds and capabilities based on digital synthesis (early units were based on FM synthesis or sample playback). Disk-based storage: Again leveraging the technology of personal computers, music workstations used floppy disks to record patches, sequences, and samples. Hard disk storage appeared in the second generation. Control devices: In a music workstation, the keyboard was not directly connected to the synthesis modules, as it was in a Minimoog or ARP Odyssey. Instead, the keyboard switches were digitally scanned, and control signals were sent over a computer backplane where they were inputs to the computer processor, which then routed the signals to the synthesis modules, which were output devices on the backplane. This approach had been used for years in computer systems, and allowed the addition of new input and output peripherals without obsoleting the entire computer. 
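The multitimbral, sequencer-driven behaviour described under Multitimbrality above can be illustrated with a short script. The following is a minimal sketch, not tied to any particular workstation: it assumes the third-party Python library mido is available and simply writes a two-channel Standard MIDI File in which a bass patch and a lead patch play simultaneously on separate MIDI channels, much as a workstation's onboard sequencer addresses its internal multitimbral sound engine.

```python
# Minimal sketch (assumes the third-party "mido" library): build a short
# two-channel sequence, analogous to a workstation sequencer driving two
# patches of a multitimbral tone generator on separate MIDI channels.
import mido

TICKS_PER_BEAT = 480
mid = mido.MidiFile(ticks_per_beat=TICKS_PER_BEAT)

bass, lead = mido.MidiTrack(), mido.MidiTrack()
mid.tracks.extend([bass, lead])

# Select a different program (patch) on each channel (General MIDI numbering).
bass.append(mido.Message('program_change', channel=0, program=33, time=0))  # a bass-type patch
lead.append(mido.Message('program_change', channel=1, program=80, time=0))  # a synth-lead patch

# A simple one-bar pattern: quarter-note bass line and a held lead note.
for note in [36, 36, 43, 36]:
    bass.append(mido.Message('note_on',  channel=0, note=note, velocity=100, time=0))
    bass.append(mido.Message('note_off', channel=0, note=note, velocity=0, time=TICKS_PER_BEAT))

lead.append(mido.Message('note_on',  channel=1, note=72, velocity=90, time=0))
lead.append(mido.Message('note_off', channel=1, note=72, velocity=0, time=4 * TICKS_PER_BEAT))

mid.save('two_channel_demo.mid')  # playable by any multitimbral synthesizer or soft synth
```

The resulting file can be played back by any multitimbral MIDI instrument; a hardware workstation performs the equivalent routing internally between its sequencer and its sound engine.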
In the case of the music workstations, the next output devices to be added were typically computer terminal displays (some with graphics), and in the case of the Fairlight, the next input device was a light pen for "drawing" on the display screen. The result was that music workstations evolved rapidly during this period, as new software releases could add more functionality, new voice cards could be developed, and new input technologies added. Second generation music workstations By 1982, the Fairlight CMI Series II represented another advance, as it offered more RAM-based sample memory than any other system together with an improved sample rate, and the Series III (1985) changed from 8-bit to 16-bit samples. The Synclavier introduced hard-disk-based sampling in 1982, storing megabytes of samples for the first time. Other products also combined synthesis and sequencing. For instance, the Sequential Circuits Six-Trak, a polyphonic analog synthesizer, featured an on-board six-track sequencer. Still other products focused on combining sampling and sequencing. For instance, the E-mu Emulator models, first introduced in 1981, combined sample memory (read from floppy disks) with a simple sequencer in the initial model, and an 8-track sequencer in later models. The biggest change in the industry was the development of the MIDI standard in 1983 for representing musical note sequences. For the first time, sequences could be moved from one digitally controlled music device to another. The Ensoniq ESQ-1, released in 1985, combined for the first time a multi-track, polyphonic MIDI sequencer with a dynamically assigned multi-timbral synthesizer. In the late 1980s, on-board MIDI sequencers began to appear more frequently on professional synthesizers. The Korg M1 (released 1988) was a widely known and popular music workstation, and became the world's best-selling digital keyboard synthesizer of all time; during its six-year production period, more than 250,000 units were sold. Key technologies for the second generation MIDI: As mentioned above, MIDI data represents pitches, velocities, and controller events (e.g. pitch bend, modulation wheel). MIDI information could be used on the backplane that linked the elements of the workstation together, connecting the input devices to the synthesizers, or it could be sent to or received from another device. Display technologies: Music workstations adopted the most effective input/output devices available for their price range, since there were complex control settings, waveforms and sequences to display. The lower-end devices began to use LED displays that showed multiple lines of characters and later simple graphics, while the higher-end devices began to adopt personal computers with graphics as their front-ends (the Synclavier PostPro used an Apple Macintosh). Large memory banks: Music workstations soon had megabytes of memory, located on large racks of cards. Modular software: Music workstations had software that was organized around a set of common control functions plus a set of options. In many cases, these options were organized as 'pages'. The Fairlight was known for its "Page R" functions, which provided real-time composition in a graphical form similar to that later used on drum machines such as the Roland TR-808. The Synclavier offered music notation. 
Digital signal processing: This enabled the music workstation to generate effects such as reverb or chorus within its hardware, rather than relying on external devices. SMPTE: Since the primary users of the high-end workstations were film composers, the music workstations added hardware and software to generate SMPTE timecode, which is a standard in the motion picture industry. This allowed one to generate events that were matched to scenes and cuts in the film. Third generation music workstations Although many music workstations have a keyboard, this is not always the case. In the 1990s, Yamaha, and then Roland, released a series of portable music workstations (starting with the Yamaha QY10 (1990)). These are sometimes called walkstations. Around the mid-1990s the workstation concept also mutated into the groove machine, an idea that had originated in the mid-1980s: a keyboard-less workstation, still with a self-contained sound source and sequencer, aimed mostly at dance music; modern groove machines typically also feature a sampler. Early groove machines appeared in the mid-1980s (for example the Linn 9000 (1984), SCI Studio 440 (1986), Korg DDD-1 (1986), Yamaha RX5 (1986), Simmons SDX (1987) and Kawai R-50e (1987)), and with the wide acceptance of the E-mu SP-12/SP-1200 (1985/1987) and Akai MPC60 (1988) the concept became firmly established. In the mid-1990s, Roland entered the field with the MC-303 (1996), and Korg and Yamaha also re-entered the market; Korg created the much-used Electribe series (1999–). Akai developed and refined the idea of the keyboard-less workstation with the Music Production Center series (1988–) of sampler workstations. The MPC breed of sampler freed the composer from the rigidity of step sequencing, which had been a limitation of earlier groove machines. Key technologies for the third generation Low-cost, high-capacity memory: By 1995, a music workstation might have 16 to 64 megabytes of memory in a few chips, which had required a rack of cards in 1985. Sample libraries: While a second-generation workstation could be sold with just a few sounds or samples and the ability for the owner to create more, by 1995 most workstations had several additional sample sets available for purchase on ROM, and an industry had been created for third-party sample libraries. In addition, there were now standard formats for sound samples to achieve interoperability. Battery power: Since music workstations were now used by a wide range of performers, down to individual dance music DJs and even street performers, portable designs avoided power-intensive components such as disk storage and began to rely on persistent memory and later flash-memory storage. Interoperability with personal computers: Initially through custom interfaces, and later through USB standards. Modern music workstations Yamaha, Roland and Korg now have sampling as a default option with the Yamaha Motif line (introduced 2001), the Roland Fantom series (introduced 2001) and the Korg Triton (introduced 1999), Korg OASYS, and Korg M3. Workstations have a fairly large screen to give a comprehensive overview of the sound, sequencer and sampling options. Since the display is one of the most expensive components of these workstations, Roland and Yamaha initially chose to keep costs down by not using a touch screen or high-resolution display, but have added these in later models. Another path of music product development that started with the feature set of music workstations is to provide entirely software-based products, using virtual instruments. 
This is the concept of the digital audio workstation, and many of these products have emulated the multitrack recording metaphors of the sequencers first developed in the music workstations. Open Labs introduced the Production Station in 2003, which changed the relationship of the music workstation and the personal computer from a model where the music workstation interfaces with the PC into one where the music workstation is a PC with a music keyboard and a touch screen display. In a variation on Open Labs' approach, Korg released the Korg OASYS in 2005, which housed a computer running a custom operating system built on the Linux kernel inside a keyboard music workstation housing. OASYS was an acronym for Open Architecture SYnthesis Studio, underscoring Korg's ability to release new capabilities via ongoing software updates. OASYS not only included a synthesizer, sampling and a sequencer, but also the ability to digitally record multi-track audio. OASYS was discontinued in 2009, and the Korg Kronos, an updated version built on the same concept, was introduced in January 2011. Evaluation of a music workstation While advances in digital technology have greatly reduced the price of a professional-grade music workstation, the 'time cost' of learning to operate such a complex instrument should not be underestimated. Hence, product selection is critical, and is typically based upon: Ease of use Number of tracks in the sequencer Expansion options and modularity Size of user and support community Support for standards such as MIDI, SMPTE, Internet, etc. Reliable functioning Adaptation to most requirements of music production. References Further reading Electronic musical instruments Music sequencers Sound production technology
9574815
https://en.wikipedia.org/wiki/John%20Harris%20%28software%20developer%29
John Harris (software developer)
John D. Harris is a computer programmer, hacker and author of several 1980s Atari computer games. His impact on the early years of the video game industry is chronicled in the book Hackers: Heroes of the Computer Revolution. His love for the Atari 8-bit computers led him to create several popular games, most notably Frogger, which by the end of development had been written from scratch twice: his entire back catalogue of development tools and libraries was stolen at a game developer conference at which he was presenting. The resulting delay in writing the game also led to complications between Harris and his employer, Ken Williams (Director of Sierra On-Line). During his time at Sierra, Harris became one of the most influential young developers in America; at 24 years of age he was earning a six-figure income from royalties on games which Sierra marketed for him. As time went on, his relationship with Sierra worsened, and the cutting of royalties and the lack of recognition for his work became the catalyst for his leaving the company to work at Synapse (despite many offers of employment from the new startup EA Games). Works Atari 8-bit Jawbreaker, Sierra On-Line, 1981 Frogger, Sierra On-Line, 1982 Mouskattack Maneuvering Bankster MAE Atari 2600 Jawbreaker, Tiger Vision AmigaDE Gobbler Solitaire Employment Pulsar Interactive Corp., 1997–2003 Tachyon Studios, Inc. Atari Synapse Sierra On-Line References Interview with John Harris regarding Hackers and his career and views on game development Interview with John Harris regarding developing for AmigaDE, July 2002 Year of birth missing (living people) Living people American computer programmers American video game designers Video game programmers
45714554
https://en.wikipedia.org/wiki/OptiSLang
OptiSLang
optiSLang is a software platform for CAE-based sensitivity analysis, multi-disciplinary optimization (MDO) and robustness evaluation. It is developed by Dynardo GmbH and provides a framework for numerical Robust Design Optimization (RDO) and stochastic analysis by identifying the variables which contribute most to a predefined optimization goal. This also includes the evaluation of robustness, i.e. the sensitivity towards scatter of design variables or random fluctuations of parameters. In 2019, Dynardo GmbH was acquired by Ansys. Methodology Sensitivity analysis: Representing continuous optimization variables by uniform distributions without variable interactions, variance-based sensitivity analysis quantifies the contribution of the optimization variables to a possible improvement of the model responses. In contrast to local derivative-based sensitivity methods, the variance-based approach quantifies the contribution with respect to the defined variable ranges. Coefficient of Prognosis (CoP): The CoP is a model-independent measure to assess the model quality and is defined as CoP = 1 - SS_E^Prediction / SS_T, where SS_E^Prediction is the sum of squared prediction errors and SS_T is the total variation of the model response. The prediction errors are estimated based on cross validation. In the cross validation procedure, the set of support points is mapped to q subsets. The approximation model is then built by removing subset i from the support points and approximating the outputs of subset i using the remaining point set. This means that the model quality is estimated only at those points which are not used to build the approximation model. Since the prediction error is used instead of the fit, this approach applies to regression and even to interpolation models. Metamodel of Optimal Prognosis (MOP): The prediction quality of an approximation model may be improved if unimportant variables are removed from the model. This idea is adopted in the Metamodel of Optimal Prognosis (MOP), which is based on the search for the optimal input variable set and the most appropriate approximation model (polynomial or Moving Least Squares with linear or quadratic basis). Due to the model independence and objectivity of the CoP measure, it is well suited to compare the different models in the different subspaces. Multi-disciplinary optimization: The optimal variable subspace and approximation model found by a CoP/MOP procedure can also be used for a pre-optimization before global optimizers (evolutionary algorithms, Adaptive Response Surface Methods, gradient-based methods, biologically inspired methods) are used for a direct single-objective optimization. After conducting a sensitivity analysis using MOP/CoP, a multi-objective optimization can also be performed to determine the optimization potential within opposing objectives and to derive suitable weighting factors for a subsequent single-objective optimization. Finally, this single-objective optimization determines an optimal design. Robustness evaluation: In variance-based robustness analysis, the variations of the critical model responses are investigated. In optiSLang, random sampling methods are used to generate discrete samples of the joint probability density function of the given random variables. Based on these samples, which are evaluated by the solver as in the sensitivity analysis, the statistical properties of the model responses, such as mean value, standard deviation, quantiles and higher-order stochastic moments, are estimated. 
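To make the CoP definition above concrete, the following is a minimal, illustrative sketch rather than optiSLang's implementation: assuming NumPy is available, it fits a simple quadratic polynomial metamodel to a one-dimensional set of support points, estimates the prediction errors by q-fold cross validation, and evaluates CoP = 1 - SS_E^Prediction / SS_T.

```python
# Illustrative only: CoP via q-fold cross validation for a 1-D polynomial metamodel.
import numpy as np

def coefficient_of_prognosis(x, y, degree=2, q=5, seed=0):
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))              # shuffle, then split into q subsets
    subsets = np.array_split(idx, q)

    sse_pred = 0.0
    for held_out in subsets:
        train = np.setdiff1d(idx, held_out)
        coeffs = np.polyfit(x[train], y[train], degree)  # fit on the remaining points
        y_hat = np.polyval(coeffs, x[held_out])          # predict the held-out subset
        sse_pred += np.sum((y[held_out] - y_hat) ** 2)   # accumulate prediction errors

    sst = np.sum((y - y.mean()) ** 2)          # total variation of the response
    return 1.0 - sse_pred / sst                # CoP = 1 - SS_E^Prediction / SS_T

# Toy example: noisy quadratic response; CoP should be close to 1 for a good metamodel.
x = np.linspace(-1.0, 1.0, 40)
y = 1.0 + 2.0 * x + 3.0 * x**2 + np.random.default_rng(1).normal(0.0, 0.1, x.size)
print(f"CoP ~ {coefficient_of_prognosis(x, y):.3f}")
```

Because the held-out points never influence the fit that predicts them, the measure penalizes overfitted metamodels, which is why it can also be applied to interpolation models as noted above.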
Reliability analysis: Within the framework of probabilistic safety assessment or reliability analysis, the scattering influences are modelled as random variables, which are defined by distribution type, stochastic moments and mutual correlations. The result of the analysis is the complement of the reliability, the probability of failure, which can be represented on a logarithmic scale. Process integration optiSLang is designed to use several solvers to investigate mechanical, mathematical, technical and other quantifiable problems. To this end, optiSLang provides direct interfaces for external programs: ANSYS MATLAB GNU Octave Excel OpenOffice Calc Python Abaqus SimulationX CATIA LS-DYNA multiPlas any software with text-based input definition History Since the 1980s, research teams at the University of Innsbruck and Bauhaus-Universität Weimar had been developing algorithms for optimization and reliability analysis in conjunction with finite element simulations. As a result, the software "Structural Language (SLang)" was created. In 2000, CAE engineers first applied it to conduct optimization and robustness analysis in the automotive industry. The Dynardo GmbH was founded in 2001. In 2003, based on SLang, the software optiSLang was launched as an industrial solution for CAE-based sensitivity analysis, optimization, robustness evaluation and reliability analysis. In 2013, the current version optiSLang 4 was completely restructured with a new graphical user interface and extended interfaces to external CAE processes. References External links Dynardo GmbH Website Cadfem Products Automotive CAE Companion 2013 ANSYS Advantage Magazine 02_2013 Konstruktionspraxis.de 03_2013 Konstruktionspraxis.de 10_2012 Konstruktionspraxis.de 06_2012 Konstruktionspraxis.de 09_2011 Computer system optimization software Computer-aided design software Computer-aided engineering software Mathematical optimization software Simulation software
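The reliability analysis described above can likewise be illustrated with a simplified sketch; it is not optiSLang's implementation, and the limit state used here is hypothetical. Assuming NumPy, it draws random samples of two scattering input variables and estimates the probability of failure as the fraction of samples violating the limit state, a quantity typically read on a logarithmic scale.

```python
# Illustrative Monte Carlo reliability sketch: failure is defined by a
# hypothetical limit state g(X) <= 0 (resistance exceeded by load).
import numpy as np

rng = np.random.default_rng(42)
n = 1_000_000

# Scattering inputs modelled as random variables (distribution type + moments).
load = rng.normal(loc=100.0, scale=15.0, size=n)                     # e.g. load in kN
resistance = rng.lognormal(mean=np.log(160.0), sigma=0.10, size=n)   # e.g. capacity in kN

g = resistance - load                  # limit state function: failure when g <= 0
p_f = np.mean(g <= 0.0)                # estimated probability of failure
print(f"P(failure) ~ {p_f:.2e}")       # small probabilities are best read on a log scale
```

Plain Monte Carlo needs very many solver runs to resolve small failure probabilities, which is why dedicated reliability methods and metamodels are used in practice.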