1547135
https://en.wikipedia.org/wiki/C.mmp
C.mmp
The C.mmp was an early multiple instruction, multiple data (MIMD) multiprocessor system developed at Carnegie Mellon University (CMU) by William Wulf (1971). The notation C.mmp came from the PMS notation of Gordon Bell and Allen Newell, where a central processing unit (CPU) was designated as C, a variant was noted by the dot notation, and mmp stood for Multi-Mini-Processor. The machine is on display at CMU, in Wean Hall, on the ninth floor. Structure Sixteen Digital Equipment Corporation PDP-11 minicomputers were used as the processing elements, named Compute Modules (CMs) in the system. Each CM had a local memory of 8K and a local set of peripheral devices. One of the challenges was that a device was only available through the processor to which it was connected, so the input/output (I/O) system (designed by Roy Levien) hid the connectivity of the devices and routed requests to the hosting processor. If a processor went down, the devices connected to its Unibus became unavailable, which became a problem in overall system reliability. Processor 0 (the boot processor) had the disk drives attached. Each of the Compute Modules shared these communication pathways: an interprocessor bus, used to distribute system-wide clock, interrupt, and process-control messaging among the CMs, and a 16×16 crossbar switch, used to connect the 16 CMs on one side and 16 banks of shared memory on the other. If all 16 processors were accessing different banks of memory, the memory accesses would all be concurrent. If two or more processors were trying to access the same bank of memory, one of them would be granted access on one cycle and the rest would be arbitrated on subsequent memory cycles. Since the PDP-11 had a 16-bit logical address space, an additional address translation unit was added to expand the address space to 25 bits for the shared memory space. The Unibus architecture provided 18 bits of physical address, and the two high-order bits were used to select one of four relocation registers, each of which selected a bank of memory (a schematic sketch of this translation appears at the end of this article). Properly managing these registers was one of the challenges of programming the operating system (OS) kernel. The original C.mmp design used magnetic-core memory, but during its lifetime, higher-performance dynamic random-access memory (DRAM) became available and the system was upgraded. The original processors were PDP-11/20 processors, but in the final system, only five of these were used; the remaining 11 were PDP-11/40 processors, which were modified by having extra writeable microcode space. All modifications to these machines were designed and built at CMU. Most of the 11/20 modifications were custom changes to the wire-wrapped backplane, but because the PDP-11/40 was implemented in microcode, a separate proc-mod board was designed that intercepted certain instructions and implemented the protected operating system requirements. For example, it was necessary, for operating system integrity, that the stack pointer register never be odd. On the 11/20, this was done by clipping the lead to the low-order bit of the stack register. On the 11/40, any access to the stack was intercepted by the proc-mod board and generated an illegal data access trap if the low-order bit was 1. Operating system The operating system (OS) was named Hydra. It was a capability-based, object-oriented, multi-user microkernel. System resources were represented as objects and protected through capabilities.
The OS and most application software were written in the programming language BLISS-11, which required cross-compiling on a PDP-10. The OS used very little assembly language. Among the programming languages available on the system was an ALGOL 68 variant which included extensions supporting parallel computing, to make good use of the C.mmp. The ALGOL compiler ran natively under Hydra. Reliability Because the system depended on having all 16 CPUs running, overall hardware reliability was a serious problem. If the mean time between failures (MTBF) of one processor was 24 hours, then the MTBF of the 16-processor system was 24/16 hours, or about 90 minutes. In practice, the system usually ran for between two and six hours. Many of these failures were due to timing glitches in the many custom circuits added to the processors. Great effort was expended to improve hardware reliability, and when a processor was noticeably failing, it was partitioned out and would run diagnostics for several hours. When it had passed a first set of diagnostics, it was partitioned back in as an I/O processor and would not run application code (but its peripheral devices were now available); it continued to run diagnostics. If it passed these after several more hours, it was reinstated as a full member of the processor set. Similarly, if a block of memory (one page) was detected as faulty, it was removed from the pool of available pages, and until otherwise notified, the OS would ignore this page. Thus, the OS became an early example of a fault-tolerant system, able to deal with the hardware problems that inevitably arose. References Capability systems History of computing Parallel computing
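As a concrete illustration of the relocation-register scheme described in the Structure section above, the following C sketch shows how an 18-bit Unibus address could be mapped into the 25-bit shared-memory space. The field layout, register width, and register contents are assumptions inferred from the description above (two high-order bits selecting one of four relocation registers), not the actual C.mmp hardware logic.

#include <stdint.h>
#include <stdio.h>

/* Sketch of C.mmp-style address relocation (assumed layout, for illustration).
 * An 18-bit Unibus address is split into a 2-bit register select and a
 * 16-bit offset; the selected relocation register supplies the high bits
 * of a 25-bit shared-memory address. */

#define OFFSET_BITS 16
#define OFFSET_MASK ((1u << OFFSET_BITS) - 1)

static uint32_t relocation_reg[4];  /* managed by the OS kernel */

static uint32_t translate(uint32_t unibus_addr)  /* 18-bit address */
{
    uint32_t reg    = (unibus_addr >> OFFSET_BITS) & 0x3; /* two high-order bits */
    uint32_t offset = unibus_addr & OFFSET_MASK;
    /* The relocation register selects a memory bank; the offset addresses
     * a location within that bank. */
    return (relocation_reg[reg] << OFFSET_BITS) | offset;
}

int main(void)
{
    relocation_reg[1] = 0x05;             /* hypothetical bank number */
    uint32_t phys = translate(0x1F000u);  /* register 1, offset 0xF000 */
    printf("25-bit shared-memory address: 0x%07X\n", phys);
    return 0;
}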
216003
https://en.wikipedia.org/wiki/Entertainment%20law
Entertainment law
Entertainment law, also referred to as media law, is legal services provided to the entertainment industry. These services in entertainment law overlap with intellectual property law. Intellectual property has many moving parts that include trademarks, copyright, and the right of publicity. However, the practice of entertainment law often involves questions of employment law, contract law, torts, labor law, bankruptcy law, immigration, securities law, security interests, agency, right of privacy, defamation, advertising, criminal law, tax law, international law (especially private international law), and insurance law. Much of the work of an entertainment law practice is transaction based, i.e., drafting contracts, negotiation, and mediation. Some situations may lead to litigation or arbitration. Overview Entertainment law covers an area of law that involves media of all different types (e.g. TV, film, music, publishing, advertising, Internet & news media, etc.) and stretches over various legal fields, which include corporate, finance, intellectual property, publicity and privacy, and, in the United States, the First Amendment to the United States Constitution. For film, entertainment attorneys work with the actor's agent to finalize the actor's contracts for upcoming projects. After an agent lines up work for a star, the entertainment attorney negotiates with the agent and the buyer of the actor's talent over compensation and profit participation. Entertainment attorneys are under strict confidentiality agreements, so the specifics of their work are kept confidential. However, some entertainment attorneys' job descriptions have become comparable to those of a star's agent, manager, or publicist. Most entertainment attorneys have many other roles as well, such as assisting in building a client's career. History As media became more widespread, the field of media law grew in both popularity and necessity, leaving certain corporate professionals wanting to participate more directly in media. As a result, many young lawyers moved into media law for the opportunity to build more connections in media, become a media presenter, or even land an acting role. As technology continues to advance, lawsuits have multiplied, keeping demand for lawyers in the field high.
Categories Entertainment law is generally sub-divided into the following areas related to the types of activities that have their own specific trade unions, production techniques, rules, customs, case law, and negotiation strategies: FILM: option agreements, chain of title issues, talent agreements (screenwriters, film directors, actors, composers, production designers), production and post-production and trade union issues, distribution and motion picture industry negotiations, and general intellectual property issues especially relating to copyright and, to a lesser extent, trademarks; INTERNET: censorship, copyright, freedom of information, information technology, privacy, and telecommunications issues; MULTIMEDIA: software licensing issues, video game development and production, information technology law, and general intellectual property issues; MUSIC: talent agreements (musicians, composers), producer agreements, and synchronization rights, music industry negotiation and general intellectual property issues, especially relating to copyright (see music law); PUBLISHING and PRINT MEDIA: advertising, models, author agreements and general intellectual property issues, especially relating to copyright; TELEVISION and RADIO: broadcast licensing and regulatory issues, mechanical licenses, and general intellectual property issues, especially relating to copyright; THEATRE: rental agreements and co-production agreements, and other performance oriented legal issues; VISUAL ARTS AND DESIGN: fine arts, issues of consignment of artworks to art dealers, moral rights of sculptors regarding works in public places; and industrial design, issues related to the protection of graphic design elements in products. Defamation (libel and slander), personality rights and privacy rights issues also arise in entertainment law. Media law is a legal field that refers to the following: Advertising Broadcasting Censorship Confidentiality Contempt Copyright Corporate Law Defamation Entertainment Freedom of information Internet Information Technology Privacy Telecommunications Cases Copyright: In Golan v. Holder, the Supreme Court ruled, in a 6–2 vote, against challenges based on the First Amendment and the Constitution's Copyright Clause, holding that the public domain was not "a category of constitutional significance" and that copyright protections could be extended even where they did not spur the creation of new works. Internet: In 2007, Viacom, a media conglomerate that owns MTV and Comedy Central, sued YouTube for $1 billion over copyright infringement claims for the unauthorized posting of Viacom copyrighted material. In May 2008, YouTube began using its digital fingerprinting technology to protect copyrighted content. Television: In FCC v. Fox Television Stations (2012), an 8–0 decision, the Supreme Court held that because the FCC rules at the time did not cover "fleeting expletives", the fines issued against Fox were invalid, and struck down the underlying standards as unconstitutionally vague. Music: Kesha v. Dr. Luke – In 2014, singer Kesha filed a civil suit against music producer Lukasz Sebastian Gottwald, also known as Dr. Luke, alleging gender-based hate crimes and emotional distress. Gottwald in turn sued Kesha for defamation and breach of contract. The case ended with a judge declining to release Kesha from a binding contract that effectively prohibited her from continuing her career.
The judge noted that Kesha had entered the agreement after swearing under oath that no harassment was taking place. Many celebrities such as Miley Cyrus, Lady Gaga, and Demi Lovato have shown support for Kesha, seeking to publicize the injustice that contract law worked in the outcome of this case. Singer-songwriter Taylor Swift donated $250,000 to relieve Kesha of any financial obligations. Katie Armiger – In 2016, singer-songwriter Katie Armiger claimed that a handful of DJs at radio stations across the United States harassed her during her radio tour, while her record label, Cold River Records, justified it by telling her that radio programmers were her ticket to fame. Cold River Records defamed Armiger and cast her in a false light when it challenged her in a court case that played out in the public eye. Cold River Records held Armiger in a lengthy legal battle that prevented her from releasing new music or performing live, by strictly enforcing the terms of her record contract. Because of this, Armiger lost countless fans through her absence from social media, and most likely lost credibility as well by being unable to give her fans an explanation. There is no law covering instances like this, in which an artist cannot immediately exit a contract when harassment is involved. Libel per quod can be applied to this case by showing proof of negligence on the part of not only Cold River but the DJs as well. In the end, Armiger's character was defamed and a false light was cast upon her when Cold River Records' head Pete O'Heeron claimed that Armiger's suit was baseless. Defamation, false light, and contract law have played a significant role in Katie Armiger's reality throughout her pursuit of release from her contract with Cold River Records. See also Communications law Morals clause Intellectual property Media reform Media regulation Music law Engineering law Sports law Sunshine in the Courtroom Act Safe listening Performing arts education Performing arts References Mass media Media law Entertainment Performing arts
54533486
https://en.wikipedia.org/wiki/Differential%20testing
Differential testing
Differential testing, also known as differential fuzzing, is a popular software testing technique that attempts to detect bugs by providing the same input to a series of similar applications (or to different implementations of the same application) and observing differences in their execution. Differential testing complements traditional software testing because it is well-suited to finding semantic or logic bugs that do not exhibit explicit erroneous behaviors like crashes or assertion failures. Differential testing is sometimes called back-to-back testing. Differential testing finds semantic bugs by using different implementations of the same functionality as cross-referencing oracles, pinpointing differences in their outputs over the same input: any discrepancy between the program behaviors on the same input is marked as a potential bug. Application domains Differential testing has been used to find semantic bugs successfully in diverse domains like SSL/TLS implementations, C compilers, Java decompilers, JVM implementations, Web application firewalls, security policies for APIs, and antivirus software. Differential testing has also been used for automated fingerprint generation from different network protocol implementations. Input generation Unguided Unguided differential testing tools generate test inputs independently across iterations without considering the test program's behavior on past inputs. Such an input generation process does not use any information from past inputs and essentially creates new inputs at random from a prohibitively large input space. This can make the testing process highly inefficient, since large numbers of inputs need to be generated to find a single bug. An example of a differential testing system that performs unguided input generation is "Frankencerts". This work synthesizes Frankencerts by randomly combining parts of real certificates. It uses syntactically valid certificates to test for semantic violations of SSL/TLS certificate validation across multiple implementations. However, since the creation and selection of Frankencerts are completely unguided, it is significantly less efficient than the guided tools. Guided A guided input generation process aims to minimize the number of inputs needed to find each bug by taking program behavior information for past inputs into account. Domain-specific evolutionary guidance An example of a differential testing system that performs domain-specific coverage-guided input generation is Mucerts. Mucerts relies on knowledge of the partial grammar of the X.509 certificate format and uses a stochastic sampling algorithm to drive its input generation while tracking the program coverage. Another line of research builds on the observation that the problem of new input generation from existing inputs can be modeled as a stochastic process. An example of a differential testing tool that uses such stochastic process modeling for input generation is Chen et al.'s tool. It performs differential testing of Java virtual machines (JVM) using Markov chain Monte Carlo (MCMC) sampling for input generation. It uses custom domain-specific mutations by leveraging detailed knowledge of the Java class file format. Domain-independent evolutionary guidance NEZHA is an example of a differential testing tool that has a path selection mechanism geared towards domain-independent differential testing.
It uses specific metrics (dubbed delta-diversity) that summarize and quantify the observed asymmetries between the behaviors of multiple test applications. Such metrics, which promote the relative diversity of observed program behavior, have been shown to be effective in applying differential testing in a domain-independent and black-box manner. Automata-learning-based guidance For applications such as cross-site scripting (XSS) filters and X.509 certificate hostname verification, which can be modeled accurately with finite-state automata (FSA), counterexample-driven FSA learning techniques can be used to generate inputs that are more likely to find bugs. Symbolic-execution-based guidance Symbolic execution is a white-box technique that executes a program symbolically, computes constraints along different paths, and uses a constraint solver to generate inputs that satisfy the collected constraints along each path. Symbolic execution can also be used to generate inputs for differential testing. The inherent limitations of symbolic-execution-assisted testing tools, namely path explosion and poor scalability, are magnified in the context of differential testing, where multiple test programs are used. Therefore, it is very hard to scale symbolic execution techniques to perform differential testing of multiple large programs. See also Software testing Software diversity References Software testing
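The cross-referencing idea described above can be illustrated with a minimal harness in C. The two implementations here (a safe and an overflowing integer-average routine) are hypothetical stand-ins for the real systems a differential tester would compare; any input on which their outputs disagree is reported as a potential bug, exactly as in the unguided random generation described earlier.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/* Reference implementation: average of two 32-bit ints without overflow. */
static int32_t avg_ref(int32_t a, int32_t b)
{
    return (int32_t)(((int64_t)a + b) / 2);
}

/* Second implementation under test: a + b overflows (undefined behavior
 * in C) when the sum exceeds the int range - the semantic bug that the
 * differential tester should catch. */
static int32_t avg_fast(int32_t a, int32_t b)
{
    return (a + b) / 2;
}

int main(void)
{
    srand(42);  /* fixed seed: unguided, but reproducible, input generation */
    for (int i = 0; i < 1000000; i++) {
        int32_t a = (int32_t)(((uint32_t)rand() << 16) ^ (uint32_t)rand());
        int32_t b = (int32_t)(((uint32_t)rand() << 16) ^ (uint32_t)rand());
        /* Same input to both implementations; any discrepancy is flagged. */
        if (avg_ref(a, b) != avg_fast(a, b)) {
            printf("discrepancy: a=%d b=%d ref=%d fast=%d\n",
                   a, b, avg_ref(a, b), avg_fast(a, b));
            return 1;
        }
    }
    puts("no discrepancies found");
    return 0;
}

With random 32-bit operands the sum overflows roughly a quarter of the time, so a discrepancy is typically found within the first few iterations, illustrating why even unguided generation can work when the buggy behavior is common in the input space.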
46753689
https://en.wikipedia.org/wiki/Rescale
Rescale
Rescale is a software technology company that claims to provide "Intelligent Computing for Digital R&D", with a focus on high-performance computing, cloud management, and computer-aided engineering. Overview Rescale states that it helps organizations across industries accelerate science and engineering breakthroughs by eliminating computing complexity. From supersonic aircraft to personalized medicine, industry leaders bring new product innovations to market on the Rescale Platform, a full-stack HPC automation solution for digital R&D in the cloud. IT leaders use Rescale to deliver high-performance computing as a service to their organizations, harnessing automation on a hybrid cloud control plane with security, architecture, and financial controls. Design Engineering magazine describes Rescale as "...a good fit for manufacturers who need to run complex simulation and optimization jobs, but don't have the HPC hardware required." As of 2015, Rescale had established what it described as the largest globally available HPC network, enabling hybrid and multi-cloud operations across major cloud service providers and on-premises data centers. Rescale provides IT and HPC practitioners with access to the latest computing architectures, tuned and optimized for a variety of workloads in industries such as aerospace, automotive, pharmaceuticals, computational genomics, manufacturing, electronic design automation, and semiconductors, as well as other computationally intensive science and engineering research. History Founded in 2011 in San Francisco, California by Joris Poort (CEO) and Adam McKenzie (CTO), Rescale launched through Y Combinator and has been recognized as a top company multiple times, most recently in 2021. The founders, Poort and McKenzie, previously built a software technology stack at Boeing that saved the company over $180M through weight improvements on the 787 Dreamliner. Investors Rescale has received investment funding from high-profile investors including Sam Altman, Jeff Bezos, Richard Branson, Chris Dixon, Paul Graham, and Peter Thiel. Notable corporate and venture capital investors include Nvidia, M12 (the venture capital arm of Microsoft), Hitachi, Samsung, Initialized Capital, and Andreessen Horowitz. Partnerships Rescale has strategic partnerships with the major cloud infrastructure providers Amazon Web Services, Microsoft Azure, Google Cloud Platform, and Oracle Cloud Platform, and with engineering software companies in the computer-aided engineering space, including ANSYS, AutoForm, Siemens Digital Industries Software, Dassault Systèmes, and MSC Software. See also Cloud computing Enterprise software High performance computing Computer aided engineering Product lifecycle management Software-defined infrastructure Digital Twin Multidisciplinary design optimization References External links Rescale's website Rescale is a part of the Big Compute consortium Business software
537555
https://en.wikipedia.org/wiki/Confusion%20and%20diffusion
Confusion and diffusion
In cryptography, confusion and diffusion are two properties of the operation of a secure cipher identified by Claude Shannon in his 1945 classified report A Mathematical Theory of Cryptography. These properties, when present, work to thwart the application of statistics and other methods of cryptanalysis. These concepts are also important in the design of robust hash functions and pseudorandom number generators where decorrelation of the generated values is of paramount importance. Definition Confusion Confusion means that each binary digit (bit) of the ciphertext should depend on several parts of the key, obscuring the connections between the two. The property of confusion hides the relationship between the ciphertext and the key. This property makes it difficult to find the key from the ciphertext and if a single bit in a key is changed, the calculation of most or all of the bits in the ciphertext will be affected. Confusion increases the ambiguity of ciphertext and it is used by both block and stream ciphers. In substitution–permutation networks, confusion is provided by substitution boxes. Diffusion Diffusion means that if we change a single bit of the plaintext, then about half of the bits in the ciphertext should change, and similarly, if we change one bit of the ciphertext, then about half of the plaintext bits should change. This is equivalent to the expectation that encryption schemes exhibit an avalanche effect. The purpose of diffusion is to hide the statistical relationship between the ciphertext and the plain text. For example, diffusion ensures that any patterns in the plaintext, such as redundant bits, are not apparent in the ciphertext. Block ciphers achieve this by "diffusing" the information about the plaintext's structure across the rows and columns of the cipher. In substitution–permutation networks, diffusion is provided by permutation boxes. Theory In Shannon's original definitions, confusion refers to making the relationship between the ciphertext and the symmetric key as complex and involved as possible; diffusion refers to dissipating the statistical structure of plaintext over the bulk of ciphertext. This complexity is generally implemented through a well-defined and repeatable series of substitutions and permutations. Substitution refers to the replacement of certain components (usually bits) with other components, following certain rules. Permutation refers to manipulation of the order of bits according to some algorithm. To be effective, any non-uniformity of plaintext bits needs to be redistributed across much larger structures in the ciphertext, making that non-uniformity much harder to detect. In particular, for a randomly chosen input, if one flips the i-th bit, then the probability that the j-th output bit will change should be one half, for any i and j—this is termed the strict avalanche criterion. More generally, one may require that flipping a fixed set of bits should change each output bit with probability one half. One aim of confusion is to make it very hard to find the key even if one has a large number of plaintext-ciphertext pairs produced with the same key. Therefore, each bit of the ciphertext should depend on the entire key, and in different ways on different bits of the key. In particular, changing one bit of the key should change the ciphertext completely. The simplest way to achieve both diffusion and confusion is to use a substitution–permutation network. 
In these systems, the plaintext and the key often have a very similar role in producing the output, hence the same mechanism ensures both diffusion and confusion. Applied to encryption Designing an encryption method uses both of the principles of confusion and diffusion. Confusion means that the process drastically changes data from the input to the output, for example, by translating the data through a non-linear table created from the key. There are many ways to reverse linear calculations, so the more non-linear it is, the more analysis tools it breaks. Diffusion means that changing a single character of the input will change many characters of the output. Done well, every part of the input affects every part of the output, making analysis much harder. No diffusion process is perfect: it always lets through some patterns. Good diffusion scatters those patterns widely through the output, and if there are several patterns making it through they scramble each other. This makes patterns vastly harder to spot, and vastly increases the amount of data to analyze to break the cipher. Analysis of AES The Advanced Encryption Standard (AES) has both excellent confusion and diffusion. Its confusion look-up tables are very non-linear and good at destroying patterns. Its diffusion stage spreads every part of the input to every part of the output: changing one bit of input changes half the output bits on average. Both confusion and diffusion are repeated multiple times for each input to increase the amount of scrambling. The secret key is mixed in at every stage so that an attacker cannot precalculate what the cipher does. None of this happens when a simple one-stage scramble is based on a key. Input patterns would flow straight through to the output. It might look random to the eye but analysis would find obvious patterns and the cipher could be broken. See also Substitution–permutation network Avalanche effect References Works cited Claude E. Shannon, "A Mathematical Theory of Cryptography", Bell System Technical Memo MM 45-110-02, September 1, 1945. Claude E. Shannon, "Communication Theory of Secrecy Systems", Bell System Technical Journal, vol. 28-4, pages 656–715, 1949. Wade Trappe and Lawrence C. Washington, Introduction to Cryptography with Coding Theory. Second edition. Pearson Prentice Hall, 2006. Symmetric-key cryptography
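The avalanche behavior described above can be checked empirically. The C sketch below uses the finalizer of the public-domain splitmix64 generator as a stand-in for a cipher (an assumption for illustration; it is a mixing function, not an encryption algorithm), flips one input bit at a time, and counts how many of the 64 output bits change; good diffusion should average about 32.

#include <stdint.h>
#include <stdio.h>

/* splitmix64 finalizer: a well-mixed 64-bit permutation, used here only
 * as a stand-in for a cipher when measuring the avalanche effect. */
static uint64_t mix(uint64_t z)
{
    z = (z ^ (z >> 30)) * 0xBF58476D1CE4E5B9ULL;
    z = (z ^ (z >> 27)) * 0x94D049BB133111EBULL;
    return z ^ (z >> 31);
}

/* Count set bits (Kernighan's method). */
static int popcount64(uint64_t x)
{
    int n = 0;
    while (x) { x &= x - 1; n++; }
    return n;
}

int main(void)
{
    const int TRIALS = 10000;
    long long flipped = 0;
    uint64_t x = 0x0123456789ABCDEFULL;
    for (int t = 0; t < TRIALS; t++) {
        x = mix(x);            /* pseudo-random test input */
        int bit = t % 64;      /* flip each input bit in turn */
        flipped += popcount64(mix(x) ^ mix(x ^ (1ULL << bit)));
    }
    /* Strict avalanche criterion: expect roughly 32 of 64 bits to change. */
    printf("average output bits flipped: %.2f / 64\n",
           (double)flipped / TRIALS);
    return 0;
}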
11945800
https://en.wikipedia.org/wiki/Smallfoot
Smallfoot
Smallfoot was the name of both a rapid application development toolkit and an embedded operating system designed and released by Caldera Systems/Caldera International/The SCO Group in both UnixWare and Linux formats. Created for use in embedded environments such as point of sale systems and video gaming, the toolkits were intended to create specifically tailored operating systems geared towards the desired use. These customized and stripped-down versions of the operating systems had a smaller footprint, hence the names Smallfoot embedded UNIX and Smallfoot embedded Linux respectively. Smallfoot is also notable in that it was a key Linux product of The SCO Group, developed for both the UNIX and Linux platforms and distributed by SCO and Caldera Systems/Caldera International after its purchase of SCO. In the SCO v. IBM lawsuit, SCO denied distributing Linux kernel code; however, SCO Smallfoot was based on both the 2.4.10 and 2.6.1 Linux kernel versions. History Smallfoot was first proposed in 2001. The name Smallfoot (whilst trademarked by SCO) was never intended as the product's final name, but rather was a working name that stuck. A first prototype was built around the Linux platform. A deal was signed in January 2003 for Smallfoot to work on Beetle point-of-sale terminals from Wincor Nixdorf. But given the SCO–Linux disputes that got underway a couple of months later, Smallfoot Toolkit development switched to a Unix-based OS in May 2003. The formatting of the toolkit configuration language drew heavily on Tcl. The toolkit included extensive configuration of many parts of the system, a JavaPOS library, newly developed drivers for point-of-sale (POS) devices, and a POS application. A complete POS terminal developed with the Smallfoot Toolkit release 1.0 was demonstrated at SCO Forum in 2004 in Las Vegas, where breakout sessions entitled "Build a Smallfoot OS Using the Smallfoot Toolkit" and "Smallfoot is Not Just for Retail Anymore" were held. Further development, including a GUI, was shelved until sales of the command-line version of the toolkit would pick up and provide a revenue stream. The product itself was announced in June 2004, as part of a roadmap presented by SCO intended to show renewed investment in their Unix product lines. The Smallfoot Toolkit product went onto the SCO price list in July 2004. The minimal bundle was priced at approximately $35,000 and included the Toolkit, a UnixWare license for the development machine running the toolkit, 500 deployment UnixWare licenses for the generated images, and 10 hours of support. Larger volumes of the deployment licenses provided extra per-license discounts. None were ever sold and eventually the product was discontinued. Eventually an outgrowth of Smallfoot found a customer, Budgens supermarkets. Budgens, a part of the Musgrave Group, were looking to implement Linux at their point of sale systems. The project became an early success story in terms of stores taking a chance on a Linux-based solution. See also Lineo Embedix References External links Deploying Customized Solutions with the Smallfoot Toolkit – SCO documentation Embedded Linux distributions Embedded operating systems Linux distributions
6903069
https://en.wikipedia.org/wiki/Logos%20Bible%20Software
Logos Bible Software
Logos Bible Software is a digital library application designed for electronic Bible study. In addition to basic eBook functionality, it includes extensive resource linking, note-taking functionality, and linguistic analysis for study of the Bible both in translation and in its original languages. It is developed by Faithlife Corporation. As of October 26, 2020, Logos Bible Software is in its 9th version. Logos Bible Software is compatible with more than 200,000 titles related to the Bible from 200 publishers, including Baker, Bantam, Catholic University of America Press, Eerdmans, Harvest House, Merriam Webster, Moody Press, Oxford University Press, Thomas Nelson, Tyndale House, and Zondervan. Logos also recently published its own Lexham Bible Reference series, featuring new scholarship on the original Biblical languages. Until October 2014, the name Logos Bible Software was often used to refer to the company behind the software (incorporated as Logos Research Systems, Inc). At that date, the company was rebranded as Faithlife Corporation as a response to the greater diversity in products and services the company then offered. On September 18, 2020, it was announced that Lifeway's WORDSearch Bible software had been bought by Faithlife. As a result, all Wordsearch customers would receive a copy of Logos free of charge, and Wordsearch titles would be fast-tracked into Logos format. History Windows and Macintosh versions Logos Bible Software was launched in 1992 by two Microsoft employees, Bob Pritchett and Kiernon Reiniger, along with Bob's father, Dale Pritchett. The three quit their jobs to develop Christian software. After acquiring data from the CDWordLibrary project at Dallas Theological Seminary (an earlier Bible software package for use on Windows 2), Logos released an updated version called the Logos Library System platform in 1995, which added support for more resources and introduced the concept of a digital library. After a long beta cycle that began in 1999, the LLS was replaced by the Libronix Digital Library System (or Libronix DLS) in 2001. This was a 32-bit application (LLS was 16-bit) and had been rewritten from the ground up in a more modular fashion that made it easier to add future expansions. As with all other versions of Logos Bible Software, it was offered as a free update to existing customers. In terms of branding, Libronix Digital Library System refers to the software itself, whilst Logos Bible Software Series X was used for packages that included both the software and electronic Biblical studies resources. Version 2 of Libronix DLS appeared in July 2003 as Logos Bible Software Series X 2.0. This added support for documents such as notes and word lists, visual filters (which allow users to create rules to add highlighting and markup to resources automatically), and a graphical query editor. Version 3 was launched on May 1, 2006, and introduced reverse-interlinear Bibles, the Bible Word Study tool, and syntax searches. The Series X name was dropped, and the software was known simply as Logos Bible Software 3. In March 2008 an alpha version of Logos Bible Software for Mac was released for testing, with the retail edition shipping in December. This was known as Logos Bible Software for Mac 1.0, and although based on the Windows version, full parity was never achieved, even with versions 1.1 and 1.2 which shipped in 2009.
However, on November 2, 2009, Logos announced Logos Bible Software 4 for Windows, along with an early alpha version of the Mac edition and a cut-down iPhone version. Like the original release of the Libronix Digital Library System, the application had been substantially rewritten, and featured a very different graphical user interface than its predecessor. Crucially, once the Mac version was completed, both editions of the software would be almost identical in function, and settings, documents and resources would seamlessly sync between the different versions. The Mac version reached beta in July 2010, and was released in September 2010. Various updates later came to both platforms, with version 4.1 (October 2010, Windows only) adding sentence diagramming and print/export, 4.2 (December 2010 on Windows, March 2011 on Mac) adding various minor features and bug fixes, 4.3 (August 2011) adding Personal Books to allow users to add their own content, 4.5 (January 2012) adding improved notes and highlighting (4.4 was skipped), and 4.6 (August 2012) offering bug fixes and a few tweaks. Logos Bible Software 5 was released for both Mac and Windows on November 1, 2012, with an emphasis on connecting disparate features and databases, making Bible study easier and more efficient. Datasets and tagging added to Bibles meant users could now explore the roots of words and their sense, and the Sermon Starter Guide and Topical Guide made accessing Bible topics much simpler and quicker. Logos 5.1 (July 2013) added read-along audio and a new topic layout, with several more minor improvements in 5.2 (November 2013). Logos Bible Software 6 was released on October 28, 2014, and became the first version to support 64-bit architecture. It too added a number of new datasets and features, including Ancient Literature cross-references, Cultural Concepts, original manuscript images, multimedia and the new Factbook that attempted to integrate the increasing number of databases to an even greater extent than was possible in Logos 5. Logos 6 also integrates with the Send to Kindle service provided by Amazon. Logos Bible Software 7 was released on August 24, 2016. Features added with this full version include Sermon Editor, Course Tool, Figurative Language (interactive), Hebrew Grammatical Constructions, Longacre Genre Analysis, Sentence Types of the New Testament Dataset, Quickstart Layouts, Speech Acts, An Empty Tomb (interactive), and Exploring Biblical Manuscripts. Logos Bible Software 8 was released on October 29, 2018. Logos Bible Software 9 was released on October 26, 2020. Mobile versions An iPhone app was released alongside Logos 4 in November 2009. It allows users to access most of their Logos resources on the iPhone, with basic search and study features. Resources can be accessed over the cloud, or downloaded onto the device for offline access. Native iPad support was added with version 1.4 in April 2010. Version 2.0 (January 2012) added notes, highlights and inline footnotes. Version 3.0 (August 2012) added reading plans and community notes, and version 4.0 added a new UI updated for iOS 7. A topic guide was added in 4.3 (June 2014), and a scrolling view in 4.4 (December 2014). The iOS app was awarded the DBW Publishing Innovation Award in 2011. An Android app entered a public alpha in May 2011, with a beta in July, and 1.0 released a year later.
The initial release allowed little more than the reading of Logos books, so version 2.0 followed quickly in August 2012, which added notes, highlighting, reading plans, Bible Word Study, the Passage Guide and a split-screen view. This brought much closer parity with the iOS app, and future development has continued along similar lines to the iOS version. On both platforms, the mobile app is now available in several "flavors". In addition to the standard Logos Bible Software, other very similar apps exist under the Faithlife Ebooks, Faithlife Study Bible, and Verbum brands. These apps offer similar functionality, different branding, and a slightly different UI. Rebranded versions Faithlife Corporation has also produced rebranded versions of Logos Bible Software with almost identical functionality. Verbum Catholic Software is aimed at Roman Catholics (and adds databases of Catholic topics and Saints, and more data from the Deuterocanonical Books). From 2014 to 2020, Faithlife produced Noet, which focused on scholarly work in the humanities, particularly the classics and philosophy. Reception Each version of Logos Bible Software has generally been received very positively by reviewers and Christian leaders. It is frequently praised for being user-friendly, having the largest number of available resources of any comparable software, and offering unique tools and datasets not found in any comparable products. However, it has also received some criticisms for its high cost and lack of speed when compared with other Bible software packages. Notes References External links Logos Bible Software official websites: Logos Verbum Electronic Bibles Electronic publishing Digital library software
360788
https://en.wikipedia.org/wiki/Backdoor%20%28computing%29
Backdoor (computing)
A backdoor is a typically covert method of bypassing normal authentication or encryption in a computer, product, embedded device (e.g. a home router), or its embodiment (e.g. part of a cryptosystem, algorithm, chipset, or even a "homunculus computer", a tiny computer-within-a-computer such as that found in Intel's AMT technology). Backdoors are most often used for securing remote access to a computer, or obtaining access to plaintext in cryptographic systems. From there it may be used to gain access to privileged information like passwords, corrupt or delete data on hard drives, or transfer information within networks. A backdoor may take the form of a hidden part of a program, a separate program (e.g. Back Orifice may subvert the system through a rootkit), code in the firmware of the hardware, or parts of an operating system such as Windows. Trojan horses can be used to create vulnerabilities in a device. A Trojan horse may appear to be an entirely legitimate program, but when executed, it triggers an activity that may install a backdoor. Although some are secretly installed, other backdoors are deliberate and widely known. These kinds of backdoors have "legitimate" uses such as providing the manufacturer with a way to restore user passwords. Many systems that store information within the cloud fail to implement adequate security measures. If many systems are connected within the cloud, hackers can gain access to all the other platforms through the most vulnerable system. Default passwords (or other default credentials) can function as backdoors if they are not changed by the user. Some debugging features can also act as backdoors if they are not removed in the release version. In 1993, the United States government attempted to deploy an encryption system, the Clipper chip, with an explicit backdoor for law enforcement and national security access. The chip was unsuccessful. Overview The threat of backdoors surfaced when multiuser and networked operating systems became widely adopted. Petersen and Turn discussed computer subversion in a paper published in the proceedings of the 1967 AFIPS Conference. They noted a class of active infiltration attacks that use "trapdoor" entry points into the system to bypass security facilities and permit direct access to data. The use of the word trapdoor here clearly coincides with more recent definitions of a backdoor. However, since the advent of public key cryptography the term trapdoor has acquired a different meaning (see trapdoor function), and the term "backdoor" is now preferred, the term trapdoor having gone out of use in this sense. More generally, such security breaches were discussed at length in a RAND Corporation task force report published under ARPA sponsorship by J.P. Anderson and D.J. Edwards in 1970. A backdoor in a login system might take the form of a hard-coded user and password combination which gives access to the system. An example of this sort of backdoor was used as a plot device in the 1983 film WarGames, in which the architect of the "WOPR" computer system had inserted a hardcoded password which gave the user access to the system, and to undocumented parts of the system (in particular, a video game-like simulation mode and direct interaction with the artificial intelligence). Although the prevalence of backdoors in systems using proprietary software (software whose source code is not publicly available) is not widely documented, they are nevertheless frequently exposed.
Programmers have even succeeded in secretly installing large amounts of benign code as Easter eggs in programs, although such cases may involve official forbearance, if not actual permission. Politics and attribution There are a number of cloak-and-dagger considerations that come into play when apportioning responsibility. Covert backdoors sometimes masquerade as inadvertent defects (bugs) for reasons of plausible deniability. In some cases, these might begin life as an actual bug (inadvertent error), which, once discovered, is then deliberately left unfixed and undisclosed, whether by a rogue employee for personal advantage, or with C-level executive awareness and oversight. It is also possible for an entirely above-board corporation's technology base to be covertly and untraceably tainted by external agents (hackers), though this level of sophistication is thought to exist mainly at the level of nation-state actors. For example, if a photomask obtained from a photomask supplier differs in a few gates from its photomask specification, a chip manufacturer would be hard-pressed to detect this if the change is otherwise functionally silent; a covert rootkit running in the photomask etching equipment could enact this discrepancy without even the photomask manufacturer's knowledge, and by such means, one backdoor potentially leads to another. (This hypothetical scenario is essentially a silicon version of the undetectable compiler backdoor, discussed below.) In general terms, the long dependency chains in the modern, highly specialized technological economy, and the innumerable human process control points, make it difficult to conclusively pinpoint responsibility at such time as a covert backdoor becomes unveiled. Even direct admissions of responsibility must be scrutinized carefully if the confessing party is beholden to other powerful interests. Examples Worms Many computer worms, such as Sobig and Mydoom, install a backdoor on the affected computer (generally a PC on broadband running Microsoft Windows and Microsoft Outlook). Such backdoors appear to be installed so that spammers can send junk e-mail from the infected machines. Others, such as the Sony/BMG rootkit, placed secretly on millions of music CDs through late 2005, are intended as DRM measures—and, in that case, as data-gathering agents, since both surreptitious programs they installed routinely contacted central servers. A sophisticated attempt to plant a backdoor in the Linux kernel, exposed in November 2003, added a small and subtle code change by subverting the revision control system. In this case, a two-line change appeared to check root access permissions of a caller to the sys_wait4 function, but because it used assignment = instead of equality checking ==, it actually granted root permissions to the caller (a simplified reconstruction appears at the end of this article). This difference is easily overlooked, and could even be interpreted as an accidental typographical error, rather than an intentional attack. In January 2014, a backdoor was discovered in certain Samsung Android products, like the Galaxy devices. The Samsung proprietary Android versions are fitted with a backdoor that provides remote access to the data stored on the device. In particular, the Samsung Android software that is in charge of handling the communications with the modem, using the Samsung IPC protocol, implements a class of requests known as remote file server (RFS) commands, that allows the backdoor operator to perform via modem remote I/O operations on the device hard disk or other storage.
As the modem is running Samsung proprietary Android software, it is likely that it offers over-the-air remote control that could then be used to issue the RFS commands and thus to access the file system on the device. Object code backdoors Harder-to-detect backdoors involve modifying object code, rather than source code – object code is much harder to inspect, as it is designed to be machine-readable, not human-readable. These backdoors can be inserted either directly in the on-disk object code, or inserted at some point during compilation, assembly, linking, or loading – in the latter case the backdoor never appears on disk, only in memory. Object code backdoors are difficult to detect by inspection of the object code, but are easily detected by simply checking for changes (differences), notably in length or in checksum, and in some cases can be detected or analyzed by disassembling the object code. Further, object code backdoors can be removed (assuming source code is available) by simply recompiling from source on a trusted system. Thus for such backdoors to avoid detection, all extant copies of a binary must be subverted, and any validation checksums must also be compromised, and source must be unavailable, to prevent recompilation. Alternatively, these other tools (length checks, diff, checksumming, disassemblers) can themselves be compromised to conceal the backdoor, for example detecting that the subverted binary is being checksummed and returning the expected value, not the actual value. To conceal these further subversions, the tools must also conceal the changes in themselves – for example, a subverted checksummer must also detect if it is checksumming itself (or other subverted tools) and return false values. This leads to extensive changes in the system and tools being needed to conceal a single change. Because object code can be regenerated by recompiling (reassembling, relinking) the original source code, making a persistent object code backdoor (without modifying source code) requires subverting the compiler itself – so that when it detects that it is compiling the program under attack it inserts the backdoor – or alternatively the assembler, linker, or loader. As this requires subverting the compiler, this in turn can be fixed by recompiling the compiler, removing the backdoor insertion code. This defense can in turn be subverted by putting a source meta-backdoor in the compiler, so that when it detects that it is compiling itself it then inserts this meta-backdoor generator, together with the original backdoor generator for the original program under attack. After this is done, the source meta-backdoor can be removed, and the compiler recompiled from original source with the compromised compiler executable: the backdoor has been bootstrapped. This attack dates to , and was popularized in Thompson's 1984 article, entitled "Reflections on Trusting Trust"; it is hence colloquially known as the "Trusting Trust" attack. See compiler backdoors, below, for details. Analogous attacks can target lower levels of the system, such as the operating system, and can be inserted during the system booting process; these are also mentioned in , and now exist in the form of boot sector viruses.
An asymmetric backdoor can only be used by the attacker who plants it, even if the full implementation of the backdoor becomes public (e.g., via publishing, being discovered and disclosed by reverse engineering, etc.). Also, it is computationally intractable to detect the presence of an asymmetric backdoor under black-box queries. This class of attacks has been termed kleptography; they can be carried out in software, hardware (for example, smartcards), or a combination of the two. The theory of asymmetric backdoors is part of a larger field now called cryptovirology. Notably, the NSA inserted a kleptographic backdoor into the Dual EC DRBG standard. There exists an experimental asymmetric backdoor in RSA key generation. This OpenSSL RSA backdoor, designed by Young and Yung, utilizes a twisted pair of elliptic curves, and has been made available. Compiler backdoors A sophisticated form of black box backdoor is a compiler backdoor, where not only is a compiler subverted (to insert a backdoor in some other program, such as a login program), but it is further modified to detect when it is compiling itself and then inserts both the backdoor insertion code (targeting the other program) and the code-modifying self-compilation, like the mechanism through which retroviruses infect their host. This can be done by modifying the source code, and the resulting compromised compiler (object code) can compile the original (unmodified) source code and insert itself: the exploit has been boot-strapped. This attack was originally presented in , a United States Air Force security analysis of Multics, which described such an attack on a PL/I compiler and called it a "compiler trap door"; it also mentioned a variant where the system initialization code is modified to insert a backdoor during booting, which, being complex and poorly understood, was called an "initialization trapdoor"; this is now known as a boot sector virus. This attack was then actually implemented by Ken Thompson, and popularized in his Turing Award acceptance speech in 1983 (published 1984), "Reflections on Trusting Trust", which points out that trust is relative, and the only software one can truly trust is code where every step of the bootstrapping has been inspected. This backdoor mechanism is based on the fact that people only review source (human-written) code, and not compiled machine code (object code). A program called a compiler is used to create the second from the first, and the compiler is usually trusted to do an honest job. Thompson's paper describes a modified version of the Unix C compiler that would put an invisible backdoor in the Unix login command when it noticed that the login program was being compiled, and would also add this feature undetectably to future compiler versions upon their compilation as well. Because the compiler itself was a compiled program, users would be extremely unlikely to notice the machine code instructions that performed these tasks. (Because of the second task, the compiler's source code would appear "clean".) Worse, in Thompson's proof-of-concept implementation, the subverted compiler also subverted the analysis program (the disassembler), so that anyone who examined the binaries in the usual way would not actually see the real code that was running, but something else instead. An updated analysis of the original exploit is given in , and a historical overview and survey of the literature is given in .
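The self-replicating logic described above can be summarized in a schematic C sketch. This is an illustration of the idea only, not Thompson's actual code: the pattern matching is deliberately naive, and the emit functions merely print what a subverted compiler would generate.

#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Schematic of a "Trusting Trust" compiler backdoor. */
static bool compiling(const char *src, const char *what)
{
    return strstr(src, what) != NULL;  /* crude match on the source text */
}

static void compile(const char *src)
{
    if (compiling(src, "login")) {
        /* Target program: also emit a hidden check for a master password. */
        puts("emit: login backdoor (accepts a hard-coded password)");
    }
    if (compiling(src, "cc")) {
        /* The compiler itself: re-emit this whole insertion logic so it
         * survives recompilation from clean compiler sources. */
        puts("emit: self-replicating backdoor inserter");
    }
    puts("emit: normal translation of the source");
}

int main(void)
{
    compile("int login(char *user, char *pw) { ... }");
    compile("/* source of cc, the C compiler */");
    return 0;
}

The second branch is what makes the attack persistent: once a compromised compiler binary exists, the malicious logic can be deleted from the compiler's source, yet every future compiler built with that binary re-inserts it.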
Occurrences Thompson's version was, officially, never released into the wild. It is believed, however, that a version was distributed to BBN and at least one use of the backdoor was recorded. There are scattered anecdotal reports of such backdoors in subsequent years. In August 2009, an attack of this kind was discovered by Sophos labs. The W32/Induc-A virus infected the program compiler for Delphi, a Windows programming language. The virus introduced its own code to the compilation of new Delphi programs, allowing it to infect and propagate to many systems, without the knowledge of the software programmer. An attack that propagates by building its own Trojan horse can be especially hard to discover. It is believed that the Induc-A virus had been propagating for at least a year before it was discovered. Countermeasures Once a system has been compromised with a backdoor or Trojan horse, such as the Trusting Trust compiler, it is very hard for the "rightful" user to regain control of the system – typically one should rebuild a clean system and transfer data (but not executables) over. However, several practical weaknesses in the Trusting Trust scheme have been suggested. For example, a sufficiently motivated user could painstakingly review the machine code of the untrusted compiler before using it. As mentioned above, there are ways to hide the Trojan horse, such as subverting the disassembler; but there are ways to counter that defense, too, such as writing a disassembler from scratch. A generic method to counter trusting trust attacks is called Diverse Double-Compiling (DDC). The method requires a different compiler and the source code of the compiler-under-test. That source, compiled with both compilers, results in two different stage-1 compilers, which however should have the same behavior. Thus the same source compiled with both stage-1 compilers must then result in two identical stage-2 compilers. A formal proof is given that the latter comparison guarantees that the purported source code and executable of the compiler-under-test correspond, under some assumptions. This method was applied by its author to verify that the C compiler of the GCC suite (v. 3.0.4) contained no trojan, using icc (v. 11.0) as the different compiler. In practice such verifications are not done by end users, except in extreme circumstances of intrusion detection and analysis, due to the rarity of such sophisticated attacks, and because programs are typically distributed in binary form. Removing backdoors (including compiler backdoors) is typically done by simply rebuilding a clean system. However, the sophisticated verifications are of interest to operating system vendors, to ensure that they are not distributing a compromised system, and in high-security settings, where such attacks are a realistic concern. List of known backdoors Back Orifice was created in 1998 by hackers from Cult of the Dead Cow group as a remote administration tool. It allowed Windows computers to be remotely controlled over a network and parodied the name of Microsoft's BackOffice. The Dual EC DRBG cryptographically secure pseudorandom number generator was revealed in 2013 to possibly have a kleptographic backdoor deliberately inserted by NSA, who also had the private key to the backdoor. Several backdoors in the unlicensed copies of WordPress plug-ins were discovered in March 2014. They were inserted as obfuscated JavaScript code and silently created, for example, an admin account in the website database. 
A similar scheme was later exposed in a Joomla plugin. Borland Interbase versions 4.0 through 6.0 had a hard-coded backdoor, put there by the developers. The server code contains a compiled-in backdoor account (username: politically, password: correct), which could be accessed over a network connection; a user logging in with this backdoor account could take full control over all Interbase databases. The backdoor was detected in 2001 and a patch was released. A Juniper Networks backdoor, inserted in 2008 into versions of the ScreenOS firmware from 6.2.0r15 to 6.2.0r18 and from 6.3.0r12 to 6.3.0r20, gives any user administrative access when using a special master password. Several backdoors were discovered in C-DATA Optical Line Termination (OLT) devices. Researchers released the findings without notifying C-DATA because they believe the backdoors were intentionally placed by the vendor. See also Backdoor:Win32.Hupigon Backdoor.Win32.Seed Hardware backdoor Titanium (malware) References Further reading External links Finding and Removing Backdoors Three Archaic Backdoor Trojan Programs That Still Serve Great Pranks Backdoors removal — List of backdoors and their removal instructions. FAQ Farm's Backdoors FAQ: wiki question and answer forum Types of malware Spyware Espionage techniques Rootkits Cryptography
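As a postscript to the examples above, the 2003 Linux kernel incident turned on a single character. The C program below is a simplified reconstruction of the reported pattern, not the actual kernel diff; the structure, flag values, and identifier names are illustrative approximations.

#include <stdio.h>

/* Simplified reconstruction of the 2003 sys_wait4 backdoor attempt.
 * The condition looks like a harmless permission check, but "uid = 0"
 * is an assignment, not a comparison: it silently grants root (uid 0)
 * to any caller passing the magic option flags. */

#define WCLONE 0x1
#define WALL   0x2

struct task { int uid; };

static int wait4_check(struct task *current, int options)
{
    if ((options == (WCLONE | WALL)) && (current->uid = 0))  /* '=' not '==' */
        return -1;  /* appears to reject "invalid" callers */
    return 0;
}

int main(void)
{
    struct task t = { .uid = 1000 };        /* unprivileged caller */
    wait4_check(&t, WCLONE | WALL);         /* pass the magic flags */
    printf("uid after call: %d\n", t.uid);  /* now 0: root */
    return 0;
}

Because the assignment evaluates to 0 (false), the branch never reports an error; the only observable effect is that the caller's uid has silently become 0, which is what made the change plausible as an innocent typo.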
36041047
https://en.wikipedia.org/wiki/Watch%20Dogs%20%28video%20game%29
Watch Dogs (video game)
Watch Dogs (stylized as WATCH_DOGS) is a 2014 action-adventure game developed by Ubisoft Montreal and published by Ubisoft. It is the first installment in the Watch Dogs series. The game is played from a third-person perspective, and its world is navigated on foot or by vehicle. Set within a fictionalized version of the Chicago area in 2013, the single-player story follows grey hat hacker and vigilante Aiden Pearce's quest for revenge after the killing of his niece. An online multiplayer mode allows up to eight players to engage in cooperative and competitive gameplay. Development of the game began in 2009 and continued for over five years. Duties were shared by many of Ubisoft's studios worldwide, with more than a thousand people involved. The developers visited Chicago to conduct field research on the setting and used regional language for authenticity. Hacking features were created in consultation with the cyber-security company Kaspersky Lab, and the in-game control system was based on SCADA. The score was composed by Brian Reitzell, who infused it with krautrock. Following its announcement in June 2012, Watch Dogs was widely anticipated. It was released for Microsoft Windows, PlayStation 3, PlayStation 4, Xbox 360, Xbox One, and Wii U in 2014. The game received generally favorable reviews: praise was directed at the gameplay, the mission and open-world design, the combat system, the hacking elements, and the mission variety, while criticism concerned technical issues, the discrepancy in graphical quality between the marketing material and the actual game, the plot, and the protagonist. Watch Dogs was a commercial success, breaking the record for the biggest first-day sales of a Ubisoft game and becoming the biggest launch of a new intellectual property in the United Kingdom at the time. The game has shipped over 10 million copies. A sequel, Watch Dogs 2, was released in November 2016, and a third game, Watch Dogs: Legion, was released in October 2020.

Gameplay
Watch Dogs is an action-adventure game, played from a third-person view. The player controls hacker Aiden Pearce, who uses his smartphone to control trains and traffic lights, infiltrate security systems, jam cellphones, access pedestrians' private information, and empty their bank accounts. System hacking involves the solving of puzzles. The game is set in a fictionalized version of Chicago ("Windy City"), an open world environment which permits free-roaming. It has a day-night cycle and a dynamic weather system, which change the behavior of non-player characters (NPCs). For melee combat, Aiden has an extensible truncheon; other combat uses handguns, shotguns, sniper rifles, machine guns, and grenade launchers. There is a slow-motion option for gunplay, and the player can use proximity IEDs, grenades, and electronic lures. Lethal and non-lethal mission approaches can be enacted. Aiden can scale vertical surfaces, hack forklifts and aerial work platforms to reach places otherwise unreachable, and crouch behind walls to hide from enemies. The player has an array of vehicles with which to navigate the setting, including motorcycles, muscle cars, off-road vehicles, SUVs, luxury vehicles, sports cars, and speedboats. The car radio is customizable, with about fifty songs. If the player steals a car, its driver can summon the police; if the player has a good reputation, the driver may acquiesce. A good reputation may be gained by detecting (and stopping) crimes, and a bad reputation results from committing crimes.
The skill tree is upgraded with points earned from hacking, combat, driving, and crafted items. Money can be used to purchase guns, outfits, and vehicles. There are several minigames, ranging from killing aliens to controlling a large, robotic spider. QR codes and audio logs are included as collectibles; ctOS towers unlock map icons and side missions. Multiplayer mode can host up to seven other free-roaming players, with whom the player may complete hacking contracts and engage in races. The player can also be hacked by others, who will perceive the target as an NPC (leaving the target to find the perpetrator). In-game invasions can be disabled, but this will reset the multiplayer skill rank to zero. Free-roaming with multiple players and decryption mode, where two teams of four are tasked with acquiring and holding data, were excluded from the Xbox 360 and PlayStation 3 versions. The game's recreation of the Chicago metropolitan area is anchored by six regions on the map: Parker Square, which resembles the city's northern and western areas; the Loop, based on the Chicago Loop; Brandon Docks, a loose recreation of the south side's industrial district; the Wards, based on Englewood; Mad Mile, based on the Magnificent Mile; and Pawnee, a faux rural town resembling some of the city's suburbs.

Plot
In 2012, Chicago becomes the first city in the world to implement ctOS (central Operating System) – a computing network connecting every device together into a single system, developed by technology company Blume. While conducting an electronic heist at the high-end Merlaut Hotel, hacker Aiden Pearce and his mentor and partner, Damien Brenks, trigger a silent alarm set by another hacker. Damien tries to find the hacker, giving himself and Aiden away. Fearing for his family, Aiden drives them to safety in the guise of a surprise trip to the country. On the way, hitman Maurice Vega attacks them, resulting in a car crash that kills Aiden's six-year-old niece Lena. A year later, Aiden tracks down Vega at a baseball stadium, but is unsuccessful in learning the identity of his contractor. Leaving Vega in the hands of his partner, hired fixer Jordi Chin, Aiden visits his sister Nicole and nephew Jackson for the latter's birthday but learns someone is harassing them. With the help of Clara Lille, a member of the hacking syndicate DedSec that is trying to expose Blume's corruption, Aiden tracks down the harasser, revealed to be Damien, who wanted to get Aiden's attention so that Aiden would help him find the other hacker from the Merlaut job. Aiden refuses but, after dealing with a witness from the stadium, he learns that Damien kidnapped Nicole to force him to comply. After setting up a new hideout in the Bunker – an undetectable former Blume base with access to ctOS – Aiden tracks down the hacker with Clara's help: gang leader and Army veteran Delford "Iraq" Wade. To reach Iraq's servers, Aiden infiltrates a human auction Iraq is attending to copy his access key, and blackmails Iraq's cousin Tyrone "Bedbug" Hayes into acting as his inside man. Bedbug manages to obtain a data sample revealing that Iraq has information on almost every citizen of Chicago, protecting his gang from the authorities via blackmail. When Aiden and Clara come across encrypted data beyond their abilities, they track down legendary hacker and former Blume whistleblower Raymond "T-Bone" Kenney, who caused the Northeast blackout of 2003 while trying to expose the dangers of ctOS, which he had helped to create.
Aiden infiltrates the Blume headquarters to erase Kenney's identity from ctOS and allow him to return to Chicago. However, Damien gives up Kenney's location in exchange for full access to ctOS, forcing Aiden to rescue him before he is killed by Blume's private security forces. Afterward, he assaults Iraq's compound to finish downloading its server data and kills Iraq when he confronts him. Aiden, Kenney, and Clara are unable to decrypt the data because another hacker, JB "Defalt" Markowicz, infiltrates their system, steals it, and deletes it from their servers. Defalt also reveals that Clara was hired to track down Aiden after the Merlaut job, making her indirectly responsible for Lena's death, which causes Aiden to angrily dismiss her. Meanwhile, annoyed with Aiden's lack of progress, Damien reveals his identity to the authorities. After retrieving the data from Defalt, Kenney helps Aiden track down Nicole, allowing him to rescue her and take her and Jackson out of Chicago for their safety. As Kenney finishes decrypting the data, he informs Aiden of who ordered the hit on him: Irish Mob boss and Merlaut owner Dermot "Lucky" Quinn. Aiden confronts Quinn, who reveals he ordered the hit because he believed Aiden was searching for secret video footage of Mayor Donovan Rushmore killing his secretary after she learned of his dealings with Quinn, which the latter had used to blackmail Rushmore. After killing Quinn by hacking his pacemaker, Aiden is informed by Damien that Quinn sent hitmen after Clara for being a liability. Unable to save Clara, Aiden makes all the blackmail material public, enraging Damien, who begins to wreak havoc in Chicago using ctOS. Aiden shuts down the system using a virus created by Kenney and tracks down Damien to a lighthouse. Jordi arrives, having been hired to kill both men, but Aiden injures him and kills Damien. Later, Jordi calls Aiden one last time to tell him where Vega is kept; Aiden heads there and decides his fate.

Bad Blood
In 2014, a year after the main events of Watch Dogs, Kenney decides to leave Chicago after performing what he thinks is his last hacking job: removing more data about himself from the Blume servers and planting a fake trail to lead Blume away from him. However, after rescuing his former colleague Tobias Frewer from some Fixers who had kidnapped him, Kenney decides to stay in Chicago until he learns who is out to get him and Frewer. The pair investigate the Fixers trying to capture them and eventually discover that they were hired by Defalt, who is working with Blume to exact revenge on Kenney. Kenney manages to track down Defalt, but while investigating his hideout, he finds mannequins representing the people who died during the blackout he had caused eleven years ago. Among them is a mannequin wearing a mask similar to Defalt's, which leads Kenney to realize the true reason for Defalt's vendetta against him: his brother was among the blackout victims. This distresses Kenney, as he never intended for anyone to die during the blackout and has been living with remorse ever since. After fending off Fixers sent by Defalt to kill them, Kenney and Frewer infiltrate his hideout, but are eventually separated and Kenney becomes trapped in a room where he is forced to confront Defalt and the families of the other blackout victims. Defalt holds a vote to decide Kenney's fate and the majority choose for him to die, causing the room he is in to be filled with gas.
Kenney begins to asphyxiate, but manages to hack the building's ventilation system through Frewer's phone, rerouting the gas to Defalt's room and seemingly killing him. Kenney is rescued by Frewer and decides to stay in Chicago to fight Blume with his help, hoping to recruit Aiden into their team as well.

Development
Beginning in 2009 with ten people and expanding to over a thousand, Watch Dogs was developed over five years (from prototype to finished game) on a budget of about $68 million. Before its announcement in 2012, the game had the working title of Nexus. The initial sales pitch was the notion that one could control an entire city with the push of a button. Watch Dogs runs on the game engine Disrupt, which developer Ubisoft Montreal created for it (although it was originally intended for another game in the Driver franchise). According to producer Dominic Guay, Disrupt has three pillars: simulation of the environment and its contents, how the environment can be affected, and the connectivity of a seamless online experience. Director Jonathan Morin said that the PlayStation 4 technology allowed for better wind and water simulations and artificial intelligence. Regional colloquialisms specific to Chicago were added to give the game authenticity. The developer traveled to the city several times for field research, taking photos, recording audio, meeting people, and interviewing the Chicago Police Department to gain insight. Landmarks were designed to only resemble their real-life counterparts, because acquiring the artistic rights would have tripled the budget. Fictionalizing Chicago allowed the developers to fit several district themes on one map. The city was chosen as the setting for its contradictory nature, with great wealth and abject poverty. Morin said that after it was chosen, Chicago became the most-surveilled city in America. The developers noted that character movements in games like Assassin's Creed were the same in every situation, and attempted to rectify this in Watch Dogs to contextualize protagonist Aiden Pearce. The in-game control system ctOS was based on the SCADA system, and the story was inspired by the cyber-attack by the computer worm Stuxnet on SCADA. The developers consulted the cyber-security company Kaspersky Lab (which discovered the Stuxnet worm) about the hacking features to increase their authenticity. To create hacking factions in the game (like DedSec), the developer was influenced by the hacktivist group Anonymous, state-sponsored hackers, and tales of corporate espionage. Grand Theft Auto and Saints Row were studied to develop complex controls, so the core hacking mechanic was reduced to a single button. When the game was delayed, ideas long set aside (like the hacking of a headset) could be implemented. The score, by Brian Reitzell, was released on vinyl and CD by Invada Records in partnership with Ubisoft Music on July 28, 2014. Reitzell began composing by reading the script, concluding that krautrock would suit Watch Dogs. The game was released to manufacturing in May 2014.

Release
Ubisoft announced Watch Dogs at its June 4, 2012 E3 press conference, confirming it for Microsoft Windows, PlayStation 3, and Xbox 360. A QR code that appeared in the first gameplay demonstration as viral marketing led to a website called DotConnexion, which contained information about the in-game world. PlayStation 4, Xbox One, and Wii U were announced in early 2013 as additional platforms for Watch Dogs, which was scheduled for release later that year.
Originally set for release that November, the game was delayed by the developer until early 2014 so as to "not compromise on quality". The final release date was set for May 27, 2014, before its release for the Wii U on November 18 in North America, November 20 in Australia and New Zealand, and November 21 in Europe. A free mobile app was released for iOS and Android devices, with Watch Dogs players connecting with console or PC users for two racing modes. Four collector's editions were available, and the GameStop pre-order version contained a poster by Alex Ross. An eBook by John Shirley, //n/Dark Clouds, a continuation of Watch Dogs, was released in conjunction with the game. Watch Dogs was marketed with a live-action sequence directed by Devin Graham and a three-part documentary series by Motherboard, Phreaked Out. After promotional material was sent to Australian media as a safe containing the game and a voicemail explaining the delivery, the Sydney offices of Nine.com.au (unaware of the voicemail) called a bomb disposal unit when the safe beeped; Ubisoft apologized to the staff. A story expansion titled Bad Blood was released in September 2014. It is set one year after the events of Watch Dogs and follows the exploits of Raymond Kenney after Aiden Pearce left Chicago. The expansion is separate from the main game and features ten story missions, as well as new side content and gameplay mechanics, such as an RC car.

Reception
The reveal of Watch Dogs at E3 2012 generated a favorable response from critics. According to leaked marketing material, it received over eighty E3 awards and nominations, and G4tv called it a "truly next-gen adventure". At E3 2013, the game received over fifty awards and nominations. The following year, Ubisoft was accused of graphical downgrading after a NeoGAF forum post compared its 2012 visuals to 2014 video footage. According to Ubisoft researcher Thomas Geffroyd, studies and quantitative analysis indicated that about sixty percent of gamers changed their view of technology after playing Watch Dogs. Chris Carter of Destructoid liked the virtual rendition of Chicago and the detail of non-player characters, concluding that the gameplay was fun. His favorite feature was the extra content, like collectibles and minigames. Eurogamer's Dan Whitehead found the car-handling "slick and intuitive" and called the visuals "dazzling". In Game Informer, Jeff Marchiafava wrote that the hacking added meaning to the combat and that the shooting mechanic "make[s] full-scale firefights enjoyable", and he praised the stealth approach. He found that the variety of gameplay and environments in the campaign missions provided an "entertaining" experience. Kevin VanOrd of GameSpot thought the game's primary strength was in combat and was grateful for the hacking component's contribution. VanOrd said that the story only flourished when it left behind the "revenge-story cliches", and he felt more attached to supporting characters than to Aiden Pearce. He praised the game's open world, particularly its extra content. At GamesRadar, Andy Hartup said he enjoyed the set pieces and individual missions: "You'll enjoy Watch Dogs' narrative in piecemeal, rather than as a whole". He praised the setting's details (which he thought revealed "the city's true beauty"), and called the combat and hacking "satisfying". Dan Stapleton of IGN praised the game's visuals and the open world's "intricately detailed" map. He also enjoyed the combat: "The cover-based gunplay feels good".
PC Gamer's Christopher Livingston called the hacking the game's most positive feature; although he liked the stealth, his favorite approach to battle was gunplay. For Polygon, Arthur Gies praised the combat's gunplay feature and wrote that it was "aided by a good, functional third-person cover system, which helps with more than just shooting — it also allows for effective stealth". Steven Burns of VideoGamer.com called the game "undoubtedly enjoyable, but it won't linger long in the memory". Carter was dissatisfied with the story, citing "lifeless" characters and calling the plot's events "fairly predictable and clichéd"; he also thought the graphics inferior to the game's marketing footage. Whitehead criticized the hacking for resorting to "tired old PipeMania-style" puzzles and saw the story and main character as the game's weakest aspects, saying that the script avoided the moral dilemmas offered by its set-up. Marchiafava agreed with Carter that the graphics were less impressive than they were in early videos, and criticized the poor character design and the story for not living up to its gameplay. Like Whitehead, VanOrd said that the predicament surrounding technology "rarely reaches any conclusions or digs very deeply". Hartup criticized the main character, calling him "a bit of a dullard", and faulted the story for its reliance on "unimaginative stereotypes". Jeff Gerstmann of Giant Bomb criticized the story and characters for a lack of organization, which he said made it difficult to care about the game. Stapleton disliked the main character's lack of personality, and found the supporting cast more interesting. Livingston saw little to like in the story, and also preferred another character to Aiden. Gies wrote, "After a promising (albeit well-trod) start, Watch Dogs' plot struggles to remain coherent", and likened the characters to caricatures. Tom Watson wrote in New Statesman that the game "has so many complex side missions and obligatory tasks that it becomes dull; it's humourlessly derivative of the open world of Grand Theft Auto V".

Sales
Watch Dogs was Ubisoft's most pre-ordered new intellectual property (IP), its second-highest pre-ordered title, the most pre-ordered game of the year (more than 800,000 copies), and the most pre-ordered game for the eighth generation of video game consoles. The company's executives predicted over 6.3 million copies in overall sales. The day it was released, Watch Dogs sold the most copies of any Ubisoft title in a 24-hour period. It was the bestselling new IP ever in the United Kingdom in its first week (beating Assassin's Creed III's record by more than 17 percent), and was the seventeenth-largest game launch of all time in the UK. Most sales were for the PlayStation 4, whose hardware sales increased by 94 percent because of Watch Dogs. After the first week, four million copies of the game had been sold. Watch Dogs sold 63,000 copies of the PlayStation 4 version and 31,000 of the PlayStation 3 version in its Japanese debut. By July 2014, the game had sold over eight million copies. It was the biggest video-game launch of the year in Britain until Destiny was released in September, and the third-bestselling game during the second week of that month. By October, nine million copies had been shipped worldwide, two-thirds of which were for Microsoft Windows, PlayStation 4, and Xbox One. According to Ubisoft's sales figures, Watch Dogs had sold ten million copies by the fiscal quarter ending December 31, 2014.
Awards

Adaptations and sequels
In June 2013, Variety reported that Ubisoft Motion Pictures was planning a film adaptation. That August, Sony Pictures Entertainment and New Regency were announced to be partnering with Ubisoft Motion Pictures on the project. Paul Wernick and Rhett Reese were later commissioned to write the film. A sequel, Watch Dogs 2, was released in November 2016, followed by Watch Dogs: Legion in October 2020. An animated series is also in development.

Notes

References

Further reading

External links

2014 video games Action-adventure games Advertising and marketing controversies Asymmetrical multiplayer video games Augmented reality in fiction Cameras in fiction Corporate warfare in fiction Cybernetted society in fiction Cyberpunk video games Gangs in fiction Hacking video games Malware in fiction Video games about mass surveillance Multiplayer and single-player video games Neo-noir video games Nintendo Network games Organized crime video games Open-world video games Parkour video games PlayStation 3 games PlayStation 4 games Postcyberpunk Video games set in prison Propaganda in fiction Science fiction video games Stadia games Stealth video games Ubisoft games Video games about revenge Video games developed in Canada Video games set in 2012 Video games set in 2013 Video games set in 2014 Video games set in Chicago Video games set in Illinois Video games with downloadable content Wii U eShop games Wii U games Windows games Xbox 360 games Xbox One games Video games using Havok Video games about virtual reality
7255138
https://en.wikipedia.org/wiki/Computer%20network%20operations
Computer network operations
Computer network operations (CNO) is a broad term that has both military and civilian application. Conventional wisdom is that information is power, and more and more of the information necessary to make decisions is digitized and conveyed over an ever-expanding network of computers and other electronic devices. Computer network operations are deliberate actions taken to leverage and optimize these networks to improve human endeavor and enterprise or, in warfare, to gain information superiority and deny the enemy this enabling capability.

In the military domain
Within the United States military domain, CNO is considered one of five core capabilities under Information Operations (IO), also known as Information Warfare. The other capabilities are Psychological Operations (PSYOP), Military Deception (MILDEC), Operations Security (OPSEC) and Electronic Warfare (EW). Other national military organizations may use different designations. Computer Network Operations, in concert with electronic warfare (EW), is used primarily to disrupt, disable, degrade or deceive an enemy's command and control, thereby crippling the enemy's ability to make effective and timely decisions, while simultaneously protecting and preserving friendly command and control.

Types of military CNO
According to Joint Pub 3-13, CNO consists of computer network attack (CNA), computer network defense (CND) and computer network exploitation (CNE).
Computer Network Attack (CNA): Includes actions taken via computer networks to disrupt, deny, degrade, or destroy the information within computers and computer networks and/or the computers/networks themselves.
Computer Network Defense (CND): Includes actions taken via computer networks to protect, monitor, analyze, detect and respond to network attacks, intrusions, disruptions or other unauthorized actions that would compromise or cripple defense information systems and networks. Joint Pub 6-0 further outlines Computer Network Defense as an aspect of NetOps.
Computer Network Exploitation (CNE): Includes enabling actions and intelligence collection via computer networks that exploit data gathered from target or enemy information systems or networks.

See also
Cyberwarfare in the United States
Chinese information operations and information warfare
Cyberwarfare by Russia

References

External links
Cyber, War and Law
United States Army Combined Arms Center

United States Department of Defense doctrine
30164499
https://en.wikipedia.org/wiki/MalCon
MalCon
The International Malware Conference, abbreviated as MalCon and stylized as MALCON, is a computer security conference focused on the development of malware. Announcements first made at MalCon include malware that can share USB smart card reader data, Windows Phone 8 malware, security problems with counterfeit phones, and the AirHopper attack.

References

External links

Hacker conventions Recurring events established in 2010
12283899
https://en.wikipedia.org/wiki/Katharina%20Hacker
Katharina Hacker
Katharina Hacker (born 11 January 1967) is a German author best known for her award-winning novel Die Habenichtse (The Have-Nots). Hacker studied philosophy, history and Jewish studies at the University of Freiburg and the University of Jerusalem. Her studies in Israel have been seen as an attempt to compensate for the strong anti-Semitic feelings of her Silesian grandmother. She did not complete her studies with an academic degree. Since 1996, she has been living as a freelance writer in Berlin. In 2006, she became the second writer to be awarded the German Book Prize, for Die Habenichtse. In this and other works, Hacker examines the consequences of globalization and neoliberalism on the working life, social relations, and family interactions of her German protagonists.

Works
Tel Aviv. Eine Stadterzählung (narrative, 1997)
Morpheus oder Der Schnabelschuh (narratives, 1998; published in English as Morpheus, 2003)
Der Bademeister (novel, 2000; published in English as The Lifeguard, 2002)
Eine Art Liebe (novel, 2003)
Die Habenichtse (novel, 2006; published in English as The Have-Nots, 2007)
Überlandleitung (prose poems, 2007)
Alix, Anton und die anderen (novel, Suhrkamp, Frankfurt am Main 2009)
Die Erdbeeren von Antons Mutter (novel, Fischer, Frankfurt am Main 2010)
Eine Dorfgeschichte (short novel, S. Fischer Verlag, Frankfurt am Main 2011)
Skip (novel, S. Fischer Verlag, Frankfurt am Main 2015)

Translations
Leah Aini: Eine muß da sein. Novel. Suhrkamp, Frankfurt am Main 1997
Jossi Avni: Der Garten der toten Bäume. Novel in 15 parts, Hamburg 2000; new edition: Hamburg 2006

Notes

1967 births Living people Writers from Frankfurt 21st-century German novelists German women novelists 21st-century German women writers German Book Prize winners
1471164
https://en.wikipedia.org/wiki/Direct%20Rendering%20Manager
Direct Rendering Manager
The Direct Rendering Manager (DRM) is a subsystem of the Linux kernel responsible for interfacing with GPUs of modern video cards. DRM exposes an API that user-space programs can use to send commands and data to the GPU and perform operations such as configuring the mode setting of the display. DRM was first developed as the kernel-space component of the X Server Direct Rendering Infrastructure, but since then it has been used by other graphic stack alternatives such as Wayland. User-space programs can use the DRM API to command the GPU to do hardware-accelerated 3D rendering and video decoding, as well as GPGPU computing.

Overview
The Linux kernel already had an API called fbdev, used to manage the framebuffer of a graphics adapter, but it couldn't be used to handle the needs of modern 3D-accelerated GPU-based video hardware. These devices usually require setting and managing a command queue in their own memory to dispatch commands to the GPU, and they also require management of buffers and free space within that memory. Initially, user-space programs (such as the X Server) directly managed these resources, but they usually acted as if they were the only ones with access to them. When two or more programs tried to control the same hardware at the same time, each setting its resources in its own way, the outcome was usually catastrophic.

The Direct Rendering Manager was created to allow multiple programs to use video hardware resources cooperatively. DRM gets exclusive access to the GPU and is responsible for initializing and maintaining the command queue, memory, and any other hardware resource. Programs wishing to use the GPU send requests to DRM, which acts as an arbitrator and takes care to avoid possible conflicts.

The scope of DRM has been expanded over the years to cover more functionality previously handled by user-space programs, such as framebuffer managing and mode setting, memory-sharing objects and memory synchronization. Some of these expansions were given specific names, such as Graphics Execution Manager (GEM) or kernel mode-setting (KMS), and the terminology prevails when the functionality they provide is specifically alluded to; but they are really parts of the whole kernel DRM subsystem. The trend to include two GPUs in a computer—a discrete GPU and an integrated one—led to new problems such as GPU switching that also needed to be solved at the DRM layer. In order to match the Nvidia Optimus technology, DRM was provided with GPU offloading abilities, called PRIME.

Software architecture
The Direct Rendering Manager resides in kernel space, so user-space programs must use kernel system calls to request its services. However, DRM doesn't define its own customized system calls. Instead, it follows the Unix principle of "everything is a file" to expose the GPUs through the filesystem name space, using device files under the /dev hierarchy. Each GPU detected by DRM is referred to as a DRM device, and a device file /dev/dri/cardX (where X is a sequential number) is created to interface with it. User-space programs that want to talk to the GPU must open this file and use ioctl calls to communicate with DRM. Different ioctls correspond to different functions of the DRM API.

A library called libdrm was created to facilitate the interface of user-space programs with the DRM subsystem. This library is merely a wrapper that provides a function written in C for every ioctl of the DRM API, as well as constants, structures and other helper elements. The use of libdrm not only avoids exposing the kernel interface directly to applications, but offers the usual advantages of reusing and sharing code between programs.
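As a minimal illustration of this interface, the following sketch opens the first DRM device node and asks which driver sits behind it; drmGetVersion is the libdrm wrapper around the corresponding version ioctl. The device path assumes a single-GPU system, and the program should be linked with -ldrm.

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>
    #include <xf86drm.h>

    int main(void)
    {
        int fd = open("/dev/dri/card0", O_RDWR);   /* first DRM device node */
        if (fd < 0) {
            perror("open");
            return 1;
        }
        drmVersionPtr v = drmGetVersion(fd);       /* wraps the version ioctl */
        if (v) {
            printf("driver: %s (%d.%d.%d)\n", v->name,
                   v->version_major, v->version_minor, v->version_patchlevel);
            drmFreeVersion(v);                     /* free the copy libdrm made */
        }
        close(fd);
        return 0;
    }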
DRM consists of two parts: a generic "DRM core" and a specific one ("DRM driver") for each type of supported hardware. DRM core provides the basic framework where different DRM drivers can register, and also provides user space with a minimal set of ioctls with common, hardware-independent functionality. A DRM driver, on the other hand, implements the hardware-dependent part of the API, specific to the type of GPU it supports; it should provide the implementation of the remaining ioctls not covered by DRM core, but it may also extend the API, offering additional ioctls with extra functionality only available on such hardware. When a specific DRM driver provides an enhanced API, user-space libdrm is also extended by an extra library libdrm-driver that user space can use to interface with the additional ioctls.

API
The DRM core exports several interfaces to user-space applications, generally intended to be used through corresponding libdrm wrapper functions. In addition, drivers export device-specific interfaces for use by user-space drivers and device-aware applications through ioctls and sysfs files. External interfaces include: memory mapping, context management, DMA operations, AGP management, vblank control, fence management, memory management, and output management.

DRM-Master and DRM-Auth
There are several operations (ioctls) in the DRM API that, either for security purposes or for concurrency issues, must be restricted to a single user-space process per device. To implement this restriction, DRM limits such ioctls to be invoked only by the process considered the "master" of a DRM device, usually called DRM-Master. Only one of all the processes that have the device node /dev/dri/cardX opened will have its file handle marked as master, specifically the first one calling the DRM_IOCTL_SET_MASTER ioctl. Any attempt to use one of these restricted ioctls without being the DRM-Master will return an error. A process can also give up its master role—and let another process acquire it—by calling the DRM_IOCTL_DROP_MASTER ioctl.

The X Server—or any other display server—is commonly the process that acquires the DRM-Master status in every DRM device it manages, usually when it opens the corresponding device node during its startup, and keeps these privileges for the entire graphical session until it finishes or dies.

For the remaining user-space processes there is another way to gain the privilege to invoke some restricted operations on the DRM device, called DRM-Auth. It is basically a method of authentication against the DRM device, in order to prove to it that the process has the DRM-Master's approval to get such privileges. The procedure consists of:
The client gets a unique token—a 32-bit integer—from the DRM device using the DRM_IOCTL_GET_MAGIC ioctl and passes it to the DRM-Master process by whatever means (normally some sort of IPC; for example, in DRI2 there is a request that any X client can send to the X Server).
The DRM-Master process, in turn, sends back the token to the DRM device by invoking the DRM_IOCTL_AUTH_MAGIC ioctl.
The device grants special rights to the process file handle whose auth token matches the received token from the DRM-Master.
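In terms of code, each half of this handshake is a single libdrm call. The sketch below shows them as two hypothetical helper functions, one running in the client and one in the DRM-Master, with the IPC that carries the token between the processes left out.

    #include <xf86drm.h>

    /* Client side: obtain a magic token for this open file description,
     * to be handed to the DRM-Master through the window system's IPC. */
    int client_get_token(int client_fd, drm_magic_t *magic)
    {
        return drmGetMagic(client_fd, magic);   /* wraps DRM_IOCTL_GET_MAGIC */
    }

    /* DRM-Master side: vouch for the token received from a client, which
     * marks that client's file handle as authenticated on the device. */
    int master_grant_token(int master_fd, drm_magic_t magic)
    {
        return drmAuthMagic(master_fd, magic);  /* wraps DRM_IOCTL_AUTH_MAGIC */
    }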
Graphics Execution Manager
Due to the increasing size of video memory and the growing complexity of graphics APIs such as OpenGL, the strategy of reinitializing the graphics card state at each context switch was too expensive, performance-wise. Also, modern Linux desktops needed an optimal way to share off-screen buffers with the compositing manager. These requirements led to the development of new methods to manage graphics buffers inside the kernel. The Graphics Execution Manager (GEM) emerged as one of these methods.

GEM provides an API with explicit memory management primitives. Through GEM, a user-space program can create, handle and destroy memory objects living in the GPU video memory. These objects, called "GEM objects", are persistent from the user-space program's perspective and don't need to be reloaded every time the program regains control of the GPU. When a user-space program needs a chunk of video memory (to store a framebuffer, texture or any other data required by the GPU), it requests the allocation from the DRM driver using the GEM API. The DRM driver keeps track of the used video memory and is able to comply with the request if there is free memory available, returning a "handle" to user space with which to refer to the allocated memory in subsequent operations. The GEM API also provides operations to populate the buffer and to release it when it is not needed anymore. Memory from unreleased GEM handles gets recovered when the user-space process closes the DRM device file descriptor—intentionally or because it terminates.

GEM also allows two or more user-space processes using the same DRM device (hence the same DRM driver) to share a GEM object. GEM handles are local 32-bit integers unique to a process but repeatable in other processes, and therefore not suitable for sharing. What is needed is a global namespace, and GEM provides one through the use of global handles called GEM names. A GEM name refers to one, and only one, GEM object created within the same DRM device by the same DRM driver, by using a unique 32-bit integer. GEM provides an operation, flink, to obtain a GEM name from a GEM handle. The process can then pass this GEM name (32-bit integer) to another process using any IPC mechanism available. The GEM name can be used by the recipient process to obtain a local GEM handle pointing to the original GEM object.
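Both sides of this exchange can be sketched with the raw GEM ioctls from the kernel's UAPI headers (the exact include path varies between setups). The helper names below are illustrative, and the handle is assumed to come from a prior, driver-specific allocation.

    #include <stdint.h>
    #include <sys/ioctl.h>
    #include <drm/drm.h>   /* kernel UAPI header; some setups use <libdrm/drm.h> */

    /* Exporting process: derive a global GEM name from a local GEM handle. */
    uint32_t gem_export_name(int fd, uint32_t handle)
    {
        struct drm_gem_flink flink = { .handle = handle };
        if (ioctl(fd, DRM_IOCTL_GEM_FLINK, &flink) < 0)
            return 0;
        return flink.name;   /* a plain 32-bit integer, passed over any IPC */
    }

    /* Importing process: turn a received GEM name back into a local handle. */
    uint32_t gem_import_handle(int fd, uint32_t name)
    {
        struct drm_gem_open open_arg = { .name = name };
        if (ioctl(fd, DRM_IOCTL_GEM_OPEN, &open_arg) < 0)
            return 0;
        return open_arg.handle;
    }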
Unfortunately, the use of GEM names to share buffers is not secure. A malicious third-party process accessing the same DRM device could try to guess the GEM name of a buffer shared by two other processes, simply by probing 32-bit integers. Once a GEM name is found, its contents can be accessed and modified, violating the confidentiality and integrity of the information in the buffer. This drawback was later overcome by the introduction of DMA-BUF support into DRM, as DMA-BUF represents buffers in userspace as file descriptors, which may be shared securely.

Another important task for any video-memory management system, besides managing the video-memory space, is handling the memory synchronization between the GPU and the CPU. Current memory architectures are very complex and usually involve various levels of caches for the system memory and sometimes for the video memory too. Therefore, video-memory managers should also handle the cache coherence to ensure the data shared between CPU and GPU is consistent. This means that video-memory management internals are often highly dependent on hardware details of the GPU and memory architecture, and thus driver-specific.

GEM was initially developed by Intel engineers to provide a video-memory manager for its i915 driver. The Intel GMA 9xx family are integrated GPUs with a Uniform Memory Architecture (UMA), where the GPU and CPU share the physical memory and there is no dedicated VRAM. GEM defines "memory domains" for memory synchronization, and while these memory domains are GPU-independent, they are specifically designed with a UMA memory architecture in mind, making them less suitable for other memory architectures like those with a separate VRAM. For this reason, other DRM drivers have decided to expose the GEM API to user-space programs, but internally they implemented a different memory manager better suited to their particular hardware and memory architecture.

The GEM API also provides ioctls for control of the execution flow (command buffers), but they are Intel-specific, to be used with Intel i915 and later GPUs. No other DRM driver has attempted to implement any part of the GEM API beyond the memory-management specific ioctls.

Translation Table Maps
Translation Table Maps (TTM) is the name of the generic memory manager for GPUs that was developed before GEM. It was specifically designed to manage the different types of memory that a GPU might access, including dedicated Video RAM (commonly installed in the video card) and system memory accessible through an I/O memory management unit called the Graphics Address Remapping Table (GART). TTM should also handle the portions of the video RAM that are not directly addressable by the CPU, and do it with the best possible performance, considering that user-space graphics applications typically work with large amounts of video data. Another important matter was to maintain the consistency between the different memories and caches involved.

The main concept of TTM is the "buffer object", a region of video memory that at some point must be addressable by the GPU. When a user-space graphics application wants access to a certain buffer object (usually to fill it with content), TTM may require relocating it to a type of memory addressable by the CPU. Further relocations—or GART mapping operations—could happen when the GPU needs access to a buffer object but it isn't in the GPU's address space yet. Each of these relocation operations must handle any related data and cache-coherency issues.

Another important TTM concept is fences. Fences are essentially a mechanism to manage concurrency between the CPU and the GPU. A fence tracks when a buffer object is no longer used by the GPU, generally to notify any user-space process with access to it.

The fact that TTM tried to manage all kinds of memory architectures, including those with and without a dedicated VRAM, in a suitable way, and to provide every conceivable feature in a memory manager for use with any type of hardware, led to an overly complex solution with an API far larger than needed. Some DRM developers thought that it wouldn't fit well with any specific driver, especially the API. When GEM emerged as a simpler memory manager, its API was preferred over the TTM one. But some driver developers considered that the approach taken by TTM was more suitable for discrete video cards with dedicated video memory and IOMMUs, so they decided to use TTM internally, while exposing their buffer objects as GEM objects and thus supporting the GEM API. Examples of current drivers using TTM as an internal memory manager but providing a GEM API are the radeon driver for AMD video cards and the nouveau driver for NVIDIA video cards.
DMA Buffer Sharing and PRIME
The DMA Buffer Sharing API (often abbreviated as DMA-BUF) is a Linux kernel internal API designed to provide a generic mechanism to share DMA buffers across multiple devices, possibly managed by different types of device drivers. For example, a Video4Linux device and a graphics adapter device could share buffers through DMA-BUF to achieve zero-copy of the data of a video stream produced by the first and consumed by the latter. Any Linux device driver can implement this API as exporter, as user (consumer), or both.

This feature was exploited for the first time in DRM to implement PRIME, a solution for GPU offloading that uses DMA-BUF to share the resulting framebuffers between the DRM drivers of the discrete and the integrated GPU. An important feature of DMA-BUF is that a shared buffer is presented to user space as a file descriptor. For the development of PRIME two new ioctls were added to the DRM API, one to convert a local GEM handle to a DMA-BUF file descriptor and another for the exact opposite operation.

These two new ioctls were later reused as a way to fix the inherent unsafety of GEM buffer sharing. Unlike GEM names, file descriptors cannot be guessed (they are not a global namespace), and Unix operating systems provide a safe way to pass them through a Unix domain socket using the SCM_RIGHTS semantics. A process that wants to share a GEM object with another process can convert its local GEM handle to a DMA-BUF file descriptor and pass it to the recipient, which in turn can get its own GEM handle from the received file descriptor. This method is used by DRI3 to share buffers between the client and the X Server and also by Wayland.
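Through the libdrm wrappers for these two ioctls, the secure path amounts to a pair of one-line helpers, as in the sketch below; the socket code that actually carries the file descriptor with SCM_RIGHTS is omitted, and the function names are illustrative.

    #include <stdint.h>
    #include <xf86drm.h>

    /* Exporter: convert a local GEM handle into a DMA-BUF file descriptor,
     * which can then be passed to another process over a Unix domain socket
     * with SCM_RIGHTS (that plumbing is not shown here). */
    int export_dmabuf(int drm_fd, uint32_t handle, int *dmabuf_fd)
    {
        return drmPrimeHandleToFD(drm_fd, handle, DRM_CLOEXEC, dmabuf_fd);
    }

    /* Importer: convert the received DMA-BUF file descriptor back into a
     * local GEM handle usable with this process's DRM file description. */
    int import_dmabuf(int drm_fd, int dmabuf_fd, uint32_t *handle)
    {
        return drmPrimeFDToHandle(drm_fd, dmabuf_fd, handle);
    }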
Kernel Mode Setting
In order to work properly, a video card or graphics adapter must set a mode—a combination of screen resolution, color depth and refresh rate—that is within the range of values supported by itself and the attached display screen. This operation is called mode-setting, and it usually requires raw access to the graphics hardware—i.e. the ability to write to certain registers of the video card. A mode-setting operation must be performed before starting to use the framebuffer, and also whenever an application or the user requires a mode change.

In the early days, the user-space programs that wanted to use the graphical framebuffer were also responsible for providing the mode-setting operations, and therefore they needed to run with privileged access to the video hardware. In Unix-type operating systems, the X Server was the most prominent example, and its mode-setting implementation lived in the DDX driver for each specific type of video card. This approach, later referred to as User space Mode-Setting or UMS, poses several issues. It not only breaks the isolation that operating systems should provide between programs and hardware, raising both stability and security concerns, but also could leave the graphics hardware in an inconsistent state if two or more user-space programs try to do the mode-setting at the same time. To avoid these conflicts, the X Server became in practice the only user-space program that performed mode-setting operations; the remaining user-space programs relied on the X Server to set the appropriate mode and to handle any other operation involving mode-setting. Initially the mode-setting was performed exclusively during the X Server startup process, but later the X Server gained the ability to do it while running. The XFree86-VidModeExtension extension was introduced in XFree86 3.1.2 to let any X client request modeline (resolution) changes to the X Server. The VidMode extension was later superseded by the more generic XRandR extension.

However, this was not the only code doing mode-setting in a Linux system. During the system booting process, the Linux kernel must set a minimal text mode for the virtual console (based on the standard modes defined by VESA BIOS extensions). The Linux kernel framebuffer driver also contained mode-setting code to configure framebuffer devices. To avoid mode-setting conflicts, the XFree86 Server—and later the X.Org Server—handled the case when the user switched from the graphical environment to a text virtual console by saving its mode-setting state and restoring it when the user switched back to X. This process caused an annoying flicker in the transition, and could also fail, leading to a corrupted or unusable output display.

The user-space mode-setting approach also caused other issues:
The suspend/resume process has to rely on user-space tools to restore the previous mode. A single failure or crash of one of these programs could leave the system without a working display due to a modeset misconfiguration, and therefore unusable.
It was also impossible for the kernel to show error or debug messages when the screen was in a graphics mode—for example when X was running—since the only modes the kernel knew about were the VESA BIOS standard text modes.
A more pressing issue was the proliferation of graphical applications bypassing the X Server and the emergence of other graphics stack alternatives to X, extending the duplication of mode-setting code across the system even further.

To address these problems, the mode-setting code was moved to a single place inside the kernel, specifically to the existing DRM module. Then, every process—including the X Server—should be able to command the kernel to perform mode-setting operations, and the kernel would ensure that concurrent operations don't result in an inconsistent state. The new kernel API and code added to the DRM module to perform these mode-setting operations was called Kernel Mode-Setting (KMS).

Kernel Mode-Setting provides several benefits. The most immediate is of course the removal of duplicate mode-setting code, from both the kernel (Linux console, fbdev) and user space (X Server DDX drivers). KMS also makes it easier to write alternative graphics systems, which now don't need to implement their own mode-setting code. By providing centralized mode management, KMS solves the flickering issues while changing between console and X, and also between different instances of X (fast user switching). Since it is available in the kernel, it can also be used at the beginning of the boot process, avoiding flickering due to mode changes in these early stages.

The fact that KMS is part of the kernel allows it to use resources only available in kernel space, such as interrupts. For example, the mode recovery after a suspend/resume process is simplified considerably by being managed by the kernel itself, and incidentally improves security (no more user-space tools requiring root permissions). The kernel also allows the hotplug of new display devices to be handled easily, solving a longstanding problem. Mode-setting is also closely related to memory management—since framebuffers are basically memory buffers—so a tight integration with the graphics memory manager is highly recommended.
That's the main reason why the kernel mode-setting code was incorporated into DRM and not as a separate subsystem.

To avoid breaking backwards compatibility of the DRM API, Kernel Mode-Setting is provided as an additional driver feature of certain DRM drivers. Any DRM driver can choose to provide the DRIVER_MODESET flag when it registers with the DRM core to indicate that it supports the KMS API. Those drivers that implement Kernel Mode-Setting are often called KMS drivers as a way to differentiate them from the legacy—without KMS—DRM drivers. KMS has been adopted to such an extent that certain drivers which lack 3D acceleration (or for which the hardware vendor doesn't want to expose or implement it) nevertheless implement the KMS API without the rest of the DRM API.

KMS device model
KMS models and manages the output devices as a series of abstract hardware blocks commonly found on the display output pipeline of a display controller. These blocks are:
CRTCs: each CRTC (from CRT Controller) represents a scanout engine of the display controller, pointing to a scanout buffer (framebuffer). The purpose of a CRTC is to read the pixel data currently in the scanout buffer and generate from it the video mode timing signal with the help of a PLL circuit. The number of CRTCs available determines how many independent output devices the hardware can handle at the same time, so in order to use multi-head configurations at least one CRTC per display device is required. Two—or more—CRTCs can also work in clone mode if they scan out from the same framebuffer to send the same image to several output devices.
Connectors: a connector represents where the display controller sends the video signal from a scanout operation to be displayed. Usually, the KMS concept of a connector corresponds to a physical connector (VGA, DVI, FPD-Link, HDMI, DisplayPort, S-Video, ...) in the hardware where an output device (monitor, laptop panel, ...) is permanently or temporarily attached. Information related to the currently attached output device—such as connection status, EDID data, DPMS status or supported video modes—is also stored within the connector.
Encoders: the display controller must encode the video mode timing signal from the CRTC using a format suitable for the intended connector. An encoder represents the hardware block able to do one of these encodings. Examples of encodings—for digital outputs—are TMDS and LVDS; for analog outputs such as VGA and TV out, specific DAC blocks are generally used. A connector can only receive the signal from one encoder at a time, and each type of connector only supports some encodings. There might also be additional physical restrictions by which not every CRTC is connected to every available encoder, limiting the possible combinations of CRTC-encoder-connector.
Planes: a plane is not a hardware block but a memory object containing a buffer from which a scanout engine (a CRTC) is fed. The plane that holds the framebuffer is called the primary plane, and each CRTC must have one associated with it, since it is the source from which the CRTC determines the video mode—display resolution (width and height), pixel size, pixel format, refresh rate, etc. A CRTC might also have cursor planes associated with it if the display controller supports hardware cursor overlays, or secondary planes if it is able to scan out from additional hardware overlays and compose or blend "on the fly" the final image sent to the output device.
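Put together, a minimal KMS modeset with libdrm walks exactly these objects: enumerate the resources, find a connected connector, and point a CRTC at a framebuffer. The sketch below assumes the caller is DRM-Master, that fb_id names an already-created framebuffer (e.g., from a dumb buffer), and, as a simplification, that the first CRTC is compatible with the chosen connector.

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int simple_modeset(int fd, uint32_t fb_id)
    {
        drmModeRes *res = drmModeGetResources(fd);
        if (!res)
            return -1;
        int ret = -1;
        for (int i = 0; i < res->count_connectors; i++) {
            drmModeConnector *conn = drmModeGetConnector(fd, res->connectors[i]);
            if (conn && conn->connection == DRM_MODE_CONNECTED && conn->count_modes > 0) {
                /* Production code must match the connector to a compatible
                 * encoder and CRTC; taking crtcs[0] is a simplification. */
                ret = drmModeSetCrtc(fd, res->crtcs[0], fb_id, 0, 0,
                                     &conn->connector_id, 1, &conn->modes[0]);
                drmModeFreeConnector(conn);
                break;
            }
            if (conn)
                drmModeFreeConnector(conn);
        }
        drmModeFreeResources(res);
        return ret;
    }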
Atomic Display
In recent years there has been an ongoing effort to bring atomicity to some regular operations pertaining to the KMS API, specifically to the mode-setting and page-flipping operations. This enhanced KMS API is what is called Atomic Display (formerly known as atomic mode-setting and atomic or nuclear pageflip).

The purpose of atomic mode-setting is to ensure a correct change of mode in complex configurations with multiple restrictions, by avoiding intermediate steps which could lead to an inconsistent or invalid video state; it also avoids risky video states when a failed mode-setting process has to be undone ("rollback"). Atomic mode-setting makes it possible to know beforehand whether a certain specific mode configuration is appropriate, by providing mode-testing capabilities. When an atomic mode is tested and its validity confirmed, it can be applied with a single indivisible (atomic) commit operation. Both test and commit operations are provided by the same new ioctl with different flags.

Atomic page flip, on the other hand, allows updating multiple planes on the same output (for instance the primary plane, the cursor plane and maybe some overlays or secondary planes), all synchronized within the same VBLANK interval, ensuring a proper display without tearing. This requirement is especially relevant to mobile and embedded display controllers, which tend to use multiple planes/overlays to save power.

The new atomic API is built upon the old KMS API. It uses the same model and objects (CRTCs, encoders, connectors, planes, ...), but with an increasing number of object properties that can be modified. The atomic procedure is based on changing the relevant properties to build the state to be tested or committed. The properties to modify depend on whether the goal is a mode-setting (mostly properties of CRTCs, encoders and connectors) or a page flip (usually properties of planes). The ioctl is the same in both cases, the difference being the list of properties passed with each one.
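Through libdrm, the test-then-commit pattern looks like the following sketch; drmModeAtomicCommit wraps the atomic ioctl, and the DRM_MODE_ATOMIC_TEST_ONLY flag selects the test behavior. The property ID for the plane's framebuffer must be discovered at runtime (e.g., via drmModeObjectGetProperties); here it is passed in as a parameter, and the function name is illustrative.

    #include <stdint.h>
    #include <xf86drm.h>
    #include <xf86drmMode.h>

    int atomic_flip(int fd, uint32_t plane_id, uint32_t fb_prop_id, uint32_t fb_id)
    {
        drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);  /* opt in to atomic */
        drmModeAtomicReq *req = drmModeAtomicAlloc();
        if (!req)
            return -1;
        /* Build the desired state as a set of property changes... */
        drmModeAtomicAddProperty(req, plane_id, fb_prop_id, fb_id);
        /* ...validate it without touching the hardware... */
        int ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
        /* ...and only if valid, apply it in one indivisible commit. */
        if (ret == 0)
            ret = drmModeAtomicCommit(fd, req, 0, NULL);
        drmModeAtomicFree(req);
        return ret;
    }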
Render nodes
In the original DRM API, the DRM device /dev/dri/cardX is used for both privileged (modesetting, other display control) and non-privileged (rendering, GPGPU compute) operations. For security reasons, opening the associated DRM device file requires special privileges "equivalent to root-privileges". This leads to an architecture where only some reliable user-space programs (the X server, a graphical compositor, ...) have full access to the DRM API, including the privileged parts like the modeset API. The remaining user-space applications that want to render or make GPGPU computations must be granted access by the owner of the DRM device ("DRM Master") through the use of a special authentication interface. The authenticated applications can then render or make computations using a restricted version of the DRM API without privileged operations. This design imposes a severe constraint: there must always be a running graphics server (the X Server, a Wayland compositor, ...) acting as DRM-Master of a DRM device so that other user-space programs can be granted the use of the device, even in cases not involving any graphics display like GPGPU computations.

The "render nodes" concept tries to solve these scenarios by splitting the DRM user-space API into two interfaces – one privileged and one non-privileged – and using separate device files (or "nodes") for each one. For every GPU found, its corresponding DRM driver—if it supports the render nodes feature—creates a device file /dev/dri/renderDX, called the render node, in addition to the primary node /dev/dri/cardX. Clients that use a direct rendering model and applications that want to take advantage of the computing facilities of a GPU can do so without requiring additional privileges by simply opening any existing render node and dispatching GPU operations using the limited subset of the DRM API supported by those nodes—provided they have file system permissions to open the device file. Display servers, compositors and any other program that requires the modeset API or any other privileged operation must open the standard primary node that grants access to the full DRM API and use it as usual. Render nodes explicitly disallow the GEM flink operation to prevent buffer sharing using insecure GEM global names; only PRIME (DMA-BUF) file descriptors can be used to share buffers with another client, including the graphics server.
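From an application's point of view, using a render node is nothing more than opening a different device file, as in this sketch; the minor number 128 is where render node numbering conventionally starts, and on most distributions access is governed by ordinary file permissions (typically a "render" or "video" group).

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        /* Open the first render node: no DRM-Master, no authentication,
         * only ordinary file permissions are required. */
        int fd = open("/dev/dri/renderD128", O_RDWR);
        if (fd < 0) {
            perror("open render node");
            return 1;
        }
        /* fd can now be used for rendering/compute ioctls and PRIME
         * buffer sharing, but any modeset ioctl on it will be rejected. */
        close(fd);
        return 0;
    }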
The idea of putting all the video mode setting code in one place inside the kernel had been acknowledged for years, but the graphics card manufacturers had argued that the only way to do the mode-setting was to use the routines provided by themselves and contained in the Video BIOS of each graphics card. Such code had to be executed using x86 real mode, which prevented it from being invoked by a kernel running in protected mode. The situation changed when Luc Verhaegen and other developers found a way to do the mode-setting natively instead of BIOS-based, showing that it was possible to do it using normal kernel code and laying the groundwork for what would become Kernel Mode Setting. In May 2007 Jesse Barnes (Intel) published the first proposal for a drm-modesetting API and a working native implementation of mode-setting for Intel GPUs within the i915 DRM driver. In December 2007 Jerome Glisse started to add the native mode-setting code for ATI cards to the radeon DRM driver. Work on both the API and drivers continued during 2008, but was delayed by the need for a memory manager in kernel space to handle the framebuffers.

In October 2008 the Linux kernel 2.6.27 brought a major source code reorganization, prior to some significant upcoming changes. The DRM source code tree was moved to its own source directory /drivers/gpu/drm/ and the different drivers were moved into their own subdirectories. Headers were also moved into a new /include/drm directory.

The increasing complexity of video memory management led to several approaches to solving this issue. The first attempt was the Translation Table Maps (TTM) memory manager, developed by Thomas Hellstrom (Tungsten Graphics) in collaboration with Emma Anholt (Intel) and Dave Airlie (Red Hat). TTM was proposed for inclusion into mainline kernel 2.6.25 in November 2007, and again in May 2008, but was ditched in favor of a new approach called the Graphics Execution Manager (GEM). GEM was first developed by Keith Packard and Emma Anholt from Intel as a simpler solution for memory management for their i915 driver. GEM was well received and merged into the Linux kernel version 2.6.28 released in December 2008. Meanwhile, TTM had to wait until September 2009 to finally be merged into Linux 2.6.31, as a requirement of the new Radeon KMS DRM driver.

With memory management in place to handle buffer objects, DRM developers could finally add to the kernel the already finished API and code to do mode setting. This expanded API is what is called Kernel Mode-setting (KMS), and the drivers which implement it are often referred to as KMS drivers. In March 2009, KMS was merged into the Linux kernel version 2.6.29, along with KMS support for the i915 driver. The KMS API has been exposed to user space programs since libdrm 2.4.3. The userspace X.Org DDX driver for Intel graphics cards was also the first to use the new GEM and KMS APIs. KMS support for the radeon DRM driver was added to the Linux 2.6.31 release of September 2009. The new radeon KMS driver used the TTM memory manager but exposed GEM-compatible interfaces and ioctls instead of TTM ones.

Since 2006 the nouveau project had been developing a free software DRM driver for NVIDIA GPUs outside of the official Linux kernel. In 2010 the nouveau source code was merged into Linux 2.6.33 as an experimental driver. At the time of merging, the driver had already been converted to KMS, and behind the GEM API it used TTM as its memory manager.
The new KMS API—including the GEM API—was a big milestone in the development of DRM, but it didn't stop the API from being enhanced in the following years. KMS gained support for page flips in conjunction with asynchronous VBLANK notifications in Linux 2.6.33—initially only for the i915 driver; radeon and nouveau added it later, during the Linux 2.6.38 release. The new page flip interface was added to libdrm 2.4.17. In early 2011, during the Linux 2.6.39 release cycle, the so-called dumb buffers—a hardware-independent non-accelerated way to handle simple buffers suitable for use as framebuffers—were added to the KMS API. The goal was to reduce the complexity of applications such as Plymouth that don't need to use special accelerated operations provided by driver-specific ioctls. The feature was exposed by libdrm from version 2.4.25 onwards. Later that year the API also gained a new main type of object, called planes. Planes were developed to represent hardware overlays supported by the scanout engine. Plane support was merged into Linux 3.3 and libdrm 2.4.30. Another concept added to the API—during the Linux 3.5 and libdrm 2.4.36 releases—was generic object properties, a method to add generic values to any KMS object. Properties are especially useful for setting special behaviour or features on objects such as CRTCs and planes.

An early proof of concept to provide GPU offloading between DRM drivers was developed by Dave Airlie in 2010. Since Airlie was trying to mimic the NVIDIA Optimus technology, he decided to name it "PRIME". Airlie resumed his work on PRIME in late 2011, but based on the new DMA-BUF buffer sharing mechanism introduced by Linux kernel 3.3. The basic DMA-BUF PRIME infrastructure was finished in March 2012 and merged into the Linux 3.4 release, as well as into libdrm 2.4.34. Later, during the Linux 3.5 release, several DRM drivers implemented PRIME support, including i915 for Intel cards, radeon for AMD cards and nouveau for NVIDIA cards.

In recent years, the DRM API has incrementally expanded with new and improved features. In 2013, as part of GSoC, David Herrmann developed the multiple render nodes feature. His code was added to the Linux kernel version 3.12 as an experimental feature supported by the i915, radeon and nouveau drivers, and enabled by default since Linux 3.17. In 2014 Matt Roper (Intel) developed the universal planes (or unified planes) concept, by which framebuffers (primary planes), overlays (secondary planes) and cursors (cursor planes) are all treated as a single type of object with a unified API. Universal planes support provides a more consistent DRM API with fewer, more generic ioctls. In order to keep the API backward compatible, the feature is exposed by the DRM core as an additional capability that a DRM driver can provide. Universal plane support debuted in Linux 3.15 and libdrm 2.4.55. Several drivers, such as the Intel i915, have already implemented it.

The most recent DRM API enhancement is the atomic mode-setting API, which brings atomicity to the mode-setting and page flipping operations on a DRM device. The idea of an atomic API for mode-setting was first proposed in early 2012. Ville Syrjälä (Intel) took over the task of designing and implementing such an atomic API. Based on his work, Rob Clark (Texas Instruments) took a similar approach aiming to implement atomic page flips. Later in 2013 both proposed features were reunited in a single one using a single ioctl for both tasks.
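To make this concrete, here is a hedged sketch of the test-then-commit pattern using libdrm's wrappers around that single ioctl. The object and property IDs (crtc_id, prop_active) are hypothetical placeholders that a real client would first discover through drmModeGetResources() and drmModeObjectGetProperties():

#include <stdint.h>
#include <xf86drm.h>       /* drmSetClientCap() */
#include <xf86drmMode.h>   /* drmModeAtomic*() wrappers */

/* Stage a property change, validate it, then apply it atomically.
   A failed TEST_ONLY commit touches no hardware state, so no
   rollback is ever needed. */
int apply_config(int fd, uint32_t crtc_id, uint32_t prop_active)
{
    int ret;

    /* Opt in to the atomic API (normally done once at startup). */
    drmSetClientCap(fd, DRM_CLIENT_CAP_ATOMIC, 1);

    drmModeAtomicReq *req = drmModeAtomicAlloc();
    if (!req)
        return -1;

    /* One staged change: enable the CRTC. A full mode-set or page
       flip would add many more properties (MODE_ID, FB_ID, ...). */
    drmModeAtomicAddProperty(req, crtc_id, prop_active, 1);

    /* Dry run: same ioctl, TEST_ONLY flag, nothing reaches the GPU. */
    ret = drmModeAtomicCommit(fd, req, DRM_MODE_ATOMIC_TEST_ONLY, NULL);
    if (ret == 0)
        /* Validated; apply the whole set in one indivisible commit. */
        ret = drmModeAtomicCommit(fd, req,
                                  DRM_MODE_ATOMIC_ALLOW_MODESET, NULL);

    drmModeAtomicFree(req);
    return ret;
}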
Since universal plane support was a prerequisite, the atomic feature had to wait for it to be merged in mid-2014. During the second half of 2014 the atomic code was greatly enhanced by Daniel Vetter (Intel) and other DRM developers in order to facilitate the transition of the existing KMS drivers to the new atomic framework. All of this work was finally merged into the Linux 3.19 and Linux 4.0 releases, and enabled by default since Linux 4.2. libdrm has exposed the new atomic API since version 2.4.62. Multiple drivers have already been converted to the new atomic API. By 2018 ten new DRM drivers based on this new atomic model had been added to the Linux kernel.

Adoption
The Direct Rendering Manager kernel subsystem was initially developed to be used with the new Direct Rendering Infrastructure of the XFree86 4.0 display server, later inherited by its successor, the X.Org Server. Therefore, the main users of DRM were DRI clients that link to the hardware-accelerated OpenGL implementation that lives in the Mesa 3D library, as well as the X Server itself. Nowadays DRM is also used by several Wayland compositors, including the Weston reference compositor. kmscon is a virtual console implementation that runs in user space using DRM KMS facilities.

In 2015, version 358.09 (beta) of the proprietary Nvidia GeForce driver received support for the DRM mode-setting interface, implemented as a new kernel blob called nvidia-modeset.ko. This new driver component works in conjunction with the nvidia.ko kernel module to program the display engine (i.e. display controller) of the GPU.

See also
Direct Rendering Infrastructure
Free and open-source graphics device driver
Linux framebuffer

References

External links
DRM home page
Linux GPU Driver Developer's Guide (formerly Linux DRM Developer's Guide)
Direct Rendering Infrastructure

Interfaces of the Linux kernel
Linux kernel features
32447379
https://en.wikipedia.org/wiki/TetGen
TetGen
TetGen is a mesh generator developed by Hang Si that is designed to partition any 3D geometry into tetrahedra by employing a form of Delaunay triangulation, using an algorithm developed by the author. TetGen has since been incorporated into other software packages such as Mathematica and Gmsh. Version 1.6 introduced some improvements in speed and mesh quality.

See also
Gmsh
Salome (software)

References

External links
Weierstrass Institute: Hang Si's personal homepage

Numerical analysis software for Linux
Cross-platform software
Mesh generators
Numerical analysis software for MacOS
Numerical analysis software for Windows
Free mathematics software
Free software programmed in C++
Cross-platform free software
10536832
https://en.wikipedia.org/wiki/Oracle%20VM%20Server%20for%20SPARC
Oracle VM Server for SPARC
Logical Domains (LDoms or LDOM) is the server virtualization and partitioning technology for SPARC V9 processors. It was first released by Sun Microsystems in April 2007. After the Oracle acquisition of Sun in January 2010, the product was re-branded as Oracle VM Server for SPARC from version 2.0 onwards.

Each domain is a full virtual machine with a reconfigurable subset of hardware resources. Domains can be securely live migrated between servers while running. Operating systems running inside Logical Domains can be started, stopped, and rebooted independently. A running domain can be dynamically reconfigured to add or remove CPUs, RAM, or I/O devices without requiring a reboot. Using Dynamic Resource Management, CPU resources can be automatically reconfigured as needed.

Supported hardware
SPARC hypervisors run in hyperprivileged execution mode, which was introduced in the sun4v architecture. The sun4v processors released as of October 2015 are the UltraSPARC T1, T2, T2+, T3, T4, T5, M5, M6, M10, and M7. Systems based on the UltraSPARC T1 support only Logical Domains versions 1.0-1.2. The newer types of T-series servers support both the older Logical Domains and the newer Oracle VM Server for SPARC product, version 2.0 and later. These include:

UltraSPARC T1-based:
Sun / Fujitsu SPARC Enterprise T1000 and T2000 servers
Sun Fire T1000 and T2000 servers
Netra T2000 Server
Netra CP3060 Blade
Sun Blade T6300 Server Module
UltraSPARC T2-based:
Sun / Fujitsu SPARC Enterprise T5120 and T5220 servers
Sun Blade T6320 Server Module
Netra CP3260 Blade
Netra T5220 Rackmount Server
UltraSPARC T2 Plus systems:
Sun / Fujitsu SPARC Enterprise T5140 and T5240 servers (2 sockets)
Sun / Fujitsu SPARC Enterprise T5440 (4 sockets)
Sun Blade T6340 Server Module (2 sockets)
SPARC T3 systems:
Sun / Fujitsu SPARC T3-1 servers (1 socket)
Sun SPARC T3-1B Server Module (1 socket)
Sun / Fujitsu SPARC T3-2 servers (2 sockets)
Sun / Fujitsu SPARC T3-4 servers (4 sockets)
SPARC T4 systems:
SPARC T4-1 Server (1 socket)
SPARC T4-1B Server Module (blade)
SPARC T4-2 Server (2 sockets)
SPARC T4-4 Server (4 sockets)
SPARC T5 systems:
SPARC T5-1B Server Module (blade)
SPARC T5-2 Server (2 sockets)
SPARC T5-4 Server (4 sockets)
SPARC T5-8 Server (8 sockets)
SPARC T7 systems, which use the same SPARC M7 processor as the M7-8 and M7-16 servers listed below:
SPARC T7-1 (1 CPU socket)
SPARC T7-2 (2 CPU sockets)
SPARC T7-4 (4 CPU sockets)
SPARC M-Series systems:
Oracle SPARC M5-32 Server (32 sockets)
Oracle SPARC M6-32 Server (32 sockets)
Fujitsu M10-1 (1 socket)
Fujitsu M10-4 (4 sockets)
Fujitsu M10-4S (64 sockets)
Oracle SPARC M7-8 (8 CPU sockets)
Oracle SPARC M7-16 (16 CPU sockets)

Technically, the virtualization product consists of two interdependent components: the hypervisor in the SPARC server firmware, and the Logical Domains Manager software installed on the Solaris operating system running within the control domain (see Logical Domain roles, below). Because of this, each particular version of the Logical Domains (Oracle VM Server for SPARC) software requires a certain minimum version of the hypervisor to be installed in the server firmware.

Logical Domains exploits the chip multithreading (CMT) nature of the "CoolThreads" processors. A single chip contains up to 32 CPU cores, and each core has either four hardware threads (for the UltraSPARC T1) or eight hardware threads (for the UltraSPARC T2/T2+, and SPARC T3/T4 and later) that act as virtual CPUs.
All CPU cores execute instructions concurrently, and each core switches between threads—typically when a thread stalls on a cache miss or goes idle—within a single clock cycle. This lets the processor recover throughput that is lost during cache misses in conventional CPU designs. Each domain is assigned its own CPU threads and executes CPU instructions at native speed, avoiding the trap-and-emulate or binary-rewriting virtualization overhead for privileged operations that is typical of most VM designs.

Each server can support as many as one domain per hardware thread, up to a maximum of 128. That is up to 32 domains for the UltraSPARC T1, 64 domains for the UltraSPARC T2 and SPARC T4-1, and 128 domains for the UltraSPARC T3, as examples of single-processor (single-socket) servers. Servers with 2-4 UltraSPARC T2+ or 2-8 SPARC T3-T5 CPUs support as many logical domains as the number of processors multiplied by the number of threads of each CPU, up to the limit of 128. M-series servers can be subdivided into physical domains ("PDoms"), each of which can host up to 128 logical domains. Typically, a given domain is assigned multiple CPU threads or CPU cores for additional capacity within a single OS instance.

CPU threads, RAM, and virtual I/O devices can be added to or removed from a domain by an administrator issuing a command in the control domain. The change takes effect immediately, without the need to reboot the affected domain, which can immediately make use of added CPU threads or continue operating with reduced resources. When hosts are connected to shared storage (SAN or NAS), running guest domains can be securely live migrated between servers without outage (starting with Oracle VM Server for SPARC version 2.1). The process encrypts guest VM memory contents before they are transmitted between servers, using the cryptographic accelerators available on all processors with the sun4v architecture.

Logical Domain roles
All logical domains are the same except for the roles that they are assigned. There are multiple roles that logical domains can perform, such as:

Control domain
Service domain
I/O domain
Root domain
Guest domain

The control domain, as its name implies, controls the logical domain environment. It is used to configure machine resources and guest domains, and provides services necessary for domain operation, such as the virtual console service. The control domain also normally acts as a service domain.

Service domains present virtual services, such as virtual disk drives and network switches, to other domains. In most cases, guest domains perform I/O via bridged access through service domains, which are usually I/O domains directly connected to the physical devices. Service domains can provide virtual LANs and SANs as well as bridge through to physical devices. Disk images can reside on complete local physical disks, shared SAN block devices or their slices, or even on files contained on a local UFS or ZFS file system, or on a shared NFS export or iSCSI target. Control and service functions can be combined within domains; however, it is recommended that user applications not run within control or service domains, in order to protect domain stability and performance.
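A hedged sketch of how an administrator drives these mechanisms in practice: the following Logical Domains Manager (ldm) commands, issued from the control domain, add CPU threads and memory to a running guest and export a virtual disk to it through the control domain's virtual disk service. The domain name ldg1, the device path, and the volume names are hypothetical, and exact syntax varies between ldm versions.

ldm add-vcpu 8 ldg1                                   # grant eight more CPU threads
ldm set-memory 8g ldg1                                # resize the domain's RAM to 8 GB
ldm add-vdsdev /dev/dsk/c1t1d0s2 vol1@primary-vds0    # back a volume with a physical disk slice
ldm add-vdisk vdisk1 vol1@primary-vds0 ldg1           # present the volume to the guest
ldm list                                              # show domains, their state and resources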
I/O domains have direct ownership of a PCI bus, a card on a bus, or a Single Root I/O Virtualization (SR-IOV) function, providing direct access to physical I/O devices, such as a network card in a PCI controller. An I/O domain may use its devices to obtain native I/O performance for its own applications, or act as a service domain and share the devices with other domains as virtual devices.

Root domains have direct ownership of a PCIe root complex and all associated PCIe slots. This can be used to grant access to physical I/O devices. A root domain is also an I/O domain. There is a maximum of two root domains for the UltraSPARC T1 (Niagara) servers, one of which must also be the control domain. UltraSPARC T2 Plus, SPARC T3, and SPARC T4 servers can have as many as four root domains, limited by the number of PCIe root complexes installed on the server. SPARC T5 servers can have up to 16 root complex domains. Multiple I/O domains can be configured to provide resiliency against failures.

Guest domains run an operating system instance without performing any of the above roles, but leverage the services provided by the above in order to run applications.

Supported guest operating systems
The only operating systems supported by the vendor for running within logical domains are Solaris 10 11/06 and later updates, and all Solaris 11 releases. There are operating systems that are not officially supported, but may still be capable of running within logical domains:

Debian ports version
OpenSolaris 2009.06
Illumos-derived releases
Ubuntu Linux Server Edition
OpenBSD 4.5 or later
Wind River Linux 3.0
Oracle Linux for SPARC

See also
Oracle VM Server for x86
Logical partition

References

External links
Oracle Announces Latest Version of Oracle VM Server for SPARC
Oracle VM Server for SPARC product page at Oracle
Oracle VM Server for SPARC software at Fujitsu
Increasing Application Availability by Using the Oracle VM Server for SPARC Live Migration Feature
Logical Domains Community at OpenSolaris.org
Oracle VM Server for SPARC Best Practices
Dynamic Resource Management from Oracle.

Hardware partitioning
Sun Microsystems software
Virtualization software
33854598
https://en.wikipedia.org/wiki/Viber
Viber
Viber, or Rakuten Viber, is a cross-platform voice over IP (VoIP) and instant messaging (IM) software application owned by the Japanese multinational company Rakuten, provided as freeware for the Android, iOS, Microsoft Windows, macOS and Linux platforms. Users are registered and identified through a cellular telephone number, although the service is accessible on desktop platforms without needing mobile connectivity. In addition to instant messaging, it allows users to exchange media such as images and video records, and also provides a paid international landline and mobile calling service called Viber Out. As of 2018, there are over a billion registered users on the network.

The software was developed in 2010 by Cyprus-based Viber Media, which was bought by Rakuten in 2014. Since 2017, its corporate name has been Rakuten Viber. It is based in Luxembourg. Viber's offices are located in Minsk, London, Manila, Moscow, Paris, San Francisco, Singapore, and Tokyo.

History

Founding
Viber was founded in 2010 to solve a problem of long-distance relationships. Co-founder Talmon Marco's girlfriend at the time was based in Hong Kong while he was living in New York City. Living apart but communicating all the time led to very expensive phone bills. Trying to overcome this issue, Marco turned to his friend Igor Magazinnik to find a solution. Viber Media was founded in Tel Aviv, Israel, in 2010 by Marco and Magazinnik, who are friends from the Israel Defense Forces, where they were chief information officers. Marco and Magazinnik are also co-founders of the P2P media and file-sharing client iMesh. The company was run from Israel, with much of its development outsourced to Belarus in order to lower labor costs. It was registered in Cyprus. Sani Maroli and Ofer Smocha soon joined the company as well. Marco commented that Viber allows instant calling and synchronization with contacts because the ID is the user's cell number, unlike Skype, which is modeled after a "buddy list" requiring registration and a password.

Monetisation
In its first two years of availability, Viber did not generate revenues. It began doing so in 2013, via user payments for Viber Out voice calling and the Viber graphical messaging "sticker store". The company was originally funded by individual investors, described by Marco as "friends and family"; they invested $20 million in the company, which had 120 employees. On July 24, 2013, Viber's support system was defaced by the Syrian Electronic Army. According to Viber, no sensitive user information was accessed.

Acquisition
On February 13, 2014, Rakuten announced they had acquired Viber Media for $900 million. The sale of Viber earned the Shabtai family (Benny, his brother Gilad, and Gilad's son Ofer) some $500 million from their 55.2% stake in the company. At that sale price, the founders each realized a more than 30-fold return on their investments. Djamel Agaoua became Viber Media CEO in February 2017, replacing co-founder Marco, who left in 2015. In July 2017 the corporate name of Viber Media was changed to Rakuten Viber and a new wordmark logo was introduced. Its legal name remains Viber Media, S.à r.l., based in Luxembourg. Viber has been the official "communication channel" of F.C. Barcelona since Rakuten partnered with the football club in 2017.

Market share
Viber had 800 million registered users. According to Statista, there were 260 million monthly active users as of January 2019.
The Viber messenger is very popular in Greece, Eastern Europe, Russia, the Middle East, and some Asian markets. India was the largest market for Viber as of December 2014, with 33 million registered users, making it the fifth most popular instant messenger in the country. At the same time there were 30 million users in the United States, 28 million in Russia and 18 million in Brazil. Viber is particularly popular in Eastern Europe, being the most downloaded messaging app on Android in Belarus, Moldova and Ukraine as of 2016. It is also popular in Iraq, Libya and Nepal. As of 2018, Viber has an over 70 percent penetration rate in the CIS and CEE regions, but only 15 percent in North America.

Russia
Viber is one of the more popular messenger applications in Russia. In January 2016, Viber surpassed WhatsApp in Russia, with about 50 million users. Viber was growing especially rapidly in urban areas like Moscow and St. Petersburg. In April 2016 Viber usage in Russia was twice as high as in 2015, reaching 66 million users. By 2018, Viber had reached 100 million users in Russia. Another report, from 2017, shows that Russian IM users prefer to use Viber or WhatsApp over other services. In Russia, Viber plans to introduce a way to pay for goods and services through the app. Nikolay Nikiforov of the Federal Service for Supervision of Communications, Information Technology and Mass Media has declined to comment on the effect that law № 241-FZ (which has restricted use of some other encrypted chats such as Telegram) would have on Viber.

Ukraine
As of 2020, Viber Messenger is Ukraine's most popular IM; it is installed on 97% of all Ukrainian smartphones.

Applications

Platforms
Viber was initially launched for iPhone on December 2, 2010, in direct competition with Skype. It was launched on BlackBerry and Windows Phone on May 8, 2012, followed by the Android platform on July 19, 2012, and Nokia's Series 40, Symbian and Samsung's Bada platform on July 24, 2012, by which time the application had 90 million users. In May 2013, with Viber 3.0, a desktop version for Windows and macOS was released. In August 2013, Viber for Linux was released as a public beta, and in August 2014 as a final version. In June 2016 a UWP-based desktop application for Windows 10 was released in the Windows Store. The desktop versions are tied to a user's registered Viber mobile number, but can operate independently afterwards. In 2015, a version for the iPad tablet and Apple Watch smartwatch was released.

Features
Viber was originally launched as a VoIP application for voice calling. On March 31, 2011, Viber 2.0 was released, which added instant messaging (IM) capabilities. In July 2012 group messaging and an HD Voice engine were added to both the Android and iOS applications. In December 2012 Viber added 'stickers' to the application. In October 2013, Viber 4.0 was announced, featuring a sticker 'market' where Viber would sell stickers as a source of revenue. In addition, version 4.0 introduced push-to-talk capabilities, and Viber Out, a feature that provides users the option to call mobile and landline numbers via VoIP without the need for the application. Viber Out was made temporarily free in the Philippines to help Typhoon Haiyan victims connect with their loved ones. Voice support was officially added for all Windows Phone 8 devices on April 2, 2013. In September 2014, Viber 5.0 was released and introduced video calling.
In November 2016, Viber version 6.5 launched Public Accounts to allow brands to engage in promotion and customer service on the platform, with initial partners including The Huffington Post, Yandex and The Weather Channel. The app integrates with CRM software and offers chatbot APIs for customer service. Viber Communities, an enhanced group chat feature, was introduced in February 2018. Group calling was introduced with version 10 in February 2019.

Security
On November 4, 2014, Viber scored 1 out of 7 points on the Electronic Frontier Foundation's "Secure Messaging Scorecard". Viber received a point for encryption during transit but lost points because communications were not encrypted with keys that the provider did not have access to (i.e. the communications were not end-to-end encrypted), users could not verify contacts' identities, past messages were not secure if the encryption keys were stolen (i.e. the service did not provide forward secrecy), the code was not open to independent review (i.e. the code was not open-source), the security design was not properly documented, and there had not been a recent independent security audit. On November 14, 2014, the EFF changed Viber's score to 2 out of 7 after it had received an external security audit from Ernst & Young's Advanced Security Centre.

On April 19, 2016, with the announcement of Viber version 6.0, Rakuten added end-to-end encryption to their service, but only for one-to-one and group conversations in which all participants are using the latest Viber version for Android, iOS, Windows (Win32) or Windows 10 (UWP). The company said that the encryption protocol had only been audited internally, and promised to commission external audits "in the coming weeks". In May 2016, Viber published an overview of their encryption protocol, saying that it is a custom implementation that "uses the same concepts" as the Signal Protocol.

See also
Comparison of cross-platform instant messaging clients
Comparison of VoIP software

References

External links
Viber App Support

IOS software
Android (operating system) software
Proprietary cross-platform software
Instant messaging clients
VoIP software
VoIP services
VoIP companies
BlackBerry software
Companies based in Luxembourg City
Windows Phone software
Social media
Symbian software
Rakuten
Israeli companies established in 2010
2010 software
Universal Windows Platform apps
Mergers and acquisitions of Israeli companies
2014 mergers and acquisitions
50924492
https://en.wikipedia.org/wiki/CoverMyMeds
CoverMyMeds
CoverMyMeds is a healthcare software company that creates software to automate the prior authorization process used by some health insurance companies in the United States. The company was founded in 2008 and has offices in Ohio. Since early 2017, it has operated as a wholly owned subsidiary of McKesson Corporation.

History
CoverMyMeds was founded in 2008 by pharmacist Sam Rajan and developer Matt Scantland to create automated prior authorization software. That same year, JumpStart Ventures invested $250,000 in CoverMyMeds. By 2011, CoverMyMeds had raised a total of $1 million. In 2008, in response to a lack of space for the growing number of employees, CoverMyMeds opened a new office in Columbus, Ohio, and moved most of its technical employees there. In the years following, CoverMyMeds saw most of its growth in Columbus, aided by tax credits offered by the state of Ohio and the city of Columbus. As of 2014, 120 of CoverMyMeds' 150 employees were in Columbus. In a 2014 Columbus Business First interview with co-founder Matt Scantland, when asked which city is considered the headquarters of CoverMyMeds, Scantland stated, "Both Columbus and Twinsburg serve critical functions in our company. Relative to our daily operations, we don't really think about having a 'headquarters.'"

In 2014, Francisco Partners acquired an undisclosed minority stake in CoverMyMeds. That same year, CoverMyMeds was offered government tax credits in exchange for growth plans, which included creating 116 new jobs in the following years. During this time, CoverMyMeds moved its Columbus office from the Arena District to a larger space in the 2 Miranova building on Columbus' Scioto Mile. In November 2014, CoverMyMeds' software was being used by 45,000 pharmacies and 260,000 prescribers. By August 2015, over 400,000 prescribers used it, and as of June 2016, over 500,000.

In early 2016, CoverMyMeds announced it would be moving its Twinsburg office to Highland Hills, Ohio. The Highland Hills office consists of 15,000 square feet of space, three times larger than the space in Twinsburg, allowing CoverMyMeds to hire twenty-five new employees. The office opened on January 11, 2016, primarily occupied by analysts and financial employees. In early 2017, CoverMyMeds was acquired by McKesson for $1.1 billion. In late 2018, CoverMyMeds announced the building and development of a new campus in Franklinton, Columbus, Ohio. The new campus is a multimillion-dollar facility designed by the Dallas studio of the architecture firm Perkins+Will.

Product
CoverMyMeds' software automates the prior authorization process used by some health insurance companies in the United States, helping to save time and eliminate paperwork. Traditionally, prior authorization required phone calls and faxes between multiple parties; CoverMyMeds circumvents this by automating the process. Involved parties are able to view the status of the authorization as it progresses.

References

Health care companies based in Ohio
Health information technology companies
Software companies based in Ohio
Companies based in the Columbus, Ohio metropolitan area
Software companies of the United States
2008 establishments in Ohio
Software companies established in 2008
Health care companies established in 2008
2017 mergers and acquisitions
14235355
https://en.wikipedia.org/wiki/Jim%20Jagielski
Jim Jagielski
Jim Jagielski (born March 11, 1961) is an American software engineer who specializes in web, cloud and open source technologies.

Biography
Jagielski graduated from the Johns Hopkins University in 1983 with a BES in Electrical/Computer Engineering. He was hired by NASA's Goddard Space Flight Center immediately after graduation. In 1994, Jagielski founded jaguNET Access Services, a web host and ISP. He has served as CTO of Zend Technologies, CTO of Covalent Technologies, Chief Architect at SpringSource/VMware, Consulting Software Engineer in the Office of the CTO at Red Hat, Inc., and Sr. Director at Capital One in the Tech Fellows program. Currently he is the Open Source Chef at ConsenSys. He has been a speaker at various conferences and seminars such as ApacheCon, Forrester's IT Gigaworld, and the O'Reilly Open Source Convention. He has written on numerous topics, and was the editor of the Apache section on Slashdot.

Career
He is best known as a cofounder, member, and director of The Apache Software Foundation (ASF) and as a core developer on several ASF projects, including the Apache HTTP Server, Apache Portable Runtime, and Apache Tomcat. His first recognition on the Internet was as editor of the A/UX FAQ and system administrator for Jagubox, the primary repository for third-party software for Apple's A/UX operating system. In addition to his involvement with the ASF, Jagielski has been involved with other open-source projects.

Apache Software Foundation
Jagielski is one of the founding members of The Apache Software Foundation, having been a member of the original eight-member Apache Group. Jagielski served as a Director on the ASF's board from its incorporation in 1999 until 2018, making him the longest-serving Director in the Foundation's history. After having served eight years as Executive Vice President and Secretary, and three years as Chairman, Jagielski served for several years as President of the ASF. Jagielski is the original Chair of the Apache Incubator project, in which he is still involved. He was one of the original co-mentors for the Geronimo project, and he also mentors several Incubator podlings. Jagielski is an active developer on many open source projects, ASF and otherwise. After doing some development on the NCSA HTTPd web server, he started with Apache in early-to-mid 1995, making him likely the longest active contributor within the ASF.

Software leadership
In 2005, Jagielski was asked to serve on the Advisory Board of the Open Source Software Institute. The Open Source Software Institute (OSSI) is a non-profit (501(c)(6)) organization of corporate, government and academic representatives. Its mission is to promote the development and implementation of open-source software solutions within U.S. federal, state and municipal government agencies and academic entities.

In 2010, Jagielski was appointed to the Board of Directors of the CodePlex Foundation, which was later renamed the Outercurve Foundation. As well as Director, Jagielski serves as President of Outercurve. In 2011, Jagielski was appointed to the Board of Directors of the Open Source Initiative. He resigned in September 2013. Based on his long involvement in the FOSS community, Jagielski was one of the recipients of the O'Reilly Open Source Awards at OSCON 2012. In 2012, Jagielski was appointed as a new Council member of the MARSEC-XL Foundation.
In 2015, Jagielski was awarded the European Commission/Open Innovation Strategy and Policy Group's Luminary Award in Creating Open Engagement Platforms for his global efforts in promoting open source as an innovation process.

Other open software projects
Jagielski has contributed to Sendmail, xntpd, BIND, PHP, Perl and FreeBSD, among other projects.

References

External links

1961 births
A/UX people
Free software programmers
Living people
American software engineers
People from Dundalk, Maryland
Open source advocates
Members of the Open Source Initiative board of directors
58447873
https://en.wikipedia.org/wiki/Elizabeth%20Bradley%20%28mathematician%20and%20rower%29
Elizabeth Bradley (mathematician and rower)
Elizabeth Bradley (born April 9, 1961) is an American applied mathematician and computer scientist, and a former Olympic rower. She is a professor of computer science at the University of Colorado Boulder, where she specializes in nonlinear systems and nonlinear time series analysis.

Rowing
Bradley competed in the women's coxed four event at the 1988 Summer Olympics, with rowers Jennifer Corbet, Cynthia Eckert, and Sarah Gengler, and coxswain Kim Santiago. Their boat placed fifth out of the ten boats competing in the event. She also competed in the 1986 World Rowing Championships, placing fourth in women's eights, and in the 1987 World Rowing Championships, placing fourth in women's pairs.

Education and academic career
Bradley was a student at the Massachusetts Institute of Technology, where she earned a bachelor's degree in electrical engineering in 1983, a master's degree in computer science in 1986, and a Ph.D. in electrical engineering and computer science in 1992. Her dissertation, Taming Chaotic Circuits, was jointly supervised by Hal Abelson and Gerald Jay Sussman. She joined the University of Colorado computer science department as an assistant professor in 1993, chaired the department from 2003 to 2006, and was promoted to full professor in 2004. She has also visited Harvard University, and was a Radcliffe fellow at the Radcliffe Institute for Advanced Study for 2006–2007. She was named a CRA-W Distinguished Professor by the Committee on Widening Participation in Computing Research in 2008, and was named a President's Teaching Scholar by the University of Colorado in 2017.

References

External links
Home page

1961 births
Living people
American female rowers
Olympic rowers of the United States
Rowers at the 1988 Summer Olympics
Sportspeople from New York City
20th-century American mathematicians
21st-century American mathematicians
American women mathematicians
Applied mathematicians
Dynamical systems theorists
American computer scientists
American women computer scientists
University of Colorado Boulder faculty
Radcliffe fellows
20th-century American women scientists
21st-century American women scientists
MIT School of Engineering alumni
6702997
https://en.wikipedia.org/wiki/Patriot%20Act%2C%20Title%20VIII
Patriot Act, Title VIII
Title VIII: Strengthening the criminal laws against terrorism is the eighth of ten titles which comprise the USA PATRIOT Act, an anti-terrorism bill passed in the United States one month after the September 11, 2001 attacks. Title VIII contains 17 sections; it creates definitions of terrorism and establishes or re-defines rules with which to deal with it.

Attacks on mass transportation systems
The U.S. Code has a number of regulations concerning railroads. Section 801 added a new section that punishes those who:
wreck, demolish, set fire to, or disable a mass transportation vehicle or ferry,
use a biological agent or toxin on a train or mass transportation device, without previously obtaining the permission of the mass transportation provider, to cause injury or death,
place any biological agent or toxin as a weapon near the facilities of a railroad in order to derail, disable, or wreck the transportation mechanisms,
do something to impair the running of the transportation system, including removing or damaging a train control system, centralized dispatching system, or rail grade crossing warning signal,
interfere with, disable, or incapacitate any dispatcher, train driver, captain, or person while they are dispatching, operating, or maintaining a mass transportation vehicle or ferry, in order to cause harm or death to passengers,
do something to cause death or serious bodily injury to an employee or passenger of a mass transportation provider, or
make false allegations that an attempt or alleged attempt is being undertaken to perform a prohibited activity on a mass transportation system.
If such an offense is committed, the offender is to be fined and/or imprisoned for not more than twenty years. However, if the activity was undertaken while the mass transportation vehicle or ferry was carrying a passenger at the time of the offense, or if the offense resulted in the death of any person, then the punishment is a fine and/or life imprisonment.

Biological weapons
Section 817 of the Patriot Act expands the biological weapons statute. The statute was amended to define the use of a biological agent, toxin, or delivery system as a weapon, other than when it is used for "prophylactic, protective, bona fide research, or other peaceful purposes". The Patriot Act created a penalty of a fine, imprisonment for not more than 10 years, or both, for anyone who cannot reasonably prove that they are using a biological agent, toxin, or delivery system for these purposes. Section 817 also prevents certain people from shipping, transporting or receiving a select biological agent. Those who are restricted are those who are under indictment for, or have been convicted of, a crime punishable with a jail sentence of over a year; fugitives from justice; convicted drug users; illegal aliens; aliens from certain countries that have been deemed to have provided support for acts of international terrorism; and those who are mentally ill and have been committed to a mental institution. Penalties for those who are prohibited from transporting or receiving selected agents are fines, imprisonment of not more than 10 years, or both.

Terrorist support
A number of measures were undertaken in an attempt to prevent and penalize activities that are deemed to support terrorism. Section 803 of the Patriot Act amends the statute on terrorism to include a new section to prevent the harboring or concealment of terrorists.
This states that any person who "harbors or conceals" someone whom they know or believe to have committed an offense designated under 11 specific other codes will be subject to a fine or imprisonment of up to ten years, or both. These violations may be prosecuted in any Federal judicial district where the offense was committed, or in another Federal district as provided by law.

Section 805 modifies the statute on providing material support to terrorists so that a person being prosecuted "may be prosecuted in any Federal judicial district" where the offense was committed, "or in any other Federal judicial district as provided by law." It also adds four codes to be considered under the title. The statute defines "providing material support to terrorists" in subsection (b); Section 805 changes this definition by adding "expert advice or assistance" and "monetary instruments." Section 807 of the Patriot Act made a technical clarification that nothing in the Trade Sanctions Reform and Export Enhancement Act of 2000 would limit criminal prohibitions against the provision of material support to terrorists and terrorist organizations.

Section 806 of the Patriot Act amends U.S. forfeiture law to allow authorities to seize all foreign and domestic assets from any group or individual that is caught planning to commit acts of terrorism against the U.S. or U.S. citizens. Assets may also be seized if they have been acquired or maintained by an individual or organization for the purposes of further terrorist activities.

Penalties
A number of penalties for terrorism offenses were defined or amended by Title VIII. Section 809 removed the statute of limitations on prosecution for any terrorist offense that led to the death or injury of any person, while section 810 increased the maximum penalty for destroying an energy facility, providing material support to terrorists or terrorist organizations, destroying national-defense materials, sabotaging nuclear facilities or fuel, or damaging or destroying an interstate gas or hazardous liquid pipeline facility from not more than 20 years imprisonment to any prison term.

Conspiracy provisions were added by section 811 to criminal statutes that cover arson within the special maritime and territorial jurisdiction of the United States, killings in Federal facilities, the destruction of communications lines, stations, or systems, the destruction of property within the special maritime and territorial jurisdiction of the United States, the wrecking of trains, material support to terrorists, torture, the sabotage of nuclear facilities or fuel, interfering with flight crews, the carrying of weapons or explosives on an aircraft, and the destruction of an interstate gas or hazardous liquid pipeline facility. Furthermore, section 812 of the Patriot Act specifies that there is to be post-release supervision of terrorists for the rest of their lives, if they committed a terrorist act that resulted in, or created a foreseeable risk of, death or serious bodily injury to another person.

Cyberterrorism and cybersecurity
Several aspects of cyberterrorism are dealt with in Title VIII.
Under section 814 of the Patriot Act, it is clarified that punishments apply to those who damage or gain unauthorized access to a protected computer and thus cause a person an aggregate loss greater than $5,000; adversely affect someone's medical examination, diagnosis or treatment; cause a person to be injured; cause a threat to public health or safety; or cause damage to a governmental computer that is used as a tool to administer justice, national defense, or national security. It is only through these specific actions that civil action may be taken against an offender. Section 814 also prohibits any extortion via a protected computer, and not just extortion against a "firm, association, educational institution, financial institution, government entity, or other legal entity". Punishments were expanded to include attempted illegal use of or access to protected computers. The punishment for attempting to damage protected computers through the use of viruses or other software mechanisms is now imprisonment for not more than 10 years, while the punishment for unauthorized access and subsequent damage to a protected computer is now not more than five years imprisonment. Should the offense occur a second time, the penalty increases to not more than 20 years imprisonment. The Federal sentencing guidelines were amended to allow any individual convicted of computer fraud and abuse to be subjected to appropriate penalties, without regard to any mandatory minimum term of imprisonment.

Section 816 specifies the development and support of cybersecurity forensic capabilities. It directs the Attorney General to establish regional computer forensic laboratories that have the capability of performing forensic examinations of intercepted computer evidence relating to criminal activity and cyberterrorism, that have the capability of training and educating Federal, State, and local law enforcement personnel and prosecutors in computer crime, and that "facilitate and promote the sharing of Federal law enforcement expertise and information about the investigation, analysis, and prosecution of computer-related crime with State and local law enforcement personnel and prosecutors, including the use of multijurisdictional task forces". US$50,000,000 was authorized for establishing such labs.

Records
Under section 815 of the Patriot Act, an additional defense was added against civil actions alleging unlawful violations of access to stored communications and the interception of communications. It allows an ISP to show a good faith reliance on requests from a governmental entity that orders it to preserve records and other evidence in its possession pending the issuance of a court order or other process.

Definitions
Title VIII defines or redefines a number of terms. The term "domestic terrorism" was already defined in the U.S. Code, and this definition was amended by section 802 of the Patriot Act to include mass destruction as well as assassination or kidnapping as a terrorist activity. The definition encompasses activities that are "dangerous to human life that are a violation of the criminal laws of the United States or of any State" and are intended to "intimidate or coerce a civilian population", "influence the policy of a government by intimidation or coercion" or are undertaken "to affect the conduct of a government by mass destruction, assassination, or kidnapping" while in the jurisdiction of the United States.
When investigating international terrorism, the Attorney General can investigate:
the willful production of defective national-defense material, national-defense premises, or national-defense utilities,
the destruction of or interference with a submarine mine, torpedo, fortification or harbor-defense system, or a violation of any Presidential Executive Order governing persons or vessels within the limits of defensive sea areas,
an assault on the President of the United States, the President-elect, the Vice President or the next in line,
an assault on a United States Member of Congress or a Member-of-Congress-elect,
the destruction of an energy facility,
a raid or predatory attack on any U.S. property,
a conspiracy to damage or destroy specific property situated within a foreign country belonging to a foreign government with which the U.S. is at peace,
the destruction of any building, vehicle, or other personal or real property that is leased or owned by the U.S. government, through the use of fire or an explosive, or
threats made to kill or injure another person.

A further amendment made the following activities part of the definition of "international terrorism":
the destruction of aircraft or aircraft facilities
violence at international airports
arson within special maritime and territorial jurisdictions
use of biological weapons
use of the variola virus
use of chemical weapons
the kidnapping or assassination of congressional, cabinet, and Supreme Court members
the use of nuclear materials in a terrorist act
participation in nuclear and weapons of mass destruction threats to the U.S.
the use of plastic explosives
the arson and bombing of Government property risking or causing death
the arson and bombing of property used in interstate commerce
the killing or attempted killing during an attack on a Federal facility with a dangerous weapon
the conspiracy to murder, kidnap, or maim persons abroad
unauthorised access to protected computers
the killing or attempted killing of officers and employees of the U.S.
the murder or manslaughter of foreign officials, official guests, or internationally protected persons
hostage taking
the depredation of government property or contracts
the destruction of communication lines, stations, or systems
injury to buildings or property within the special maritime and territorial jurisdiction of the U.S.
the destruction of an energy facility
Presidential and Presidential staff assassination and kidnapping
the wrecking of trains
terrorist attacks and other acts of violence against mass transportation systems
the destruction of national defense materials, premises, or utilities
offenses relating to national defense material, premises, or utilities
violence against maritime navigation
violence against maritime fixed platforms
homicides and other violence against U.S. nationals outside of the U.S.
the use of weapons of mass destruction
acts of terrorism transcending national boundaries
the bombing of public places and facilities
the use of anti-aircraft missile systems
the use of radiological dispersal devices
harboring terrorists
providing material support to terrorists
providing material support to terrorist organizations
the financing of terrorism, or torture
the sabotage of nuclear facilities or fuel
airline piracy, assault on a flight crew with a dangerous weapon, endangering human life by using explosive or incendiary devices on aircraft, and homicide or attempted homicide on an aircraft
the destruction of an interstate gas or hazardous liquid pipeline facility.
Section 813 included acts of terrorism as a racketeering activity.

Section 804 amends the list of things or places that fall within the "special maritime and territorial jurisdiction of the United States" within the usage of Title 18, the title of the U.S.C. that deals with crime. It is amended so that when a crime is committed by or against a U.S. national, "the premises of United States diplomatic, consular, military or other United States Government missions or entities in foreign States" are considered to be part of the aforesaid jurisdiction. This includes "residences in foreign States...irrespective of ownership, used for purposes of those missions or entities or used by United States personnel....". The section ends by adding a clause saying that this paragraph does not trump any international agreement that it comes into conflict with, and that it does not apply to members of the Armed Forces who commit an offense outside the U.S. that would have resulted in a year or longer imprisonment had it been committed within the U.S.

Under section 814, a number of terms relating to cyberterrorism were redefined:
A "protected computer" was expanded to include a "computer located outside the United States that is used in a manner that affects interstate or foreign commerce or communication of the United States".
"Damage" means any impairment to the integrity or availability of data, a program, a system, or information.
"Conviction" now includes a conviction under the law of any State for a crime punishable by imprisonment for more than 1 year, where the crime involved the unauthorized access of a protected computer.
"Loss" means "any reasonable cost to any victim, including the cost of responding to an offense, conducting a damage assessment, and restoring the data, program, system, or information to its condition prior to the offense, and any revenue lost, cost incurred, or other consequential damages incurred because of interruption of service".
"Person" means "any individual, firm, corporation, educational institution, financial institution, governmental entity, or legal or other entity."

Notes and references

External links
Text of the USA PATRIOT Act (.pdf file)

Title VIII
56676454
https://en.wikipedia.org/wiki/Skylum
Skylum
Skylum (formerly Macphun) is a software development company based in Bellevue, Washington. It is best known for its photo editing software Aurora HDR and Luminar. Skylum is also the developer of Snapheal, Focus, Tonality, Intensify, Noiseless and FX Photo Studio. Founded as Macphun in 2008, the company decided to change its name to Skylum in 2017, following the decision to develop its Mac-only software for Windows as well.

History
Skylum was founded as Macphun in 2008 by two game developers and amateur photographers, Dmitry Sytnik and Paul Muzok. Initially the company developed applications for iOS. One of its first applications was Vintage Video Maker, which was later named Vintagio. In 2009, Apple named Vintagio among the best iPhone apps of the year. Alex Tsepko, Ivan Kutanin and Oksana Milczarek joined the team a few years later.

In total, the company released over 60 applications in its first three years; however, it saw the greatest number of downloads in its photography applications. Skylum thus decided to develop the same photography applications for macOS. In early 2010, Skylum launched its first macOS application, FX Photo Studio Pro, which had earlier been available for iOS only. Several other applications were also developed for macOS, including Snapheal. In order to tap the North American market, the company moved its headquarters to San Diego, United States, in 2013. A great number of employees came from the Nik Collection, which had earlier been acquired by Google. Later that year, the company launched Intensify, a fully featured photo editing application, which was named among the 2013 best Mac App Store apps.

In 2014, Skylum launched Tonality, a black-and-white photo editor, which won Apple's Editors' Choice of the year. The same year, Skylum hired a team in Europe to develop localized versions of its software and start its European expansion. In 2015, Skylum released a new image noise reduction application called Noiseless. The same year Skylum partnered with Trey Ratcliff to develop an HDR program; through the collaboration, Aurora HDR, a High Dynamic Range editing and processing tool, was released in November. A year later, Skylum developed Luminar, an all-in-one photo editing application intended as an alternative to Adobe's Lightroom. These two became the company's best-known applications. In 2017, the company released Aurora HDR and Luminar for Windows, software that previously was available for macOS only. At the same time, it was also announced that Macphun would change its name to Skylum.

Products
Skylum is best known for its photo editing software, Aurora HDR, launched in 2015, and Luminar, launched in 2016. Other notable software by the company includes Snapheal, Focus, Tonality, Intensify, Noiseless, FX Photo Studio, and Photolemur.

Skylum products have been recognized with multiple awards. In 2019, Skylum took home four gold awards at the Digital Camera Grand Prix. The company was honored for innovation in the photo editing sphere with its Luminar 4, Aurora HDR 2019, and Photolemur editing software. Skylum also took home a technical award for its innovative approach. Luminar (in 2017) and Luminar Flex (in 2019) were named the Best Software Plugin in the Lucie Technical Awards. In the same years, Luminar was awarded the Best Imaging Software by TIPA. In 2018, Luminar was named an Editor's Pick by Outdoor Photographer.

References

Photo software
33726642
https://en.wikipedia.org/wiki/Commodore%20OS
Commodore OS
Commodore OS (full name: Commodore OS Vision) was a free-to-download Linux distribution developed by Commodore USA and intended for Commodore PCs. The distribution was based on Linux Mint, was available only for x86-64 architectures, and used the GNOME 2 desktop environment. The first public beta version was released on 11 November 2011. The system was updated through Commodore OS Vision 0.8 Beta; a 1.0 release later appeared on DistroWatch.com. The operating system is no longer in development: the company has closed and its web site is no longer active.

History
The first beta release of the OS was announced on 12 November 2011 as an operating system for "pre-installation on all future Commodore USA hardware". Commodore USA went defunct in 2013, the website hosting the OS is down, and the last official release was on 9 July 2012. However, the system retains a small support community; an unofficial 32-bit version of the operating system was released on 22 February 2012.

Compatibility
Commodore OS was not compatible with Commodore 64 software. However, it did contain VICE, an open-source program that emulates Commodore systems.

Design
Commodore OS was designed to imitate the look and feel of Commodore's legacy systems and to complement the all-in-one-keyboard style of the personal computer. Commodore OS includes a collection of software intended to imitate classic Commodore software. It was fully compatible only with Commodore USA products and often caused kernel panics on generic PCs. An improved "Fusion" version was promised but never released.

References External links Commodore OS Website (Defunct) Commodore OS archive Customize Linux Mint in Commodore OS Vision [Italian] 2011 software Commodore 64 Debian-based distributions Ubuntu derivatives Linux distributions
451178
https://en.wikipedia.org/wiki/Avida
Avida
Avida is an artificial life software platform used to study the evolutionary biology of self-replicating and evolving computer programs (digital organisms). Avida is under active development by Charles Ofria's Digital Evolution Lab at Michigan State University; the first version of Avida was designed in 1993 by Ofria, Chris Adami and C. Titus Brown at Caltech, and it has been fully reengineered by Ofria on multiple occasions since then. The software was originally inspired by the Tierra system.

Design principles
Tierra simulated an evolutionary system by introducing computer programs that competed for computer resources, specifically processor (CPU) time and access to main memory. In this respect it was similar to Core War, but differed in that the programs being run in the simulation were able to modify themselves, and thereby evolve. Tierra's programs were artificial life organisms. Unlike Tierra, Avida assigns every digital organism its own protected region of memory and executes it with a separate virtual CPU. By default, other digital organisms cannot access this memory space, neither for reading nor for writing, and cannot execute code that is not in their own memory space. A second major difference is that the virtual CPUs of different organisms can run at different speeds, such that one organism executes, for example, twice as many instructions in the same time interval as another organism. The speed at which a virtual CPU runs is determined by a number of factors, but most importantly by the tasks that the organism performs: logical computations that the organisms can carry out to reap extra CPU speed as a bonus (a toy illustration of this merit mechanism appears at the end of this entry).

Use in research
Adami and Ofria, in collaboration with others, have used Avida to conduct research in digital evolution; the scientific journals Nature and Science have published four of their papers. The 2003 paper "The Evolutionary Origin of Complex Features" describes the evolution of a mathematical equals operation from simpler bitwise operations.

Use in education
The Avida-ED project uses the Avida software platform within a simplified graphical user interface suitable for evolution instruction at the high school and undergraduate college level, and provides freely available software, documentation, tutorials, lesson plans, and other course materials. The Avida-ED software runs as a web application in the browser, with the user interface implemented in JavaScript and Avida compiled to JavaScript using Emscripten, making the software broadly compatible with devices commonly used in classrooms. This approach has been shown to be effective in improving students' understanding of evolution. The Avida-ED project won the 2017 International Society for Artificial Life Education and Outreach Award.

References
"Testing Darwin", Discover Magazine, February 2005.

External links
Avida Software - GitHub
Avida-ED Project - Robert T. Pennock
An Avida Developer's Site
MSU Devolab website

Scientific publications featuring Avida
C. Adami and C.T. Brown (1994). Evolutionary Learning in the 2D Artificial Life System Avida. In: R. Brooks, P. Maes (Eds.), Proc. Artificial Life IV, MIT Press, Cambridge, MA, p. 377-381.
R. E. Lenski, C. Ofria, T. C. Collier, C. Adami (1999). Genome Complexity, Robustness, and Genetic Interactions in Digital Organisms. Nature 400:661-664.
C.O. Wilke, J.L. Wang, C. Ofria, R.E. Lenski, and C. Adami (2001). Evolution of Digital Organisms at High Mutation Rate Leads To Survival of the Flattest. Nature 412:331-333.
R.E. Lenski, C. Ofria, R.T. Pennock, and C. Adami (2003). The Evolutionary Origin of Complex Features. Nature 423:139-145.
S.S. Chow, C.O. Wilke, C. Ofria, R.E. Lenski, and C. Adami (2004). Adaptive Radiation from Resource Competition in Digital Organisms. Science 305:84-86.
J. Clune, D. Misevic, C. Ofria, R.E. Lenski, S.F. Elena, and R. Sanjuán (2008). Natural selection fails to optimize mutation rates for long-term adaptation on rugged fitness landscapes. PLoS Computational Biology 4(9).
Clune J, Goldsby HJ, Ofria C, and Pennock RT (2011). Selective pressures for accurate altruism targeting: Evidence from digital evolution for difficult-to-test aspects of inclusive fitness theory. Proceedings of the Royal Society.
Benjamin E. Beckmann, Philip K. McKinley, Charles Ofria (2007). Evolution of an adaptive sleep response in digital organisms. ECAL 2007.
Artificial life Artificial life models Digital organisms
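As a toy illustration of the merit mechanism described under Design principles above—not Avida's actual implementation; the organism names, bonus factor, and scheduling scheme here are invented—the following Python sketch grants virtual-CPU instruction slots in proportion to merit, and completing a logic task multiplies an organism's merit:

    import random

    # Toy model: each organism's virtual CPU carries a "merit" value; the
    # scheduler grants instruction slots in proportion to merit.
    organisms = {"org_a": 1.0, "org_b": 1.0}   # name -> merit (invented)

    def reward_task(name: str, bonus: float = 2.0) -> None:
        """Completing a logic task multiplies the organism's merit."""
        organisms[name] *= bonus

    def schedule(steps: int) -> dict:
        """Allocate instruction slots proportionally to merit."""
        executed = {name: 0 for name in organisms}
        names = list(organisms)
        weights = [organisms[n] for n in names]
        for _ in range(steps):
            executed[random.choices(names, weights=weights)[0]] += 1
        return executed

    reward_task("org_b")        # org_b performed, e.g., a bitwise logic task
    print(schedule(1000))       # org_b now executes roughly twice as often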
46265
https://en.wikipedia.org/wiki/Atbash
Atbash
Atbash (Hebrew: אתבש; also transliterated Atbaš) is a monoalphabetic substitution cipher originally used to encrypt the Hebrew alphabet. It can be modified for use with any known writing system with a standard collating order.

Encryption
The Atbash cipher is a particular type of monoalphabetic cipher formed by taking the alphabet (or abjad, syllabary, etc.) and mapping it to its reverse, so that the first letter becomes the last letter, the second letter becomes the second-to-last letter, and so on. For example, the Latin alphabet would work like this, with each plaintext letter replaced by the letter below it (a short Python sketch of this mapping appears at the end of this entry):

Plain:  ABCDEFGHIJKLMNOPQRSTUVWXYZ
Cipher: ZYXWVUTSRQPONMLKJIHGFEDCBA

Because there is only one way to perform this reversal, the Atbash cipher provides no communications security, as it lacks any sort of key. If multiple collating orders are available, the one used in encryption can serve as a key, but this does not provide significantly more security, considering that only a few letters can give away which one was used.

History
The name derives from the first, last, second, and second-to-last Hebrew letters (Aleph–Taw–Bet–Shin). The Atbash cipher for the modern Hebrew alphabet pairs each letter with its counterpart from the opposite end of the alphabet: א↔ת, ב↔ש, ג↔ר, ד↔ק, ה↔צ, ו↔פ, ז↔ע, ח↔ס, ט↔נ, י↔מ, כ↔ל.

In the Bible
Several biblical words are described by commentators as being examples of Atbash:
Jeremiah 25:26 – "The king of Sheshach shall drink after them" – Sheshach meaning Babylon in Atbash (bbl → ššk).
Jeremiah 51:1 – "Behold, I will raise up against Babylon, and against the inhabitants of Lev-kamai, a destroying wind." – Lev-kamai meaning Chaldeans (kšdym → lbqmy).
Jeremiah 51:41 – "How has Sheshach been captured! and the praise of the whole earth taken! How has Babylon become a curse among the nations!" – Sheshach meaning Babylon (bbl → ššk).
Regarding a potential Atbash switch of a single letter: "Any place I will mention My name" → "Any place you will mention My name" (a → t), according to Yom Tov Asevilli.

Relationship to the affine cipher
The Atbash cipher can be seen as a special case of the affine cipher. Under the standard affine convention, an alphabet of m letters is mapped to the numbers 0, 1, …, m − 1. (The Hebrew alphabet has m = 22, and the standard Latin alphabet has m = 26.) The Atbash cipher may then be enciphered and deciphered using the encryption function for an affine cipher by setting a = b = m − 1. This may be simplified to E(x) = D(x) ≡ (m − 1)(x + 1) mod m. If, instead, the m letters of the alphabet are mapped to 1, 2, …, m, then the encryption and decryption function for the Atbash cipher becomes E(x) = D(x) ≡ (m + 1) − x mod m.

See also
Temurah (Kabbalah)
Gematria
Hebrew language
ROT13

Notes References External links Online Atbash decoder Classical ciphers Jewish mysticism Hebrew-language names
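The following minimal Python sketch (an illustration, not from the article's sources) implements the mapping above in both forms: the direct table lookup and the equivalent affine function E(x) ≡ (m − 1)(x + 1) mod m.

    import string

    ALPHABET = string.ascii_uppercase                    # m = 26
    TABLE = str.maketrans(ALPHABET, ALPHABET[::-1])      # A->Z, B->Y, ...

    def atbash(text: str) -> str:
        """Encrypt or decrypt: Atbash is its own inverse."""
        return text.upper().translate(TABLE)

    def atbash_affine(text: str, m: int = 26) -> str:
        """The same mapping via the affine form E(x) = (m-1)(x+1) mod m."""
        return "".join(
            ALPHABET[((m - 1) * (ALPHABET.index(c) + 1)) % m]
            if c in ALPHABET else c
            for c in text.upper()
        )

    assert atbash("ATBASH") == "ZGYZHS"
    assert atbash(atbash("ATBASH")) == "ATBASH"          # self-inverse
    assert atbash_affine("ATBASH") == atbash("ATBASH")   # the forms agree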
26301474
https://en.wikipedia.org/wiki/Keyboard%20layout
Keyboard layout
A keyboard layout is any specific physical, visual or functional arrangement of the keys, legends, or key-meaning associations (respectively) of a computer keyboard, mobile phone, or other computer-controlled typographic keyboard. The physical layout is the actual positioning of keys on a keyboard. The visual layout is the arrangement of the legends (labels, markings, engravings) that appear on those keys. The functional layout is the arrangement of the key-meaning associations or keyboard mapping, determined in software, of all the keys of a keyboard: it is this (rather than the legends) that determines the actual response to a key press.

Modern computer keyboards are designed to send a scancode to the operating system (OS) when a key is pressed or released: this code reports only the key's row and column, not the specific character engraved on that key. The OS converts the scancode into a specific binary character code using a "scancode to character" conversion table, called the keyboard mapping table. This means that a physical keyboard may be dynamically mapped to any layout without switching hardware components—merely by changing the software that interprets the keystrokes. Often, a user can change the keyboard mapping in system settings. In addition, software may be available to modify or extend keyboard functionality. Thus the symbol shown on the physical key-top need not be the same as appears on the screen or goes into a document being typed. Some settings enable the user to type supplementary symbols which are not engraved on the keys used to invoke them.

Key types
A computer keyboard consists of alphanumeric or character keys for typing, modifier keys for altering the functions of other keys, navigation keys for moving the text cursor on the screen, function keys and system command keys—such as Esc and Break—for special actions, and often a numeric keypad to facilitate calculations. There is some variation between different keyboard models in the physical layout—i.e., how many keys there are and how they are positioned on the keyboard. However, differences between national layouts are mostly due to different selections and placements of symbols on the character keys.

Character keys
The core section of a keyboard consists of character keys, which can be used to type letters and other characters. Typically, there are three rows of keys for typing letters and punctuation, an upper row for typing digits and special symbols, and the space bar on the bottom row. The positioning of the character keys is similar to the keyboard of a typewriter.

Modifier keys
Besides the character keys, a keyboard incorporates special keys that do nothing by themselves but modify the functions of other keys. For example, the Shift key can be used to alter the output of character keys, whereas the Ctrl (control), Alt (alternate) and AltGr (alternative graphic) keys trigger special operations when used in concert with other keys. (Apple keyboards have differently labelled but equivalent keys; see below.) Typically, a modifier key is held down while another key is struck. To facilitate this, modifier keys usually come in pairs, one functionally identical key for each hand, so holding a modifier key with one hand leaves the other hand free to strike another key. An alphanumeric key labelled with only a single letter (usually the capital form) can generally be struck to type either a lower case or capital letter, the latter requiring the simultaneous holding of the Shift key. The Shift key is also used to type the upper of two symbols engraved on a given key, the lower being typed without using the modifier key.
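As a rough illustration of the keyboard mapping table described above, the following Python sketch resolves a scancode plus modifier state to a character. The scancode values are invented for illustration; real codes depend on the keyboard and the standard in use.

    # Toy "scancode to character" table: scancode -> (unshifted, shifted).
    KEYMAP = {
        16: ("q", "Q"),
        30: ("a", "A"),
        39: (";", ":"),
        3:  ("2", "@"),   # US mapping; a UK table pairs "2" with '"'
    }

    def translate(scancode: int, shift: bool) -> str:
        """Convert a key press into a character, as an OS driver might."""
        unshifted, shifted = KEYMAP[scancode]
        return shifted if shift else unshifted

    print(translate(3, shift=True))   # '@' under this US-style table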
The Latin alphabet keyboard has a dedicated key for each of the letters A–Z, keys for punctuation and other symbols, usually a row of function keys, often a numeric keypad and some system control keys. In most languages except English, additional letters (some with diacritics) are required and some are present as standard on each national keyboard, as appropriate for its national language. These keyboards have another modifier key, labelled AltGr (alternative graphic), to the right of the space-bar. (US keyboards just have a second Alt key in this position.) It can be used to type an extra symbol in addition to the two otherwise available with an alphanumeric key, and using it simultaneously with the Shift key usually gives access to a fourth symbol. These third-level and fourth-level symbols may be engraved on the right half of the key top, or they may be unmarked. Cyrillic alphabet and Greek alphabet keyboards have similar arrangements.

Instead of the Ctrl, Alt and AltGr keys seen on commodity keyboards, Apple keyboards have ⌘ (command) and ⌥ (option) keys. The command key is used much like the Ctrl key, and the option key like the Alt and AltGr keys, to access menu options and shortcuts. Macs also have a Ctrl key for compatibility with programs that expect a more traditional keyboard layout. It is especially useful when using a terminal, X11 (a Unix environment included with OS X as an install option) or MS Windows. The Ctrl key can generally be used to produce a secondary mouse click as well. There is also a Fn key on modern Mac keyboards, which is used for switching between use of the F1, F2, etc. keys either as function keys or for other functions like media control, accessing dashboard widgets, controlling the volume, or handling exposé. A Fn key can also be found on smaller Windows and Linux laptops and tablets, where it serves a similar purpose.

Many Unix workstation keyboards (and also those of home computers like the Amiga) placed the Ctrl key to the left of the letter A, and the Caps Lock key in the bottom left. This position of the Ctrl key is also used on the XO laptop, which does not have a Caps Lock. The UNIX keyboard layout also differs in the placement of the Esc key, which is to the left of 1.

Some early keyboards experimented with using large numbers of modifier keys. The most extreme example of such a keyboard, the so-called "space-cadet keyboard" found on MIT LISP machines, had no fewer than seven modifier keys: four control keys—Ctrl, Meta, Hyper, and Super—along with three shift keys: Shift, Top, and Front. This allowed the user to type over 8000 possible characters by playing suitable "chords" with many modifier keys pressed simultaneously.

Dead keys
A dead key is a special kind of modifier key that, instead of being held while another key is struck, is pressed and released before the other key. The dead key does not generate a character by itself, but it modifies the character generated by the key struck immediately after, typically making it possible to type a letter with a specific diacritic. For example, on some keyboard layouts, the grave accent key ` is a dead key: in this case, striking ` and then A results in à (a with grave accent); ` followed by E results in è (e with grave accent). A grave accent in isolated form can be typed by striking ` and then the space bar. A key may function as a dead key by default, or sometimes a normal key can temporarily be altered to function as a dead key by simultaneously holding down the secondary-shift key—AltGr or option: a typical example might be AltGr+6 producing a circumflex dead key (assuming the "6" key is also the "^" key).
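The dead-key behaviour just described can be modelled as a tiny state machine: the dead keystroke stores a pending diacritic and emits nothing, and the next keystroke resolves the combination. A minimal Python sketch, covering only the grave-accent examples above:

    # Dead-key state machine for the grave accent (illustrative subset).
    COMBINE = {("`", "a"): "à", ("`", "e"): "è", ("`", " "): "`"}

    pending = None  # a dead key waiting for its base character

    def keystroke(ch: str) -> str:
        """Return the text produced by one keystroke ('' while pending)."""
        global pending
        if pending is None and ch == "`":
            pending = ch                        # dead key: emit nothing yet
            return ""
        if pending is not None:
            result = COMBINE.get((pending, ch), pending + ch)
            pending = None
            return result
        return ch

    assert keystroke("`") == "" and keystroke("e") == "è"
    assert keystroke("`") == "" and keystroke(" ") == "`"  # isolated accent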
In some systems, there is no indication to the user that a dead key has been struck, so the key appears dead; but in some text-entry systems the diacritical mark is displayed along with an indication that the system is waiting for another keystroke: either the base character to be marked, an additional diacritical mark, or the space bar to produce the diacritical mark in isolation. Compared with the secondary-shift modifier key, the dead-key approach may be a little more complicated, but it allows more additional letters. Using AltGr, only one additional letter can be typed with each key (or two, if it is used simultaneously with the normal shift key), whereas using a dead key, a specific diacritic can be attached to a range of different base letters.

Compose key
A Compose key can be characterized as a generic dead key that may in some systems be available instead of or in addition to the more specific dead keys. It allows access to a wide range of predefined extra characters by interpreting a whole sequence of keystrokes following it. For example, striking Compose followed by ' (apostrophe) and then A results in á (a with acute accent), Compose followed by A and then E results in æ (ae ligature), and Compose followed by O and then C results in © (circled c, copyright symbol). The Compose key is supported by the X Window System (used by most Unix-like operating systems, including most Linux distributions). Some keyboards have a key labeled "Compose", but any key can be configured to serve this function. For example, the otherwise redundant right-hand Windows key may, when available, be used for this purpose. This can be emulated in Windows with third-party programs, for example WinCompose.

System command keys
Depending on the application, some keyboard keys are not used to enter a printable character but instead are interpreted by the system as formatting, mode-shift, or special commands to the system. The following examples are found on personal computer keyboards.

SysRq and PrtSc
The system request (SysRq) and print screen (PrtSc) commands often share the same key. SysRq was used in earlier computers as a "panic" button to recover from crashes (and it is still used in this sense to some extent by the Linux kernel; see Magic SysRq key). The print screen command used to capture the entire screen and send it to the printer; today it usually puts a screenshot in the clipboard.

Break key
The Break key/Pause key no longer has a well-defined purpose. Its origins go back to teleprinter users, who wanted a key that would temporarily interrupt the communications line. The Break key can be used by software in several different ways, such as to switch between multiple login sessions, to terminate a program, or to interrupt a modem connection. In programming, especially in old DOS-style BASIC, Pascal and C, Break is used (in conjunction with Ctrl) to stop program execution. In addition to this, Linux and its variants, as well as many DOS programs, treat this combination the same as Ctrl+C. On modern keyboards, the break key is usually labeled Pause/Break. In most Microsoft Windows environments, the key combination Win+Pause brings up the system properties.

Escape key
The escape key (often abbreviated Esc) nearly all of the time signals "stop", "quit", or "get out of a dialog" (or pop-up window). Another common application today of the Esc key is to trigger the Stop button in many web browsers. ESC was part of the standard keyboard of the Teletype Model 33 (introduced in 1964 and used with many early minicomputers).
The DEC VT50, introduced July 1974, also had an Esc key. The TECO text editor (c. 1963) and its descendant Emacs use the Esc key extensively. Historically it also served as a type of shift key, such that one or more following characters were interpreted differently, hence the term escape sequence, which refers to a series of characters, usually preceded by the escape character. On machines running Microsoft Windows, prior to the implementation of the Windows key on keyboards, the typical practice for invoking the "start" button was to hold down the control key and press escape. This process still works in Windows 95, 98, Me, NT 4, 2000, XP, Vista, 7, 8, and 10.

Enter key
An "enter" key may terminate a paragraph of text and advance the editing cursor to the start of the next available line, similar to the "carriage return" key of a typewriter. When the attached system is processing a user command line, pressing "enter" may signal that the command has been completely entered and that the system may now process it.

Shift key
When one presses Shift and a letter, the letter is capitalized. Another use is to type more symbols than appear to be available; for instance, the semi-colon key is accompanied by a colon symbol on its top half. To type a semi-colon, the key is pressed without pressing any other key. To type a colon, both this key and the Shift key are pressed concurrently. (Some systems make provision for users with mobility impairment by allowing the Shift key to be pressed first and then the desired symbol key.)

Menu key, Command key, Windows key
The Menu key or Application key is a key found on Windows-oriented computer keyboards; on Apple keyboards the same function is provided by the Command key (labelled ⌘). It is used to launch a context menu with the keyboard rather than with the usual right mouse button. The key's symbol is usually a small icon depicting a cursor hovering above a menu. On some Samsung keyboards the cursor in the icon is not present, showing the menu only. This key was created at the same time as the Windows key, and it is normally used when a right mouse button is not present on the mouse. Some Windows public terminals do not have a Menu key on their keyboard, to prevent users from right-clicking (however, in many Windows applications, a similar functionality can be invoked with the Shift+F10 keyboard shortcut). The Windows key opens the 'Start' (applications) menu.

History
Keyboard layouts have evolved over time, usually alongside major technology changes. Particularly influential have been: the Sholes and Glidden typewriter (1874, also known as Remington No. 1), the first commercially successful typewriter, which introduced QWERTY; its successor, the Remington No. 2 (1878), which introduced the shift key; the IBM Selectric (1961), a very influential electric typewriter, which was imitated by computer keyboards; and the IBM PC (1981), namely the Model M (1985), which is the basis for many modern keyboard layouts. Within a community, keyboard layout is generally quite stable, due to the high training cost of touch-typing and the resulting network effect of having a standard layout and high switching cost of retraining; the suboptimal QWERTY layout is a case study in switching costs. Nevertheless, significant market forces can result in changes (as in the Turkish adoption of QWERTY), and non-core keys are more prone to change, as they are less frequently used and less subject to the lock-in of touch-typing.
The main, alphanumeric portion is typically stable, while symbol keys and shifted key values change somewhat, modifier keys more so, and function keys most of all: QWERTY dates to the No. 1 (1874) (though 1 and 0 were added later), shifted keys date in some cases to the No. 2 (1878) and in other cases to the Selectric (1961), and modifier key placement largely dates to the Model M (1985); function key placement typically dates to the Model M, but varies significantly, particularly on laptops.

The earliest mechanical keyboards were used in musical instruments to play particular notes. With the advent of the printing telegraph, a keyboard was needed to select characters. Some of the earliest printing telegraph machines either used a piano keyboard outright or else a layout similar to a piano keyboard. The Hughes-Phelps printing telegraph piano keyboard laid keys A–N in left-to-right order on the black piano keys, and keys O–Z in right-to-left order on the white piano keys below.

In countries using the Latin script, the center, alphanumeric portion of the modern keyboard is most often based on the QWERTY design by Christopher Sholes. Sholes' layout was long thought to have been laid out in such a way that common two-letter combinations were placed on opposite sides of the keyboard so that his mechanical keyboard would not jam. However, evidence for this claim has often been contested. In 2012, an argument was advanced by two Japanese historians of technology showing that the key order on the earliest Sholes prototypes in fact followed the left-right and right-left arrangement of the contemporary Hughes-Phelps printing telegraph, described above. Later iterations diverged progressively for various technical reasons, and strong vestiges of the left-right A–N, right-left O–Z arrangement can still be seen in the modern QWERTY layout. Sholes' chief improvement was thus to lay out the keys in rows offset horizontally from each other by three-eighths, three-sixteenths, and three-eighths inches, to provide room for the levers and to reduce hand-movement distance. Although it has been demonstrated that the QWERTY layout is not the most efficient layout for typing, it remains the standard.

Sholes chose the size of the keys to be on three-quarter (0.75) inch centers (about 19 mm, versus musical piano keys, which are 23.5 mm or about 0.93 inches wide). 0.75 inches has turned out to be optimal for fast key entry by the average-size hand, and keyboards with this key size are called "full-sized keyboards".

On a manual typewriter, the operator could press the key down with a lighter touch for such characters as the period or comma, which did not occupy as much area on the paper. Since an electric typewriter supplied the force to the typebar itself after the typist merely touched the key, the typewriter itself had to be designed to supply different force for different characters. To simplify this, the most common layout for electric typewriters in the United States differed from that most common on manual typewriters. Single-quote and double-quote, instead of being above the keys for the digits 2 and 8 respectively, were placed together on a key of their own. The underscore, another light character, replaced the asterisk above the hyphen. The ASCII communications code was designed so that characters on a mechanical teletypewriter keyboard could be laid out in a manner somewhat resembling that of a manual typewriter.
This was imperfect, as some shifted special characters were moved one key to the left, since the number zero, although on the right, was low in code sequence. Later, when computer terminals were designed from less expensive electronic components, it was not necessary to have any bits in common between the shifted and unshifted characters on a given key. This eventually led to standards being adopted for the "bit-pairing" and "typewriter-pairing" forms of keyboards for computer terminals. The typewriter-pairing standard came under reconsideration on the basis that typewriters have many different keyboard arrangements. The U.S. keyboard for the IBM PC, although it resembles the typewriter-pairing standard in most respects, differs in one significant respect: the braces are on the same two keys as the brackets, as their shifts. This innovation predated the IBM Personal Computer by several years.

The standard 101/102-key PC keyboard layout was invented by Mark Tiddens of Key Tronic Corporation in 1982. IBM adopted the layout on the PS/2 in 1987 (after previously using an 84-key keyboard which did not have separate cursor and numeric key pads). Most modern keyboards basically conform to the layout specifications contained in parts 1, 2, and 5 of the international standard series ISO/IEC 9995. These specifications were first defined by the user group at AFNOR in 1984, working under the direction of Alain Souloumiac. Based on this work, a well-known ergonomic expert wrote a report which was adopted at the ISO Berlin meeting in 1985 and became the reference for keyboard layouts. The 104/105-key PC keyboard was born when two Windows keys and a Menu key were added on the bottom row (originally for the Microsoft Windows operating system). Newer keyboards may incorporate even further additions, such as Internet access (World Wide Web navigation) keys and multimedia (access to media players) buttons.

Physical, visual, and functional layouts
As noted before, the layout of a keyboard may refer to its physical (arrangement of keys), visual (physical labeling of keys), or functional (software response to a key press or release) layout.

Physical layouts
Physical layouts only address tangible differences among keyboards. When a key is pressed, the keyboard does not send a message such as "the A-key is depressed" but rather "the left-most main key of the home row is depressed". (Technically, each key has an internal reference number, the scan code, and these numbers are what is sent to the computer when a key is pressed or released.) The keyboard and the computer each have no information about what is marked on that key, and it could equally well be the letter A or the digit 9. Historically, the user of the computer was requested to identify the functional layout of the keyboard when installing or customizing the operating system. Modern USB keyboards are plug and play; they communicate their visual layout to the OS when connected (though the user is still able to reset this at will). Today, most keyboards use one of three different physical layouts, usually referred to as simply ISO (ISO/IEC 9995-2), ANSI (ANSI-INCITS 154-1988), and JIS (JIS X 6002-1980), referring roughly to the organizations issuing the relevant worldwide, United States, and Japanese standards, respectively. (In fact, the physical layouts referred to as "ISO" and "ANSI" comply with the primary recommendations in the named standards, while each of these standards in fact also allows the other.)
Keyboard layout in this sense may refer either to this broad categorization or to finer distinctions within these categories. For example, Apple Inc. produces ISO, ANSI, and JIS desktop keyboards, each in both extended and compact forms. The extended keyboards have 110, 109, and 112 keys (ISO, ANSI, and JIS, respectively), and the compact models have 79, 78, and 80.

Visual layouts
The visual layout includes the symbols printed on the physical keycaps. Visual layouts vary by language, country, and user preference, and any one physical and functional layout can be employed with a number of different visual layouts. For example, the "ISO" keyboard layout is used throughout Europe, but typical French, German, and UK variants of physically identical keyboards appear different because they bear different legends on their keys. Even blank keyboards—with no legends—are sometimes used to learn typing skills or by user preference. Some users choose to attach custom labels on top of their keycaps. This can be, e.g., for masking foreign layouts, adding additional information such as shortcuts, learning aids, or gaming controls, or solely for decorative purposes.

Functional layouts
The functional layout of the keyboard refers to the mapping between the physical keys, such as the A key, and software events, such as the letter "A" appearing on the screen. Usually the functional layout is set (in the system configuration) to match the visual layout of the keyboard being used, so that pressing a key will produce the expected result, corresponding to the legends on the keyboard. However, most operating systems have software that allows the user to easily switch between functional layouts, such as the language bar in Microsoft Windows (a minimal sketch of such table switching appears below). For example, a user with a Swedish keyboard who wishes to type more easily in German may switch to a functional layout intended for German—without regard to key markings—just as a Dvorak touch typist may choose a Dvorak layout regardless of the visual layout of the keyboard used.

Customized functional layouts
Functional layouts can be redefined or customized within the operating system, by reconfiguring the operating system's keyboard driver, or with the use of a separate software application. Transliteration is one example, whereby letters in another language are matched to visible Latin letters on the keyboard by the way they sound. Thus, a touch typist can type various foreign languages with a visible English-language keyboard only. Mixed hardware-to-software keyboard extensions exist to overcome the above discrepancies between functional and visual layouts. A keyboard overlay is a plastic or paper mask that can be placed over the empty space between the keys, providing the user with the functional use of various keys. Alternatively, a user applies keyboard stickers with an extra imprinted language alphabet and adds another keyboard layout via language support options in the operating system. The visual layout of any keyboard can also be changed by simply replacing its keys or attaching labels to them, such as to change an English-language keyboard from the common QWERTY to the Dvorak layout, although for touch typists, the placement of the tactile bumps on the home keys is of more practical importance than that of the visual markings.
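As a minimal sketch of the functional-layout switching described above: the scancodes coming from the hardware stay the same, and only the lookup table changes. The scancode value below is invented for illustration.

    # Two functional layouts sharing one physical keyboard.
    LAYOUTS = {
        "us_qwerty": {16: "q"},
        "fr_azerty": {16: "a"},   # AZERTY puts A where QWERTY has Q
    }

    active = "us_qwerty"
    print(LAYOUTS[active][16])    # 'q'

    active = "fr_azerty"          # the user switches layouts in software
    print(LAYOUTS[active][16])    # 'a' — same key, different character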
In the past, complex software that mapped many non-standard functions to the keys (such as a flight simulator) would be shipped with a "keyboard overlay", a large sheet of paper with pre-cut holes matching the key layout of a particular model of computer. When placed over the keyboard, the overlay provided a quick visual reference as to what each key's new function was, without blocking the keys or permanently modifying their appearance. The overlay was often made from good-quality laminated paper and was designed to fold up and fit in the game's packaging when not in use.

National variants
The U.S. IBM PC keyboard has 104 keys, while the PC keyboards for most other countries have 105 keys. In an operating system configured for a non-English language, the keys are placed differently. For example, keyboards designed for typing in Spanish have some characters shifted, to make room for Ñ/ñ; similarly those for French or Portuguese may have a special key for the character Ç/ç. Keyboards designed for Japanese may have special keys to switch between Japanese and Latin scripts, and the character ¥ (Japanese yen or Chinese yuan currency symbol) instead of \ (backslash, which may be replaced by the former in some codepages). Using a keyboard for alternative languages leads to a conflict: the image on the key does not correspond to the character. In such cases, each new language may require an additional label on the key, because the standard keyboard layouts do not share even similar characters across different languages.

The United States keyboard layout is used as the default in some Linux distributions. Most operating systems allow switching between functional keyboard layouts using a key combination involving register keys that are not used for normal operations (e.g. Microsoft reserves the Alt+Shift and Ctrl+Shift register-key combinations for sequential layout switching; those combinations were inherited from old DOS keyboard drivers). There are keyboards with two parallel sets of characters labeled on the keys, representing alternate alphabets or scripts. It is also possible to add a second set of characters to a keyboard with keyboard stickers manufactured by third parties.

Size variation
Modern keyboard models contain a set number of total keys according to their given standard, described as 104, 105, etc., and sold as "full-size" keyboards. This number is not always followed, and individual keys or whole sections are commonly skipped for the sake of compactness or user preference. The most common choice is to not include the numpad, which can usually be fully replaced by the alphanumeric section. Laptops and wireless peripherals often lack duplicate keys and ones seldom used. Function and arrow keys are nearly always present.

Latin-script keyboard layouts
Although there are a large number of keyboard layouts used for languages written with Latin-script alphabets, most of these layouts are quite similar. They can be divided into three main families according to where the Q, A, Z, M, and Y keys are placed on the keyboard. These layouts are usually named after the first six letters on the first row: AZERTY, QWERTY, QWERTZ, QZERTY and national variants thereof.
While the central area of the keyboard, the alphabetic section, remains fairly constant, and the numbers 1–9 are almost invariably on the row above, keyboards may differ in:
the placement of punctuation, typographic and other special characters, and which of these characters are included,
whether numbers are accessible directly or in a shift-state,
the presence and placement of letters with diacritics (in some layouts, diacritics are applied using dead keys, but these are rarely engraved),
the presence and placement of a row of function keys above the number row,
the presence and placement of one or two Alt keys, an AltGr key or Option key, a backspace or delete key, a control key or command key, a compose key, an Esc key, and OS-specific keys like the Windows key.

The physical keyboard is of the basic ISO, ANSI, or JIS type; pressing a key sends a scan code to the operating system or other software, which in turn determines the character to be generated: this arrangement is known as the keyboard mapping. It is customary for keyboards to be engraved appropriately to the local default mapping. For example, when the Shift and numeric 2 keys are pressed simultaneously on a US keyboard, "@" is generated, and the key is engraved appropriately. On a UK keyboard this key combination generates the double-quote character, and UK keyboards are so engraved. In the keyboard charts listed below, the primary letters or characters available with each alphanumeric key are often shown in black in the left half of the key, whereas characters accessed using the AltGr key appear in blue in the right half of the corresponding key. Symbols representing dead keys usually appear in red.

QWERTY
The QWERTY layout is, by far, the most widespread layout in use, and the only one that is not confined to a particular geographical area. In some territories, keys like Enter and Caps Lock are not translated to the language of the territory in question. In other varieties such keys have been translated, as with Intro and Bloq mayús on Spanish computer keyboards, respectively, for the example above. On Macintosh computers these keys are usually just represented by symbols without the words "Enter", "Shift", "Command", "Option/Alt" or "Control", with the exception of keyboards distributed in the US and East Asia.

QÜERTY (Azeri)
Azeri keyboards use a layout known as QÜERTY, where Ü appears in place of W above S, with W not being accessible at all. It is supported by Microsoft Windows.

ÄWERTY (Turkmen)
Turkmen keyboards use a layout known as ÄWERTY, where Ä appears in place of Q above A, Ü appears in place of X below S, Ç appears in place of C, and Ý appears in place of V, with C, Q, V, and X not being accessible at all. It is supported by Microsoft Windows (Vista and later only).

QWERTZ
The QWERTZ layout is the normal keyboard layout in Germany, Austria and Switzerland. It is also fairly widely used in Czechia, Slovakia and other parts of Central Europe. The main difference between it and QWERTY is that Y and Z are swapped, and some special characters such as brackets are replaced by diacritical characters like Ä, Ö, Ü, ß. In Czechia and Slovakia diacritical characters like Ě, Š, Č, Ř, Ž, Ý, Á, Í also replace numbers. Caps lock can be a shift lock as in AZERTY (see below).

AZERTY
The AZERTY layout is used in France, Belgium, and some African countries.
It differs from the QWERTY layout thus:
A and Q are swapped,
Z and W are swapped,
M is moved to the right of L (taking the place of the colon/semicolon key on a US keyboard),
the digits 0 to 9 are on the same keys, but to be typed the shift key must be pressed; the unshifted positions are used for accented characters,
Caps lock is replaced by Shift lock, thus affecting non-letter keys as well. However, there is an ongoing evolution towards a Caps lock key instead of a Shift lock.

ĄŽERTY (Lithuanian)
Lithuanian keyboards use a layout known as ĄŽERTY, where Ą appears in place of Q above A, Ž in place of W above S, and Ū in place of X below S, with Q, W, and X being available either on the far right-hand side or by use of the AltGr key. Besides ĄŽERTY, the Lithuanian QWERTY keyboard is also used. It is standardized as LST 1582.

QZERTY
The QZERTY layout was used mostly in Italy, where it was the traditional typewriter layout. In recent years, however, a modified QWERTY layout with stressed keys such as à, è, ò has gained widespread usage throughout Italy. Computer keyboards usually have QWERTY, although non-alphanumeric characters vary. In QZERTY:
W and Z are swapped,
M is moved from the right of N to the right of L, as in AZERTY,
number keys are shifted.
Apple supported the QZERTY layout in its early Italian keyboards, and currently the iPod Touch also has it available.

Sámi Extended
Sámi keyboards use a layout known as the Sámi Extended, where Á appears in place of Q above A, Š appears in place of W above S, Č appears in place of X to the left of C, and Ŧ appears in place of Y to the right of T, with Q, W, X, and Y being available by use of the AltGr key. Also, Å is to the right of P (to match the Norwegian and Swedish/Finnish keyboards), Ŋ is to the right of Å, and Đ is to the right of Ŋ. The layout differs between Norway and Sweden/Finland, because the placement of the letters Ä, Æ, Ö, and Ø differs between Norwegian and Swedish/Finnish; they are placed where they match the standard keyboard for the main language spoken in the country. It is supported by Microsoft Windows (Windows XP SP2 and later only). Microsoft Windows also has Swedish with Sami, Norwegian with Sami and Finnish with Sami layouts, which match the normal Swedish, Norwegian, or Finnish keyboards, but have additional Sami characters as AltGr combinations.

Other Latin-script keyboard layouts
There are also keyboard layouts that do not resemble traditional typewriter layouts very closely, if at all. These are designed to reduce finger movement and are claimed by some proponents to offer higher typing speed along with ergonomic benefits.

Dvorak
The Dvorak layout was named after its inventor, August Dvorak. There are also numerous adaptations for languages other than English, and single-handed variants. Dvorak's original layout had the numerals rearranged, but the present-day layout has them in numerical order. Dvorak has numerous properties designed to increase typing speed, decrease errors, and increase comfort. Research has found a 4% average advantage to the end user in typing speed. The layout concentrates the most used English letters in the home row where the fingers rest, thus having 70% of typing done in the home row (compared to 32% in QWERTY). The Dvorak layout is available out-of-the-box on most operating systems, making switching through software very easy. "Hardwired" Dvorak keyboards are also available, though only from specialized hardware companies.
Colemak The Colemak layout is another popular alternative to the standard QWERTY layout, offering a more familiar change for users already accustomed to the standard layout. It builds upon the QWERTY layout as a base, changing the positions of 17 keys while retaining the QWERTY positions of most non-alphabetic characters and many popular keyboard shortcuts, supposedly making it easier to learn than Dvorak for people who already type in QWERTY without sacrificing efficiency. It shares several design goals with the Dvorak layout, such as minimizing finger path distance and making heavy use of the home row. An additional defining (albeit optional) feature of the Colemak layout is the lack of a caps lock key; an additional backspace key occupies the position typically occupied by Caps Lock on modern keyboards. Operating systems such as macOS, Linux, Android, Chrome OS, and BSD allow a user to switch to the Colemak layout. A program to install the layout is available for Microsoft Windows, as well as a portable AutoHotKey implementation. Colemak variants exist, including Colemak Mod-DH, which seeks to rectify concerns that the layout places too much emphasis on the middle-row centre-column keys (D and H), leading to awkward lateral hand movements for certain common English bigrams such as HE. Others seek to have more compatibility with other keyboard layouts. Workman Workman is an English layout supported out-of-the-box in Linux/X11 systems. The Workman layout employs a hypothesis about the preferential movement of each finger rather than categorically considering the lowest letter row to be least accessible. Specifically, the index finger prefers to curl inwards rather than stretch outwards. So for the index finger, the position of second preference goes to the bottom row rather than the top row. Contrarily, the middle and ring fingers are relatively long and prefer to stretch out rather than curl in. Based on this, weighting is allotted to each key specifically rather than each row generically. Another principle applied is that it is more natural and less effort to curl in or stretch out fingers rather than rotate one's wrist inwards or outwards. Thus the Workman layout allots a lower priority to the two innermost columns between the home keys (G and H columns on a QWERTY layout), similarly to the Colemak-DH or "Curl" mods. Workman also balances the load quite evenly between both hands. The Workman layout is found to achieve overall less travel distance of the fingers for the English language than even Colemak. It does however generally incur higher same-finger n-gram frequencies; or in other words, one finger will need to hit two keys in succession more often than in other layouts. Other English layouts There are many other layouts for English, each developed with differing basic principles. The Norman Layout, like Workman, deprioritizes the central columns but gives more load to the right hand with the assumption that the right hand is more capable than the left. It also gives importance to retaining letters in the same position or at least the same finger as QWERTY. MTGAP's Layout for a Standard Keyboard / an Ergonomic Keyboard has the lowest finger travel for a standard keyboard, and travel distance for an ergonomic keyboard second only to Arensito keyboard layout. Further variations were created using the keyboard layout optimizer. 
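The same-finger n-gram measure discussed above can be computed directly: assign each letter to a finger and count how often consecutive letters in a text fall on the same finger. A rough Python sketch, with a deliberately tiny finger map (real evaluations use a full layout and large corpora):

    # Count same-finger bigrams: consecutive letters typed by one finger.
    # The finger assignments below are a small illustrative subset of QWERTY.
    FINGER = {"d": "L-middle", "e": "L-middle", "c": "L-middle",
              "j": "R-index",  "u": "R-index",  "h": "R-index"}

    def same_finger_rate(text: str) -> float:
        pairs = [(a, b) for a, b in zip(text, text[1:])
                 if a in FINGER and b in FINGER]
        same = sum(FINGER[a] == FINGER[b] for a, b in pairs)
        return same / len(pairs) if pairs else 0.0

    # "ed" is a same-finger bigram on QWERTY (both on the left middle finger).
    print(same_finger_rate("deeded"))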
Other layouts prioritize minimal key deviation from QWERTY to give a reasonable increase in typing speed and ergonomics with minimal relearning of keys. Qwpr is a layout that changes only 11 basic keys from their QWERTY positions, with only 2 keys typed with different fingers. Minimak has versions which change four, six, eight, or twelve keys; all have only 3 keys that change finger. These intend to offer much of the reduced finger movement of Dvorak without the steep learning curve and with an increased ability to remain proficient with a QWERTY keyboard. The Qwpr layout is also designed for programmers and multilingual users, as it uses Caps Lock as a "punctuation shift", offering quicker access to ASCII symbols and arrow keys, as well as to 15 dead keys for typing hundreds of different glyphs such as accented characters, mathematical symbols, or emoji. In Canada, the CSA keyboard is designed to write several languages, especially French.

Sholes 2nd Layout
Christopher Latham Sholes, inventor of the QWERTY layout, created his own alternative in 1898. The patent was granted in 1896. Similar to Dvorak, he placed all the vowels on the home row, but in this case on the right hand. The layout is right-hand biased, with both the vowels and many of the most common consonants on the right side of the layout.

JCUKEN (Latin)
The JCUKEN layout was used in the USSR for all computers (both domestically produced and imported, such as Japan-made MSX-compatible systems) except the IBM-compatible ES PEVM, due to its phonetic compatibility with the Russian ЙЦУКЕН layout. The layout has the advantage of having punctuation marks on Latin and Cyrillic layouts mapped to the same keys. The Russian Typewriter layout can be found on many Russian typewriters produced before the 1990s, and it is the default Russian keyboard layout in the OpenSolaris operating system.

Neo
The Neo layout is an optimized German keyboard layout developed in 2004 by the Neo Users Group, supporting nearly all Latin-based alphabets, including the International Phonetic Alphabet, the Vietnamese language and some African languages. The positions of the letters are optimized not only for German letter frequency, but also for typical groups of two or three letters. English is considered a major target as well. The design tries to enforce the alternating usage of both hands to increase typing speed. It is based on ideas from de-ergo and other ergonomic layouts. The high-frequency keys are placed in the home row. The current layout Neo 2.0 has unique features not present in other layouts, making it suited for many target groups such as programmers, mathematicians, scientists or LaTeX authors. Neo is grouped in different layers, each designed for a special purpose. Most special characters inherit the meaning of the lower layers—the Greek α, for example, sits one layer above the a character. Neo uses a total of six layers with the following general use:
Lowercase characters
Uppercase characters, typographical characters
Special characters for programming, etc.
WASD-like movement keys and number block
Greek characters
Mathematical symbols and Greek uppercase characters

BÉPO
The BÉPO layout is an optimized French keyboard layout developed by the BÉPO community, supporting all Latin-based alphabets of the European Union, Greek and Esperanto. It is also designed to ease programming. It is based on ideas from the Dvorak and other ergonomic layouts. Typing with it is usually easier due to the high-frequency keys being in the home row.
Typing tutors exist to ease the transition. In 2019, a slightly modified version of the BÉPO layout was featured in a French standard developed by AFNOR, along with an improved version of the traditional AZERTY layout. However, use of the BÉPO layout remains marginal.

Dvorak-fr
The Dvorak-fr layout is a Dvorak-like layout specific to the French language, without concessions to the use of programming languages, published in 2002 by Francis Leboutte. Version 2 was released in June 2020. Its design meets the need to maximize comfort and prevent risks when typing in French. Unlike Azerty, the characters needed for good French typography are easily accessible: for example, the quotation marks (« ») and the curved apostrophe are available directly. More than 150 additional characters are available via dead keys.

Turkish (F-keyboard)
The Turkish language uses the Turkish Latin alphabet, and a dedicated keyboard layout was designed in 1955 by İhsan Sıtkı Yener. During its design, letter frequencies in the Turkish language were investigated with the aid of the Turkish Language Association. These statistics were then combined with studies on the bone and muscle anatomy of the fingers to design the Turkish F-keyboard. The keyboard provides a balanced distribution of typing effort between the hands: 49% for the left hand and 51% for the right. With this scientific preparation, Turkey has broken 14 world records in typewriting championships between 1957 and 1995. In 2009, Recep Ertaş and in 2011, Hakan Kurt from Turkey came in first in the text production event of the 47th (Beijing) and 48th (Paris) Intersteno congresses respectively. Despite the greater efficiency of the Turkish F-keyboard, however, the modified QWERTY keyboard ("Q-keyboard") is the one used on most computers in Turkey, because QWERTY keyboards have been overwhelmingly imported since the beginning of the 1990s.

ŪGJRMV
The ŪGJRMV layout is specifically designed for the Latvian language.

Malt
The Malt layout—named for its inventor, South African-born Lilian Malt—is best known for its use on molded, ergonomic Maltron keyboards. Nevertheless, it has been adapted as well for flat keyboards, with a compromise involved: a flat keyboard has a single, wide space-bar, rather than a space button as on Maltron keyboards, so the E key was moved to the bottom row.

Modified Blickensderfer
The Blickensderfer typewriter, designed by George Canfield Blickensderfer in 1892, was known for its novel keyboard layout, its interchangeable font, and its suitability for travel. The Blickensderfer keyboard had three banks (rows of keys), with special characters being entered using a separate Shift key; the home row was, uniquely, the bottom one (i.e., the typist kept her hands on the bottom row). A computer or standard typewriter keyboard, on the other hand, has four banks of keys, with the home row second from bottom. To fit on a Sholes-patterned (typewriter or computer) keyboard, the Blickensderfer layout was modified by Nick Matavka in 2012, and released for both Mac OS X and Windows. To accommodate the differences between Blickensderfer and Sholes keyboards (not the layouts, but the keyboards themselves), the order of the rows was changed and special characters were given their own keys.
The keyboard drivers created by Nick Matavka for the modified Blickensderfer layout (nicknamed the 'Blick') have several variations, including one that includes the option of switching between the Blick and another keyboard layout, and one that is internationalised, allowing the entry of diacritics.

Hexagon
The honeycomb layout has hexagonal keys and was invented by Typewise in cooperation with ETH Zurich in 2015 for smartphones. It exists for 40+ languages, including English, German, Spanish, French and Afrikaans. The keys are arranged like those of the respective traditional keyboard with a few changes. Instead of the space bar there are two smaller space bars in the middle of the keyboard. The shift key is replaced by swiping up on keys, and the backspace key by swiping to the left on the keyboard. Diacritic characters can be accessed by holding on a key.

Alphabetical layout
A few companies offer "ABC" (alphabetical) layout keyboards.

Chorded keyboards and mobile devices
Chorded keyboards, such as the Stenotype and Velotype, allow letters and words to be entered using combinations of keys in a single stroke. Users of stenotype machines regularly reach rates of 225 words per minute. These systems are commonly used for real-time transcription by court reporters and in live closed captioning systems. Ordinary keyboards may be adapted for this purpose using Plover. However, due to hardware constraints, chording three or more keys may not work as expected on ordinary keyboards; many high-end keyboards support n-key rollover and so do not have this limitation.

The multi-touch screens of mobile devices allow implementation of virtual on-screen chorded keyboards. Buttons are fewer, so they can be made larger. Symbols on the keys can be changed dynamically depending on what other keys are pressed, thus eliminating the need to memorize combos for characters and functions before use. For example, in the chorded GKOS keyboard, which has been adapted for the Google Android, Apple iPhone, MS Windows Phone, and Intel MeeGo/Harmattan platforms, thumbs are used for chording by pressing one or two keys at the same time. The layout divides the keys into two separate pads which are positioned near the sides of the screen, while text appears in the middle. The most frequent letters have dedicated keys and do not require chording.
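Conceptually, a chorded keyboard maps a set of simultaneously pressed keys to a single character, so the lookup key is a set rather than a single scancode. A minimal Python sketch; the key names and chord assignments below are invented for illustration:

    # A chord is an unordered set of keys released together.
    CHORDS = {
        frozenset({"K1"}): "e",          # frequent letter: single key
        frozenset({"K1", "K2"}): "b",    # two-key chord
        frozenset({"K2", "K3"}): "m",
    }

    def decode(pressed: set[str]) -> str:
        """Resolve one stroke into a character ('' if unassigned)."""
        return CHORDS.get(frozenset(pressed), "")

    print(decode({"K2", "K1"}))   # 'b' — key order does not matter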
Principles commonly used in their design include maximising use of the home row, minimising finger movement, maximising hand alternation or inward rolls (where successive letters are typed moving towards the centre of the keyboard), minimising changes from QWERTY to ease the learning curve, and so on. Maltron also has a single-handed keyboard layout. Programs such as the Microsoft Keyboard Layout Creator (basic editor, free, for use on MS Windows), SIL Ukelele (advanced editor, free, for use on Apple's Mac OS X), KbdEdit (commercial editor, for Windows) and Keyman Developer (free, open-source editor for Windows, macOS, iOS, Android, or for sites on the web as virtual keyboards) make it easy to create custom keyboard layouts for regular keyboards; users may accommodate their own typing patterns or specific needs by creating new layouts from scratch (like the IPA or pan-Iberian layouts) or by modifying existing ones (for example, the Latin American Extended or Gaelic layouts). Such editors can also construct complex key sequences using dead keys or the AltGr key. Certain virtual keyboards and keyboard layouts are accessible online. With no hardware limitations, those online keyboards can display custom layouts, or allow the user to pre-configure or try out different language layouts. Resulting text can then be pasted into other web sites or applications flexibly, with no need to reprogram keyboard mappings at all. Some high-end keyboards allow users to reprogram keyboard mappings at the hardware level. For example, the Kinesis Advantage contoured keyboard allows for reprogramming single keys (not key combinations), as well as creating macros for remapping combinations of keys (this, however, involves more processing by the keyboard hardware and can therefore be slightly slower, with lag that may be noticeable in daily use). Non-QWERTY layouts were also used with specialized machines such as the 90-key Linotype typesetting machine. Keyboard layouts for non-Latin alphabetic scripts Some keyboard layouts for non-Latin alphabetic scripts, most notably the Greek layout, are based on the QWERTY layout, in that glyphs are assigned as far as possible to keys bearing similar-sounding or similar-looking glyphs in QWERTY. This saves learning time for those familiar with QWERTY and, for Greek users, also eases the entry of Latin characters (with QWERTY). This is not a general rule, and many non-Latin keyboard layouts have been invented from scratch. All non-Latin computer keyboard layouts can also input Latin letters as well as the script of the language, for example when typing URLs or names. This may be done through a special key on the keyboard devoted to this task, through some special combination of keys, or through software that switches layouts. Abugida Brahmic scripts Baybayin It is possible to type Baybayin directly from one's keyboard, without needing web applications that implement an input method. The Philippines Unicode Keyboard Layout includes different sets of layouts for different keyboard users: QWERTY, Capewell-Dvorak, Capewell-QWERF 2006, Colemak, and Dvorak, all of which work in both Microsoft Windows and Linux. This keyboard layout is available for download. Bengali Many different systems have been developed to type Bengali characters using a typewriter or a computer keyboard and mobile device.
Efforts have been made to standardize the input system for Bengali in Bangladesh (the Jatiyo layout), but no input method has yet been widely and effectively adopted. Dhivehi Dhivehi keyboards have two layouts. Both are supported by Microsoft Windows (Windows XP and later only). InScript InScript is the standard keyboard for 12 Indian scripts, including Devanagari, Bengali, Gujarati, Gurmukhi, Kannada, Malayalam, Oriya, Tamil, and Telugu. Most Indian scripts are derived from Brahmi, so their alphabetic order is identical. On the basis of this property, the InScript keyboard layout scheme was prepared; a person who knows InScript typing in one language can therefore type in other scripts by dictation, even without knowledge of that script. An InScript keyboard is built into most modern operating systems, including Windows, Linux, and Mac OS. It is also available in some mobile phones. Khmer Khmer uses its own layout, designed to correspond, to the extent practicable, to its QWERTY counterpart, thus easing the learning curve in either direction. For example, the Khmer counterpart of L is typed on the same key as the letter L on the English-based QWERTY. Since Khmer has no capitalization, but many more letters than Latin, the shift key is used to select between two distinct letters. For most consonants, the shift key selects between a "first series" consonant (unshifted) and the corresponding "second series" consonant (shifted), e.g., ត and ទ on the T key, or the corresponding pair on the B key. For most vowels, the two on the key are consecutive in the Khmer alphabet. Although Khmer has no capital or lowercase per se, there are two forms for consonants. All but one have a second, "subscript" form to be used when the consonant occurs as the second (or, rarely, third) letter in a consonant cluster. The glyph below the letter on the J key cap produces a non-printing character, U+17D2, which indicates that the following Khmer letter is to be rendered subscripted. Khmer is written with no spaces between words, but lines may only be broken at word boundaries. The spacebar therefore produces a zero-width space (non-printable U+200B) for invisible word separation. SHIFT+SPACE produces a visible space (U+0020), which is used as punctuation, e.g., to separate items in lists. There are five vowel signs that appear on the key caps and are considered single letters in the Khmer alphabet, but they are not assigned to Unicode code points. They are instead represented by two consecutive vowel-sign codes, the glyphs of which combine to make the vowel's glyph, e.g., one such vowel is stored as U+17C1 followed by U+17C7. The Khmer keyboard map does not send the code-pair sequence, however; it sends one officially-unassigned code (from the Khmer block). It is up to the running application to recognize these codes and insert the appropriate code pair into the document (a minimal sketch of this expansion appears after the Sinhala entry below). Thai The Thai Kedmanee keyboard layout predominates. The Thai Pattachote keyboard layout is also used, though it is much less common. Infrequently used characters are accessed via the Shift key. Despite their wide usage in Thai, Arabic numerals are not present on the main section of the keyboard; instead they are accessed via the numeric keypad or, on keyboards without dedicated numeric keys, by switching to the Latin character set. Lao A dedicated keyboard layout is used for the Lao language. Sinhala The Sinhala keyboard layout is based on the Wijesekara typewriter for Sinhala script.
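Returning to the Khmer key-cap vowels discussed above, here is a minimal Python sketch of the application-side expansion. The placeholder code point is hypothetical (the text does not specify which officially-unassigned codes the keyboard map actually sends); only the U+17C1/U+17C7 pair comes from the description above.

```python
# Hypothetical stand-in for one of the officially-unassigned codes the
# Khmer keyboard map sends; the real values are not specified here.
PLACEHOLDER = "\uE000"

# Map each placeholder to the two real vowel-sign code points whose
# glyphs combine to form the key-cap vowel (here U+17C1 + U+17C7).
EXPANSIONS = {
    PLACEHOLDER: "\u17C1\u17C7",
}

def expand(raw: str) -> str:
    """Replace keyboard-sent placeholder codes with their vowel-sign pairs."""
    return "".join(EXPANSIONS.get(ch, ch) for ch in raw)
```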
Tibetan Tibetan (China) The Chinese National Standard on Tibetan Keyboard Layout standardises a layout for the Tibetan language in China. The first version of Microsoft Windows to support the Tibetan keyboard layout is MS Windows Vista; the layout has been available in Linux since September 2007. Tibetan (International) Mac OS X introduced Tibetan Unicode support with version 10.5, and three different keyboard layouts are now available: Tibetan-Wylie, Tibetan QWERTY, and Tibetan-Otani. Dzongkha (Bhutan) The Bhutanese Standard for a Dzongkha keyboard layout standardizes the layout for typing Dzongkha, and other languages using the Tibetan script, in Bhutan. This standard layout was formulated by the Dzongkha Development Commission and the Department of Information Technology in Bhutan. The Dzongkha keyboard layout is very easy to learn, as the key sequence essentially follows the order of letters in the Dzongkha and Tibetan alphabet. The layout has been available in Linux since 2004. Inuktitut Inuktitut has two similar, though not identical, commonly available keyboard layouts for Windows. Both contain a basic Latin layout in their base and shift states, with a few Latin characters in the AltGr shift states. The Canadian Aboriginal syllabics can be found in the Caps Lock and AltGr shift states in both layouts as well. The difference between the two layouts lies in the use of a second modifier as an alternate to AltGr to create the dotted, long-vowel syllables, and in the mapping of the small plain consonants to Caps + number keys in the "Naqittaut" layout; the "Latin" layout does not have access to the plain consonants and can only access the long-vowel syllables through the AltGr shift states. Abjad Arabic This layout was developed by Microsoft from the classic Arabic typewriter layout. There is also a 102-key variant and a 102-key phonetic variant that maps to AZERTY. For Apple keyboards there is a different layout, and a 1:1 layout also exists for Chrome. Hebrew All keyboards in Israel are fitted with both Latin and Hebrew letters. Trilingual editions including either Arabic or Cyrillic also exist. Note that in the standard layout (but not on all keyboards), paired delimiters—parentheses (), brackets [], and braces {}, as well as less/greater than <>—are in the opposite order from the standard in other left-to-right languages. This makes "open"/"close" consistent with right-to-left languages (Shift-9 always gives "close parenthesis" U+0029, which visually looks like "open parenthesis" in left-to-right languages); this behaviour is shared with Arabic keyboards. Certain Hebrew layouts are extended with niqqud symbols (vowel points), which require Alt+Shift or a similar key combination in order to be typed. Tifinagh Tamazight (Berber) The Tamazight (Tifinagh) standards-compliant layout is optimised for a wide range of Tamazight (Berber) language variants, and includes support for Tuareg variants. AZERTY-mapped, it installs as "Tamazight F" and can be used both with the French locale and with Tamazight locales. QWERTY and QWERTZ adaptations are available for the physical keyboards used by major Amazigh (Berber) communities around the world. Non-standards-compliant but convenient combined AZERTY Latin-script layouts exist which also allow typing in Tifinagh script without switching layout: Tamazight (International) is optimised for French keyboard compatibility, with Tamazight (Berber) as an extension and limited Tifinagh script access (i.e. by dead key).
It installs as "Tamazight (Agraghlan)" or "Français+". Tamazight (International)+ is optimised for Tamazight (Berber), but with close French compatibility and easy typing in Tifinagh script. It installs as "Tamazight (Agraghlan)+" or "Tamazight LF". A non-standards-compliant but convenient combined AZERTY-mapped Tifinagh layout exists which also allows typing in Latin script without switching layout: Tifinagh (International)+. It installs as "Tifinagh (Agraghlan)+" or "Tamazight FL". All the above layouts were designed by the Universal Amazigh Keyboard Project and are available from there. Morocco The Royal Institute of the Amazigh Culture (IRCAM) developed a national standard Tifinagh layout for Tamazight (Berber) in Morocco. It is included in Linux and Windows 8, and is available from IRCAM for the Mac and older versions of Windows. A compatible, international version of this layout, called "Tifinagh (International)", exists for typing a wide range of Tamazight (Berber) language variants, and includes support for Tuareg variants. It was designed by the Universal Amazigh Keyboard Project and is available from there. Alphabetic Armenian The Armenian keyboard is similar to the Greek in that in most (but not all) cases, a given Armenian letter is at the same location as the corresponding Latin letter on the QWERTY keyboard. The illustrated keyboard layout can be enabled on Linux. Western and Eastern Armenian have different layouts. In pre-computer times, Armenian keyboards had quite a different layout, more suitable for producing the letter combinations inherent to the language. Several attempts have been made to create innovative ergonomic layouts, some of them inspired by Dvorak. Cyrillic Bulgarian The current official Bulgarian keyboard layout for both typewriters and computer keyboards is described in BDS (Bulgarian State/National Standard) 5237:1978. It superseded the old standard, BDS 5237:1968, on 1 January 1978. Like the Dvorak layout, it has been designed to optimize typing speed and efficiency, placing the most common letters in the Bulgarian language—О, Н, Т, and А—under the strongest fingers. In addition to the standard 30 letters of the Bulgarian alphabet, the layout includes the non-Bulgarian Cyrillic symbols Э and ы and the Roman numerals I and V (the X is supposed to be represented by the Cyrillic capital Х, which is acceptable on typewriters but problematic on computers). There is also a second, informal layout in widespread use—the so-called "phonetic" layout, in which Cyrillic letters are mapped to the QWERTY keys for Latin letters that "sound" or "look" the same, with several exceptions (Я is mapped to Q, Ж is mapped to V, etc.). This layout is available as an alternative to the BDS one in some operating systems, including Microsoft Windows, Apple Mac OS X and Ubuntu Linux. Normally, the layouts are set up so that the user can switch between Latin and Cyrillic script by pressing Shift + Alt, and between BDS and Phonetic by pressing Shift + Ctrl. In 2006, Prof. Dimiter Skordev from the Faculty of Mathematics and Informatics of Sofia University and Dimitar Dobrev from the Bulgarian Academy of Sciences proposed a new standard, prBDS 5237:2006, comprising a revised version of the old BDS layout (adding the letter Ѝ and the capital Ы, and replacing the letters I and V with the currency symbols $ and € respectively) and a standardization of the informal "phonetic" layout.
After some controversy and a public discussion in 2008, the proposal was not accepted, although it had already been used in several places; the "Bulgarian Phonetic" layout in MS Windows Vista is based on it, and there is a new "Bulgarian Phonetic" layout in MS Windows 7. Russian JCUKEN The most common keyboard layout in modern Russia is the so-called Windows layout, which is the default Russian layout in the MS Windows operating system. The layout was designed to be compatible with the hardware standard in many other countries, but introduced compromises to accommodate the larger alphabet. The full stop and comma share a key, requiring the shift key to be held to produce a comma, despite the comma's high relative frequency in the language. There are some other Russian keyboard layouts in use: in particular, the traditional Russian Typewriter layout (punctuation symbols are placed on the numeric keys, and one must press Shift to enter numbers) and the Russian DOS layout (similar to the Russian Typewriter layout, with common punctuation symbols on the numeric keys, but numbers entered without Shift). The Russian Typewriter layout can be found on many Russian typewriters produced before the 1990s, and it is the default Russian keyboard layout in the OpenSolaris operating system. Keyboards in Russia always have Cyrillic letters on the keytops as well as Latin letters; usually the Cyrillic and Latin letters are labeled in different colors. Russian QWERTY/QWERTZ-based phonetic layouts The Russian phonetic keyboard layout (also called homophonic or transliterated) is widely used outside Russia, where there are normally no Russian letters printed on the keys. This layout is made for typists who are more familiar with other layouts, like the common English QWERTY keyboard, and follows the Greek and Armenian layouts in placing most letters at the corresponding Latin letter locations. It is popular among both native speakers and people who use, teach, or are learning Russian, and is recommended—along with the standard layout—by the linguists, translators, teachers and students of AATSEEL.org. The earliest known implementation of the Cyrillic-to-QWERTY homophonic keyboard was by former AATSEEL officer Constance Curtin between 1972 and 1976, for the PLATO education system's Russian-language curriculum developed at the University of Illinois at Urbana-Champaign. Curtin's design sought to map phonetically related Russian sounds to QWERTY keys, to keep related phonetic and visual cues near one another, and to assign otherwise unused positions in a mnemonically useful way. Peter Zelchenko worked under Curtin at UIUC, and his later modifications to the number row for Windows and Macintosh keyboards follow Curtin's original design intent. There are several different Russian phonetic layouts, for example YaZhERT (яжерт), YaWERT (яверт), and YaShERT (яшерт), the last suggested by AATSEEL.org and called the "Student" layout. They are named after the first several letters that take over the "QWERTY" row on the Latin keyboard. They differ in where a few of the letters are placed. For example, some have Cyrillic В (which is pronounced "V") on the Latin W key (after the German transliteration of В), while others have it on the Latin V key. There are also variations within these variations; for example, the Mac OS X Phonetic Russian layout is YaShERT but differs in the placement of ж and э.
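To make the naming scheme concrete, here is a toy Python fragment of two phonetic layouts. Only keys derivable from the discussion above are shown (the variant names encode the Cyrillic letters that take over the Latin "qwert" keys, and the variants differ in where Cyrillic В lands); a real layout covers the whole keyboard.

```python
# Fragments of two phonetic Russian layouts. The names spell out the
# takeover of the Latin 'qwert' row: Ya-Zh-E-R-T vs. Ya-V-E-R-T.
YAZHERT = {"q": "я", "w": "ж", "e": "е", "r": "р", "t": "т", "v": "в"}
YAWERT = {"q": "я", "w": "в", "e": "е", "r": "р", "t": "т"}

def type_phonetic(keys: str, layout: dict) -> str:
    # Keys without a Cyrillic assignment pass through unchanged.
    return "".join(layout.get(k, k) for k in keys)

print(type_phonetic("ter", YAZHERT))  # -> тер
```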
Windows 10 includes its own implementation of a mnemonic QWERTY-based input method for Russian, which does not rely on assigning a key to every Russian letter, but instead uses the sh, sc, ch, ya (ja), yu (ju), ye (je), yo (jo) combinations to input ш, щ, ч, я, ю, э, and ё respectively. Virtual (on-screen) Russian keyboards allow entering Cyrillic directly in a browser without activating the system layout; one such virtual keyboard offers the YaZhERT (яжерт) variant, while another supports both traditional (MS Windows and Typewriter) and some phonetic keyboard layouts, including AATSEEL "Student", the Mac OS X Phonetic Russian layout, and the RUSSIANEASY 1:1 keyboard for Chrome. Serbian (Cyrillic) Apart from a set of characters common to most Cyrillic alphabets, the Serbian Cyrillic layout uses six additional special characters unique or nearly unique to the Serbian Cyrillic alphabet: Љ, Њ, Ћ, Ђ, Џ, and Ј. Due to the bialphabetic nature of the language, actual physical keyboards with the Serbian Cyrillic layout printed on the keys are uncommon today. Typical keyboards sold in Serbian-speaking markets are marked with Serbian Latin characters and used with both the Latin (QWERTZ) and Cyrillic layouts configured in software. What makes the two layouts so readily interchangeable is that the non-alphabetic keys are identical between them, and alphabetic keys always correspond directly to their counterparts (except the Latin letters Q, W, X, and Y, which have no Cyrillic equivalents, and the Cyrillic letters Љ, Њ and Џ, whose Latin counterparts are the digraphs LJ, NJ and DŽ). This also makes the Serbian Cyrillic layout a rare example of a non-Latin layout based on QWERTZ. The Macedonian dze is on this keyboard despite not being used in Serbian Cyrillic. The gje and kje can be typed by striking the apostrophe key and then striking the G or K key. There is also a dedicated Macedonian keyboard that is based on QWERTY (LjNjERTDz) and uses Alt Gr to type the dje and tshe; however, the capital forms are next to the small forms. An alternative version of the layout is quite different and has no dje or tshe access; this alternative was not supported until Windows Vista. Ukrainian Ukrainian keyboards, based on a slight modification of the Russian standard layout, often also have the Russian standard ("Windows") layout marked on them, making it easy to switch from one language to another. This keyboard layout had several problems, one of which was the omission of the letter Ґ, which does not exist in Russian. The other long-standing problem was the omission of the apostrophe, which is used in Ukrainian almost as commonly as in English (though with a different meaning) but which also does not exist in Russian. Both of these problems were resolved with the "improved Ukrainian" keyboard layout for Windows, available with Vista and subsequent Windows versions. There also exists an adapted keyboard for Westerners learning Ukrainian (mostly in the diaspora) which closely matches the QWERTY keyboard, so that the letters either have the same sound or the same shape: for example, pressing the "v" on the Latin QWERTY produces the Cyrillic в (which makes roughly the same sound), and pressing the QWERTY "w" key gives the Cyrillic ш (based on the similar shape). Georgian All keyboards in Georgia are fitted with both Latin and Georgian letters. As with the Armenian, Greek, and phonetic Russian layouts, most Georgian letters are on the same keys as their Latin equivalents.
During the Soviet era, the Georgian alphabet was adapted to the Russian JCUKEN layout, mainly for typewriters; Soviet computers did not support Georgian keyboards. After the dissolution of the Soviet Union, a large variety of computers were introduced to post-Soviet countries. These keyboards had the QWERTY layout for the Latin alphabet and JCUKEN for Cyrillic, both printed on the keys, and Georgia started to adopt the QWERTY pattern. In both cases, Georgian letters without Cyrillic or Latin counterparts were substituted onto keys for letters that do not exist in the Georgian alphabet. Today the most commonly used layout follows the QWERTY pattern with some changes. Greek The usual Greek layout follows the US layout for letters related to Latin letters (ABDEHIKLMNOPRSTXYZ, ΑΒΔΕΗΙΚΛΜΝΟΠΡΣΤΧΥΖ, respectively), substitutes phonetically similar letters (Φ at F; Γ at G), and uses the remaining slots for the remaining Greek letters (Ξ at J; Ψ at C; Ω at V; Θ at U). Greek has two fewer letters than English, but has two accents which, because of their frequency, are placed on the home row at the U.K. ";" position; they are dead keys. Word-final sigma has its own position as well, replacing W, and semicolon (which is used as a question mark in Greek) and colon move to the position of Q. The Greek Polytonic layout has various dead keys to input the accented letters. There are also the Greek 220 and Greek 319 layouts. Syllabic Cherokee The Cherokee language uses an 86-character syllabary. The keyboard is available for the iPhone and iPad and is supported by Google. East Asian languages The orthographies used for Chinese, Japanese, and Korean ("CJK characters") require special input methods, due to the thousands of possible characters in these languages. Various methods have been invented to fit every possibility into a QWERTY keyboard, so East Asian keyboards are essentially the same as those in other countries. However, their input methods are considerably more complex, without one-to-one mappings between keys and characters. In general, the range of possibilities is first narrowed down (often by entering the desired character's pronunciation). Then, if more than one possibility remains, the desired ideogram is selected either by typing its number in the candidate list or by picking it from a graphical menu. The computer assists the typist by using heuristics to guess which character is most likely desired. Although this may seem painstaking, East Asian input methods are today efficient enough that, even for beginners, typing in these languages is only slightly slower than typing an alphabetic language like English (where each phoneme is represented by one grapheme). In Japanese, the QWERTY-based JIS keyboard layout is used, and the pronunciation of each character is entered using various approximations to Hepburn romanization or Kunrei-shiki romanization; there are also several kana-based typing methods. Of the three, Chinese has the most varied input options. Characters can be entered either by pronunciation (like Japanese, and Hanja in Korean) or by structure. Most of the structural methods are very difficult to learn but extremely efficient for experienced typists, as there is no need to select characters from a menu. A variety of other, slower methods also exist for entering a character: if the pronunciation of a character is not known, the selection can be narrowed down by giving its component shapes, radicals, and stroke count.
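A toy model of the narrowing-and-selection flow just described (narrow by pronunciation, then pick by number); the candidate list is a tiny illustrative sample, not a real IME dictionary.

```python
# Candidates that share the reading "ma"; a real IME dictionary holds
# thousands of readings, usually ranked by frequency.
CANDIDATES = {"ma": ["妈", "马", "吗"]}

def select(pronunciation: str, pick: int = 1) -> str:
    options = CANDIDATES[pronunciation]
    if len(options) == 1:
        return options[0]        # unambiguous: no menu needed
    return options[pick - 1]     # 1-based, as on a numbered candidate menu

print(select("ma", 2))  # -> 马
```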
Also, many input systems include a "drawing pad" permitting "handwriting" of a character using a mouse. Finally, if the computer does not have CJK software installed, it may be possible to enter a character directly through its encoding number (e.g., its Unicode code point). In contrast to Chinese and Japanese, Korean is typed similarly to Western languages. Two major forms of keyboard layout exist: Dubeolsik (두벌식) and Sebeolsik (세벌식). Dubeolsik, which shares its symbol layout with the QWERTY keyboard, is much more commonly used. While Korean consonants and vowels (jamo) are grouped together into syllabic grids when written, the script is essentially alphabetical, and therefore typing in Korean is quite simple for those who understand the Korean alphabet, Hangul. Each jamo is assigned to a single key. As the user types letters, the computer automatically groups them into syllabic characters: given a sequence of jamo, there is only one unambiguous way letters can be validly grouped into syllables, so the computer groups them together as the user types. Hangul (for Korean) Pressing the Han/Eng key once switches between Hangul as shown and QWERTY. There is another key to the left of the space bar for Hanja input. If using an ordinary keyboard without the two extra keys, the right Alt key will become the Han/Eng key, and the right Ctrl key will become the Hanja key. Apple keyboards do not have the two extra keys. Dubeolsik Dubeolsik (두벌식; 2-set) has been by far the most common and the sole national standard Hangul keyboard layout in use in South Korea since 1969. Consonants occupy the left side of the layout, while vowels are on the right. Sebeolsik 390 Sebeolsik 390 (세벌식 390; 3-set 390) was released in 1990. It is based on Dr. Kong Byung Woo's earlier work. This layout is notable for its compatibility with the QWERTY layout; almost all QWERTY symbols that are not alphanumeric are available in Hangul mode. Numbers are placed in three rows. Syllable-initial consonants are on the right, and syllable-final consonants and consonant clusters are on the left. Some consonant clusters are not printed on the keyboard; the user has to press multiple consonant keys to input them, unlike in Sebeolsik Final. It is more ergonomic than Dubeolsik, but is not in wide use. Sebeolsik Final Sebeolsik Final (세벌식 최종; 3-set Final) is another Hangul keyboard layout in use in South Korea. It is the final Sebeolsik layout designed by Dr. Kong Byung Woo, hence the name. Numbers are placed on two rows. Syllable-initial consonants are on the right, and syllable-final consonants and consonant clusters are on the left; vowels are in the middle. All consonant clusters are available on the keyboard, unlike Sebeolsik 390, which does not include all of them. It is more ergonomic than Dubeolsik, but is not in wide use. Sebeolsik Noshift Sebeolsik Noshift is a variant of Sebeolsik which can be used without pressing the shift key. Its advantage is that people with disabilities who cannot press two keys at the same time will still be able to use it to type in Hangul. Chinese Chinese keyboards are usually in US layout, with or without Chinese input-method labels printed on the keys. Without an input-method handler activated, these keyboards simply respond to Latin characters as physically labelled, provided that the US keyboard layout is selected correctly in the operating system.
Most modern input methods allow input of both simplified and traditional characters, and will simply default to one or the other based on the locale setting. People's Republic of China Keyboards used in the People's Republic of China are standard or slightly modified English US (QWERTY) ones without extra labelling, while various IMEs are employed to input Chinese characters. The most common IMEs are Hanyu pinyin-based, representing the pronunciation of characters using Latin letters. However, keyboards with labels for alternative structural input methods such as the Wubi method can also be found, although those are usually very old products and are extremely rare as of 2015. Taiwan Computers in Taiwan often use Zhuyin (bopomofo) style keyboards (US keyboards with bopomofo labels), many also with Cangjie method key labels, as Cangjie is a popular method for typing in Traditional Chinese. The bopomofo-style keyboards are in lexicographical order, top-to-bottom and left-to-right. The codes of three input methods are typically printed on the Chinese (traditional) keyboard: Zhuyin (upper right), Cangjie (lower left), and Dayi (lower right). Hong Kong and Macau In Hong Kong, both Chinese (Taiwan) and US keyboards are found. Japanese keyboards are occasionally found, and UK keyboards are rare. For Chinese input, shape-based input methods such as Cangjie (pronounced cong1 kit3 in Cantonese) or Chinese handwriting recognition are the most common. Phonetic input methods are uncommon, owing to the lack of an official standard for Cantonese romanisation and the fact that people in Hong Kong almost never learn any romanisation scheme in school. An advantage of phonetic input is that most Cantonese speakers are able to input Traditional Chinese characters with no particular training at all: they spell out the Cantonese sound of each character without tone marks, e.g. "heung gong" for 香港 (Hong Kong), and choose the characters from a list. However, Microsoft Windows, the most popular desktop operating system, does not provide any Cantonese phonetic input method, requiring users to find and install third-party input-method software. Also, most people find the process of picking characters from a list too slow, due to homonyms, so the Cangjie method is generally preferred. Although thorough training and practice are required to use Cangjie, many Cantonese speakers have taken Cangjie input courses because of the fast typing speed the method affords. This method is the fastest because it can fetch the exact, unambiguous character the user has in mind, pinpointing a single character in most cases; this is also why no provision for entering phonetic tones is needed to complement it. Cangjie input is available on both Mac OS X and Windows; on Mac OS X, a handwriting-recognition input method is bundled with the OS. Macau utilizes the same layouts as Hong Kong, with the addition of the Portuguese layout used in Portugal. Japanese The JIS standard layout includes Japanese kana in addition to a QWERTY-style layout. The shifted values of many keys (notably the digit row) are a legacy of bit-paired keyboards, dating to ASCII telex machines and terminals of the 1960s and 1970s. For entering Japanese, the most common method is entering text phonetically, as romanized (transliterated) kana, which are then converted to kanji as appropriate by an input method editor.
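A minimal sketch of the phonetic entry step just described: longest-match transliteration of rōmaji into hiragana, before any kanji conversion. The table is a tiny sample, not a full Hepburn or Kunrei-shiki map.

```python
# Tiny romaji-to-hiragana sample table; real IMEs cover every syllable.
ROMAJI = {"ka": "か", "ta": "た", "ha": "は", "shi": "し"}

def to_kana(text: str) -> str:
    out, i = "", 0
    while i < len(text):
        for length in (3, 2, 1):            # try the longest match first
            chunk = text[i:i + length]
            if chunk in ROMAJI:
                out += ROMAJI[chunk]
                i += length
                break
        else:
            out += text[i]                  # pass through unmatched input
            i += 1
    return out

print(to_kana("takahashi"))  # -> たかはし
```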
It is also possible to type kana directly, depending on the mode used. To type "Takahashi", a Japanese name, one could either spell it out in romanized (rōmaji) input mode or enter the kana directly in kana input mode, and then proceed to the conversion step to convert the input into the appropriate kanji. The extra keys in the bottom row (muhenkan, henkan, and the hiragana/katakana switch key), and the special keys in the leftmost column (the hankaku/zenkaku key at the upper left corner, and the eisū key at the Caps Lock position), control various aspects of the conversion process and select different modes of input. The Oyayubi Shifuto (Thumb Shift) layout is based on kana input, but uses two modifier keys in place of the space bar. When a key is pressed simultaneously with one of the modifier keys, it yields another letter. Letters with the dakuten diacritic are typed with the opposite-side thumb shift. Letters with handakuten are either typed while the conventional pinky-operated shift key is pressed (so that each key corresponds to a maximum of four letters) or, in the "NICOLA" variation, placed on a key which does not have a dakuten counterpart. For example, the key that individually yields は yields み with one thumb-shift key; simultaneous input with the other yields ば, which is the individually mapped letter with the aforementioned dakuten, and while the pinky-operated shift key is pressed the same key yields ぱ (on the NICOLA variant, this last letter must be typed with a thumb-shift combination instead). Layout changing software The character code produced by any key press is determined by the keyboard driver software: a key press generates a scancode, which is interpreted as an alphanumeric character or control function (see the sketch below). Depending on the operating system, various application programs are available to create, add and switch among keyboard layouts. Many programs are available, some of which are language-specific. The arrangement of symbols for a specific language can be customized; an existing keyboard layout can be edited, and a new layout can be created, using this type of software. For example, Ukelele for the Mac, AutoHotkey or the Microsoft Keyboard Layout Creator for Windows, and the open-source Avro Keyboard provide the ability to customize the keyboard layout as desired. See also Half-keyboard Telephone keypad letter mapping Notes References External links
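To make the driver-level mapping described under "Layout changing software" concrete, here is a toy sketch in Python; the scancode values and tables are purely illustrative, not any real keyboard's codes.

```python
# Toy layout tables: the same scancodes yield different characters
# depending on which table the "driver" has active.
QWERTY = {16: "q", 17: "w", 18: "e"}
DVORAK = {16: "'", 17: ",", 18: "."}

active_table = QWERTY

def on_keypress(scancode: int) -> str:
    # Unmapped scancodes would be dispatched as control functions instead.
    return active_table.get(scancode, "")

print(on_keypress(17))  # 'w' under QWERTY; switching tables remaps the key
```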
9202625
https://en.wikipedia.org/wiki/The%20Unarchiver
The Unarchiver
The Unarchiver is a proprietary freeware data decompression utility which supports more formats than Archive Utility (formerly known as BOMArchiveHelper), the built-in archive unpacker in macOS. It can also handle filenames in various character encodings, as created under operating system versions that use those encodings. The latest version requires Mac OS X Lion or higher. The Unarchiver does not compress files. The corresponding command-line utilities, unar and lsar, are free software licensed under the LGPL and run on Microsoft Windows, Linux, and macOS. A main feature of The Unarchiver is its ability to handle many old or obscure formats, such as StuffIt, AmigaOS disk images, and LZH/LZX archives. This is credited in its source code to the use of libxad, an Amiga file-format library. Its developer, Dag Ågren, also worked to reverse engineer the StuffIt and StuffIt X formats, and his code was one of the most complete open-source implementations of these proprietary formats. References External links The Unarchiver's Website unar, lsar source code repository Circlesoft Website Version for Mac OS X 10.6.8 and earlier Old Source Code as ZIP File archivers Data compression software macOS software
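As a usage illustration for the unar and lsar command-line tools mentioned above, driven from Python. The archive name and output directory are placeholders, and the flags shown are the basic documented ones; treat the exact invocation as an assumption to verify locally.

```python
import subprocess

# lsar lists an archive's contents; unar extracts it.
subprocess.run(["lsar", "archive.sit"], check=True)
subprocess.run(["unar", "-o", "out", "archive.sit"], check=True)  # extract into ./out
```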
5868871
https://en.wikipedia.org/wiki/QBittorrent
QBittorrent
qBittorrent is a cross-platform free and open-source BitTorrent client. qBittorrent is a native application written in C++. It uses Boost, the Qt 5 toolkit, and the libtorrent-rasterbar library (for the torrent back-end). Its optional search engine is written in Python. History qBittorrent was originally developed in March 2006 by Christophe Dumez, from the Université de technologie de Belfort-Montbéliard (University of Technology of Belfort-Montbeliard) in France. It is currently developed by contributors worldwide, led by Sledgehammer999 from Greece, who became project maintainer in June 2013, and is funded through donations. Along with the 4.0.0 release, a new logo for the project was unveiled. Features Some of the features present in qBittorrent include: Bandwidth scheduler Binding all traffic to a specific interface Control over torrents, trackers, and peers (torrent queueing and prioritizing; torrent content selection and prioritizing) DHT, PEX, encrypted connections, LPD, UPnP, NAT-PMP port forwarding support, µTP, magnet links, private torrents IP filtering, using eMule .dat or PeerGuardian file formats IPv6 support Integrated RSS feed reader (with advanced download filters) and downloader Integrated torrent search engine (simultaneous search across many torrent search sites, with category-specific search requests, e.g. books, music, software) Remote control through a secure web user interface Sequential downloading (download in order) Super-seeding option Torrent creation tool Torrent queuing, filtering, and prioritizing Unicode support, available in ≈70 languages Versions qBittorrent is cross-platform, available on many operating systems, including FreeBSD, Linux, macOS, OS/2 (including ArcaOS and eComStation), and Windows. SourceForge statistics indicate that the Windows version is by far the most popular across all supported platforms, accounting for 81% of downloads. FossHub statistics list qBittorrent as the site's second most downloaded software, with over 75 million downloads. Packages for different Linux distributions are available, most of them provided through the distributions' official channels. Reception In 2012, Ghacks suggested qBittorrent as a great alternative to μTorrent for anybody put off by the controversial ad and bundleware changes made to μTorrent at the time. See also Comparison of BitTorrent clients List of free and open-source software packages Usage share of BitTorrent clients References External links qBittorrent on FossHub 2006 software BitTorrent clients for Linux File sharing software that uses Qt Free BitTorrent clients Free file sharing software Free software programmed in C++ MacOS file sharing software Portable software Windows file sharing software Windows-only freeware
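Not qBittorrent's own code (the client is C++): a minimal sketch of the libtorrent-rasterbar back-end mentioned above, via its Python bindings. The .torrent filename is a placeholder, and exact API names vary slightly across libtorrent versions, so treat this as an assumption to check against your installed version.

```python
import time
import libtorrent as lt  # libtorrent-rasterbar Python bindings

ses = lt.session()
info = lt.torrent_info("example.torrent")  # placeholder .torrent file
handle = ses.add_torrent({"ti": info, "save_path": "./downloads"})

# Poll progress until the torrent finishes downloading and begins seeding.
while not handle.status().is_seeding:
    s = handle.status()
    print(f"{s.progress * 100:.1f}% complete")
    time.sleep(1)
```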
4430560
https://en.wikipedia.org/wiki/Apple%20II%20serial%20cards
Apple II serial cards
This article is a sub-page of Apple II peripheral cards. Apple II serial cards primarily used the serial RS-232 protocol. They were most often used for communicating with printers and modems, and less often for computer-to-computer data transfer; they could be programmed to interface with any number of external devices that were RS-232 compatible. Most serial cards had speed ranges starting from 110 bit/s up to 19,200 bit/s; however, some could be modified to go much faster. The most popular and widely used of these cards was Apple Computer's Super Serial Card, a solid design that was often copied for maximum software compatibility of the end product. Apple II Communications Card – Apple Computer The Apple II Communications Card is the original serial card from Apple Computer. Released in 1978 for $225, it was designed to work with modems utilizing acoustic couplers. It offered speeds of 110 and 300 bit/s, but with a simple hardware modification (described in the manual accompanying the card) one could change this to 300 and 1200 bit/s, or 1200 and 4800 bit/s. Apple II Serial Interface Card – Apple Computer The Apple II Serial Interface Card was released by Apple Computer shortly after the Communications Card, in August 1978. Designed for printing, this card had two ROM revisions, P8 and P8A. The P8A ROM supported handshaking, while the earlier P8 ROM did not. Unfortunately, the P8A ROM revision was not compatible with some printers that worked under the original P8 ROM. Serial Pro – Applied Engineering The Serial Pro serial interface card from Applied Engineering was compatible with the Apple Super Serial Card. Unlike the Apple SSC, which used a jumper block to select printer mode or modem mode, the Serial Pro board had two connectors to which the card's ribbon cable could be connected, one for use with a printer and one for use with a modem. The Serial Pro was a multifunction card which included a ProDOS- and DOS 3.3-compatible clock/calendar, freeing up an extra slot for those with highly populated machines. The card was unique in that it did not use "phantom slots" to achieve this functionality: previous multifunction cards required that a secondary function be "mapped" to a different slot in the computer's memory, rendering that slot unusable. If used with a dot-matrix printer, the Serial Pro offered several screen-print variations: it could print either HiRes page (or both in a single dump) normally, or print page one rotated or inverted. The Serial Pro utilized the MOS Technology 6551 ACIA chip and offered serial baud rates from 50 bit/s to 19,200 bit/s. The lifespan of the card's battery (which retained configuration information and powered the clock chip when the computer was powered off) was touted as 20 years. The card retailed for $139 during the late 1980s. For more on the Serial Pro's clock capabilities, see its entry in Apple II system clocks. Super Serial Card – Apple Computer Apple Computer's Super Serial Card, sometimes abbreviated "SSC", is the best-known communications card made for the Apple II. Apple called it "Super" because it was able to function as both of Apple's previous cards: the Apple II Communications Card for modem use and the Apple II Serial Interface Card for printer use. A jumper block was used to configure the card for each of the two modes. The card has a maximum speed of 19,200 bit/s and is compatible with both ROM revisions of the Apple II Serial Interface Card. Reliable communications at 9600 bit/s and higher required disabling interrupts.
The card can actually run at 115,200 bit/s as well, using undocumented register settings, but speeds between 19,200 and 115,200 bit/s are not possible using this technique. The Super Serial Card was released in 1981 and utilizes the MOS Technology 6551 ACIA serial communications chip. Other serial cards For multifunction cards with serial ports, see Apple II multi I/O cards. 7710 Serial Interface – California Computer Systems 7711 Super Serial Interface – California Computer Systems AIO Interface – SSM or Transend ASIO Interface – SSM or Transend Alphabits – Street Electronics Serial Interface – Apricorn Multicore – Quadram SV-622 Serial Interface – Microtek SeriALL – Practical Peripherals Serial Interface DK 244 – Digitek International Ltd Super Serial Board – MC Price Breakers – generic Super Serial Card clone Super Serial Imager – Apricorn Super-COMM – Sequential Systems – SSC-compatible, with a built-in terminal program in ROM; supported Grappler-style screen dumps and graphics Versacard – Prometheus Products Inc. MasterCard II – Pace Electronics – 6850-based serial port with a 6522 user port to drive autodial modems; a simple terminal program was included in onboard EPROM References Apple II peripherals
1564167
https://en.wikipedia.org/wiki/Digital%20Audio%20Access%20Protocol
Digital Audio Access Protocol
The Digital Audio Access Protocol (DAAP) is the proprietary protocol introduced by Apple in its iTunes software to share media across a local network. DAAP addresses the same problems for Apple as the UPnP AV standards address for members of the Digital Living Network Alliance (DLNA). Description The DAAP protocol was originally introduced in iTunes version 4.0. Initially, Apple did not officially release a protocol description, but it has been reverse-engineered to a sufficient degree that reimplementations of the protocol for non-iTunes platforms have been possible. A DAAP server is a specialized HTTP server which performs two functions: it sends a list of songs, and it streams requested songs to clients. There are also provisions to notify the client of changes to the server. Requests are sent to the server by the client in the form of URLs and are responded to with data in a tagged binary MIME type, which can be converted to XML by the client. iTunes uses the zeroconf (also known as Bonjour) service to announce and discover DAAP shares on a local subnet. The DAAP service uses TCP port 3689 by default. DAAP is one of two media-sharing schemes that Apple has currently released; the other, the Digital Photo Access Protocol (DPAP), is used by iPhoto for sharing images. Both rely on an underlying protocol, the Digital Media Access Protocol (DMAP). Early versions of iTunes allowed users to connect to shares across the Internet; however, in recent versions only computers on the same subnet can share music (workarounds such as port tunneling are possible). The Register speculates that Apple made this move in response to pressure from the record labels. More recent versions of iTunes also limit the number of clients to 5 unique IP addresses within a 24-hour period. DAAP has also been implemented in non-iTunes media applications such as Banshee, Amarok, Exaile (with a plugin), Songbird (with a plugin), Rhythmbox, and WiFiTunes. DAAP authentication Beginning with iTunes 4.2, Apple introduced authentication to DAAP sharing, meaning that the only clients that could connect to iTunes servers were other instances of iTunes. This was further modified in iTunes 4.5 to use a custom hashing algorithm, rather than the standard MD5 function used previously. Both authentication methods were successfully reverse-engineered within months of release. With iTunes 7.0, a new 'Client-DAAP-Validation' header hash is needed when connecting to an iTunes 7.0 server. This does not affect third-party DAAP servers, but all earlier DAAP clients (including official iTunes releases before 7.0) will fail to connect to an iTunes 7.0 server, receiving a '403 Forbidden' HTTP error. Traffic analysis of iTunes 7.0 authentication seems to indicate that a certificate exchange is performed to calculate the hash sent in the 'Client-DAAP-Validation' header. This authentication has not yet been reverse-engineered. See also List of software using Digital Audio Access Protocol Digital Audio Control Protocol Remote Audio Output Protocol Notes and references Apple Inc. services ITunes Data transmission Network protocols Computer-related introductions in 2003
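A minimal sketch of discovering DAAP shares on the local subnet, using the third-party python-zeroconf package (an assumption; iTunes itself uses Apple's Bonjour implementation). The _daap._tcp service type and default TCP port 3689 come from the protocol description above.

```python
import time
from zeroconf import Zeroconf, ServiceBrowser

class DaapListener:
    def add_service(self, zc, service_type, name):
        info = zc.get_service_info(service_type, name)
        if info:
            print(f"DAAP share {name!r} on port {info.port}")  # typically 3689

    def remove_service(self, zc, service_type, name):
        print(f"share gone: {name}")

    def update_service(self, zc, service_type, name):
        pass  # required by recent python-zeroconf versions

zc = Zeroconf()
ServiceBrowser(zc, "_daap._tcp.local.", DaapListener())
time.sleep(5)  # browse briefly, then clean up
zc.close()
```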
38079028
https://en.wikipedia.org/wiki/SmartOS
SmartOS
SmartOS is a free and open-source SVR4-derived UNIX operating system and hypervisor platform that combines OpenSolaris technology with KVM virtualization ported from Linux. Its core kernel comes from the illumos project. It features several technologies: Crossbow, DTrace, KVM, ZFS, and Zones. Unlike other illumos distributions, SmartOS employs NetBSD's pkgsrc package management. SmartOS is designed to be particularly suitable for building clouds and generating appliances. It is developed for and by Joyent, but is open-source and free for anyone to use. SmartOS is an in-memory operating system and boots directly into random-access memory. It supports various boot mechanisms, such as booting from a USB thumb drive or an ISO image, or over the network via PXE boot. One of the many benefits of this boot mechanism is that operating-system upgrades are trivial, simply requiring a reboot from a newer SmartOS image version. SmartOS follows a strict local-node storage architecture: virtual machines are stored locally on each node and do not boot over the network from a central SAN or NAS. This eliminates network latency issues and preserves node independence. Multi-node SmartOS clouds can be managed via the open-source Joyent Triton DataCenter (formerly known as SmartDataCenter) cloud orchestration suite, or via Project FiFo, an open-source SmartOS cloud management platform built in Erlang. SmartOS types of zones SmartOS has several types of zones, also referred to as containers. The typical zone is a native UNIX environment, using pkgsrc as its package manager. KVM, which allows running arbitrary other operating systems using hardware virtualization, also runs inside a zone, albeit with minimal privileges to further increase security. Another type is LX, which can run many popular Linux distributions without the overhead of KVM by supporting the Linux syscall table. In 2012, Joyent and MongoDB Inc. (formerly 10gen) partnered to improve the scalability of SmartOS. References External links Joyent OpenSolaris Samsung software Virtualization software
24307012
https://en.wikipedia.org/wiki/JMAG
JMAG
JMAG is simulation software for the development and design of electrical devices. JMAG was originally released in 1983 as a tool to support the design of devices such as motors, actuators, circuit components, and antennas. JMAG incorporates simulation technology to accurately analyze a wide range of physical phenomena, including complicated geometries, various material properties, and the thermal and structural behavior associated with electromagnetic fields. JMAG has an interface capable of linking to third-party software, and a portion of the JMAG analysis functions can also be executed from many of the major CAD and CAE systems. JMAG is actively used to analyze designs at a system level, including drive circuits, by utilizing links to power-electronics simulators. JMAG is also being used for the development of drive motors for electric vehicles. History 1983 – JMAG Version 1 was developed as 3D static magnetic field analysis software. 1986 – JMAG DYN was developed as 3D transient magnetic field analysis software. 1994 – JMAG-Works was developed as integrated electromagnetic analysis software with thermal analysis solutions. 1998 – JMAG-Studio was developed as integrated electromagnetic analysis software native to Windows. 2000 – Coupled analyses were implemented for control solutions. 2002 – JMAG-Designer was developed as an add-on for SolidWorks. 2004 – JMAG RT-Solutions was developed for model-based development of motor drive systems. 2007 – JMAG Motor Template 2 was developed for creating motor templates by specifying basic parameters such as the geometry and the windings. 2009 – JMAG Motor Bench and the JMAG Transformer Design and Evaluation tools were developed for improving the manufacturing of devices. 2018 – JMAG-Express Online was developed for designing and evaluating motors in a web browser. See also Computer-aided engineering Finite element analysis List of finite element software packages External links jmag-international.com, official website Simulation software Finite element software Finite element software for Linux Sumitomo Mitsui Financial Group 1983 software
28169451
https://en.wikipedia.org/wiki/Pashchimanchal%20Campus
Pashchimanchal Campus
Pashchimanchal Campus, formerly also known as the Western Region Campus (WRC), is one of the four constituent campuses of Tribhuvan University's Institute of Engineering in Nepal. Pashchimanchal Campus was accredited by the University Grants Commission (UGC) Nepal in 2021. The campus became operational in 1987 with assistance from the World Bank and UNDP/ILO. Initially, technician courses were offered at the campus along with diploma courses. However, Tribhuvan University's Institute of Engineering has phased out the diploma stream and stopped enrollment in it after 2069 BS. The campus now offers engineering courses in six disciplines at the bachelor's level and three disciplines at the master's level. For admission, students must pass an entrance exam conducted by the IOE. Quotas are available for disadvantaged groups. Location Pashchimanchal Campus is located in the northern part of Pokhara, the regional headquarters of the Western Development Region, about 210 km west of Kathmandu, and spreads over 312 ropanis of land in a scenic section of the Pokhara valley. It lies in Lamachaur-16, Pokhara, near the famous Mahendra Cave, Chamere Cave, and Seti Gorge. Courses The courses offered by Pashchimanchal Campus are as follows: Mechanical and Automobile Engineering (Bachelor) Civil Engineering (Bachelor) Electronics, Communication and Information Engineering (Bachelor) Electrical Engineering (Bachelor) Geomatics Engineering (Bachelor) Computer Engineering (Bachelor) Communication and Knowledge Engineering (Masters) Infrastructure Engineering and Management (Masters) Electrical Engineering in Distributed Generation (Masters) Departments Department of Civil Engineering The department has been in operation since the establishment of the campus. Initially, a three-year diploma course in Civil Engineering was offered; a four-year bachelor's program in Civil Engineering was launched in 2056 B.S. Department of Automobile and Mechanical Engineering The department has been in operation since the establishment of the campus. Initially, a three-year diploma course in Mechanical Engineering and Automobile Engineering was offered; a four-year bachelor's program in Mechanical Engineering was launched in 2069 B.S. The campus now offers undergraduate studies in Mechanical and Automobile Engineering, with a capacity of 48 students each. The head of the Department of Automobile and Mechanical Engineering is Dr. Durga Bastakoti. Department of Electrical Engineering The department has been operating since the establishment of the campus. Initially, a three-year diploma course in Electrical Engineering was offered; a four-year bachelor's program in Electrical Engineering was launched in 2067 B.S. Department of Electronics and Computer Engineering The department was established in 2053 B.S. Initially, three-year diploma courses in Electronics & Communication and Computer Engineering were offered. A four-year bachelor's program began in 2062 B.S., and a four-year bachelor's program in Computer Engineering was launched in 2069 B.S. According to a recent meeting of the IOE board, Electronics and Communication Engineering has been renamed Electronics and Information Technology, with greater emphasis on a modern engineering syllabus.
Courses offered Bachelor's degree in Electronics and Communication Engineering (4 years, day shift) Bachelor's degree in Computer Engineering (4 years, day shift) RTCU Services Through its research, training and consultancy work, the department offers training and consultancy services in the following areas: computer programming (C, C++, VB, Java), Oracle, Linux, Windows Advanced Server, computer repair and maintenance, computer networking, radio and TV maintenance, electronics project design, and micro-controller interfacing. The Center of Information Technology (CIT) provides 40/40 Mbit/s of dedicated wired (optical-fiber backbone) and wireless connectivity to the hostels and administration. CIT not only provides internet facilities but also conducts training on engineering and information-technology software packages. It has more than 50 terminals connected to broadband Internet via wireless. Department of Engineering Science & Humanities The department has been in existence since the establishment of the campus. It also manages courses such as Communication English. Laboratories The campus has 22 laboratories across its engineering and science departments, as well as a language lab: Civil Engineering Engineering Materials Testing Lab Soil Testing Lab Hydraulics Lab Structural Lab (cement, reinforcement, brick and concrete testing) Hydropower Lab Water Supply Lab Transportation Lab Electrical Engineering Electric Power Lab Electrical Machines Lab Automatic & Digital Control Lab Power Electronics Lab Electrical Instrumentation & Measurement Lab Micro Hydro Lab Electronics and Computer Engineering Digital Electronics Lab Communication Lab Computer Lab (Basic Computer Lab, Advanced Computer Lab, Computer Repair & Maintenance Lab, Internet Lab, Multimedia Lab) Basic Electronics Lab Mechanical Engineering Thermal Engineering Lab Hydraulics Laboratory Engineering Science and Humanities Department Engineering Chemistry Lab Engineering Physics Lab Communication Lab Workshops There are 11 workshops on the campus. Civil department Carpentry workshop Wet-construction workshop Plumbing workshop Electrical department Electrical Installation workshop Basic Electrical and Repair & Maintenance workshops Mechanical department Machining workshop Fitting & Maintenance workshop Arc Welding and Foundry/Forging workshop Gas Welding & Sheet Metal workshop Automobile workshop Electronics Repair and Maintenance workshop Accommodation Sports Pashchimanchal Campus provides sporting facilities to its students and faculty. These include: A football ground with a stadium and a cricket pitch within it Table tennis courts Volleyball courts Badminton court Basketball court Students' Clubs and Associations Empower Happy Club Robotics Club E-Gen Club of Technical Students (CoTS) Innovative Computer Engineering Students' Society (i-CES) Talking Minds - A group of Innovative Students Society of Innovative Mechanical Engineering Students (SIMES) Nepal Engineering Students Society (NESS) Algorithm Club Geomatics Engineering Students Association Nepal (GESAN) Civil Engineering Students Society (CESS) HG Pariwaar FemTech and many other technical and regional student societies for cultural and regional identity and conservation.
See also
Institute of Engineering homepage
Pulchok Campus, IOE
Thapathali Campus, IOE
Purwanchal Campus, IOE
Tribhuvan University
Kathmandu University
Pokhara University
Mid-Western University

References

Tribhuvan University
Engineering universities and colleges in Nepal
Education in Pokhara
Educational institutions established in 1987
1987 establishments in Nepal
2405553
https://en.wikipedia.org/wiki/Comparison%20of%20accounting%20software
Comparison of accounting software
The following comparison of accounting software documents the features and differences of professional accounting software, personal finance software, and other accounting packages. The comparison considers only financial and external accounting functions. No comparison is made for internal/management accounting, cost accounting, budgeting, or integrated MAS accounting.

Free and open source software

Proprietary software

Further details

See also
List of personal finance software
List of ERP software packages
Point of sale
Comparison of development estimation software
List of project management software

References

Accounting software
Lists of software
47094971
https://en.wikipedia.org/wiki/Souls%20%28series%29
Souls (series)
Souls is a series of action role-playing games developed by FromSoftware. The series began with the release of Demon's Souls for the PlayStation 3 in 2009, and was followed by Dark Souls and its sequels, Dark Souls II and Dark Souls III, in the 2010s. The series' creator, Hidetaka Miyazaki, served as director for each of them, with the exceptions of Dark Souls II and the externally developed Demon's Souls remake. The series has been both praised and criticized for its high difficulty. Other FromSoftware games, including King's Field, Bloodborne, Sekiro, and Elden Ring, share several related concepts. By 2020, the Souls series had sold over 28.7 million copies. It also inspired the soulslike video game genre.

Setting

The games take place within a dark, medieval fantasy setting, where the player's character fights against knights, dragons, phantoms, demons, and other monstrous or supernatural entities. The accretion, loss, and recovery of souls are central to the narrative and gameplay of Souls games. Another recurring motif is that of a once powerful and prosperous kingdom that has fallen into ruin: for example, the setting of Demon's Souls, Boletaria, in which the player attempts to halt the spread of a demon-infested fog that threatens to consume the world. The plot of the Dark Souls trilogy differs in that it revolves around the player's attempts, through various means, to either reverse or perpetuate the spread of a curse of undeath known as the "Darksign", which prevents true death but prompts the undead's gradual descent into a madness and decay called "Hollowing", based on the player's decisions. These games are linked through their setting and an overarching cyclical narrative centering on fire, and are linked to their predecessor Demon's Souls through common themes and elements, including interactions with phantoms and battles with demons. At the end of each game, characters may reignite the "first flame" or allow it to fade, recurring a choice others have made before them, which becomes a plot point in itself.

Gameplay

The Souls games are played from a third-person perspective, and focus on exploring interconnected environments while fighting enemies with weapons and magic. Players battle bosses to progress through the story, while interacting with strange non-playable characters. The protagonist of each Souls game can have a varying gender, appearance, name, and starting class via character creation. Players can choose between classes including knights, barbarians, thieves, and mages. Each class has its own starting equipment and abilities that can be tailored to the player's experience and choices as they progress. The player gains souls from battles; these act both as experience points to level up and as currency to buy items. Souls gained are usually proportional to the difficulty of fighting certain enemies; the more difficult an enemy, the more souls the player will gain. One of the core mechanics of the series is that it uses death to teach players how to react in hostile environments, encouraging repetition, learning from past mistakes, and prior experience as a means of overcoming its difficulty. Upon losing all of their health points and dying, players lose their souls and are teleported back to the bonfire where they last rested, which serves as a checkpoint. One chance is given for the player to recover their lost souls in the form of a bloodstain, which is placed at or around where they last died. If the player dies again before reaching their bloodstain, the souls are permanently gone. As most enemies respawn following player death, or when the player rests at a bonfire, the player has the opportunity to regain more souls through repeated combat encounters. The bonfire is a type of campfire in Dark Souls and its sequels that functions as a checkpoint for the player character's progress, while also reviving most enemies that the player previously killed. Later in the game, and in Dark Souls II and III, bonfires also function as warp points. Another core aspect of the Souls games is their dependency on endurance in battle. Performing attacks, blocking, or dodging consumes stamina, which otherwise quickly restores if the player stands still or just walks around. Certain moves cannot be executed if the player lacks a certain amount of stamina, making them vulnerable to attack. Players must balance their rate of attacks against defensive moves and brief periods of rest to survive more difficult encounters.
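The death-and-recovery loop and the stamina rules described above can be made concrete with a small state model. The following Python sketch is purely illustrative, not FromSoftware's code; all class names, method names, and numeric values are invented for this example. It encodes the stated rules: souls drop into a bloodstain on death, a second death before recovery erases them, and actions spend stamina that regenerates while resting or walking.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Bloodstain:
        souls: int
        location: str

    class PlayerState:
        """Toy model of the Souls death/recovery and stamina rules."""

        def __init__(self, max_stamina: float = 100.0):
            self.souls = 0
            self.bloodstain: Optional[Bloodstain] = None
            self.stamina = max_stamina
            self.max_stamina = max_stamina

        def defeat_enemy(self, difficulty: int) -> None:
            # Souls awarded scale with enemy difficulty.
            self.souls += 100 * difficulty

        def act(self, cost: float) -> bool:
            # Attacks, blocks, and dodges all spend stamina; the move
            # is refused if not enough stamina remains.
            if self.stamina < cost:
                return False
            self.stamina -= cost
            return True

        def rest_tick(self, regen: float = 15.0) -> None:
            # Stamina regenerates quickly while standing still or walking.
            self.stamina = min(self.max_stamina, self.stamina + regen)

        def die(self, location: str) -> None:
            # Dying drops all carried souls into a bloodstain at the death
            # site and overwrites any previous bloodstain, so dying twice
            # before recovery loses the earlier souls permanently.
            self.bloodstain = Bloodstain(self.souls, location)
            self.souls = 0
            # The player respawns at the last bonfire (checkpoint).

        def recover(self, location: str) -> None:
            # Touching one's own bloodstain restores the dropped souls.
            if self.bloodstain and self.bloodstain.location == location:
                self.souls += self.bloodstain.souls
                self.bloodstain = None

    p = PlayerState()
    p.defeat_enemy(difficulty=3)   # 300 souls
    p.die("castle gate")           # souls drop into a bloodstain
    p.recover("castle gate")       # back to 300 souls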
Online interaction in the Souls games is integrated into the single-player experience. Throughout levels, players can briefly see the actions of other players as ghosts in the same area, which may reveal hidden passages or switches. When a player dies, a bloodstain can be left in other players' game worlds; when activated, it shows a ghost playing out that player's final moments, indicating how they died and potentially helping others avoid the same fate. Players can leave messages on the ground that can either help other players by providing hints and warnings or harm them by leaving false hints. Players can also engage in both player-versus-player combat and cooperative gameplay using invasion or summoning mechanics.

Games

Demon's Souls

Released in 2009 for the PlayStation 3, Demon's Souls is the first game in the Souls series. It has been described as a spiritual successor to the King's Field series of games, while at the same time being a separate entity "guided by differing core game design concepts." It also drew inspiration from video games such as Ico and The Legend of Zelda, as well as manga such as Berserk and JoJo's Bizarre Adventure. Demon's Souls takes place in the fictional kingdom of Boletaria, which is being ravaged by a cursed fog that brings forth demons who feast on the souls of mortals. Unlike its successors, Demon's Souls uses a central hub, known as the "Nexus", where players can level up, repair equipment, or buy certain items before venturing into one of the five connected worlds. The "World Tendency" feature is also exclusive to Demon's Souls: the difficulty of exploring a world depends on how many bosses have been killed and on how the player dies. The gameplay involves a character-creation system and emphasizes gathering loot through combat with enemies in a non-linear series of varied locations. It had an online multiplayer system integrated into single-player, in which players could leave messages and warnings in other players' worlds, as well as join other players to assist or kill them. The multiplayer servers were shut down in early 2018 due to inactivity. A remake by Bluepoint Games was released for the PlayStation 5 in November 2020.
Remake

In 2016, Miyazaki acknowledged the demand for a Demon's Souls remake, but said he was personally not interested in working on such a project, adding that he was instead open to the possibility of an external company doing so, provided that they loved the original and devoted themselves to it. In June 2020, a remake of the game by Bluepoint Games was announced for release as a launch title for the PlayStation 5 in November 2020. Japan Studio, which co-developed the original game and assisted Bluepoint on its previous remake of Shadow of the Colossus, also assisted in the development of the Demon's Souls remake. Gavin Moore served as creative director.

Dark Souls

Dark Souls is the second game in the Souls series; it is considered a spiritual successor to Demon's Souls. FromSoftware wanted to develop a sequel to Demon's Souls, but Sony's ownership of the intellectual property prevented them from doing so on other platforms. Dark Souls was released in 2011 for the PlayStation 3 and Xbox 360. In 2012, Dark Souls: Prepare to Die Edition was released for Microsoft Windows, PlayStation 3, and Xbox 360, featuring the base game and the Artorias of the Abyss downloadable content. The game takes place in the fictional kingdom of Lordran. Players assume the role of a cursed human character who sets out to discover the fate of undead humans like themselves. The plot of Dark Souls is primarily told through environmental details, in-game item flavor text, and dialogue with non-playable characters (NPCs). Players must piece together clues to understand the story, rather than being told it through more traditional means such as cutscenes. Dark Souls and its predecessor Demon's Souls garnered recognition for the series' high level of difficulty. A remaster, Dark Souls: Remastered, was released in May 2018.

Dark Souls II

Dark Souls II is the third installment in the Souls series. Unlike with the previous two games, director Hidetaka Miyazaki did not reprise his role, as he was busy directing Bloodborne, although he was still involved in a supervisory capacity. It was released in 2014 for Microsoft Windows, PlayStation 3, and Xbox 360. In 2015, an updated version featuring The Lost Crowns downloadable content was released for Microsoft Windows, PlayStation 3, Xbox 360, PlayStation 4, and Xbox One under the title Dark Souls II: Scholar of the First Sin, with the latter two platforms receiving retail releases. The game takes place in the fictional kingdom of Drangleic, where the player must find a cure for the undead curse. Although set in the same universe as the previous game, there is no direct story connection to Dark Souls.

Dark Souls III

Dark Souls III was announced at E3 2015. It was released in Japan on March 24, 2016, and worldwide on April 12, 2016, for Microsoft Windows, PlayStation 4, and Xbox One. The gameplay is faster-paced than previous Souls installments, which was attributed in part to the gameplay of Bloodborne. The game takes place in the kingdom of Lothric, where the player must end the cycle of linking the Flame. In 2017, a complete version containing the base game and both expansions (Ashes of Ariandel and The Ringed City) was released under the title Dark Souls III: The Fire Fades. Dark Souls III was both critically and commercially successful, with critics calling it a worthy and fitting conclusion to the series. It was the fastest-selling game in Bandai Namco's history, selling over 10 million copies by 2020.
In an interview promoting Dark Souls III, Miyazaki was asked how he felt about the number of Souls games at the time. He responded: "I don't think it'd be the right choice to continue indefinitely creating Souls and Bloodborne games. I'm considering Dark Souls 3 to be the big closure on the series. That's not just limited to me, but From Software and myself together want to aggressively make new things in the future... I believe that From Software has to create new things. There will be new types of games coming from us, and Dark Souls 3 is an important marker in the evolution of From Software." In April 2016, it was reported that Miyazaki and FromSoftware had begun working on a new intellectual property (Sekiro: Shadows Die Twice) and had no plans at the time to continue the Souls series with sequels or spin-offs.

Related games

In February 2016, Bandai Namco Entertainment partnered with American retailer GameStop to release Slashy Souls, a free-to-play mobile endless runner, to promote Dark Souls III. The game was presented in a pixel-art style and shares the series' level of difficulty. It was met with highly negative critical reception, with reviewers such as Chris Carter of Destructoid and Jim Sterling giving it a 1/10. The King's Field series, also developed by FromSoftware, is considered a spiritual predecessor to the Souls series. It debuted in 1994 with King's Field for the PlayStation and had three sequels in addition to other spin-offs. Other FromSoftware games directed by Miyazaki, such as Bloodborne, Sekiro: Shadows Die Twice, and Elden Ring, share many of the same concepts as the Souls games and are often associated with the series. Fans and journalists often group the games under the moniker "Soulsborne".

Reception

The Souls series has been met with critical acclaim. Demon's Souls won several awards, including "Best New IP" from GameTrailers and overall Game of the Year from GameSpot. Former SIE president Shuhei Yoshida later regretted not publishing the game in the West, having doubted how successful it would become. Dark Souls originally did not have a Microsoft Windows port, but upon seeing a fan petition for one, Bandai Namco community manager Tony Shoupinou lauded the supporters, and a Windows port was released in 2012. Dark Souls is also considered by some critics to be one of the greatest video games of all time, and it has influenced the development of many later video games. Dark Souls II also received critical acclaim upon release. Before release, Dark Souls III was one of the most anticipated games of 2016, and it too received critical acclaim. Bluepoint's version of Demon's Souls is currently the highest-scoring installment in the series on Metacritic. Several news outlets have reported it as one of the greatest video games of all time to be released as a launch title for a console.

The "bloodstain" gameplay mechanic has been praised by critics. David Craddock of Shacknews called bloodstains "the hook that reels Demon's and Dark Souls players back in time and time again", and said that the resurrection of all enemies upon death makes the journey back to one's bloodstain "quite the nail-biter". He stated that the harshest punishment one can receive in a Souls game is "not dying once, but twice."
Stephen Totilo of Kotaku called bloodstains the "best game death innovation" after playing a demo of Demon's Souls, questioning "what took so long for a breakthrough like this?" GamesRadar+ called bloodstains, in combination with Demon's Souls message system, "a graceful, elegant way of letting players guide each other without the need for words", and said that "rarely has the price of failure been balanced on such a precarious knife edge" as when being forced to retrieve one's bloodstain. The series inspired a social media app for iOS and Android called Soapstone, which adapts the series' online messaging system to the real world, using GPS to determine a user's location and bringing up a list of cryptic messages posted by other users in the area.

The bonfire concept was similarly praised. Matthew Elliott of GamesRadar+ called bonfires a powerful symbol of relief and "a meaty cocktail of progress, exhaustion and joy", and said that, while other games evoke emotions with their save points, no other game does so as effectively. Vice called the bonfire a "mark of genius" that "reinvented the save point" and allowed the player to reflect on their progress.

Sales

Demon's Souls sold an estimated 1.7 million copies, while the Dark Souls series has sold over 25 million copies worldwide. Dark Souls III broke sales records upon release, becoming the fastest-selling game in Bandai Namco's history and selling over three million copies worldwide within a month of its international release. By May 2020, the Dark Souls series had sold over 27 million copies.

Legacy

A comic book by Titan Comics based on the series debuted alongside the international release of Dark Souls III in April 2016. That same month, a Kickstarter campaign for an officially licensed board game based on the series was announced, titled Dark Souls – The Board Game. The campaign was funded within the first three minutes of its launch, and the game was published by Steamforged Games and released in April 2017. In February 2017, music from the series, composed by Motoi Sakuraba, was performed by a live orchestra at the Salle Pleyel concert hall in Paris. In September of that year, a limited-edition vinyl box set containing the soundtracks of all three games was released in Europe. In Japan, a box set containing the enhanced versions of all three games for the PlayStation 4, the soundtracks for each, bookends, artwork prints, and dictionaries detailing every in-game item from the series was released on May 24, 2018. The soulslike video game genre was inspired by the series, resulting in many games using similar mechanics. The series has also been cited as an influence on several PlayStation Network features, including asynchronous messaging, social networking, and video sharing, as well as on the television show Stranger Things.

Notes

References

Action role-playing video games
Video game franchises
Video game franchises introduced in 2009
Sony Interactive Entertainment franchises
53068776
https://en.wikipedia.org/wiki/Tajemnica%20Statuetki
Tajemnica Statuetki
Tajemnica Statuetki is a Polish-language adventure game developed and published by Metropolis Software House for DOS-based computers in 1993. While it was never released in English, it is known in the English-speaking world as The Mystery of the Statuette. The game was conceived by a team led by Adrian Chmielarz, who used photographs taken in France as static screens within the game. It was the first adventure game produced in Poland; its plot revolves around a fictional Interpol agent named John Pollack trying to solve a mystery associated with thefts of ancient artifacts around the world. At the time of the game's release, software piracy was rampant in Poland; the game nevertheless sold between 4,000 and 6,000 copies, becoming very popular there. Tajemnica Statuetki was praised for its plot and for being a cultural milestone that helped advance and legitimise the Polish gaming industry, despite attracting minor criticism for its game mechanics and audiovisual design. The game found a warm reception from both the gaming community and industry magazines, which tended to focus on the title's positives.

Gameplay

Tajemnica Statuetki is shown from a first-person perspective. It is a point-and-click adventure game presented as a series of photographic images, although most information is communicated through text. The game is divided into three chapters, each of which takes place in a different location. Players solve puzzles and interact with characters to progress through the story. The menu offers six different actions, equipment, and a map. The player uses the action commands in a manner similar to LucasArts adventures: an action is performed by clicking a command button and then either an inventory item or a part of the room screen. Often, progression through the game requires the player to locate objects the size of a single pixel on the monitor, a practice known as pixel hunting. The game's puzzles call on the player's knowledge of varied fields, such as cocktail recipes; the player is, for example, tasked with ordering a drink of the correct composition for a tourist.
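As a rough illustration of how a verb-plus-target command scheme of this kind works, consider the following Python sketch. It is a generic model of the LucasArts-style interface described above, not Metropolis's actual engine; the verbs, objects, and messages are all invented for this example.

    # Generic sketch of a point-and-click verb/target command loop.
    ROOM = {
        "counter": "A polished bar counter.",
        "napkin": "A folded napkin.",
    }
    VERBS = {"look", "take", "use", "talk", "open", "walk"}

    inventory: list[str] = []

    def execute(verb: str, target: str) -> str:
        """Resolve one click sequence: a command button, then a target."""
        if verb not in VERBS:
            return "Nothing happens."
        if verb == "look" and target in ROOM:
            return ROOM[target]
        if verb == "take" and target in ROOM:
            inventory.append(target)
            del ROOM[target]
            return f"You take the {target}."
        return "That doesn't work."

    print(execute("look", "counter"))   # A polished bar counter.
    print(execute("take", "napkin"))    # You take the napkin.

In a game like Tajemnica Statuetki, the "target" would come from a mouse click on the photographic screen, which is why one-pixel hotspots make pixel hunting necessary.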
Plot

When religious artifacts from around the world, often of insignificant market value, start to disappear, Interpol realises it is not the work of a dishonest collector. Clues lead suspicion to fall upon commando and former CIA agent Joachim Wadner, who appears unhelpful but intelligent. To catch the thief, Interpol chooses its best trainee, protagonist and playable character John Pollack, a young American of Polish descent. Pollack has unlimited funds at his disposal, can complete a Rubik's Cube in under three minutes, and can kill with his bare hands. He takes his gadget briefcase, boards a submarine, and sails to the Pacific Ocean. Pollack follows Wadner to San Ambrosio, an island off the coast of Chile. He completes a reconnaissance mission at a cafe, working there for weeks and perfecting trained movements like placing a napkin on a counter. During this time, the cafe changes ownership but there is no progress on the case. Pollack interrogates a bartender, who says Wadner has recently been there. Players can choose to mix a range of cocktails for Pollack. Wadner visits a beach and watches three young girls, none of whom pay any attention to him. At one point, the player is captured and tasked with escaping from a cell, a task that involves splitting doors, their frames and handles, using electricity to paralyse a guard, and counteracting the throat dryness caused by drinking from a puddle. From this point on, the quest continues through cities and tourist centers, and the crime eventually takes on a satanic, occult feel.

Development

Tajemnica Statuetki was conceived by Adrian Chmielarz. In the early 1990s, he decided the time was right for him to create the first Polish adventure game. He "and a few friends hatched a plan to take photographs from his vacation to France and turn them into a video game". The group realised they could serve the largely untapped Polish software market, where many people had PCs but were unable to become immersed in adventure games because they did not understand English. Chmielarz was not worried about the Polish gaming market being small: it had already been probed by developer X-Land, and he saw its potential, noting the number of people who attended conventions. This project evolved into Tajemnica Statuetki. The game was conceived by Chmielarz after he visited the Côte d'Azur, photographing the area and building a plot around his experiences. He and his high school friends, Grzegorz Miechowski and Andrzej Michalak, collaborated on the game. Chmielarz took on a directing and scriptwriting role and set the creative tone. According to Miechowski, until March 1993 Chmielarz wrote sixteen hours daily for the game. Miechowski dealt with business stakeholders and marketing, while Michalak handled the physiognomy of the production. Miechowski and his brother were responsible for sales and logistics; they collaborated with the computer companies Optimus and JTT. Miechowski and Michalak financed the project by selling computers. Marcin M. Drews, who took some of the photographs for the game, is often mistaken for the game's creator.

Design

While Tajemnica Statuetki was initially supposed to have hand-drawn graphics, during development it was altered to use digitised photographs as static screen backgrounds. Up until that point, graphic designers had spent many hours on concept art and storyboards. The choice was atypical for the industry at the time, when moving protagonists and diverse on-screen locations were the norm. It resulted in a game consisting entirely of photographic material, which had to be rendered so that the game would fit on no more than two floppy disks. This was necessary to keep the consumer price low enough that people would be willing to buy the game legally. The developers also cut costs by including photos instead of animated video sequences. According to Chmielarz, "we were indie before it was cool, so to speak". According to Antyweb, the game's development succeeded despite a shoestring budget, noting Chmielarz's "initiative" of using "home-made methods" when he did not have access to multiple Elwro and IBM XT computers. During development, he was living in Wrocław. The game was written entirely in assembly language, had a one-megabyte source text, and contained 2,000 loops, altogether around 30,000 commands. The title, as is typical of point-and-click adventures, requires the player to control the main character with a mouse. Photographs in the game were taken on the Côte d'Azur, in Saint-Tropez, Monte Carlo, and Nice, and at the abandoned Calvinist church and cemetery in Jędrzychowice, Strzelin.
The team went to these locations specifically for the game and took as many photographs as possible. The game's soundscape includes effects such as closing doors and glasses being wiped. Chmielarz inserted references to mass culture, for instance James Bond. To make the game more appealing, the developers added a skill-based section that players could skip if they wanted to. The final segment of Tajemnica Statuetki has a sense of fun that mirrors the lighter side of the game's production; it is a "mix of horror and thriller" according to OnlySP, and a mix of sensation, humour, and occultism according to Chmielarz. A "youthful fantasy" can be observed during a meeting with the main opponent, who performs a magical ritual in a fiery circle made of birthday candles. According to Miechowski, the game was made with the motto "Teach with fun, play with learning", and this educational slant was acknowledged by Nie tylko Wiedźmin, which said the game is responsible for its players, decades later, knowing how to make good cocktails as a result of its in-game puzzle. Chmielarz's design philosophy was to create a game similar to those released in the West; according to Komputer Świat, at the time of the game's development the Polish computer industry was five years behind the West's, and Polish titles were generally clones of then-popular Western video games. Tajemnica Statuetki is comparable to late-1980s adventure games. Secret Service agreed, saying the game visually resembles the point-and-click adventure Countdown (1990) and the interactive-fiction adventure Amazon (1984) due to their similar menu systems and use of digitized images. Many critics acknowledged the game's limitations in light of the size of the country's video game industry at the time. One magazine reminded its readers not to compare the game with the latest titles from adventure-gaming giants like Sierra On-Line, LucasArts, Infogrames, and Delphine, due to the "tremendous" gaps in "experience, financial and technological resources, infrastructure, and legal protection" that would only gradually close.

Release

Development of Tajemnica Statuetki was finished by February 12, 1993; at the end of that year the game was released onto the market at a price of 231,000 Polish zlotys. During development, the team approached the publishers Atlantis Software and Avalon Interactive (then called Virgin Interactive) but were turned down. Chmielarz got IPS Computer Group to distribute and market the game, but the company agreed to take only 100 copies rather than the 2,000 the creators had offered; as a result, Chmielarz founded his own company to sell the rest of the stock. The name of his publishing company, Metropolis Software, was chosen because "it sounded nice". Founded in 1992 in preparation for the game's release the following year, it became one of the first Polish video game companies, after contemporaries such as Computer Adventure Studio and X-Land. Chmielarz boxed the games in packaging he had designed himself. Each box contained two HD floppy disks with the program and extra material. The latter included a copy of the fictional newspaper Dziennik Metropolis dated October 1; its articles presented the game's plot and contained anti-piracy safeguard information, self-referential humour, an advertisement for the future release Teenagent, a tiny crossword puzzle, and secret codes for use in the game. Tajemnica Statuetki became the company's premiere title.
Metropolis Software posted advertisements throughout industry magazines. One issue of Secret Service contained a review that included an interview with the authors, a competition, and an advertisement. Competition entrants could win computer equipment and books. The team also advertised in the press. Geezmo thought the game's commercial success was largely due to a "deliberate, well-thought-out media campaign" that included the sale of CDs attached to a popular magazine. Sales of the game exceeded both IPS's estimates and the creators' expectations. Chmielarz sold between 4,000 and 6,000 copies by mail at a time when moving just 1,000 or 2,000 units was considered a major achievement. With the profits of the initial sales, Chmielarz was able to run Metropolis Software for two years without financial difficulty. Even Western hits had rarely achieved such a level of sales, and the market was still dominated by rampant piracy. At a time when democracy and capitalism were being introduced to Poland, and with the Software Protection Act coming into effect in 1994, players were not used to paying high prices for original games. Polish developers had become accustomed to players pirating their games yet continued to spend months on titles despite little return. According to Chmielarz, the main reason for this cultural landscape was the commodity-exchange scene he himself came from; he was very critical of pirates, particularly those who had tried to hack his game. Despite being frequently pirated, the game sold a sizable number of units legally. SS-NG (Secret Service - Next Generation) attributed Tajemnica Statuetki's success in a piracy-prone market to its being reasonably cheap and comparable in quality to English-language adventure games.

Reception

Writing and plot

Tajemnica Statuetki's plot and writing were highly praised; according to Video Games Around the World, its overall positive reception was mostly due to the strength of its script. Gry Online praised the "greatly realized scenario" that held up the narrative, Gra.pl said the best element of the game was its plot, and MiastoGier said the game's engaging story outweighs its negatives. GameDot said the title "still surprises with its brilliance in the description of the surroundings and the structure of dialogues (very modeled on LucasArts productions) that bring to mind solid literary material". SS-NG wrote that the Polish language was "professionally implemented", without spelling or stylistic errors, and thought the game struck a balance between humour and bleak, horror-filled scenarios. SS-NG also said the narrative and gameplay were well thought out and hold up, adding that the clever mix of humour and drama effectively breaks the suspense with laugh-out-loud moments. Secret Service said the game's strongest asset is its use of the Polish language, including idiosyncrasies such as noun declensions and references to Polish jokes, stereotypes, and culture. According to Benchmark, the "professional, tense thriller" is highly original and owes much of its success to the period in which it was released, when technological imperfections were compensated for by strong concepts. According to Orange, players appreciated the intriguing scenario and attention to detail, and the game was still attracting new fans as of 2014. Polygamia said players appreciated the game's engaging scenario, high-class criminal intrigue, and thoughtful production.
According to Dawno temu w grach, the game did not delight, but its strength was the execution of its well-constructed and well-written scenario, which holds the player in suspense until the end. Bastion Magazynów wrote that despite its status as the only Polish-language adventure game of its day, its main draw was its well-constructed plot. According to Geezmo, the "well-thought-out and moderately addictive storyline" was considered atypical at the time and both a curiosity and a novelty. Neskazmlekiem said the convoluted description of escaping from the cell, including the passage "spit so long until it dries in your throat", justified the description of Chmielarz as "the greatest Polish sadist in the gaming market". While noting its lack of a complicated plot and well-constructed characters, Polskie Gry Wideo wrote that the game offers hours of entertainment. Valhalla wrote that the plot of the 2002 photographic adventure game A Quiet Weekend in Capri sounded "even worse" than that of Tajemnica Statuetki. While noting that the interesting, occult-inspired plot was well received, even outside Poland, the reviewer at Gameplay questioned the point of the titular statuette in the story. SS-NG wrote that the game has a great atmosphere. Gambler wrote that the game was a "very successful program" that was somewhat modelled on Sierra's Quest series. Gry.impo said the game was twisted, commenting that it burst with atmosphere and cult-oriented puzzles.

Gameplay

Tajemnica Statuetki's gameplay had a mixed reception from reviewers. Gry Online opined that the game is demanding and requires players to have patience and an open mind to find absurd solutions to puzzles. Despite its difficulty, SS-NG urged players to stick with it, noting that even today it will provide hours of entertainment. Polygamia wrote that while the game is not technically proficient, it was appreciated by players for its engaging scenario, high-class criminal intrigue, and careful execution. SS-NG noted that in a pre-walkthrough age, the game was particularly difficult, with aspects such as copy protection revealed in the middle of the game, a games-room sequence, and a useful item disguised as part of a building interior. Secret Service described the gameplay as reminiscent of Infocom products, in particular The Hitchhiker's Guide to the Galaxy, noting that players are required to be sharp and perceptive when interacting with found objects, which often must be used in ways contradictory to their original purposes. The reviewer criticised the challenge of locating certain items and of learning of the existence of others. Citing examples such as a coin on a beach, a hairpin on a sofa, and a hook on an anchor, Secret Service noted that such items were one pixel in size and practically invisible on the screen, necessitating pixel hunting with the mouse and "creating unnecessary downtime". Due to this difficulty, the magazine said the developers should have tested the game with regular players before publishing it. Polskie Gry Wideo wrote that the game's interface, which requires pixel hunting, does not confirm correct solutions to puzzles or provide hints or a clear sense of purpose in the world, putting it at odds with other adventure games of the time. They deemed it "a bit frustrating" and less enjoyable as a result. Benchmark noted that looking for details and objects relevant to advancing the plot requires players to strain their eyes, a sentiment echoed by Orange.
Nie tylko Wiedźmin noted that even by the standards of the time, the game was considered difficult. Geezmo said the way the game is designed requires players to smack the screen with their mice. While Secret Service said Tajemnica Statuetki is more modestly constructed than its Western counterparts, it concluded that it was solidly made, without lag or game-breaking bugs. Gry Online argued that the game was less fun and interesting than its successor Teenagent. Polskie Gry Wideo noted the game's "undeveloped game design and numerous unintuitive solutions."

Audiovisual design

Critics had a mixed response to Tajemnica Statuetki's audiovisual design. Gry Online said that when the game appeared on the market, "the mere use of digitized photographs was the pinnacle of the achievements of Polish programmers". Michael Zacharzewski of Imperium Gier said that by 1999 standards, the game should be considered an embarrassment in terms of technical quality. MiastoGier wrote that the game had impressive graphics despite its simple design. PB.pl said the game looks "clumsy" today, while Interaktywnie noted that it appealed despite its "amateur character". According to Radio Szczecin, by 2014 standards the game is "unkempt" due to its static photographs, lack of animation and music, and expository subtitles, all of which might "scare off the younger players". It noted, however, that older players should play the game to remind themselves of what adventure games used to be like. SS-NG said the game's audiovisual content would have seemed lacking compared to Western games even at the time of release, particularly its sound effects and use of digitised photographs. Gry Online noted that while the game was lacking graphically, the use of digitised photographs was considered a milestone in the Polish video game industry at the time. Describing the game as a "strong textbook with pretty images", Secret Service expressed disappointment that it did not include music in the introduction or a line-by-line voiceover. Video Games Around the World deemed the production values "Spartan" because of the use of digitised photographs, the lack of animations and music, and the minimal sound effects. Polskie Gry Wideo wrote that the game's graphics contrasted with those of western contemporaries such as Mystery House (1980), Maniac Mansion (1987), The Secret of Monkey Island (1990), and Indiana Jones and the Fate of Atlantis (1992). SS-NG said the game's photographic material added an air of authenticity. Bastion Magazynów noted that even at the time of its release, the game did not delight with sound or graphics. Geezmo said the lack of animation and the inaudible soundtrack came across as technical weaknesses. MiastoGier noted that the unprecedented use of incredibly photo-realistic graphics astonished players. Chmielarz agreed that the game "technically did not delight". Secret Service wrote that they wished the game's creators had made a version compatible with inferior graphics cards. SS-NG thought that the game "did not impress with graphics". Exec lamented that the sound was limited to just a few samples and some "very annoying squeaks".

Polishness

Video Games Around the World wrote that reviewers were willing to overlook the shortcomings of Tajemnica Statuetki because it was the first Polish adventure game for the PC.
SS-NG noted that because the game was only released in Polish, English speakers were unable to fully enjoy it, and that despite its shortcomings it "won the hearts of Polish [and non-Polish] adventure lovers". The site also said it should be played by everyone, so that they will appreciate one of the first Polish video games. Despite the magazine's reservations, Secret Service deemed Tajemnica Statuetki a good game, recommending it for its affordability and accessibility; it also said that "the patriotic duty of all players should be to buy it from the producer and finish it". SS-NG called the game a source of national pride, commenting that "anyone who knew our beautiful Slavic language could play it". To the reviewer, the game has a replayability unlike other adventure games because it is a piece of their childhood; the writers felt gratitude to the developers for the many hours they had spent playing it. Aleksy Uchański wrote in Gambler that at the time the Polish gaming industry had "2.5 good games of native production (and about ten times more rubbish)", most of which were not influential in the industry's development, calling Metropolis and its early games the worthy exception. While Tajemnica Statuetki could not compete with Sierra or LucasArts products, SS-NG said it was quite good by the Polish standards of the time, writing that the best aspect of the title was that "above all, it was Polish". Despite writing that in the modern age it is difficult to understand how such an average product could once have been praised, Gameplay wrote that playing Tajemnica Statuetki was worthwhile to see what the "global hit of Polish production" once looked like. Polskie Gry Wideo said arguably the most important thing about the game was that information is provided in Polish, deeming it "revolutionary". SS-NG said that while the game may have been "a bit forgotten" by 2004, this did not detract from its "contribution to the development of the entire domestic industry". In 2014, Radio Szczecin commented that the game had not aged well but remained unforgettable because it is a fully Polish production, from its artists to its subtitles, with international action. Radio Szczecin also said the title is unforgettable for its cultural relevance, in the sense that a game with Polish artists and subtitles made a dent on the world stage.

Legacy

Tajemnica Statuetki was followed by the critically acclaimed point-and-click adventure Teenagent (1995), which the company eagerly advertised thus: "The creators of Tajemnica Statuetki have been silent for over a year. See for yourself why". InnPoland credited this marketing campaign with making the game a "breakthrough". PB.pl said the slogan "grabbed" the public. As Teenagent was the first Polish game to be professionally marketed, Polygamia said that "none of the Polish producers has ever preceded their game with such great marketing efforts", and that this created a media buzz around the game. According to Eurogamer, "the studio was flush with the early successes" of these two games. Among the studio's later work was the satirical adventure game The Prince and the Coward (1998), created with the help of the fantasy writer Jacek Piekara. The Prince and the Coward completed the trilogy of adventure games with which Chmielarz started his game-designer career. Chmielarz thought the title, one of his first three commercially released games, was more convincing than his first ZX Spectrum games.
A continuation of the Tajemnica Statuetki story, to be released on CD-ROM, was announced. Photographic material for the game was collected, but work was suspended and Metropolis Software moved on to other projects. As of December 1994, Secret Service still expected Tajemnica Statuetki 2 to be released, most likely in 1995. Bajty Polskie noted that even in 2015 players still anticipated a sequel, with a story in which Pollack goes on holiday with a friend to a Polish castle near the Masurian Lakes and an ambulance takes his friend away. A personal conflict between Chmielarz and Miechowski led to the former leaving the company around 2002 to form People Can Fly. Metropolis Software was acquired by CD Projekt in 2008 as a subsidiary and became defunct the following year. Miechowski later founded 11 Bit Studios. Metropolis kept the rights to Tajemnica Statuetki; these became the property of CD Projekt when it bought the company, and it may still own them. The use of holiday photographs as part of the game's visual language partly inspired People Can Fly developer Wojciech Pazdur to use photogrammetry to help build levels for the first-person shooter Painkiller. The 2004 adventure game Ramon's Spell, according to SS-NG, was modeled on this game. Tajemnica Statuetki was Metropolis's first use of special service agents, a trope that would reappear in many of its later games, including 2007's Infernal. Chmielarz, the "Polish Sid Meier", said his 2014 game The Vanishing of Ethan Carter saw him return to the adventure-gaming roots of the Tajemnica Statuetki era, while Gamezilla said the latter title would not have existed without the former. Adam Juszczak's text Polski Rynek Gier Wideo – Sytuacja Obecna Oraz Perspektywy Na Przyszłość noted the "breakthrough" nature of Tajemnica Statuetki in a wider context: the Polish gaming market had been held back by the socialist system and the lack of widespread access to computers and consoles, and as a result had only begun to develop in the 1980s. The first Polish computer magazine, Bajtek, was launched in 1985. Around this time the first Polish games also began to emerge, placing Tajemnica Statuetki early in the history of Polish entertainment.

"Thanks to this product, players believed that Poles could make good games, and the developers themselves believed that there was a demand for domestic goods." - SS-NG reviewer Jacek Marciniak, on Tajemnica Statuetki's impact on the development of the Polish computer games market

Chmielarz's game had a wide impact on the domestic market. InnPoland wrote that it was one of the first Polish games to "enjoy hit status". It became an "undoubted success" and a "loud hit of the AT era". It stood out significantly against the background of similar adventure games from Mirage, LK Avalon, and ASF, and it was difficult to confuse it with other titles. Nie tylko Wiedźmin wrote that IPS's behaviour as a publisher became a "funny ... anecdote" in light of the game's unprecedented success. MiastoGier said Metropolis Software's early work was part of the golden age of adventure games that followed the age of animation sprites and connected frames; the site deemed the game an important piece of cultural history for adventure gamers, one that shows the beginnings of Polish game development and the start of Adrian Chmielarz's career, and described it as "legendary". Onet called Tajemnica Statuetki "one of the most important games in the history of virtual entertainment".
The game "allowed Metropolis Software" to "gain recognition in the Polish PC world of adventure games admirers". In addition, Onet said the game revolutionised game development in the country and that "for years it marked the direction of the industry". While Tajemnica Statuetki is virtually unheard of in the Western world, it was a "great success for the fast-paced Polish audience". SS-NG wrote that so much had been written about it that certain points of analysis had become hackneyed. According to Polygon, "thanks to piracy just about every gamer of a certain age in Poland has played Mystery of the Statuette". Part of the reason was a Metropolis Pack containing Tajemnica Statuetki, Teenagent, and Katharsis put together by pirates and released as shareware. SS-NG said the unprofitability in the industry due to piracy necessitated a leap of faith on Chmielarz's part. Polish YouTuber Krzysztof “NRGeek” Micielski asserted that the game's original release marked the first title he ever bought rather than pirated. , Chmielarz is considered a "living legend" in the Polish video gaming industry and "one of the most important computer game developers" in the country. Polskie Gry Wideo shrugged off the title's underwhelming aspects as the "first, clumsy steps" at the beginning of Chmielarz's influential career, calling it a "step in the right direction". The history of early Polish video game development, and specifically the creation of Tajemnica Statuetki, was addressed in the Polish-language book Nie tylko Wiedźmin: Historia polskich gier komputerowych. In 2014, Orange listed the game as the sixth best Polish game of the 1990s. Logo 24 listed Tajemnica Statuetki as one of the "Top 10 ... most important Polish games", deeming Chmielarz "probably the most known and respected Polish game developer". In 2016, the game was exhibited as part of the Digital Dreamers exhibition at the Palace of Culture and Science Sciences in Warsaw and was listed by Benchmark in their article Najlepsze polskie gry (The best Polish games''). Marcin N. Drews and Chmielarz reunited for a panel at the 2017 Pixel Connect convention, where they spoke about the game. Chmielarz had previously spoken about it on a 2013 panel with other early Polish game developers. The game can be legally played at GOG.com. Notes References External links An hour-long interview with Adrian Chmielarz An interview with Marcin M. Drews about the game 1993 video games Point-and-click adventure games Detective video games DOS-only games DOS games Europe-exclusive video games First-person adventure games Video games about police officers Video games developed in Poland Video games set in Chile
63745905
https://en.wikipedia.org/wiki/Studierfenster
Studierfenster
Studierfenster is a free, non-commercial, open-science, client/server-based medical image processing online framework. It offers capabilities such as viewing medical data (computed tomography, magnetic resonance imaging, etc.) in two- and three-dimensional space directly in standard web browsers like Google Chrome, Mozilla Firefox, Safari, and Microsoft Edge. Other functionalities are the calculation of medical metrics (Dice score and Hausdorff distance), manual slice-by-slice outlining of structures in medical images (segmentation), manual placement of (anatomical) landmarks in medical image data, viewing medical data in virtual reality, and facial reconstruction and registration of medical data for augmented reality. Other features of Studierfenster are automatic cranial implant design with a neural network, inpainting of aortic dissections with a generative adversarial network, and automatic aortic landmark detection with deep learning in computed tomography angiography scans. Studierfenster is currently hosted on a server at the Graz University of Technology in Austria, and is being expanded jointly with the Institute for Artificial Intelligence in Medicine (IKIM) in Essen, Germany.

History

Studierfenster was initiated within two bachelor theses during the summer bachelor program of the Institute of Computer Graphics and Vision at Graz University of Technology, Austria, in cooperation with the Medical University of Graz, Austria, in 2018/2019. The name Studierfenster (or StudierFenster) is German and can be translated as 'StudyWindow', where window refers to a browser window. The word Studierfenster is an adaptation of Studierstube ('study room'), an augmented reality project at the Vienna University of Technology in Austria.

Architecture

Studierfenster is set up as a distributed application via a client–server model. The client side (front-end) consists of Hypertext Markup Language and JavaScript. The front-end also uses the Web Graphics Library (WebGL), a JavaScript application programming interface descended from the OpenGL ES 2.0 specification, which it still closely resembles. In contrast to OpenGL, WebGL allows the rendering of two- and three-dimensional graphics in web browsers. This enables the use of graphics features known from stand-alone programs directly in web applications, supported by the processing power of the client-side graphics processing unit. The server side (back-end) handles client requests via C, C++, and Python. It interfaces with common open-source libraries and software tools like the Insight Toolkit, the Visualization Toolkit (VTK), the X Toolkit (XTK), and Slice:Drop. Server communication is handled by AJAX requests where needed. Studierfenster employs a Flask server.

Features

Dicom browser

This allows client-side parsing of a local folder with DICOM (Digital Imaging and Communications in Medicine) files. Afterwards, the whole folder can be converted to compressed NRRD (nearly raw raster data) files and downloaded as a single file. NRRD is a library and file format for the representation and processing of n-dimensional raster data. It is intended to support scientific visualization and (medical) image processing applications. With the "Dicom Browser" of Studierfenster, it is possible to select specific studies or series and only convert these.

File converter

The file converter converts a medical volume file (e.g. a non-compressed NRRD file) to a compressed/binary NRRD file. After the conversion, the compressed file can be downloaded and used with the "Medical 3D Viewer" for 2D and 3D visualization, and further image processing.

Metrics module

This can calculate the Dice similarity coefficient and Hausdorff distance between two segmentation masks (in NRRD format) in a standard web browser. The resulting table has seven columns: the file names of both files used in the calculation, the calculated Dice similarity coefficient, the calculated Hausdorff distance, the calculated directed Hausdorff distance for both directions, and an indication of whether image spacing was used in the calculation. The table can be sorted, is searchable, and can be exported as a simple copy, an Excel spreadsheet, a comma-separated values file, or a portable document format. The Metrics Module has been used to compare manual anatomical segmentations of brain tumors.
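For intuition, the two metrics the module reports can be computed from a pair of binary masks in a few lines. The sketch below is illustrative only: it uses NumPy and SciPy rather than Studierfenster's own back-end code (which is not reproduced here), and the voxel-spacing handling is a simplifying assumption.

    import numpy as np
    from scipy.ndimage import distance_transform_edt

    def dice_coefficient(a: np.ndarray, b: np.ndarray) -> float:
        """Dice similarity coefficient of two boolean masks: 2|A∩B| / (|A|+|B|)."""
        a, b = a.astype(bool), b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    def directed_hausdorff(a: np.ndarray, b: np.ndarray,
                           spacing=(1.0, 1.0, 1.0)) -> float:
        """Directed Hausdorff distance: the largest distance from a voxel of A
        to its nearest voxel of B, via the Euclidean distance transform.
        Assumes both masks are non-empty."""
        dist_to_b = distance_transform_edt(~b.astype(bool), sampling=spacing)
        return float(dist_to_b[a.astype(bool)].max())

    def hausdorff(a, b, spacing=(1.0, 1.0, 1.0)) -> float:
        """Symmetric Hausdorff distance: max of the two directed distances."""
        return max(directed_hausdorff(a, b, spacing),
                   directed_hausdorff(b, a, spacing))

In practice, masks in NRRD format could first be loaded with a reader such as pynrrd (data, header = nrrd.read('mask.nrrd')) before being passed to functions like these; passing the voxel spacing, as in the sketch, is what makes the distances physically meaningful.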
VR viewer

The VR Viewer (or Medical VR Viewer) enables viewing (medical) data in virtual reality (VR) with devices like the Google Cardboard or the HTC Vive (via the WebVR App). For viewing in VR, the data needs to be converted to the VTI (.vti) format, which can be done with ParaView, the open-source, multi-platform data analysis and visualization application.

Criticism

Studierfenster is not a certified medical product; it can only be used for educational, research, and informational purposes.

References

External links
Studierfenster

2018 software
Open science
Software frameworks
Medical imaging
Free health care software
Free DICOM software
Free science software
Free software programmed in C++
Free software programmed in Python
Free bioimaging software
Computer vision software
Image processing software
Data visualization software
Free 3D graphics software
Free biovisualization software
Software that uses VTK
30428605
https://en.wikipedia.org/wiki/1966%20USC%20Trojans%20football%20team
1966 USC Trojans football team
The 1966 USC Trojans football team represented the University of Southern California (USC) in the 1966 NCAA University Division football season. In their seventh year under head coach John McKay, the Trojans compiled a 7–4 record (4–1 against conference opponents), won the Athletic Association of Western Universities (AAWU or Pac-8) championship, and outscored their opponents by a combined total of 199 to 128. The team was ranked #18 in the final Coaches Poll.

Quarterback Troy Winslow led the team in passing, completing 82 of 138 passes for 1,023 yards with 6 touchdowns and 5 interceptions. Don McCall led the team in rushing with 127 carries for 560 yards and 5 touchdowns. Ron Drake led the team in receiving with 52 catches for 607 yards and 4 touchdowns.

Schedule

Game summaries

at Texas

at Miami (FL)

Notre Dame

Purdue (Rose Bowl)

References

USC
USC Trojans football seasons
Pac-12 Conference football champion seasons
USC Trojans football
68472059
https://en.wikipedia.org/wiki/Federal%20College%20of%20Education%20%28Technical%29%2C%20Potiskum
Federal College of Education (Technical), Potiskum
The Federal College of Education (Technical), Potiskum is a federal government higher education institution located in Potiskum, Yobe State, Nigeria. For its degree programmes, it was first affiliated with the Federal University of Technology, Minna, and later with Abubakar Tafawa Balewa University. The college started offering PGDE programmes in 2021. The current Provost of the college is Dr. Muhammad Madu Yunusa.

History

The Federal College of Education (Technical), Potiskum was established in 1991. It was originally known as the Federal Advanced Teachers' College (FATC), Yola.

Schools

The college has the following schools, under which various departments exist:
School of Science Education
School of Technical Education
School of Business Education
School of Vocational Education
School of Education

Departments

School of Science Education
Chemistry Complex
Physics Complex
Mathematics Complex
Biology Complex
Integrated Science Complex

School of Vocational Education
Agricultural Science Complex
Home Economics Complex
Fine and Applied Art Complex

School of Technical Education
Electrical Complex
Automobile Complex
Building Complex
Woodwork Complex
Metalwork Complex

School of Business Education
Secretarial Complex
Accounting Complex

School of Education
Primary Education Complex
Early Childhood Education Complex

Courses

The institution offers the following courses:
Education and physics
Building technology education
Woodwork technology education
Education and integrated science
Integrated science/physics
Education and computer science
Education and mathematics
Home economics
Computer education/chemistry
Chemistry/integrated science
Computer education/physics
Electrical/electronics education
Automobile technology education
Education and chemistry
Biology/integrated science
Agricultural science and education
Education and biology
Business education
Computer education/biology
Computer science education/mathematics
Agricultural science
Early childhood care education
Technical education
Fine and applied arts
Primary education studies
Computer science/biology
Metalwork technology education

Affiliation

The institution is affiliated with Abubakar Tafawa Balewa University to offer programmes leading to the Bachelor of Education (B.Ed.) in:
Education & mathematics
Metal work technology education
Education and biology
Wood work/education
Automobile technology education
Electrical/electronics education
Education and chemistry
Education & computer science
Education & physics
Building education
Agricultural science and education

See also
List of tertiary institutions in Yobe State

References

Universities and colleges in Nigeria
1991 establishments in Nigeria
40879657
https://en.wikipedia.org/wiki/Phabricator
Phabricator
Phabricator is a suite of web-based development collaboration tools, including:

the Differential code review tool
the Diffusion repository browser
the Herald change monitoring tool
the Maniphest bug tracker
the Phriction wiki

Phabricator integrates with Git, Mercurial, and Subversion. It is available as free software under the Apache License 2.0.

Phabricator was originally developed as an internal tool at Facebook, overseen by Evan Priestley. Priestley left Facebook to continue Phabricator's development in a new company called Phacility. On May 29, 2021, Phacility announced that it was ceasing operations and would no longer maintain Phabricator, effective June 1, 2021. A community fork, Phorge, is under development.

Notable users

Phabricator's users include:

AngularJS
Asana
Blender
Discord
Dropbox
Facebook
FreeBSD
GnuPG
Khan Academy
KDE
Mozilla
LLVM/Clang/LLDB (debugger)/LLD (linker)
Lubuntu
MemSQL
Pinterest
Quora
Twitter
Uber
Wildfire Games
Wikimedia

Gallery

See also

List of tools for code review

References

External links

Wikimedia Phabricator, used for Wikimedia and MediaWiki tasks (bug reports and feature requests)
MediaWiki page about Phabricator, including user help

Facebook software
Software review
Free project management software
Free wiki software
Bug and issue tracking software
Help desk
Free software programmed in PHP
Software using the Apache license
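Most of the suite described above can also be driven programmatically through Phabricator's Conduit HTTP API. The following Python sketch queries open Maniphest tasks; the instance URL and API token are placeholders, and the exact constraint names should be verified against a given install's Conduit console:

```python
import requests

# Hypothetical instance URL and API token; replace with your own.
PHAB_URL = "https://phab.example.com/api/maniphest.search"
API_TOKEN = "api-xxxxxxxxxxxxxxxxxxxxxxxxxxxx"

# Conduit accepts form-encoded parameters; api.token authenticates the call,
# and PHP-style bracket syntax expresses nested constraints.
response = requests.post(
    PHAB_URL,
    data={
        "api.token": API_TOKEN,
        "constraints[statuses][0]": "open",  # only open tasks
        "limit": 10,
    },
    timeout=30,
)
response.raise_for_status()

for task in response.json()["result"]["data"]:
    fields = task["fields"]
    print(f'T{task["id"]}: {fields["name"]} ({fields["status"]["value"]})')
```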
4965675
https://en.wikipedia.org/wiki/Initng
Initng
Initng is a full replacement of the UNIX System V init, the first process spawned by the kernel in Unix-like computer operating systems, which is responsible for the initialization of every other process. Initng's website calls initng "The next generation init system".

Purpose

Many implementations of init (including the sysvinit used in many Linux distributions) start processes in a predetermined order, and start each process only once the previous one has finished its initialization. Initng instead starts a process as soon as all of its dependencies are met, so it can start several processes in parallel. Initng is designed to significantly increase the speed of booting a Unix-compatible system by starting processes asynchronously. Initng's supporters claim that it also gives the user more statistics on, and control over, the system.

Development

Despite still being considered beta, Initng was chosen as the default init system for Pingwinek, Enlisy, Berry Linux, and Bee. There are also packages for many distributions, such as Ubuntu and Fedora, as well as ebuilds for Gentoo and spells for Source Mage. In contrast to other similar projects, it features a portable and flexible code base better suited to embedded use, and it has already been ported to other operating systems such as Haiku and FreeBSD. It was created by Jimmy Wennlund; the current maintainer and project lead is Ismael Luceno.

Awards

In Linux Format issue 72 (November 2005), Initng received the Hottest Pick award.

Process (computing)
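The dependency-driven startup described under Purpose can be modelled in a few lines of Python. This is an illustrative sketch only (Initng itself is written in C, and the service names here are made up): each service starts as soon as its own dependencies have finished, so independent services initialize in parallel.

```python
import threading

# Illustrative service graph: each service lists the services it depends on.
DEPS = {
    "network": [],
    "syslog": [],
    "sshd": ["network", "syslog"],
    "httpd": ["network"],
}

done = {name: threading.Event() for name in DEPS}

def start(name: str) -> None:
    # Wait only for this service's own dependencies, then "start" it.
    for dep in DEPS[name]:
        done[dep].wait()
    print(f"starting {name}")
    done[name].set()

# Launch every service at once; the events serialize only true dependencies,
# so network, syslog, and httpd can come up concurrently while sshd waits.
threads = [threading.Thread(target=start, args=(n,)) for n in DEPS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```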
18784745
https://en.wikipedia.org/wiki/List%20of%20Microsoft%20365%20Applications
List of Microsoft 365 Applications
Microsoft Office is a set of interrelated desktop applications, servers, and services, collectively referred to as an office suite, for the Microsoft Windows and macOS operating systems. This list contains all the programs that are, or have been, part of Microsoft Office.

Current Microsoft 365 Applications

Server applications

Discontinued programs

See also

Microsoft Office shared tools
List of office suites
Comparison of office suites
Microsoft MapPoint
Microsoft Visual Studio
Microsoft Works
Microsoft Education Edition

References

External links

The Microsoft Office page for Windows
The Microsoft Office page for macOS
Training Center for Microsoft Office Application

Office
Office suites for Windows
Office suites for macOS
41131329
https://en.wikipedia.org/wiki/Neil%20Ramiller
Neil Ramiller
Neil Clifford Ramiller (born 1952) is an American academic and Professor of Management at the Portland State University School of Business Administration, known for his work with E. Burton Swanson on the management of information-technology innovations, particularly on the organizing vision.

Biography

Ramiller received his BA in Anthropology and Chemistry from Sonoma State University and did graduate work in anthropology and linguistics in the 1970s. He later received his MBA from the University of California, Berkeley, and, in 1996, his PhD from the UCLA Anderson School of Management under the supervision of E. Burton Swanson.

Ramiller started his career in the 1970s in cultural resources management, doing both archaeological fieldwork and administration. In the 1980s he moved into the software industry, working in software development, documentation, administration, and consultancy. In the 1990s he joined the UCLA Anderson School of Management, and in the new millennium he was appointed Professor of Management at the Portland State University School of Business Administration.

Ramiller has served on the editorial boards of the journals Information & Organization and Information Technology & People, and was an associate editor for MIS Quarterly. He has been a member of the International Federation for Information Processing Working Group 8.2 (IFIP WG 8.2).

Ramiller received the award for best paper published in MIS Quarterly in 2004, the best published paper award of the Academy of Management OCIS Division in 2009, and a best paper award from the Association of Graduate Liberal Studies Programs in 2009.

Selected publications

Ramiller has authored and co-authored many publications. Articles, a selection:

Swanson, E. Burton, and Neil C. Ramiller. "Information systems research thematics: Submissions to a new journal, 1987–1992." Information Systems Research 4.4 (1993): 299–330.
Ramiller, Neil C. "Perceived compatibility of information technology innovations among secondary adopters: Toward a reassessment." Journal of Engineering and Technology Management 11.1 (1994): 1–23.
Swanson, E. Burton, and Neil C. Ramiller. "The organizing vision in information systems innovation." Organization Science 8.5 (1997): 458–474.
Swanson, E. Burton, and Neil C. Ramiller. "Innovating mindfully with information technology." MIS Quarterly (2004): 553–583.
Wang, Ping, and Neil C. Ramiller. "Community learning in information technology innovation." MIS Quarterly 33.4 (2009): 709–734.

References

External links

Neil Ramiller at Portland State University

1952 births
Living people
American business theorists
American computer scientists
Sonoma State University alumni
UCLA Anderson School of Management alumni
University of California, Berkeley alumni
UCLA Anderson School of Management faculty
Portland State University faculty
33426248
https://en.wikipedia.org/wiki/Tonara%20%28company%29
Tonara (company)
Tonara is an Israeli education and technology company that developed sheet-music software of the same name. It allows music teachers to track the practice sessions of their students.

Software

Tonara's software is capable of acoustic polyphonic score following: it shows the musician's real-time position in the score and turns the pages automatically as the player reaches the end of a page. The software was initially launched as an iPad application in September 2011. The application includes several free music scores, and additional scores can be acquired through in-app purchasing. It also enables users to record their performances and share them with friends.

Reception

Tonara was launched as an iPad application during the TechCrunch Disrupt conference in San Francisco on September 12, 2011. The presentation on stage included a live string quartet and a vocal performance by Randi Zuckerberg accompanied by piano, all of them using Tonara. After its launch, CNN included Tonara in its "Startup stars" coverage of the Battlefield, The Guardian called it "very clever", and Wired magazine described it as "sheet music for the iPad generation… It's pretty clear that something interactive like this will do to sheet music what Kindle did to hardback books". In October 2011, it was selected by Apple as "App of the Week" in China, Germany, Austria, and Switzerland.

The first all-Tonara concert took place on November 12, 2011, in New York's Washington Square Park. Two pianists and one violinist played a complete concert repertoire, including pieces by Beethoven, Brahms, Bach, and Chopin, all from the Tonara store.

See also

List of music software

References

External links

Software companies established in 2008
Music software
Products introduced in 2008
Software companies of Israel
2008 establishments in Israel
307976
https://en.wikipedia.org/wiki/Battlefield%201942
Battlefield 1942
Battlefield 1942 is a 2002 first-person shooter video game developed by DICE and published by Electronic Arts for Microsoft Windows and Mac OS X. The game can be played in single-player mode against AI opponents or in multiplayer mode against other players over the Internet or a local area network. It is a popular platform for mod developers, with many released modifications that alter the gameplay and theme.

In-game, players assume the role of one of five classes of infantry: Scout, Assault, Anti-Tank, Medic, and Engineer. Players can also fly various World War II fighter aircraft and bombers, navigate capital ships, submarines, and aircraft carriers, man coastal artillery defenses, drive tanks, APCs, and jeeps, and take control of anti-aircraft guns and mounted machine guns. Each battle takes place on one of several maps located in a variety of places and famous battlefields in all of the major theaters of World War II: the Pacific, European, North African, Eastern, and Italian fronts. Combat is between the Axis powers and the Allies; the location determines which nation-specific armies are used (for example, on the Wake Island map it is Japan versus the United States, while on the El Alamein map it is Germany versus the United Kingdom). The maps in Battlefield 1942 are based on real battles and are somewhat realistically portrayed.

Upon release, Battlefield 1942 received generally favorable reviews, with particular praise directed towards the innovative gameplay, the multiplayer, and the World War II theme. The game performed well commercially, with over 3 million copies sold by 2004. Since its release, the game has spawned numerous sequels and spin-offs, which became part of what would ultimately become the Battlefield game series.

Gameplay

The gameplay of Battlefield 1942 has a more co-operative focus than previous games of this nature: it is important not only to kill the opposition but also to hold certain "control points" around the map. Capturing control points allows a team to reinforce itself by enabling players and vehicles to spawn in a given area; it also reduces the enemy's reinforcements. Battlefield 1942 was one of the first mainstream games to represent a dramatic shift in FPS gameplay mentality, not only favoring individualism but simultaneously encouraging teamwork and coordination.

The default gameplay mode, Conquest, centers on the capture and control of control points; once a team captures a control point, its members can respawn from it. When a team loses control of all of its control points, its members can no longer respawn, and once none of them remain alive, that team loses the round.

Games are composed of rounds. A team wins the round when the other team runs out of tickets. A team loses tickets when its members are killed, but also when the other team holds a majority of the capture points on the map. Therefore, the winning team must sometimes hunt down straggling or hiding enemy forces at the end of a round. Spawn tickets play a vital role in the success of both teams: every time a player dies and respawns, their team loses one ticket. Every team starts a round with between 150 and 300 tickets, depending on the team's role (e.g., defense). Teams also gradually lose tickets depending on how many spawn points they control: as a general rule, the fewer spawn points a team holds, the more tickets it loses, and as its hold on those spawn points shrinks, its tickets drain at an increasingly quick pace. For a team of 32 on a 64-player map starting with 150 tickets, this allows a little fewer than 5 respawns or deaths on average for every player if the team holds its starting spawn points.
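As a quick check of the per-player arithmetic above (the numbers come from the paragraph itself, not from the game's data files), a few lines of Python:

```python
# Ticket budget arithmetic for the Conquest mode described above.
tickets = 150      # typical starting tickets for one team
team_size = 32     # one side of a 64-player map

# Each death-and-respawn costs the team one ticket, so with no other
# ticket bleed every player can die roughly this many times:
respawns_per_player = tickets / team_size
print(f"{respawns_per_player:.2f} respawns per player")  # 4.69, i.e. "a little fewer than 5"
```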
Roles

The player can choose to play for either the Allied or the Axis team. The Allies consist of the United States, the United Kingdom, Canada, and the Soviet Union, while the Axis consists of Nazi Germany and Imperial Japan. Regardless of which nation is chosen, there are five infantry roles the player can assume: Scout, Assault, Medic, Anti-tank, and Engineer. Each role has its own strengths and weaknesses. The Scout offers long-range surveillance, high stopping power, and the ability to spot targets for artillery shelling of enemy positions (unlike in other games with a similar feature, the artillery fire itself must be supplied by other players); however, the sniper rifle is not designed for close-quarters combat, and players frequently treat this class as a plain sniper role by not providing spotting. Assault is the standard role and provides very aggressive firepower. The Anti-tank role specializes against vehicles and tanks, but its main weapon is inaccurate against moving infantry. The Medic can heal himself and other players, but his sub-machine gun has less stopping power than the Assault's weapons. The Engineer can repair damaged vehicles and stationary weapons, deploy explosives that are highly effective against both enemy infantry and vehicles, and lay land mines that destroy enemy vehicles on contact.

Development

The game was originally proposed by DICE as a GameCube exclusive. Though Nintendo was satisfied with the proposal, negotiations went no further because Nintendo had no online strategy. The game was developed by a team of 14 people at Digital Illusions. Battlefield 1942 was built on the formula of the less well-known but successful Codename Eagle, a game set in an alternate-history World War I that featured single-player and multiplayer modes. The earlier Refractor 1 engine had more arcade-style physics and a less realistic focus than its successor, Refractor 2, which was used in Battlefield 2. A Macintosh-compatible version of Battlefield 1942 was made and released by Aspyr Media in mid-2004. An Xbox version of the game was announced in early 2001 but was cancelled almost two years later so Electronic Arts could work more closely on an expansion pack for the PC.

Expansions

Two expansion packs were released for Battlefield 1942: The Road to Rome (adding the Italian Front) and Secret Weapons of WWII, both adding various new gameplay modes, maps, and game concepts. The Road to Rome focuses on the Italian battles, allowing players to play as the Free French forces or the Royal Italian Army. Secret Weapons of WWII focuses on prototypical, experimental, and rarely used weapons and vehicles (such as jet packs), and added subfactions to the German and British armies: the German Elite Troops and the British Commandos. Accompanying each expansion were patches to the base game that fixed bugs and added extra content (such as the Battle of Britain map).
Battlefield 1942 Deluxe Edition includes the original game and The Road to Rome, and the Battlefield 1942: World War II Anthology added the Secret Weapons of WWII expansion pack. Battlefield 1942: The Complete Collection later added Battlefield Vietnam and the Battlefield Vietnam WWII Mod.

Reception

In the United States, Battlefield 1942 sold 680,000 copies and earned $27.1 million by August 2006. At the time, this led Edge to rank it as the country's 18th-best-selling computer game released since January 2000. Combined sales of all Battlefield computer games, including Battlefield 1942, had reached 2.7 million units in the United States by August 2006. In December 2002, the game received a "Gold" sales award from the Verband der Unterhaltungssoftware Deutschland (VUD), indicating sales of at least 100,000 units across Germany, Austria, and Switzerland. The game sold more than 3 million copies by July 2004.

The game received "generally favorable reviews", just one point shy of "universal acclaim", according to the review aggregation website Metacritic. In March 2010, Battlefield 1942 received the "Swedish game of the decade" award at the computer game gala hosted by Swedish Games Industry. Scott Osborne of GameSpot called it a "comic book version of WWII"; the publication later named it the best computer game of September 2002. Steve Butts of IGN praised the multiplayer but said that "the single-player game leaves much to be desired". PC Gamer US and Computer Games Magazine named Battlefield 1942 the best multiplayer computer game and best overall computer game of 2002; it tied with No One Lives Forever 2 for the latter award in Computer Games Magazine. It also won GameSpot's annual "Best Multiplayer Action Game on PC" and "Biggest Surprise on PC" awards, and was nominated in the publication's "Best Graphics (Technical) on PC" and "Game of the Year on PC" categories. PC Gamer US's editors hailed it as "the realization of our 'dream PC game' — multiplayer battles in which every interesting element of combat is playable by human teammates and opponents." The Academy of Interactive Arts & Sciences awarded Battlefield 1942 four honors at the 6th Annual Interactive Achievement Awards (now known as the D.I.C.E. Awards): "Game of the Year", "Computer Game of the Year", "Innovation in Computer Gaming", and "Outstanding Achievement in Online Gameplay"; it was also nominated for "Outstanding Achievement in Game Design".

Sequels

In March 2004, Battlefield Vietnam was released. In 2005 came Battlefield 2, set in the modern era, and in 2006 Battlefield 2142, set in the future. On 8 July 2009, Battlefield 1943 was released for Xbox Live Arcade, and one day later on the PlayStation Network. The Battlefield: Bad Company series was launched in 2008, followed by Battlefield 3 in October 2011 on EA Games' Origin network. Battlefield 4 was released in October 2013. Battlefield Hardline, a cops-and-robbers-style Battlefield title, launched on 17 March 2015. Battlefield 1, a World War I-based title, was released on 21 October 2016. Battlefield V was released worldwide for Microsoft Windows, PlayStation 4, and Xbox One on 20 November 2018.
This was the first time since Battlefield 1943 that the series returned to a World War II theater of operations, and the first game since Battlefield 1942 set outside the Pacific theater of World War II.

Mods

An October 2004 public release from EA noted the game's modding community. Like Half-Life and some other popular FPS games, Battlefield 1942 spawned a number of mods. Most did not progress very far and were abandoned without ever producing a public release. Some are very limited, offering only small gameplay changes or a different loading screen, while others are total conversions that modify content and gameplay extensively. A few mods became popular and are nearly games in their own right.

Early modifications of Battlefield 1942 were produced without an SDK. Later, a mod development kit, the Battlefield Mod Development Toolkit, was produced by EA to help the development of mods. With the release of the sequels Battlefield Vietnam and Battlefield 2, some mods released new versions or continued development on those games; Battlefield Vietnam uses an updated version of the Refractor 2 game engine. Some mods switched to other games, such as Söldner: Secret Wars or Half-Life 2, while others released standalone games after completing mod development for Battlefield 1942 (Eve of Destruction – REDUX and FinnWars).

Battlefield Interstate 1982 was mentioned in 1UP's "Free PC Games" article of December 2003.
Battle G.I. Joe was reviewed on About.com by Michael Klappenbach. The mod was also contacted by Hasbro over IP issues, as noted in Am I Mod or Not? (Nieborg, 2005).
Desert Combat, produced by Trauma Studios, won FilePlanet's Best Mod of 2003 award and received many other reviews and awards, including coverage in the March 2003 PC Magazine. PC Gamer described it as follows: "Desert Combat is set in the white-hot conflict zone of the Middle East and pits the United States against Iraq." Articles noted it was helped by the Iraq War, which increased traffic to approximately 15,000 page views per day, at times even between 20,000 and 70,000. Desert Combat was noted as having two mods of its own, DC Extended and Desert Combat Realism, in Am I Mod or Not? (Nieborg, 2005).
Eve of Destruction won PC Gamer's 2003 Mod of the Year. Dan Morris of PC Gamer noted in the March 2004 issue: "While Battlefield Vietnam was still a twinkle in its developers' eyes, this standout mod debuted to a rapturous reception from the Battlefield 1942 faithful."
Experience WWII was described in PC Gamer as making substantial changes for historical accuracy that directly affect gameplay.
FinnWars was featured in Pelit magazine in issue 9/2005 and in PC Pelaaja in 2007. FinnWars is set in the Winter and Continuation Wars between Finland and the Soviet Union, as well as the Lapland War between Finland and Nazi Germany.
Forgotten Hope, a 2003 mod that aimed at a high degree of historical accuracy, was noted for including over 250 new pieces of authentic equipment (at the time more than any other World War II-themed FPS). It was awarded the Macologist Mod of the Year Award by Inside Mac Games in 2006 after the mod was ported to the Mac. It was followed by its 2006 Battlefield 2 sequel, Forgotten Hope 2.
Galactic Conquest was singled out in Am I Mod or Not? (Nieborg, 2005) over its blatant use of LucasArts' Star Wars universe material. It was mentioned in Edge in April 2004 and reviewed on TechTV's X-Play show in 2004.
HydroRacers was reviewed in PC Zone in 2004 by Tony Lamb, and also in the Madison Courier in June 2004.
Siege was highlighted in a study by Utrecht University, both for its original concept and for its medieval-warfare theme (Am I Mod or Not?, Nieborg, 2005).
SilentHeroes won the PC ACTION Super Mod award in edition 07/2006 of the German gaming magazine PC ACTION. It was also featured on many Norwegian and Swedish media websites, including VG, Aftonbladet, and IDG.
Who Dares Wins was reviewed in the August 2005 UK edition of PC Gamer, and a copy of version 0.2 was distributed to readers on the magazine's cover DVD-ROM.

References

External links

2002 video games
Aspyr games
AIAS Game of the Year winners
Cancelled Xbox games
Electronic Arts games
Fiction set in 1942
Interactive Achievement Award winners
Multiplayer and single-player video games
Multiplayer online games
MacOS games
Video games about the United States Marine Corps
Video games developed in Sweden
Video games set in Belgium
Video games set in England
Video games set in Egypt
Video games set in France
Video games set in Germany
Video games set in Berlin
Video games set in the Netherlands
Video games set in Japan
Video games set in Libya
Video games set in Oceania
Video games set in the Philippines
Video games set in the Solomon Islands
Video games set in the Soviet Union
Video games set in the United States
Video games set in Russia
Video games set in Ukraine
Video games with expansion packs
War video games set in the United States
Windows games
World War II first-person shooters
Pacific War video games
2465976
https://en.wikipedia.org/wiki/Micro%20Power
Micro Power
Micro Power was a British company established in the early 1980s by former accountant Bob Simpson. The company was best known as a video game publisher, originally under the name Program Power. It also sold many types of computer hardware and software (both its own and third-party) through its Leeds "showroom" or via mail order.

Games

From 1980 to 1987 the company published a number of video games and other software for various home computers. The earliest programs were released for the Acorn Atom, but Micro Power is best remembered for its games for that machine's successor, Acorn's BBC Micro (all but two of its post-Atom games ran on that machine). A large selection of games that could be ported (and weren't considered "too old") were converted to the Acorn Electron after its release in 1983, and from 1984 most new games were released for both machines. A few were also ported to other 8-bit platforms, including the Commodore 64, Amstrad CPC, and ZX Spectrum, but these never achieved the success of the Acorn originals.

Most of these were basic single-screen games, typically arcade clones (see the list of notable games below). While mostly well received and popular at the time (especially on the Acorn platforms), by the mid-1980s video games were becoming increasingly complex. Simple early arcade-style games still sold well, but usually at a budget price. Micro Power itself released the Micro Power Magic compilations in 1986, each featuring ten of its games (titles that had previously sold at up to £7.95 each, some only two years earlier) for £7.95 (Micro Power Magic review, Electron User, issue 4.03, December 1986).

From 1985 onwards, Micro Power began to produce a few advanced games as opposed to a high quantity of simpler games. The first of these was the arcade adventure Castle Quest (BBC only) by Tony Sothcott, billed as "Probably the most challenging game ever devised for the BBC Micro". It was never converted to the Electron, because it used near full-screen scrolling in an 8-colour mode that was not possible on the more limited machine. This game was successful, and a sequel was started that became Doctor Who and the Mines of Terror (BBC, C64, CPC), a huge arcade adventure that required its own ROM chip to run on the BBC Micro. Another later release was the puzzle/platform game Imogen (BBC only, later ported to the Electron by Superior Software and more recently remade for PC) by Michael St. Aubyn, which was noted for its witty, original puzzles and cute high-resolution monochrome graphics.

These games took more money and time to produce and, with significantly fewer releases per year, contributed to the downfall of the company. The Doctor Who game in particular is often cited as crippling the company, with problems such as the added cost of producing the ROM chips for the BBC version and the unreleased (but heavily previewed and advertised) ZX Spectrum version, which would have required an add-on cartridge.

There were also two 32-bit games, Chess 3D and Zelanites the Onslaught (a Space Invaders clone), for the Acorn Archimedes, released under the Micro Power name in 1991.
It is unknown how these releases relate to the original company, as there had not been a Micro Power release for four years.

Notable earlier games include:

Adventure – a text adventure (Atom, BBC, Electron)
Alien Destroyers – a Space Invaders clone (BBC only)
Bandits at 3 O'Clock – a 2-player World War II dogfight (BBC, Electron)
Block Buster – a Q*bert clone (BBC only)
Bumble Bee – a Lady Bug clone (BBC, Electron, C64)
Cabman – an overhead-view taxi driving game (Spectrum only)
Cowboy Shootout – a Boot Hill clone (Atom, BBC, Spectrum)
Croaker – a Frogger clone (BBC, Electron)
Cybertron Mission – a Berzerk clone (BBC, Electron, C64)
Danger UXB – a Check Man clone (BBC, Electron)
Dune Rider – a Moon Patrol clone (BBC only)
Electron Invaders – a Space Invaders clone (Electron only)
Escape from Moonbase Alpha – a graphic adventure (BBC, Electron)
Felix and the Fruit Monsters – a Pac-Man-style overhead maze game (BBC, Electron)
Felix in the Factory – a platform game (BBC, Electron, C64, Memotech MTX)
Felix Meets the Evil Weevils – a platform game (BBC, Electron)
Frenzy – a Qix clone (BBC, Electron, C64)
Galactic Commander – a Lunar Lander clone (BBC, Electron)
Gauntlet – a Defender clone (BBC, Electron, CPC)
Ghouls – a platform game with Pac-Man-like characters (BBC, Electron, C64, CPC)
Hell Driver – an overhead-view driving game (BBC only)
Intergalactic Trader – a text-based space trading game (BBC, Electron)
Invasion Force – a Space Invaders clone (Atom only)
Jet Power Jack – a platform game (BBC, Electron, C64)
Killer Gorilla – a Donkey Kong clone (BBC, Electron, CPC)
Laser Command – a Missile Command clone (BBC only)
The Mine – a Dig Dug clone (BBC, Electron)
Mr. Ee! – a Mr. Do! clone (BBC only)
Moon Raider – a Scramble clone (BBC, Electron)
Nemesis – a Centipede clone (BBC only)
Plutonium Plunder – a Pengo-style overhead maze game (BBC only)
Positron – a fast-paced Space Invaders-style shoot 'em up (BBC, Electron)
Rubble Trouble – a Pengo-style overhead maze game (BBC, Electron)
Starfleet Encounter – a text-based strategy game for 2–8 players (BBC only)
Stock Car – an overhead-view racing game (BBC, Electron, C64)
Swag – a 2-player arcade game involving bank robbery (BBC, Electron)
Swoop – a Galaxian clone (BBC, Electron, C64)
Zarm – a Lunar Rescue clone (BBC only)

Educational / Utility Software

As well as games, Micro Power released a number of educational programs (covering subjects such as science and geography) as well as utility software such as the Draw art package (BBC, Electron), Basic Extensions, and the Constellation astronomy program (Atom, BBC, later ported to the Electron by Superior Software).

Hardware

Micro Power also released hardware such as the "Micro Power Add-On" for the ZX Spectrum, which added two joystick ports and 3-channel sound capability.

Leeds Store

Micro Power had a store on the corner of North Street and Meanwood Road in Leeds. It primarily sold Acorn hardware and software, but also sold software for other computers, including the C64, ZX Spectrum, and QL. In the 1990s Micro Power downsized and moved further up Meanwood Road to reduce outgoings. The original Micro Power sign still stands at the back of the first premises.

References

Working at Micro Power / Program Power

Defunct video game companies of the United Kingdom
27803618
https://en.wikipedia.org/wiki/Gary%20Hogan%20Field
Gary Hogan Field
Gary Hogan Field is a baseball venue located in Little Rock, Arkansas, United States. It has been home to the Little Rock Trojans college baseball team of the Division I Sun Belt Conference since 1978, and is also the home of the Arkansas Baptist College Buffaloes junior college baseball team of Region 2 of the National Junior College Athletic Association. Formerly known as Curran Conway Field, the venue has a capacity of 2,550.

History

When the Trojans began using the facility in 1978, the field was in poor condition, which forced the program to play home games at other fields in Little Rock during the early 1980s. When coach Gary Hogan arrived in 1986, he described Curran Conway Field as "a rock pit and dust bowl". Hogan secured donations from both alumni and the surrounding community to make numerous improvements to the field, making it playable for a Division I team.

Name

The field is named for former Little Rock baseball coach Gary Hogan, who coached for 11 seasons, from 1986 to 1996, recording 276 victories in that span. His career victory total stands as the record for UALR baseball coaches.

Renovations and features

Following Hogan's arrival in 1986, the field underwent many renovations. After securing $500,000 in donations, Hogan installed a turf infield, a natural grass outfield, an outfield wall, dugouts, and an indoor practice facility over the course of his 11-year tenure. In 1998, a new turf infield, warning track, and outfield drainage system were installed at Conway Field. In 2001, the Kris Wheeler Complex was constructed; named after a Trojans outfielder who played from 1998 to 2002, the facility included a press box, concession stands, and a new gated entrance. In 2003, a new scoreboard was installed over the right-field fence.

Prior to the 2004 season, the UALR athletic department announced an anonymous $1.6 million donation for improvements to the baseball field. At the same time, it was announced that the field would be renamed Gary Hogan Field in honor of the former Trojans coach. Numerous improvements followed in the subsequent years: a new AstroTurf infield, lighting system, sound system, and indoor workout facility were constructed, and chairbacks were installed in 200 of the 550 seats located behind home plate. In 2011, the infield's AstroTurf surface was replaced with polyurethane turf, a Trojan logo was added behind home plate, and the surface of the warning track was changed from clay to turf.

See also

List of NCAA Division I baseball venues

References

External links

2011 polyurethane turf installation photo gallery at ualrtrojans.com

Little Rock Trojans baseball
College baseball venues in the United States
Baseball venues in Arkansas
Buildings and structures in Little Rock, Arkansas
Tourist attractions in Little Rock, Arkansas
Arkansas Baptist College
34997949
https://en.wikipedia.org/wiki/Zenprise
Zenprise
Zenprise provides mobile device management (MDM) solutions to enterprises. The company's solutions are available in both on-premises and cloud-based (SaaS) versions. Zenprise MobileManager and Zencloud allow companies and government agencies to manage and secure mobile devices running iOS, Android, BlackBerry, Windows Mobile, and Symbian.

History

Zenprise was co-founded in 2003 by Waheed Qureshi and Jayaram Bhat. Initially the company developed remote support and diagnostics software to help companies manage large-scale BlackBerry deployments. As the market evolved toward enterprises supporting a diverse set of smartphones and tablets, Zenprise shifted its product development efforts to support the major mobile platforms, including iPhone and iPad, Android, BlackBerry, Symbian, and Windows Mobile.

In 2010, Zenprise acquired the French mobile device management vendor Sparus Software and integrated Sparus's software into its core product offering. In July 2011, Zenprise launched Zencloud, the cloud version of its mobile device management software, hosted in globally redundant, SAS 70 Type II-, FISMA Moderate-compliant, and Federal Cloud-certified facilities. In September 2011, Zenprise developed and launched a free application on Splunkbase called Zenprise Mobile Security Intelligence, which integrates with Zenprise mobile device management software and provides visibility into mobile users and mobile device traffic. Also in September 2011, Zenprise launched its mobile data leakage prevention (DLP) solution, which provides a secure document container for mobile devices as well as policy-based control over content. In October 2011, Zenprise raised $30 million from investors, including Bay Partners, Ignition Partners, Mayfield Fund, Rembrandt Venture Partners, Shasta Ventures, and newly added venture firm Greylock Partners.

In December 2012, Citrix announced its intent to acquire Zenprise. The acquisition closed in January 2013.

Products and Features

Zenprise MobileManager: an on-premises mobile device management solution focused on mobile device lifecycle management and mobile security across devices, applications, the network, and data.
Zencloud: the cloud version of Zenprise's mobile device management software. Zencloud can be deployed as a standalone cloud solution or as a hybrid solution with policy enforcement at the mobile gateway.
Services, support, and training: Zenprise offers 24x7x365 technical support on five continents and in ten languages, instructor-led in-person and web-based training, and a number of professional services offerings tailored to mobile device management deployments.

Customers and Partners

Zenprise has more than 1,000 global enterprise and government customers, including Monsanto, the Boston Red Sox, Cegedim, Shook, Hardy & Bacon, Cyberonics, PerkinElmer, Arsenal, ScentAir, Ross Stores, Jelly Belly, Carnival Corporation & plc, and Knight Transportation. Zenprise has a network of channel and technology partners, including Trace3, Fishnet, Accuvant, iSecure, IT21, Radpoint, RIM, Microsoft, Cisco, F5 Networks, and Palo Alto Networks.
References

Zenprise positioned in Leaders Quadrant of Gartner's Magic Quadrant for Mobile Device Management, May 2012
Gartner Critical Capabilities for Mobile Device Management, July 29, 2011
Forrester Market Overview: On-Premises Mobile Device Management Solutions, Q3 2011, January 3, 2012
Forrester Market Overview: Cloud-Hosted Mobile Device Management Solutions and Managed Services, January 3, 2012
Zenprise Named Top New Security Vendor by HP at Protect 2011, September 20, 2011
Zenprise Raises $30 Million; Investment Led by Greylock Partners, October 11, 2011
Zenprise Achieves Tremendous Growth in 2011, January 24, 2012

Software companies established in 2003
Citrix Systems
Mobile device management
Companies based in Redwood City, California
2013 mergers and acquisitions
Software companies based in the San Francisco Bay Area
Software companies of the United States
4020018
https://en.wikipedia.org/wiki/CAP%20Scientific
CAP Scientific
CAP Scientific Ltd was a British defence software company and part of CAP (Computer Analysts and Programmers) Group plc. In 1988, CAP Group merged with the French firm Sema-Metra SA to form Sema Group plc. In 1991, Sema Group put most of its defence operations (CAP Scientific Ltd and YARD Ltd) into a joint venture with British Aerospace called BAeSEMA, which British Aerospace bought out in 1998. Parts of the former CAP Scientific are now BAE Systems Insyte.

Formation of CAP Scientific

CAP Scientific was formed in 1979 by four colleagues who had previously worked at Scicon, a BP subsidiary. Seeking to start a specialist software company for defence applications in the United Kingdom, they approached CAP-CPP, a commercial software house, to back a start-up operation.

By 1985, CAP Scientific had established significant work in several areas. It had a strong naval business based on supporting the Admiralty Research Establishment; this Maritime Technology business applied the technologies fostered in research contracts to major development programmes. CAP worked with Vosper Thornycroft Controls to develop machinery control and surveillance systems for the Royal Navy's new generation of ships and submarines. An associated Naval Command Systems business had built a strong Action Information Organisation design team, working with both the surface and submarine fleets, and a Land Air Systems business also took research and development contracts and was prime contractor for the British Army's Brigade and Battlegroup Trainer (BBGT). The non-defence scientific sector was addressed by setting up a Scientific Systems business with expertise in energy generation and conservation. In that year, CAP Scientific established the Centre for Operational Research and Defence Analysis (CORDA) as an independent unit providing impartial assistance for investment appraisal.

At that time military computer systems were purpose-built by major contractors, and CAP Scientific's strategy was to form joint ventures with companies that had market access but could not afford the investment to move into the new technology of microprocessors and distributed systems.

The Falklands breakthrough, and DCG

In its early years, CAP Scientific took time to establish itself, but 1982 brought a breakthrough. While the UK was mustering its naval taskforce for the Falklands War, it became clear that for some purposes the Royal Navy needed more computational power. An Urgent Operational Requirement was raised to provide improved fire-control solutions for RN Sub-Harpoon. Working in frantic haste, CAP's engineers were able to add an experimental Digital Equipment Corporation PDP-8 installation to a Royal Navy submarine before she sailed for the South Atlantic. This was one of the first examples of commercial off-the-shelf equipment being employed for military use. The success of this experimental deployment led to the development of a standard RN submarine fit, DCG, which allowed extra processing power to be added to submarine command systems.

SMCS and Gresham-CAP

By its prompt response to the needs of the Falklands War, CAP Scientific demonstrated its ability to supply naval computer technology. With the decision to build a new class of submarines to carry the Trident missile system, the UK Ministry of Defence proceeded for the first time to run an open competition for the command system.
In 1983, CAP Scientific teamed with Gresham-Lion, a British manufacturer of torpedo launch control equipment and now part of Ultra Electronics plc, to form a special-purpose company, Gresham-CAP Ltd, to bid for the system. Up to that point, all RN ships and submarines had command systems built by Ferranti using custom-built electronics. Gresham-CAP offered a novel distributed processing system based on commercial off-the-shelf components and utilising a modular software architecture largely written in the Ada programming language. The Gresham-CAP consortium won the bid, and its solution, known as the Submarine Command System (SMCS), became the basis for subsequent products from the company. The choice of Intel 80386 processors and Multibus, when many competing chips were available and the PC had only recently reached the market, showed foresight, as the basic architecture remains in service today on RN submarines. (The choice of an array of INMOS Transputer chips to process sonar tracking data was less successful: while they did the job, the lack of long-term support and of a future product line meant they were phased out once general-purpose processors were able to fulfil the role.)

The impact of this still-young company displacing one of the great names of British electronics in the Royal Navy shocked the industry. The competition can be seen as one of the first open contests in modern British defence procurement, following a long post-war period of "preferred contractor" policies.

The founders of CAP Scientific sold their complete shareholding to CAP-CPP, which subsequently listed on the London Stock Exchange as CAP Group plc. In June 1986, the Group acquired YARD (Yarrow-Admiralty Research Department) Ltd, a marine engineering consultancy based in Glasgow and formerly part of Yarrow Shipbuilders.

DNA (SSCS) and Merger

The Falklands War prompted a further competition in British naval equipment supply when analysis of ship losses showed that improvements were necessary in surface-ship combat systems. A contract for the command system for the navy's new Type 23 frigates was cancelled and put out to competition and, after a long campaign, was awarded to the CAP and Gresham consortium, teamed with Racal Electronics. The consortium developed the architecture of SMCS into a derivative distributed system known internally as the Surface Ship Command System (SSCS). By now Gresham-Lion was under Dowty ownership, and CAP Group had merged with the French company SEMA-METRA SA to form Sema Group plc.

The Type 23 command system proved to be a step too far for Sema Scientific, as it was now called. The enormous fixed-price contract overran, causing problems for both Sema and Dowty. Dowty was taken over by TI Group, which sold its interest in Dowty-Sema back to Sema Group for £1. Sema Group invited British Aerospace in as a co-investor in the business, and the activities that once formed CAP Scientific, Gresham-CAP, and YARD, together with some BAe interests, were merged in 1991 into a new entity, BAeSEMA. Ultimately, BAe purchased Sema Group's interest in BAeSEMA. Ironically, with the BAe/Marconi Electronic Systems merger that formed BAE Systems in 1999, the CAP Scientific business found itself under the same parent as its erstwhile competitor Ferranti.
References

Defence companies of the United Kingdom
Defunct companies of the United Kingdom
Software companies of the United Kingdom
Software companies established in 1979
Software companies disestablished in 1988
British companies established in 1979
British companies disestablished in 1988
53708393
https://en.wikipedia.org/wiki/Nenad%20Medvidovi%C4%87
Nenad Medvidović
Nenad Medvidović is a Professor of Computer Science and Informatics at the University of Southern California in Los Angeles, California. He is a Fellow of the IEEE and an ACM Distinguished Member. He was chair of ACM SIGSOFT and is co-author of Software Architecture: Foundations, Theory, and Practice (2009).

In 2008, he received the Most Influential Paper Award for the paper "Architecture-Based Runtime Software Evolution", published at the ACM/IEEE International Conference on Software Engineering in 1998. In 2020, he received the Most Influential Paper Award for the paper "An architectural style for solving computationally intensive problems on large networks", published at the ACM/IEEE Software Engineering for Adaptive and Self-Managing Systems symposium in 2007. In 2017, he received an IEEE International Conference on Software Architecture Best Paper Award for his paper "Continuous Analysis of Collaborative Design".

Bibliography

Software Architecture: Foundations, Theory, and Practice. Wiley, 2009.

References

External links

Website

Living people
Computer scientists
University of Southern California faculty
American computer programmers
Computer science writers
Year of birth missing (living people)
21284823
https://en.wikipedia.org/wiki/Plug%20computer
Plug computer
A plug computer is an external device, often configured for use in the home or office as a compact computer. The name is derived from the small form factor of such devices, which are often enclosed in an AC power plug or AC adapter.

Plug computers consist of a high-performance, low-power system-on-a-chip processor with several I/O hardware ports (USB ports, Ethernet connectors, etc.). Most versions have no provision for connecting a display and are best suited to running media servers, back-up services, or file sharing and remote access functions, acting as a bridge between in-home protocols (such as Digital Living Network Alliance (DLNA) and Server Message Block (SMB)) and cloud-based services. There are, however, plug computer offerings with analog VGA monitor and/or HDMI connectors, which, along with multiple USB ports, permit the use of a display, keyboard, and mouse, making them full-fledged, low-power alternatives to desktop and laptop computers. They typically run one of a number of Linux distributions. Plug computers typically consume little power and are inexpensive.

History

A number of devices of this type began to appear around the 2009 Consumer Electronics Show:

On January 6, 2009, CTERA Networks launched a device called CloudPlug, which provides online backup at local-disk speeds and overlays a file-sharing service; it also transforms any external USB hard drive into a network-attached storage device.
On January 7, 2009, Cloud Engines unveiled the Pogoplug network access server.
On January 8, 2009, Axentra announced availability of its HipServ platform.
On February 23, 2009, Marvell Technology Group announced its plans to build a mini-industry around plug computers.
On August 19, 2009, CodeLathe announced availability of its TonidoPlug network access server.
On November 13, 2009, QuadAxis launched its plug computing device product line and development platform, featuring the QuadPlug and QuadPC, running QuadMix, a modified Linux.
On January 5, 2010, Iomega announced its iConnect network access server.
On January 7, 2010, Pbxnsip launched its plug computing device, the sipJack, running pbxnsip, an IP communications platform.

See also

Classes of computers
Computer appliance
CuBox, a plug computer
GuruPlug, a plug computer
DreamPlug, a plug computer
FreedomBox, an operating system
Personal web server
Print server
Raspberry Pi, a single-board computer
SheevaPlug, a plug computer
Stick PC, a computer attached to and powered by a USB or HDMI plug

References

External links

Cloud computing
Classes of computers
Cloud clients
Home servers
Server appliance
8694325
https://en.wikipedia.org/wiki/NEC%20V60
NEC V60
The NEC V60 is a CISC microprocessor manufactured by NEC starting in 1986. Several improved versions with the same instruction set architecture (ISA) followed: the V70 in 1987, and the V80 and the AFPP co-processor in 1989. They were succeeded by the V800 product families, which are currently produced by Renesas. The V60 family includes a floating-point unit (FPU), a memory management unit (MMU), and real-time operating system (RTOS) support for both Unix-based user-application-oriented systems and I-TRON-based hardware-control-oriented embedded systems. The processors can be used in a multi-CPU lockstep fault-tolerant mechanism named FRM. Development tools included the Ada-certified MV-4000 system and an in-circuit emulator (ICE). The V60/V70/V80's applications covered a wide area, including circuit-switching telephone exchanges, minicomputers, aerospace guidance systems, word processors, industrial computers, and various arcade game systems.

Introduction

The V60 was the first 32-bit general-purpose microprocessor commercially available in Japan. Based on a relatively traditional design for the period, the V60 was a radical departure from NEC's previous 16-bit V-series processors, the V20 through V50, which were based on the Intel 8086 model, although the V60 retained the ability to emulate the V20/V30. According to NEC's documentation, this architectural change was driven by the increasing demands for, and the diversity of, high-level programming languages. Such trends called for a processor with both improved performance, achieved by doubling the bus width to 32 bits, and greater flexibility, facilitated by a large number of general-purpose registers: features commonly associated with RISC chips. At the time, a transition from CISC to RISC seemed to bring many benefits for emerging markets. Today, RISC chips are common, and CISC designs, such as Intel's x86 processors from the 80486 onward, which have been mainstream for several decades, internally adopt RISC features in their microarchitectures. According to Pat Gelsinger, binary backward compatibility for legacy software is more important than changing the ISA.

Overview

Instruction set

The V60 (μPD70616) retained a CISC architecture. Its manual describes the architecture as having "features of high-end mainframe and supercomputers", with a fully orthogonal instruction set that includes non-uniform-length instructions, memory-to-memory operations including string manipulation, and complex operand-addressing schemes.

Family

The V60 operates as a 32-bit processor internally while externally providing a 16-bit data bus and a 24-bit address bus. In addition, the V60 has 32 32-bit general-purpose registers. Its basic architecture is used in several variants: the V70 (μPD70632), released in 1987, provides full 32-bit external buses, and the V80 (μPD70832), launched in 1989, is the culmination of the series, with on-chip caches, a branch predictor, and less reliance on microcode for complex operations.

Software

The operating systems developed for the V60–V80 series are generally oriented toward real-time operation. Several OSs were ported to the series, including real-time versions of Unix and I-TRON. Because the V60/V70 was used in various Japanese arcade games, its instruction set architecture is emulated in the MAME CPU simulator; the latest open-source code is available from the project's GitHub repository.
FRM

All three processors provide FRM (Functional Redundancy Monitoring), a synchronous multiple-module lockstep mechanism that enables fault-tolerant computer systems. It requires multiple devices of the same model, one of which operates in "master mode" while the other devices listen to the master in "checker mode". If two or more devices simultaneously report differing results via their "fault output" pins, a majority-voting decision can be taken by external circuits. In addition, a recovery method for the mismatched instruction, either "roll-back by retry" or "roll-forward by exception", can be selected via an external pin.
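A hedged sketch of the external majority vote in Python (illustrative only: real FRM compares device outputs cycle by cycle in hardware, and the fault and recovery signalling is richer than a single returned value):

```python
from collections import Counter

def frm_vote(outputs: list[int]) -> int:
    """Majority-vote the outputs of lockstep master/checker devices.

    Models only the voting step performed by external circuits when
    the devices' fault-output pins flag a mismatch.
    """
    value, votes = Counter(outputs).most_common(1)[0]
    if votes <= len(outputs) // 2:
        raise RuntimeError("no majority: unrecoverable fault")
    return value

# One device (the checker at index 2) suffers a transient fault;
# the voter still recovers the correct result from the majority.
print(hex(frm_vote([0xCAFE, 0xCAFE, 0xBEEF])))  # -> 0xcafe
```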
V60

Work on the V60 began in 1982 with about 250 engineers under the leadership of Yoichi Yano, and the processor debuted in February 1986. It had a six-stage pipeline, a built-in memory-management unit, and floating-point arithmetic. It was manufactured using a two-layer aluminum-metal CMOS process under a 1.5 μm design rule, implementing 375,000 transistors on a die. It operates at 5 V and was initially packaged in a 68-pin PGA. The first version ran at 16 MHz and attained 3.5 MIPS. Its sample price at launch was set at ¥100,000 ($588.23). It entered full-scale production in August 1986.

Sega employed this processor in most of its arcade systems of the early 1990s; both the Sega System 32 and the Sega Model 1 architectures used the V60 as their main CPU. (The latter used the lower-cost μPD70615 variant, which does not implement V20/V30 emulation or FRM.) The V60 was also used as the main CPU in the SSV arcade architecture, so named because it was developed jointly by Seta, Sammy, and Visco. Sega originally considered using a 16 MHz V60 as the basis for its Sega Saturn console, but after receiving word that the PlayStation employed a 33.8 MHz MIPS R3000A processor, it chose the dual-SH-2 design for the production model instead.

In 1988, NEC released a kit called PS98-145-HMW for Unix enthusiasts. The kit contained a V60 processor board that could be plugged into selected models of the PC-9800 computer series, together with a distribution of NEC's UNIX System V port, PC-UX/V Rel 2.0 (V60), on fifteen 8-inch floppy disks. The suggested retail price for this kit was ¥450,000.

NEC group companies themselves made intensive use of the V60. NEC's telephone circuit switches (exchanges), one of the processor's first intended applications, used the V60. In 1991, NEC expanded its Bungou Mini (文豪ミニ) word processor product line with the 5SX, 7SX, and 7SD series, which used the V60 for fast outline-font processing while the main system processor was a 16 MHz NEC V33. In addition, microcode variants of the V60 were employed in NEC's MS-4100 minicomputer series, which was the fastest in Japan at that time.

V70

The V70 (μPD70632) improved on the V60 by widening the external buses to 32 bits, equal to the internal buses. It was also manufactured in a 1.5 μm two-metal-layer process. Its die carried 385,000 transistors and was packaged in a 132-pin ceramic PGA. Its MMU supported demand paging, and its floating-point unit was IEEE 754 compliant. The 20 MHz version attained a peak performance of 6.6 MIPS and was priced, at launch in August 1987, at ¥100,000 ($719.42). The initial production capacity was 20,000 units per month. A later report describes it as fabricated in a 1.2 μm CMOS process.

The V70 had a two-cycle, non-pipelined (T1-T2) external bus, whereas the V60's bus operated in 3 or 4 cycles (T1-T3/T4); the internal units of both were, of course, pipelined. The V70 was used by Sega in its System Multi 32 and by Jaleco in its Mega System 32.

JAXA embedded a variant of the V70, running the I-TRON RX616 operating system, in the guidance control computer of the H-IIA carrier rockets, in satellites such as Akatsuki (the Venus Climate Orbiter), and in the Kibo International Space Station (ISS) module. The H-IIA launch vehicles were deployed domestically in Japan, although their payloads included satellites from foreign countries. As described in JAXA's LSI (MPU/ASIC) roadmap, this V70 variant is designated "32bit MPU (H32/V70)"; its development, probably including the testing (QT) phase, ran "from the middle of 1980s to early 1990s". The variant was used until its replacement in 2013 by the HR5000, a 64-bit, 25 MHz microprocessor based on the MIPS64-5Kf architecture and fabricated by HIREC, whose development was completed around 2011. "Space Environment Data Acquisition" for the V70 was carried out at the Kibo exposed facility.

V80

The V80 (μPD70832) was launched in the spring of 1989. By incorporating on-chip caches and a branch predictor, it was declared "NEC's 486" by Computer Business Review. The performance of the V80 was two to four times that of the V70, depending on the application. For example, compared with the V70, the V80 had a 32-bit hardware multiplier that reduced the number of cycles required to complete an integer-multiplication machine instruction from 23 to 9. (For more detailed differences, see the hardware architecture section below.)

The V80 was manufactured in a 0.8 μm CMOS process, implementing 980,000 transistors on a die. It was packaged in a 280-pin PGA and operated at 25 and 33 MHz with claimed peak performances of 12.5 and 16.5 MIPS, respectively. The V80 had separate 1 KB on-die caches for instructions and data, and a 64-entry branch predictor to which a 5% performance gain was attributed. The launch prices of the V80 were cited as equivalent to $1,200 for the 33 MHz model and $960 for the 25 MHz model. A 45 MHz model was supposedly scheduled for 1990, but it never materialized.

The V80, with the μPD72691 co-FPP and μPD71101 simple peripheral chips, was used in an industrial computer running the RX-UX 832 real-time UNIX operating system and an X11R4-based windowing system.

AFPP (co-FPP)

The Advanced Floating Point Processor (AFPP, μPD72691) is a co-processor for floating-point arithmetic operations. The V60/V70/V80 themselves can perform floating-point arithmetic, but only slowly, because they lack hardware dedicated to such operations. In 1989, to compensate for this fairly weak floating-point performance, NEC launched this 80-bit floating-point co-processor supporting 32-bit single-precision, 64-bit double-precision, and 80-bit extended-precision operations according to the IEEE 754 specification. The chip achieved 6.7 MFLOPS doing vector-matrix multiplication while operating at 20 MHz. It was fabricated using a 1.2 μm double-metal-layer CMOS process, resulting in 433,000 transistors on a die, and was packaged in a 68-pin PGA. The co-processor connected to a V80 via a dedicated bus, but to a V60 or V70 via the shared main bus, which constrained peak performance.

Hardware architecture

The V60/V70/V80 share a basic architecture.
Hardware architecture
The V60/V70/V80 share a basic architecture. They have thirty-two 32-bit general-purpose registers, the last three of which are commonly used as the stack pointer, frame pointer, and argument pointer, matching the calling conventions of high-level-language compilers well. The V60 and V70 have 119 machine instructions, a number extended slightly to 123 instructions for the V80. The instructions are of non-uniform length, between one and 22 bytes, and take two operands, both of which can be addresses in main memory. After studying the V60's reference manual, Paul Vixie described it as "a very VAX-ish arch, with a V20/V30 emulation mode (which[...] means it can run Intel 8086/8088 software)".
The V60–V80 have a built-in memory management unit (MMU) that divides a 4-GB virtual address space into four 1-GB sections, each section being further divided into 1,024 1-MB areas, and each area being composed of 256 4-KB pages. On the V60/V70, four registers (ATBR0 to ATBR3) store section pointers, but the area table entries (ATEs) and page table entries (PTEs) are stored in off-chip RAM. The V80 merged the ATEs into the ATBR registers—both are on-chip, with only the PTEs stored in external RAM—allowing faster handling of translation lookaside buffer (TLB) misses by eliminating one memory read. The TLBs on the V60/V70 are 16-entry fully associative, with replacement done by microcode. The V80, in contrast, has a 64-entry 2-way set-associative TLB with replacement done in hardware. A TLB replacement took 58 cycles on the V70 and disrupted the pipelined execution of other instructions. On the V80, a TLB replacement takes only 6 or 11 cycles, depending on whether the page is in the same area; pipeline disruption no longer occurs on the V80 because of the separate TLB replacement hardware unit, which operates in parallel with the rest of the processor.
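The section/area/page scheme described above implies a fixed decomposition of every 32-bit virtual address: 2 bits select one of the four sections (and thus one of ATBR0–ATBR3), 10 bits select one of the 1,024 areas, 8 bits select one of the 256 pages, and 12 bits give the byte offset within a 4 KB page. A minimal sketch of that arithmetic, with field names of our own choosing rather than the manuals', might look like this:

#include <stdint.h>
#include <stdio.h>

/* 4 sections x 1,024 areas x 256 pages x 4 KB = 4 GB of virtual space. */
typedef struct {
    unsigned section; /* 2 bits  -> selects ATBR0..ATBR3           */
    unsigned area;    /* 10 bits -> indexes the area table (ATE)   */
    unsigned page;    /* 8 bits  -> indexes the page table (PTE)   */
    unsigned offset;  /* 12 bits -> byte offset within a 4 KB page */
} v60_vaddr;

static v60_vaddr decompose(uint32_t va)
{
    v60_vaddr d;
    d.section = (va >> 30) & 0x3;    /* bits 31-30 */
    d.area    = (va >> 20) & 0x3ff;  /* bits 29-20 */
    d.page    = (va >> 12) & 0xff;   /* bits 19-12 */
    d.offset  =  va        & 0xfff;  /* bits 11-0  */
    return d;
}

int main(void)
{
    v60_vaddr d = decompose(0x8123A456u);
    printf("section=%u area=%u page=%u offset=0x%03x\n",
           d.section, d.area, d.page, d.offset);
    return 0;
}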
All three processors use the same protection mechanism, with 4 protection levels set via a program status word, Ring 0 being the privileged level that can access a special set of registers on the processors. All three models support a triple-modular-redundancy configuration, with three CPUs used in a Byzantine fault–tolerance scheme with bus-freeze, instruction-retry, and chip-replacement signals. The V80 added parity signals to its data and address buses.
String operations were implemented in microcode on the V60/V70; on the V80 they were aided by a hardware data control unit running at full bus speed, which made string operations about five times faster than on the V60/V70.
Floating-point operations are largely implemented in microcode across the processor family and are thus fairly slow. On the V60/V70, 32-bit floating-point operations take 120/116/137 cycles for addition/multiplication/division, while the corresponding 64-bit operations take 178/270/590 cycles. The V80 has some limited hardware assist for phases of floating-point operations—e.g. decomposition into sign, exponent, and mantissa—and its floating-point unit was thus claimed to be up to three times as effective as that of the V70, with 32-bit floating-point operations taking 36/44/74 cycles and 64-bit operations taking 75/110/533 cycles (addition/multiplication/division).
Operating systems
Unix (non-real-time and real-time)
NEC ported several variants of the Unix operating system to its V60/V70/V80 processors for user-application-oriented systems, including real-time ones. The first flavor of NEC's UNIX System V port for the V60 was called PC-UX/V Rel 2.0 (V60). (Also refer to the external link photos below.) NEC developed a Unix variant with a focus on real-time operation to run on the V60/V70/V80. Called Real-time UNIX RX-UX 832, it has a double-layered kernel structure, with all task scheduling handled by the real-time kernel. A multiprocessor version of RX-UX 832 was also developed, named MUSTARD (Multiprocessor Unix for Embedded Real-Time Systems). The MUSTARD-powered computer prototype uses eight V70 processors. It utilizes the FRM function and can configure, and reconfigure, the master and checker roles on request.
I-TRON (real-time)
For hardware-control-oriented embedded systems, NEC implemented an I-TRON-based real-time operating system, named RX616, for the V60/V70. The 32-bit RX616 was a direct continuation of the 16-bit RX116, which served the V20–V50.
FlexOS (real-time)
In 1987, Digital Research, Inc. also announced that it was planning to port FlexOS to the V60 and V70.
CP/M and DOS (legacy 16-bit)
The V60 could also run CP/M and DOS programs (ported from the V20–V50 series) using the V20/V30 emulation mode. According to a 1991 article in InfoWorld, Digital Research was working on a version of Concurrent DOS for the V60 at some point, but this was never released, as the V60/V70 processors were not imported to the US for use in PC clones.
Development tools
C/C++ cross-compilers
As part of its development tool kit and integrated development environment (IDE), NEC had its own C compiler, the PKG70616 "Software Generation tool package for V60/V70". In addition, GHS (Green Hills Software) made its native-mode C compiler (MULTI), and MetaWare, Inc. (currently Synopsys, via ARC International) made one for the V20/V30 (Intel 8086) emulation mode, called High C/C++. Cygnus Solutions (currently Red Hat) also ported GCC as part of an enhanced GNU compiler system (EGCS) fork, but that port does not appear to have been made public. The processor-specific directory necv70 is still kept alive in the newlib C-language libraries (libc.a and libm.a) by Red Hat. Recent maintenance seems to be done on Sourceware.org, and the latest source code is available from its git repository.
MV-4100 Ada 83–certified system
The Ada 83–certified "platform system" was named MV-4000, certified as "MV4000". This certification was done with a target system that utilized the real-time UNIX RX-UX 832 OS running on a VMEbus (IEEE 1014)–based system with a V70 processor board plugged in. The host of the cross-compiler was an NEC Engineering Work Station EWS 4800, whose host OS, EWS-US/V, was also UNIX System V–based. The processor received Ada-83 validation from AETECH, Inc. (Note: In accordance with the Ada Validation Procedures (Version 5.0), certificates will no longer be issued for Ada 83 compilers. Testing may be performed by an Ada Conformity Assessment Laboratory (ACAL) for specific procurement requirements, and the ACAA will issue a letter affirming such testing, but no certificates will be issued. All validation certificates ever issued for testing under Version 1.11 of the ACVC test suite expired on 31 March 1998.)
Evaluation board kits
NEC released some plug-in evaluation board kits for the V60/V70.
In-circuit emulator
On-chip software debug support with the IE-V60
NEC based its own full (non-ROM and non-JTAG) probe-based in-circuit emulator, the IE-V60, on the V60, because the V60/V70 chips themselves had emulator-chip capabilities. The IE-V60 was the first in-circuit emulator for the V60 manufactured by NEC. It also had a PROM programmer function (Section 9.4, p. 205).
NEC described it as a "user friendly software debug function". The chips provide various trapping exceptions, such as traps on data reads (or writes) at a user-specified address, and two breakpoints can be set simultaneously (Section 9).
External bus status pins
The external bus system indicates its bus status using 3 status pins, which provide three bits to signal such conditions as first instruction fetch after a branch, continuous instruction fetch, TLB data access, single data access, and sequential data access (Section 6.1, p. 114).
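A hypothetical decoder for those three status bits might look like the following. The numeric encodings here are invented purely for illustration; the authoritative pin encodings are those given in the NEC datasheet (Section 6.1).

/* Illustrative decoder for the 3-bit V60/V70 bus-status code.
 * The enum values below are NOT the real encodings. */
typedef enum {
    BST_BRANCH_FETCH = 0, /* first instruction fetch after a branch */
    BST_SEQ_FETCH    = 1, /* continuous instruction fetch           */
    BST_TLB_ACCESS   = 2, /* TLB (address-translation) data access  */
    BST_SINGLE_DATA  = 3, /* single data access                     */
    BST_SEQ_DATA     = 4  /* sequential data access                 */
} bus_status;

static const char *bus_status_name(unsigned pins) /* low 3 bits used */
{
    switch ((bus_status)(pins & 0x7)) {
    case BST_BRANCH_FETCH: return "instruction fetch after branch";
    case BST_SEQ_FETCH:    return "continuous instruction fetch";
    case BST_TLB_ACCESS:   return "TLB data access";
    case BST_SINGLE_DATA:  return "single data access";
    case BST_SEQ_DATA:     return "sequential data access";
    default:               return "reserved/unknown";
    }
}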
Debugging with V80
These software and hardware debugging functions were also built into the V80. However, the V80 did not have an in-circuit emulator, possibly because the availability of software such as the real-time UNIX RX-UX 832 and the real-time I-TRON RX616 rendered such a function unnecessary. Once Unix boots up, there is no need for an in-circuit emulator for developing either device drivers or application software; what is needed is a C compiler, a cross-compiler, and a screen debugger—such as GDB-Tk—that works with the target device.
HP 64758
Hewlett Packard (currently Keysight) offered probing-pod-based in-circuit emulation hardware for the V70, built on their HP 64700 Series systems (the successor to the HP 64000 Series), specifically the HP 64758. It enables a trace function like that of a logic analyzer. This test equipment also displays disassembled code automatically, together with the trace data and without needing an object file, and displays high-level-language source code when the source and object files are provided and were compiled in DWARF format. An interface for the V60 (10339G) was also in the catalog, but the long probing-pod cable required "special grade qualified" devices, i.e. the high-speed grade V70.
(The catalog listed the HP 64758 main units, sub-units, and hosted interfaces, along with software and hardware options.)
Failings
Strategic failure of the V80 microarchitecture
In its development phase, the V80 was expected to have the same performance as the Intel 80486, but the two ended up with many different features. The internal execution of each V80 instruction needed at least two cycles, while the i486 required one. The internal pipeline of the V80 appears to have been buffered and asynchronous, while that of the i486 was synchronous; in other words, the internal microarchitecture of the V80 was CISC, while that of the i486 was RISC. Both ISAs allowed long, non-uniform CISC instructions, but the i486 had a wider, 128-bit internal cache memory bus, while the V80's was 32 bits wide. This difference can be seen in their die photos. The design was fatal from a performance point of view, but NEC did not change it; NEC might have been able to redo the physical design while keeping the same register-transfer level, but it did not.
Lack of commercial success
The V60–V80 architecture did not enjoy much commercial success. The V60, V70, and V80 were listed in the 1989 and 1990 NEC catalogs in their PGA packaging. A NEC catalog from 1995 still listed the V60 and V70 (not only in the PGA version but also in QFP packaging, along with a low-cost variant of the V60 named μPD70615, which eliminated the V20/V30 emulation and FRM functions), alongside their assorted chipsets; but the V80 was no longer offered. The 1999 edition of the same catalog no longer had any V60–V80 products.
Successors
The V800 series
In 1992, NEC launched a new model, the V800 Series 32-bit microcontroller, but it did not have a memory management unit (MMU). It had a RISC-based architecture, inspired by the Intel i960 and MIPS architectures, with RISC-style features such as the JARL (Jump and Register Link) instruction and a load/store architecture. At this point, the enormous software assets of the V60/V70, such as real-time Unix, were abandoned and never carried over to the successors—a scenario Intel avoided. The V800 Series had 3 major variants: the V810, V830, and V850 families. The V820 (μPD70742) was a simple variant of the V810 (μPD70732), but with peripherals. The designation V840 may have been skipped because of Japanese tetraphobia (see page 58): one Japanese pronunciation of "4" is shi, which also means "death", so names evoking it—compare shibanmushi (シバンムシ), literally the "deathwatch beetle"—are avoided. As of 2005, it was already the V850 era, and the V850 family has been enjoying great success. As of 2018, it is called the Renesas V850 family and the RH850 family, with V850/V850E1/V850E2 and V850E2/V850E3 CPU cores, respectively. Those CPU cores have extended the ISA of the original V810 core, and they are supported by the V850 compiler.
Modern software-based simulation
MAME
Because the V60/V70 had been used in many Japanese arcade games, MAME ("Multiple Arcade Machine Emulator"), which emulates old arcade games for enthusiasts, includes a CPU simulator for their instruction set architecture. It is a kind of instruction set simulator, aimed not at developers but at users. It has been maintained by the MAME development team, and the latest open-source code, written in C++, is available from the GitHub repository. The operation codes in the file optable.hxx are exactly the same as those of the V60.
See also
NEC V20
V850
R4200
References
Further reading
External links
Die photo of the V60; at Nikkei BP (in Japanese)
Die photo of the V60; at Semiconductor History Museum of Japan (in Japanese)
Die photo of the V60, mounted on a PGA package (much clearer; in Chinese)
Die photo of the V60 in PGA packaging, with the ceramic cap removed (in Chinese)
Photo of the V60 in PGA packaging w/ ceramic cap shield; glass shield
Photo of the V60 in PGA packaging w/ metal cap shield; seam weld
Blog: PS98-145-HMW kit: "PC-UX/V" w/ 15 disks & "V60 Sub board" for NEC PC-9801 slot (in Japanese)
Article: V70 in PGA packaging and the H-IIA rocket (in English)
Photo of NEC V60 CPU board of the Sega Virtua Racing (in English)
Site: "System 16" - Sega System 32 Hardware (in English)
Site: "System 16" - Sega Model 1 Hardware (in English)
Site: "System 16" - Sega System Multi 32 Hardware (in English)
Original documents for the V60 (μPD70616) & V70 (μPD70632) are available from here.
Datasheets for the AFPP (μPD72691) are available from here.
NEC V60 32-bit microprocessors
8246248
https://en.wikipedia.org/wiki/Memory-mapped%20file
Memory-mapped file
A memory-mapped file is a segment of virtual memory that has been assigned a direct byte-for-byte correlation with some portion of a file or file-like resource. This resource is typically a file that is physically present on disk, but can also be a device, shared memory object, or other resource that the operating system can reference through a file descriptor. Once present, this correlation between the file and the memory space permits applications to treat the mapped portion as if it were primary memory.
History
TOPS-20 PMAP
An early implementation of this was the PMAP system call on the DEC-20's TOPS-20 operating system, a feature used by Software House's System-1022 database system.
SunOS 4 mmap
SunOS 4 introduced Unix's mmap, which permitted programs "to map files into memory."
Windows Growable Memory-Mapped Files (GMMF)
Two decades after the release of TOPS-20's PMAP, Windows NT was given Growable Memory-Mapped Files (GMMF). Since the underlying mapping function requires a size to be passed to it, and altering a file's size is not readily accommodated, the GMMF API was developed. Use of GMMF requires declaring the maximum to which the file size can grow, but no unused space is wasted.
Benefits
The benefit of memory mapping a file is increased I/O performance, especially when used on large files. For small files, memory-mapped files can result in a waste of slack space, as memory maps are always aligned to the page size, which is typically 4 KiB; a 5 KiB file will therefore allocate 8 KiB, wasting 3 KiB. Accessing memory-mapped files is faster than using direct read and write operations for two reasons. Firstly, a system call is orders of magnitude slower than a simple change to a program's local memory. Secondly, in most operating systems the memory region mapped actually is the kernel's page cache (file cache), meaning that no copies need to be created in user space.
Certain application-level memory-mapped file operations also perform better than their physical file counterparts. Applications can access and update data in the file directly and in-place, as opposed to seeking from the start of the file or rewriting the entire edited contents to a temporary location. Since the memory-mapped file is handled internally in pages, linear file access (as seen, for example, in flat file data storage or configuration files) requires disk access only when a new page boundary is crossed, and can write larger sections of the file to disk in a single operation.
A possible benefit of memory-mapped files is "lazy loading", which uses small amounts of RAM even for a very large file. Trying to load the entire contents of a file that is significantly larger than the amount of memory available can cause severe thrashing as the operating system reads from disk into memory and simultaneously writes pages from memory back to disk. Memory-mapping may not only bypass the page file completely, but also allow smaller page-sized sections to be loaded as data is being edited, similarly to the demand paging used for programs.
The memory-mapping process is handled by the virtual memory manager, the same subsystem responsible for dealing with the page file. Memory-mapped files are loaded into memory one entire page at a time. The page size is selected by the operating system for maximum performance. Since page file management is one of the most critical elements of a virtual memory system, loading page-sized sections of a file into physical memory is typically a very highly optimized system function.
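As a concrete illustration of the access pattern this section describes, the following POSIX C sketch maps a file read-only and scans it as ordinary memory: there are no read() calls and no user-space copies, and pages fault in on demand as the loop crosses page boundaries. It is a generic example, not tied to any particular system mentioned above.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

/* Count newlines in a file by touching its mapped pages directly. */
int main(int argc, char **argv)
{
    if (argc != 2) { fprintf(stderr, "usage: %s file\n", argv[0]); return 1; }

    int fd = open(argv[1], O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }
    if (st.st_size == 0) { puts("0"); return 0; } /* mmap of length 0 fails */

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }
    close(fd); /* the mapping stays valid after the descriptor is closed */

    long lines = 0;
    for (off_t i = 0; i < st.st_size; i++) /* pages fault in on demand */
        if (p[i] == '\n')
            lines++;

    printf("%ld\n", lines);
    munmap(p, st.st_size);
    return 0;
}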
Types
There are two types of memory-mapped files:
Persisted
Persisted files are associated with a source file on a disk. The data is saved to the source file on the disk once the last process is finished. These memory-mapped files are suitable for working with extremely large source files.
Non-persisted
Non-persisted files are not associated with a file on a disk. When the last process has finished working with the file, the data is lost. These files are suitable for creating shared memory for inter-process communications (IPC).
Drawbacks
The major reason to choose memory-mapped file I/O is performance. Nevertheless, there can be tradeoffs. The standard I/O approach is costly due to system-call overhead and memory copying. The memory-mapped approach has its cost in minor page faults—when a block of data is loaded in the page cache but is not yet mapped into the process's virtual memory space. In some circumstances, memory-mapped file I/O can be substantially slower than standard file I/O.
Another drawback of memory-mapped files relates to a given architecture's address space: a file larger than the addressable space can have only portions mapped at a time, complicating reading it. For example, a 32-bit architecture such as Intel's IA-32 can only directly address 4 GiB or smaller portions of files. An even smaller amount of addressable space is available to individual programs—typically in the range of 2 to 3 GiB, depending on the operating system kernel. This drawback, however, is virtually eliminated on modern 64-bit architectures.
mmap also tends to be less scalable than standard means of file I/O, since many operating systems, including Linux, have a cap on the number of cores handling page faults. Extremely fast devices, such as modern NVM Express SSDs, are capable of making the overhead a real concern.
I/O errors on the underlying file (e.g. its removable drive is unplugged or optical media is ejected, the disk is full when writing, etc.) while accessing its mapped memory are reported to the application as the SIGSEGV/SIGBUS signals on POSIX and the EXCEPTION_IN_PAGE_ERROR structured exception on Windows. All code accessing mapped memory must be prepared to handle these errors, which don't normally occur when accessing memory.
Only hardware architectures with an MMU can support memory-mapped files. On architectures without an MMU, the operating system can copy the entire file into memory when the request to map it is made, but this is extremely wasteful and slow if only a little bit of the file will be accessed, and it can only work for files that will fit in available memory.
Common uses
Perhaps the most common use for a memory-mapped file is the process loader in most modern operating systems (including Microsoft Windows and Unix-like systems). When a process is started, the operating system uses a memory-mapped file to bring the executable file, along with any loadable modules, into memory for execution. Most memory-mapping systems use a technique called demand paging, where the file is loaded into physical memory in subsets (one page each), and only when that page is actually referenced. In the specific case of executable files, this permits the OS to selectively load only those portions of a process image that actually need to execute.
Another common use for memory-mapped files is to share memory between multiple processes. In modern protected-mode operating systems, processes are generally not permitted to access memory space that is allocated for use by another process. (A program's attempt to do so causes invalid page faults or segmentation violations.) There are a number of techniques available to safely share memory, and memory-mapped file I/O is one of the most popular. Two or more applications can simultaneously map a single physical file into memory and access this memory. For example, the Microsoft Windows operating system provides a mechanism for applications to memory-map a shared segment of the system's page file itself and share data via this section.
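A minimal POSIX sketch of this shared-memory use follows. It uses an anonymous shared mapping (the "non-persisted" case described earlier) so that a parent and child process share the same physical pages; mapping a real file with MAP_SHARED works the same way. Note that MAP_ANONYMOUS is not in older POSIX editions, but it is available on all common modern systems.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

/* Two processes sharing memory through a mapping: the child writes,
 * and the parent reads the same physical pages after the child exits. */
int main(void)
{
    char *buf = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                     MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (buf == MAP_FAILED) { perror("mmap"); return 1; }

    pid_t pid = fork();
    if (pid < 0) { perror("fork"); return 1; }

    if (pid == 0) {                 /* child: producer */
        strcpy(buf, "hello from the child");
        return 0;
    }

    waitpid(pid, NULL, 0);          /* parent: consumer */
    printf("parent read: %s\n", buf);
    munmap(buf, 4096);
    return 0;
}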
Platform support
Most modern operating systems or runtime environments support some form of memory-mapped file access. The function mmap(), which creates a mapping of a file given a file descriptor, starting location in the file, and a length, is part of the POSIX specification, so the wide variety of POSIX-compliant systems, such as UNIX, Linux, Mac OS X or OpenVMS, support a common mechanism for memory mapping files. The Microsoft Windows operating systems also support a group of API functions for this purpose, such as CreateFileMapping().
Some free portable implementations of memory-mapped files for Microsoft Windows and POSIX-compliant platforms are:
Boost.Interprocess, in Boost C++ Libraries
Boost.Iostreams, also in Boost C++ Libraries
Fmstream
Cpp-mmf
The Java programming language provides classes and methods to access memory-mapped files, such as FileChannel.map(), which yields a MappedByteBuffer.
The D programming language supports memory-mapped files in its standard library (std.mmfile module).
Ruby has a gem (library) called Mmap, which implements memory-mapped file objects.
Since version 1.6, Python has included the mmap module in its Standard Library. Details of the module vary according to whether the host platform is Windows or Unix-like.
For Perl there are several modules available for memory mapping files on the CPAN.
In the Microsoft .NET runtime, P/Invoke can be used to access memory-mapped files directly through the Windows API. Managed access (P/Invoke not necessary) to memory-mapped files was introduced in version 4 of the runtime (see Memory-Mapped Files). For previous versions, there are third-party libraries which provide managed APIs.
PHP supported memory-mapping techniques in a number of native file-access functions such as file_get_contents() but removed this in 5.3.
For the R programming language there exists a library on CRAN called bigmemory which uses the Boost library and provides memory-mapped backed arrays directly in R. The package ff offers memory-mapped vectors, matrices, arrays and data frames.
The J programming language has supported memory-mapped files since at least 2005. It includes support for boxed array data and single-datatype files. Support can be loaded from 'data/jmf'. J's Jdb and JD database engines use memory-mapped files for column stores.
References
Virtual memory
45318320
https://en.wikipedia.org/wiki/Stevie%20%28text%20editor%29
Stevie (text editor)
Stevie, ST Editor for VI Enthusiasts, is a discontinued clone of Bill Joy's vi text editor. Stevie was written by Tim Thompson for the Atari ST in 1987. It later became the basis for Vim, which was released in 1991. Thompson posted his original C source code as free software to the comp.sys.atari.st newsgroup on 28 June 1987. Tony Andrews added features and ported it to Unix, OS/2 and Amiga, posting his version to the comp.sources.unix newsgroup as free software on 6 June 1988. In 1991, Bram Moolenaar released Vim, which he based on the source code of the Amiga port of Stevie. References Vi Free text editors Atari ST software Amiga software Unix text editors OS/2 text editors Free software programmed in C Cross-platform free software
298558
https://en.wikipedia.org/wiki/Upstream%20%28software%20development%29
Upstream (software development)
In software development, upstream refers to a direction toward the original authors or maintainers of software that is distributed as source code, and is a qualification of either a version (released by the original authors, based on their upstream source code), a bug, or a patch.
Practical examples:
A patch sent upstream is offered to the original authors or maintainers of the software. If accepted, the authors or maintainers will include the patch in their software, either immediately or in a future release. If rejected, the person who submitted the patch will have to maintain his or her own distribution of the author's software.
An upstream repository or source-code distribution version can be a version-tagged release for which the source code has specifically been packaged, a specific commit, or master (jargon for the latest commit).
Custom distributions (such as forks) may have missed out on bugfixes and improvements (the maturing of the project tied to the original authors, upstream) as a result of not merging (all) upstream patches. In such cases, the custom distribution may even have been adapted to suit the specific needs and requirements of those using or maintaining it. This is also often seen with dependencies (vendor packages), where the taker settles on a base version once and tends to stick with it, over time accumulating so many (arbitrary) modifications or non-standard uses in their environment that merging the latest upstream patches into their custom distribution would not be possible without major additional work for patch and feature compatibility, and without creating duplicate patches for bugs that they resolved themselves (in their own way) while upstream also patched them. Many custom-distribution users nevertheless still cherry-pick and merge critical upstream patches (such as those related to security vulnerabilities).
Upstream development allows other distributions to benefit from it when they pick up the future release or merge recent (or all) upstream patches. Likewise, the original authors (maintaining upstream) can benefit from contributions that originate from custom distributions, if their users send patches upstream.
See also
Backporting
Downstream (software development)
Fork (software development)
References
Computing terminology Software project management
63960343
https://en.wikipedia.org/wiki/Mariarosaria%20Taddeo
Mariarosaria Taddeo
Mariarosaria Taddeo is a senior research fellow at the Oxford Internet Institute, part of the University of Oxford, and deputy director of the Digital Ethics Lab. Taddeo is also an associate scholar at Saïd Business School, University of Oxford.
Education
Taddeo holds a PhD in philosophy from the University of Padua. Prior to joining the Oxford Internet Institute, she was a research fellow in cybersecurity and ethics at the University of Warwick (2014). She has also held a Marie Curie Fellowship at the University of Hertfordshire, exploring information warfare and its ethical implications.
Career
Taddeo is the principal investigator on projects relating to ethics, digital technologies and artificial intelligence (AI), in particular the ethical challenges of the use of AI for national defence, socially good uses of AI, and how AI can advance the United Nations' Sustainable Development Goals. She is also a Defence Science and Technology Laboratory (Dstl) Ethics Fellow of the Alan Turing Institute. In her work at the Turing Institute, she is the principal investigator for a two-year project focusing on the ethical implications of the use of data science and AI for national security and defence. Taddeo is also a member of the Exploratory Team on Operational Ethics, part of the Human Factors and Medicine (HFM) panel of the NATO Science and Technology Organisation.
Recognition
In 2018, ORBIT listed Taddeo among the top 100 experts working on AI in the world. In 2013 she was awarded the World Technology Award for Ethics, acknowledging her research on the ethics of cyber conflicts. In 2010 she received The Simon Awards for Outstanding Research in Computing and Philosophy.
Published works
Taddeo has published two books. The Responsibilities of Online Service Providers (2016) explores the responsibility of online service providers in contemporary societies, examining the complexity and global dimensions of the rapidly evolving challenges posed by the development of internet services. The Ethics of Information Warfare (2014) examines the ethical problems posed by information warfare and the different methods and approaches used to solve them.
Taddeo's research co-authored with Professor Luciano Floridi on ethics and cyberwarfare, exploring the need for the regulation of artificial intelligence and cybersecurity, has been published in Nature. Her work on data philanthropy and individual rights was published in Minds and Machines, and work on the ethics of online service providers has been published in Science and Engineering Ethics. Taddeo has also published research into cyber conflicts in a paper published by Minds and Machines. Other notable work on AI and security includes her paper published in Nature Machine Intelligence, co-authored with Tom McCutcheon and Luciano Floridi, describing the risks of trusting AI for national security and defence; the paper makes recommendations for how AI can be part of a less risky approach. The body of work by Taddeo and her co-authors has been presented at the European Parliament's Subcommittee on Security and Defence.
Media
Taddeo has been quoted in the media providing expert comment on ethics, AI and cybersecurity, with recent articles in the Financial Times discussing bias in AI and an interview with the Guardian on the need for public services to exercise care when using chatbots.
She has also been interviewed by news channels, giving her opinions on AI and cyberwarfare, such as appearing as a guest on the Al Jazeera programme Inside Story.
References
Year of birth missing (living people) Living people Artificial intelligence ethicists Artificial intelligence researchers University of Padua alumni
40199992
https://en.wikipedia.org/wiki/Information%20Control%20Systems
Information Control Systems
Information Control Systems (founded in 1962) was a computer programming and data processing company serving clients in the Midwestern United States.
Overview
Founded by a graduate student from the University of Michigan at a time when the first general-purpose transistorized logic modules and low-cost general-purpose computers produced by Digital Equipment Corporation were available on the market, ICS provided industrial automation hardware and software design services to industries in the Detroit, Michigan area. Initially focused on software services only, ICS began a transition from a software company into a "system" house with both software and hardware staffs as low-cost computers became available from many companies such as Hewlett-Packard, Varian, Computer Automation, Microdata, Data General and others.
By the late 1960s, ICS's management recognized the significance of IBM's magnetic tape/Selectric typewriter (MT/ST) automated typing system, introduced in 1964 and gaining attention in office typing pools as a productivity-improvement tool for document creation and editing. Even though the MT/ST was limited in its capabilities, it was a large step toward creating "clean" documents without erasure or whiteout correction fluid/tape. Having gained design experience with hardware automation and control systems, as well as real-time process-control programming, ICS believed that the MT/ST could be improved on in many ways using the PDP-8 general-purpose computer coupled with the unique (pseudo "disk-like") DECtape drive offered by Digital Equipment Corp. In late 1967 the company decided that it made better business sense to become a "product" company rather than a contract-services company, and it began design efforts to create one of the first stand-alone computer-controlled word processing systems.
The DECtape's small 4-inch (10 cm) reel of tape held over 350,000 characters (versus the 25,000 characters on an MT/ST tape) and, like a floppy disk, allowed random (albeit slower) access. This made for much more flexible storage access than the MT/ST—which used a slow, sprocket-hole-driven tape (much like a film strip) to record a single character at a time, could read/write a maximum of 20 characters per second, and had limited search capabilities—and thus for a potentially much more capable word processor design. The high-speed, randomly addressable, general-purpose DECtape drive, coupled with a general-purpose minicomputer, appeared to offer a significant opportunity for an extremely capable word processing system. This design approach also offered an economic advantage: additional terminals (up to 7 more) could be added to the initial single-station system, resulting in a very capable system at approximately the same price per station (~$10,000) as a collection of MT/ST units, but with far more capability.
Like the MT/ST, the ASTROTYPE system utilized the IBM Selectric typewriter. IBM offered a "terminal" version of the Selectric for use as a computer console I/O device, the IBM 2741 terminal, which offered significant advantages over the Teletype and Flexowriter terminals in general use at that time. These modified Selectrics featured electronically interfaced typing mechanisms and keyboards and thus provided a typing station with IBM quality that was easily connected to a computer.
In October 1968, at the Business Equipment Manufacturers Association trade show at McCormick Place in Chicago, the company announced its first proprietary product, a typing automation product called Astrotype. Astrotype used Digital Equipment Corporation PDP-8 minicomputers and modified IBM Selectric typewriters to run text-editing software developed by Information Control Systems. Before the Astrotype product, software-based typing automation was available only as a service from time-sharing companies using large mainframe computers. Astrotype allowed organizations of any size to make use of computer-based text editing in house. First shipments of the Astrotype product began in April 1969. Prices ranged from $36,000 for a single-typing-station model to $59,000 for a model with four typing stations.
In June 1971, again at McCormick Place, the company announced a variation of the Astrotype product at the National Printing Equipment show. The new product, called Astrocomp, was directed at the printing and publishing industry. Its primary function was the original typing and subsequent editing of text intended to be set into type, either on a Linotype machine or on photocomposition equipment from manufacturers such as AM/Varityper, Mergenthaler, and the Compugraphic Corporation. The Astrocomp product produced punched paper tape or magnetic tape that contained both the text and the codes needed to drive these devices.
Customers
RR Donnelley
First National Bank (Chicago)
Time/Life Books
Chi Systems
Ohio Bell Telephone
Weaver Composition
Foxboro Co.
Curtiss-Wright
Stone & Webster Engineering
Western Electric
Montgomery Ward
Otis Elevator Company (NYC)
References
American companies established in 1962
Computer companies established in 1962
Defunct computer companies of the United States
662
https://en.wikipedia.org/wiki/Apollo%2011
Apollo 11
Apollo 11 (July 16–24, 1969) was the American spaceflight that first landed humans on the Moon. Commander Neil Armstrong and lunar module pilot Buzz Aldrin landed the Apollo Lunar Module Eagle on July 20, 1969, at 20:17 UTC, and Armstrong became the first person to step onto the Moon's surface six hours and 39 minutes later, on July 21 at 02:56 UTC. Aldrin joined him 19 minutes later, and they spent about two and a quarter hours together exploring the site they had named Tranquility Base upon landing. Armstrong and Aldrin collected of lunar material to bring back to Earth as pilot Michael Collins flew the Command Module Columbia in lunar orbit, and were on the Moon's surface for 21 hours, 36 minutes before lifting off to rejoin Columbia.
Apollo 11 was launched by a Saturn V rocket from Kennedy Space Center on Merritt Island, Florida, on July 16 at 13:32 UTC, and it was the fifth crewed mission of NASA's Apollo program. The Apollo spacecraft had three parts: a command module (CM) with a cabin for the three astronauts, the only part that returned to Earth; a service module (SM), which supported the command module with propulsion, electrical power, oxygen, and water; and a lunar module (LM) that had two stages—a descent stage for landing on the Moon and an ascent stage to place the astronauts back into lunar orbit. After being sent to the Moon by the Saturn V's third stage, the astronauts separated the spacecraft from it and traveled for three days until they entered lunar orbit. Armstrong and Aldrin then moved into Eagle and landed in the Sea of Tranquility on July 20. The astronauts used Eagle's ascent stage to lift off from the lunar surface and rejoin Collins in the command module. They jettisoned Eagle before they performed the maneuvers that propelled Columbia out of the last of its 30 lunar orbits onto a trajectory back to Earth. They returned to Earth and splashed down in the Pacific Ocean on July 24 after more than eight days in space.
Armstrong's first step onto the lunar surface was broadcast on live TV to a worldwide audience. He described the event as "one small step for [a] man, one giant leap for mankind." Apollo 11 effectively proved US victory in the Space Race to demonstrate spaceflight superiority, by fulfilling a national goal proposed in 1961 by President John F. Kennedy, "before this decade is out, of landing a man on the Moon and returning him safely to the Earth."
Background
In the late 1950s and early 1960s, the United States was engaged in the Cold War, a geopolitical rivalry with the Soviet Union. On October 4, 1957, the Soviet Union launched Sputnik 1, the first artificial satellite. This surprise success fired fears and imaginations around the world. It demonstrated that the Soviet Union had the capability to deliver nuclear weapons over intercontinental distances, and challenged American claims of military, economic and technological superiority. This precipitated the Sputnik crisis, and triggered the Space Race to prove which superpower would achieve superior spaceflight capability. President Dwight D. Eisenhower responded to the Sputnik challenge by creating the National Aeronautics and Space Administration (NASA), and initiating Project Mercury, which aimed to launch a man into Earth orbit. But on April 12, 1961, Soviet cosmonaut Yuri Gagarin became the first person in space, and the first to orbit the Earth. Nearly a month later, on May 5, 1961, Alan Shepard became the first American in space, completing a 15-minute suborbital journey.
After being recovered from the Atlantic Ocean, he received a congratulatory telephone call from Eisenhower's successor, John F. Kennedy. Since the Soviet Union had higher lift capacity launch vehicles, Kennedy chose, from among options presented by NASA, a challenge beyond the capacity of the existing generation of rocketry, so that the US and Soviet Union would be starting from a position of equality. A crewed mission to the Moon would serve this purpose. On May 25, 1961, Kennedy addressed the United States Congress on "Urgent National Needs" and declared: "I believe that this nation should commit itself to achieving the goal, before this decade is out, of landing a man on the Moon and returning him safely to the Earth."
On September 12, 1962, Kennedy delivered another speech before a crowd of about 40,000 people in the Rice University football stadium in Houston, Texas. A widely quoted refrain from the middle portion of the speech reads as follows: "We choose to go to the Moon in this decade and do the other things, not because they are easy, but because they are hard."
In spite of that, the proposed program faced the opposition of many Americans and was dubbed a "moondoggle" by Norbert Wiener, a mathematician at the Massachusetts Institute of Technology. The effort to land a man on the Moon already had a name: Project Apollo. When Kennedy met with Nikita Khrushchev, the Premier of the Soviet Union, in June 1961, he proposed making the Moon landing a joint project, but Khrushchev did not take up the offer. Kennedy again proposed a joint expedition to the Moon in a speech to the United Nations General Assembly on September 20, 1963. The idea of a joint Moon mission was abandoned after Kennedy's death.
An early and crucial decision was choosing lunar orbit rendezvous over both direct ascent and Earth orbit rendezvous. A space rendezvous is an orbital maneuver in which two spacecraft navigate through space and meet up. In July 1962 NASA head James Webb announced that lunar orbit rendezvous would be used and that the Apollo spacecraft would have three major parts: a command module (CM) with a cabin for the three astronauts, and the only part that returned to Earth; a service module (SM), which supported the command module with propulsion, electrical power, oxygen, and water; and a lunar module (LM) that had two stages—a descent stage for landing on the Moon, and an ascent stage to place the astronauts back into lunar orbit. This design meant the spacecraft could be launched by a single Saturn V rocket that was then under development.
Technologies and techniques required for Apollo were developed by Project Gemini. The Apollo project was enabled by NASA's adoption of new advances in semiconductor electronic technology, including metal-oxide-semiconductor field-effect transistors (MOSFETs) in the Interplanetary Monitoring Platform (IMP) and silicon integrated circuit (IC) chips in the Apollo Guidance Computer (AGC).
Project Apollo was abruptly halted by the Apollo 1 fire on January 27, 1967, in which astronauts Gus Grissom, Ed White, and Roger B. Chaffee died, and the subsequent investigation. In October 1968, Apollo 7 evaluated the command module in Earth orbit, and in December Apollo 8 tested it in lunar orbit. In March 1969, Apollo 9 put the lunar module through its paces in Earth orbit, and in May Apollo 10 conducted a "dress rehearsal" in lunar orbit. By July 1969, all was in readiness for Apollo 11 to take the final step onto the Moon.
The Soviet Union appeared to be winning the Space Race by beating the US to firsts, but its early lead was overtaken by the US Gemini program and Soviet failure to develop the N1 launcher, which would have been comparable to the Saturn V. The Soviets tried to beat the US to return lunar material to the Earth by means of uncrewed probes.
On July 13, three days before Apollo 11's launch, the Soviet Union launched Luna 15, which reached lunar orbit before Apollo 11. During descent, a malfunction caused Luna 15 to crash in Mare Crisium about two hours before Armstrong and Aldrin took off from the Moon's surface to begin their voyage home. The Nuffield Radio Astronomy Laboratories radio telescope in England recorded transmissions from Luna 15 during its descent, and these were released in July 2009 for the 40th anniversary of Apollo 11.
Personnel
Prime crew
The initial crew assignment of Commander Neil Armstrong, Command Module Pilot (CMP) Jim Lovell, and Lunar Module Pilot (LMP) Buzz Aldrin as the backup crew for Apollo 9 was officially announced on November 20, 1967. Lovell and Aldrin had previously flown together as the crew of Gemini 12. Due to design and manufacturing delays in the LM, Apollo 8 and Apollo 9 swapped prime and backup crews, and Armstrong's crew became the backup for Apollo 8. Based on the normal crew rotation scheme, Armstrong was then expected to command Apollo 11.
There would be one change. Michael Collins, the CMP on the Apollo 8 crew, began experiencing trouble with his legs. Doctors diagnosed the problem as a bony growth between his fifth and sixth vertebrae, requiring surgery. Lovell took his place on the Apollo 8 crew, and when Collins recovered he joined Armstrong's crew as CMP. In the meantime, Fred Haise filled in as backup LMP, and Aldrin as backup CMP for Apollo 8. Apollo 11 was the second American mission on which all the crew members had prior spaceflight experience, the first being Apollo 10. The next was STS-26 in 1988.
Deke Slayton gave Armstrong the option to replace Aldrin with Lovell, since some thought Aldrin was difficult to work with. Armstrong had no issues working with Aldrin but thought it over for a day before declining. He thought Lovell deserved to command his own mission (eventually Apollo 13). The Apollo 11 prime crew had none of the close, cheerful camaraderie that characterized the crew of Apollo 12. Instead, they forged an amiable working relationship. Armstrong in particular was notoriously aloof, but Collins, who considered himself a loner, confessed to rebuffing Aldrin's attempts to create a more personal relationship. Aldrin and Collins described the crew as "amiable strangers". Armstrong did not agree with the assessment, and said "... all the crews I was on worked very well together."
Backup crew
The backup crew consisted of Lovell as Commander, William Anders as CMP, and Haise as LMP. Anders had flown with Lovell on Apollo 8. In early 1969, he accepted a job with the National Aeronautics and Space Council effective August 1969, and announced he would retire as an astronaut at that time. Ken Mattingly was moved from the support crew into parallel training with Anders as backup CMP in case Apollo 11 was delayed past its intended July launch date, at which point Anders would be unavailable. By the normal crew rotation in place during Apollo, Lovell, Mattingly, and Haise were scheduled to fly on Apollo 14 after backing up Apollo 11. Later, Lovell's crew was forced to switch places with Alan Shepard's tentative Apollo 13 crew to give Shepard more training time.
Support crew
During Projects Mercury and Gemini, each mission had a prime and a backup crew. For Apollo, a third crew of astronauts was added, known as the support crew. The support crew maintained the flight plan, checklists and mission ground rules, and ensured the prime and backup crews were apprised of changes.
They developed procedures, especially those for emergency situations, so these were ready when the prime and backup crews came to train in the simulators, allowing them to concentrate on practicing and mastering them. For Apollo 11, the support crew consisted of Ken Mattingly, Ronald Evans and Bill Pogue.
Capsule communicators
The capsule communicator (CAPCOM) was an astronaut at the Mission Control Center in Houston, Texas, who was the only person who communicated directly with the flight crew. For Apollo 11, the CAPCOMs were: Charles Duke, Ronald Evans, Bruce McCandless II, James Lovell, William Anders, Ken Mattingly, Fred Haise, Don L. Lind, Owen K. Garriott and Harrison Schmitt.
Flight directors
The flight directors for this mission were:
Other key personnel
Other key personnel who played important roles in the Apollo 11 mission include the following.
Preparations
Insignia
The Apollo 11 mission emblem was designed by Collins, who wanted a symbol for "peaceful lunar landing by the United States". At Lovell's suggestion, he chose the bald eagle, the national bird of the United States, as the symbol. Tom Wilson, a simulator instructor, suggested an olive branch in its beak to represent their peaceful mission. Collins added a lunar background with the Earth in the distance. The sunlight in the image was coming from the wrong direction; the shadow should have been in the lower part of the Earth instead of the left. Aldrin, Armstrong and Collins decided the Eagle and the Moon would be in their natural colors, and decided on a blue and gold border. Armstrong was concerned that "eleven" would not be understood by non-English speakers, so they went with "Apollo 11", and they decided not to put their names on the patch, so it would "be representative of everyone who had worked toward a lunar landing".
An illustrator at the Manned Spacecraft Center (MSC) did the artwork, which was then sent off to NASA officials for approval. The design was rejected. Bob Gilruth, the director of the MSC, felt the talons of the eagle looked "too warlike". After some discussion, the olive branch was moved to the talons. When the Eisenhower dollar coin was released in 1971, the patch design provided the eagle for its reverse side. The design was also used for the smaller Susan B. Anthony dollar unveiled in 1979.
Call signs
After the crew of Apollo 10 named their spacecraft Charlie Brown and Snoopy, assistant manager for public affairs Julian Scheer wrote to George Low, the Manager of the Apollo Spacecraft Program Office at the MSC, to suggest the Apollo 11 crew be less flippant in naming their craft. The name Snowcone was used for the CM and Haystack was used for the LM in both internal and external communications during early mission planning. The LM was named Eagle after the motif which was featured prominently on the mission insignia. At Scheer's suggestion, the CM was named Columbia after Columbiad, the giant cannon that launched a spacecraft (also from Florida) in Jules Verne's 1865 novel From the Earth to the Moon. It also referred to Columbia, a historical name of the United States. In Collins' 1976 book, he said Columbia was in reference to Christopher Columbus.
Mementos
The astronauts had personal preference kits (PPKs), small bags containing personal items of significance they wanted to take with them on the mission. Five PPKs were carried on Apollo 11: three (one for each astronaut) were stowed on Columbia before launch, and two on Eagle.
Neil Armstrong's LM PPK contained a piece of wood from the Wright brothers' 1903 Wright Flyer's left propeller and a piece of fabric from its wing, along with a diamond-studded astronaut pin originally given to Slayton by the widows of the Apollo 1 crew. This pin had been intended to be flown on that mission and given to Slayton afterwards, but following the disastrous launch pad fire and subsequent funerals, the widows gave the pin to Slayton. Armstrong took it with him on Apollo 11.
Site selection
NASA's Apollo Site Selection Board announced five potential landing sites on February 8, 1968. These were the result of two years' worth of studies based on high-resolution photography of the lunar surface by the five uncrewed probes of the Lunar Orbiter program and information about surface conditions provided by the Surveyor program. The best Earth-bound telescopes could not resolve features with the resolution Project Apollo required. The landing site had to be close to the lunar equator to minimize the amount of propellant required, clear of obstacles to minimize maneuvering, and flat to simplify the task of the landing radar. Scientific value was not a consideration. Areas that appeared promising on photographs taken on Earth were often found to be totally unacceptable. The original requirement that the site be free of craters had to be relaxed, as no such site was found. Five sites were considered: Sites 1 and 2 were in the Sea of Tranquility (Mare Tranquillitatis); Site 3 was in the Central Bay (Sinus Medii); and Sites 4 and 5 were in the Ocean of Storms (Oceanus Procellarum). The final site selection was based on seven criteria: the site needed to be smooth, with relatively few craters; with approach paths free of large hills, tall cliffs or deep craters that might confuse the landing radar and cause it to issue incorrect readings; reachable with a minimum amount of propellant; allowing for delays in the launch countdown; providing the Apollo spacecraft with a free-return trajectory, one that would allow it to coast around the Moon and safely return to Earth without requiring any engine firings should a problem arise on the way to the Moon; with good visibility during the landing approach, meaning the Sun would be between 7 and 20 degrees behind the LM; and a general slope of less than two degrees in the landing area.
The requirement for the Sun angle was particularly restrictive, limiting the launch date to one day per month. A landing just after dawn was chosen to limit the temperature extremes the astronauts would experience. The Apollo Site Selection Board selected Site 2, with Sites 3 and 5 as backups in the event of the launch being delayed. In May 1969, Apollo 10's lunar module flew to within of Site 2, and reported it was acceptable.
First-step decision
During the first press conference after the Apollo 11 crew was announced, the first question was, "Which one of you gentlemen will be the first man to step onto the lunar surface?" Slayton told the reporter it had not been decided, and Armstrong added that it was "not based on individual desire". One of the first versions of the egress checklist had the lunar module pilot exit the spacecraft before the commander, which matched what had been done on Gemini missions, where the commander had never performed the spacewalk. Reporters wrote in early 1969 that Aldrin would be the first man to walk on the Moon, and Associate Administrator George Mueller told reporters he would be first as well.
Aldrin heard that Armstrong would be the first because Armstrong was a civilian, which made Aldrin livid. Aldrin attempted to persuade other lunar module pilots he should be first, but they responded cynically about what they perceived as a lobbying campaign. Attempting to stem interdepartmental conflict, Slayton told Aldrin that Armstrong would be first since he was the commander. The decision was announced in a press conference on April 14, 1969. For decades, Aldrin believed the final decision was largely driven by the lunar module's hatch location. Because the astronauts had their spacesuits on and the spacecraft was so small, maneuvering to exit the spacecraft was difficult. The crew tried a simulation in which Aldrin left the spacecraft first, but he damaged the simulator while attempting to egress. While this was enough for mission planners to make their decision, Aldrin and Armstrong were left in the dark on the decision until late spring. Slayton told Armstrong the plan was to have him leave the spacecraft first, if he agreed. Armstrong said, "Yes, that's the way to do it." The media accused Armstrong of exercising his commander's prerogative to exit the spacecraft first. Chris Kraft revealed in his 2001 autobiography that a meeting occurred between Gilruth, Slayton, Low, and himself to make sure Aldrin would not be the first to walk on the Moon. They argued that the first person to walk on the Moon should be like Charles Lindbergh, a calm and quiet person. They made the decision to change the flight plan so the commander was the first to egress from the spacecraft. Pre-launch The ascent stage of LM-5 Eagle arrived at the Kennedy Space Center on January 8, 1969, followed by the descent stage four days later, and CSM-107 Columbia on January 23. There were several differences between Eagle and Apollo 10's LM-4 Snoopy; Eagle had a VHF radio antenna to facilitate communication with the astronauts during their EVA on the lunar surface; a lighter ascent engine; more thermal protection on the landing gear; and a package of scientific experiments known as the Early Apollo Scientific Experiments Package (EASEP). The only change in the configuration of the command module was the removal of some insulation from the forward hatch. The CSM was mated on January 29, and moved from the Operations and Checkout Building to the Vehicle Assembly Building on April 14. The S-IVB third stage of Saturn V AS-506 had arrived on January 18, followed by the S-II second stage on February 6, S-IC first stage on February 20, and the Saturn V Instrument Unit on February 27. At 12:30 on May 20, the assembly departed the Vehicle Assembly Building atop the crawler-transporter, bound for Launch Pad 39A, part of Launch Complex 39, while Apollo 10 was still on its way to the Moon. A countdown test commenced on June 26, and concluded on July 2. The launch complex was floodlit on the night of July 15, when the crawler-transporter carried the mobile service structure back to its parking area. In the early hours of the morning, the fuel tanks of the S-II and S-IVB stages were filled with liquid hydrogen. Fueling was completed by three hours before launch. Launch operations were partly automated, with 43 programs written in the ATOLL programming language. Slayton roused the crew shortly after 04:00, and they showered, shaved, and had the traditional pre-flight breakfast of steak and eggs with Slayton and the backup crew. They then donned their space suits and began breathing pure oxygen. 
At 06:30, they headed out to Launch Complex 39. Haise entered Columbia about three hours and ten minutes before launch time. Along with a technician, he helped Armstrong into the left-hand couch at 06:54. Five minutes later, Collins joined him, taking up his position on the right-hand couch. Finally, Aldrin entered, taking the center couch. Haise left around two hours and ten minutes before launch. The closeout crew sealed the hatch, and the cabin was purged and pressurized. The closeout crew then left the launch complex about an hour before launch time. The countdown became automated at three minutes and twenty seconds before launch time. Over 450 personnel were at the consoles in the firing room.

Mission

Launch and flight to lunar orbit

An estimated one million spectators watched the launch of Apollo 11 from the highways and beaches in the vicinity of the launch site. Dignitaries included the Chief of Staff of the United States Army, General William Westmoreland, four cabinet members, 19 state governors, 40 mayors, 60 ambassadors and 200 congressmen. Vice President Spiro Agnew viewed the launch with former president Lyndon B. Johnson and his wife Lady Bird Johnson. Around 3,500 media representatives were present. About two-thirds were from the United States; the rest came from 55 other countries. The launch was televised live in 33 countries, with an estimated 25 million viewers in the United States alone. Millions more around the world listened to radio broadcasts. President Richard Nixon viewed the launch from his office in the White House with his NASA liaison officer, Apollo astronaut Frank Borman.

Saturn V AS-506 launched Apollo 11 on July 16, 1969, at 13:32:00 UTC (9:32:00 EDT). At 13.2 seconds into the flight, the launch vehicle began to roll into its flight azimuth of 72.058°. Full shutdown of the first-stage engines occurred about 2 minutes and 42 seconds into the mission, followed by separation of the S-IC and ignition of the S-II engines. The second-stage engines then cut off and separated at about 9 minutes and 8 seconds, allowing the first ignition of the S-IVB engine a few seconds later. Apollo 11 entered a near-circular Earth orbit twelve minutes into its flight. After one and a half orbits, a second ignition of the S-IVB engine pushed the spacecraft onto its trajectory toward the Moon with the trans-lunar injection (TLI) burn at 16:22:13 UTC. About 30 minutes later, with Collins in the left seat and at the controls, the transposition, docking, and extraction maneuver was performed. This involved separating Columbia from the spent S-IVB stage, turning around, and docking with Eagle, still attached to the stage. After the LM was extracted, the combined spacecraft headed for the Moon, while the rocket stage flew on a trajectory past the Moon. This was done to avoid the third stage colliding with the spacecraft, the Earth, or the Moon. A slingshot effect from passing around the Moon threw it into an orbit around the Sun.

On July 19 at 17:21:50 UTC, Apollo 11 passed behind the Moon and fired its service propulsion engine to enter lunar orbit. In the thirty orbits that followed, the crew saw passing views of their landing site in the southern Sea of Tranquility about southwest of the crater Sabine D. The site was selected in part because it had been characterized as relatively flat and smooth by the automated Ranger 8 and Surveyor 5 landers and the Lunar Orbiter mapping spacecraft, and because it was unlikely to present major landing or EVA challenges.
It lay about southeast of the Surveyor 5 landing site, and southwest of Ranger 8's crash site.

Lunar descent

At 12:52:00 UTC on July 20, Aldrin and Armstrong entered Eagle, and began the final preparations for lunar descent. At 17:44:00 Eagle separated from Columbia. Collins, alone aboard Columbia, inspected Eagle as it pirouetted before him to ensure the craft was not damaged, and that the landing gear was correctly deployed. Armstrong exclaimed: "The Eagle has wings!"

As the descent began, Armstrong and Aldrin found themselves passing landmarks on the surface two or three seconds early, and reported that they were "long"; they would land miles west of their target point. Eagle was traveling too fast. The problem could have been mascons—concentrations of high mass in regions of the Moon's crust that contain a gravitational anomaly, potentially altering Eagle's trajectory. Flight Director Gene Kranz speculated that it could have resulted from extra air pressure in the docking tunnel, or it could have been the result of Eagle's pirouette maneuver.

Five minutes into the descent burn, and above the surface of the Moon, the LM guidance computer (LGC) distracted the crew with the first of several unexpected 1201 and 1202 program alarms. Inside Mission Control Center, computer engineer Jack Garman told Guidance Officer Steve Bales it was safe to continue the descent, and this was relayed to the crew. The program alarms indicated "executive overflows", meaning the guidance computer could not complete all its tasks in real time and had to postpone some of them. Margaret Hamilton, the Director of Apollo Flight Computer Programming at the MIT Charles Stark Draper Laboratory, later recalled:

During the mission, the cause was diagnosed as the rendezvous radar switch being in the wrong position, causing the computer to process data from both the rendezvous and landing radars at the same time. Software engineer Don Eyles concluded in a 2005 Guidance and Control Conference paper that the problem was due to a hardware design bug previously seen during testing of the first uncrewed LM in Apollo 5. Having the rendezvous radar on (so it was warmed up in case of an emergency landing abort) should have been irrelevant to the computer, but an electrical phasing mismatch between two parts of the rendezvous radar system could cause the stationary antenna to appear to the computer as dithering back and forth between two positions, depending upon how the hardware randomly powered up. The extra spurious cycle stealing, as the rendezvous radar updated an involuntary counter, caused the computer alarms.
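The overflow behavior just described can be illustrated with a brief schematic sketch. This is not the actual LGC flight software (which was hand-coded assembly); the slot count, priority values, and job names below are assumptions made for the example, but the pattern matches the account given here: a fixed-capacity, priority-scheduled executive raises an alarm and sheds lower-priority work when asked to schedule more jobs than it has slots for.

```python
# Toy model of a fixed-capacity, priority-scheduled executive.
# Not LGC flight code: slot count, priorities, and job names are illustrative.
import heapq

CORE_SETS = 7  # assumed small fixed number of job slots

class Executive:
    def __init__(self):
        self.jobs = []  # heap of (priority, name); lower number = more urgent

    def schedule(self, priority, name):
        if len(self.jobs) >= CORE_SETS:
            self.alarm(1202)        # overflow: no free job slot
        else:
            heapq.heappush(self.jobs, (priority, name))

    def alarm(self, code):
        print(f"PROG ALARM {code}")
        # Restart, keeping only the most urgent work (priority 0 here).
        self.jobs = [job for job in self.jobs if job[0] == 0]
        heapq.heapify(self.jobs)

executive = Executive()
executive.schedule(0, "landing guidance")            # vital task
for n in range(7):                                   # spurious radar load
    executive.schedule(3, f"rendezvous radar cycle {n}")
print([name for _, name in executive.jobs])          # ['landing guidance']
```

The point of the pattern is that the postponed work is the least important work: the landing-critical guidance tasks always completed, which is why Mission Control could judge it safe to continue.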
Landing

When Armstrong again looked outside, he saw that the computer's landing target was in a boulder-strewn area just north and east of a crater (later determined to be West crater), so he took semi-automatic control. Armstrong considered landing short of the boulder field so they could collect geological samples from it, but could not since their horizontal velocity was too high. Throughout the descent, Aldrin called out navigation data to Armstrong, who was busy piloting Eagle. Now above the surface, Armstrong knew their propellant supply was dwindling and was determined to land at the first possible landing site. Armstrong found a clear patch of ground and maneuvered the spacecraft towards it. As he got closer, now above the surface, he discovered his new landing site had a crater in it. He cleared the crater and found another patch of level ground. They were now from the surface, with only 90 seconds of propellant remaining. Lunar dust kicked up by the LM's engine began to impair his ability to determine the spacecraft's motion. Some large rocks jutted out of the dust cloud, and Armstrong focused on them during his descent so he could determine the spacecraft's speed.

A light informed Aldrin that at least one of the probes hanging from Eagle's footpads had touched the surface a few moments before the landing, and he said: "Contact light!" Armstrong was supposed to immediately shut the engine down, as the engineers suspected the pressure caused by the engine's own exhaust reflecting off the lunar surface could make it explode, but he forgot. Three seconds later, Eagle landed and Armstrong shut the engine down. Aldrin immediately said "Okay, engine stop. ACA—out of detent." Armstrong acknowledged: "Out of detent. Auto." Aldrin continued: "Mode control—both auto. Descent engine command override off. Engine arm—off. 413 is in." ACA was the Attitude Control Assembly—the LM's control stick. Output went to the LGC to command the reaction control system (RCS) jets to fire. "Out of detent" meant the stick had moved away from its centered position; it was spring-centered like the turn indicator in a car. LGC address 413 contained the variable that indicated the LM had landed.

Eagle landed at 20:17:40 UTC on Sunday, July 20, with of usable fuel remaining. Information available to the crew and mission controllers during the landing showed the LM had enough fuel for another 25 seconds of powered flight before an abort without touchdown would have become unsafe, but post-mission analysis showed that the real figure was probably closer to 50 seconds. Apollo 11 landed with less fuel than most subsequent missions, and the astronauts encountered a premature low fuel warning. This was later found to be the result of the propellant sloshing more than expected, uncovering a fuel sensor. On subsequent missions, extra anti-slosh baffles were added to the tanks to prevent this.

Armstrong acknowledged Aldrin's completion of the post-landing checklist with "Engine arm is off", before responding to the CAPCOM, Charles Duke, with the words, "Houston, Tranquility Base here. The Eagle has landed." Armstrong's unrehearsed change of call sign from "Eagle" to "Tranquility Base" emphasized to listeners that the landing was complete and successful. Duke mispronounced his reply as he expressed the relief at Mission Control: "Roger, Twan—Tranquility, we copy you on the ground. You got a bunch of guys about to turn blue. We're breathing again. Thanks a lot."

Two and a half hours after landing, before preparations began for the EVA, Aldrin radioed to Earth:

He then took communion privately. At this time NASA was still fighting a lawsuit brought by atheist Madalyn Murray O'Hair (who had objected to the Apollo 8 crew reading from the Book of Genesis) demanding that their astronauts refrain from broadcasting religious activities while in space. For this reason, Aldrin chose to refrain from directly mentioning taking communion on the Moon. Aldrin was an elder at the Webster Presbyterian Church, and his communion kit was prepared by the pastor of the church, Dean Woodruff. Webster Presbyterian possesses the chalice used on the Moon and commemorates the event each year on the Sunday closest to July 20.
The schedule for the mission called for the astronauts to follow the landing with a five-hour sleep period, but they chose to begin preparations for the EVA early, thinking they would be unable to sleep.

Lunar surface operations

Preparations for Neil Armstrong and Buzz Aldrin to walk on the Moon began at 23:43. These took longer than expected; three and a half hours instead of two. During training on Earth, everything required had been neatly laid out in advance, but on the Moon the cabin contained a large number of other items as well, such as checklists, food packets, and tools. Six hours and thirty-nine minutes after landing Armstrong and Aldrin were ready to go outside, and Eagle was depressurized.

Eagle's hatch was opened at 02:39:33. Armstrong initially had some difficulties squeezing through the hatch with his portable life support system (PLSS). Some of the highest heart rates recorded from Apollo astronauts occurred during LM egress and ingress. At 02:51 Armstrong began his descent to the lunar surface. The remote control unit on his chest kept him from seeing his feet. Climbing down the nine-rung ladder, Armstrong pulled a D-ring to deploy the modular equipment stowage assembly (MESA) folded against Eagle's side and activate the TV camera.

Apollo 11 used slow-scan television (TV) incompatible with broadcast TV, so it was displayed on a special monitor and a conventional TV camera viewed this monitor (thus, a broadcast of a broadcast), significantly reducing the quality of the picture. The signal was received at Goldstone in the United States, but with better fidelity by Honeysuckle Creek Tracking Station near Canberra in Australia. Minutes later the feed was switched to the more sensitive Parkes radio telescope in Australia. Despite some technical and weather difficulties, ghostly black and white images of the first lunar EVA were received and broadcast to at least 600 million people on Earth. Copies of this video in broadcast format were saved and are widely available, but recordings of the original slow-scan source transmission from the lunar surface were likely destroyed during routine magnetic tape re-use at NASA.

After describing the surface dust as "very fine-grained" and "almost like a powder", at 02:56:15, six and a half hours after landing, Armstrong stepped off Eagle's footpad and declared: "That's one small step for [a] man, one giant leap for mankind." Armstrong intended to say "That's one small step for a man", but the word "a" is not audible in the transmission, and thus was not initially reported by most observers of the live broadcast. When later asked about his quote, Armstrong said he believed he said "for a man", and subsequent printed versions of the quote included the "a" in square brackets. One explanation for the absence may be that his accent caused him to slur the words "for a" together; another is the intermittent nature of the audio and video links to Earth, partly because of storms near Parkes Observatory. A more recent digital analysis of the tape claims to reveal the "a" may have been spoken but obscured by static; other analysis points to the claims of static and slurring as "face-saving fabrication", noting that Armstrong himself later admitted to misspeaking the line.

About seven minutes after stepping onto the Moon's surface, Armstrong collected a contingency soil sample using a sample bag on a stick. He then folded the bag and tucked it into a pocket on his right thigh.
This was to guarantee there would be some lunar soil brought back in case an emergency required the astronauts to abandon the EVA and return to the LM. Twelve minutes after the sample was collected, he removed the TV camera from the MESA and made a panoramic sweep, then mounted it on a tripod. The TV camera cable remained partly coiled and presented a tripping hazard throughout the EVA. Still photography was accomplished with a Hasselblad camera that could be operated hand-held or mounted on Armstrong's Apollo space suit.

Aldrin joined Armstrong on the surface, describing the view with the simple phrase: "Magnificent desolation." Armstrong said moving in the lunar gravity, one-sixth of Earth's, was "even perhaps easier than the simulations ... It's absolutely no trouble to walk around." Aldrin tested methods for moving around, including two-footed kangaroo hops. The PLSS backpack created a tendency to tip backward, but neither astronaut had serious problems maintaining balance. Loping became the preferred method of movement. The astronauts reported that they needed to plan their movements six or seven steps ahead. The fine soil was quite slippery. Aldrin remarked that moving from sunlight into Eagle's shadow produced no temperature change inside the suit, but the helmet was warmer in sunlight, so he felt cooler in shadow. The MESA failed to provide a stable work platform and was in shadow, slowing work somewhat. As they worked, the moonwalkers kicked up gray dust, which soiled the outer part of their suits.

The astronauts planted the Lunar Flag Assembly containing a flag of the United States on the lunar surface, in clear view of the TV camera. Aldrin remembered, "Of all the jobs I had to do on the Moon the one I wanted to go the smoothest was the flag raising." But the astronauts struggled with the telescoping rod and could only jam the pole about into the hard lunar surface. Aldrin was afraid it might topple in front of TV viewers, but he gave "a crisp West Point salute". Before Aldrin could take a photo of Armstrong with the flag, President Richard Nixon spoke to them through a telephone-radio transmission, which Nixon called "the most historic phone call ever made from the White House." Nixon originally had a long speech prepared to read during the phone call, but Frank Borman, who was at the White House as a NASA liaison during Apollo 11, convinced Nixon to keep his words brief.

They deployed the EASEP, which included a passive seismic experiment package used to measure moonquakes and a retroreflector array used for the lunar laser ranging experiment. Then Armstrong walked from the LM to snap photos at the rim of Little West Crater while Aldrin collected two core samples. He used the geologist's hammer to pound in the tubes—the only time the hammer was used on Apollo 11—but was unable to penetrate more than deep. The astronauts then collected rock samples using scoops and tongs on extension handles. Many of the surface activities took longer than expected, so they had to stop documenting sample collection halfway through the allotted 34 minutes. Aldrin shoveled of soil into the box of rocks in order to pack them in tightly. Two types of rocks were found in the geological samples: basalt and breccia. Three new minerals were discovered in the rock samples collected by the astronauts: armalcolite, tranquillityite, and pyroxferroite. Armalcolite was named after Armstrong, Aldrin, and Collins. All have subsequently been found on Earth.
While on the surface, Armstrong uncovered a plaque mounted on the LM ladder, bearing two drawings of Earth (of the Western and Eastern Hemispheres), an inscription, and signatures of the astronauts and President Nixon. The inscription read:

At the behest of the Nixon administration to add a reference to God, NASA included the vague date as a reason to include A.D., which stands for Anno Domini, "in the year of our Lord" (although it should have been placed before the year, not after).

Mission Control used a coded phrase to warn Armstrong his metabolic rates were high, and that he should slow down. He was moving rapidly from task to task as time ran out. As metabolic rates remained generally lower than expected for both astronauts throughout the walk, Mission Control granted the astronauts a 15-minute extension. In a 2010 interview, Armstrong explained that NASA limited the first moonwalk's time and distance because there was no empirical proof of how much cooling water the astronauts' PLSS backpacks would consume to handle their body heat generation while working on the Moon.

Lunar ascent

Aldrin entered Eagle first. With some difficulty the astronauts lifted film and two sample boxes containing of lunar surface material to the LM hatch using a flat cable pulley device called the Lunar Equipment Conveyor (LEC). This proved to be an inefficient tool, and later missions preferred to carry equipment and samples up to the LM by hand. Armstrong reminded Aldrin of a bag of memorial items in his sleeve pocket, and Aldrin tossed the bag down. Armstrong then jumped onto the ladder's third rung, and climbed into the LM. After transferring to LM life support, the explorers lightened the ascent stage for the return to lunar orbit by tossing out their PLSS backpacks, lunar overshoes, an empty Hasselblad camera, and other equipment. The hatch was closed again at 05:11:13. They then pressurized the LM and settled down to sleep.

Presidential speechwriter William Safire had prepared an "In Event of Moon Disaster" announcement for Nixon to read in the event the Apollo 11 astronauts were stranded on the Moon. The remarks were in a memo from Safire to Nixon's White House Chief of Staff H. R. Haldeman, in which Safire suggested a protocol the administration might follow in reaction to such a disaster. According to the plan, Mission Control would "close down communications" with the LM, and a clergyman would "commend their souls to the deepest of the deep" in a public ritual likened to burial at sea. The last line of the prepared text contained an allusion to Rupert Brooke's First World War poem, "The Soldier".

While moving inside the cabin, Aldrin accidentally damaged the circuit breaker that would arm the main engine for liftoff from the Moon. There was a concern this would prevent firing the engine, stranding them on the Moon. A felt-tip pen was sufficient to activate the switch.
After more than hours on the lunar surface, in addition to the scientific instruments, the astronauts left behind: an Apollo 1 mission patch in memory of astronauts Roger Chaffee, Gus Grissom, and Edward White, who died when their command module caught fire during a test in January 1967; two memorial medals of Soviet cosmonauts Vladimir Komarov and Yuri Gagarin, who died in 1967 and 1968 respectively; a memorial bag containing a gold replica of an olive branch as a traditional symbol of peace; and a silicon message disk carrying the goodwill statements by Presidents Eisenhower, Kennedy, Johnson, and Nixon along with messages from leaders of 73 countries around the world. The disk also carries a listing of the leadership of the US Congress, a listing of members of the four committees of the House and Senate responsible for the NASA legislation, and the names of NASA's past and then-current top management.

After about seven hours of rest, the crew was awakened by Houston to prepare for the return flight. Two and a half hours later, at 17:54:00 UTC, they lifted off in Eagle's ascent stage to rejoin Collins aboard Columbia in lunar orbit. Film taken from the LM ascent stage upon liftoff from the Moon reveals the American flag, planted some from the descent stage, whipping violently in the exhaust of the ascent stage engine. Aldrin looked up in time to witness the flag topple: "The ascent stage of the LM separated ... I was concentrating on the computers, and Neil was studying the attitude indicator, but I looked up long enough to see the flag fall over." Subsequent Apollo missions planted their flags farther from the LM.

Columbia in lunar orbit

During his day flying solo around the Moon, Collins never felt lonely. Although it has been said "not since Adam has any human known such solitude", Collins felt very much a part of the mission. In his autobiography he wrote: "this venture has been structured for three men, and I consider my third to be as necessary as either of the other two". In the 48 minutes of each orbit when he was out of radio contact with the Earth while Columbia passed round the far side of the Moon, the feeling he reported was not fear or loneliness, but rather "awareness, anticipation, satisfaction, confidence, almost exultation".

One of Collins' first tasks was to identify the lunar module on the ground. To give Collins an idea where to look, Mission Control radioed that they believed the lunar module landed about off target. Each time he passed over the suspected lunar landing site, he tried in vain to find the module. On his first orbits on the back side of the Moon, Collins performed maintenance activities such as dumping excess water produced by the fuel cells and preparing the cabin for Armstrong and Aldrin to return.

Just before he reached the dark side on the third orbit, Mission Control informed Collins there was a problem with the temperature of the coolant. If it became too cold, parts of Columbia might freeze. Mission Control advised him to assume manual control and implement Environmental Control System Malfunction Procedure 17. Instead, Collins flicked the switch on the system from automatic to manual and back to automatic again, and carried on with normal housekeeping chores, while keeping an eye on the temperature. When Columbia came back around to the near side of the Moon again, he was able to report that the problem had been resolved. For the next couple of orbits, he described his time on the back side of the Moon as "relaxing".
After Aldrin and Armstrong completed their EVA, Collins slept so he could be rested for the rendezvous. While the flight plan called for Eagle to meet up with Columbia, Collins was prepared for a contingency in which he would fly Columbia down to meet Eagle.

Return

Eagle rendezvoused with Columbia at 21:24 UTC on July 21, and the two docked at 21:35. Eagle's ascent stage was jettisoned into lunar orbit at 23:41. Just before the Apollo 12 flight, it was noted that Eagle was still likely to be orbiting the Moon. Later NASA reports mentioned that Eagle's orbit had decayed, resulting in it impacting in an "uncertain location" on the lunar surface. In 2021, however, some calculations showed that the lander may still be in orbit.

On July 23, the last night before splashdown, the three astronauts made a television broadcast in which Collins commented:

Aldrin added:

Armstrong concluded:

On the return to Earth, a bearing at the Guam tracking station failed, potentially preventing communication on the last segment of the Earth return. A regular repair was not possible in the available time, but the station director, Charles Force, had his ten-year-old son Greg use his small hands to reach into the housing and pack it with grease. Greg was later thanked by Armstrong.

Splashdown and quarantine

The aircraft carrier USS Hornet, under the command of Captain Carl J. Seiberlich, was selected as the primary recovery ship (PRS) for Apollo 11 on June 5, replacing its sister ship, the LPH , which had recovered Apollo 10 on May 26. Hornet was then at her home port of Long Beach, California. On reaching Pearl Harbor on July 5, Hornet embarked the Sikorsky SH-3 Sea King helicopters of HS-4, a unit which specialized in recovery of Apollo spacecraft, specialized divers of UDT Detachment Apollo, a 35-man NASA recovery team, and about 120 media representatives. To make room, most of Hornet's air wing was left behind in Long Beach. Special recovery equipment was also loaded, including a boilerplate command module used for training. On July 12, with Apollo 11 still on the launch pad, Hornet departed Pearl Harbor for the recovery area in the central Pacific, in the vicinity of .

A presidential party consisting of Nixon, Borman, Secretary of State William P. Rogers and National Security Advisor Henry Kissinger flew to Johnston Atoll on Air Force One, then to the command ship USS Arlington in Marine One. After a night on board, they would fly to Hornet in Marine One for a few hours of ceremonies. On arrival aboard Hornet, the party was greeted by the Commander-in-Chief, Pacific Command (CINCPAC), Admiral John S. McCain Jr., and NASA Administrator Thomas O. Paine, who flew to Hornet from Pago Pago in one of Hornet's carrier onboard delivery aircraft.

Weather satellites were not yet common, but US Air Force Captain Hank Brandli had access to top-secret spy satellite images. He realized that a storm front was headed for the Apollo recovery area. Poor visibility, which could make locating the capsule difficult, and strong upper-level winds, which "would have ripped their parachutes to shreds" according to Brandli, posed a serious threat to the safety of the mission. Brandli alerted Navy Captain Willard S. Houston Jr., the commander of the Fleet Weather Center at Pearl Harbor, who had the required security clearance. On their recommendation, Rear Admiral Donald C. Davis, commander of Manned Spaceflight Recovery Forces, Pacific, advised NASA to change the recovery area, each man risking his career. A new location was selected northeast.
This altered the flight plan. A different sequence of computer programs was used, one never before attempted. In a conventional entry, trajectory event P64 was followed by P67. For a skip-out re-entry, P65 and P66 were employed to handle the exit and entry parts of the skip. In this case, because they were extending the re-entry but not actually skipping out, P66 was not invoked; instead, P65 led directly to P67. The crew were also warned they would not be in a full-lift (heads-down) attitude when they entered P67. The first program's acceleration subjected the astronauts to ; the second, to .
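The program sequencing just described can be summarized in a small sketch. The program numbers come from the account above; the selection logic is an illustrative reconstruction, not flight software.

```python
# Entry-program sequences as described above (illustrative reconstruction).
def entry_programs(profile):
    sequences = {
        "conventional": ["P64", "P67"],                # direct to final phase
        "skip-out":     ["P64", "P65", "P66", "P67"],  # exit and entry legs of skip
        "extended":     ["P64", "P65", "P67"],         # Apollo 11: P66 not invoked
    }
    return sequences[profile]

print(entry_programs("extended"))  # ['P64', 'P65', 'P67']
```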
Before dawn on July 24, Hornet launched four Sea King helicopters and three Grumman E-1 Tracers. Two of the E-1s were designated as "air boss" while the third acted as a communications relay aircraft. Two of the Sea Kings carried divers and recovery equipment. The third carried photographic equipment, and the fourth carried the decontamination swimmer and the flight surgeon. At 16:44 UTC (05:44 local time) Columbia's drogue parachutes were deployed. This was observed by the helicopters. Seven minutes later Columbia struck the water forcefully east of Wake Island, south of Johnston Atoll, and from Hornet. Seas and winds from the east were reported under broken clouds at the recovery site. Reconnaissance aircraft flying to the original splashdown location reported the conditions Brandli and Houston had predicted.

During splashdown, Columbia landed upside down but was righted within ten minutes by flotation bags activated by the astronauts. A diver from the Navy helicopter hovering above attached a sea anchor to prevent it from drifting. More divers attached flotation collars to stabilize the module and positioned rafts for astronaut extraction. The divers then passed biological isolation garments (BIGs) to the astronauts, and assisted them into the life raft. The possibility of bringing back pathogens from the lunar surface was considered remote, but NASA took precautions at the recovery site. The astronauts were rubbed down with a sodium hypochlorite solution and Columbia wiped with Povidone-iodine to remove any lunar dust that might be present. The astronauts were winched on board the recovery helicopter. BIGs were worn until they reached isolation facilities on board Hornet. The raft containing decontamination materials was intentionally sunk.

After touchdown on Hornet at 17:53 UTC, the helicopter was lowered by the elevator into the hangar bay, where the astronauts walked to the Mobile Quarantine Facility (MQF), where they would begin the Earth-based portion of their 21 days of quarantine. This practice would continue for two more Apollo missions, Apollo 12 and Apollo 14, before the Moon was proven to be barren of life and the quarantine process was dropped.

Nixon welcomed the astronauts back to Earth. He told them: "[A]s a result of what you've done, the world has never been closer together before." After Nixon departed, Hornet was brought alongside Columbia, which was lifted aboard by the ship's crane, placed on a dolly and moved next to the MQF. It was then attached to the MQF with a flexible tunnel, allowing the lunar samples, film, data tapes and other items to be removed. Hornet returned to Pearl Harbor, where the MQF was loaded onto a Lockheed C-141 Starlifter and airlifted to the Manned Spacecraft Center. The astronauts arrived at the Lunar Receiving Laboratory at 10:00 UTC on July 28.

Columbia was taken to Ford Island for deactivation, and its pyrotechnics made safe. It was then taken to Hickam Air Force Base, whence it was flown to Houston in a Douglas C-133 Cargomaster, reaching the Lunar Receiving Laboratory on July 30. In accordance with the Extra-Terrestrial Exposure Law, a set of regulations promulgated by NASA on July 16 to codify its quarantine protocol, the astronauts continued in quarantine. After three weeks in confinement (first in the Apollo spacecraft, then in their trailer on Hornet, and finally in the Lunar Receiving Laboratory), the astronauts were given a clean bill of health. On August 10, 1969, the Interagency Committee on Back Contamination met in Atlanta and lifted the quarantine on the astronauts, on those who had joined them in quarantine (NASA physician William Carpentier and MQF project engineer John Hirasaki), and on Columbia itself. Loose equipment from the spacecraft remained in isolation until the lunar samples were released for study.

Celebrations

On August 13, the three astronauts rode in ticker-tape parades in their honor in New York and Chicago, with an estimated six million attendees. On the same evening in Los Angeles there was an official state dinner to celebrate the flight, attended by members of Congress, 44 governors, Chief Justice of the United States Warren E. Burger and his predecessor, Earl Warren, and ambassadors from 83 nations at the Century Plaza Hotel. Nixon and Agnew honored each astronaut with a presentation of the Presidential Medal of Freedom.

The three astronauts spoke before a joint session of Congress on September 16, 1969. They presented two US flags, one to the House of Representatives and the other to the Senate, that they had carried with them to the surface of the Moon. The flag of American Samoa flown on Apollo 11 is on display at the Jean P. Haydon Museum in Pago Pago, the capital of American Samoa. This celebration began a 38-day world tour that brought the astronauts to 22 foreign countries and included visits with the leaders of many countries. The crew toured from September 29 to November 5. Many nations honored the first human Moon landing with special features in magazines or by issuing Apollo 11 commemorative postage stamps or coins.

Legacy

Cultural significance

Humans walking on the Moon and returning safely to Earth accomplished Kennedy's goal, set eight years earlier. In Mission Control during the Apollo 11 landing, Kennedy's speech flashed on the screen, followed by the words "TASK ACCOMPLISHED, July 1969". The success of Apollo 11 demonstrated the United States' technological superiority, and with it, America had won the Space Race.

New phrases permeated the English language. "If they can send a man to the Moon, why can't they ...?" became a common saying following Apollo 11. Armstrong's words on the lunar surface also spun off various parodies.

While most people celebrated the accomplishment, disenfranchised Americans saw it as a symbol of the divide in America, evidenced by protesters led by Ralph Abernathy outside of Kennedy Space Center the day before Apollo 11 launched. NASA Administrator Thomas Paine met with Abernathy on the occasion, both hoping that the space program could also spur progress in other areas, such as fighting poverty in the US. Paine was then asked, and agreed, to host the protesters as spectators at the launch, and Abernathy, awestruck by the spectacle, prayed for the astronauts.
Racial and financial inequalities frustrated citizens who wondered why money spent on the Apollo program was not spent taking care of humans on Earth. A poem by Gil Scott-Heron called "Whitey on the Moon" (1970) illustrated the racial inequality in the United States that was highlighted by the Space Race. The poem starts with:

Twenty percent of the world's population watched humans walk on the Moon for the first time. While Apollo 11 sparked the interest of the world, the follow-on Apollo missions did not hold the interest of the nation. One possible explanation was the shift in complexity: landing someone on the Moon was an easy goal to understand, while lunar geology was too abstract for the average person. Another is that Kennedy's goal of landing humans on the Moon had already been accomplished. A well-defined objective helped Project Apollo accomplish its goal, but after it was completed it was hard to justify continuing the lunar missions.

While most Americans were proud of their nation's achievements in space exploration, only once during the late 1960s did the Gallup Poll indicate that a majority of Americans favored "doing more" in space as opposed to "doing less". By 1973, 59 percent of those polled favored cutting spending on space exploration. The Space Race had been won, and Cold War tensions were easing as the US and Soviet Union entered the era of détente. This was also a time when inflation was rising, which put pressure on the government to reduce spending. What saved the space program was that it was one of the few government programs that had achieved something great. Drastic cuts, warned Caspar Weinberger, the deputy director of the Office of Management and Budget, might send a signal that "our best years are behind us".

After the Apollo 11 mission, officials from the Soviet Union said landing humans on the Moon was dangerous and unnecessary. At the time the Soviet Union was attempting to retrieve lunar samples robotically. The Soviets publicly denied there was a race to the Moon, and indicated they were not making an attempt. Mstislav Keldysh said in July 1969, "We are concentrating wholly on the creation of large satellite systems." It was revealed in 1989 that the Soviets had tried to send people to the Moon, but were unable due to technological difficulties. The public's reaction in the Soviet Union was mixed. The Soviet government limited the release of information about the lunar landing, which affected the reaction. A portion of the populace did not give it any attention, and another portion was angered by it.

The Apollo 11 landing is referenced in the songs "Armstrong, Aldrin and Collins" by The Byrds on the 1969 album Ballad of Easy Rider and "Coon on the Moon" by Howlin' Wolf on the 1973 album The Back Door Wolf.

Spacecraft

The command module Columbia went on a tour of the United States, visiting 49 state capitals, the District of Columbia, and Anchorage, Alaska. In 1971, it was transferred to the Smithsonian Institution, and was displayed at the National Air and Space Museum (NASM) in Washington, DC. It was in the central Milestones of Flight exhibition hall in front of the Jefferson Drive entrance, sharing the main hall with other pioneering flight vehicles such as the Wright Flyer, Spirit of St. Louis, Bell X-1, North American X-15 and Friendship 7. Columbia was moved in 2017 to the NASM Mary Baker Engen Restoration Hangar at the Steven F. Udvar-Hazy Center in Chantilly, Virginia, to be readied for a four-city tour titled Destination Moon: The Apollo 11 Mission.
This included Space Center Houston from October 14, 2017, to March 18, 2018, the Saint Louis Science Center from April 14 to September 3, 2018, the Senator John Heinz History Center in Pittsburgh from September 29, 2018, to February 18, 2019, and its last location at the Museum of Flight in Seattle from March 16 to September 2, 2019. Continued renovations at the Smithsonian allowed time for an additional stop, and the capsule was moved to the Cincinnati Museum Center; the ribbon-cutting ceremony was on September 29, 2019.

For 40 years Armstrong's and Aldrin's space suits were displayed in the museum's Apollo to the Moon exhibit, until it permanently closed on December 3, 2018, to be replaced by a new gallery which was scheduled to open in 2022. A special display of Armstrong's suit was unveiled for the 50th anniversary of Apollo 11 in July 2019. The quarantine trailer, the flotation collar and the flotation bags are in the Smithsonian's Steven F. Udvar-Hazy Center annex near Washington Dulles International Airport in Chantilly, Virginia, where they are on display along with a test lunar module.

The descent stage of the LM Eagle remains on the Moon. In 2009, the Lunar Reconnaissance Orbiter (LRO) imaged the various Apollo landing sites on the surface of the Moon, for the first time with sufficient resolution to see the descent stages of the lunar modules, scientific instruments, and foot trails made by the astronauts. The remains of the ascent stage lie at an unknown location on the lunar surface, after being abandoned and impacting the Moon. The location is uncertain because Eagle's ascent stage was not tracked after it was jettisoned, and the lunar gravity field is sufficiently non-uniform to make the orbit of the spacecraft unpredictable after a short time.

In March 2012 a team of specialists financed by Amazon founder Jeff Bezos located the F-1 engines from the S-IC stage that launched Apollo 11 into space. They were found on the Atlantic seabed using advanced sonar scanning. His team brought parts of two of the five engines to the surface. In July 2013, a conservator discovered a serial number under the rust on one of the engines raised from the Atlantic, which NASA confirmed was from Apollo 11. The S-IVB third stage which performed Apollo 11's trans-lunar injection remains in a solar orbit near to that of Earth.

Moon rocks

The main repository for the Apollo Moon rocks is the Lunar Sample Laboratory Facility at the Lyndon B. Johnson Space Center in Houston, Texas. For safekeeping, there is also a smaller collection stored at White Sands Test Facility near Las Cruces, New Mexico. Most of the rocks are stored in nitrogen to keep them free of moisture. They are handled only indirectly, using special tools. Over 100 research laboratories around the world conduct studies of the samples, and approximately 500 samples are prepared and sent to investigators every year.

In November 1969, Nixon asked NASA to make up about 250 presentation Apollo 11 lunar sample displays for 135 nations, the fifty states of the United States and its possessions, and the United Nations. Each display included Moon dust from Apollo 11: four rice-sized particles of Moon soil weighing about 50 mg, enveloped in a clear acrylic button about as big as a United States half-dollar coin, which magnified the grains of lunar dust. The Apollo 11 lunar sample displays were given out as goodwill gifts by Nixon in 1970 (Earth magazine, March 2011, pp. 42–51).
Experiment results

The Passive Seismic Experiment ran until the command uplink failed on August 25, 1969; the downlink failed on December 14, 1969. The Lunar Laser Ranging experiment, by contrast, remains operational.
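The principle behind laser ranging is simple enough to state as a one-line calculation: the Earth–Moon distance is half the round-trip light time of a pulse bounced off the retroreflector array. The timing value below is a typical illustrative figure, not a measurement taken from the experiment's records.

```python
# Distance from round-trip light time: d = c * t / 2.
c = 299_792_458        # speed of light in m/s
round_trip_s = 2.56    # illustrative round-trip time of a laser pulse, in seconds
distance_km = c * round_trip_s / 2 / 1000
print(f"{distance_km:,.0f} km")  # ~383,734 km, near the mean Earth-Moon distance
```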
Armstrong's camera

Armstrong's Hasselblad camera was thought to be lost or left on the Moon's surface.

LM memorabilia

In 2015, after Armstrong died in 2012, his widow contacted the National Air and Space Museum to inform them she had found a white cloth bag in one of Armstrong's closets. The bag contained various items which should have been left behind in the lunar module, including the 16mm Data Acquisition Camera that had been used to capture images of the first Moon landing. The camera is currently on display at the National Air and Space Museum.

Anniversary events

40th anniversary

On July 15, 2009, Life.com released a photo gallery of previously unpublished photos of the astronauts taken by Life photographer Ralph Morse prior to the Apollo 11 launch. From July 16 to 24, 2009, NASA streamed the original mission audio on its website in real time 40 years to the minute after the events occurred. NASA is in the process of restoring the video footage and has released a preview of key moments. In July 2010, air-to-ground voice recordings and film footage shot in Mission Control during the Apollo 11 powered descent and landing were re-synchronized and released for the first time. The John F. Kennedy Presidential Library and Museum set up an Adobe Flash website that rebroadcasts the transmissions of Apollo 11 from launch to landing on the Moon.

On July 20, 2009, Armstrong, Aldrin, and Collins met with US President Barack Obama at the White House. "We expect that there is, as we speak, another generation of kids out there who are looking up at the sky and are going to be the next Armstrong, Collins, and Aldrin", Obama said. "We want to make sure that NASA is going to be there for them when they want to take their journey." On August 7, 2009, an act of Congress awarded the three astronauts a Congressional Gold Medal, the highest civilian award in the United States. The bill was sponsored by Florida Senator Bill Nelson and Florida Representative Alan Grayson. A group of British scientists interviewed as part of the anniversary events reflected on the significance of the Moon landing:

50th anniversary

On June 10, 2015, Congressman Bill Posey introduced resolution H.R. 2726 to the 114th session of the United States House of Representatives directing the United States Mint to design and sell commemorative coins in gold, silver and clad for the 50th anniversary of the Apollo 11 mission. On January 24, 2019, the Mint released the Apollo 11 Fiftieth Anniversary commemorative coins to the public on its website.

A documentary film, Apollo 11, with restored footage of the 1969 event, premiered in IMAX on March 1, 2019, and broadly in theaters on March 8.

The Smithsonian Institution's National Air and Space Museum and NASA sponsored the "Apollo 50 Festival" on the National Mall in Washington, DC. The three-day (July 18 to 20, 2019) outdoor festival featured hands-on exhibits and activities, live performances, and speakers such as Adam Savage and NASA scientists. As part of the festival, a projection of the tall Saturn V rocket was displayed on the east face of the tall Washington Monument from July 16 through the 20th, from 9:30 pm until 11:30 pm (EDT). The program also included a 17-minute show that combined full-motion video projected on the Washington Monument to recreate the assembly and launch of the Saturn V rocket. The projection was joined by a wide recreation of the Kennedy Space Center countdown clock and two large video screens showing archival footage to recreate the time leading up to the Moon landing. There were three shows per night on July 19–20, with the last show on Saturday delayed slightly so the portion where Armstrong first set foot on the Moon would happen exactly 50 years to the second after the actual event.

On July 19, 2019, the Google Doodle paid tribute to the Apollo 11 Moon landing, complete with a link to an animated YouTube video with voiceover by astronaut Michael Collins. Aldrin, Collins, and Armstrong's sons were hosted by President Donald Trump in the Oval Office.

Films and documentaries

Footprints on the Moon, a 1969 documentary film by Bill Gibson and Barry Coe, about the Apollo 11 mission
Moonwalk One, a 1971 documentary film by Theo Kamecke
Apollo 11: As it Happened, a 1994 six-hour documentary on ABC News' coverage of the event
Apollo 11, a 2019 documentary film by Todd Douglas Miller with restored footage of the 1969 event
Chasing the Moon, a July 2019 PBS three-night, six-hour documentary directed by Robert Stone, examining the events leading up to the Apollo 11 mission; an accompanying book of the same name was also released
8 Days: To the Moon and Back, a PBS and BBC Studios 2019 documentary film by Anthony Philipson re-enacting major portions of the Apollo 11 mission using mission audio recordings, new studio footage, NASA and news archives, and computer-generated imagery

See also

Moon landing conspiracy theories

References

In some of the following sources, times are shown in the format hours:minutes:seconds (e.g. 109:24:15), referring to the mission's Ground Elapsed Time (GET), based on the official launch time of July 16, 1969, 13:32:00 UTC (000:00:00 GET).

External links

"Apollo 11 transcripts" at Spacelog
Apollo 11 in real time
Remastered videos of the original landing
Dynamic timeline of lunar excursion, Lunar Reconnaissance Orbiter Camera
Apollo 11 Restored EVA Part 1 (one hour of restored footage)
Apollo 11: As They Photographed It (Augmented Reality), The New York Times, Interactive, July 18, 2019
"Coverage of the Flight of Apollo 11" provided by Todd Kosovich for RadioTapes.com: radio station recordings (airchecks) covering the flight of Apollo 11
Rich Hacker
Richard Warren Hacker (October 6, 1947 – April 22, 2020) was a Major League Baseball player, base coach and scout. Hacker played 16 games for the Montreal Expos in the 1971 season as a shortstop. He had a .121 batting average, with four hits in 33 at-bats. Hacker attended Southern Illinois University. After his playing career, Hacker became a coach.

Coaching

Hacker was a base coach in the Major Leagues from 1986 to 1993, coaching for the St. Louis Cardinals from 1986 to 1990 and the Toronto Blue Jays from 1991 to 1993. He coached first base for the Cardinals from 1986 to 1987 and third base from 1988 to 1990, and was the third base coach for the Blue Jays from 1991 to 1993. He coached in two World Series (1987 and 1992) and was on the Blue Jays bench for a third (1993). He also coached in the 1988 Major League Baseball All-Star Game.

Hacker was seriously hurt in a car accident on the Martin Luther King Bridge in St. Louis in July 1993, when he collided with a driver who was racing. The accident ended his career. During his recovery from injury he remained a member of the Blue Jays coaching staff, but was transferred to off-field work such as creating hitting charts of opposing teams. He was replaced as third base coach by Nick Leyva.

Personal life

Hacker and his wife Kathryn lived in Belleville, Illinois, and had three grown children. He remained an active hunter and amateur baseball scout. He was a member of the New Athens High School Hall of Fame. Hacker's uncle was former Major Leaguer Warren Hacker. Hacker died on April 22, 2020, in Fairview Heights, Illinois, of leukemia.

See also

List of St. Louis Cardinals coaches
Computer-aided software engineering
Computer-aided software engineering (CASE) is the domain of software tools used to design and implement applications. CASE tools are similar to, and were partly inspired by, computer-aided design (CAD) tools used for designing hardware products. CASE tools were used for developing high-quality, defect-free, and maintainable software. CASE software is often associated with methods for the development of information systems together with automated tools that can be used in the software development process (P. Loucopoulos and V. Karakostas, System Requirements Engineering, 1995).

History

The Information System Design and Optimization System (ISDOS) project, started in 1968 at the University of Michigan, initiated a great deal of interest in the whole concept of using computer systems to help analysts in the very difficult process of analysing requirements and developing systems. Several papers by Daniel Teichroew fired a whole generation of enthusiasts with the potential of automated systems development. His Problem Statement Language / Problem Statement Analyzer (PSL/PSA) tool was a CASE tool, although it predated the term.

Another major thread emerged as a logical extension to the data dictionary of a database. By extending the range of metadata held, the attributes of an application could be held within a dictionary and used at runtime. This "active dictionary" became the precursor to the more modern model-driven engineering capability. However, the active dictionary did not provide a graphical representation of any of the metadata. It was the linking of the concept of a dictionary holding analysts' metadata, as derived from the use of an integrated set of techniques, together with the graphical representation of such data, that gave rise to the earlier versions of CASE.

A later entrant into the market was Excelerator from Index Technology in Cambridge, Massachusetts. While DesignAid ran on Convergent Technologies and later Burroughs Ngen networked microcomputers, Index launched Excelerator on the IBM PC/AT platform. While, at the time of launch and for several years afterwards, the IBM platform did not support networking or a centralized database as the Convergent Technologies or Burroughs machines did, the allure of IBM was strong, and Excelerator came to prominence. Hot on the heels of Excelerator were a rash of offerings from companies such as Knowledgeware (James Martin, Fran Tarkenton and Don Addington), Texas Instruments' CA Gen and Andersen Consulting's FOUNDATION toolset (DESIGN/1, INSTALL/1, FCP).

CASE tools were at their peak in the early 1990s. According to PC Magazine of January 1990, over 100 companies were offering nearly 200 different CASE tools. At the time IBM had proposed AD/Cycle, an alliance of software vendors centered on IBM's software repository using IBM DB2 on the mainframe and OS/2:

The application development tools can be from several sources: from IBM, from vendors, and from the customers themselves. IBM has entered into relationships with Bachman Information Systems, Index Technology Corporation, and Knowledgeware wherein selected products from these vendors will be marketed through an IBM complementary marketing program to provide offerings that will help to achieve complete life-cycle coverage.

With the decline of the mainframe, AD/Cycle and the big CASE tools died off, opening the market for the mainstream CASE tools of today.
Many of the leaders of the CASE market of the early 1990s ended up being purchased by Computer Associates, including IEW, IEF, ADW, Cayenne, and Learmonth & Burchett Management Systems (LBMS). The other trend that led to the evolution of CASE tools was the rise of object-oriented methods and tools. Most of the various tool vendors added some support for object-oriented methods and tools. In addition, new products arose that were designed from the bottom up to support the object-oriented approach. Andersen developed its project Eagle as an alternative to Foundation. Several of the thought leaders in object-oriented development each developed their own methodology and CASE tool set: Jacobson, Rumbaugh, Booch, etc. Eventually, these diverse tool sets and methods were consolidated via standards led by the Object Management Group (OMG). The OMG's Unified Modeling Language (UML) is currently widely accepted as the industry standard for object-oriented modeling.

CASE software

A. Fuggetta classified CASE software into three categories:

Tools support specific tasks in the software life-cycle.
Workbenches combine two or more tools focused on a specific part of the software life-cycle.
Environments combine two or more tools or workbenches and support the complete software life-cycle.

Tools

CASE tools support specific tasks in the software development life-cycle. They can be divided into the following categories:

Business and analysis modeling. Graphical modeling tools, e.g. E/R modeling, object modeling, etc.
Development. Design and construction phases of the life-cycle; debugging environments, e.g. IISE LKO.
Verification and validation. Analyze code and specifications for correctness, performance, etc.
Configuration management. Control the check-in and check-out of repository objects and files, e.g. SCCS, IISE.
Metrics and measurement. Analyze code for complexity, modularity (e.g., no "go to's"), performance, etc.
Project management. Manage project plans, task assignments, scheduling.

Another common way to distinguish CASE tools is the distinction between Upper CASE and Lower CASE. Upper CASE tools support business and analysis modeling, using traditional diagrammatic languages such as ER diagrams, data flow diagrams, structure charts, decision trees, decision tables, etc. Lower CASE tools support development activities such as physical design, debugging, construction, testing, component integration, maintenance, and reverse engineering. All other activities span the entire life-cycle and apply equally to Upper and Lower CASE.

Workbenches

Workbenches integrate two or more CASE tools and support specific software-process activities. Hence they achieve:

a homogeneous and consistent interface (presentation integration);
seamless integration of tools and toolchains (control and data integration).

An example workbench is Microsoft's Visual Basic programming environment. It incorporates several development tools: a GUI builder, a smart code editor, a debugger, etc. Most commercial CASE products tended to be such workbenches that seamlessly integrated two or more tools. Workbenches can also be classified in the same manner as tools: as focusing on analysis, development, verification, etc., as well as on Upper CASE, Lower CASE, or processes such as configuration management that span the complete life-cycle.

Environments

An environment is a collection of CASE tools or workbenches that attempts to support the complete software process.
This contrasts with tools that focus on one specific task or a specific part of the life-cycle. CASE environments are classified by Fuggetta as follows:

Toolkits. Loosely coupled collections of tools. These typically build on operating system workbenches such as the Unix Programmer's Workbench or the VMS VAX set. They typically perform integration via piping or some other basic mechanism to share data and pass control. The strength of easy integration is also one of the drawbacks: simple passing of parameters via technologies such as shell scripting can't provide the kind of sophisticated integration that a common repository database can.

Fourth generation. These environments are also known as 4GL (fourth generation language) environments, because the early environments were designed around specific languages such as Visual Basic. They were the first environments to provide deep integration of multiple tools. Typically these environments were focused on specific types of applications, for example, user-interface-driven applications performing standard atomic transactions against a relational database. Examples are Informix 4GL and Focus.

Language-centered. Environments based on a single, often object-oriented, language, such as the Symbolics Lisp Genera environment or VisualWorks Smalltalk from ParcPlace. In these environments all the operating system resources were objects in the object-oriented language. This provides powerful debugging and graphical opportunities, but the code developed is mostly limited to the specific language. For this reason, these environments were mostly a niche within CASE, used primarily for prototyping and R&D projects. A common core idea for these environments was the model–view–controller user interface, which facilitated keeping multiple presentations of the same design consistent with the underlying model. The MVC architecture was adopted by the other types of CASE environments as well as by many of the applications that were built with them.

Integrated. These environments are an example of what most IT people tend to think of first when they think of CASE. Environments such as IBM's AD/Cycle, Andersen Consulting's FOUNDATION, the ICL CADES system, and DEC Cohesion attempt to cover the complete life-cycle from analysis to maintenance and provide an integrated database repository for storing all artifacts of the software process. The integrated software repository was the defining feature of these kinds of tools. They provided multiple different design models as well as support for code in heterogeneous languages. One of the main goals for these types of environments was "round trip engineering": being able to make changes at the design level and have those automatically be reflected in the code, and vice versa. These environments were also typically associated with a particular methodology for software development. For example, the FOUNDATION CASE suite from Andersen was closely tied to the Andersen Method/1 methodology.

Process-centered. This is the most ambitious type of integration. These environments attempt to formally specify not just the analysis and design objects of the software process but the actual process itself, and to use that formal process to control and guide software projects. Examples are East, Enterprise II, Process Wise, Process Weaver, and Arcadia.
These environments were by definition tied to some methodology, since the software process itself is part of the environment and can control many aspects of tool invocation.

In practice, the distinction between workbenches and environments was flexible. Visual Basic, for example, was a programming workbench but was also considered a 4GL environment by many. The features that distinguished workbenches from environments were deep integration via a shared repository or common language, and some kind of methodology (integrated and process-centered environments) or domain (4GL) specificity.

Major CASE risk factors

Some of the most significant risk factors for organizations adopting CASE technology include:

Inadequate standardization. Organizations usually have to tailor and adopt methodologies and tools to their specific requirements. Doing so may require significant effort to integrate both divergent technologies and divergent methods. For example, before the adoption of the UML standard, the diagram conventions and methods for designing object-oriented models were vastly different among followers of Jacobson, Booch, and Rumbaugh.

Unrealistic expectations. The proponents of CASE technology—especially vendors marketing expensive tool sets—often hyped expectations that the new approach would be a silver bullet that solves all problems. In reality no such technology can do that, and if organizations approach CASE with unrealistic expectations they will inevitably be disappointed.

Inadequate training. As with any new technology, CASE requires time to train people in how to use the tools and to get up to speed with them. CASE projects can fail if practitioners are not given adequate time for training, or if the first project attempted with the new technology is itself highly mission-critical and fraught with risk.

Inadequate process control. CASE provides significant new capabilities to utilize new types of tools in innovative ways. Without the proper process guidance and controls, these new capabilities can cause significant new problems as well.

See also
Data modeling
Domain-specific modeling
Method engineering
Model-driven architecture
Modeling language
Rapid application development
Automatic programming

References
Sam Ehlinger
Samuel George Ehlinger (born September 30, 1998) is an American football quarterback for the Indianapolis Colts of the National Football League (NFL). He played high school football at Westlake in Austin, Texas, where he broke various school records held by Super Bowl-winning quarterbacks Drew Brees and Nick Foles, before committing to play college football for the Texas Longhorns. As a freshman there, Ehlinger split playing time with quarterback Shane Buechele before taking over as the starter in 2018, when he led the team to the 2018 Big 12 Championship Game and two bowl games. He was selected by the Colts in the sixth round of the 2021 NFL Draft.

Early years

Ehlinger attended and played quarterback for Westlake High School in Austin, Texas, where he was coached by Todd Dodge. Ehlinger graduated as the school's all-time leader in passing yards and touchdowns. He was named the MaxPreps National Junior of the Year after his junior season and was rated by Rivals.com as a four-star recruit. He committed to play football at the University of Texas at Austin on July 28, 2015.

College career

2017 season

Ehlinger joined the Texas Longhorns under new head coach Tom Herman, who inherited an offense led by sophomore starting quarterback Shane Buechele. Addressing competition between Ehlinger and Buechele for the starting quarterback position for the 2017 season during spring practice, offensive coordinator Tim Beck described Buechele as having quicker adaptability than Ehlinger. During the spring game on April 15, Ehlinger played on the second-team offense against the first-team defense, scoring a touchdown and passing for 148 yards. Through spring and much of summer practice, coach Herman did not explicitly declare a starting quarterback for the season, though by late August he had hinted that Buechele would be starting, citing his experience. On the depth chart for Texas' opening game against Maryland on September 2, Ehlinger was listed as the second-string quarterback. He did not see action in the eventual 51–41 loss to Maryland; however, Buechele injured his throwing shoulder over the course of the game, resulting in Ehlinger taking practice repetitions with the first-team starters following the game. After Buechele was ultimately sidelined due to injury, Ehlinger made his first career start for Texas on September 9, 2017, against the San Jose State Spartans, leading the team to a 56–0 victory with 222 passing yards and a passing touchdown alongside 48 rushing yards. This made him just the tenth freshman to start a game at quarterback for the university. The starting quarterback role remained unsettled during the following week, as Buechele was able to practice with the team while recovering from his shoulder injury. Ehlinger was eventually selected as the starter against the #4 USC Trojans, completing 21 of 40 passes with two touchdowns in the double-overtime loss. Ehlinger connected with wide receiver Armanti Foreman on a 17-yard touchdown pass with 45 seconds remaining in regulation to give the Longhorns a late lead before the Trojans equalized with a field goal to send the game into overtime. However, Ehlinger fumbled the ball in the second overtime period, allowing USC to win on a second field goal. His 298-yard passing effort was the second-most by a true freshman in university history.
A healthy Buechele reclaimed the starting quarterback role against the Iowa State Cyclones to open Big 12 Conference play, though Ehlinger did not take any snaps despite Buechele suffering an ankle injury during the subsequent victory. However, given the severity of Buechele's injury, Ehlinger started at quarterback for the Longhorns against the Kansas State Wildcats, leading Texas to a 40–34 double-overtime victory. His 30 completions, 50 passing attempts, 380 passing yards, and 107 rushing yards marked personal highs for 2017. The 380 passing yards were the most by a true freshman at Texas and the tenth most of any quarterback at the school, while the 487 total yards were the third most in school history. Ehlinger was named the Earl Campbell Tyler Rose Award Player of the Week for his efforts against Kansas State. He continued to serve as Texas' starting quarterback for his first Red River Rivalry game against the 12th-ranked Oklahoma Sooners on October 14. In the rivalry matchup, the Longhorns trailed the Sooners 0–20 in the second quarter before amassing a 24–23 lead in the fourth quarter following an 8-yard rushing touchdown by Ehlinger; however, Texas was unable to hold the lead, and a final attempt to score ended when Ehlinger threw the ball out of bounds on fourth-and-13, resulting in a 24–29 loss. Ehlinger finished the game having passed for 278 yards and rushed for 106 yards, making him the first freshman quarterback in school history to rush for over 100 yards in back-to-back games. During the fourth quarter of the game, Ehlinger briefly lay motionless following a hard tackle, sitting out a subsequent play to undergo concussion protocol before later returning to the field. Although Ehlinger stated in the post-game press conference that he "felt fine" and "wasn't ever confused where [he] was at all," Boston University Center for the Study of Traumatic Encephalopathy co-founder Christopher Nowinski expressed skepticism on Twitter that concussion protocol was properly followed. Ehlinger maintained the starting quarterback role in a 10–13 overtime loss to a top-10 Oklahoma State Cowboys team, completing 22 of 36 passes for 241 yards but throwing a game-ending interception in the end zone during overtime to seal the loss. Following the game, he showed concussion symptoms that placed him in concussion protocol, with a healing Buechele slotted into the starting quarterback role; Ehlinger did not travel with the team for their 38–7 away victory the following week over the Baylor Bears. Ehlinger was available to play against the TCU Horned Frogs, with coach Herman indicating that "[Shane Buechele, Sam Ehlinger, and Jerrod Heard] are probably going to play at one point or another." However, an inner-ear issue sidelined Ehlinger for the game. He was cleared to play against the Kansas Jayhawks the following week, serving as a backup quarterback, completing two passes and scoring one touchdown in the fourth quarter of the Longhorns' victory. He did not start at quarterback against the 24th-ranked West Virginia Mountaineers but assumed the role after two offensive series, throwing for two touchdowns as well as recording a 23-yard catch from Jerrod Heard in the 28–14 win. However, one of his throws was intercepted and returned 94 yards for a touchdown. With both Ehlinger and Buechele healthy and having started games for Texas, Herman publicly announced early that Ehlinger would start for the Longhorns against the Texas Tech Red Raiders, an unusual departure from the protocol of previous games.
Although he played well for much of the game in maintaining a lead for the Longhorns, Ehlinger threw an interception that was returned 55 yards in the closing minutes of the fourth quarter, setting up a go-ahead touchdown for Texas Tech. Ehlinger threw a second interception on Texas' final offensive opportunity to reclaim the lead, resulting in a 23–27 loss to end the regular season. Although Ehlinger received the starting nod against Texas Tech over a healthy Buechele, their performances were not sufficiently separable to declare a longer-term starting quarterback, resulting in the two evenly splitting practices in the lead-up to the 2017 Texas Bowl against the Missouri Tigers. Buechele was eventually prioritized for the bowl game, with Herman and Beck citing concerns over Ehlinger's protection of the football as a deciding factor. During the Texas Bowl, Ehlinger saw play at quarterback for part of the first half and much of the second half after Buechele suffered a groin injury, leading the Longhorns to a 33–16 victory while completing 11 of 15 passes for 112 yards and a touchdown. With the win, Ehlinger finished the season with a 2–4 record as a starter and playing time in nine total games; he was also the team's leading passer and rusher by yardage.

2018 season

At the start of spring practice in 2018, the starting quarterback role for the Texas Longhorns remained unclear. Both Ehlinger and Buechele remained the primary candidates for the position, alongside newly recruited quarterbacks Cameron Rising and Casey Thompson. During the spring game, Ehlinger served as quarterback for the White team, throwing for 151 yards and a touchdown and leading both squads with 29 rushing yards on four carries. Despite not being officially named the starting quarterback, Ehlinger appeared to edge out the other quarterbacks following spring practice, having improved substantially in the speed of his throws. In May, Athlon Sports named him the fourth-best quarterback in the Big 12 Conference. On August 20, coach Herman announced that Ehlinger would be the starting quarterback for the season opener against the Maryland Terrapins, lauding his improvements in throwing the football and his pocket presence. The Longhorns lost to the Terrapins for the second straight year, 29–34, on September 1. Ehlinger completed 21 of 39 passes for 263 yards and 2 touchdowns but threw two costly interceptions late in the fourth quarter, ensuring a Terrapins victory. Coach Herman expressed continued confidence in Ehlinger in the starting role despite the disappointing performance to close out the loss to Maryland, reiterating that he did not doubt the quarterback's skills. Against the Tulsa Golden Hurricane the following week, Ehlinger threw for 237 yards and 2 touchdowns with an additional rushing touchdown, helping to stave off a resurgent performance from the Golden Hurricane to win 28–21. He led the Longhorns to a 37–14 win over the #24 USC Trojans to close out out-of-conference play, throwing for 223 yards and 2 touchdowns in addition to rushing for 35 yards and a touchdown. Ehlinger put up similar numbers against the 17th-ranked TCU Horned Frogs to open conference play on September 22 as he had against USC, going 22 of 32 for 255 yards and 2 touchdowns through the air with an additional rushing touchdown to propel the Longhorns to a 31–16 victory.
His passing numbers moved him past his high school coach Todd Dodge in career passing yardage for the Texas Longhorns, and Ehlinger became the first Longhorns quarterback since Colt McCoy in 2008 to post three consecutive games with at least two passing touchdowns and one rushing touchdown. Ehlinger also became the first quarterback in school history to start a season with at least four consecutive games of over 200 yards passing and multiple touchdowns per game. His performance against TCU was also recognized in the Davey O'Brien Award's "Great 8" for the week. On September 29, Ehlinger completed a career-high 80.6 percent of his passes in a 207-yard passing effort with one touchdown, as well as making two receptions for 24 receiving yards, in a 19–14 win at Kansas State. On October 6, Ehlinger led the Longhorns to a 48–45 upset victory against the #7 Oklahoma Sooners and accounted for 386 total yards and five touchdowns. Of those five touchdowns, three were rushing touchdowns—both statistics were career highs for Ehlinger, and the five total touchdowns were the most by a Texas quarterback in the history of the Red River Rivalry. He also broke the school record set by Major Applewhite for consecutive passes without an interception. Ehlinger's efficient performance in the pivotal rivalry game attracted widespread praise; following the game, FOX analyst and former quarterback Matt Leinart stated that Ehlinger "became [a legend] today." Ehlinger was named the Big 12 Offensive Player of the Week, the Walter Camp Award Offensive Player of the Week, the Earl Campbell Tyler Rose Award National Player of the Week, and the Maxwell Award Player of the Week, and was listed on the Davey O'Brien Award's "Great 8" for the second time in 2018. The next week, against the Baylor Bears, he was injured on the first drive of the game, suffering a separated shoulder after throwing five passes and rushing twice. An MRI scan later confirmed the injury as a grade I AC sprain. A break in play afforded by a bye week gave Ehlinger more time to recover, allowing him to start at quarterback against Oklahoma State on October 27. In Stillwater, Oklahoma, Ehlinger scored four total touchdowns and threw for 283 yards, narrowing a 17-point halftime deficit in an eventual 35–38 loss. A second consecutive loss followed the next week against West Virginia, though Ehlinger recorded an efficient performance with 354 yards through the air—a season high—and three passing touchdowns in addition to a rushing touchdown in the 41–42 loss. The Longhorns ended the two-game losing skid with a 41–34 win against rivals Texas Tech on November 10, with Ehlinger throwing for a career-high four touchdowns and 312 passing yards, including a 29-yard go-ahead touchdown pass to Lil'Jordan Humphrey with 21 seconds remaining. By the end of the game, Ehlinger had attempted 280 passes without an interception, breaking former West Virginia quarterback Geno Smith's Big 12 record streak of 273 passes set in 2012. For his performance, Ehlinger was named one of eight Manning Award Stars of the Week, while his performance over the season made him a semifinalist for the Earl Campbell Tyler Rose Award. He completed 12 passes for 137 yards with a passing touchdown and a rushing touchdown in the first half of the game against the Iowa State Cyclones before a tackle late in the second quarter aggravated the AC sprain suffered against Baylor, sidelining him for the rest of the game.
However, he was cleared shortly after to play in the final regular-season game against the Kansas Jayhawks, where he threw for two touchdowns and ran for another in a 24–17 victory. Ehlinger threw two interceptions during the game, ending his streak of pass attempts without an interception at 308. Despite being medically cleared, Ehlinger noted that the shoulder injury "was bothering him" in the game against Kansas, though coach Herman predicted that Ehlinger would recover in time for the 2018 Big 12 Championship Game. In the conference title game against the Oklahoma Sooners on December 1, Ehlinger threw for 349 yards, passed for two touchdowns, and rushed for two touchdowns in a 27–39 loss. In the post-game press conference, Ehlinger asserted that he "would make it my mission to never let [the Longhorns] or [the University of Texas] feel this disappointment again," a sentiment that drew comparisons to a famous speech given by Tim Tebow following a loss to Ole Miss in 2008. Ahead of the 2019 Sugar Bowl matchup against the fifth-ranked Georgia Bulldogs at the New Orleans Saints' Superdome, Ehlinger wore Saints quarterback Drew Brees's high school jersey when arriving at the stadium, having attended the same high school. Ehlinger led the Longhorns to a 28–21 victory over the Bulldogs at the Sugar Bowl on January 1, 2019, despite entering the game as 13.5-point underdogs. He threw for 169 yards but was most impactful in the rushing game, where he ran 21 times for 64 yards and 3 touchdowns; for his performance, he was named the game's MVP. The three rushing touchdowns tied the Sugar Bowl record for rushing touchdowns by a quarterback, while the 16 total rushing touchdowns accumulated by Ehlinger over the season broke the school record for rushing touchdowns by a quarterback in a single season, set by Donnie Wigginton in 1971 and Vince Young in 2004. Ehlinger's seasonal touchdown effort also made him the sixth Power 5 quarterback in the previous two decades to throw for over 25 touchdowns and run for more than 15 in a single season, joining five Heisman Trophy winners.

2019 season

On the heels of a strong sophomore campaign, Ehlinger was named one of the early favorites to win the 2019 Heisman Trophy in December 2018. The Dallas Morning News considered him the best quarterback in the Big 12 Conference entering the 2019 season.

Records

University of Texas – Most rushing touchdowns by a quarterback, season (16, 2018)
Big 12 – Most consecutive pass attempts without an interception (308, 2018)

Statistics

Professional career

Ehlinger was drafted by the Indianapolis Colts in the sixth round, 218th overall, of the 2021 NFL Draft. On May 19, 2021, he signed his four-year rookie contract with the Colts. Ehlinger came into his rookie season competing for the backup position against Jacob Eason. In his first preseason game, he led the Colts to a 21–18 comeback win over the Carolina Panthers. After suffering an ACL sprain, he was placed on injured reserve to start the season on September 2, 2021, and was activated on October 19.

NFL career statistics

Personal life

Ehlinger is a Christian. Ehlinger's father, Ross, died from a heart attack in 2013 during a triathlon at age 46, and his younger brother Jake, who played linebacker for the Texas Longhorns, died on May 6, 2021, at the age of 20; his death was later ruled an accidental overdose of Xanax laced with fentanyl.
References

External links
Texas Longhorns bio
Toss bombing
Toss bombing (sometimes known as loft bombing, and by the U.S. Air Force as the Low Altitude Bombing System, LABS) is a method of bombing where the attacking aircraft pulls upward when releasing its bomb load, giving the bomb additional time of flight by starting its ballistic path with an upward vector. The purpose of toss bombing is to compensate for the gravity drop of the bomb in flight and to allow an aircraft to bomb a target without flying directly over it, either to avoid overflying a heavily defended target or to distance the attacking aircraft from the blast effects of a nuclear (or conventional) bomb.

Bomb tactics

Pop-up

In pop-up bombing, the pilot approaches from low altitude in level flight and, on cues from the computer, pulls up at the last moment to release the bomb. Release usually occurs between 20° and 75° above the horizontal, causing the bomb to be tossed upward and forward, much like an underarm throw of a ball.

Level toss

Although "pop-up" bombing is generally characterized by its low-level approach, the same technique of a toss starting from level flight can be used at any altitude when it is not desirable to overfly the target. Additional altitude at release gives the bomb additional time of flight and range, at the cost (in the case of unguided munitions) of accuracy, due to windage and the increased effect of a slight deviation in flight path.

Dive toss

The dive-toss delivery technique was the first "toss" bombing method, developed after WWII at the US Navy's rocket development center at Inyokern, California, in 1947 as a way to attack heavily defended targets without unduly endangering the attacking aircraft. Although toss bombing might seem the direct opposite of dive bombing, where the plane pitches downward to aim at its target, toss bombing is often performed with a short dive before the bomber raises its nose and releases its bomb. This variant is known as "dive tossing". The dive gives both the bomb and aircraft extra momentum, helping the aircraft regain altitude after the release and also ensuring that airspeed at the calculated release point is still sufficient to get the bomb to the target.

Over-the-shoulder

A more dynamic variant of toss bombing, called over-the-shoulder bombing, or the LABS (Low Altitude Bombing System) maneuver (known to pilots as the "idiot's loop"), is a particular kind of loft bombing where the bomb is released past the vertical so it is tossed back toward the target. This tactic was first made public on 7 May 1957 at Eglin AFB, when a B-47 entered its bombing run at low altitude, pulled up sharply (3.5 g) into a half loop, released its bomb under automatic control at a predetermined point in its climb, then executed a half roll, completing a maneuver similar to an Immelmann turn or Half Cuban Eight. The bomb continued upward for some time in a high arc before falling on a target a considerable distance from its point of release. In the meantime, the maneuver had allowed the bomber to change direction and distance itself from the target.

Author and retired USAF F-84 pilot Richard Bach describes such an attack in his book Stranger to the Ground:

The last red-roofed village flashes below me, and the target, a pyramid of white barrels, is just visible at the end of its run-in line. Five hundred knots. Switch down, button pressed. Timers begin their timing, circuits are alerted for the drop. Inch down to treetop altitude.
I do not often fly at 500 knots on the deck, and it is apparent that I am moving quickly. The barrels inflate. I see that their white paint is flaking. And the pyramid streaks beneath me. Back on the stick smoothly firmly to read four G on the accelerometer and center the needles of the indicator that is only used in nuke weapon drops and center them and hold it there and I'll bet those computers are grinding their little hearts out and all I can see is sky in the windscreen hold the G's keep the needles centered there's the sun going beneath me and WHAM. My airplane rolls hard to the right and tucks more tightly into her loop and strains ahead even though we are upside down. The Shape has released me more than I have released it. The little white barrels are now six thousand feet directly beneath my canopy. I have no way to tell if it was a good drop or not. That was decided back with the charts and graphs and the dividers and the angles. I kept the needles centered, the computers did their task automatically, and the Device is on its way.

Tactical use

Toss bombing is generally used by pilots whenever it is not desirable to overfly the target with the aircraft at an altitude sufficient for dive-bombing or level bombing. Such cases include heavy anti-air defenses such as AAA and SAMs, the deployment of powerful weapons such as "iron bombs" or even tactical nuclear bombs, and the use of limited-aspect targeting devices for guided munitions. To counter air defenses en route to the target, remaining at a low altitude for as long as possible allows the bomber to avoid radar and visual tracking and the launch envelope of older missile systems designed to be fired at targets overflying the missile site. However, a level pass at the target at low altitude will not only expose the aircraft to short-range defenses surrounding the target, but will place the aircraft in the bomb's blast radius. By executing a "pop-up" loft, on the other hand, the pilot releases the munition well outside the target area, out of range of air defenses. After release, the pilot can either dive back to low altitude or maintain the climb, in either case generally executing a sharp turn or "slice" away from the target. The blast produced by powerful munitions is thus (hopefully) avoided.

The value of toss bombing increased with the introduction of precision-guided munitions such as the laser-guided bomb. Earlier "dumb bombs" required a very high degree of pilot and fire-control-computer precision to loft the bomb accurately to the target. Unguided loft bombing also generally called for the use of a larger bomb than would be necessary for a direct hit, in order to generate a larger blast that would destroy the target even if the bomb did not hit accurately due to windage or computer/pilot error. Laser targeting (and other methods like GPS, as used in the JDAM system) allows the bomb to correct minor deviations from the intended ballistic path after it has been released, making toss bombing as accurate as level bombing while still providing most of the advantages of toss bombing with unguided munitions. However, the targeting pods used to deliver guided munitions generally have a limited field of view; most significantly, the pod usually cannot look behind the aircraft at more than a certain angle. Lofting the bomb allows the pilot to keep the target in front of the aircraft, and thus within the targeting pod's field of view, for as long as possible.
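The standoff gained by lofting follows from elementary projectile motion. The short Python sketch below is illustrative only: it ignores drag and wind, which a real fire-control computer must model, and the speed and altitude figures are chosen arbitrarily. It shows how release angle stretches the forward travel of a bomb released at a given speed and altitude.

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def loft_range(speed_ms: float, angle_deg: float, altitude_m: float) -> float:
    """Drag-free forward travel of a bomb released at the given speed,
    climb angle, and altitude above the target elevation."""
    theta = math.radians(angle_deg)
    vx = speed_ms * math.cos(theta)   # horizontal velocity at release
    vy = speed_ms * math.sin(theta)   # vertical velocity at release
    # Time until impact: solve altitude + vy*t - G*t^2/2 = 0 for t > 0.
    t = (vy + math.sqrt(vy * vy + 2.0 * G * altitude_m)) / G
    return vx * t

v = 257.0  # roughly 500 knots, in m/s
for angle in (0, 20, 45, 75):
    print(f"{angle:2d} deg release: {loft_range(v, angle, 100.0):7.0f} m")
# 0 deg  -> about 1,160 m; 45 deg -> about 6,800 m
```

In this simplified model, a 45° loft from low level carries the bomb several times farther than a level release at the same speed, which is exactly the standoff from target defenses and blast effects that the tactic is after; steeper releases trade range back for arc, as in the over-the-shoulder maneuver.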
"Dive-tossing" is generally used at moderate altitude (to allow for the dive) when the target, for whatever reason, cannot be designated precisely by radar. A target for instance may present too small a signature to be visible on radar (such as the entrance to an underground bunker) or may be indistinguishable in a group of radar returns. The pilot can in this case use a special "boresight" mode that allows the pilot to designate a target by pointing his aircraft directly at it. For a target on the ground, this means entering a dive. Thus designated, the pilot can then begin a climb, lofting the bomb at the target from a distance and regaining lost altitude at the same time. Technology Due to the intense pilot workload involved with flying and entering the window of opportunity, some aircraft are equipped with a “Toss Bomb Computer” (in US nuclear delivery, a part of the Low Altitude Bombing System) that enables the pilot to release the bomb at the correct angle. The Toss Bomb Computer takes airspeed inputs from the aircraft's pitot system, altitude inputs from the static system, attitude inputs from the gyroscopic system and inputs from weapons selectors signifying the type of bomb to calculate the appropriate release point of the ordnance. Instead of triggering the release directly, the pilot instead "consents" to release the weapon, then begins a steady climb. The computer then calculates the desired ballistic path, and when that path will be produced by the current aircraft attitude and airspeed, the computer releases the bomb. During the Second World War the engineers Erik Wilkenson and Torsten Faxén at Saab developed the first bomb sight for toss bombing. It was a mechanical computer that did the necessary calculations. It was first used in the Saab 17 and was standard on all Saab fighters up to and including Saab 32 Lansen. It was also sold to France, Switzerland, Denmark and USA and was used in for instance the Boeing B-47 Stratojet. While deployed in Europe with NATO, RCAF CF-104 fighter-bombers carried a Toss Bomb Computer until their nuclear role was eliminated by the Canadian government effective 1 January 1972. The same computational solutions used in the LABS system are now incorporated into two of the major bombing modes (the computer-controlled CCRP and a dedicated visually oriented "Dive-Toss" mode) of the Fire Control Computer of modern strike fighters such as the F-15E and F-16. As with LABS, the pilot designates their desired impact point, then consents to release while executing a climb, and the computer controls the actual release of the bomb. The integration into the FCC simplifies the pilot's workload by allowing the same bombing mode (CCRP) to be used for level, dive and loft bombing, providing similar cues in the pilot's displays regardless of the tactics used, since the computer simply sees it as the release point getting closer. See also Skip bombing Tactical bombing Strategic bombing Carpet bombing References External links “Over-the-Shoulder” Fact Sheet, National Museum of the United States Air Force (Archive.org) Aerial bombing Aerial maneuvers
Mechanical computer
A mechanical computer is a computer built from mechanical components such as levers and gears rather than electronic components. The most common examples are adding machines and mechanical counters, which use the turning of gears to increment output displays. More complex examples could carry out multiplication and division—Friden used a moving head which paused at each column—and even differential analysis. One model, the Ascota 170 accounting machine sold in the 1960s, calculated square roots. Mechanical computers can be either analog, using smooth mechanisms such as curved plates or slide rules for computations, or digital, which use gears.

Mechanical computers reached their zenith during World War II, when they formed the basis of complex bombsights including the Norden, as well as similar devices for ship computations such as the US Torpedo Data Computer and the British Admiralty Fire Control Table. Noteworthy are mechanical flight instruments for early spacecraft, which provided their computed output not in the form of digits but through the displacements of indicator surfaces. From Yuri Gagarin's first manned spaceflight until 2002, every manned Soviet and Russian spacecraft (Vostok, Voskhod, and Soyuz) was equipped with a Globus instrument showing the apparent movement of the Earth under the spacecraft through the displacement of a miniature terrestrial globe, plus latitude and longitude indicators.

Mechanical computers continued to be used into the 1960s but were quickly replaced by electronic calculators, which—with cathode-ray tube output—emerged in the mid-1960s. The evolution culminated in the 1970s with the introduction of inexpensive handheld electronic calculators. The use of mechanical computers declined in the 1970s and was rare by the 1980s.

In 2016, NASA announced that its Automaton Rover for Extreme Environments program would use a mechanical computer to operate in the harsh environmental conditions found on Venus.

Examples

Antikythera mechanism, c. 100 BC – A mechanical astronomical clock.
Cosmic Engine, 1092 – Su Song's hydro-mechanical astronomical clock tower, invented during the Song dynasty, which featured the use of an early escapement mechanism applied to clockwork.
Castle clock, 1206 – Al-Jazari's castle clock, a hydropowered mechanical astronomical clock, was the earliest programmable analog computer.
Pascaline, 1642 – Blaise Pascal's arithmetic machine, primarily intended as an adding machine, which could add and subtract two numbers directly, as well as multiply and divide by repetition.
Stepped Reckoner, 1672 – Gottfried Wilhelm Leibniz's mechanical calculator that could add, subtract, multiply, and divide.
Difference Engine, 1822 – Charles Babbage's mechanical device to calculate polynomials by the method of finite differences (see the sketch at the end of this article).
Analytical Engine, 1837 – A later Charles Babbage device that could be said to encapsulate most of the elements of modern computers.
Odhner Arithmometer, 1873 – W. T. Odhner's calculator, of which millions of clones were manufactured until the 1970s.
Ball-and-disk integrator, 1886 – William Thomson used it in his Harmonic Analyser to measure tide heights by calculating coefficients of a Fourier series.
Percy Ludgate's 1909 Analytical Machine – The second of only two mechanical Analytical Engines ever designed.
Marchant Calculator, 1918 – The most advanced of the mechanical calculators. The key design was by Carl Friden.
Gamma-Juhász (early 1930s) – Anti-aircraft gun predictor designed by István Juhász.
Kerrison Predictor (late 1930s)
Z1, 1938 (ready in 1941) – Konrad Zuse's mechanical calculator, although imprecision in its parts hindered its function.
Mark I Fire Control Computer – Deployed by the United States Navy during World War II (1939 to 1945) and up to 1969 or later.
Curta calculator, 1948.
Moniac, 1949 – An analog computer used to model or simulate the UK economy.
Voskhod spacecraft "Globus" IMP navigation instrument, early 1960s.
Digi-Comp I, 1963 – An educational 3-bit digital computer.
Digi-Comp II, mid-1960s – A rolling-ball digital computer.
Automaton – Mechanical devices that, in some cases, can store data, perform calculations, and perform other complicated tasks.
Turing Tumble, 2017 – An educational Turing-complete computer partially inspired by the Digi-Comp II.

Electro-mechanical computers

Early electrically powered computers constructed from switches and relay logic, rather than vacuum tubes (thermionic valves) or transistors (from which later electronic computers were constructed), are classified as electro-mechanical computers. These varied greatly in design and capabilities, with some later units capable of floating-point arithmetic. Some relay-based computers remained in service after the development of vacuum-tube computers, where their slower speed was compensated for by good reliability. Some models were built as duplicate processors to detect errors, or could detect errors and retry the instruction. A few models were sold commercially with multiple units produced, but many designs were experimental one-off productions.

See also
Analog computer
Billiard-ball computer
Domino computer
History of computing hardware
List of pioneers in computer science
Mechanical calculator
Tide-Predicting Machine No. 2
Turing completeness

References

External links
Electro-mechanical Harwell computer in action
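As referenced in the Difference Engine entry above, the method of finite differences lets a machine tabulate any polynomial using nothing but repeated addition, which is precisely what Babbage's columns of gears mechanized. The short Python sketch below demonstrates the method; the function names and the sample polynomial are illustrative, not part of any historical design.

```python
def difference_table(poly_values):
    """Given the first values of a degree-n polynomial at unit steps,
    return the initial differences [f(0), delta f(0), delta^2 f(0), ...]."""
    diffs = []
    row = list(poly_values)
    while row:
        diffs.append(row[0])
        row = [b - a for a, b in zip(row, row[1:])]  # next difference row
    return diffs

def tabulate(initial_diffs, count):
    """Extend the table by pure addition, as the engine's gear columns did:
    each step adds every difference into the entry above it."""
    cols = list(initial_diffs)
    out = []
    for _ in range(count):
        out.append(cols[0])
        for i in range(len(cols) - 1):
            cols[i] += cols[i + 1]   # carry each difference upward
    return out

# f(x) = 2x^2 + 3x + 1 has a constant second difference (here, 4),
# so three seed values determine the whole table.
f = lambda x: 2 * x * x + 3 * x + 1
seed = difference_table([f(x) for x in range(3)])
print(tabulate(seed, 6))         # [1, 6, 15, 28, 45, 66]
print([f(x) for x in range(6)])  # matches the direct evaluation
```

Because a degree-n polynomial has a constant nth difference, the machine never needs to multiply: setting the initial column values "programs" the polynomial, and turning the crank produces successive table entries by addition alone.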
Georgia 4-H
Georgia 4-H was founded in 1904 by G. C. Adams in Newton County, Georgia, United States, as the Girls Canning and Boys Corn Clubs. The Georgia 4-H Program is a branch of Georgia Cooperative Extension, which is part of the University of Georgia College of Agriculture and Environmental Sciences, and is funded by the University System of Georgia and private partners.

History

Georgia 4-H began with the special Boys Corn Club contest first organized by Superintendent of Schools G. C. Adams. Like the corn club he organized 100 years ago, G. C. Adams was unique. He ranked high as an educator: he taught at Pine Grove School in Newton County, he was principal of Palmer Institute at Oxford, he served as county school commissioner, and he was the president of the Fifth District Agriculture School at Monroe. Yet Mr. Adams never attended high school or college, and he did not go to school more than a year in his entire life. Writing about Mr. Adams in the Atlanta Constitution after he had been elected Georgia commissioner of agriculture in 1932, Stiles A. Martin called him "one of the best educated, best read and most learned men in the state."

Perhaps Mr. Adams' greatest accomplishment was organizing the corn club, and he is best known for that; but he was a pioneer in other fields, too. He also single-handedly developed a plan for transporting school children, which probably led to the school buses of today. In the same year he organized an oratorical association, the first in the South. The plan was for pupils of the various schools of the county to meet and put on a program, with awards made to the schools making the best showing. Out of this grew the field days held in many places today, featuring musical contests, debating, and other events. Mr. Adams also served in the state legislature: he was elected to represent Newton County in 1926 and served two years.

W. L. Weber was Mr. Adams' good friend. He was head of the English Department for Emory-at-Oxford College, and he and Mr. Adams shared many walks from Oxford to Covington. It was during one of these walks in 1903 that Mr. Weber, who was from Illinois, told Mr. Adams about the success of the first known boys' corn-growing contest, held in Winnebago County, Illinois, during 1900. The idea was spreading rapidly to other states. "Prof. W.L. Weber, of Emory College, who always manifests great interest in our public school, deserves credit for inaugurating this unique contest in Newton" – G.C. Adams. This conversation sparked Mr. Adams to begin making the plans, which he would announce during the fall of 1904, for the first Newton County Boys Corn Club, which developed into the present-day 4-H club.

The plans for the contest were announced in a small article in the Covington Enterprise newspaper on December 23, 1904. Mr. Adams later published the rules for the contest on February 3, 1905, this time in a large article on the front page. He set an entry deadline of March 15. The contest was open to any boy 6 to 18 years old who was enrolled in any of the county's public schools. Each boy would do all the work of raising his corn crop; the contestant was not allowed to have any assistance. There was no limit on the variety of corn planted or the extent of the field. Each boy selected any ten ears of corn out of his entire patch.
He was to nail them in a rat-proof box and deliver it to the Newton County Courthouse by October 7; the corn would be weighed on October 16 and the weight recorded on the box. Of the 101 boys entering the contest, only 32 exhibited their corn. The first-place winner was George Plunkett, the second-place winner was Tom Greer, and the third-place winners were brothers Paul and Walter Cowan. Other details of this contest are given in the Congressional Record of the 84th Congress, First Session, on January 10, 1955.

Around 1907, Oscar Herman Benson designed the first emblem for the clubs. It was a three-leaf clover, which stood for head, heart, and hands; second-year members received a fourth H. In 1911, the 4-H design was adopted and health was added as the fourth H. The emblem has stood for head, heart, hands, and health ever since.

The Corn Club was followed by many agricultural project clubs in the county and state. The most famous was the Girls Canning Club, started in 1911. Just as the boys' work started with one crop, the same method was used for the girls' club work. The tomato was selected because it was universally grown and appreciated, it was not too difficult to get a good crop, and it was acidic and therefore easy to can without too much spoilage. Each girl was asked to plant a plot large enough to provide tomatoes not only for the family but also for sale.

By the time Congress passed the Smith-Lever Act on May 8, 1914, creating the Cooperative Extension Service, boys and girls all over Georgia were active in one or more of the project clubs. Their work was supervised by volunteer leaders and a few paid workers in some counties. After 1914, County Agents and Home Demonstration Agents were employed in counties throughout Georgia, in positions funded by county, state, and federal funds. These agents provided the leadership for disseminating agricultural and home economics research information to farmers, homemakers, youth, and community organizations. Also in 1914, the Georgia Poultry Club was started, which required each member to prepare at least one setting of purebred eggs. By 1915, Georgia had 5,507 club girls and 14,275 club boys.

Around 1921, serious thought began to be given to bringing back interest and developing steady growth in 4-H club work. Businessmen and leaders of agricultural organizations established the National Committee on Boys and Girls Club Work, with E. T. Meredith as chairperson, in 1921. The organization was based in Chicago, where the first National 4-H Club Congress was held in 1922. President Calvin Coolidge accepted honorary chairmanship of the National Committee on Boys and Girls Work, the start of a tradition followed by each succeeding U.S. President. This was also about the time that the name 4-H Club came about, instead of project clubs. It was thought that emphasis should be placed on the community, county, and state organization of 4-H club members, and that this organizational idea should be combined with an emphasis on social, recreational, and leadership training. Under the leadership of Mary Creswell and J. Phil Campbell, Georgia 4-H Clubs grew from 350 members in 1910 to 27,000 in 1920. It wasn't until 1924 that club work acquired the name of 4-H and the 4-H emblem was patented. In 1927, state 4-H leaders adopted the national 4-H pledge and the 4-H motto at the first National 4-H Club Camp. In 1933, Georgia started the first Wildlife Conservation Camp.
P. H. Stone became the first Negro state 4-H leader in Georgia in 1924. By 1937, Georgia had county agents working in every county, and 4-H enrollment had grown to 82,962. Land was acquired in 1939 in Dublin, Georgia, to build the Negro 4-H Center. The center hosted 150 meetings for 5,000 people annually. The headquarters for black Extension work was at Savannah State College until 1967.

With U.S. entry into World War II, 4-H'ers across the country responded to the need for increased agricultural production and support of the war effort. 4-H members were directly responsible for more than 77,000 head of dairy cattle, 246,000 swine, and 210,000 head of other cattle. 4-H contributed more than 40,000 tons of forage crops and 109,000 bushels of root crops. By 1942, 4-H had 1.6 million members, gaining 650,000 new members during the war.

District Project Achievement (DPA) meetings were set up in each Extension District in 1935. The Georgia Master 4-H Club was created during the same year; becoming a Master 4-H'er is the highest award offered in 4-H. The first District Project Achievement meeting was held at Camp Wilkins with 200 members present. As an outgrowth of these contests and state contests, a State 4-H Club Congress was first held in Atlanta, Georgia, in 1943, with 53 4-H club members attending.

In 1948, the Georgia 4-H Club Foundation was organized to help further 4-H work in the state. The Foundation helped establish 4-H Club Centers at Rock Eagle and Dublin. Each 4-H Club member was asked to donate one dozen eggs to the Foundation during 1949; by year's end there was $7,000 in the bank. In 1952, construction began on Rock Eagle near Eatonton, Georgia. Bill Sutton raised $2.5 million to build the center on a tract of land. The Center was dedicated Oct. 30, 1954. It is now one of the largest 4-H Centers in the country, hosting 4-H'ers, students, and adults year-round for 4-H camp, environmental education, and conferences.

In 1963, the World Atlas of 4-H was published by the National 4-H Foundation, indicating 84 4-H and similar programs in 75 countries. Georgia's enrollment of 150,000 was the largest in the nation. In 1956, Newton County 4-H boys and girls worked at Belks to raise money to help finance other members who were selected as county winners to represent Newton County at the Northwest District Project Achievement at Rock Eagle 4-H Center. Newton County had two Extension Programs, one white and one black, until 1965, when they were combined under the leadership of one County Extension Director. The black Extension staff was transferred from Savannah State College to Fort Valley State College in 1967. Both the University of Georgia and Fort Valley State University now conduct active 4-H programs for all Georgia youth.

4-H'ers celebrated the nation's bicentennial in 1976 with a new citizenship program called The Sunshine Brigade and rode an old-fashioned wagon train to the nation's capital. In 1994, 4-H joined the Character Counts! Coalition to develop a training program for teens to work with young members on the six pillars of character; Georgia 4-H has been an active participant and leader in this effort. In 2008, Georgia 4-H had 180,000 members. Georgia 4-H'ers continue to take part in judging competitions, knowledge quiz bowls, livestock shows, animal education shows, food and nutrition contests, teen leadership programs, essay contests, educational camps and conferences, the Clovers & Co.
performing arts group, the International 4-H Youth Exchange program, and many other educational and recreational opportunities.

March 2010 cancellation scare

On March 1, 2010, it was announced that University of Georgia President Michael F. Adams had proposed to the University System of Georgia's Board of Regents that the Georgia 4-H Program be completely eliminated in the wake of $300 million in budget cuts made by the University System.

Organization

Cloverleaf, Junior, and Senior 4-H'ers

The Georgia 4-H Club classifies its 4-H'ers into three different groups: Cloverleafs, Juniors, and Seniors. Cloverleafs are 4th, 5th, and 6th grade 4-H'ers; Juniors are 7th and 8th grade; and Seniors are 9th through 12th grade 4-H'ers. Cloverleaf 4-H'ers may be involved in showing livestock, presenting projects up to the district level, and running for office at the school level. Should they stay active in the club long enough to become Juniors, they become eligible to attend events such as Junior Conference and District Project Achievement weekend, both held annually in winter. Seventh grade Junior 4-H'ers are also eligible to run for their district's Junior 4-H Board of Directors. Upon becoming Seniors in the summer after their eighth grade year, 4-H'ers may attend State 4-H Council and Fall Forum, compete at District Project Achievement for a trip to State 4-H Congress the following summer, and run for their district's Senior 4-H Board of Directors or the Georgia 4-H Board of Directors.

Georgia 4-H districts

Georgia 4-H is split into four districts to represent all 159 counties in the state of Georgia: Northeast, Northwest, Southeast, and Southwest.

The Northeast District serves the following counties: Baldwin, Banks, Barrow, Butts, Clarke, Columbia, Dawson, Elbert, Fannin, Franklin, Gilmer, Glascock, Greene, Habersham, Hall, Hancock, Hart, Jackson, Jasper, Jones, Lincoln, Lumpkin, Madison, McDuffie, Monroe, Morgan, Oconee, Oglethorpe, Pickens, Putnam, Rabun, Richmond, Stephens, Towns, Union, Walton, Warren, White, and Wilkes.

The Northwest District serves the following counties: Bartow, Bibb, Carroll, Catoosa, Chattahoochee, Chattooga, Cherokee, Clayton, Cobb, Coweta, Crawford, Dade, DeKalb, Douglas, Fayette, Floyd, Forsyth, Fulton, Gordon, Gwinnett, Haralson, Harris, Heard, Henry, Lamar, Meriwether, Murray, Muscogee, Newton, Paulding, Pike, Polk, Rockdale, Spalding, Talbot, Troup, Upson, Walker, and Whitfield.

The Southeast District serves the following counties: Appling, Atkinson, Bacon, Bleckley, Brantley, Bryan, Bulloch, Burke, Camden, Candler, Charlton, Chatham, Coffee, Dodge, Effingham, Emanuel, Evans, Glynn, Jeff Davis, Jefferson, Jenkins, Johnson, Laurens, Liberty, Long, McIntosh, Montgomery, Pierce, Screven, Tattnall, Telfair, Toombs, Treutlen, Twiggs, Ware, Washington, Wayne, Wheeler, and Wilkinson.

The Southwest District serves the following counties: Baker, Ben Hill, Berrien, Brooks, Calhoun, Clay, Clinch, Colquitt, Cook, Crisp, Decatur, Dooly, Dougherty, Early, Echols, Grady, Houston, Irwin, Lanier, Lee, Lowndes, Macon, Marion, Miller, Mitchell, Peach, Pulaski, Quitman, Randolph, Schley, Seminole, Stewart, Sumter, Taylor, Terrell, Thomas, Tift, Turner, Webster, Wilcox, and Worth.

Georgia 4-H activities and events

Georgia 4-H offers an array of competitions, conventions, and training retreats that instill in its participants a number of valuable skills that will benefit them throughout the course of their lives.
Events are usually coordinated by District or State 4-H staff, volunteer leaders, and a peer-elected Board of Directors. 4-H'ers have the opportunity to experience responsibility in projects dealing with livestock, judging, and other programs. Georgia 4-H partners with Georgia FFA and the UGA Animal and Dairy Science Department to provide several of these programs. Every year, 2,400 4-H'ers complete a year-long process to prepare more than 4,500 animals for exhibition at the Georgia Junior National Livestock Show and other competitions.

Georgia 4-H State Board of Directors

The election of the Georgia 4-H Board of Directors is held during State 4-H Council at the annual meeting of the Georgia Council. The election is a peer election, with voters from each of the counties in attendance; each district is awarded a maximum number of votes for the election. Candidates make use of campaign speeches, skits, meet-and-greets, extemporaneous questions, and campaigning to be elected. The general election is held first, with voting delegates selecting five individuals from the general poll. The top vote-getter becomes president, the second becomes vice president, and the other three become state representatives. The remaining candidates are returned to district ballots, with each voting delegate selecting one candidate in their district election.

Dean's Award

Receiving a Dean's Award is considered one of the highest honors that a 4-H'er can earn. Participants create a résumé in the areas of Leadership, Citizenship, Communications and the Arts, Agricultural and Environmental Sciences, and Family and Consumer Sciences, and compete in an interview. Winners become Master 4-H'ers, earn a $500 scholarship and a medallion, and are recognized at the Annual Banquet of Georgia 4-H Congress and 4-H & Leadership Day at the Capitol.

District Project Achievement

District Project Achievement is a fun, competition-filled weekend for 4-H'ers in their corresponding district at Rock Eagle. During the weekend, 4-H'ers share their knowledge about a particular subject with others in a presentation. "4-H'ers chose from project areas for their presentations. Project areas included international, veterinary science, air science, computers, water conservation, photography, public speaking, plant and soils, performing arts, safety, agriculture awareness, poultry, beef, sports, etc" (Chapman). Cloverleaf 4-H'ers are allowed a five-minute setup time with six minutes for their presentations (four minutes for performing arts). Junior 4-H'ers are allowed a five-minute setup time with ten minutes for their demonstrations (four minutes for performing arts). Senior 4-H'ers are allowed a five-minute setup time with twelve minutes for their demonstrations (ten minutes for public speaking and four minutes for performing arts). The demonstration or performance is given in front of a panel of three judges and a small audience. Junior and Senior 4-H'ers must turn in a portfolio before competing: a one-year record of project work, leadership, and service. The portfolio is scored separately from the presentation and added to the overall score. Junior 4-H'ers are scored 40% on portfolios and 60% on demonstrations; Senior 4-H'ers are scored 50% on portfolios and 50% on presentations. The Senior 4-H winners of District Project Achievement advance to State Congress in hopes of becoming state winners.
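The portfolio and presentation weightings above translate directly into a combined score. A minimal Python sketch follows; the 0–100 score scales are assumed for illustration, and Georgia 4-H's actual scoresheets may differ in detail.

```python
# Weighted District Project Achievement score, per the splits stated above.
# The 0-100 scales for the two components are assumed for illustration.
WEIGHTS = {
    "junior": {"portfolio": 0.40, "presentation": 0.60},
    "senior": {"portfolio": 0.50, "presentation": 0.50},
}

def dpa_score(level: str, portfolio: float, presentation: float) -> float:
    """Combine portfolio and presentation scores at the level's weights."""
    w = WEIGHTS[level]
    return w["portfolio"] * portfolio + w["presentation"] * presentation

print(dpa_score("junior", 88, 94))  # 91.6 -- demonstration counts more
print(dpa_score("senior", 88, 94))  # 91.0 -- an even split
```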
District Project Achievement is more than competing for awards: throughout the weekend there are plenty of chances for community service projects and recreational games. It is also an opportunity for 4-H'ers to run for the Junior or Senior District Board of Directors.

Fall Forum

State 4-H Council

State Council is an annual meeting of Georgia Senior 4-H'ers. This event is open to all Senior 4-H'ers, including rising 9th graders and recently graduated 12th graders, and is planned by the incumbent Georgia 4-H Board of Directors. Between the Iron Clover Competition and the election of the new Georgia 4-H Board of Directors, Georgia 4-H Council always guarantees its participants a fun and eventful weekend.

The Iron Clover Competition

The annual Iron Clover Competition is held at State 4-H Council. The event takes its name from the iron statue of the 4-H emblem standing in front of Sutton Hall at Rock Eagle, designed to commemorate the district-wide competition. During the competition, each district of the Georgia 4-H Club (Northeast, Southeast, Northwest, and Southwest) competes against the others in fun athletic competitions ranging from softball to ultimate frisbee to canoeing. At the end of the weekend, the district team that has been awarded the most Iron Clover points wins the title of "Iron Clover Champion" for the coming year.

Board of Directors election

Those who are elected to State 4-H Office proceed immediately to Georgia Officer Training, which lasts until noon on the following Wednesday. The new officers are installed each year at the Thursday night banquet at State 4-H Congress.

Georgia National Fair

The Georgia National Fair celebrated its 20th anniversary with 416,709 attending, and it has been named one of the Top 50 Fairs. It supports various organizations such as 4-H, FFA, FBLA, FCCLA, and TSA. It is always held around the second week of October; the next fair will be October 6–16, 2011. There are several 4-H activities throughout the fair week. The Photo Contest is open to youth and adults; individuals can submit photos in categories including General 4-H Photos, Focus on 4-H Youth & Adult, and Focus on Agriculture. The top 10 photos are displayed at the Georgia National Fair, and the top 20 are displayed at 4-H State Congress. Other competitions include Mini-Exhibits, the Talent Contest, the Speech Contest, quiz bowls, the Pumpkin Contest, and Chicken BBQ Contests. Swine, sheep, cattle, goat, and horse shows are also held for 4-H'ers to compete in. Every year Georgia 4-H also hosts the Clover Café food booth, with chicken donated by the Georgia Poultry Commission; the food is inexpensive, and the profit helps sponsor activities in the Georgia 4-H program. 4-H volunteers, Collegiate 4-H'ers, and Senior 4-H'ers may volunteer to work in the booth.

I am Georgia 4-H

The "I am Georgia 4-H" campaign grew from the stories and tributes shared in March 2010, when Georgia 4-H was slated for elimination. The purpose of the campaign is to promote Georgia 4-H throughout the state and for former and current members to share how 4-H has changed and affected their lives in a positive way.

Leadership Day at the Capitol

Leadership Day at the Capitol is sponsored by the Department of Community Affairs, the Fanning Institute, Georgia 4-H, and the Georgia Academy for Economic Development. Leadership Day is a time when leaders from all across Georgia come together to share best practices and success stories and identify ways to improve leadership efforts in Georgia.
Combined with 4-H Day at the Capitol, it provides 4-H'ers an opportunity to learn about the leadership opportunities offered in their communities, listen to speakers, tour the state capitol and meet with their representatives. Operation Military Kids Operation Military Kids (OMK) is a branch of Georgia 4-H that reaches out to youth with a family member deployed in military service. During the summer, OMK partners with the Georgia National Guard for a week of summer camp at the Wahsega 4-H Center in Dahlonega. The youth who attend have family members in the Georgia National Guard or currently deployed overseas. In 2009 the Naval Submarine Base Kings Bay club was named the top military 4-H club of the year. State 4-H Congress Georgia State 4-H Congress is a four-day event filled with competitions, interviews, tours, and relaxation. Delegates to State 4-H Congress must win first place or receive a sweepstakes scholarship in their project field from their respective district in order to attend. At State 4-H Congress, delegates compete against other first place or sweepstakes winners in the project field in which they competed at District Project Achievement. Before or after their demonstration, delegates must meet with a panel of judges to discuss their portfolio, a résumé of their project work over the course of the year. Once the presentation and portfolio discussion are completed, the two scores are tallied into the delegate's overall score. Delegates who win first place in their project become state winners and may represent Georgia at National 4-H Congress. The day after competition, the delegates begin their tours. They tour businesses and buildings that supported their projects and provided funding for them to participate during the week. Past tours have included Turner Field, CNN Center, and the Crowne Plaza Perimeter-Ravinia Hotel, to name a few. The penultimate day of the week takes place at Six Flags Over Georgia, where delegates get to relax with friends in the park all day. Before the delegates return to the hotel, the new Master 4-H'ers are announced at the park. Georgia 4-H Communications Team The Georgia 4-H Communications Team's goal is to increase the use of computer technology, including communication, computer programming, engineering, GPS/GIS systems, graphic design, photography, podcasting, science, teaching and training of 4-H'ers and adults, web development, web and technology program delivery, wireless technology, writing, and videography, to better the Georgia 4-H program. The communications team has also implemented the 4-H Science, Engineering, and Technology (4-H SET) standards in its activities. The communications team meets twice a year for retreats, which have been held at Rock Eagle, the University of Georgia Miller Learning Center in Athens, and Founder's Lodge. During the retreats, team members are split into tracks: a graphics track, a videography track, a web development track, and a GPS/GIS track. These track groups are taught and led by Collegiate Advisors, past members of the team who volunteer their time. The Collegiate Advisors lead the track groups through projects presented during the weekend. These are community service projects that help the 4-H'ers build their portfolios. Need-A-Computer Program The Need-A-Computer Program is conducted by the Georgia 4-H Communications Team. The program is open to Cloverleaf, Junior, and Senior 4-H'ers.
The team receives a number of donated computers from various businesses and schools and refurbishes them for distribution to children based on need. The program was founded by Collegiate Advisors Rachel and Amanda McCarthy. It started when Rachel and her father, Jim McCarthy, refurbished old computers and donated them to 4-H'ers in Walton County who were in need of one. The donated computers are desktop computers with a Pentium III or higher processor running at 850 MHz or more, 256 megabytes of RAM, and a 20 gigabyte hard drive. All computers ship with a monitor, keyboard, mouse, and speakers; computers are delivered at Fall Forum each year. In 2009 the Georgia 4-H Technology Communications Team received the State Farm Youth Advisory Team grant, which has helped the team refurbish and distribute the computers. Cyber Security Initiative In 2010 the Georgia 4-H Youth Leadership Technology Team, renamed the Georgia 4-H Communications Team later that year, created a branch of the team called the Cyber Security Initiative (CSI). This small team travels around the state of Georgia to teach free classes on internet and social networking safety as well as cyber bullying. The Georgia 4-H Ambassadors Program The Georgia 4-H Clovers and Co. Performing Arts Group Founded in 1981, Clovers & Co. provides the opportunity for 4-H youth to participate in a performing arts group. Members from throughout the state audition for a limited number of spots on the cast and crew. The group is composed of talented singers, dancers, musicians and stagehands ranging in age from 9 to 19. Graduates of the group have gone on to nationally recognized careers in music and entertainment. Clovers & Co. performs throughout the state and country at 4-H events as well as those of other organizations, including events held at the University of Georgia for the 1996 Olympics. Camping and counselor program Summer camp has been a favorite experience of many 4-H'ers and of adults who were once 4-H'ers. At Cloverleaf camp, 5th and 6th graders have the chance to swim, rock climb, canoe, take part in several workshops, and play the infamous wet games. The wet games are essentially a water war between the counselors and the campers, involving games, buckets of water, and a large slip and slide. Junior 4-H'ers (7th and 8th graders) also have the chance to go to camp; their camp is geared toward older children, with chances to go to the laser show at Stone Mountain, Six Flags White Water, and Thursday Night Thunder at the Atlanta Motor Speedway, to swim, and more. Senior 4-H'ers are 9th through 12th graders, and Senior Camp is designed around teenagers, with whitewater rafting, dances, workshops, and much more. Seniors also have the chance to go to Cloverleaf camp as teen leaders, where they help the counselors and leaders. There are also some special interest camps, such as Wilderness Challenge and Marine Resource. Rock Eagle 4-H Center is the largest 4-H center. It is named for the Rock Eagle effigy, an eagle-shaped mound of stone built by Native Americans. It has 57 cabins, an auditorium, a chapel, and a dining hall. During the summer, Rock Eagle holds 1,000 campers every week for seven weeks, and the camp can hold a maximum of 1,200 people at any one time. Not only is the camp home to many 4-H'ers, but it also houses many other organizations when 4-H'ers are not there. Rock Eagle hosts many events other than camp, such as State 4-H Council, Fall Forum, Jr. Conference, Jr.
Rally, District Project Achievement (DPA), and more. Cabin number 37 is the G. C. Adams cabin and was funded by many donations from Newton County citizens. In 1952 the Covington Women's Club led a project to raise $10,000 to fund the cabin. Over 9,000 children ages 9–19 attend one of Georgia 4-H's camps for a week each summer. Georgia 4-H has five separate 4-H centers, each with its own camping program. Every camp is a world unto itself, with a theme full of high adventure, friendship and fun. The five Georgia 4-H centers are Wahsega 4-H Center, located in Dahlonega, Georgia; Burton 4-H Center, located on Tybee Island; Camp Jekyll, located on Jekyll Island; Fortson 4-H Center, located in Hampton, Georgia; and Rock Eagle 4-H Center, located in Eatonton, Georgia and known as the largest youth center in the world. Note: In 2004 Camp Fortson was officially opened to replace Camp Truitt-Fulton. The camping program and counselor program are considered highly effective. The Georgia 4-H Counselor Alumni Association represents 4-H counselor alumni who continue to support Georgia 4-H. Notable alumni Arch Smith, State 4-H Leader (2011–present) and University of Georgia College of Agriculture and Environmental Science senior public service associate faculty member. Lee Berger, State 4-H President 1984, National Geographic Explorer and paleoanthropologist, winner of the 1st National Geographic Prize for Research and Exploration. Carol Buffard, actress, played the lead role in Junie B. Jones, the Musical. Bob Burton, CEO of Flowers Inc. Balloons and burton + Burton, famous for the "greeting card balloon". Bo Ryles, State 4-H Leader and Director of 4-H 1994–2009, and Clovers and Company director and co-director 1983–present. Maxine Burton, president of Flowers, Inc. Balloons and burton + Burton, famous for the "greeting card balloon". Rosalynn Carter, former first lady and wife of President Jimmy Carter. James M. "Bucky" Cook, former president of Heavenly Ham. Shania Twain, country music singer. Nikki DeLoach, former member of Clovers & Company, from Blackshear, Georgia. Nancy Grace, host of her own primetime legal analysis program "Nancy Grace" on CNN Headline News as well as "Closing Arguments" on Court TV. Bill Gentry of the Atlanta country music club Wild Bill's, located in Duluth, Georgia. Hillary Lindsey, songwriter. Greg Kinnear, actor. Tommy Irvin, former state representative for Habersham County, Georgia and Georgia Commissioner of Agriculture. Jennifer Nettles, Grammy Award-winning country singer and member of the group "Sugarland". Otis O'neal, extension agent and founder of the Ham and Eggs Show. Kathy S. Palmer, chief judge of the Superior Court of Georgia's Middle Judicial Circuit. Walter Reeves, the "Georgia Gardener" and host of the "Lawn and Garden Show with Walter Reeves". Tom Rodgers, head of Georgia 4-H between 1978 and 1993 and recipient of the Georgia 4-H Lifetime Achievement Award. Wayne Shackelford, former Georgia Commissioner of Transportation and recipient of the Georgia 4-H Lifetime Achievement Award. Tommy Walton, University of Georgia State 4-H Leader 1955–1980. Herschel Walker, NFL running back 1986–1997 and winner of the 1982 Heisman Trophy. Paul Wood, president of Georgia EMC. Waco O'Guin, co-creator of MTV's Stankervision, Brickleberry and The DAMN! Show. Trisha Yearwood, multi-platinum and multi-Grammy Award-winning country music artist from Monticello, whose father was a county agent. Notes 1 Cong. Rec. (1955). Hunt, Ed (1987). The Georgia Home of 4-H Club Work. Knight, Richie (2004).
Newton County 4-H, Not Just Our Favorite Past Time References External links Georgia 4-H Camp Rock Eagle Camp Wahsega Camp Jekyll Camp Burton Camp Fortson National 4-H Headquarters National 4-H Council - private sector partner of 4-H Youth organizations based in Georgia (U.S. state) Education in Georgia (U.S. state) 4-H
40691076
https://en.wikipedia.org/wiki/SATA%20Express
SATA Express
SATA Express (sometimes unofficially shortened to SATAe) is a computer bus interface that supports both Serial ATA (SATA) and PCI Express (PCIe) storage devices, initially standardized in the SATA 3.2 specification. The SATA Express connector used on the host side is backward compatible with the standard SATA data connector, while it also provides two PCI Express lanes as a pure PCI Express connection to the storage device. Instead of continuing with the SATA interface's usual approach of doubling its native speed with each major version, the SATA 3.2 specification included the PCI Express bus for achieving data transfer speeds greater than the SATA 3.0 speed limit of 6 Gbit/s. Designers of the SATA interface concluded that doubling the native SATA speed would take too much time to catch up with the advancements in solid-state drive (SSD) technology, would require too many changes to the SATA standard, and would result in much greater power consumption compared with the existing PCI Express bus. As a widely adopted computer bus, PCI Express provides sufficient bandwidth while allowing easy scaling up by using faster or additional lanes. In addition to supporting legacy Advanced Host Controller Interface (AHCI) at the logical interface level, SATA Express also supports NVM Express (NVMe) as the logical device interface for attached PCI Express storage devices. While the support for AHCI ensures software-level backward compatibility with legacy SATA devices and legacy operating systems, NVM Express is designed to fully utilize high-speed PCI Express storage devices by leveraging their capability of executing many I/O operations in parallel. History The Serial ATA (SATA) interface was designed primarily for interfacing with hard disk drives (HDDs), doubling its native speed with each major revision: maximum SATA transfer speeds went from 1.5 Gbit/s in SATA 1.0 (standardized in 2003), through 3 Gbit/s in SATA 2.0 (standardized in 2004), to 6 Gbit/s as provided by SATA 3.0 (standardized in 2009). SATA was also selected as the interface for the increasingly adopted solid-state drives (SSDs), but the need for a faster interface became apparent as the speed of SSDs and hybrid drives increased over time. As an example, some SSDs available in early 2009 were already well beyond the capabilities of SATA 1.0 and close to the SATA 2.0 maximum transfer speed, while by the second half of 2013 high-end consumer SSDs had already reached the SATA 3.0 speed limit, requiring an even faster interface. While evaluating different approaches to the required speed increase, designers of the SATA interface concluded that extending the SATA interface so that it doubled its native speed to 12 Gbit/s would require more than two years, making that approach unsuitable for catching up with advancements in SSD technology. At the same time, increasing the native SATA speed to 12 Gbit/s would require too many changes to the SATA standard, resulting in a more costly and less power-efficient solution compared with the already available and widely adopted PCI Express bus. Thus, PCI Express was selected by the designers of the SATA interface as part of the SATA 3.2 revision that was standardized in 2013; extending the SATA specification to also provide a PCI Express interface within the same backward-compatible connector allowed much faster speeds by reusing already existing technology. Some vendors also use proprietary logical interfaces for their enterprise-grade flash-based storage products, connected through the PCI Express bus.
Such storage products can use a multi-lane PCI Express link, while interfacing with the operating system through proprietary drivers and host interfaces. Moreover, there are similar enterprise-grade storage products using NVM Express as the non-proprietary logical interface for a PCI Express add-on card. Availability Support for SATA Express was initially announced for the Intel 9 Series chipsets, the Z97 and H97 Platform Controller Hubs (PCHs), with both of them supporting Intel Haswell and Haswell Refresh processors; availability of these two chipsets was planned for 2014. In December 2013, Asus unveiled a prototype "Z87-Deluxe/SATA Express" motherboard based on the Intel Z87 chipset, supporting Haswell processors and using an additional ASMedia controller to provide SATA Express connectivity; this motherboard was also showcased at CES 2014, although no launch date was announced. In April 2014, Asus also demonstrated support for the so-called separate reference clock with independent spread spectrum clocking (SRIS) with some of its pre-production SATA Express hardware. SRIS eliminates the need for complex and costly shielding on SATA Express cables, otherwise required for transmitting PCI Express synchronization signals, by providing a separate clock generator on the storage device with additional support from the motherboard firmware. In May 2014, the Intel Z97 and H97 chipsets became available, bringing support for both SATA Express and M.2, which is a specification for flash-based storage devices in the form of internally mounted computer expansion cards. The Z97 and H97 chipsets use two PCI Express 2.0 lanes for each of their SATA Express ports, providing 1 GB/s of bandwidth to PCI Express storage devices. The release of these two new chipsets, intended primarily for high-end desktops, was soon followed by the availability of Z97- and H97-based motherboards. In late August 2014, the Intel X99 chipset became available, bringing support for both SATA Express and M.2 to Intel's enthusiast platform. Each of the X99's SATA Express ports requires two PCI Express 2.0 lanes provided by the chipset, while the M.2 slots can use either two 2.0 lanes from the chipset itself, or up to four 3.0 lanes taken directly from the LGA 2011-v3 CPU. As a result, the X99 provides bandwidths of up to 3.94 GB/s for connected PCI Express storage devices. Following the release of the X99 chipset, numerous X99-based motherboards became available. In early March 2017, AMD Ryzen became available, bringing native support for SATA Express to the AMD Socket AM4 platform through its accompanying X370, X300, B350, A320 and A300 chipsets. Ryzen also supports M.2 and other forms of PCI Express storage devices, using up to a total of eight PCI Express 3.0 lanes provided by the chipset and the AM4 CPU. As a result, Ryzen provides bandwidths of up to 7.88 GB/s for connected PCI Express storage devices. SATA Express is considered a failed standard: around the time it was introduced, the M.2 form factor and the NVMe standard were also launched, and they gained far greater popularity than Serial ATA and SATA Express. Few consumer storage devices using the SATA Express interface were released, and SATA Express ports quickly disappeared from new motherboards. Features The SATA Express interface supports both PCI Express and SATA storage devices by exposing two PCI Express 2.0 or 3.0 lanes and two SATA 3.0 (6 Gbit/s) ports through the same host-side SATA Express connector (but not both at the same time).
Exposed PCI Express lanes provide a pure PCI Express connection between the host and the storage device, with no additional layers of bus abstraction. The SATA revision 3.2 specification, in its gold revision, standardizes SATA Express and specifies its hardware layout and electrical parameters. The choice of PCI Express also enables scaling up the performance of the SATA Express interface by using multiple lanes and different versions of PCI Express. In more detail, using two PCI Express 2.0 lanes provides a total bandwidth of 1000 MB/s (2 × 5 GT/s raw data rate and 8b/10b encoding), while using two PCI Express 3.0 lanes provides 1969 MB/s (2 × 8 GT/s raw data rate and 128b/130b encoding); these figures are derived in the worked example at the end of the Compatibility section below. In comparison, the 6 Gbit/s raw bandwidth of SATA 3.0 equates effectively to 600 MB/s (6 GT/s raw data rate and 8b/10b encoding). There are three options available for the logical device interfaces and command sets used for interfacing with storage devices connected to a SATA Express controller. Legacy SATA: used for backward compatibility with legacy SATA devices, and interfaced through the AHCI driver and the legacy SATA 3.0 (6 Gbit/s) ports provided by a SATA Express controller. PCI Express using AHCI: used for PCI Express SSDs and interfaced through the AHCI driver and the provided PCI Express lanes, providing backward compatibility with widespread SATA support in operating systems at the cost of not delivering optimal performance, because AHCI is used for accessing PCI Express SSDs. AHCI was developed at a time when the purpose of a host bus adapter (HBA) in a system was to connect the CPU/memory subsystem with a much slower storage subsystem based on rotating magnetic media; as a result, AHCI has some inherent inefficiencies when applied to SSD devices, which behave much more like DRAM than like spinning media. PCI Express using NVMe: used for PCI Express SSDs and interfaced through the NVMe driver and the provided PCI Express lanes, as a high-performance and scalable host controller interface designed and optimized especially for interfacing with PCI Express SSDs. NVMe has been designed from the ground up, capitalizing on the low latency and parallelism of PCI Express SSDs, and complementing the parallelism of contemporary CPUs, platforms and applications. At a high level, the primary advantages of NVMe over AHCI relate to NVMe's ability to exploit parallelism in host hardware and software, based on design advantages that include data transfers with fewer stages, greater depth of command queues, and more efficient interrupt processing. Connectors Connectors used for SATA Express were selected specifically to ensure backward compatibility with legacy SATA devices where possible, without the need for additional adapters or converters. The connector on the host side accepts either one PCI Express SSD or up to two legacy SATA devices, by providing either PCI Express lanes or SATA 3.0 ports depending on the type of connected storage device. There are five types of SATA Express connectors, differing by their position and purpose. Host plug: used on motherboards and add-on controllers. This connector is backward compatible by accepting legacy standard SATA data cables, resulting in the host plug providing connectivity for up to two SATA devices. Host cable receptacle: the host-side connector on SATA Express cables. This connector is not backward compatible. Device cable receptacle: the device-side connector on SATA Express cables, backward compatible by accepting one SATA device.
Device plug: used on SATA Express devices. This connector is partially backward compatible, allowing SATA Express devices to be plugged into U.2 backplanes or MultiLink SAS receptacles; however, a SATA Express device connected that way will be functional only if the host supports PCI Express devices. Host receptacle: used on backplanes for mating directly with SATA Express devices, resulting in cableless connections. This connector is backward compatible by accepting one SATA device. The above-listed SATA Express connectors provide only two PCI Express lanes, as the result of an overall design focused on a rapid low-cost platform transition. That choice allowed easier backward compatibility with legacy SATA devices, together with making it possible to use cheaper unshielded cables. Some NVM Express devices in the form of 2.5-inch drives use the U.2 connector (originally known as SFF-8639, with the renaming taking place in June 2015), which is expected to gain broader acceptance. The U.2 connector is mechanically identical to the SATA Express device plug, but provides four PCI Express lanes through a different usage of available pins. The compatibility of the involved connectors is summarized below. Compatibility Device-level backward compatibility for SATA Express is ensured by fully supporting legacy SATA 3.0 (6 Gbit/s) storage devices, both on the electrical level and through the required operating system support. Mechanically, connectors on the host side retain their backward compatibility in a way similar to how USB 3.0 does it: the new host-side SATA Express connector is made by "stacking" an additional connector on top of two legacy standard SATA data connectors, which are regular SATA 3.0 (6 Gbit/s) ports that can accept legacy SATA devices. This backward compatibility of the host-side SATA Express connector, which is formally known as the host plug, ensures that legacy SATA devices can be attached to hosts equipped with SATA Express controllers. Backward compatibility on the software level, provided for legacy operating systems and associated device drivers that can access only SATA storage devices, is achieved by retaining support for the AHCI controller interface as a legacy logical device interface, as visible from the operating system perspective. Access to storage devices using AHCI as a logical device interface is possible for both SATA SSDs and PCI Express SSDs, so operating systems that do not provide support for NVMe can optionally be configured to interact with PCI Express storage devices as if they were legacy AHCI devices. However, because NVMe is far more efficient than AHCI when used with PCI Express SSDs, the SATA Express interface is unable to deliver its maximum performance when AHCI is used to access PCI Express storage devices; see above for more details.
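Since the original compatibility table is not reproduced here, the following is a minimal sketch in Python that encodes the connector relationships described above as a simple lookup structure; the dictionary layout and field names are illustrative assumptions, not anything defined by the SATA 3.2 specification.

# Illustrative reconstruction of the connector-compatibility summary;
# the wording is condensed from the prose above, not taken from SATA-IO.
SATA_EXPRESS_CONNECTORS = {
    "host plug": {
        "used_on": "motherboards and add-on controllers",
        "legacy_sata": "accepts standard SATA data cables (up to two SATA devices)",
    },
    "host cable receptacle": {
        "used_on": "host side of SATA Express cables",
        "legacy_sata": "not backward compatible",
    },
    "device cable receptacle": {
        "used_on": "device side of SATA Express cables",
        "legacy_sata": "accepts one SATA device",
    },
    "device plug": {
        "used_on": "SATA Express storage devices",
        "legacy_sata": "partial: fits U.2 or MultiLink SAS receptacles, "
                       "functional only if the host supports PCI Express",
    },
    "host receptacle": {
        "used_on": "backplanes (cableless connections)",
        "legacy_sata": "accepts one SATA device",
    },
}

def describe(connector: str) -> str:
    """Return a one-line compatibility note for the given connector type."""
    entry = SATA_EXPRESS_CONNECTORS[connector]
    return f"{connector}: used on {entry['used_on']}; {entry['legacy_sata']}"

print(describe("host plug"))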
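The bandwidth figures quoted in the Availability and Features sections follow directly from lane count, raw transfer rate, and line-code overhead. Below is a minimal Python sketch of that arithmetic; the function name and interface are assumptions made for illustration, not part of any specification.

# Effective throughput in MB/s: each lane carries one bit per transfer,
# so lanes x GT/s gives raw Gbit/s; multiply by the encoding efficiency
# (8b/10b or 128b/130b) and divide by 8 bits per byte (x 1000 for MB).
def effective_mb_per_s(lanes: int, gt_per_s: float,
                       payload_bits: int, total_bits: int) -> float:
    return lanes * gt_per_s * (payload_bits / total_bits) * 1000 / 8

print(effective_mb_per_s(2, 5, 8, 10))     # two PCIe 2.0 lanes:   1000.0 MB/s
print(effective_mb_per_s(2, 8, 128, 130))  # two PCIe 3.0 lanes:  ~1969.2 MB/s
print(effective_mb_per_s(4, 8, 128, 130))  # four PCIe 3.0 lanes: ~3938.5 MB/s (X99 M.2 case)
print(effective_mb_per_s(8, 8, 128, 130))  # eight PCIe 3.0 lanes: ~7876.9 MB/s (Ryzen case)
print(effective_mb_per_s(1, 6, 8, 10))     # SATA 3.0:              600.0 MB/s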
See also List of device bandwidths M.2 (formerly known as the Next Generation Form Factor) Serial attached SCSI (SAS) Notes References External links Official (SATA-IO) website LFCS: Preparing Linux for nonvolatile memory devices, LWN.net, April 19, 2013, by Jonathan Corbet NVMe vs AHCI: Another Win for PCIe, AnandTech, March 13, 2014, by Kristian Vatto Intel SSD DC P3700 Review: Understanding NVM Express, Tom's Hardware, August 13, 2014, by Drew Riley PCIe SSD 101: An Overview of Standards, Markets and Performance, SNIA, August 2013, archived from the original on February 2, 2014 US patent 20130294023, November 7, 2013, assigned to Raphael Gay MultiLink SAS presentations, press releases and roadmaps, SCSI Trade Association 2013 introductions Computer buses Serial ATA Serial buses Computer connectors Motherboard expansion slot
58969969
https://en.wikipedia.org/wiki/Kilbroney%20Park
Kilbroney Park
Kilbroney Park (Irish: Páirc Chill Bhrónai) is a park near Rostrevor in Northern Ireland. Formerly a country estate, it was visited by William Makepeace Thackeray, Charles Dickens and Seamus Heaney, and may have been the inspiration for Narnia in the writings of C. S. Lewis. It came into the ownership of the Bowes-Lyon family, and the future Queen Elizabeth II and Princess Margaret holidayed there as children. The park has been run by Newry, Mourne and Down District Council since 1977 and features a children's play area, tennis courts and a cafe. It has a large collection of rare and historic trees, including "Old Homer", a holm oak that was voted Northern Ireland's Tree of the Year in 2016. A glacial erratic in the park is connected with the legend of the giant Finn Mac Cool. History The park was originally known as The Meadow and formed part of the large Ross family estate in Rostrevor from the early 1700s – their house, known as The Lodge, was built in 1716. One of the more famous members of that family was General Robert Ross, who served in the British Army during the Napoleonic Wars and was responsible for the burning of the White House in the War of 1812. The Ross family were responsible for planting many of the non-native trees that are still found in the area, including redwood, Monterey pine, holm oak, ash, sycamore and cherry. The park was purchased by Colonel Roxburgh in 1850; William Makepeace Thackeray is thought to have visited at around this time and drawn inspiration from the landscape. Thackeray remarked that, had the estate been located in England, it would be widely regarded as "a world's wonder". Roxburgh sold the estate to Stratford Canning, 1st Viscount Stratford de Redcliffe – a diplomat and scholar – in 1863, and he added a zoo, aviary and arboretum. Canning was friends with the writer Charles Dickens, who was a frequent guest at the estate. A second cousin of Lady Elizabeth Bowes-Lyon (later queen consort to George VI) inherited the estate in 1919. The future Queen Elizabeth II and her sister Princess Margaret holidayed on the estate in 1937; the queen is said to have remembered it well when asked about it decades later. During the Second World War the estate served as a camp for German prisoners of war. Kilbroney was later visited by the writer C. S. Lewis, for whom it may have helped inspire the land of Narnia, and subsequently by the poet Seamus Heaney. The Bowes-Lyon family sold Kilbroney to the local council in 1977. The council demolished the lodge in 1980, but a plan to develop part of the estate for housing was abandoned after local people threatened to handcuff themselves to the park gates in protest. It is currently owned by Newry, Mourne and Down District Council, is open as a public park and is designated as an open space amenity. Description Kilbroney Park is located off the A2 Shore Road, with a pedestrian entrance accessible by footpath from Rostrevor. The park features a two-mile forest drive leading to the car park and expansive views across the nearby Carlingford Lough and the Mourne Area of Outstanding Natural Beauty. The park has a children's play area, tennis courts, a cafe and a tourist information point, and is open from 9 am. There are two caravan parks on site, a mountain bike trail and a Narnia-themed walking trail. Located above Rostrevor is the Cloughmore Stone, a 30-tonne glacial erratic left behind after the ice age. Local folklore holds that the stone was thrown there by the Irish giant Finn Mac Cool during a fight with a Scottish giant.
The Scottish giant is said to have created Lough Neagh by scooping a clod of earth from the ground; having missed Mac Cool, the clod landed in the Irish Sea, where it became the Isle of Man. Kilbroney Park is set within the wider Rostrevor Oak Forest – a 16.63-hectare ancient woodland, national nature reserve and special area of conservation – and, as well as oak, ash, hazel, sycamore, Douglas fir, ferns, wild garlic, primroses and bluebells, contains rarer plants and trees. These include wood avens, the hard shield fern, giant fir, eight monkey puzzle trees, twelve redwoods (planted by Canning between 1880 and 1890), toothwort, bird's nest orchids and wood fescue. Notable individual trees include a 500-year-old sessile oak, a 200-year-old Monterey pine and a 200-year-old Turkey oak, which is said to be the most photographed tree in the park. Old Homer A 200-year-old Quercus ilex (holm oak) tree known as "Old Homer" is located near the park's pedestrian entrance at Fairy Glen. Famous for growing at a 45-degree angle from the ground, which makes it easy for children to climb, it is said to have been well loved by generations of local people. The evergreen tree is almost in girth and has distinctive "snakeskin" bark; one of its boughs was recently propped to prevent collapse. The tree has links to folk music – it is the site of performances during the park's "Fiddler's Green", and the ashes of Scottish folk singer Danny Kyle were scattered beneath it. Old Homer was entered into the Northern Irish Tree of the Year competition in 2016; it secured more than half of the votes cast and won the competition. The £1,000 winner's grant was used to fund plaques for historic trees in the park, produce a book about the trees and purchase 400 saplings, which were planted across the park by schoolchildren. As a result of the win, it was entered into the 2017 European Tree of the Year competition, where it garnered 7,101 votes and placed sixth out of 16 trees. References Civil parish of Kilbroney, County Down Forests and woodlands of Northern Ireland Parks in County Down
40480495
https://en.wikipedia.org/wiki/Mass%20surveillance%20in%20the%20United%20States
Mass surveillance in the United States
The practice of mass surveillance in the United States dates back to wartime monitoring and censorship of international communications from, to, or passing through the United States. After the First and Second World Wars, mass surveillance continued throughout the Cold War period, via programs such as the Black Chamber and Project SHAMROCK. The formation and growth of federal law-enforcement and intelligence agencies such as the FBI, CIA, and NSA institutionalized surveillance that was also used to silence political dissent, as evidenced by the COINTELPRO projects, which targeted various organizations and individuals. During the Civil Rights Movement era, many individuals put under surveillance orders were first labelled as integrationists, then deemed subversive, and sometimes suspected of supporting the communist model of the United States' rival at the time, the Soviet Union. Other targeted individuals and groups included Native American activists, African American and Chicano liberation movement activists, and anti-war protesters. The international UKUSA surveillance agreement of 1946 evolved by 1955 into the ECHELON collaboration of five English-speaking nations, also known as the Five Eyes, which focused on the interception of electronic communications and brought substantial increases in domestic surveillance capabilities. Following the September 11th attacks of 2001, domestic and international mass surveillance capabilities grew immensely. Contemporary mass surveillance relies upon presidential executive orders declaring a continued State of National Emergency, first signed by George W. Bush on September 14, 2001 and then renewed annually by President Barack Obama. Mass surveillance is also based on several subsequent national security acts, including the USA PATRIOT Act and the FISA Amendments Act's PRISM surveillance program. Critics and political dissenters describe the effects of these acts, orders, and the resulting database network of fusion centers as forming a veritable American police state that simply institutionalized the illegal COINTELPRO tactics used to assassinate dissenters and leaders from the 1950s onwards. Additional surveillance agencies, such as the DHS and the position of Director of National Intelligence, have greatly escalated mass surveillance since 2001. A series of media reports in 2013 revealed more recent programs and techniques employed by the US intelligence community. Advances in computer and information technology allow the creation of huge national databases that facilitate mass surveillance in the United States, by DHS-managed fusion centers, the CIA's Terrorist Threat Integration Center (TTIC) program, and the FBI's Terrorist Screening Database (TSDB). Mass surveillance databases are also cited as responsible for profiling Latino Americans and contributing to "self-deportation" techniques, or physical deportations by way of the DHS's ICEGang national database. After World War I, the US Army and State Department established the Black Chamber, also known as the Cipher Bureau, which began operations in 1919. The Black Chamber was headed by Herbert O. Yardley, who had been a leader in the Army's Military Intelligence program. Regarded as a precursor to the National Security Agency, it conducted peacetime decryption of material including diplomatic communications until 1929. With the advent of World War II, the Office of Censorship was established.
The wartime agency monitored "communications by mail, cable, radio, or other means of transmission passing between the United States and any foreign country". This included the 350,000 overseas cables and telegrams and 25,000 international telephone calls made each week. "Every letter that crossed international or U.S. territorial borders from December 1941 to August 1945 was subject to being opened and scoured for details." With the end of World War II, Project SHAMROCK was established in 1945. The organization was created to accumulate telegraphic data entering and leaving the United States. Major communication companies such as Western Union, RCA Global and ITT World Communications actively aided the project, allowing American intelligence officials to gain access to international message traffic. Under the project, and many subsequent programs, no precedent had been established for judicial authorization, and no warrants were issued for surveillance activities. The project was terminated in 1975. President Harry S. Truman established the National Security Agency (NSA) in 1952 for the purposes of collecting, processing, and monitoring intelligence data. The existence of the NSA was not publicly known, as President Truman's memorandum establishing it was classified. When the Citizens' Commission to Investigate the FBI published stolen FBI documents revealing abuse of intelligence programs in 1971, Senator Frank Church began an investigation into the programs; it became known as the Church Committee. The committee sought to investigate intelligence abuses throughout the 1970s. Following a report provided by the committee outlining egregious abuse, in 1976 Congress established the Senate Select Committee on Intelligence. It would later be joined by the Foreign Intelligence Surveillance Court in 1978. The institutions worked to limit the power of the agencies, ensuring that surveillance activities remained within the rule of law. Following the attacks of September 11, 2001, Congress passed the Patriot Act to strengthen security and intelligence efforts. The act granted the President broad powers in the war on terror, including the power to bypass the FISA Court for surveillance orders in cases of national security. Additionally, mass surveillance activities were conducted alongside various other surveillance programs under the heading of the President's Surveillance Program. Under pressure from the public, the warrantless wiretapping program was allegedly ended in January 2007. Many details about the surveillance activities conducted in the United States were revealed in the disclosure by Edward Snowden in June 2013. Regarded as one of the biggest media leaks in the United States, it presented extensive details about the surveillance programs of the NSA, which involved the interception of Internet data and telephone calls from over a billion users across various countries. National Security Agency (NSA) 1947: The National Security Act was signed by President Truman, establishing a National Security Council. 1949: The Armed Forces Security Agency was established to coordinate signal operations between military branches. 1952: The National Security Agency (NSA) was officially established by President Truman by way of a National Security Council Intelligence Directive 9, dated Oct. 24, while the NSA officially came into existence days later on Nov. 4.
According to The New York Times, the NSA was created in "absolute secrecy" by President Truman, whose surveillance-minded administration ordered, only six weeks after he took office, wiretaps on the telephones of Thomas Gardiner Corcoran, a close advisor of Franklin D. Roosevelt. The recorded conversations are currently kept at the Harry S. Truman Presidential Library and Museum, along with other documents considered sensitive (≈233,600 pages). Federal Bureau of Investigation (FBI) Institutional domestic surveillance was founded in 1896 with the National Bureau of Criminal Identification, which evolved by 1908 into the Bureau of Investigation, operated under the authority of the Department of Justice. By 1935, the FBI had grown into an independent agency under the direction of J. Edgar Hoover, whose staff, through the use of wiretaps, cable taps, mail tampering, garbage filtering and infiltrators, prepared secret FBI Index Lists on more than 10 million people by 1939. Purported to be chasing "communists" and other alleged subversives, the FBI used public and private pressure to destroy the lives of those it targeted during McCarthyism, including the lives of the Hollywood 10 under the Hollywood blacklist. The FBI's surveillance and investigation roles expanded in the 1950s, while the bureau used the collected information to facilitate political assassinations, including the murders of Fred Hampton and Mark Clark in 1969. The FBI has also been directly connected to the bombings, assassinations, and deaths of other people, including Malcolm X in 1965, Viola Liuzzo in 1965, Dr. Martin Luther King Jr. in 1968, Anna Mae Pictou Aquash in 1976, and Judi Bari in 1990. As the extent of the FBI's domestic surveillance continued to grow, many celebrities were also secretly investigated by the bureau, including: First Lady Eleanor Roosevelt – A vocal critic of Hoover who likened the FBI to an "American Gestapo" for its Index lists. Roosevelt also spoke out against anti-Japanese prejudice during the Second World War, and was later a delegate to the United Nations and instrumental in creating the Universal Declaration of Human Rights. The 3,000-page FBI dossier on Eleanor Roosevelt reveals Hoover's close monitoring of her activities and writings, and contains retaliatory charges against her for suspected Communist activities. Frank Sinatra – His 1,300-page FBI dossier, dating from 1943, contains allegations about Sinatra's possible ties to the American Communist Party. The FBI spent several decades tracking Sinatra and his associates. Marilyn Monroe – Her FBI dossier begins in 1955 and continues up until the months before her death. It focuses mostly on her travels and associations, searching for signs of leftist views and possible ties to communism. Her ex-husband, Arthur Miller, was also monitored. Monroe's FBI dossier is "heavily censored", but a "reprocessed" version has been released by the FBI to the public. John Lennon – In 1971, shortly after Lennon arrived in the United States on a visa to meet up with anti-war activists, the FBI placed Lennon under surveillance, and the U.S. government tried to deport him from the country. At that time, opposition to the Vietnam War had reached a peak and Lennon often showed up at political rallies to sing his anti-war anthem "Give Peace a Chance". The U.S.
government argued that Lennon's 300-page FBI dossier was particularly sensitive because its release may "lead to foreign diplomatic, economic and military retaliation against the United States", and therefore only approved a "heavily censored" version. The Beatles, of which John Lennon was a member, had a separate FBI dossier. 1967–73: The now-defunct Project MINARET was created to spy on U.S. citizens. At the request of the U.S. Army, those who protested against the Vietnam War were put on the NSA's "watch list". Church committee review 1975: The Church Committee of the United States Senate was set up to investigate widespread intelligence abuses by the NSA, CIA and FBI. Domestic surveillance, authorized by the highest executive branch of the federal government, spanned from the FDR Administration to the Presidency of Richard Nixon. The following examples were reported by the Church Committee: President Roosevelt asked the FBI to put in its files the names of citizens sending telegrams to the White House opposing his "national defense" policy and supporting Col. Charles Lindbergh. President Truman received inside information on a former Roosevelt aide's efforts to influence his appointments, labor union negotiating plans, and the publishing plans of journalists. President Eisenhower received reports on purely political and social contacts with foreign officials by Bernard Baruch, Eleanor Roosevelt, and Supreme Court Justice William O. Douglas. The Kennedy administration ordered the FBI to wiretap a congressional staff member, three executive officials, a lobbyist, and a Washington law firm. US Attorney General Robert F. Kennedy received data from an FBI wiretap on Martin Luther King Jr. and an electronic listening device targeting a congressman, both of which yielded information of a political nature. President Johnson asked the FBI to conduct "name checks" on his critics and members of the staff of his 1964 opponent, Senator Barry Goldwater. He also requested purely political intelligence on his critics in the Senate, and received extensive intelligence reports on political activity at the 1964 Democratic Convention from FBI electronic surveillance. President Nixon authorized a program of wiretaps which produced for the White House purely political or personal information unrelated to national security, including information about a Supreme Court justice. The Final Report (Book II) of the Church Committee revealed the following statistics: Over 26,000 individuals were at one point catalogued on an FBI list of persons to be rounded up in the event of a "national emergency". Over 500,000 domestic intelligence files were kept at the FBI headquarters, of which 65,000 were opened in 1972 alone. At least 130,000 first class letters were opened and photographed by the FBI from 1940 to 1966. A quarter of a million first class letters were opened and photographed by the CIA from 1953 to 1973. Millions of private telegrams sent from, to, or through the United States were obtained by the National Security Agency (NSA), under a secret arrangement with U.S. telegraph companies, from 1947 to 1975. Over 100,000 Americans were indexed in U.S. Army intelligence files. About 300,000 individuals were indexed in a CIA computer system during the course of Operation CHAOS. Intelligence files on more than 11,000 individuals and groups were created by the Internal Revenue Service (IRS), with tax investigations "done on the basis of political rather than tax criteria".
In response to the committee's findings, the United States Congress passed the Foreign Intelligence Surveillance Act in 1978, which led to the establishment of the United States Foreign Intelligence Surveillance Court, which was authorized to issue surveillance warrants. Several decades later, in 2013, the presiding judge of the FISA Court, Reggie Walton, told The Washington Post that the court has only a limited ability to supervise the government's surveillance, and is therefore "forced" to rely upon the accuracy of the information that is provided by federal agents. On August 17, 1975, Senator Frank Church, speaking on NBC's "Meet the Press", stated about this agency without mentioning the NSA by name: ECHELON In 1988 an article titled "Somebody's listening" by Duncan Campbell in the New Statesman described the signals-intelligence gathering activities of a program code-named "ECHELON". The program was operated by the English-speaking World War II Allied countries – Australia, Canada, New Zealand, the United Kingdom and the United States (collectively known as AUSCANNZUKUS). It was created by the five countries to monitor the military and diplomatic communications of the Soviet Union and of its Eastern Bloc allies during the Cold War in the early 1960s. By the 1990s the ECHELON system could intercept satellite transmissions, public switched telephone network (PSTN) communications (including most Internet traffic), and transmissions carried by microwave. The New Zealand journalist Nicky Hager provided a detailed description of ECHELON in his 1996 book Secret Power. While some member governments denied the existence of ECHELON, a report by a committee of the European Parliament in 2001 confirmed the program's use and warned Europeans about its reach and effects. The European Parliament stated in its report that the term "ECHELON" occurred in a number of contexts, but that the evidence presented indicated it was a signals-intelligence collection system capable of interception and content-inspection of telephone calls, fax, e-mail and other data-traffic globally. James Bamford further described the capabilities of ECHELON in Body of Secrets (2002), his book about the National Security Agency. Intelligence monitoring of citizens, and of their communications, in the area covered by the AUSCANNZUKUS security agreement has, over the years, caused considerable public concern. Escalation following September 11, 2001 attacks In the aftermath of the September 2001 attacks on the World Trade Center and the Pentagon, bulk domestic spying in the United States increased dramatically. The desire to prevent future attacks of this scale led to the passage of the Patriot Act. Later acts include the Protect America Act (which removes the warrant requirement for government surveillance of foreign targets) and the FISA Amendments Act (which relaxed some of the original FISA court requirements). In 2002, "Total Information Awareness" was established by the U.S. government in order to "revolutionize the ability of the United States to detect, classify and identify foreign terrorists". In 2005, a report about President Bush's President's Surveillance Program appeared in The New York Times. According to reporters James Risen and Eric Lichtblau, the actual publication of their report was delayed for a year because "The White House asked The New York Times not to publish this article". Also in 2005, the existence of STELLARWIND was revealed by Thomas Tamm.
In 2006, Mark Klein revealed the existence of Room 641A, which he had wired back in 2003. In 2008, Babak Pasdar, a computer security expert and CEO of Bat Blue, publicly revealed the existence of the "Quantico circuit", which he and his team had found in 2003. He described it as a back door to the federal government in the systems of an unnamed wireless provider; the company was later independently identified as Verizon. The NSA's database of Americans' phone calls was made public in 2006 by USA Today journalist Leslie Cauley in an article titled "NSA has massive database of Americans' phone calls". The article cites anonymous sources that described the program's reach regarding American citizens: ... it means that the government has detailed records of calls they made — across town or across the country — to family members, co-workers, business contacts and others. The three telecommunications companies are working under contract with the NSA, which launched the program in 2001 shortly after the Sept. 11 terrorist attacks. The report failed to generate discussion of privacy rights in the media and was not referenced by Greenwald or the Washington Post in any of their subsequent reporting. In 2009, The New York Times cited several anonymous intelligence officials alleging that "the N.S.A. made Americans targets in eavesdropping operations based on insufficient evidence tying them to terrorism" and that "the N.S.A. tried to wiretap a member of Congress without a warrant". Acceleration of media leaks (2010–present) On 15 March 2012, the American magazine Wired published an article with the headline "The NSA Is Building the Country's Biggest Spy Center (Watch What You Say)", which was later mentioned by U.S. Rep. Hank Johnson during a congressional hearing. In response to Johnson's inquiry, NSA director Keith B. Alexander testified that these allegations made by Wired magazine were untrue: 2013 mass surveillance disclosures On 6 June 2013, Britain's The Guardian newspaper began publishing a series of revelations by an unnamed American whistleblower, revealed several days later to be former CIA and NSA-contracted systems analyst Edward Snowden. Snowden gave a cache of internal documents in support of his claims to two journalists, Glenn Greenwald and Laura Poitras. Greenwald later estimated that the cache contained 15,000 – 20,000 documents, some very large and very detailed, and some very small. This was one of the largest news leaks in the modern history of the United States. In over two months of publications, it became clear that the NSA operates a complex web of spying programs which allow it to intercept internet and telephone conversations from over a billion users from dozens of countries around the world. Specific revelations have been made about China, the European Union, Latin America, Iran and Pakistan, and Australia and New Zealand; however, the published documentation reveals that many of the programs indiscriminately collect bulk information directly from central servers and internet backbones, which almost invariably carry and reroute information from distant countries. Due to this central server and backbone monitoring, many of the programs overlap and interrelate with one another. These programs are often carried out with the assistance of US entities such as the United States Department of Justice and the FBI, are sanctioned by US laws such as the FISA Amendments Act, and the necessary court orders for them are signed by the secret Foreign Intelligence Surveillance Court.
In addition to this, many of the NSA's programs are directly aided by national and foreign intelligence services, Britain's GCHQ and Australia's DSD, as well as by large private telecommunications and Internet corporations, such as Verizon, Telstra, Google and Facebook. On 9 June 2013, Edward Snowden told The Guardian: The US government has aggressively sought to dismiss and challenge Fourth Amendment cases raised: Hepting v. AT&T, Jewel v. NSA, Clapper v. Amnesty International, Al-Haramain Islamic Foundation v. Obama, and Center for Constitutional Rights v. Bush. The government has also granted retroactive immunity to ISPs and telecoms participating in domestic surveillance. The US district court judge for the District of Columbia, Richard Leon, declared on December 16, 2013 that the mass collection of metadata of Americans' telephone records by the National Security Agency probably violates the Fourth Amendment prohibition of unreasonable searches and seizures. Given the limited record before me at this point in the litigation – most notably, the utter lack of evidence that a terrorist attack has ever been prevented because searching the NSA database was faster than other investigative tactics – I have serious doubts about the efficacy of the metadata collection program as a means of conducting time-sensitive investigations in cases involving imminent threats of terrorism. "Plaintiffs have a substantial likelihood of showing that their privacy interests outweigh the government's interest in collecting and analysing bulk telephony metadata and therefore the NSA's bulk collection program is indeed an unreasonable search under the fourth amendment," he wrote. "The Fourth Amendment typically requires 'a neutral and detached authority be interposed between the police and the public,' and it is offended by 'general warrants' and laws that allow searches to be conducted 'indiscriminately and without regard to their connections with a crime under investigation,'" he wrote. He added: I cannot imagine a more 'indiscriminate' and 'arbitrary invasion' than this systematic and high-tech collection and retention of personal data on virtually every single citizen for purposes of querying and analyzing it without prior judicial approval. Surely such a program infringes on 'that degree of privacy' that the founders enshrined in the Fourth Amendment. Indeed I have little doubt that the author of our Constitution, James Madison, who cautioned us to beware 'the abridgement of freedom of the people by gradual and silent encroachments by those in power,' would be aghast. Leon granted the request for a preliminary injunction that blocks the collection of phone data for two private plaintiffs (Larry Klayman, a conservative lawyer, and Charles Strange, father of a cryptologist killed in Afghanistan when his helicopter was shot down in 2011) and ordered the government to destroy any of their records that have been gathered. But the judge stayed action on his ruling pending a government appeal, recognizing in his 68-page opinion the "significant national security interests at stake in this case and the novelty of the constitutional issues." H.R.4681 – Intelligence Authorization Act for Fiscal Year 2015 On 20 May 2014, U.S. 
Representative Mike Rogers, a Republican, introduced the Intelligence Authorization Act for Fiscal Year 2015, with the goal of authorizing appropriations for fiscal years 2014 and 2015 for intelligence and intelligence-related activities of the United States Government, the Community Management Account, and the Central Intelligence Agency (CIA) Retirement and Disability System, and for other purposes. Some of its measures cover the limitation on retention. A covered communication (meaning any nonpublic telephone or electronic communication acquired without the consent of a person who is a party to the communication, including communications in electronic storage) shall not be retained in excess of 5 years, unless: (i) the communication has been affirmatively determined, in whole or in part, to constitute foreign intelligence or counterintelligence or is necessary to understand or assess foreign intelligence or counterintelligence; (ii) the communication is reasonably believed to constitute evidence of a crime and is retained by a law enforcement agency; (iii) the communication is enciphered or reasonably believed to have a secret meaning; (iv) all parties to the communication are reasonably believed to be non-United States persons; (v) retention is necessary to protect against an imminent threat to human life, in which case both the nature of the threat and the information to be retained shall be reported to the congressional intelligence committees not later than 30 days after the date such retention is extended under this clause; (vi) retention is necessary for technical assurance or compliance purposes, including a court order or discovery obligation, in which case access to information retained for technical assurance or compliance purposes shall be reported to the congressional intelligence committees on an annual basis; (vii) retention for a period in excess of 5 years is approved by the head of the element of the intelligence community responsible for such retention, based on a determination that retention is necessary to protect the national security of the United States, in which case the head of such element shall provide to the congressional intelligence committees a written certification describing (I) the reasons extended retention is necessary to protect the national security of the United States; (II) the duration for which the head of the element is authorizing retention; (III) the particular information to be retained; and (IV) the measures the element of the intelligence community is taking to protect the privacy interests of United States persons or persons located inside the United States. On 10 December 2014, Republican U.S. Representative Justin Amash criticized the act on his Facebook page as "one of the most egregious sections of law I've encountered during my time as a representative", adding that "It grants the executive branch virtually unlimited access to the communications of every American". On 11 December 2014, a petition was created on the We the People section of the whitehouse.gov website, petitioning the Obama administration to veto the law. USA Freedom Act The USA Freedom Act was signed into law on June 2, 2015, the day after certain provisions of the Patriot Act had expired.
It mandated an end to bulk collection of phone call metadata by the NSA within 180 days, but allowed continued mandatory retention of metadata by the phone companies, with government access subject to case-by-case approval from the Foreign Intelligence Surveillance Court. Modalities, concepts, and methods Logging postal mail Under the Mail Isolation Control and Tracking program, the U.S. Postal Service photographs the exterior of every piece of paper mail that is processed in the United States — about 160 billion pieces in 2012. The U.S. Postmaster General stated that the system is primarily used for mail sorting, but the images are available for possible use by law enforcement agencies. Created in 2001 following the anthrax attacks that killed five people, it is a sweeping expansion of a 100-year-old program called "mail cover", which targets people suspected of crimes. Together, the two programs show that postal mail is subject to the same kind of scrutiny that the National Security Agency gives to telephone calls, e-mail, and other forms of electronic communication. Mail cover surveillance requests are granted for about 30 days, and can be extended for up to 120 days. Images captured under the Mail Isolation Control and Tracking program are retained for a week to 30 days and then destroyed. There are two kinds of mail covers: those related to criminal activity and those requested to protect national security. Criminal activity requests average 15,000 to 20,000 per year, while the number of requests for national security mail covers has not been made public. Neither the Mail Isolation Control and Tracking program nor the mail cover program requires prior approval by a judge. For both programs, the information gathered is metadata from the outside of the envelope or package, for which courts have said there is no expectation of privacy. Opening the mail to view its contents would require a warrant approved by a judge. Wiretapping Billions of dollars per year are spent by agencies such as the Information Awareness Office, National Security Agency, and the Federal Bureau of Investigation to develop, purchase, implement, and operate systems such as Carnivore, ECHELON, and NarusInsight to intercept and analyze the immense amount of data that traverses the Internet and telephone system every day. The Total Information Awareness program, of the Information Awareness Office, was formed in 2002 by the Pentagon and led by former rear admiral John Poindexter. The program designed numerous technologies to be used to perform mass surveillance. Examples include advanced speech-to-text programs (so that phone conversations can be monitored en masse by a computer, instead of requiring human operators to listen to them), social network analysis software to monitor groups of people and their interactions with each other, and "Human identification at a distance" software, which allows computers to identify people on surveillance cameras by their facial features and gait (the way they walk). The program was later renamed "Terrorism Information Awareness" after a negative public reaction. Legal foundations The Communications Assistance for Law Enforcement Act (CALEA), passed in 1994, requires that all U.S. telecommunications companies modify their equipment to allow easy wiretapping of telephone, VoIP, and broadband Internet traffic. In 1999, two models of mandatory data retention were suggested for the US. The first model would record the IP address assigned to a customer at a specific time.
In the second model, "which is closer to what Europe adopted", telephone numbers dialed, contents of Web pages visited, and recipients of e-mail messages must be retained by the ISP for an unspecified amount of time. In 2006, the International Association of Chiefs of Police adopted a resolution calling for a "uniform data retention mandate" for "customer subscriber information and source and destination information." The U.S. Department of Justice announced in 2011 that criminal investigations "are being frustrated" because no law currently exists to force Internet providers to keep track of what their customers are doing. The Electronic Frontier Foundation has an ongoing lawsuit (Hepting v. AT&T) against the telecom giant AT&T Inc. for its assistance to the U.S. government in monitoring the communications of millions of American citizens. It has managed thus far to keep the proceedings open. The documents at issue, which were exposed by a whistleblower who had previously worked for AT&T and which show schematics of the massive data-mining system, have since been made public. Internet communications The FBI developed the computer programs "Magic Lantern" and CIPAV, which it can remotely install on a computer system, in order to monitor a person's computer activity. The NSA has been gathering information on financial records and Internet surfing habits, and monitoring e-mails. It has also performed extensive surveillance on social networks such as Facebook. Facebook has revealed that, in the last six months of 2012, it handed over the private data of between 18,000 and 19,000 users to law enforcement of all types—including local police and federal agencies, such as the FBI, Federal Marshals and the NSA. One form of wiretapping utilized by the NSA is RADON, a bi-directional host tap that can inject Ethernet packets onto the target network. It allows bi-directional exploitation of denied networks using standard on-net tools. The one limitation of RADON is that it is a USB device that requires a physical connection to a laptop or PC to work. RADON was created by a Massachusetts firm called Netragard. Their founder, Adriel Desautels, said about RADON: "it is our 'safe' malware. RADON is designed to enable us to infect customer systems in a safe and controllable manner. Safe means that every strand is built with an expiration date that, when reached, results in RADON performing an automatic and clean self-removal." The NSA is also known to have splitter sites in the United States. Splitter sites are places where a copy of every packet is directed to a secret room where it is analyzed by the Narus STA 6400, a deep packet inspection device. Although the only known location is at 611 Folsom Street, San Francisco, California, expert analysis of Internet traffic suggests that there are likely several locations throughout the United States. Intelligence apparatus to monitor Americans Since the September 11, 2001 terrorist attacks, a vast domestic intelligence apparatus has been built to collect information using the FBI, local police, state homeland security offices and military criminal investigators. The intelligence apparatus collects, analyzes and stores information about millions of (if not all) American citizens, most of whom have not been accused of any wrongdoing. Every state and local law enforcement agency is to feed information to federal authorities to support the work of the FBI.
The PRISM special source operation system was enabled by the Protect America Act of 2007 under President Bush and the FISA Amendments Act of 2008, which legally immunized private companies that cooperated voluntarily with US intelligence collection and was renewed by Congress under President Obama in 2012 for five years, until December 2017. According to The Register, the FISA Amendments Act of 2008 "specifically authorizes intelligence agencies to monitor the phone, email, and other communications of U.S. citizens for up to a week without obtaining a warrant" when one of the parties is outside the U.S. PRISM was first publicly revealed on 6 June 2013, after classified documents about the program were leaked to The Washington Post and The Guardian by Edward Snowden. Telephones In early 2006, USA Today reported that several major telephone companies were cooperating illegally with the National Security Agency to monitor the phone records of U.S. citizens, and storing them in a large database known as the NSA call database. This report came on the heels of allegations that the U.S. government had been conducting electronic surveillance of domestic telephone calls without warrants. Law enforcement and intelligence services in the United States possess technology to remotely activate the microphones in cell phones in order to listen to conversations that take place near the person holding the phone. U.S. federal agents regularly use mobile phones to collect location data. The geographical location of a mobile phone (and thus the person carrying it) can be determined easily (whether it is being used or not) using a technique known as multilateration, which calculates the differences in the time taken for a signal to travel from the cell phone to each of several cell towers near the owner of the phone; a minimal numerical sketch of the idea follows below. In 2013, the existence of the Hemisphere Project, through which AT&T provides call detail records to government agencies, became publicly known.
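The multilateration technique mentioned under Telephones can be made concrete with a small sketch. Everything below is illustrative and invented for the example (the tower layout, the "true" handset position, and the coarse-to-fine grid-search solver); an operational system would fit noisy timing measurements with a proper least-squares estimator rather than a grid search.

```python
# Illustrative sketch only: locating a handset from time-difference-of-
# arrival (TDOA) measurements at several cell towers. All positions and
# timings here are invented for the example.
import math

C = 299_792_458.0  # signal propagation speed, m/s

towers = [(0.0, 0.0), (5000.0, 0.0), (0.0, 5000.0), (5000.0, 5000.0)]
true_pos = (1200.0, 3400.0)                              # unknown in practice
arrivals = [math.dist(true_pos, t) / C for t in towers]  # "measured" times

def tdoa_residual(x, y):
    """Squared mismatch between predicted and measured range differences,
    all taken relative to the first tower."""
    pred = [math.dist((x, y), t) for t in towers]
    meas = [a * C for a in arrivals]
    return sum(((pred[i] - pred[0]) - (meas[i] - meas[0])) ** 2
               for i in range(1, len(towers)))

# Coarse-to-fine grid search over the service area
step, cx, cy = 1000.0, 2500.0, 2500.0
for _ in range(6):  # shrink the grid around the best cell on each pass
    cands = [(cx + i * step, cy + j * step)
             for i in range(-5, 6) for j in range(-5, 6)]
    cx, cy = min(cands, key=lambda p: tdoa_residual(*p))
    step /= 5
print(f"estimated position: ({cx:.0f}, {cy:.0f})")  # converges near (1200, 3400)
```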
Infiltration of smartphones As worldwide sales of smartphones began exceeding those of feature phones, the NSA decided to take advantage of the smartphone boom. This is particularly advantageous because the smartphone combines a myriad of data that would interest an intelligence agency, such as social contacts, user behavior, interests, location, photos, credit card numbers and passwords. An internal NSA report from 2010 stated that the spread of the smartphone has been occurring "extremely rapidly"—developments that "certainly complicate traditional target analysis." According to the document, the NSA has set up task forces assigned to several smartphone manufacturers and operating systems, including Apple Inc.'s iPhone and iOS operating system, as well as Google's Android mobile operating system. Similarly, Britain's GCHQ assigned a team to study and crack the BlackBerry. Under the heading "iPhone capability", the document notes that there are smaller NSA programs, known as "scripts", that can perform surveillance on 38 different features of the iPhone 3 and iPhone 4 operating systems. These include the mapping feature, voicemail and photos, as well as Google Earth, Facebook and Yahoo! Messenger. Data mining of subpoenaed records The FBI collected nearly all hotel, airline, rental car, gift shop, and casino records in Las Vegas during the last two weeks of 2003. The FBI requested all electronic data of hundreds of thousands of people based on a very general lead for the Las Vegas New Year's celebration. The Senior VP of The Mirage went on record with PBS' Frontline describing the first time they were requested to help in the mass collection of personal information. Surveillance cameras Wide Area Persistent Surveillance (also Wide Area Motion Imaging, WAMI) is a form of airborne surveillance system that collects pattern-of-life data by recording motion images of an area larger than a city – in sub-meter resolution. This video allows for anyone within the field of regard to be tracked – both live and retroactively, for forensic analysis. The use of sophisticated tracking algorithms applied to the WAMI dataset also enables mass automated geo-location tracking of every vehicle and pedestrian. WAMI sensors are typically mounted on manned airplanes, drones, blimps and aerostats. WAMI is currently in use on the southern border of the US and has been deployed in Baltimore and Dayton, Ohio, as well as in Los Angeles, specifically targeting Compton. Wide Area Persistent Surveillance systems such as ARGUS WAMI are capable of live viewing and recording a 68-square-mile area with enough detail to view pedestrians and vehicles and generate chronographs. These WAMI cameras, such as Gorgon Stare, Angelfire, Hiper Stare, Hawkeye and ARGUS, create airborne video so detailed that pedestrians can be followed across the city through forensic analysis. This allows investigators to rewind and play back the movements of anyone within this 68-square-mile area for hours, days or even months at a time, depending on the airframe the WAMI sensors are mounted on. JLENS, a surveillance aerostat scheduled for deployment over the East Coast of the US, is a form of WAMI that uses sophisticated radar imaging along with electro-optical WAMI sensors to enable mass geo-location tracking of ground vehicles. While resistance to the domestic deployment of WAMI has emerged in areas where the public has learned of the technology's use, the deployments have been intentionally hidden from the public, as in Compton, California, where the mayor learned about the surveillance from groups like the American Civil Liberties Union, Teame Zazzu and the Center for Investigative Reporting. PeSEAS and PerMIATE software automate and record the movement observed in the WAMI video. This technology uses automatic object recognition software to track and record the movements of pedestrians and vehicles across the entire frame, generating "tracklets", or chronographs of the movements of every car and pedestrian. 24/7 deployment of this technology has been suggested by the DHS on spy blimps such as the cancelled Blue Devil airship. Traffic cameras, which were meant to help enforce traffic laws at intersections, have also sparked some controversy, due to their use by law enforcement agencies for purposes unrelated to traffic violations. These cameras also work as transit choke-points that allow individuals inside the vehicle to be positively identified and license plate data to be collected and time-stamped for cross-reference with airborne WAMI systems such as ARGUS and Hawkeye used by police and law enforcement. The Department of Homeland Security is funding networks of surveillance cameras in cities and towns as part of its efforts to combat terrorism. In February 2009, Cambridge, MA rejected the cameras due to privacy concerns. In July 2020, the Electronic Frontier Foundation (EFF) reported that the San Francisco Police Department (SFPD) used a camera network in the city's Business Improvement District amid protests against police violence.
The report claims that the SFPD's usage of the camera network went beyond investigating footage, describing the department's access to real-time video feeds as "indiscriminate surveillance of protestors." Surveillance drones On 19 June 2013, FBI Director Robert Mueller told the United States Senate Committee on the Judiciary that the federal government had been employing surveillance drones on U.S. soil in "particular incidents". According to Mueller, the FBI was at the time in the initial stage of developing drone policies. Earlier, in 2012, Congress passed a US$63 billion bill granting four years of additional funding to the Federal Aviation Administration (FAA). Under the bill, the FAA is required to provide military and commercial drones with expanded access to U.S. airspace by October 2015. In February 2013, a spokesman for the Los Angeles Police Department explained that these drones would initially be deployed in large public gatherings, including major protests. Over time, tiny drones would be used to fly inside buildings to track down suspects and assist in investigations. According to The Los Angeles Times, the main advantage of using drones is that they offer "unblinking eye-in-the-sky coverage". They can be modified to carry high-resolution video cameras, infrared sensors, license plate readers and listening devices, and can be disguised as seagulls or other birds to mask themselves. The FBI and Customs and Border Protection have used drones for surveillance of protests by the Black Lives Matter movement. Infiltration of activist groups In 2003, consent decrees against surveillance around the country were lifted, with the assistance of the Justice Department. The New York City Police Department infiltrated and compiled dossiers on protest groups before the 2004 Republican National Convention, leading to over 1,800 arrests and subsequent fingerprinting. In 2008, Maryland State Police infiltrated local peace groups. In 2013, a Washington, D.C. undercover police officer infiltrated peace groups. International cooperation During World War II, the BRUSA Agreement was signed by the governments of the United States and the United Kingdom for the purpose of intelligence sharing. This was later formalized in the UKUSA Agreement of 1946 as a secret treaty. The full text of the agreement was released to the public on 25 June 2010. Although the treaty was later revised to include other countries such as Denmark, Germany, Ireland, Norway, Turkey, and the Philippines, most of the information sharing is performed by the so-called "Five Eyes", a term referring to the following English-speaking Western democracies and their respective intelligence agencies: – The Defence Signals Directorate of Australia – The Communications Security Establishment of Canada – The Government Communications Security Bureau of New Zealand – The Government Communications Headquarters of the United Kingdom, which is widely considered to be a leader in traditional spying due to its influence on countries that were once part of the British Empire. – The National Security Agency of the United States, which has the biggest budget and some of the most advanced technical abilities among the "Five Eyes". In 2013, media disclosures revealed how other government agencies have cooperated extensively with the "Five Eyes": – The Politiets Efterretningstjeneste (PET) of Denmark, a domestic intelligence agency, exchanges data with the NSA on a regular basis, as part of a secret agreement with the United States.
– The Bundesnachrichtendienst (Federal Intelligence Service) of Germany systematically transfers metadata from German intelligence sources to the NSA. In December 2012 alone, Germany provided the NSA with 500 million metadata records. The NSA granted the Bundesnachrichtendienst access to XKeyscore, in exchange for Mira4 and Veras. In early 2013, Hans-Georg Maaßen, President of the German domestic security agency BfV, made several visits to the headquarters of the NSA. According to classified documents of the German government, Maaßen had agreed to transfer all data collected by the BfV via XKeyscore to the NSA. In addition, the BfV has been working very closely with eight other U.S. government agencies, including the CIA. – The SIGINT National Unit of Israel routinely receives raw intelligence data (including that of U.S. citizens) from the NSA. (See also: Memorandum of understanding between the NSA and Israel) – The Algemene Inlichtingen en Veiligheidsdienst (General Intelligence and Security Service) of the Netherlands has been receiving and storing user information gathered by U.S. intelligence sources such as PRISM. – The Defence Ministry of Singapore and its Security and Intelligence Division have been secretly intercepting much of the fibre optic cable traffic passing through the Asian continent. Information gathered by the Government of Singapore is transferred to the Government of Australia as part of an intelligence sharing agreement. This allows the "Five Eyes" to maintain a "stranglehold on communications across the Eastern Hemisphere". – The National Defence Radio Establishment of Sweden (codenamed Sardines) has been working extensively with the NSA, and it has granted the "Five Eyes" access to underwater cables in the Baltic Sea. – The Federal Intelligence Service (FIS) of Switzerland regularly exchanges information with the NSA, based on a secret agreement. In addition, the NSA has been granted access to Swiss monitoring facilities in Leuk (canton of Valais) and Herrenschwanden (canton of Bern). Aside from the "Five Eyes", most other Western countries are also participating in the NSA surveillance system and sharing information with each other. However, being a partner of the NSA does not automatically exempt a country from being targeted by the NSA. According to an internal NSA document leaked by Snowden, "We (the NSA) can, and often do, target the signals of most 3rd party foreign partners." Examples of members of the "Five Eyes" spying for each other: On behalf of the British Prime Minister Margaret Thatcher, the Security Intelligence Service of Canada spied on two British cabinet ministers in 1983. The U.S. National Security Agency spied on and intercepted the phone calls of Princess Diana right up until she died in a Paris car crash with Dodi Fayed in 1997. The NSA currently holds 1,056 pages of classified information about Princess Diana, which cannot be released to the public because their disclosure is expected to cause "exceptionally grave damage" to the national security of the United States. Uses of intercepted data Most of the NSA's collected data that was seen by human eyes (i.e., used by NSA operatives) was used in accordance with the stated objective of combating terrorism. Beyond counterterrorism, these surveillance programs have been employed to assess the foreign policy and economic stability of other countries. According to reports by Brazil's O Globo newspaper, the collected data was also used to target "commercial secrets".
In a statement addressed to the National Congress of Brazil, journalist Glenn Greenwald testified that the U.S. government uses counter-terrorism as a "pretext" for clandestine surveillance in order to compete with other countries in the "business, industrial and economic fields". In an interview with Der Spiegel published on 12 August 2013, former NSA Director Michael Hayden admitted that "We [the NSA] steal secrets. We're number one in it". Hayden also added: "We steal stuff to make you safe, not to make you rich". According to documents seen by the news agency Reuters, information obtained in this way is subsequently funnelled to authorities across the nation to help them launch criminal investigations of Americans. Federal agents are then instructed to "recreate" the investigative trail in order to "cover up" where the information originated, a practice known as parallel construction. (Were the true origins known, the evidence and resulting case might be invalidated as "fruit of the poisonous tree", a legal doctrine designed to deter abuse of power that prevents evidence or subsequent events being used in a case if they resulted from a search or other process that does not conform to legal requirements.) According to NSA Chief Compliance Officer John DeLong, most violations of the NSA's rules were self-reported, and most often involved spying on personal love interests using the agency's surveillance technology. See also Censorship in the United States Domain Awareness System Freedom of speech in the United States Global surveillance Internet censorship in the United States Labor spying in the United States List of Americans under surveillance List of government mass surveillance projects Mass surveillance in the United Kingdom Police surveillance in New York City References External links Dozens of articles about the U.S. National Security Agency and its spying and surveillance programs CriMNet Evaluation Report by the Office of the Legislative Auditor of Minnesota, March 2004; part of a program to improve sharing of criminal justice information. Smyth, Daniel. "Avoiding Bloodshed? US Journalists and Censorship in Wartime", War & Society, Volume 32, Issue 1, 2013. Deflem, Mathieu; Silva, Derek, M.D.; and Anna S. Rogers. 2018. "Domestic Spying: A Historical-Comparative Perspective". pp. 109–125 in The Cambridge Handbook of Social Problems, Volume 2, edited by A. Javier Treviño. New York: Cambridge University Press. United States Espionage in the United States Human rights abuses in the United States United States
44253533
https://en.wikipedia.org/wiki/Soundtracker%20%28music%20streaming%29
Soundtracker (music streaming)
SoundayMusic (formerly known as Soundtracker) is a geosocial networking mobile music streaming app that enables users to listen to and track the music their friends and neighbors are playing in real time. The service provides over 32 million tracks and allows users to create "music stations" choosing between a mix of up to three artists, or choosing a music genre. In the free version users can create up to 10 personalized stations, look at the stations that are being played nearby in real time, and interact with other users through instant chat. The paid, premium subscription removes advertisements and allows users to create an unlimited number of stations. It was launched in 2009 by Soundtracker, and as of December 2014 the service had 1.3 million registered users. Soundtracker is available on the iOS App Store, Google Play for Android, Windows Phone Store, Windows Store, Google Glass, BlackBerry World, Samsung Apps, Amazon Appstore, Nook, and Samsung Smart TV, in 10 languages: English, Spanish, French, German, Portuguese, Italian, Simplified Chinese, Japanese, Korean and Russian. Soundtracker is a registered trademark. Beginnings The company was formed in late 2008 by a team composed of Daniele Calabrese and 25 software developers and designers. Soundtracker was first marketed in 2010 in San Francisco, and today has offices in Washington, DC, and Cagliari, Italy. Evolution The first mobile platform, iOS, was developed by Daniele Calabrese and his team in Silicon Valley in 2009. The iOS app at its inception featured 13 million tracks and allowed geo-tagging. In 2010 the team moved to Boston, where it developed stations, push notifications, interaction with nearby listeners, and the app for Windows Phone 7. Also in 2010, a website was introduced to provide access to non-mobile users, and the app was made available on Android, Java, Windows 7 and BlackBerry platforms. In 2011 the app was integrated with Facebook, Twitter, Foursquare and Songkick. In the second half of 2013, a system of in-app purchase and premium subscription was implemented, allowing users to create an unlimited number of stations. In June 2014 Soundtracker launched Autodiscovery, a button that discovers the music played in a certain area. In October 2014, the app launched a location- and proximity-based advertising service with Facebook and Twitter. Features Playlists Soundtracker allows users to create playlists by choosing artists or a genre, or by using the Autodiscovery feature. Each playlist features new music based on an algorithm for music discovery. The playlists are geolocated and users can see the playlists that are being played around them on a map. Autodiscovery In June 2014 Soundtracker introduced Autodiscovery, available on the iOS and Android platforms. Similar to Shazam, the app analyzes a track being played and provides the user with its details. It offers the option to create a station, buy the track from iTunes or Google Play Music, or watch the related music video on YouTube. Proximity In September 2014 Soundtracker implemented Proximity, a feature that allows a user to listen to music with people nearby in real time. Using the live broadcasting option, a user can meet people nearby by tuning into their stations. Proximity was implemented using the Wireless Registry, the first global registry for wireless names and identifiers. A schematic sketch of such a nearby-station lookup follows below.
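The following sketch is purely illustrative and not a description of Soundtracker's actual implementation: the station data, the function names, and the 500 m radius are all invented here. It shows one common way a proximity lookup of this kind can be built, using the haversine great-circle distance to rank live stations by how far away they are broadcasting.

```python
# Illustrative sketch: find live stations broadcasting near a listener.
# All names and data are hypothetical; the real service's internals are
# not documented in this article.
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in metres."""
    R = 6_371_000  # mean Earth radius, metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Hypothetical live stations: (name, latitude, longitude)
stations = [
    ("indie mix", 38.9072, -77.0369),
    ("jazz trio station", 38.9101, -77.0362),
    ("k-pop hits", 39.2904, -76.6122),
]

def stations_nearby(lat, lon, radius_m=500):
    """Return live stations within radius_m of the listener, nearest first."""
    hits = [(haversine_m(lat, lon, s_lat, s_lon), name)
            for name, s_lat, s_lon in stations]
    return sorted((d, n) for d, n in hits if d <= radius_m)

for dist, name in stations_nearby(38.9080, -77.0365):
    print(f"{name}: {dist:.0f} m away")  # the two nearby stations match
```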
Social network integration Soundtracker is integrated with major social networks, such as Facebook and Twitter. Geolocation and geotagging are possible because of integration with Foursquare, and Songkick provides the latest updates about live concerts or shows by the selected artist. Analytics platform Soundtracker captures user data and combines it with other data sources to generate information that can be retrieved and filtered by business intelligence tools. Revenue model Soundtracker is an ad-based music streaming service which offers the option of an ad-free subscription. Advertisement In October 2014, in addition to traditional advertising banners in the app, Soundtracker implemented in-app advertising with Facebook Audience Network and Twitter's MoPub. Subscriptions Soundtracker offers two different types of plan: basic and premium. The basic plan allows users to create up to ten free stations from the music catalogue. The premium plan contains no ads and allows users to create an unlimited number of stations and to listen to the top user-generated charts, divided by music genre, in seven countries. Royalties Soundtracker is a non-interactive webcaster with a compulsory license from copyright societies and the right to use any music or recordings that have been released to the public. It pays royalties to the copyright societies ASCAP, BMI and SESAC; these royalties amount to between 3% and 5% of total revenues. Soundtracker also negotiates directly with indie labels and artists. Future projects The company is developing applications for the automotive industry, as well as a special design of the app for different kinds of wearable technology. See also References External links 2008 establishments in California Android (operating system) software BlackBerry software Companies based in Washington, D.C. Internet radio in the United States IOS software Windows Phone software Windows software
352817
https://en.wikipedia.org/wiki/Margaret%20Forster
Margaret Forster
Margaret Forster (25 May 1938 – 8 February 2016) was an English novelist, biographer, memoirist, historian and critic, best known for the 1965 novel Georgy Girl, made into a successful film of the same name, which inspired a hit song by The Seekers. Other successes were a 2003 novel, Diary of an Ordinary Woman, biographies of Daphne du Maurier and Elizabeth Barrett Browning, and her memoirs Hidden Lives and Precious Lives. Early life and education Forster was born in the Raffles council estate in Carlisle, England. Her father, Arthur Forster, was a mechanic or factory fitter; her mother, Lilian (née Hind), was a housewife who had worked as a clerk or secretary before her marriage. Forster attended Carlisle and County High School for Girls (1949–1956), a grammar school. She went on to win an open scholarship to read history at Somerville College, Oxford, graduating in 1960. Her first job was two years (1961–1963) of teaching English at Barnsbury Girls' School in Islington, north London. During that time she started to write, but her first draft novel was rejected. Writing Novels Forster's first published novel, Dames' Delight, loosely based on her experiences at Oxford, launched her writing career in 1964. Her second, published the following year, was a bestseller: Georgy Girl describes the choices open to a young working-class woman in London in the Swinging Sixties. It was adapted as a successful 1966 film starring Lynn Redgrave as Georgy, with Charlotte Rampling, Alan Bates and James Mason, for which Forster co-wrote the screenplay with Peter Nichols. The book was also adapted as a short-lived Broadway musical, Georgy, in 1970. Forster wrote prolifically in the 1960s and 1970s while bringing up three children, but later criticised many of her own early novels as "skittery", feeling she had not found a voice until her 1974 novel The Seduction of Mrs Pendlebury. Those early novels are mainly light and humorous, driven by a strong plot. An exception was The Travels of Maudie Tipstaff (1967), which presents the difference in values between generations in a Glaswegian family. The theme of family relations became prominent in her later works. Mother, Can You Hear Me? (1979) and Private Papers (1986) are darker in tone. She tackled subjects such as single mothers and young offenders. Have the Men Had Enough? (1989) examines care of the elderly and the problem of Alzheimer's disease, inspired by her mother-in-law's decline and death from the disease. In 1991, she and her husband, Hunter Davies, contributed to a BBC2 First Sight episode, "When Love Isn't Enough", telling Marion Davies's story; Forster sharply criticised government policies on care for the elderly. The publisher Carmen Callil sees Lady's Maid (1990), a historical novel about Elizabeth Barrett Browning viewed through the eyes of her maid, as Forster's best work. Diary of an Ordinary Woman (2003), narrated as the diary of a fictional woman who lives through the major events of the 20th century, is so realistic that many readers believed it was an authentic diary. Other later novels include The Memory Box (1999) and Is There Anything You Want? (2005). Her final novel, How to Measure a Cow, was published in March 2016. Forster published over 25 novels. A lifelong feminist and socialist, Forster addressed these themes in most of her works. Callil ascribes to Forster a world view "shaped by her sense of her working-class origins: most of her stories were about women's lives."
Author Valerie Grove describes her novels as being about "women's lives and the deceit within families". Biographies, memoirs and other non-fiction Forster's non-fiction included 14 biographies, historical works and memoirs. Her best-known biographies are those of the novelist Daphne du Maurier (1993) and the poet Elizabeth Barrett Browning (1988). The former was a groundbreaking exploration of the author's sexuality and her association with Gertrude Lawrence, filmed by the BBC as Daphne in 2007. In her biography of Barrett Browning, Forster draws on recently found letters and papers that shed light on the poet's life before she met and eloped with Robert Browning, replacing the myth of an invalid poet guarded by an ogre-like father with a more nuanced picture of an active, difficult woman, complicit in her virtual imprisonment. Forster also wrote fictionalised biographies of the novelist William Makepeace Thackeray (1978) and the artist Gwen John (2006). Significant Sisters (1984) chronicled the growing feminist movement through the lives of eight pioneering British and American women: Caroline Norton, Elizabeth Blackwell, Florence Nightingale, Emily Davies, Josephine Butler, Elizabeth Cady Stanton, Margaret Sanger and Emma Goldman. Good Wives (2001) surveyed contemporary and historical women married to famous men, including Mary Livingstone, Fanny Stevenson, Jennie Lee and herself. Her other historical writings include Rich Desserts and Captain's Thin (1997), an account of the Carr's biscuit factory in Carlisle. Forster's two memoirs based on her family background, Hidden Lives: A Family Memoir (1995) and Precious Lives (1998), were joined by an autobiographical volume, My Life in Houses (2014). Hidden Lives, drawing on the life of her grandmother, a servant with a secret illegitimate daughter, was praised by the historian and critic Claire Tomalin as "a slice of history to be recalled whenever people lament the lovely world we have lost." Frances Osborne cites it as her own inspiration for becoming a biographer: "It opened my eyes to how riveting the history of real girl-next-door women could be." The sequel, Precious Lives, tackled Forster's father, whom she reportedly disliked. Broadcasting, journalism and other roles Forster joined the BBC Advisory Committee on the Social Effects of Television (1975–1977) and the Arts Council Literary Panel (1978–1981). She served as a Booker Prize judge in 1980. She was the main non-fiction reviewer for the Evening Standard (1977–1980). She contributed often to literature programmes on television and BBC Radio 4, and to newspapers and magazines. She was interviewed by Sue Lawley for Radio 4's Desert Island Discs in 1994. Awards Forster was elected a Fellow of the Royal Society of Literature in 1975. She won several awards for non-fiction. Elizabeth Barrett Browning: A Biography won the Heinemann Award of the Royal Society of Literature (1988), and Daphne du Maurier: The Secret Life of the Renowned Storyteller won the Writers' Guild Award for Best Non-Fiction (1993) and the Fawcett Society Book Prize (1994). Rich Desserts and Captain's Thin: A Family and Their Times 1831–1931 won the Lex Prize of The Global Business Book Award (1998). Precious Lives won the J. R. Ackerley Prize for Autobiography (1999). Personal life Forster met the writer, journalist and broadcaster Hunter Davies as a teenager in their native Carlisle. They married in 1960, shortly after she had completed her finals. The marriage lasted until Forster's death.
They moved to London, where Davies had a job, at first living in rented accommodation in Hampstead, then buying and renovating a Victorian house in Boscastle Road, Dartmouth Park, north London, which remained their main home. After the success of Georgy Girl in the mid-1960s, Forster bought a house for her mother. The couple had three children, a son and two daughters; one daughter, Caitlin Davies, is an author and journalist. The family lived for some time in the Algarve in Portugal, before returning to London. They also had homes in Caldbeck and Loweswater in the Lake District. She led a somewhat reclusive life, often refusing to attend book signings and other publicity events. Her friends included broadcaster Melvyn Bragg and playwright Dennis Potter. Forster was diagnosed with breast cancer in the 1970s and had two mastectomies. A further cancer diagnosis followed in 2007. By 2014, the cancer had metastasized, and she died in February 2016, aged 77. Legacy In March 2018, the British Library acquired the Margaret Forster Archive, which consists of material relating to her works, professional and private correspondence, and personal papers. It includes manuscripts and typescript drafts of most of her published work, and some personal diaries. Selected works Novels 1964 Dames' Delight (Jonathan Cape) 1965 The Bogeyman (Secker & Warburg) 1965 Georgy Girl (Secker & Warburg) 1967 The Travels of Maudie Tipstaff (Secker & Warburg) 1968 The Park (Secker & Warburg) 1969 Miss Owen-Owen is at Home (Secker & Warburg) 1970 Fenella Phizackerley (Secker & Warburg) 1971 Mr Bone's Retreat (Secker & Warburg) 1974 The Seduction of Mrs Pendlebury (Secker & Warburg) 1979 Mother Can You Hear Me? (Secker & Warburg) 1980 The Bride of Lowther Fell: a Romance (Secker & Warburg) 1981 Marital Rites (Secker & Warburg) 1986 Private Papers (Chatto & Windus) 1989 Have the Men Had Enough? (Chatto & Windus) 1990 Lady's Maid (Chatto & Windus) 1991 The Battle for Christabel (Chatto & Windus) 1994 Mother's Boys (Chatto & Windus) 1996 Shadow Baby (Chatto & Windus) 1999 The Memory Box (Chatto & Windus) 2003 Diary of an Ordinary Woman 1914–1995 (Chatto & Windus) 2005 Is There Anything You Want? (Chatto & Windus) 2006 Keeping the World Away (Chatto & Windus) 2007 Over (Chatto & Windus) 2010 Isa and May (Chatto & Windus) 2013 The Unknown Bridesmaid (Chatto & Windus) 2016 How to Measure a Cow (Chatto & Windus) Biography and history 1973 The Rash Adventurer: The Rise and Fall of Charles Edward Stuart (Secker & Warburg) 1978 Memoirs of a Victorian Gentleman: William Makepeace Thackeray (Secker & Warburg) 1984 Significant Sisters: The Grassroots of Active Feminism 1839–1939 (Secker & Warburg) 1988 Elizabeth Barrett Browning: A Biography (Chatto & Windus) 1993 Daphne du Maurier: The Secret Life of the Renowned Storyteller (Chatto & Windus) 1997 Rich Desserts and Captain's Thin: A Family and Their Times 1831–1931 (Chatto & Windus) 2001 Good Wives?: Mary, Fanny, Jennie & Me 1845–2001 (Chatto & Windus) Family memoirs and autobiography 1995 Hidden Lives: A Family Memoir (Viking) 1998 Precious Lives (Chatto & Windus) 2014 My Life in Houses (Chatto & Windus) 2017 Diary of an Ordinary Schoolgirl (Chatto & Windus) Literary editions 1984 Drawn from Life: The Journalism of William Makepeace Thackeray (Folio Society) 1988 Elizabeth Barrett Browning, Selected Poems (Chatto & Windus) 1991 Virginia Woolf, Flush: A Biography (1933) New intro.
by Margaret Forster (Hogarth Press) References Further reading David Bordelon, "Margaret Forster", in Twentieth Century Literary Biographers (Dictionary of Literary Biography, Vol. 155) (Detroit: Gale, 1995), pp. 76–87 "Forster, Margaret" in The Oxford Companion to English Literature, 6th ed. rev., ed. Margaret Drabble (Oxford: Oxford University Press, 2000) Rosanna Greenstreet, "My perfect weekend: Margaret Forster", The Times, 19 December 1992 [Interview] "Margaret Forster", Contemporary Literary Criticism, Vol. 149 (Detroit: Gale, 2002), pp. 62–107 "Margaret Forster", Contemporary British Novelists, ed. Nick Rennison (London: Routledge, 2005), pp. 72–76 Merritt Moseley, "Margaret Forster", British and Irish Novelists since 1960 (Dictionary of Literary Biography, Vol. 271, Detroit: Gale, 2003), pp. 139–155 Christina Patterson, "A life less ordinary: Margaret Forster worries, after 30 books, that she loves writing too much", The Independent, 15 March 2003, pp. 20–21 [Interview] Annie Taylor, "The difference a day made (14 May 1957)... Margaret Forster was on a mission", The Guardian, 6 June 1996 [Interview] Kathleen Jones, Margaret Forster: An Introduction (Northern Lights, 2003) Kathleen Jones, Margaret Forster: A Life in Books (The Book Mill, 2012) External links Lindsay, Cora, 'Critical perspective (and biog & bibliog. on Margaret Forster)', Contemporary Writers (British Council) Margaret Forster at Random House (publisher's website) 'Biography of Margaret Forster' by Hana Sambrook & Roberta Schreyer Margaret Forster discusses her latest book Isa and May with The Interview Online 1938 births 2016 deaths Alumni of Somerville College, Oxford English biographers English women journalists English literary critics Women literary critics English women novelists Fellows of the Royal Society of Literature People from Carlisle, Cumbria English women non-fiction writers Women biographers
70086713
https://en.wikipedia.org/wiki/Lenovo%20Vibe%20K5
Lenovo Vibe K5
The Lenovo Vibe K5 is an Android-based smartphone released on June 13, 2016, by Lenovo. It has a 13 MP rear camera, a 5 MP front camera, 2 GB of RAM and 16 GB of internal storage. Specifications Cameras The Lenovo Vibe K5 has a 13 MP (f/2.2) single rear camera with an LED flash, and supports HDR and panorama modes. It can record video in 1080p at 30 fps. It also has a 5 MP (f/2.8) single front camera. Battery The Lenovo Vibe K5 has a 2750 mAh removable Li-ion battery. Display The Lenovo Vibe K5 has a 5.0-inch IPS LCD display with a resolution of 720 x 1280 pixels, a 16:9 aspect ratio and a pixel density of 294 ppi (a worked check of this figure appears below). Operating System The Lenovo Vibe K5 runs the Android 5.1 (Lollipop) operating system. CPU The Lenovo Vibe K5 has a Qualcomm Snapdragon 415 chipset with an octa-core (4x1.5 GHz Cortex-A53 & 4x1.2 GHz Cortex-A53) CPU and an Adreno 405 GPU. Memory The Lenovo Vibe K5 has 2 GB of RAM and 16 GB (eMMC 4.5) of internal storage. Sound The Lenovo Vibe K5 has stereo speakers and a 3.5 mm jack. Connectivity The Lenovo Vibe K5 supports Wi-Fi 802.11 b/g/n, Wi-Fi hotspot, Bluetooth 4.1, GPS (with A-GPS), FM radio, microUSB 2.0 and USB On-The-Go. It also supports 2G, 3G and 4G networks. Body The Lenovo Vibe K5 has a glass front, an aluminum/plastic back and an aluminum frame. It weighs 142 g (5.01 oz) and measures 142 mm x 71 mm x 8 mm. It is available in Champagne Gold and Platinum Silver. Sensors The Lenovo Vibe K5 has an accelerometer and a proximity sensor. See also Lenovo Vibe K4 Note Lenovo Android (operating system) References Lenovo smartphones Smartphones Android (operating system) devices Mobile phones introduced in 2016
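As a quick arithmetic check of the 294 ppi figure quoted in the Display section above (assuming the usual definition of pixel density: diagonal resolution in pixels divided by diagonal size in inches):

\[
\mathrm{ppi} = \frac{\sqrt{720^{2} + 1280^{2}}}{5.0\,\mathrm{in}} = \frac{\sqrt{2\,156\,800}}{5.0} \approx \frac{1468.6}{5.0} \approx 294
\]

which agrees with the quoted value.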
21461784
https://en.wikipedia.org/wiki/Taxonomy%20of%20Anopheles
Taxonomy of Anopheles
Anopheles is a genus of mosquitoes (Culicidae). Of about 484 recognised species, over 100 can transmit human malaria, but only 30–40 commonly transmit parasites of the genus Plasmodium that cause malaria in humans in endemic areas. Anopheles gambiae is one of the best known, because of its predominant role in the transmission of the deadly species Plasmodium falciparum. Classification The classification of this genus began in 1901 with Frederick Vincent Theobald. Despite the passage of time, the taxonomy remains incompletely settled. Classification into species is based on morphological characteristics – wing spots, head anatomy, larval and pupal anatomy, and chromosome structure – and more recently on DNA sequences. The genus Anopheles belongs to the subfamily Anophelinae, which contains three genera: Anopheles Meigen (nearly worldwide distribution), Bironella Theobald (Australia only: 11 described species) and Chagasia Cruz (Neotropics: four described species). The genus Bironella has been divided into three subgenera: Bironella Theobald (two species), Brugella Edwards (three species) and Neobironella Tenorio (three species). Bironella appears to be the sister taxon to the Anopheles, with Chagasia forming the outgroup in this subfamily. The type species of the genus is Anopheles maculipennis. Subgenera The genus has been subdivided into seven subgenera based primarily on the number and positions of specialized setae on the gonocoxites of the male genitalia. The system of subgenera originated with the work of Christophers, who in 1915 described three subgenera: Anopheles (widely distributed), Myzomyia (later renamed Cellia) (Old World) and Nyssorhynchus (Neotropical). Nyssorhynchus was first described as Lavernia by Theobald. Frederick Wallace Edwards in 1932 added the subgenus Stethomyia (Neotropical distribution). Kerteszia was also described by Edwards in 1932, but then was recognised as a subgrouping of Nyssorhynchus. It was elevated to subgenus status by Komp in 1937; this subgenus is also found in the Neotropics. Two additional subgenera have since been recognised: Baimaia (Southeast Asia only) by Harbach et al. in 2005 and Lophopodomyia (Neotropical) by Antunes in 1937. One species within each subgenus has been identified as the type species of that particular subgenus: Subgenus Anopheles - Anopheles maculipennis Meigen 1818 Subgenus Baimaia - Anopheles kyondawensis Abraham 1947 Subgenus Cellia - Anopheles pharoensis Giles 1899 Subgenus Kerteszia - Anopheles boliviensis Theobald 1905 Subgenus Lophopodomyia - Anopheles squamifemur Antunes 1937 Subgenus Nyssorhynchus - Anopheles argyritarsis Robineau-Desvoidy 1827 Subgenus Stethomyia - Anopheles nimbus Theobald 1902 Within the genus Anopheles are two main groupings, one formed by the Cellia and Anopheles subgenera and a second by Kerteszia, Lophopodomyia, and Nyssorhynchus. Subgenus Stethomyia is an outlier with respect to these two taxa. Within the second group, Kerteszia and Nyssorhynchus appear to be sister taxa. Cellia appears to be more closely related to the Kerteszia-Lophopodomyia-Nyssorhynchus group than to Anopheles or Stethomyia, tentatively suggesting the branching order (Stethomyia (Anopheles (Cellia (Lophopodomyia (Kerteszia, Nyssorhynchus))))); the short sketch after this paragraph renders this order as an indented tree. The number of species currently recognised within the subgenera is given here in parentheses: Anopheles (206 species), Baimaia (one), Cellia (239), Kerteszia (12), Lophopodomyia (six), Nyssorhynchus (34) and Stethomyia (five).
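The tentative branching order quoted above can be rendered as a tree with a few lines of code. The sketch below is illustrative only and not drawn from the cited literature; it uses a minimal hand-written parser for this one parenthesised, Newick-like string (real phylogenetic work would more likely use a dedicated library such as Biopython's Bio.Phylo).

```python
# Illustrative only: print the tentative subgenus branching order quoted
# above as an indented tree.

def parse(s):
    """Parse '(A(B(C,D)))'-style nesting into nested lists and names."""
    s = s.replace(" ", "")
    def node(i):
        if s[i] == "(":
            children, i = [], i + 1
            while s[i] != ")":
                child, i = node(i)
                children.append(child)
                if s[i] == ",":          # commas separate sibling taxa
                    i += 1
            return children, i + 1       # step past the closing parenthesis
        j = i
        while j < len(s) and s[j] not in ",()":
            j += 1                       # scan one taxon name
        return s[i:j], j
    tree, _ = node(0)
    return tree

def show(tree, depth=0):
    """Print the tree, one taxon per line, indented by nesting depth."""
    if isinstance(tree, list):
        for child in tree:
            show(child, depth + 1)
    else:
        print("  " * depth + tree)

show(parse("( Stethomyia ( Anopheles ( Cellia ( Lophopodomyia ( Kerteszia, Nyssorhynchus)))))"))
# Output indents Stethomyia least and the sister pair
# Kerteszia / Nyssorhynchus deepest, mirroring the nesting.
```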
The subgenus Baimaia may be elevated to genus level, as it appears to be a sister group to Bironella and all other Anopheles. The ancestors of Drosophila and Anopheles diverged an estimated 250 million years ago. The Old and New World Anopheles species subsequently diverged at least 80 million years ago. Divisions below subgenus Taxonomic units between subgenus and species are not currently recognised as official zoological names. In practice, a number of taxonomic levels have been introduced. The larger subgenera (Anopheles, Cellia, and Nyssorhynchus) have been subdivided into sections and series, which in turn have been divided into groups and subgroups. Below subgroup but above species level is the species complex. Taxonomic levels above species complex can be distinguished on morphological grounds. Species within a species complex are either morphologically identical or extremely similar and can only be reliably separated by microscopic examination of the chromosomes or DNA sequencing. The classification continues to be revised. The first species complex was described in 1926, when the problem of nontransmission of malaria by Anopheles maculipennis was solved by Falleroni, who recognised that An. maculipennis was a complex of six species, of which only four could transmit malaria. This complex has subsequently been revised to a total of seven species of which five transmit malaria. Subgenus Nyssorhynchus has been divided into three sections: Albimanus (19 species), Argyritarsis (11 species) and Myzorhynchella (four species). The Argyritarsis section has been subdivided into Albitarsis and Argyritarsis groups. The subgenus Anopheles was divided by Edwards into four series: Anopheles (worldwide), Myzorhynchus (Palearctic, Oriental, Australasian and Afrotropical), Cycloleppteron (Neotropical) and Lophoscelomyia (Oriental); and two groups, Arribalzagia (Neotropical) and Christya (Afrotropical). Reid and Knight (1961) modified this classification by subdividing the subgenus Anopheles into two sections, Angusticorn and Laticorn, and six series. The division was based on the shape of their pupal trumpets. The Laticorn section was created for those species with wide, funnel-shaped trumpets having the longest axis transverse to the stem, and the Angusticorn section for species with semitubular trumpets having the longest axis vertical, more or less in line with the stem. The earlier Arribalzagia and Christya groups were considered to be series. The Angusticorn section includes members of the Anopheles, Cycloleppteron, and Lophoscelomyia series, and the Laticorn section includes the Arribalzagia (24 species), Christya, and Myzorhynchus series. Cellia is the largest subgenus: all species within this subgenus are found in the Old World. It has been divided into six series – Cellia (eight species), Myzomyia (69 species), Neocellia (33 species), Neomyzomyia (99 species), Paramyzomyia (six species) and Pyretophorus (22 species). This classification was developed by Grjebine (in 1966), Reid (in 1968), and Gillies & de Meillon (also in 1968) based on the work by Edwards in 1932. Series definition within this subgenus is based on the cibarial armature – a collection of specialized spicules borne ventrally at the posterior margin of the cibarium – which was first used as a taxonomic method by Christophers in 1933. Kerteszia is a small subgenus found in South America whose larvae have specific ecological requirements; these can develop only in the water that accumulates in the leaf axils of epiphytic Bromeliaceae.
Unlike the majority of mosquitoes, species in this subgenus are active during the day. Within a number of species, separate subspecies have been identified. The diagnostic criteria and characteristic features of each subgenus are discussed on their own pages. Species complexes Anopheles nuneztovari is a species complex, with at least one member occurring in Colombia and Venezuela and another occurring in the Amazon Basin. These clades appear to have diverged and expanded in the Pleistocene. Medical and veterinary importance The first demonstration that mosquitoes could act as vectors of disease was by Patrick Manson, a British physician working in China, who showed that a Culex species could transmit filariasis in 1878. This was then followed in 1897 by Ronald Ross, who showed avian malaria could also be transmitted by a species of Culex. Grassi in Italy showed in 1898 that the species causing human malaria were transmitted by species of the genus Anopheles. Anopheles gambiae (then Anopheles costalis), the most important of the vectors transmitting human malaria, was first recognised as such in 1899 at Freetown, Sierra Leone. It was later realised that only a small number of species of mosquitoes were responsible for the vast majority of human malaria and other diseases. This generated considerable interest in the taxonomy of this and other mosquito genera. The species of the subgenera Baimaia, Lophopodomyia, and Stethomyia are not of medical importance. All species within the subgenus Anopheles known to carry human malaria lie within either the Myzorhynchus or the Anopheles series. Anopheles maculipennis s.l. is a known vector of West Nile virus. Six species in the subgenus Kerteszia can carry human malaria. Of these, only An. bellator and An. cruzii are of importance. Anopheles bellator can also transmit Wuchereria bancrofti. Several species of the subgenus Nyssorhynchus are of medical importance. All series of the subgenus Cellia contain vectors of malarial protozoa and microfilariae. Five species of anopheline mosquitoes (An. arabiensis, An. funestus, An. gambiae, An. moucheti and An. nili), all belonging to the subgenus Cellia, are responsible for over 95% of total transmission of Plasmodium falciparum malaria in continental sub-Saharan Africa. Anopheles sundaicus and An. subpictus are important vectors of Plasmodium vivax. Species evolution The Anopheles gambiae complex has a number of important malaria vectors. A chromosomal study suggests that An. merus is the basal member of this complex and is a sister species to An. gambiae. The two species An. quadriannulatus A and An. quadriannulatus B – neither of which is a vector of malaria – are derived from An. gambiae. The subgenera Anopheles and Cellia appear to be sister clades, as do Kerteszia and Nyssorhynchus. Species listing Species that have been shown to be vectors of human malaria are marked with a star (*) after the name. Subgenus Anopheles Anopheles anthropophagus* Xu & Feng 1975 Anopheles confusa Anopheles derooki Soesilo & Van Slooten 1931 Anopheles gracilis Theobald 1905 Anopheles hollandi Taylor 1934 Anopheles obscura Tenorio 1975 Anopheles papuae Swellengrebel & Swellengrebel de Graaf 1919 Anopheles simmondsi Tenorio 1977 Anopheles travestita Brug 1928 Section Angusticorn Series Anopheles Anopheles algeriensis Theobald 1903 Anopheles concolor Edwards 1938 Anopheles marteri subspecies marteri subspecies sogdianus Keshishian Complex Claviger (Coluzzi et al.
1965) Anopheles claviger* Meigen 1804 Anopheles petragnani Del Vecchio 1939 Group Aitkenii (Reid & Knight, 1961) Anopheles aberrans Harrison & Scanlon 1975 Anopheles acaci Baisas 1946 Anopheles aitkenii James 1903 Anopheles bengalensis Puri 1930 Anopheles borneensis McArthur 1949 Anopheles fragilis Theobald 1903 Anopheles insulaeflorum Swellengrebel & Swellengrebel de Graaf 1919 Anopheles palmatus Rodenwaldt 1926 Anopheles peytoni Kulasekera Harrison & Amerasinghe 1989 Anopheles pilinotum Harrison & Scanlon 1974 Anopheles pinjaurensis Barraud 1932 Anopheles stricklandi Reid 1965 Anopheles tigertti Scanlon & Peyton 1967 Group Alongensis (Phan et al. 1991) Anopheles alongensis Evenhuis 1940 Anopheles cucphuongensis Phan, Manh, Hinh & Vien 1991 Group Atratipes (Lee et al. 1987) Anopheles atratipes Skuse 1889 Anopheles tasmaniensis Dobrowtorsky 1966 Group Culiciformis (Reid & Knight 1961) Anopheles culiciformis Cogill 1903 Anopheles sintoni Puri 1929 Anopheles sintonoides Ho 1938 Group Lindesayi (Reid & Knight 1961) Anopheles mengalengensis Ma 1981 Anopheles nilgiricus Christophers 1924 Anopheles wellingtonianus Alcock 1912 Complex Gigas (Harrison et al. 1991) Anopheles baileyi Edwards 1923 Anopheles gigas Giles 1901 subspecies crockeri Colless subspecies danaubento Mochtar & Walandouw subspecies formosus Ludlow subspecies gigas Giles subspecies oedjalikalah Nainggolan subspecies pantjarbatu Waktoedi subspecies refutans Alcock subspecies simlensis James subspecies sumatrana Swellengrebel & Rodenwaldt Complex Lindesayi (Harrison et al. 1991) Anopheles lindesayi Giles 1900 subspecies benguetensis King subspecies cameronensis Edwards subspecies japonicus Yamada subspecies lindesayi Giles subspecies pleccau Koidzumi Group Maculipennis (Reid & Knight 1961) Anopheles atropos Dyar & Knab 1906 Anopheles aztecus Hoffmann 1935 Anopheles lewisi Ludlow 1920 Anopheles walkeri Theobald 1901 Complex Quadrimaculatus (Linton 2004) Anopheles diluvialis Reinert 1997 Anopheles inundatus Reinert 1997 Anopheles maverlius Reinert 1997 Anopheles smaragdinus Reinert 1997 Anopheles quadrimaculatus* Say 1824 Subgroup Freeborni (Linton 2004) Anopheles earlei Vargas 1943 Anopheles freeborni* Aitken 1939 Anopheles hermsi Barr & Guptavanij 1989 Subgroup Maculipennis (Linton 2004) Anopheles artemievi Gordeyev, Zvantsov, Goryacheva, Shaikevich & Yezhov Anopheles atroparvus* Van Thiel 1927 Anopheles beklemishevi Stegnii & Kabanova 1976 Anopheles daciae Linton, Nicolescu & Harbach 2004 Anopheles labranchiae* Falleroni 1926 Anopheles maculipennis Anopheles martinius Shingarev 1926 Anopheles melanoon* Hackett 1934 Anopheles messeae* Falleroni 1926 Anopheles occidentalis Dyar & Knab 1906 Anopheles persiensis Linton, Sedaghat & Harbach 2003 Anopheles sacharovi* Favre 1903 Anopheles sicaulti Roubaud 1935 Anopheles subalpinus Hackett & Lewis 1935 Group Plumbeus (Reid & Knight 1961) Anopheles arboricola Zavortink 1970 Anopheles barberi Coquillett 1903 Anopheles barianensis James 1911 Anopheles fausti Vargas 1943 Anopheles judithae Zavortink 1969 Anopheles omorii Sakakibara 1959 Anopheles plumbeus* Stephens 1828 Anopheles powderi Zavortink 1970 Anopheles xelajuensis De Leon 1938 Group Pseudopunctipennis (Reid & Knight 1961) Anopheles chiriquiensis Komp 1936 Anopheles franciscanus McCracken 1904 Anopheles hectoris Giaquinto-Mira 1931 Anopheles tibiamaculatus Neiva 1906 Anopheles eiseni Coquillett 1902 subspecies eiseni Coquillett subspecies geometricus Corrêa Anopheles parapunctipennis* Martini 1932 subspecies guatemalensis de Leon
subspecies parapunctipennis Martini Anopheles pseudopunctipennis subspecies levicastilloi Levi-Castillo subspecies neghmei Mann subspecies noei Mann subspecies patersoni Alvarado & Heredia subspecies pseudopunctipennis Theobald subspecies rivadeneirai Levi-Castillo Group Punctipennis (Reid & Knight 1961) Anopheles perplexens Ludlow 1907 Anopheles punctipennis Say 1823 Complex Crucians (Wilkerson et al. 2004) Anopheles bradleyi King 1939 Anopheles crucians Wiedemann 1828 Anopheles georgianus King 1939 Group Stigmaticus (Reid & Knight 1961) Anopheles colledgei Marks 1956 Anopheles corethroides Theobald 1907 Anopheles papuensis Dobrowtorsky 1957 Anopheles powelli Lee 1944 Anopheles pseudostigmaticus Dobrowtorsky 1957 Anopheles stigmaticus Skuse 1889 Series Cycloleppteron (Edwards 1932) Anopheles annulipalpis Lynch 1878 Anopheles grabhamii Series Lophoscelomyia (Edwards 1932) Anopheles bulkleyi Causey 1937 Group Asiaticus (Reid 1968) Anopheles annandalei Prashad 1918 Anopheles noniae Reid 1963 Subgroup Asiaticus (Rattanarithikul et al. 2004) Anopheles asiaticus Leicester 1903 Subgroup Interruptus (Rattanarithikul et al. 2004) Anopheles interruptus Puri 1929 Section Laticorn (Reid & Knight 1961) Series Arribalzagia (Root 1922) Anopheles anchietai Correa & Ramalho 1968 Anopheles apicimacula Dyar & Knab 1906 Anopheles bustamentei Galvao 1955 Anopheles calderoni Wilkerson 1991 Anopheles costai Da Fonseca & Da Silva Ramos 1939 Anopheles evandroi Da Costa Lima 1937 Anopheles fluminensis Root 1927 Anopheles forattinii Wilkerson 1999 Anopheles gabaldoni Vargas 1941 Anopheles guarao Anduze & Capdevielle 1949 Anopheles intermedius* Peryassu 1908 Anopheles maculipes Theobald 1903 Anopheles malefactor Dyar & Knab 1907 Anopheles mattogrossensis Lutz & Neiva 1911 Anopheles mediopunctatus Lutz 1903 Anopheles minor Da Costa Lima 1929 Anopheles neomaculipalpis Curry 1931 Anopheles peryassui Dyar & Knab 1908 Anopheles pseudomaculipes Peryassu 1908 Anopheles punctimacula Dyar & Knab 1906 Anopheles rachoui Galvao 1952 Anopheles shannoni Davis 1931 Anopheles veruslanei Vargas 1979 Anopheles vestitipennis Dyar & Knab 1906 Series Christya (Christophers 1924) Anopheles implexus Theobald 1903 Anopheles okuensis Brunhes, le Goff & Geoffroy 1997 Series Myzorhynchus (Edwards 1932) Anopheles obscurus Grunberg 1905 Anopheles bancroftii Giles 1902 Anopheles barbirostris* Van der Wulp 1884 Anopheles pollicaris Reid 1962 Group Albotaeniatus (Reid & Knight 1961) Anopheles albotaeniatus Theobald 1903 Anopheles balerensis Mendoza 1947 Anopheles ejercitoi Mendoza 1947 Anopheles montanus Stanton & Hacker 1917 Anopheles saperoi Bohart & Ingram 1946 subspecies ohamai Ohama subspecies saperoi Bohart & Ingram Group Bancroftii (Reid & Knight 1961) Anopheles pseudobarbirostris Ludlow 1935 Anopheles bancroftii Giles 1902 subspecies bancroftii Giles subspecies barbiventris Brug Group Barbirostris (Reid & Knight 1961) Anopheles freyi Meng 1957 Anopheles koreicus Yamada & Watanabe 1918 Subgroup Barbirostris (Reid 1968) Anopheles barbirostris van der Wulp 1884 Anopheles campestris Reid 1962 Anopheles donaldi Reid 1962 Anopheles franciscoi Reid 1962 Anopheles hodgkini Reid 1962 Anopheles pollicaris Reid 1962 Subgroup Vanus (Reid 1968) Anopheles ahomi Chowdhury 1929 Anopheles barbumbrosus Strickland & Chowdhury 1927 Anopheles manalangi Mendoza 1940 Anopheles reidi Harrison 1973 Anopheles vanus Walker 1859 Group Coustani (Reid & Knight 1961) Anopheles caliginosus De Meillon 1943 Anopheles coustani Laveran 1900
Anopheles crypticus Coetzee 1994 Anopheles fuscicolor Van Someren 1947 Anopheles namibiensis Coetzee 1984 Anopheles paludis Theobald 1900 Anopheles symesi Edwards 1928 Anopheles tenebrosus Donitz 1902 Anopheles ziemanni Grunberg 1902 Group Hyrcanus (Reid 1953) Anopheles anthropophagus Xu and Feng 1975 Anopheles argyropus Swellengrebel 1914 Anopheles belenrae Rueda 2005 Anopheles changfus Ma 1981 Anopheles chodukini Martini 1929 Anopheles dazhaius Ma 1981 Anopheles engarensis Kanda & Oguma 1978 Anopheles hailarensis Xu JinJiang & Luo XinFu 1998 Anopheles heiheensis Ma 1981 Anopheles hyrcanus* Pallas 1771 Anopheles junlianensis Lei 1996 Anopheles kiangsuensis Xu and Feng 1975 Anopheles kleini Rueda 2005 Anopheles kummingensis Dong & Wang 1985 Anopheles kweiyangensis Yao & Wu 1944 Anopheles liangshanensis Kang Tan Cao Cheng Yang & Huang 1984 Anopheles nimpe Nguyen, Tran & Harbach Anopheles pseudopictus Graham 1899 Anopheles pullus Yamada 1937 Anopheles sinensis* Wiedemann 1828 Anopheles sineroides Yamada 1924 Anopheles xiaokuanus Ma 1981 Anopheles xui Dong, Zhou, Dong & Mao 2007 Anopheles yatsushiroensis Miyazaki 1951 Subgroup Lesteri (Harrison 1972) Anopheles crawfordi Reid 1953 Anopheles kiangsuensis Xu & Feng 1975 Anopheles lesteri de Meillon 1931 Anopheles paraliae Sandosham 1959 Anopheles peditaeniatus Leicester 1908 Anopheles vietnamensis Manh Hinh & Vien 1993 Subgroup Nigerrimus (Harrison 1972) Anopheles nigerrimus* Giles 1900 Anopheles nitidus Harrison, Scanlon & Reid 1973 Anopheles pseudosinensis Baisas 1935 Anopheles pursati Laveran 1902 Group Umbrosus (Reid 1950) Anopheles brevipalpis Roper 1914 Anopheles brevirostris Reid 1950 Anopheles hunteri Strickland 1916 Anopheles samarensis Rozeboom 1951 Anopheles similissimus Strickland & Chowdhury 1927 Subgroup Baezai (Rattanarithikul et al. 2004) Anopheles baezai Gater 1934 Subgroup Letifer (Reid 1968) Anopheles collessi Reid 1963 Anopheles letifer* Sandosham 1944 Anopheles roperi Reid 1950 Anopheles whartoni Reid 1963 Subgroup Separatus (Rattanarithikul et al. 2004) Anopheles separatus Leicester 1908 Subgroup Umbrosus (Rattanarithikul et al. 
2004) Anopheles umbrosus Theobald 1903 Subgenus Baimaia Anopheles kyondawensis Abraham 1947 Subgenus Cellia Anopheles rageaui Mattingly and Adam Series Cellia (Christophers 1924) Anopheles argenteolobatus Gough 1910 Anopheles brumpti Hamon & Rickenbach 1955 Anopheles carnevalei Brunhes le Goff & Geoffroy 1999 Anopheles cristipalpis Service 1977 Anopheles murphyi Gillies 1968 Anopheles pharoensis Theobald 1901 Anopheles swahilicus Gillies 1964 Group Squamosus (Grjebine 1966) Anopheles cydippis de Meillon 1931 Anopheles squamosus Theobald 1901 Series Myzomyia Anopheles apoci Marsh 1933 Anopheles azaniae Bailly-Choumara 1960 Anopheles barberellus Evans 1932 Anopheles brunnipes Theobald 1910 Anopheles domicola Edwards 1916 Anopheles dthali Patton 1905 Anopheles erythraeus Corradetti 1939 Anopheles ethiopicus Gillies & Coetzee 1987 Anopheles flavicosta Edwards 1911 Anopheles fontinalis Gillies 1968 Anopheles majidi Young & Majid 1928 Anopheles moucheti* Evans 1925 subspecies bervoetsi D'Haenans 1961 subspecies moucheti Evans 1925 subspecies nigeriensis Anopheles schwetzi Evans 1934 Anopheles tchekedii de Meillon & Leeson 1940 Anopheles walravensi Edwards 1930 Group Demeilloni (Gillies & De Meillon 1962) Anopheles carteri Evans & de Meillon 1933 Anopheles demeilloni Evans 1933 Anopheles freetownensis Evans 1925 Anopheles garnhami Edwards 1930 Anopheles keniensis Evans 1931 Anopheles lloreti Gil Collado 1936 Anopheles sergentii* Theobald 1907 subspecies macmahoni Evans 1936 subspecies sergentii Theobald 1907 Group Funestus (Garros et al. 2004) Anopheles jeyporiensis James 1902 Subgroup Aconitus (Chen et al. 2003) Anopheles aconitus Dönitz 1902 Anopheles filipinae Manalang 1930 Anopheles mangyanus Banks 1906 Anopheles pampanai Buttiker & Beales 1959 Anopheles varuna Iyengar 1924 Subgroup Culicifacies (Garros et al. 2004) Anopheles culicifacies* Giles 1901 Subgroup Funestus (Garros et al. 2004) Anopheles aruni Sobti 1968 Anopheles confusus Evans & Leeson 1935 Anopheles funestus* Giles 1900 Anopheles funestus-like* Spillings et al. 2009 Anopheles longipalpis Type C Koekemoer et al. 2009 Anopheles parensis Gillies 1962 Anopheles vaneedeni Gillies & Coetzee 1987 Subgroup Minimus (Chen et al. 2003) Anopheles flavirostris* Ludlow 1914 Anopheles leesoni Evans 1931 Anopheles longipalpis Type A Koekemoer et al. 2009 Complex Fluviatilis (Salara et al. 1993) Anopheles fluviatilis* (species S, T, U, V) James 1902 Complex Minimus (Green et al.
1990) Anopheles harrisoni Harbach & Manguin 2007 Anopheles minimus* Theobald 1901 Subgroup Rivulorum (Garros et al. 2004) Anopheles brucei Service 1960 Anopheles fuscivenosus Leeson 1930 Anopheles rivulorum* Leeson 1935 Group Marshallii Anopheles austenii Theobald 1905 Anopheles berghei Vincke & Leleup 1949 Anopheles brohieri Edwards 1929 Anopheles gibbinsi Evans 1935 Anopheles hancocki Edwards 1929 Anopheles hargreavesi Evans 1927 Anopheles harperi Evans 1936 Anopheles mortiauxi Edwards 1938 Anopheles mousinhoi de Meillon & Pereira 1940 Anopheles njombiensis Peters 1955 Anopheles seydeli Edwards 1929 Complex Marshalli (Gillies & Coetzee 1987) Anopheles hughi Lambert & Coetzee 1982 Anopheles kosiensis Coetzee, Segerman & Hunt 1987 Anopheles letabensis Lambert & Coetzee 1982 Anopheles marshallii Theobald 1903 Group Wellcomei Anopheles distinctus Newstead & Carter 1911 Anopheles erepens Gillies 1958 Anopheles theileri Edwards 1912 Anopheles wellcomei Theobald 1904 subspecies ugandae Evans 1934 subspecies ungujae White 1975 subspecies wellcomei Theobald 1904 Series Neocellia (Christophers 1924) Anopheles ainshamsi Gad, Harbach & Harrison 2006 Anopheles dancalicus Corradetti 1939 Anopheles hervyi Brunhes, le Goff & Geoffroy 1999 Anopheles jamesii Theobald 1901 Anopheles karwari* James 1903 Anopheles maculipalpis Giles 1902 Anopheles moghulensis Christophers 1924 Anopheles paltrinierii Shidrawi & Gillies 1988 Anopheles pattoni Christophers 1926 Anopheles pretoriensis Theobald 1903 Anopheles pulcherrimus* Theobald 1902 Anopheles rufipes Gough 1910 subspecies broussesi Edwards 1929 subspecies rufipes Gough 1910 Anopheles salbaii Maffi & Coluzzi 1958 Anopheles splendidus Koidzumi 1920 Anopheles theobaldi Giles 1901 Complex Stephensi Anopheles stephensi* Liston 1901 Complex Superpictus Anopheles superpictus* Grassi 1899 Group Annularis (Reid 1968) Anopheles pallidus Theobald 1901 Anopheles philippinensis* Ludlow 1902 Anopheles schueffneri Stanton 1915 Complex Annularis (Reid 1968) Anopheles annularis* van der Wulp 1884 Complex Nivipes (Green et al. 1985) Anopheles nivipes Theobald 1903 Group Jamesii (Rattanarithikul et al.
2004) Anopheles jamesii Theobald 1901 Anopheles pseudojamesi Strickland & Chowdhury 1927 Anopheles splendidus Koidzumi 1920 Group Maculatus (Rattanarithikul & Green 1987) Anopheles dispar Rattanarithikul & Harbach 1991 Anopheles greeni Rattanarithikul & Harbach 1991 Anopheles pseudowillmori Theobald 1910 Anopheles rampae Anopheles willmori James 1903 Subgroup Maculatus (Rattanarithikul et al. 2004) Anopheles dravidicus Christophers 1924 Anopheles maculatus* Subgroup Sawadwongporni (Rattanarithikul et al. 2004) Anopheles notanandai Rattanarithikul & Green 1987 Anopheles sawadwongporni Rattanarithikul & Green 1987 Series Neomyzomyia (Christophers 1924) Anopheles amictus Edwards 1921 Anopheles annulatus Haga 1930 Anopheles aurirostris Watson 1910 Anopheles dualaensis Brunhes le Goff & Geoffroy 1999 Anopheles hilli Woodhill & Lee 1944 Anopheles incognitus Brug 1931 Anopheles kochi Dönitz 1901 Anopheles kokhani Vythilingam, Jeffery & Harbach 2007 Anopheles kolambuganensis Baisas 1932 Anopheles longirostris Brug 1928 Anopheles mascarensis de Meillon 1947 Anopheles meraukensis Venhuis 1932 Anopheles novaguinensis Venhuis 1933 Anopheles saungi Colless 1955 Anopheles stookesi Colless 1955 Anopheles watsonii Leicester 1908 Complex Annulipes Anopheles annulipes Walker 1856 Complex Lungae Anopheles lungae Belkin & Schlosser 1944 Anopheles nataliae Belkin 1945 Anopheles solomonis Belkin, Knight & Rozeboom 1945 Complex Punctulatus Anopheles clowi Rozeboom & Knight 1946 Anopheles farauti* Laveran 1902 Anopheles hinesorum Schmidt 2001 Anopheles irenicus Schmidt 2003 Anopheles koliensis Owen 1945 Anopheles punctulatus Dönitz 1901 Hosts: Bos taurus, Canis familiaris, Equus, Felis, Gallus Anopheles torresiensis Schmidt 2001 Group Ardensis Anopheles ardensis Theobald 1905 Anopheles buxtoni Service 1958 Anopheles cinctus Newstead & Carter 1910 Anopheles deemingi Service 1970 Anopheles eouzani Brunhes le Goff & Bousses 2003 Anopheles kingi Christophers 1923 Anopheles machardyi Edwards 1930 Anopheles maliensis Bailly-Choumara & Adam 1959 Anopheles millecampsi Lips 1960 Anopheles multicinctus Edwards 1930 Anopheles natalensis Hill & Haydon 1907 Anopheles vernus Gillies 1968 Anopheles vinckei de Meillon 1942 Complex Nili Anopheles carnevalei Brunhes, le Goff & Geoffroy 1999 Anopheles nili* Theobald 1904 Anopheles ovengensis Awono-Ambene Simard Antonio-Nkonkjio & Fontenille 2004 Anopheles somalicus Rivola & Holstein 1957 Group Kochi (Rattanarithikul et al. 2004) Anopheles kochi Dönitz 1901 Group Leucosphyrus Anopheles baisasi Colless 1957 Anopheles cristatus King & Baisas Subgroup Elegans Anopheles elegans James 1903 Subgroup Hackeri Anopheles hackeri Edwards 1921 Anopheles mirans Sallum & Peyton 2005 Anopheles pujutensis Colless 1948 Anopheles recens Sallum & Peyton 2005 Anopheles sulawesi* Waktoedi 1954 Subgroup Leucosphyrus Anopheles baimaii* Sallum & Peyton 2005 Anopheles cracens Sallum & Peyton 2005 Anopheles scanloni Sallum & Peyton 2005 Complex Dirus Anopheles dirus* Peyton & Harrison 1979 Anopheles nemophilous Peyton & Ramalingam 1988 Anopheles takasagoensis Morishita 1946 Complex Leucosphyrus (Peyton 1990) Anopheles balabacensis* Baisas 1936 Anopheles introlatus Colless 1957 Anopheles latens* Sallum & Peyton 2005 Anopheles leucosphyrus* Dönitz 1901 Subgroup Riparis (Peyton 1990) Anopheles cristatus King & Baisas Anopheles macarthuri Colless 1956 Anopheles riparis King & Baisas 1936 Group Tessellatus (Rattanarithikul et al. 2004) Anopheles tessellatus Theobald subspecies A. t. kalawara Stoker & Waktoedi subspecies A. t. orientalis Swellengrebel & Swellengrebel de Graaf subspecies A. t.
tessellatus Theobald Series Paramyzomyia (Christophers & Barraud 1931) Group Cinereus Anopheles azevedoi Ribeiro 1969 Anopheles cinereus Theobald 1901 subspecies cinereus Theobald 1901 subspecies hispaniola Theobald 1903 Complex Turkhudi (Liston) Anopheles turkhudi Liston 1901 subspecies telamali Saliternik & Theodor 1942 subspecies turkhudi Liston 1901 Group Listeri Anopheles listeri de Meillon 1931 Anopheles multicolor* Cambouliu 1902 Anopheles seretsei Abdulla-Chan Coetzee & Hunt 1998 Series Pyretophorus (Blanchard 1902) Anopheles christyi Newstead & Carter 1911 Anopheles daudi Coluzzi 1958 Anopheles indefinitus Ludlow 1904 Anopheles limosus King 1932 Anopheles litoralis King 1932 Anopheles ludlowae Theobald 1903 subspecies ludlowae Theobald 1903 subspecies torakala Stoker & Waktoedi 1949 Anopheles parangensis Ludlow 1914 Anopheles vagus* Dönitz 1902 Complex Gambiae (White 1985) Anopheles amharicus Hunt, Wilkerson & Coetzee 2013 Anopheles arabiensis* Patton 1905 Anopheles bwambae White 1985 Anopheles coluzzii* Coetzee & Wilkerson 2013 Anopheles comorensis Brunhes le Goff & Geoffroy 1997 Anopheles fontenillei Barrón et al. 2019 Anopheles gambiae* Giles 1902 Anopheles melas* Theobald 1903 Anopheles merus Dönitz 1902 Anopheles quadriannulatus Theobald 1911 Complex Subpictus (Sugana et al. 1994) Anopheles subpictus* Grassi 1899 Complex Sundaicus (Sukowati 1999) Anopheles epiroticus Linton & Harbach 2005 Anopheles sundaicus* Rodenwaldt 1925 Subgenus Kerteszia Anopheles auyantepuiensis Harbach & Navarro 1996 Anopheles bambusicolus Komp 1937 Anopheles bellator* Dyar & Knab 1906 Anopheles boliviensis Theobald 1905 Anopheles cruzii* Dyar & Knab 1908 Anopheles gonzalezrinconesi Cova Garcia, Pulido & de Ugueto 1977 Anopheles homunculus* Komp 1937 Anopheles laneanus Corrêa & Cerqueira 1944 Anopheles lepidotus Zavortink 1973 Anopheles neivai Howard, Dyar & Knab 1913 Anopheles pholidotus Zavortink 1973 Anopheles rollai Cova Garcia, Pulido & de Ugueto 1977 Note: Anopheles cruzii is known to be a species complex, but the number of species in this complex has yet to be finalised. Subgenus Lophopodomyia Anopheles gilesi Peryassu 1928 Anopheles gomezdelatorrei Levi-Castillo 1955 Anopheles oiketorakras Osorno-Mesa 1947 Anopheles pseudotibiamaculata Galvao & Barretto 1941 Anopheles squamifemur Antunes 1937 Anopheles vargasi Gabaldon Cova Garcia & Lopez 1941 Subgenus Nyssorhynchus Anopheles dominicanus Zavortink & Poinar 2000 Section Albimanus Anopheles noroestensis Galvao and Lane 1937 Series Albimanus (Faran 1980) Anopheles albimanus* Wiedemann 1820 Series Oswaldoi (Faran 1980) Group Oswaldoi (Faran 1980) Subgroup Oswaldoi (Faran 1980) Anopheles anomalphyllus Komp Anopheles aquasalis* Curry 1932 Anopheles dunhamii Causey 1945 Anopheles evansae Brethes 1926 Anopheles galvaoi Causey, Deane and Deane 1943 Anopheles ininii Senevet & Abonnenc 1938 Anopheles konderi Galvao & Damasceno 1942 Anopheles oswaldoi Peryassú 1922 Anopheles rangeli Gabaldon, Cova-Garcia & Lopez 1941 Anopheles sanctielii Senevet & Abonnenc 1938 Anopheles trinkae Faran 1980 Complex Nuneztovari (Conn et al.
1993) Anopheles geoeldii Rozeboom and Gabaldón 1941 Anopheles nuneztovari* Gabaldón 1940 Subgroup Strodei (Faran 1980) Anopheles alberto Anopheles arthuri* Anopheles benarrochi Gabaldon, Cova-Garcia & Lopez Anopheles rondoni Neiva & Pinto 1922 Anopheles strodei Root 1926 Group Triannulatus Anopheles halophylus do Nascimento & de Oliveira 2002 Anopheles triannulatus Neiva & Pinto 1922 Section Argyritarsis (Levi Castillo 1949) Series Albitarsis Anopheles rooti Brethes 1926 Group Albitarsis Anopheles albitarsis Anopheles deaneorum Rosa-Freitas 1989 Anopheles janconnae Wilkerson & Sallum 2009 Anopheles oryzalimnetes Wilkerson & Motoki 2009 Anopheles marajoara* Galvao & Damasceno 1942 Group Braziliensis Anopheles braziliensis Chagas 1907 Series Argyritarsis Group Argyritarsis Anopheles argyritarsis Robineau-Desvoidy 1827 Anopheles sawyeri Causey, Deane, Deane & Sampaio 1943 Group Darlingi Anopheles darlingi* Root Group Lanei Anopheles lanei Galvao & Amaral 1938 Group Pictipennis Anopheles pictipennis Philippi 1865 Section Myzorhynchella (Peyton et al. 1992) Anopheles antunesi Galvao & Amaral 1940 Anopheles lutzii Cruz 1901 Anopheles nigritarsis Chagas 1907 Anopheles parvus Chagas 1907 Subgenus Stethomyia Anopheles acanthotorynus Komp 1937 Anopheles canorii Floch & Abonnenc 1945 Anopheles kompi Edwards 1930 Anopheles nimbus Anopheles thomasi Shannon 1933 Notes Anopheles anthropophagus Xu and Feng is considered to be a junior synonym of Anopheles lesteri de Meillon 1931. Anopheles bonneorum Fonseca & Ramos is considered to be a synonym of Anopheles costai. Anopheles lewisi Ludlow 1920 is a synonym of Anopheles thomasi Shannon 1933. Anopheles lineata Lutz is a synonym of Anopheles nimbus Theobald. Anopheles mesopotamiae is considered to be a synonym of Anopheles hyrcanus. Anopheles rossii Giles 1899 was originally described as Anopheles subpictus Grassi 1899. Bironella derooki is a synonym of Anopheles soesiloi. The following are currently regarded as nomina nuda: Anopheles (Anopheles) solomonensis Cumpston 1924 Anopheles (Cellia) melanotarsis Woodhill & Lee A subgroup of Anopheles gambiae sensu stricto has been reported and given the name Goundry. This subgroup has not yet been elevated to species status. References External links Mosquito Taxonomic Inventory page on Anopheles The Walter Reed Biosystematics Unit Taxonomy Diptera taxonomy
46472609
https://en.wikipedia.org/wiki/N.I.%20Lobachevsky%20Institute%20of%20Mathematics%20and%20Mechanics
N.I. Lobachevsky Institute of Mathematics and Mechanics
The N.I. Lobachevsky Institute of Mathematics and Mechanics is one of the institutes of Kazan Federal University (KFU). It was established in 2011 on the premises of the Faculty of Mechanics and Mathematics, incorporating the N.G. Chebotarev Research Institute for Mathematics and Mechanics and part of TSUHE's Faculty of Mathematics. The Institute offers Bachelor's, Master's, postgraduate and doctoral programs. History In 1814 the classical university was fully opened, with the division of Physics and Mathematics as one of its parts. In 1961 the Faculty of Mechanics and Mathematics became independent from the Faculty of Physics and Mathematics. In 1978 the Faculty of Computational Mathematics and Cybernetics was segregated from the Faculty of Mechanics and Mathematics. In 2011 the Nikolai Lobachevsky Institute of Mathematics and Mechanics was established in KFU by merging the Faculty of Mechanics and Mathematics of Kazan University and the Nikolai Chebotarev Research Institute of Mathematics and Mechanics. Johann Christian Martin Bartels is regarded as the founder of the Kazan mathematical school. He was the teacher of the outstanding scientist N. I. Lobachevsky. A. F. Popov, F. M. Suvorov, A. V. Vasilyev, D. M. Sintsov and A. P. Kotelnikov worked at Kazan University in the pre-revolutionary period. Corresponding members of the USSR Academy of Sciences N. G. Chebotaryev and N. G. Chetaev, and academicians of the Ukrainian and Belorussian SSRs' Academies of Science A. Z. Petrov and F. D. Gachov, worked there in the post-revolutionary period. Structure Division of Mathematics: Department of General Mathematics Department of Algebra and Mathematical Logic Department of Geometry Department of Mathematical Analysis Department of Differential Equations Department of Theory of Functions and Approximations Division of Mechanics: Department of Theoretical Mechanics Department of Aerohydromechanics Division of Pedagogical Education: Department of Higher Mathematics and Mathematical Modeling Department of Theories and Technologies of Mathematics and Information Technology Teaching Nikolai Chebotarev Research Centre Education The Institute provides the following Bachelor programs: Mathematics, Mathematics and Computer Science, Mechanics and Mathematical Modeling, as well as three majors of Pedagogical Education. Some majors are offered as Master programs: Algebra, Geometry and Topology, Complex Analysis, Mechanics of Deformable Solids, Fluid Mechanics, Theory of Functions and Information Technology, PDEs, Functional Analysis, Pedagogical Education: Information Technology in Physical and Mathematical Education; Mathematics, Informatics and Information Technologies in Education. The Institute offers doctoral programs in all specialties in the sphere of mathematical studies on the Russian Higher Attestation Committee's list. External links Kazan (Volga region) Federal University Official site Museum of History of Kazan University Kazan Federal University Educational institutions established in 1804 Universities in Volga Region Universities in Kazan History of Tatarstan 1804 establishments in the Russian Empire
926965
https://en.wikipedia.org/wiki/Scareware
Scareware
Scareware is a form of malware which uses social engineering to cause shock, anxiety, or the perception of a threat in order to manipulate users into buying unwanted software. Scareware is part of a class of malicious software that includes rogue security software, ransomware and other scam software that tricks users into believing their computer is infected with a virus, then suggests that they download and pay for fake antivirus software to remove it. Usually the virus is fictional and the software is non-functional or malware itself. According to the Anti-Phishing Working Group, the number of scareware packages in circulation rose from 2,850 to 9,287 in the second half of 2008. In the first half of 2009, the APWG identified a 585% increase in scareware programs. The "scareware" label can also apply to any application or virus which pranks users with intent to cause anxiety or panic. Scam scareware Internet security writers use the term "scareware" to describe software products that produce frivolous and alarming warnings or threat notices, most typically for fictitious or useless commercial firewall and registry cleaner software. This class of program tries to increase its perceived value by bombarding the user with constant warning messages that do not increase its effectiveness in any way. Software is packaged with a look and feel that mimics legitimate security software in order to deceive consumers. Some websites display pop-up advertisement windows or banners with text such as: "Your computer may be infected with harmful spyware programs. Immediate removal may be required. To scan, click 'Yes' below." These websites can go as far as saying that a user's job, career, or marriage would be at risk. Products using advertisements such as these are often considered scareware. Serious scareware applications qualify as rogue software. Some scareware is not affiliated with any other installed programs. A user can encounter a pop-up on a website indicating that their PC is infected. In some scenarios, it is possible to become infected with scareware even if the user attempts to cancel the notification. These popups are specially designed to look like they come from the user's operating system when they are actually a webpage. A 2010 study by Google found 11,000 domains hosting fake anti-virus software, accounting for 50% of all malware delivered via internet advertising. Starting on March 29, 2011, more than 1.5 million websites around the world were infected by the LizaMoon SQL injection attack, which spread scareware. Research by Google discovered that scareware was using some of its servers to check for internet connectivity. The data suggested that up to a million machines were infected with scareware. The company placed a warning in the search results of users whose computers appeared to be infected. Another example of scareware is Smart Fortress. This site scares people into thinking they have many viruses on their computer and asks them to buy a professional removal service. Spyware Some forms of spyware also qualify as scareware because they change the user's desktop background, install icons in the computer's notification area (under Microsoft Windows), and claim that some kind of spyware has infected the user's computer and that the scareware application will help to remove the infection. In some cases, scareware trojans have replaced the desktop of the victim with large, yellow text reading "Warning! You have spyware!"
or a box containing similar text, and have even forced the screensaver to change to "bugs" crawling across the screen. Winwebsec is the term usually used to refer to malware that attacks users of the Windows operating system and produces fake claims similar to those of genuine anti-malware software. SpySheriff exemplifies spyware and scareware: it purports to remove spyware, but is actually a piece of spyware itself, often accompanying SmitFraud infections. Other antispyware scareware may be promoted using a phishing scam. Uninstallation of security software Another approach is to trick users into uninstalling legitimate antivirus software, such as Microsoft Security Essentials, or disabling their firewall. Since antivirus programs typically include protection against being tampered with or disabled by other software, scareware may use social engineering to convince the user to disable programs which would otherwise prevent the malware from working. Legal action In 2005, Microsoft and Washington state successfully sued Secure Computer (makers of Spyware Cleaner) for $1 million over charges of using scareware pop-ups. Washington's attorney general has also brought lawsuits against Securelink Networks, SoftwareOnline.com, High Falls Media, and the makers of Quick Shield. In October 2008, Microsoft and the Washington attorney general filed a lawsuit against two Texas firms, Branch Software and Alpha Red, producers of the Registry Cleaner XP scareware. The lawsuit alleges that the companies sent incessant pop-ups resembling system warnings to consumers' personal computers stating "CRITICAL ERROR MESSAGE! - REGISTRY DAMAGED AND CORRUPTED", before instructing users to visit a web site to download Registry Cleaner XP at a cost of $39.95. On December 2, 2008, the U.S. Federal Trade Commission ("FTC") filed a Complaint in federal court against Innovative Marketing, Inc., ByteHosting Internet Services, LLC, as well as individuals Sam Jain, Daniel Sundin, James Reno, Marc D'Souza, and Kristy Ross. The Complaint also listed Maurice D'Souza as a Relief Defendant, alleging that he held proceeds of wrongful conduct but not accusing him of violating any law. The FTC alleged that the other Defendants violated the FTC Act by deceptively marketing software, including WinFixer, WinAntivirus, DriveCleaner, ErrorSafe, and XP Antivirus. According to the complaint, the Defendants falsely represented that scans of a consumer's computer showed that it had been compromised or infected and then offered to sell software to fix the alleged problems. Prank software Another type of scareware involves software designed to literally scare the user through the use of unanticipated shocking images, sounds or video. An early program of this type is NightMare, a program distributed on the Fish Disks for the Amiga computer (Fish #448) in 1991. When NightMare executes, it lies dormant for an extended and random period of time, finally changing the entire screen of the computer to an image of a skull while playing a horrifying shriek on the audio channels. Anxiety-based scareware puts users in situations where there are no positive outcomes. For example, a small program can present a dialog box saying "Erase everything on hard drive?" with two buttons, both labeled "OK". Regardless of which button is chosen, nothing is destroyed. This tactic was used in an advertising campaign by Sir-Tech in 1997 to advertise Virus: The Game. When the file is run, a full screen representation of the desktop appears.
The software then begins simulating deletion of the Windows folder. When this process is complete, a message is slowly typed on screen saying "Thank God this is only a game." A screen with purchase information then appears, after which the program returns to the desktop. No damage is done to the computer during the advertisement. See also Ransomware Rogue security software Winwebsec Notes Further reading External links The Case of the Unusable System Yes, that PC cleanup app you saw on TV at 3 a.m. is a waste Types of malware False advertising Cybercrime Social engineering (computer security)
45523767
https://en.wikipedia.org/wiki/Hyperjacking
Hyperjacking
Hyperjacking is an attack in which a hacker takes malicious control over the hypervisor that creates the virtual environment within a virtual machine (VM) host. The point of the attack is to target the operating system that is below that of the virtual machines so that the attacker's program can run and the applications on the VMs above it will be completely oblivious to its presence. Overview Hyperjacking involves installing a malicious, fake hypervisor that can manage the entire server system. Regular security measures are ineffective because the operating system will not be aware that the machine has been compromised. Because the hypervisor operates in stealth mode and runs beneath the machine, it is more difficult to detect and more likely to gain access to computer servers, where it can affect the operation of the entire institution or company. If the hacker gains access to the hypervisor, everything that is connected to that server can be manipulated. The hypervisor represents a single point of failure when it comes to the security and protection of sensitive information. For a hyperjacking attack to succeed, an attacker would have to take control of the hypervisor by one of the following methods: Injecting a rogue hypervisor beneath the original hypervisor Directly obtaining control of the original hypervisor Running a rogue hypervisor on top of an existing hypervisor Mitigation techniques Some basic design features in a virtual environment can help mitigate the risks of hyperjacking: Security management of the hypervisor must be kept separate from regular traffic; this is more a network-related measure than one specific to the hypervisor itself. Guest operating systems should never have access to the hypervisor. Management tools should not be installed or used from a guest OS. The hypervisor should be patched regularly. Known attacks As of early 2015, there had been no report of an actual demonstration of a successful hyperjacking beyond "proof of concept" testing. The VENOM vulnerability was revealed in May 2015 and had the potential to affect many datacenters. Hyperjacking attacks are rare due to the difficulty of directly accessing hypervisors; however, hyperjacking is considered a real-world threat. See also Virtual machine escape References Cloud computing Computer security exploits
5698
https://en.wikipedia.org/wiki/Charles%20Babbage
Charles Babbage
Charles Babbage (26 December 1791 – 18 October 1871) was an English polymath. A mathematician, philosopher, inventor and mechanical engineer, Babbage originated the concept of a digital programmable computer. Babbage is considered by some to be the "father of the computer". Babbage is credited with inventing the first mechanical computer, the Difference Engine, which eventually led to more complex electronic designs, though all the essential ideas of modern computers are to be found in Babbage's Analytical Engine, programmed using a principle openly borrowed from the Jacquard loom. Babbage had a broad range of interests in addition to his work on computers, including the manufacturing topics covered in his book On the Economy of Machinery and Manufactures. His varied work in other fields has led him to be described as "pre-eminent" among the many polymaths of his century. Babbage, who died before the complete successful engineering of many of his designs, including his Difference Engine and Analytical Engine, remained a prominent figure in the early conception of computing. Parts of Babbage's incomplete mechanisms are on display in the Science Museum in London. In 1991, a functioning difference engine was constructed from Babbage's original plans. The engine was built to tolerances achievable in the 19th century, and its success indicated that Babbage's machine would have worked. Early life Babbage's birthplace is disputed, but according to the Oxford Dictionary of National Biography he was most likely born at 44 Crosby Row, Walworth Road, London, England. A blue plaque on the junction of Larcom Street and Walworth Road commemorates the event. His date of birth was given in his obituary in The Times as 26 December 1792; but then a nephew wrote to say that Babbage was born one year earlier, in 1791. The parish register of St. Mary's, Newington, London, shows that Babbage was baptised on 6 January 1792, supporting a birth year of 1791. Babbage was one of four children of Benjamin Babbage and Betsy Plumleigh Teape. His father was a banking partner of William Praed in founding Praed's & Co. of Fleet Street, London, in 1801. In 1808, the Babbage family moved into the old Rowdens house in East Teignmouth. Around the age of eight, Babbage was sent to a country school in Alphington near Exeter to recover from a life-threatening fever. For a short time, he attended King Edward VI Grammar School in Totnes, South Devon, but his health forced him back to private tutors for a time. Babbage then joined the 30-student Holmwood Academy, in Baker Street, Enfield, Middlesex, under the Reverend Stephen Freeman. The academy had a library that prompted Babbage's love of mathematics. He studied with two more private tutors after leaving the academy. The first was a clergyman near Cambridge; through him Babbage encountered Charles Simeon and his evangelical followers, but the tuition was not what he needed. He was brought home, to study at the Totnes school: this was at age 16 or 17. The second was an Oxford tutor, under whom Babbage reached a level in Classics sufficient to be accepted by the University of Cambridge. At the University of Cambridge Babbage arrived at Trinity College, Cambridge, in October 1810. He was already self-taught in some parts of contemporary mathematics; he had read in Robert Woodhouse, Joseph Louis Lagrange, and Marie Agnesi. As a result, he was disappointed in the standard mathematical instruction available at the university.
Babbage, John Herschel, George Peacock, and several other friends formed the Analytical Society in 1812; they were also close to Edward Ryan. As a student, Babbage was also a member of other societies such as The Ghost Club, concerned with investigating supernatural phenomena, and the Extractors Club, dedicated to liberating its members from the madhouse, should any be committed to one. In 1812, Babbage transferred to Peterhouse, Cambridge. He was the top mathematician there, but did not graduate with honours. He instead received a degree without examination in 1814. He had defended a thesis that was considered blasphemous in the preliminary public disputation, but it is not known whether this fact is related to his not sitting the examination. After Cambridge Considering his reputation, Babbage quickly made progress. He lectured to the Royal Institution on astronomy in 1815, and was elected a Fellow of the Royal Society in 1816. After graduation, on the other hand, he applied for positions unsuccessfully, and had little in the way of a career. In 1816 he was a candidate for a teaching job at Haileybury College; he had recommendations from James Ivory and John Playfair, but lost out to Henry Walter. In 1819, Babbage and Herschel visited Paris and the Society of Arcueil, meeting leading French mathematicians and physicists. That year Babbage applied to be professor at the University of Edinburgh, with the recommendation of Pierre Simon Laplace; the post went to William Wallace. With Herschel, Babbage worked on the electrodynamics of Arago's rotations, publishing in 1825. Their explanations were only transitional, being picked up and broadened by Michael Faraday. The phenomena are now part of the theory of eddy currents, and Babbage and Herschel missed some of the clues to unification of electromagnetic theory, staying close to Ampère's force law. Babbage purchased the actuarial tables of George Barrett, who died in 1821 leaving unpublished work, and surveyed the field in 1826 in Comparative View of the Various Institutions for the Assurance of Lives. This interest followed a project to set up an insurance company, prompted by Francis Baily and mooted in 1824, but not carried out. Babbage did calculate actuarial tables for that scheme, using Equitable Society mortality data from 1762 onwards. During this whole period, Babbage depended awkwardly on his father's support, given his father's attitude to his early marriage, of 1814: he and Edward Ryan wedded the Whitmore sisters. He made a home in Marylebone in London and established a large family. On his father's death in 1827, Babbage inherited a large estate (valued at around £100,000), making him independently wealthy. After his wife's death in the same year he spent time travelling. In Italy he met Leopold II, Grand Duke of Tuscany, foreshadowing a later visit to Piedmont. In April 1828 he was in Rome, and relying on Herschel to manage the difference engine project, when he heard that he had become a professor at Cambridge, a position he had three times failed to obtain (in 1820, 1823 and 1826). Royal Astronomical Society Babbage was instrumental in founding the Royal Astronomical Society in 1820, initially known as the Astronomical Society of London. Its original aims were to reduce astronomical calculations to a more standard form, and to circulate data.
These directions were closely connected with Babbage's ideas on computation, and in 1824 he won its Gold Medal, cited "for his invention of an engine for calculating mathematical and astronomical tables". Babbage's motivation to overcome errors in tables by mechanisation had been a commonplace since Dionysius Lardner wrote about it in 1834 in the Edinburgh Review (under Babbage's guidance). The context of these developments is still debated. Babbage's own account of the origin of the difference engine begins with the Astronomical Society's wish to improve The Nautical Almanac. Babbage and Herschel were asked to oversee a trial project, to recalculate some part of those tables. With the results to hand, discrepancies were found. This was in 1821 or 1822, and was the occasion on which Babbage formulated his idea for mechanical computation. The issue of the Nautical Almanac is now described as a legacy of a polarisation in British science caused by attitudes to Sir Joseph Banks, who had died in 1820. With his friend Thomas Frederick Colby, Babbage studied the requirements to establish a modern postal system, concluding that there should be a uniform rate; this was put into effect with the introduction of the Uniform Fourpenny Post in 1839, supplanted by the Uniform Penny Post in 1840. Colby was another of the founding group of the Society. He was also in charge of the Survey of Ireland. Herschel and Babbage were present at a celebrated operation of that survey, the remeasuring of the Lough Foyle baseline. British Lagrangian School The Analytical Society had initially been no more than an undergraduate provocation. During this period it had some more substantial achievements. In 1816 Babbage, Herschel and Peacock published a translation from French of the lectures of Sylvestre Lacroix, which was then the state-of-the-art calculus textbook. Reference to Lagrange in calculus terms marks out the application of what are now called formal power series. British mathematicians had used them from about 1730 to 1760. As re-introduced, they were not simply applied as notations in differential calculus. They opened up the fields of functional equations (including the difference equations fundamental to the difference engine) and operator (D-module) methods for differential equations. The analogy of difference and differential equations was, notationally, the change of Δ to D, as a "finite" difference becomes "infinitesimal". These symbolic directions became popular, as operational calculus, and were pushed to the point of diminishing returns. The Cauchy concept of limit was kept at bay. Woodhouse had already founded this second "British Lagrangian School" with its treatment of Taylor series as formal. In this context function composition is complicated to express, because the chain rule is not simply applied to second and higher derivatives. This matter was known by 1803 to Woodhouse, who took from Louis François Antoine Arbogast what is now called Faà di Bruno's formula. In essence it was known to Abraham De Moivre (1697). Herschel found the method impressive, Babbage knew of it, and it was later noted by Ada Lovelace as compatible with the analytical engine. In the period to 1820 Babbage worked intensively on functional equations in general, and resisted both conventional finite differences and Arbogast's approach (in which Δ and D were related by the simple additive case of the exponential map). But via Herschel he was influenced by Arbogast's ideas in the matter of iteration, i.e.
composing a function with itself, possibly many times. Writing in a major paper on functional equations in the Philosophical Transactions (1815/6), Babbage said his starting point was work of Gaspard Monge. Academic From 1828 to 1839, Babbage was Lucasian Professor of Mathematics at Cambridge. Not a conventional resident don, and inattentive to his teaching responsibilities, he wrote three topical books during this period of his life. He was elected a Foreign Honorary Member of the American Academy of Arts and Sciences in 1832. Babbage was out of sympathy with colleagues: George Biddell Airy, his predecessor as Lucasian Professor of Mathematics at Trinity College, Cambridge, thought an issue should be made of his lack of interest in lecturing. Babbage planned to lecture in 1831 on political economy. Babbage's reforming direction looked to make university education more inclusive, with universities doing more for research, a broader syllabus and more interest in applications; but William Whewell found the programme unacceptable. A controversy Babbage had with Richard Jones lasted for six years. He never did give a lecture. It was during this period that Babbage tried to enter politics. Simon Schaffer writes that his views of the 1830s included disestablishment of the Church of England, a broader political franchise, and inclusion of manufacturers as stakeholders. He twice stood for Parliament as a candidate for the borough of Finsbury. In 1832 he came in third among five candidates, missing out by some 500 votes in the two-member constituency when two other reformist candidates, Thomas Wakley and Christopher Temple, split the vote. In his memoirs Babbage related how this election brought him the friendship of Samuel Rogers: his brother Henry Rogers wished to support Babbage again, but died within days. In 1834 Babbage finished last among four. In 1832, Babbage, Herschel and Ivory were appointed Knights of the Royal Guelphic Order; however, they were not subsequently made knights bachelor to entitle them to the prefix Sir, which often came with appointments to that foreign order (though Herschel was later created a baronet). "Declinarians", learned societies and the BAAS Babbage now emerged as a polemicist. One of his biographers notes that all his books contain a "campaigning element". His Reflections on the Decline of Science and some of its Causes (1830) stands out, however, for its sharp attacks. It aimed to improve British science, and more particularly to oust Davies Gilbert as President of the Royal Society, which Babbage wished to reform. It was written out of pique, when Babbage hoped to become the junior secretary of the Royal Society, as Herschel was the senior, but failed because of his antagonism to Humphry Davy. Michael Faraday had a reply written, by Gerrit Moll, as On the Alleged Decline of Science in England (1831). On the front of the Royal Society Babbage had no impact, with the bland election of the Duke of Sussex to succeed Gilbert the same year. As a broad manifesto, on the other hand, his Decline led promptly to the formation in 1831 of the British Association for the Advancement of Science (BAAS). The Mechanics' Magazine in 1831 identified as Declinarians the followers of Babbage. In an unsympathetic tone it pointed out David Brewster writing in the Quarterly Review as another leader, with the barb that both Babbage and Brewster had received public money.
In the debate of the period on statistics (qua data collection) and what is now statistical inference, the BAAS in its Statistical Section (which owed something also to Whewell) opted for data collection. This Section was the sixth, established in 1833 with Babbage as chairman and John Elliot Drinkwater as secretary. The foundation of the Statistical Society followed. Babbage was its public face, backed by Richard Jones and Robert Malthus. On the Economy of Machinery and Manufactures Babbage published On the Economy of Machinery and Manufactures (1832), on the organisation of industrial production. It was an influential early work of operational research. John Rennie the Younger in addressing the Institution of Civil Engineers on manufacturing in 1846 mentioned mostly surveys in encyclopaedias, and Babbage's book was first an article in the Encyclopædia Metropolitana, the form in which Rennie noted it, in the company of related works by John Farey Jr., Peter Barlow and Andrew Ure. From An essay on the general principles which regulate the application of machinery to manufactures and the mechanical arts (1827), which became the Encyclopædia Metropolitana article of 1829, Babbage developed the schematic classification of machines that, combined with discussion of factories, made up the first part of the book. The second part considered the "domestic and political economy" of manufactures. The book sold well, and quickly went to a fourth edition (1836). Babbage represented his work as largely a result of actual observations in factories, British and abroad. It was not, in its first edition, intended to address deeper questions of political economy; the second (late 1832) did, with three further chapters including one on piece rate. The book also contained ideas on rational design in factories, and profit sharing. "Babbage principle" In Economy of Machinery was described what is now called the "Babbage principle". It pointed out commercial advantages available with more careful division of labour. As Babbage himself noted, it had already appeared in the work of Melchiorre Gioia in 1815. The term was introduced in 1974 by Harry Braverman. Related formulations are the "principle of multiples" of Philip Sargant Florence, and the "balance of processes". What Babbage remarked is that skilled workers typically spend parts of their time performing tasks that are below their skill level. If the labour process can be divided among several workers, labour costs may be cut by assigning only high-skill tasks to high-cost workers, restricting other tasks to lower-paid workers. He also pointed out that training or apprenticeship can be taken as fixed costs; but that returns to scale are available by his approach of standardisation of tasks, therefore again favouring the factory system. His view of human capital was restricted to minimising the time period for recovery of training costs. Publishing Another aspect of the work was its detailed breakdown of the cost structure of book publishing. Babbage took the unpopular line, from the publishers' perspective, of exposing the trade's profitability. He went as far as to name the organisers of the trade's restrictive practices. Twenty years later he attended a meeting hosted by John Chapman to campaign against the Booksellers Association, still a cartel. Influence It has been written that "what Arthur Young was to agriculture, Charles Babbage was to the factory visit and machinery". 
Babbage's theories are said to have influenced the layout of the 1851 Great Exhibition, and his views had a strong effect on his contemporary George Julius Poulett Scrope. Karl Marx argued that the source of the productivity of the factory system was exactly the combination of the division of labour with machinery, building on Adam Smith, Babbage and Ure. Where Marx picked up on Babbage and disagreed with Smith was on the motivation for division of labour by the manufacturer: as Babbage did, he wrote that it was for the sake of profitability, rather than productivity, and identified an impact on the concept of a trade. John Ruskin went further, to oppose completely what manufacturing in Babbage's sense stood for. Babbage also affected the economic thinking of John Stuart Mill. George Holyoake saw Babbage's detailed discussion of profit sharing as substantive, in the tradition of Robert Owen and Charles Fourier, if requiring the attentions of a benevolent captain of industry, and ignored at the time. Works by Babbage and Ure were published in French translation in 1830; On the Economy of Machinery was translated in 1833 into French by Édouard Biot, and into German the same year by Gottfried Friedenberg. The French engineer and writer on industrial organisation Léon Lalanne was influenced by Babbage, but also by the economist Claude Lucien Bergery, in reducing the issues to "technology". William Jevons connected Babbage's "economy of labour" with his own labour experiments of 1870. The Babbage principle is an inherent assumption in Frederick Winslow Taylor's scientific management. Mary Everest Boole claimed that there was profound influence – via her uncle George Everest – of Indian thought in general and Indian logic, in particular, on Babbage and on her husband George Boole, as well as on Augustus De Morgan: Think what must have been the effect of the intense Hinduizing of three such men as Babbage, De Morgan, and George Boole on the mathematical atmosphere of 1830–65. What share had it in generating the Vector Analysis and the mathematics by which investigations in physical science are now conducted? Natural theology In 1837, responding to the series of eight Bridgewater Treatises, Babbage published his Ninth Bridgewater Treatise, under the title On the Power, Wisdom and Goodness of God, as manifested in the Creation. In this work Babbage weighed in on the side of uniformitarianism in a current debate. He preferred the conception of creation in which a God-given natural law dominated, removing the need for continuous "contrivance". The book is a work of natural theology, and incorporates extracts from related correspondence of Herschel with Charles Lyell. Babbage put forward the thesis that God had the omnipotence and foresight to create as a divine legislator. In this book, Babbage dealt with relating interpretations between science and religion; on the one hand, he insisted that "there exists no fatal collision between the words of Scripture and the facts of nature;" on the other hand, he wrote that the Book of Genesis was not meant to be read literally in relation to scientific terms. Against those who said these were in conflict, he wrote "that the contradiction they have imagined can have no real existence, and that whilst the testimony of Moses remains unimpeached, we may also be permitted to confide in the testimony of our senses." The Ninth Bridgewater Treatise was quoted extensively in Vestiges of the Natural History of Creation.
The parallel with Babbage's computing machines is made explicit, as allowing plausibility to the theory that transmutation of species could be pre-programmed. Jonardon Ganeri, author of Indian Logic, believes Babbage may have been influenced by Indian thought; one possible route would be through Henry Thomas Colebrooke. Mary Everest Boole argues that Babbage was introduced to Indian thought in the 1820s by her uncle George Everest: Some time about 1825, [Everest] came to England for two or three years, and made a fast and lifelong friendship with Herschel and with Babbage, who was then quite young. I would ask any fair-minded mathematician to read Babbage's Ninth Bridgewater Treatise and compare it with the works of his contemporaries in England; and then ask himself whence came the peculiar conception of the nature of miracle which underlies Babbage's ideas of Singular Points on Curves (Chap, viii) – from European Theology or Hindu Metaphysic? Oh! how the English clergy of that day hated Babbage's book! Religious views Babbage was raised in the Protestant form of the Christian faith, his family having inculcated in him an orthodox form of worship. Rejecting the Athanasian Creed as a "direct contradiction in terms", in his youth he looked to Samuel Clarke's works on religion, of which Being and Attributes of God (1704) exerted a particularly strong influence on him. Later in life, Babbage concluded that "the true value of the Christian religion rested, not on speculative [theology] … but … upon those doctrines of kindness and benevolence which that religion claims and enforces, not merely in favour of man himself but of every creature susceptible of pain or of happiness." In his autobiography Passages from the Life of a Philosopher (1864), Babbage wrote a whole chapter on the topic of religion, where he identified three sources of divine knowledge: a priori or mystical experience; revelation; and the examination of the works of the Creator. He stated, on the basis of the design argument, that studying the works of nature had been the more appealing evidence, and the one which led him to actively profess the existence of God. He advocated for natural theology. Like Samuel Vince, Babbage also wrote a defence of the belief in divine miracles. Against objections previously posed by David Hume, Babbage advocated belief in divine agency, stating "we must not measure the credibility or incredibility of an event by the narrow sphere of our own experience, nor forget that there is a Divine energy which overrides what we familiarly call the laws of nature." He alluded to the limits of human experience, expressing: "all that we see in a miracle is an effect which is new to our observation, and whose cause is concealed. The cause may be beyond the sphere of our observation, and would be thus beyond the familiar sphere of nature; but this does not make the event a violation of any law of nature. The limits of man's observation lie within very narrow boundaries, and it would be arrogance to suppose that the reach of man's power is to form the limits of the natural world." Later life The British Association was consciously modelled on the Deutsche Naturforscher-Versammlung, founded in 1822. It rejected romantic science as well as metaphysics, and started to entrench the divisions of science from literature, and professionals from amateurs.
Belonging as he did to the "Wattite" faction in the BAAS, represented in particular by James Watt the younger, Babbage identified closely with industrialists. He wanted to go faster in the same directions, and had little time for the more gentlemanly component of its membership. Indeed, he subscribed to a version of conjectural history that placed industrial society as the culmination of human development (and shared this view with Herschel). A clash with Roderick Murchison led in 1838 to his withdrawal from further involvement. At the end of the same year he sent in his resignation as Lucasian professor, walking away also from the Cambridge struggle with Whewell. His interests became more focussed, on computation and metrology, and on international contacts. Metrology programme A project announced by Babbage was to tabulate all physical constants (referred to as "constants of nature", a phrase in itself a neologism), and then to compile an encyclopaedic work of numerical information. He was a pioneer in the field of "absolute measurement". His ideas followed on from those of Johann Christian Poggendorff, and were mentioned to Brewster in 1832. There were to be 19 categories of constants, and Ian Hacking sees these as reflecting in part Babbage's "eccentric enthusiasms". Babbage's paper On Tables of the Constants of Nature and Art was reprinted by the Smithsonian Institution in 1856, with an added note that the physical tables of Arnold Henry Guyot "will form a part of the important work proposed in this article". Exact measurement was also key to the development of machine tools. Here again Babbage is considered a pioneer, with Henry Maudslay, William Sellers, and Joseph Whitworth. Engineer and inventor Through the Royal Society Babbage acquired the friendship of the engineer Marc Brunel. It was through Brunel that Babbage knew of Joseph Clement, and so came to encounter the artisans whom he observed in his work on manufactures. Babbage provided an introduction for Isambard Kingdom Brunel in 1830, for a contact with the proposed Bristol & Birmingham Railway. He carried out studies, around 1838, to show the superiority of the broad gauge for railways, used by Brunel's Great Western Railway. In 1838, Babbage invented the pilot (also called a cow-catcher), the metal frame attached to the front of locomotives that clears the tracks of obstacles; he also constructed a dynamometer car. His eldest son, Benjamin Herschel Babbage, worked as an engineer for Brunel on the railways before emigrating to Australia in the 1850s. Babbage also invented an ophthalmoscope, which he gave to Thomas Wharton Jones for testing. Jones, however, ignored it. The device only came into use after being independently invented by Hermann von Helmholtz. Cryptography Babbage achieved notable results in cryptography, though this was still not known a century after his death. Letter frequency was category 18 of Babbage's tabulation project. Joseph Henry later defended interest in it, in the absence of the facts, as relevant to the management of movable type. As early as 1845, Babbage had solved a cipher that had been posed as a challenge by his nephew Henry Hollier, and in the process, he made a discovery about ciphers that were based on Vigenère tables. Specifically, he realised that enciphering plain text with a keyword rendered the cipher text subject to modular arithmetic. During the Crimean War of the 1850s, Babbage broke Vigenère's autokey cipher as well as the much weaker cipher that is called Vigenère cipher today. 
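Babbage's insight can be restated in modern terms: a Vigenère cipher adds the letters of a repeating keyword to the letters of the plaintext, modulo 26, and it is this regular arithmetic structure that keyword-length analysis exploits. The following minimal Python sketch illustrates the arithmetic only (it is a modern restatement, not anything Babbage wrote, and assumes uppercase A–Z text with no spaces or punctuation):

def vigenere_encrypt(plaintext, keyword):
    # Treat A..Z as the numbers 0..25; each plaintext letter is shifted
    # by the corresponding letter of the repeating keyword, modulo 26.
    out = []
    for i, ch in enumerate(plaintext):
        p = ord(ch) - ord('A')
        k = ord(keyword[i % len(keyword)]) - ord('A')
        out.append(chr((p + k) % 26 + ord('A')))
    return ''.join(out)

print(vigenere_encrypt("ATTACKATDAWN", "LEMON"))  # prints LXFOPVEFRNHR

Because the same keyword letter recurs at fixed intervals, repeated plaintext fragments can produce repeated ciphertext fragments at distances divisible by the keyword length, which is the opening that Babbage's (and later Kasiski's) method exploits.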
His discovery was kept a military secret, and was not published. Credit for the result was instead given to Friedrich Kasiski, a Prussian infantry officer, who made the same discovery some years later. However, in 1854, Babbage published the solution of a Vigenère cipher, which had been published previously in the Journal of the Society of Arts. In 1855, Babbage also published a short letter, "Cypher Writing", in the same journal. Nevertheless, his priority was not established until 1985.

Public nuisances
Babbage involved himself in well-publicised but unpopular campaigns against public nuisances. He once counted all the broken panes of glass of a factory, publishing in 1857 a "Table of the Relative Frequency of the Causes of Breakage of Plate Glass Windows": of 464 broken panes, 14 were caused by "drunken men, women or boys". Babbage's distaste for commoners (the Mob) included writing "Observations of Street Nuisances" in 1864, as well as tallying up 165 "nuisances" over a period of 80 days. He especially hated street music, and in particular the music of organ grinders, against whom he railed in various venues. Babbage was not alone in his campaign; a convert to the cause was the MP Michael Thomas Bass. In the 1860s, Babbage also took up the anti-hoop-rolling campaign. He blamed hoop-rolling boys for driving their iron hoops under horses' legs, with the result that the rider was thrown and very often the horse broke a leg. Babbage achieved a certain notoriety in this matter, being denounced in debate in the Commons in 1864 for "commencing a crusade against the popular game of tip-cat and the trundling of hoops."

Computing pioneer
Babbage's machines were among the first mechanical computers. That they were not actually completed was largely because of funding problems and clashes of personality, most notably with George Biddell Airy, the Astronomer Royal. Babbage directed the building of some steam-powered machines that achieved modest success, suggesting that calculations could be mechanised. For more than ten years he received government funding for his project, which amounted to £17,000, but eventually the Treasury lost confidence in him. While Babbage's machines were mechanical and unwieldy, their basic architecture was similar to that of a modern computer: the data and program memory were separated, operation was instruction-based, the control unit could make conditional jumps, and the machine had a separate I/O unit.

Background on mathematical tables
In Babbage's time, printed mathematical tables were calculated by human computers; in other words, by hand. They were central to navigation, science and engineering, as well as mathematics. Mistakes were known to occur in transcription as well as calculation. At Cambridge, Babbage saw the fallibility of this process, and the opportunity of adding mechanisation into its management. His own account of his path towards mechanical computation references a particular occasion at Cambridge; there was another period, seven years later, when his interest was aroused by the issues around computation of mathematical tables. The French official initiative by Gaspard de Prony, and its problems of implementation, were familiar to him. After the Napoleonic Wars came to a close, scientific contacts were renewed on the level of personal contact: in 1819 Charles Blagden was in Paris looking into the printing of the stalled de Prony project, and lobbying for the support of the Royal Society. In works of the 1820s and 1830s, Babbage referred in detail to de Prony's project.
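The architectural parallels noted above under "Computing pioneer" (separate program and data stores, instruction-driven operation, conditional jumps, and a distinct output unit) can be made concrete with a toy interpreter. The sketch below is purely illustrative: the instruction names are invented for this example and do not correspond to Babbage's mechanism or notation.

```python
# A toy stored-program machine with the features Babbage's designs
# anticipated: program and data are held separately, execution is
# instruction-based, JNZ provides a conditional jump, and PRINT stands
# in for a separate output unit.

def run(program, store):
    pc = 0                              # program counter
    while pc < len(program):
        op, *args = program[pc]
        if op == "ADD":                 # store[a] += store[b]
            store[args[0]] += store[args[1]]
        elif op == "SUB":               # store[a] -= store[b]
            store[args[0]] -= store[args[1]]
        elif op == "JNZ":               # jump to address if store[a] != 0
            if store[args[0]] != 0:
                pc = args[1]
                continue
        elif op == "PRINT":             # output unit
            print(store[args[0]])
        pc += 1

# Multiply 6 by 7 using only addition, subtraction and a conditional jump:
program = [("ADD", 2, 0),   # accumulator += multiplicand
           ("SUB", 1, 3),   # counter -= 1
           ("JNZ", 1, 0),   # loop while counter is non-zero
           ("PRINT", 2)]    # prints 42
run(program, {0: 6, 1: 7, 2: 0, 3: 1})
```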
Difference engine
Babbage began in 1822 with what he called the difference engine, designed to compute values of polynomial functions by calculating a series of values automatically. By using the method of finite differences, it was possible to avoid the need for multiplication and division. For a prototype difference engine, Babbage brought in Joseph Clement to implement the design, in 1823. Clement worked to high standards, but his machine tools were particularly elaborate. Under the standard terms of business of the time, he could charge for their construction, and would also own them. He and Babbage fell out over costs around 1831. Some parts of the prototype survive in the Museum of the History of Science, Oxford. This prototype evolved into the "first difference engine". It remained unfinished, and the finished portion is located at the Science Museum in London. This first difference engine would have been composed of around 25,000 parts, would have weighed about fifteen tons (13,600 kg), and would have stood 8 ft (2.4 m) tall. Although Babbage received ample funding for the project, it was never completed. He later (1847–1849) produced detailed drawings for an improved version, "Difference Engine No. 2", but did not receive funding from the British government. His design was finally constructed in 1989–1991, using his plans and 19th-century manufacturing tolerances. It performed its first calculation at the Science Museum, London, returning results to 31 digits. Nine years later, in 2000, the Science Museum completed the printer Babbage had designed for the difference engine.

Completed models
The Science Museum has constructed two Difference Engines according to Babbage's plans for the Difference Engine No 2. One is owned by the museum. The other, owned by the technology multimillionaire Nathan Myhrvold, went on exhibition at the Computer History Museum in Mountain View, California on 10 May 2008. The two models that have been constructed are not replicas.
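The method of finite differences that the engine mechanised can be illustrated briefly: for a polynomial of degree n, the n-th differences are constant, so once the first few values are set up, every subsequent table entry follows by addition alone. A minimal Python sketch of the principle (an illustration, not a simulation of the mechanism):

```python
# Tabulate p(x) = 2x^2 + 3x + 5 for x = 0, 1, 2, ... using only addition.
# The second difference of a quadratic is constant (here 4), so after
# the set-up each new value costs two additions -- the operation the
# difference engine's columns of wheels performed.

p = lambda x: 2 * x * x + 3 * x + 5

value = p(0)                    # 5
d1 = p(1) - p(0)                # initial first difference: 5
d2 = p(2) - 2 * p(1) + p(0)     # constant second difference: 4

for x in range(10):
    print(x, value)             # matches p(x) computed directly
    value += d1                 # next table value
    d1 += d2                    # next first difference
```

Only the set-up requires multiplication; the table itself is produced entirely by the two running additions, which is why the engine needed no multiplying mechanism.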
Analytical Engine
After the attempt at making the first difference engine fell through, Babbage worked to design a more complex machine called the Analytical Engine. He hired C. G. Jarvis, who had previously worked for Clement as a draughtsman. The Analytical Engine marks the transition from mechanised arithmetic to fully-fledged general purpose computation; it is largely on it that Babbage's standing as a computer pioneer rests. The major innovation was that the Analytical Engine was to be programmed using punched cards: the Engine was intended to use loops of Jacquard's punched cards to control a mechanical calculator, which could use as input the results of preceding computations. The machine was also intended to employ several features subsequently used in modern computers, including sequential control, branching and looping. It would have been the first mechanical device to be, in principle, Turing-complete. The Engine was not a single physical machine, but rather a succession of designs that Babbage tinkered with until his death in 1871.

Ada Lovelace and Italian followers
Ada Lovelace, who corresponded with Babbage during his development of the Analytical Engine, is credited with developing an algorithm that would enable the Engine to calculate a sequence of Bernoulli numbers. Despite documentary evidence in Lovelace's own handwriting, some scholars dispute to what extent the ideas were Lovelace's own. For this achievement, she is often described as the first computer programmer, though no programming language had yet been invented. Lovelace also translated and wrote literature supporting the project. Describing the engine's programming by punched cards, she wrote: "We may say most aptly that the Analytical Engine weaves algebraical patterns just as the Jacquard loom weaves flowers and leaves." Babbage visited Turin in 1840 at the invitation of Giovanni Plana, who had developed in 1831 an analog computing machine that served as a perpetual calendar. There, in 1840, Babbage gave his only public explanation and lectures on the Analytical Engine. In 1842 Charles Wheatstone approached Lovelace to translate a paper of Luigi Menabrea, who had taken notes of Babbage's Turin talks, and Babbage asked her to add something of her own. Fortunato Prandi, who acted as interpreter in Turin, was an Italian exile and follower of Giuseppe Mazzini.

Swedish followers
Per Georg Scheutz wrote about the difference engine in 1830, and experimented in automated computation. After 1834 and Lardner's Edinburgh Review article he set up a project of his own, doubting whether Babbage's initial plan could be carried out. This he pushed through with his son, Edvard Scheutz. Another Swedish engine was that of Martin Wiberg (1860).

Legacy
In 2011, researchers in Britain proposed a multimillion-pound project, "Plan 28", to construct Babbage's Analytical Engine. Since Babbage's plans were continually being refined and were never completed, they intended to engage the public in the project and crowd-source the analysis of what should be built. It would have the equivalent of 675 bytes of memory, and run at a clock speed of about 7 Hz. They hope to complete it by the 150th anniversary of Babbage's death, in 2021. Advances in MEMS and nanotechnology have led to recent high-tech experiments in mechanical computation. The benefits suggested include operation in high-radiation or high-temperature environments. These modern versions of mechanical computation were highlighted in The Economist in its special "end of the millennium" black cover issue, in an article entitled "Babbage's Last Laugh". Due to his association with the town, Babbage was chosen in 2007 to appear on the 5 Totnes pound note. An image of Babbage features in the British cultural icons section of the newly designed British passport of 2015.

Family
On 25 July 1814, Babbage married Georgiana Whitmore at St. Michael's Church in Teignmouth, Devon; her sister Louisa married Sir Edward Ryan five months later. The couple lived at Dudmaston Hall, Shropshire (where Babbage engineered the central heating system), before moving to 5 Devonshire Street, London in 1815. Charles and Georgiana had eight children, but only four – Benjamin Herschel, Georgiana Whitmore, Dugald Bromhead and Henry Prevost – survived childhood. Charles' wife Georgiana died in Worcester on 1 September 1827, the same year as his father, their second son (also named Charles) and their newborn son Alexander.
Benjamin Herschel Babbage (1815–1878)
Charles Whitmore Babbage (1817–1827)
Georgiana Whitmore Babbage (1818 – 26 September 1834)
Edward Stewart Babbage (1819–1821)
Francis Moore Babbage (1821–????)
Dugald Bromhead (Bromheald?) Babbage (1823–1901)
(Maj-Gen) Henry Prevost Babbage (1824–1918)
Alexander Forbes Babbage (1827–1827)
His youngest surviving son, Henry Prevost Babbage (1824–1918), went on to create six small demonstration pieces for Difference Engine No. 1 based on his father's designs, one of which was sent to Harvard University where it was later discovered by Howard H. Aiken, pioneer of the Harvard Mark I.
Henry Prevost's 1910 Analytical Engine Mill, previously on display at Dudmaston Hall, is now on display at the Science Museum.

Death
Babbage lived and worked for over 40 years at 1 Dorset Street, Marylebone, where he died, at the age of 79, on 18 October 1871; he was buried in London's Kensal Green Cemetery. According to Horsley, Babbage died "of renal inadequacy, secondary to cystitis." He had declined both a knighthood and a baronetcy. He also argued against hereditary peerages, favouring life peerages instead.

Autopsy report
In 1983, the autopsy report for Charles Babbage was discovered and later published by his great-great-grandson. A copy of the original is also available. Half of Babbage's brain is preserved at the Hunterian Museum in the Royal College of Surgeons in London. The other half of Babbage's brain is on display in the Science Museum, London.

Memorials
There is a black plaque commemorating the 40 years Babbage spent at 1 Dorset Street, London. Locations, institutions and other things named after Babbage include:
The Moon crater Babbage
The Charles Babbage Institute, an information technology archive and research center at the University of Minnesota
The Charles Babbage Premium, an annual computing award
A locomotive named after him by British Rail in the 1990s
The Babbage Building at the University of Plymouth, where the university's school of computing is based
The Babbage programming language for GEC 4000 series minicomputers
"Babbage", The Economist's science and technology blog
The former chain retail computer and video-games store "Babbage's" (now GameStop)

In fiction and film
Babbage frequently appears in steampunk works; he has been called an iconic figure of the genre. Other works in which Babbage appears include:
The 2008 short film Babbage, screened at the 2008 Cannes Film Festival, a 2009 finalist with Haydenfilms, and shown at the 2009 HollyShorts Film Festival and other international film festivals. The film shows Babbage at a dinner party, with guests discussing his life and work.
The Thrilling Adventures of Lovelace and Babbage by Sydney Padua, a cartoon alternate history in which Babbage and Lovelace succeed in building the Analytical Engine. It quotes heavily from the writings of Lovelace, Babbage and their contemporaries.
A strip of the webcomic Hark! A Vagrant, which cartoonist Kate Beaton devoted to Charles and Georgiana Babbage.
The Doctor Who episode "Spyfall, Part 2" (series 12, episode 2), which features Charles Babbage and Ada Gordon as characters who assist the Doctor when she is stuck in the year 1834.

Publications
(Reissued by Cambridge University Press, 2009.)
(The LOCOMAT site contains a reconstruction of this table.)

See also
Babbage's congruence
List of pioneers in computer science

Notes

References

External links
The Babbage Papers: the papers held by the Science Museum Library and Archives, which relate mostly to Babbage's automatic calculating engines
The Babbage Engine: Computer History Museum, Mountain View CA, USA. Multi-page account of Babbage, his engines and his associates, including a video of the Museum's functioning replica of the Difference Engine No 2 in action
Analytical Engine Museum: John Walker's (of AutoCAD fame) comprehensive catalogue of the complete technical works relating to Babbage's machine.
Charles Babbage: a history at the School of Mathematics and Statistics, University of St Andrews, Scotland
Mr. Charles Babbage: obituary from The Times (1871)
The Babbage Pages
Charles Babbage, The Online Books Page, University of Pennsylvania
The Babbage Difference Engine: an overview of how it works
"On a Method of Expressing by Signs the Action of Machinery", 1826. Original edition
Charles Babbage Institute: pages on "Who Was Charles Babbage?" including biographical note, description of Difference Engine No. 2, publications by Babbage, archival and published sources on Babbage, sources on Babbage and Ada Lovelace
Babbage's Ballet by Ivor Guest, Ballet Magazine, 1997
Babbage's Calculating Machine (1872) – full digital facsimile from Linda Hall Library
Author profile in the database zbMATH
The 'difference engine' built by Georg & Edvard Scheutz in 1843

1791 births
1871 deaths
19th-century English mathematicians
19th-century English people
Alumni of Peterhouse, Cambridge
Alumni of Trinity College, Cambridge
British business theorists
Burials at Kensal Green Cemetery
Corresponding members of the Saint Petersburg Academy of Sciences
English Christians
English computer scientists
English engineers
Fellows of the American Academy of Arts and Sciences
Fellows of the Royal Astronomical Society
Fellows of the Royal Society of Edinburgh
Fellows of the Royal Society
Lucasian Professors of Mathematics
People educated at Totnes Grammar School
People of the Industrial Revolution
Recipients of the Gold Medal of the Royal Astronomical Society
Mathematicians from London
25070057
https://en.wikipedia.org/wiki/Anyplace%20Control
Anyplace Control
Anyplace Control is a Windows-based remote-control product for accessing and controlling remote PCs located either on a local network or on the Internet. It displays the remote computer's desktop on the screen of the local PC, and allows control of that computer with the local mouse and keyboard. The software has a file transfer feature for sending files between computers. Remote users create an Access Password, plus an account username and password, to reach the target PC. The program must be installed on both computers, with the client part of the software on the local PC.

Features
Display of a remote computer's desktop in real time on the local screen.
Remote PC control.
File transfer between remote computers.
Turning the remote PC on or off, or rebooting it.
Disabling of the remote mouse, keyboard, or monitor.
Connection through routers, firewalls and dynamic IP addresses.

Security
1. Secure authentication and traffic encryption: all transferred data is encrypted with the RC4 algorithm using a random 128-bit key.
2. Double password protection: connecting to a remote computer via the Internet requires knowledge of at least two passwords, the Account Password and the remote PC's Access Password.
3. No open ports in the firewall: no additional ports need to be opened in the firewall when connecting through the Account Connection service.

Licensing
The product is shareware available for private and corporate use. Licenses are provided on a per-Host or per-Admin-module basis.

References

External links
Official Site
PCIN Review
PC Advisor Review

Remote Desktop Software
Remote desktop
22334780
https://en.wikipedia.org/wiki/Corporate%20venture%20capital
Corporate venture capital
Corporate venture capital (CVC) is the investment of corporate funds directly in external startup companies. CVC is defined by the Business Dictionary as the "practice where a large firm takes an equity stake in a small but innovative or specialist firm, to which it may also provide management and marketing expertise; the objective is to gain a specific competitive advantage." Examples of CVCs include GV and Intel Capital. The definition of CVC often becomes clearer by explaining what it is not. An investment made through an external fund managed by a third party, even when the investment vehicle is funded by a single investing company, is not considered CVC. Most importantly, CVC is not synonymous with venture capital (VC); rather, it is a specific subset of venture capital. In essence, corporate venturing is about setting up structural collaborations with external ventures or parties to drive mutual growth. These external ventures are startups (early-stage companies) or scale-ups (companies that have found product/market fit) that come from outside the organization. Due to its hybrid nature, involving elements of both corporate rigidity and startup culture, managing a successful CVC unit is a difficult task that involves a number of hurdles and often fails to deliver the expected outcomes.

Objectives
As Henry Chesbrough, professor at the Haas School of Business at UC Berkeley, explains in his "Making Sense of Corporate Venture Capital" article, CVC has two hallmarks: (1) its objective; and (2) the degree to which the operations of the startup and the investing company are connected. CVC differs from private VC in that it commonly strives to advance both strategic and financial objectives. Strategically driven CVC investments are made primarily to increase, directly or indirectly, the sales and profits of the incumbent firm's business. A well-established firm making a strategic CVC investment seeks to identify and exploit synergies between itself and the new venture. The goal is to exploit the potential for additional growth within the parent firm. For instance, investing firms may want to obtain a window on new technologies, to enter new markets, to identify acquisition targets and/or to access new resources. Financially driven CVC investments are investments where parent firms are looking for leverage on returns. The full potential of leverage is often achieved through exits such as an initial public offering (IPO) or the sale of stakes to third parties. The objective is to exploit the independent revenue and profit in the new venture itself. Specifically for CVC, the parent company seeks to do as well as, if not better than, private VC investors, hence the motivation to keep its VC efforts "in house". The CVC division often believes it has a competitive advantage over private VC firms due to what it considers to be superior knowledge of markets and technologies, its strong balance sheet, and its ability to be a patient investor. Chesbrough points out that a company's brand may signal the quality of the start-up to other investors and potential customers; this may eventually result in rewards to the initial investor. He gives the example of Dell Ventures, Dell Computer's in-house VC division, which made multiple Internet investments with the expectation of earning favorable returns. Although Dell hoped the seed money would help its own business grow, the primary motivation for the investments was the opportunity to earn high financial returns.
Reaching strategic goals is not necessarily in opposition to financial objectives. As the literature demonstrates, both objectives can go hand in hand and offer complementary motivations. In the long run, all strategic investments produce financial added value. That said, the short-term concordance between financial and strategic objectives can occasionally be questionable: a strong focus on achieving short-term financial goals might have a counterproductive impact on the ability to achieve long-term strategic objectives, which would in turn reduce long-term financial returns. Faced with this dilemma, parent firms first screen venture proposals for strategic rationales. Then, when an investment proposal fits strategic objectives, corporate firms analyze it according to financial investment standards, using methods analogous to those of independent venture capitalists. A recent empirical study examined the degree to which European incumbent firms emphasize strategic and financial objectives. Results show that 54% of European parent firms invest primarily for strategic reasons, yet with financial concerns; 33% invest primarily for financial reasons, with strategic concerns; and 13% invest purely financially. None invest for purely strategic rationales. In comparison with an American study, the results show significant differences in investment styles: 50% of US parent firms invest primarily for strategic reasons with financial concerns, 20% for financial reasons with strategic concerns, 15% purely financially and 15% purely strategically. As a rule, incumbent firms, whether European or American, invest primarily for strategic reasons. The second hallmark of corporate VC investments is the extent to which companies in the investment portfolio are linked to the investing company's current operational abilities. For example, a start-up with strong links to the investing company might make use of that company's manufacturing plants, distribution channels, technology, or brand. It might adopt the investing company's business practices to build, sell, or service its products. An external venture may offer the investing company an opportunity to build new and different capabilities, ones that could threaten the viability of current corporate capabilities. Housing these capabilities in a separate legal entity can insulate them from internal efforts to undermine them. If the venture and its processes fare well, the corporation can then evaluate whether and how to adapt its own processes to be more like those of the start-up. Although it happens far less than commonly thought, the CVC parent company may attempt to acquire the new venture.

The 20 largest venturing firms, 1969–99 (from Dushnitsky, 2006):
1. Intel
2. Cisco
3. Microsoft
4. Comdisco
5. Dell
6. MCIworld.com
7. AOL
8. Motorola
9. Sony
10. Qualcomm
11. Safeguard
12. Sun Micro
13. J&J
14. Global-Tech
15. Yahoo
16. Xerox
17. Compaq
18. Citigroup
19. Ford Motor
20. Comcast

Investing and Financing

Types of Investing
By combining the two dimensions of CVC investing – strategic and financial objectives – four distinct investment motivations and strategies can be outlined.

A. Driving Investments
Driving investments are pursued by CVCs for strategic alignment, with tight links between the investing company's operations and the startup company being invested in. The purpose of this investing option is to advance the strategy of the current business.
The CVC looks for key growth areas within the startup companies and then hopes to combine them with the company's initiatives. Appropriately selected investing and alignment can benefit the investing company by furthering the corporate strategy. On the other hand, this could result in failure. Closely linked investments essentially roll into the current strategy in place. This is of little use in dealing with disruptive strategies already in place, or in finding new ones when the investing company needs to update its processes to keep up with a changing environment. Thus, if CVCs are looking to "transcend current strategies and processes", they need to look to other investing strategies.

B. Enabling Investments
Enabling investments are also made for strategic purposes, but in this case they are not linked closely with the investing company's operations. The thought process is that a tight link is not necessary for a successful investment to help the investing company succeed. Although this may seem counterintuitive, the idea is to take advantage of complementary products. Enabling investments complement the strategy of the current business. Ideally, the popularity of the investments will help to create demand for the investing company's products by stimulating the industry in which the products are used. The limit of enabling investments is that they will only be successful if they "capture a substantial portion of the market growth they stimulate".

C. Emergent Investments
While emergent investments do not promote current strategies, they do link tightly with the investing company's operations. If the business environment or the company's strategy changes, the investment could become strategically valuable. This design helps create a sort of option strategy that is independent of financial returns. Emergent investments allow investing companies to explore new untapped markets that they are unable to enter due to their focus on the current markets they serve. Investment products can be sold in new markets to help gather vital information that could not otherwise be obtained. If the information looks promising, the company could look to shift towards this new direction. Emergent investments are initially made for financial gains but could ultimately result in strategic gains as well. On the contrary, if they do not prove to be important for the company's strategy, they should be left untouched to generate whatever financial returns possible. In summary, emergent investments require "balancing financial discipline and strategic potential."

D. Passive Investments
Passive investments are connected neither to the investing company's strategy nor to its operations. Thus, these investments do not help the investing company to actively advance its own business and can only provide financial returns. Essentially, passive investments are no different from typical investments whose financial returns are contingent on the volatility of the private equity market. Due to the lack of any strategic advantages with this kind of investing, passive investments are not very practical or advantageous.
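The four types above amount to a 2×2 grid over the framework's two dimensions: the investment's objective (strategic versus financial) and how tightly the venture is linked to the investing company's operations. A small illustrative sketch in Python (the function and its naming are this article's own shorthand, not an industry API):

```python
# The four CVC investment types as a function of the two dimensions
# discussed above: objective and operational linkage.

def cvc_investment_type(strategic_objective: bool, tightly_linked: bool) -> str:
    if strategic_objective and tightly_linked:
        return "Driving"    # advances current strategy through close links
    if strategic_objective:
        return "Enabling"   # complements strategy, e.g. by stimulating demand
    if tightly_linked:
        return "Emergent"   # financial today, a strategic option tomorrow
    return "Passive"        # financial returns only

print(cvc_investment_type(True, False))    # -> Enabling
print(cvc_investment_type(False, True))    # -> Emergent
```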
Stages of financing
Corporate venture capital firms provide funding to startup companies during various phases of development. Each phase has its own financing requirements, and CVCs will often indicate the stage of financing needed and the types of investments they prefer to make. Later stages of financing usually mean less risky investments, and thus investment in these companies typically costs more money due to higher company or product valuations. Investments in startup companies are expected to provide CVCs with a return on investment within 4–7 years, whereas investments in established companies are expected to do so in a shorter 2- to 4-year period.

A. Early-stage financing
In this stage, the startup company basically has a concept. Capital is used to carry out market research and product development. Startup financing can be used to establish management, research and development, marketing, and quality management teams, and to buy additional equipment and resources. An extension of early-stage financing is first-stage financing, where companies can start manufacturing and sales processes to initiate a product launch.

B. Seed capital funds
This phase finances early-stage companies. The startup is still shaping its concept, and production and service are not yet fully developed. Investment money can be used at this time to construct a working prototype. Additionally, the funds can be used for further market research and legal issues such as patents. Investment firms expect only 20% of companies to succeed and move to a second round of financing. In this stage, the company can often be moved to another round of funding or even a series of funds that take over the management of the investment. Investing firms expect a high percentage of the business and often provide funding in stages that depend on the startup company reaching set milestones. For example, a venture capital firm may agree to provide $5 million during this phase, but may pay out the funding in one-third installments contingent on the startup meeting set milestones. Finally, CVCs often look to promote or insist on specific executives to manage the startup at this time.

C. Expansion financing
Second-stage: companies already selling product are funded at this stage to help in their expansion. One to ten million dollars can be provided to help recruit more employees to establish engineering, sales, and marketing functions. Companies are often not making profits at this time, and thus funds can also be used to cover negative cash flow. Third-stage or mezzanine financing allows for greater company expansion. This can include further development of management, plants, marketing, and possibly even additional products. Companies at this stage are often doing very well, breaking even or even turning a profit.

D. Initial public offering (IPO)
When a company goes public, it is called an initial public offering. This is often the ideal scenario CVCs hope to achieve with an investment. The startup company's stock can now be bought and sold by the public. This is when an investing company can finally earn a significant return on its investment. For example, suppose a CVC were to buy 50% of a startup for $5 million. If the startup then goes public at a $100 million valuation, the CVC's stake would grow to $50 million, or tenfold its initial investment. This is often the last phase where CVCs are involved. They look to sell their stock to cash in on the returns from their investment. Next, they look to reinvest in new ventures, starting from the beginning with a new startup.
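The arithmetic of the worked example above generalises to a one-line return calculation. A minimal sketch (the figures are the article's illustrative ones, and the function name is hypothetical):

```python
# Exit multiple on a CVC stake: value of the stake at exit divided by
# the amount invested, using the article's illustrative numbers.

def exit_multiple(invested: float, stake: float, exit_valuation: float) -> float:
    return stake * exit_valuation / invested

# 50% stake bought for $5m; IPO values the company at $100m.
print(exit_multiple(invested=5e6, stake=0.50, exit_valuation=100e6))  # 10.0
```

In practice the stake held at exit is usually smaller than the stake originally bought, because later financing rounds dilute earlier investors, so the realised multiple is typically lower than this naive figure.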
E. Mergers and acquisitions
Due to the current economic climate, IPOs have become rarer, causing venture capital firms to look towards mergers and acquisitions. This is a more realistic scenario, especially when startup companies do not look to function independently. Acquisition financing uses investment funds to acquire or buy another company. This may be completed by venture capital firms to align their startup with a complementary product or business line, where the combined companies look to assimilate smoothly, creating advantages. Acquisitions could also work in the opposite direction, where an invested startup is acquired by another firm. In this case, the CVC would be cashing in by selling its investment. Using the capital gains, it can look to reinvest with a new venture. Mergers are similar to acquisitions; however, in this case one company is not buying another. Rather, the two companies combine to share resources, processes, and technology, which they hope to leverage for advantages such as cost savings, liquidity, market positioning and the sharing of burdens such as fund raising. Each individual CVC uses the specific procedures and financing stages that serve its interests best. The financing stages presented above are only a basic format; CVCs may have more financing stages or fewer. Thus, it is important that startup companies verify that the financing strategies employed by the CVC they are working with fit their needs.

Financing process
The financing process outlines the basic steps taken by CVCs from initial contact with potential startup companies through the first round of financing.
Startup companies looking for financing make initial contact with CVCs. CVCs can also seek out potential startups looking for funding.
The startup management team presents a business plan to the CVC. If the reviewed business plan generates interest, the CVC will ask the startup for more information, including a product demonstration. Investors will also conduct their own due diligence to investigate and better understand the product, technology, market, and any other related issues.
If the CVC is interested in the proposed startup's product or service, it will look to determine the value of the startup. It communicates this valuation to the startup, often via a term sheet. If the startup is happy with the offer, a purchase price and investor equity are agreed on. Negotiations can take place during this stage of investment valuation.
Legal counsel from both sides agree to a finalized term sheet where business terms for the investment are specified. A closed period, referred to as a lock-up time period, is also established, during which the startup company cannot discuss investing opportunities with other investment groups. This indicates that a pending deal is in the process of completion.
Once a term sheet is finalized, both sides look to negotiate and finalize financing terms. Negotiations are conducted between the legal counsels of the CVC and the startup company. The startup's legal team typically creates the transaction documents, which the CVC counsel reviews. Negotiations continue until all legal and business issues are addressed. During this time, the CVC conducts a more thorough investigation of the startup company, examining the startup's books and records, financial statements, projected performance, employees and suppliers, and even its customer base.
Closing of financing is the final step. This can take place immediately upon execution of the definitive agreements or after a few weeks. The additional time may be necessary if the CVC needs time to complete its due diligence or based on the startup company's financial needs.
In the healthcare industry

Introduction
This section discusses the venture capital activities of healthcare providers such as Ascension Health, biotech firms such as Biogen Idec, and pharmaceutical companies such as GlaxoSmithKline, all of which have internal venture capital units or wholly owned subsidiaries focused on venture capital. It addresses the structure of corporate venture capital within the healthcare arena, the most common types of investments made, and the main reasons for which healthcare companies invest, as well as current trends and some predictions for the industry.

Structure
By definition, the corporate venture capital field is made up of organizations whose primary activities are not investing in other firms (see above). Since that definition rules out freestanding healthcare venture capital firms such as De Novo Ventures, as well as publicly traded firms, what remains is a limited number of organizational structure types used by healthcare corporate venture capital. The two main types are: (1) divisions within a larger healthcare company; and (2) wholly owned subsidiaries of larger healthcare companies. CVC units of both of these types often engage in partnerships with other firms. In most cases the other firms are limited partners, and the primary company manages the fund and is the only general partner.

Types of investments
The largest segment of healthcare-related venture capital investments is made by the venture arms of firms that focus on biotechnology and pharmaceutical products, such as Eli Lilly and Company, GlaxoSmithKline, Takeda Pharmaceutical Company (TPC), Biogen Idec, and Roche. Because the top priority of many of these venture capital units or divisions is to fund ventures that may result in scientific and technological discoveries and advancements that may benefit their parent companies, most tend to invest in companies whose products or proposed products are similar to their own. For example, the portfolio of Takeda Research Investment (the venture capital arm of TPC) includes such companies as: (a) Lectus Therapeutics, a biotech company based in the UK that employs a proprietary process that has enabled it to discover and develop unique small molecules that modulate ion channels; (b) Adamas Pharmaceuticals, a California specialty pharmaceutical company whose primary focus is on novel approaches to treating neurological disorders; and (c) Xenon, a leading Canadian biopharmaceutical company that aims to treat a broad range of major human diseases by isolating the genes that underlie these disorders and identifying drugs that target these genes. Many of these venture arms of larger organizations thus consider themselves financial investors in strategic areas of interest. These areas nearly always include the focus of their parent companies, but often are much broader than just the parent organization's specific focus.

Reasons for Investing
There are a number of reasons for which a healthcare-related company chooses to pursue corporate venture capital, both strategic and financial. While most healthcare corporate venture capital companies or divisions seek to be at least budget-neutral to their parent companies, strategic reasons are generally stronger motivators than financial ones. One primary strategic reason many healthcare-related CVCs cite for investing is to seek new directions and develop new products.
For this reason, the focus of most CVC units is broader than that of their parent company, as stated previously. CVC fund managers usually examine a broad array of investment opportunities that are related to some degree to the activities of their parent company, in the hope that the technologies developed will complement the parent company's product line or even lead the parent company into an area of the industry it had not previously occupied. Another strategic motivation for engaging in corporate venture capital activities is to supplement and support the activities of the parent company. While all of the companies which were researched (with the exception of De Novo Ventures) also had research and development divisions and were actively pursuing improvements to their products, the leadership of these firms recognize that developments reached by other companies could certainly be helpful to them. This occurs most often when the firm in which the CVC division invests is focused on products or services that are fairly similar to those produced or offered by the parent company. This rationale applies not just to improvements on the products a company makes, but also to the process by which it makes them. In fact, interviews with top executives at one firm reveal that the improvement of manufacturing processes is very much a consideration in their investments.

Current trends and predictions
The repercussions of the 2009 economic downturn were felt throughout the field of corporate venture capital, and healthcare was not completely immune. For example, because of the scarcity of available funds within the biotech segment of the industry, the valuation of private biotech firms fell to an all-time low. However, the healthcare industry is seen as somewhat recession-resistant, and this has encouraged some investors within the field. As such, while most major corporations with venture capital arms are keeping cash close to home, some of the first positive movement within corporate venture capital as a whole is coming from healthcare. The new venture fund recently opened by Merck Serono and the fund just closed by Ascension Health Ventures are just a few examples of the optimism slowly re-entering venture capital within healthcare. The latest survey of the 75 most influential healthcare corporate venture capital divisions shows that they are increasingly influential and larger than many independent VCs. Another trend that may see more action as venture capital funds continue to be scarce is an increasing number of strategic partnerships between firms that are based not on the exchange of funds, but rather on the exchange of technologies or process information. These types of partnerships, also known as corporate in-kind investments, may become increasingly common as liquid funds become less available but technologies that have been developed are readily shareable. One specific example of this sort of information exchange is a partnership between a large, well-established company and a small, developing company that have complementary technologies or processes. Such 'David and Goliath' partnerships are already starting to emerge outside of the healthcare industry and will likely emerge within healthcare soon.

Sectors
Investments from venture capital firms and CVCs in 1998 were mostly in the software and telecommunications sectors. By 2006, biotechnology and medical devices had become the sectors with the most investments from both venture capital firms and CVCs.
Investment in health services, however, has not been significant and has actually decreased over these periods. CVC investment in biotechnology is higher than that from VC firms, whereas medical devices and health services are not top sectors for CVC investment as they are for VC firms. Other top sectors for CVC investment are software, telecommunications, semiconductors, and media/entertainment.
Top sectors for CVC investment: biotechnology, software, telecommunications, semiconductors, and media/entertainment.
Top sectors for VC investment: software, biotechnology, medical devices, and telecommunications.

In the life sciences
Many of today's well-known companies in the life sciences have been backed with billions of dollars of venture capital investment. Some of these are: Boston Scientific, Amgen, Genentech, Genzyme, Gilead Sciences, Kyphon, Intuitive Surgical, and Scimed Life Systems. Of the $25.5 billion in total venture capital investments, $7.2 billion was targeted at the life sciences industry. The life sciences include the biotechnology and medical devices and equipment sectors. Venture capital investments within biotechnology accounted for $4.5 billion, and within medical devices and equipment for $2.7 billion. Venture capital investments have also gone toward specific diseases. For example, over the past 20 years there was venture capital support of $14.9 billion for cardiovascular/heart diseases, $14.7 billion for cancer, and $4.9 billion for diabetes.

Examples

Eli Lilly Corporate Business Development
Eli Lilly Corporate Business Development (CBD) is the CVC division of Eli Lilly and Company.

Johnson & Johnson Development Corporation
JJDC is the venture capital subsidiary of Johnson & Johnson.

Dow Venture Capital
Dow Venture Capital (DVC), the CVC division of Dow Chemical, invests in promising start-up companies in North America, Europe and Asia. DVC is located at company headquarters in Midland, MI; at European headquarters in Zurich; and in Gotemba, Japan.

Siemens Venture Capital
Siemens Venture Capital (SVC) is the corporate venture organization of Siemens AG. Its focus is on growth segments in the energy, industry and healthcare sectors. To date, SVC has invested over 800 million euros in more than 150 startup companies and 40 venture capital funds. SVC is located in Germany (Munich), in the U.S. (Palo Alto, CA and Boston, MA), in China (Beijing) and in India (Mumbai), and is active through Siemens' regional unit in Israel.

Kaiser Permanente Ventures
Kaiser Permanente Ventures (KPV) is the corporate venture capital arm of Kaiser Permanente. KPV invests in medical devices, health care services and health care information technology companies.

Geisinger Ventures
Geisinger Ventures is the corporate venture arm of Geisinger Health System. It invests resources in healthcare technology, information technology, medical devices, medical diagnostics and therapeutics, using funds from its balance sheet.

Ascension Health Ventures
Ascension Health Ventures was established in 2001 by Ascension Health with a commitment of $125 million to invest in expansion- to late-stage healthcare companies.

The University of Texas Horizon Fund
The UT Horizon Fund (UTHF) is the strategic corporate venture arm of The University of Texas. The UTHF's goals are to: (1) improve commercialization of UT technologies, and (2) improve sustainability through a positive return on investment.
The fund is evergreen: a significant portion of gains is re-invested back into the Horizon Fund for future growth. Phase I of the fund has been capitalized at $10 million through the Available University Fund of the University of Texas. The two primary programs of the fund are:
Existing Ventures Program. Many university startups have difficulty raising the desired levels of funding to continue development of technologies through to the final stages of commercialization, and university equity positions may become diluted by the preferred rights of new investors. The UT Horizon Fund co-invests with new investors to continue university equity participation all the way through to commercialization. By doing so, the UT System can increase its return on investment, both in terms of delivering real products and services beneficial to society and in terms of financial return.
New Ventures Program. The biggest bottleneck at the earliest stages of commercialization is access to entrepreneurial talent. Seasoned entrepreneurs are necessary to facilitate the effective business planning critical for growth and development, to seek regulatory approval, and to carry out other activities.
About the UT System: established by the Texas Constitution in 1876, The University of Texas System consists of nine academic universities and six health institutions, including UT Austin, UT MD Anderson Cancer Center and UT Southwestern Medical Center, along with 12 other institutions. The mission of The University of Texas System is to provide high-quality educational opportunities for the enhancement of the human resources of Texas, the nation, and the world through intellectual and personal growth. System administration is based in Austin, Texas. Offices are also located in Midland, Texas (University Lands/West Texas Operations) and Washington, D.C. (Federal Relations). These offices are responsible for the central management and coordination of the academic and health institutions. The UT System has a special responsibility for managing the Permanent University Fund (PUF), the Available University Fund (AUF) and other endowments, managing university lands, carrying out the Board of Regents' policies, collaborating with the Board of Regents on strategic planning, and serving as consultant to the institutions on issues ranging from academic programs to fund raising. In addition, the System provides a wide range of centralized, cost-effective, and value-added services on behalf of the UT institutions and the public.

Investment criteria by provider (Ascension Health example)
Opportunities are evaluated for potential clinical, operational and financial benefits to the limited-partner health systems, in addition to the financial return to the venture fund. Diversification is also a consideration; AHV seeks to balance the portfolio across sectors and stages to mitigate investment risk. Every opportunity is evaluated against the following criteria:
Industry - Healthcare segments including medical devices, medical and information technology, and services. AHV has also selectively invested in other healthcare venture funds.
Investment Size - Approximately $5 million per round; up to $10 million per company.
Company Stage - Expansion- to late-stage, within three to five years of a potential liquidity event.
Adoption Potential - Sustainable competitive advantage with a compelling benefit sufficient to influence market adoption.
Management Team - An established team with demonstrated relevant experience, depth and capability to build the business to scale and attract customers.
Other - AHV typically requests a board observer seat for each portfolio company.

In information and communication technology companies
A decade after the NASDAQ stock exchange peaked at 5,132 on March 10, 2000, the index stood at 2,358, less than half its high point. This fall affected technology investing, as the NASDAQ was an important market in which to sell venture capital-backed companies, often with business models based on using the internet. Technology companies with corporate venturing divisions were even more cyclical investors at the top of the market than independent venture capital firms, and many of the last funds to be raised before the peak were subsequently closed, such as EDS/AT Kearney (in 2002–2003, according to AT Kearney), or sold, such as Comdisco. However, non-technology firms have continued to invest their corporate venture capital in information and communication technology businesses, and in recent years a number of non-US-based technology companies have expanded or started their corporate venturing units, according to the July 2010 issue of Global Corporate Venturing. The non-technology firms interested in buying stakes include global advertising agency WPP, oil major Chevron and Dow Chemical, while non-US companies include the Korean conglomerate Samsung and Chinese computer maker Legend Holdings. However, a number of US-based technology companies that had corporate venture capital units in the 1990s, such as IBM and Microsoft, have concentrated more on other forms of finding external innovation, such as partnering or competitions.

In the utilities sector, including telecom operators
Utilities have traditionally been businesses where the customer is the regulator rather than the rate-payer using the service. This has traditionally meant that innovation, including through the use of corporate venturing, has been a lower priority than gaining market share and pricing power, unless required by regulatory action. Corporate venturing at utilities, therefore, has historically been more pro-cyclical with the economy than in other industry sectors. Telecom operators in particular, however, such as Deutsche Telekom's T-Venture and formerly France Telecom's Innovacom, have built up a successful track record through at least one economic cycle, and an increasing number of utilities across the electricity, gas and telephone industries have started to increase their use of external innovation and corporate venturing. Korea Telecom in July 2010 set up a KRW 1 trillion (US$830m) corporate venturing fund, the largest announced fund since the technology, media and telecoms bubble burst in 2000 and 2001.

In the media sector
Media companies have found their business model being transformed by the internet and the digitalisation of information. The invention of the printing press in Germany about 1440 is widely regarded as the most important event of the second Christian millennium, which reflects the role that the wider and faster dissemination of information has in society. The evolution to web-based storage and transfer of media content, with control passing from media owners to people more broadly, is affecting business models, and established communication companies are using corporate venturing as a tool to help understand the changes. This role can be powerful both for the venturing parent and for the economies where they operate.
The two most influential corporate venturing units in the media sector, South Africa-based Naspers and US-based International Data Group (IDG), chose to operate primarily in emerging markets, away from established mainstream media groups. Their success has allowed Naspers to survive and become the largest media group in emerging markets, operating in more than 127 countries, and IDG to create one of China's largest venture capital groups and a model for expansion across Asia, including India. The success of, and fears for the future of, other groups have encouraged a host of expansion and new media units to be set up, including at Kaplan.

In the energy and clean-tech sector
Clean technology, the application of digital or wireless products to the energy or other industries to reduce power consumption or improve efficiency, has been described as the "first global technological revolution", according to consultants at Cleantech Group. Since 2005, Cleantech Group has tracked corporate venturing in clean-tech and energy growing from 79 units to about 199 in 2009. Global Corporate Venturing selected US oil major Chevron as the most influential corporate venturing unit among the energy and natural resource companies. Other large companies with corporate venture capital or corporate strategic partnership arms include:
Dow Venture Capital - Invests in several cleantech sectors including materials science, alternative energy technologies, and water technologies.
Saint-Gobain External Venturing - The unit of Saint-Gobain, the largest construction products manufacturer in the world, dedicated to developing strategic partnerships between the Group and start-up companies all over the world in the green building, energy, and advanced materials spaces.
MAHLE Corporate Venture Capital - MAHLE Corporate Venture Capital (MVC) is the corporate venture arm of MAHLE GmbH. MVC invests in funds as well as start-up companies related to the cleantech and automotive sectors, in particular drive-train and mechatronics solutions. MVC is located in Germany.
Energy Technology Ventures - A joint venture involving General Electric, NRG Energy, and ConocoPhillips focused on the development of next-generation energy technologies. The JV will invest in, and offer commercial collaboration opportunities to, venture- and growth-stage energy technology companies in the renewable power generation, smart grid, energy efficiency, oil, natural gas, coal and nuclear energy, emission controls, water and biofuels sectors, primarily in North America, Europe and Israel.

In the financial sector
Financial services companies have long been interested in corporate venturing. Banks and insurers have been active limited partners in independent venture capital funds, albeit with below-average returns. According to one academic paper, "banks have long been important private equity investors. The motivations for their investment activity, however, are frequently more complex than those of other LPs". Banks have sometimes invested in venture capital to gain early access to companies before their flotation (initial public offering, IPO). With the decrease in the number of IPOs after the dot.com boom, poor financial returns from investing in venture capital over the past decade, regulatory restrictions and relatively better performance in using debt-backed securities, there are fewer banks and other financial services investors active in the sector, according to research by Global Corporate Venturing.
The remaining financial services investors are more likely to be boutique merchant banks, such as Burrill & Co., than mainstream universal banks or bancassurers. However, a nascent class of large firms, such as insurer The Hartford and bank Citigroup, has started to emerge, using corporate venturing (investing in VC funds or directly in third parties for a minority equity position) as a tool to help their businesses with product development or to understand new technologies and services. This model is similar to the approach taken in other economic sectors, and led to Citigroup being ranked the most influential corporate venturing unit in financial services in November 2010.

In the transport and logistics sector
Companies in the transport and logistics sector have been occasional sponsors of corporate venturing units, but the growing role of technology has begun to drive a resurgence. Volvo Group has been an active investor in this sector since 1997 through its VC company Volvo Technology Transfer. US-listed General Motors set up a $100m fund in June 2010, while post and logistics group Deutsche Post DHL set up the DHL Innovation Center within its DHL Solutions & Innovations unit in late 2009. However, other established groups cut back on their corporate venturing activities during the financial crisis, with Netherlands-based TNT winding up the Logispring II fund, in which it was a majority investor, in late 2009. JetBlue Technology Ventures is an example of a corporate venture capital unit that has been active in investing in the airline and transportation business.

Professional organizations

Global Corporate Venturing
Global Corporate Venturing is a media group providing news, data and comment for the industry and the wider entrepreneurial and venture community. The flagship title is Global Corporate Venturing, a monthly PDF magazine, along with daily news updates online and a LinkedIn community message and discussion group.

Strategic Venture Association
The Strategic Venture Association is an organization dedicated to the needs of the corporate investing and strategic partnering community. The Association's goal is to bring professionals together to educate, inform and collaborate with each other around topics core to the group's interests. Given current market conditions, the importance of working with external organizations to seek innovation, partnerships and investment opportunities has never been greater.

National Venture Capital Association
The National Venture Capital Association (NVCA), comprising more than 450 member firms, is the premier trade association representing the U.S. venture capital industry. NVCA's mission is to foster greater understanding of the importance of venture capital to the U.S. economy, and to support entrepreneurial activity and innovation. The NVCA represents the public policy interests of the venture capital community, strives to maintain high professional standards, provides reliable industry data, sponsors professional development, and facilitates interaction among its members.

ICEX Corporate Venturing Knowledge Exchange Community
This is a membership group made up solely of CV executives at large global companies. It is a private, confidential exchange where members share advice and experience on common challenges to improve their strategic investment and business model innovation activities in the large corporate setting. Limited to 12 companies, the community meets in person and virtually to discuss current issues and share lessons learned.
ICEX facilitates executive exchange for business and technology leaders at large global companies in areas of innovation, transformation, infrastructure, and enterprise architecture. http://www.icex.com References Venture capital
12448496
https://en.wikipedia.org/wiki/Charlie%20Jackson%20%28software%29
Charlie Jackson (software)
Charlie Jackson is an American computer software entrepreneur who founded Silicon Beach Software in 1984 and co-founded FutureWave Software in 1993. FutureWave created the first version of what is now Adobe Flash. He was an early investor in Wired magazine, Outpost.com, Streamload and Angelic Pictures. Jackson is currently founder/CEO of Silicon Beach Software, which develops and publishes application software for Windows 10. Business life Startups Jackson founded Silicon Beach Software in 1984. The company developed and published Macintosh software. It was best known for its graphics editors SuperPaint, Digital Darkroom and the multimedia authoring application SuperCard. Silicon Beach was acquired by Aldus Corporation in 1990. That year he was named Entrepreneur of the Year in San Diego for High Tech. In 1984, Jackson also founded the San Diego Macintosh User Group. Jackson co-founded FutureWave Software with Jonathan Gay in 1993. FutureWave developed and published FutureSplash Animator. Macromedia acquired FutureWave in 1996 and renamed the product Flash 1.0, which in turn became Adobe Flash when Macromedia was acquired by Adobe Systems. Since late 2009, Jackson has been a mentor for San Diego sessions of the Founder Institute. In 2015, using the name Silicon Beach Software again, he founded a company to develop graphics software for Windows 10. The company's first product is SaviDraw. Investments Although no longer an active seed investor, Jackson made some notable investments in the 1990s. In 1993, he and Nicholas Negroponte were the two seed investors in Wired magazine. In 1994, Jackson loaned Wired Ventures the money that allowed the company to start up HotWired, the first commercial web magazine. Jackson was the seed investor in Outpost.com, an early online reseller of computer equipment. Outpost.com gained some notoriety for its TV ads in which gerbils were shot out of a cannon and wolves attacked a high school marching band. Jackson was the first investor in Angelic Pictures, Inc. Jackson was the first investor in Streamload, an online media storage and retrieval company that was subsequently renamed Nirvanix and he was the first investor in Pacific Coast Software, publisher of WebCatalog, an e-commerce package. Current Ventures Jackson is a principal in Angelic Pictures, Inc., a movie production company. He has been an executive producer of Angelic's movies, The Month of August, Hole in One: American Pie Plays Golf, Beach Bar, Music High, La Migra, Fearless and Space Samurai: Oasis. Jackson also owns two small businesses in San Diego, CA. Epic Volleyball Club is a junior volleyball organization which trains approximately 400 athletes annually. VolleyHut.com is an online reseller of volleyball equipment. In 2000, VolleyHut challenged Amazon.com on its use of patents. Silicon Beach Software is a developer/publisher of multimedia software for Windows 10. Early life and education Jackson (born 1948) grew up in Imperial Beach, California. As a teenager, he also spent three years in Istanbul, Turkey, where he earned a B.E.P.C. degree from a French school. Jackson earned a BA degree in Near Eastern Studies from UCLA in 1972, a master's degree in linguistics from San Diego State in 1978 and a C.Phil. in linguistics from UCSD in 1980. He was an active duty Marine Corps officer from 1972 to 1976 and Reserve officer from 1976 to 1989. Sports Jackson's sporting background is varied and extensive. While in Istanbul, he won the county youth championship in pole vault. 
In high school, he competed in cross country and track and field. At UCLA, Jackson was a letter winner in soccer and lightweight rowing. His senior year he was co-captain of the lightweight rowing team. In the Marine Corps, Jackson became a competitive rifle and pistol shooter, earning the Marine Corps' highest award for rifle shooting, the Distinguished badge. In 1978, he was the High Marine at the National Championships for Service Rifle, held at Camp Perry, Ohio. In the '90s, Jackson returned to competitive shooting. In 1993 and 1994, he earned a spot on the US National Team in Rapid Fire Pistol and competed internationally. In 1996, his three-man team won the U.S. National Championship in Rapid Fire Pistol. In 1994, Jackson attended the World Masters Games in Brisbane, Australia, where over 24,000 athletes competed for World Championship titles in their respective age groups. In the 45 - 49 age group, Jackson won Gold medals in Rapid Fire Pistol and 4-man Beach Volleyball and a Silver medal in 2-man Beach Volleyball. From 1997 to 2000, Jackson served on the board of USA Volleyball, chairing the Olympic Beach Volleyball Committee. Jackson was a member of the 2000 U.S. Olympic Team in the capacity of Assistant Team Leader, Beach Volleyball. In 2000 and 2001, Jackson owned and operated Beach Volleyball America (BVA), a U.S. professional beach volleyball tour. Currently, Jackson owns and operates Epic Volleyball Club, a junior club in the San Diego area. References External links Silicon Beach Software (current) The Computer Chronicles: TV Coverage of MacWorld Boston 1988 including Silicon Beach Software's Charlie Jackson San Diego Macintosh User Group Wired.com Angelic Pictures Founder Institute American computer businesspeople Wired (magazine) people American volleyball coaches People from Imperial Beach, California UCLA Bruins men's soccer players 1948 births United States Marine Corps officers Living people Military personnel from California Association footballers not categorized by position Association football players not categorized by nationality
1344
https://en.wikipedia.org/wiki/Apple%20I
Apple I
The Apple Computer 1, originally released as the Apple Computer and known later as the Apple I, or Apple-1, is a desktop computer released by the Apple Computer Company (now Apple Inc.) in 1976. It was designed by Steve Wozniak. The idea of selling the computer came from Wozniak's friend and co-founder Steve Jobs. The Apple I was Apple's first product, and to finance its creation, Jobs sold his only motorized means of transportation, a VW Microbus, for a few hundred dollars (Wozniak later said that Jobs planned instead to use his bicycle to get around), and Wozniak sold his HP-65 calculator for $500. Wozniak demonstrated the first prototype in July 1976 at the Homebrew Computer Club in Palo Alto, California. Production was discontinued on September 30, 1977, after the June 10, 1977 introduction of its successor, the Apple II, which Byte magazine referred to as part of the "1977 Trinity" of personal computing (along with the PET 2001 from Commodore Business Machines and the TRS-80 Model I from Tandy Corporation).

History

On March 5, 1975, Steve Wozniak attended the first meeting of the Homebrew Computer Club in Gordon French's garage. He was so inspired that he immediately set to work on what would eventually become the Apple I computer. After building it for himself and showing it at the club, he and Steve Jobs gave out schematics (technical designs) for the computer to interested club members and even helped some of them build and test out copies. Then, Steve Jobs suggested that they design and sell a single etched and silkscreened circuit board—just the bare board, with no electronic parts—that people could use to build the computers. Wozniak calculated that having the board design laid out would cost $1,000 and manufacturing would cost another $20 per board; he hoped to recoup his costs if 50 people bought the boards for $40 each. To fund this small venture—their first company—Jobs sold his van and Wozniak sold his HP-65 calculator. Very soon after, Steve Jobs arranged to sell "something like 50" completely built computers to the Byte Shop (a computer store in Mountain View, California) at $500 each. To fulfill the $25,000 order, they obtained $20,000 in parts at 30 days net and delivered the finished product in 10 days.

The Apple I went on sale in July 1976 at a price of $666.66, because Wozniak "liked repeating digits" and because of a one-third markup on the $500 wholesale price. The first unit produced was used in a high school math class, and donated to Liza Loop's public-access computer center. About 200 units were produced, and all but 25 were sold within nine or ten months.

In April 1977, the price was dropped to $475. It continued to be sold through August 1977, despite the introduction of the Apple II in April 1977, which began shipping in June of that year. In October 1977, the Apple I was officially discontinued and removed from Apple's price list. As Wozniak was the only person who could answer most customer support questions about the computer, the company offered Apple I owners discounts and trade-ins for Apple IIs to persuade them to return their computers. These recovered boards were then destroyed by Apple, contributing to their rarity today.

Overview

Wozniak's design originally used a Motorola 6800 processor, which cost $175, but when MOS Technology introduced the much cheaper 6502 microprocessor ($25) he switched. The Apple I CPU ran at 1.022727 MHz, a fraction (2⁄7) of the NTSC color subcarrier frequency, which simplified the video circuitry.
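For reference, the NTSC color subcarrier runs at 3.579545 MHz, so the 2⁄7 relationship can be checked directly:

$f_{\mathrm{CPU}} = \tfrac{2}{7} \times 3.579545\ \mathrm{MHz} \approx 1.022727\ \mathrm{MHz}$

Deriving the CPU clock from the color subcarrier meant a single inexpensive crystal could time both the processor and the video generation.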
Memory used the new 4K-bit DRAM chips and totalled 4 KB, expandable to 8 KB on board or 64 KB externally. The board was designed to use the next generation of 16K-bit memory chips when they became available. An optional $75 plug-in cassette interface card allowed users to store programs on ordinary audio cassette tapes. A BASIC interpreter, originally written by Wozniak, was provided that let users easily write programs and play simple games. An onboard AC power supply was included.

The Apple I's built-in computer terminal circuitry was distinctive. All one needed was a keyboard and a television set. The Apple 1 did not come with a case; it was either used as-is, or some owners chose to build custom (mostly wooden) cases. Competing machines such as the Altair 8800 generally were programmed with front-mounted toggle switches and used indicator lights (red LEDs, most commonly) for output, and had to be extended with separate hardware to allow connection to a computer terminal or a teletypewriter machine. This made the Apple I an innovative machine for its day.

Collectors' item

As of February 2022, 62 Apple-1 computers have been confirmed to exist, and according to unverified information about 20 more are likely to exist. Of these, 41 are first-batch units, 39 are second-batch, and 2 are of unknown version. Most are now in working condition.

An Apple I reportedly sold for US$50,000 at auction in 1999. In 2008, the website Vintage Computing and Gaming reported that Apple I owner Rick Conte was looking to sell his unit and was "expecting a price in excess of $15,000 U.S." The site later reported Conte had donated the unit to the Maine Personal Computer Museum in 2009. A unit was sold in September 2009 for $17,480 on eBay. A unit belonging to early Apple Computer engineers Dick and Cliff Huston was sold on March 23, 2010, for $42,766 on eBay.

In November 2010, an Apple I sold for £133,250 ($210,000) at Christie's auction house in London. The high price was likely due to the rare documents and packaging offered in the sale in addition to the computer, including the original packaging (with the return label showing Steve Jobs' parents' address, the original Apple Computer Inc "headquarters" being their garage), a personally typed and signed letter from Jobs (answering technical questions about the computer), and the original invoice showing "Steven" as the salesman. The computer was brought to Polytechnic University of Turin, where it was repaired and used to run the BASIC programming language.

On June 15, 2012, a working Apple I was sold at auction by Sotheby's for a then-record $374,500, more than double the expected price. This unit is on display at the Nexon Computer Museum in Jeju City, South Korea.

In October 2012, a non-working Apple I from the estate of former Apple Computer employee Joe Copson was put up for auction by Christie's, but found no bidder who was willing to pay the starting price of US$80,000 (£50,000). Copson's board had previously been listed on eBay in December 2011, with a starting bid of $170,000, and failed to sell. Following the Christie's auction, the board was restored to working condition by computer historian Corey Cohen. Copson's Apple I was once again listed on eBay, where it sold for US$236,100.03 on April 23, 2015.

On November 24, 2012, a working Apple I was sold at auction by Auction Team Breker for €400,000 (US$515,000). On May 25, 2013, a functioning 1976 model was sold for a then-record €516,000 (US$668,000) in Cologne. Auction Team Breker said "an unnamed Asian client" bought the Apple I.
This particular unit has Wozniak's signature. An old business transaction letter from Jobs was also included, as well as the original owner's manual.

On June 24, 2013, an Apple I was listed by Christie's as part of a special online-only auction lot called "First Bytes: Iconic Technology From the Twentieth Century." Bidding ran through July 9, 2013. The unit sold for $390,000.

In November 2013, a working unit speculated to have been part of the original lot of 50 boards delivered to the Byte Shop was listed by Auction Team Breker for €180,000 ($242,820), but failed to sell during the auction. Immediately following the close of bidding, a private collector purchased it for €246,000 ($330,000). This board was marked "01-0046," matching the numbering placed on other units sold to the Byte Shop, and included the original operation manuals, software cassettes, and a shipping box autographed by Steve Wozniak. The board also bears Wozniak's signature.

In October 2014, a working, early Apple I was sold at auction for $905,000 to the Henry Ford Museum in Dearborn, Michigan. The sale included the keyboard, monitor, cassette decks and a manual. The auction was run by Bonhams.

On December 13, 2014, a fully functioning, early Apple I was sold at auction for $365,000 by auction house Christie's. The sale included a keyboard, custom case, original manual and a check labeled "Purchased July 1976 from Steve Jobs in his parents' garage in Los Altos".

On May 30, 2015, a woman reportedly dropped off boxes of electronics for disposal at an electronics recycling center in the Silicon Valley of Northern California. Included in the items removed from her garage after the death of her husband was an original Apple I computer, which the recycling firm sold for $200,000 to a private collector. It is the company's practice to give back 50% of the proceeds to the original owner when an item is sold, so it sought to find the mystery donor.

On September 21, 2015, an Apple I bearing the Byte Shop number 01-0059 was listed by Bonhams Auctions as part of their "History of Science and Technology" auction with a starting bid of US$300,000. The machine was described as "in near perfect condition." The owner, Tom Romkey, "...only used the Apple-1 once or twice, and ...set it on a shelf, and did not touch it again." The machine did not sell at auction. However, Glenn and Shannon Dellimore, the co-founders of GLAMGLOW, a beauty company which they sold to Estee Lauder Companies, bought it after the auction through Bonhams. In 2016, the 40th anniversary year of Apple Computer, the Dellimores' working Apple-1 went on loan and on display in "Artifact" at the V&A Museum in London, England.

On August 26, 2016, the rarest Apple-1 in existence, a prototype made and hand-built by Steve Jobs himself (according to Apple-1 expert Corey Cohen) and dubbed the "Holy Grail" of computers, was sold for $815,000 to winning bidders Glenn and Shannon Dellimore in an auction by Charitybuzz. The for-profit internet company, which raises funds for nonprofit organizations, declared that ten percent of the proceeds would go to the Leukemia and Lymphoma Society, based in New York.

On April 15, 2017, an Apple I removed from Steve Jobs's office by Apple quality control engineer Don Hutmacher was placed on display at Living Computers: Museum + Labs. This Apple I was modified by Dan Kottke and Bill Fernandez.
This previously unknown unit was purchased from Hutmacher's heirs for an undisclosed amount.

On September 25, 2018, a functioning Apple I was purchased at a Dallas auction for $375,000 by an anonymous buyer.

On May 23, 2019, an Apple I was purchased through Christie's auction house in London for £371,000. This Apple I is uniquely built into the bottom half of a briefcase, and the lot included a modified cassette interface card, a Panasonic RQ-309DS cassette tape recorder, an SWTPC PR-40 alphanumeric printer, a Sanyo VM4209 monitor and a Motorola M6800 microprocessor.

On March 12, 2020, a fully functional Apple I was purchased at a Boston auction for $458,711. The lot included the original board with a Synertek CPU, Apple Cassette Interface, display case, keyboard kit, power supply, monitor and manuals.

On November 9, 2021, a unit sold with user manuals and Apple software on two cassette tapes for $500,000 including buyer's premium (widely reported as $400,000, the price before the premium). It had originally been purchased by a college professor and later sold to his student for $650.

Serial numbers

Both Steve Jobs and Steve Wozniak have stated that Apple did not assign serial numbers to the Apple I. Several boards have been found with numbered stickers affixed to them, which appear to be inspection stickers from the PCB manufacturer/assembler. A batch of boards is known to have numbers hand-written in black permanent marker on the back; these usually appear as "01-00##".

Until January 2022, it was unknown who wrote the serial numbers on the first-batch Apple-1s. As of January 2022, 29 Apple-1s with a serial number are known. The highest known number is . Neither the Apple company founders, nor Paul Terrell (founder of the Byte Shop), nor various Byte Shop employees remembered the originator. These numbers are sometimes incorrectly referred to as "Byte Shop numbers", a theory that has since been disproved. After several years of research and collecting type specimens, Achim Baqué, the curator of the Apple-1 Registry, had two original Apple-1s subjected to forensic handwriting analysis by PSA in Los Angeles. The results were conclusive for both machines: the serial numbers are in the handwriting of Steve Jobs. The finding was announced on February 10, 2022, during a Zoom meeting on the occasion of World Computer Day, and the story and "Letter of Authenticity" were published the same day.

Museums displaying an original Apple 1 Computer

United States
American Computer & Robotics Museum in Bozeman, Montana
Computer History Museum in Mountain View, California
Computer Museum of America in Roswell, Georgia
Smithsonian Museum of American History in Washington, DC
Living Computers: Museum + Labs in Seattle, Washington
System Source Computer Museum in Hunt Valley, Maryland

Australia
Powerhouse Museum in Sydney, New South Wales

Germany
Heinz Nixdorf MuseumsForum in Paderborn (working condition)
Deutsches Museum in Munich (working condition)

United Kingdom
Science Museum, London in London, United Kingdom

South Korea
Nexon Computer Museum in Jeju Island, South Korea

Switzerland
ENTER Computer Museum in Solothurn, Switzerland

Clones and replicas

Several Apple I clones and replicas have been released in recent years. These are all created by hobbyists and marketed to the hobbyist/collector community. Availability is usually limited to small runs in response to demand.

Replica 1: Created by Vince Briel. A software-compatible clone, produced using modern components, released in 2003 at a price of around $150.
PE6502: Created by Jason Putnam.
A single-board computer kit made with all through-hole and current-production components. Runs Apple 1 Integer BASIC, a clone of AppleSoft BASIC (floating-point capable), Wozmon and Krusader, all built into ROM, with 32K of RAM and a Parallax Propeller terminal. Software compatible with the Apple 1.
A-One: Created by Frank Achatz, also using modern components.
RC6502 Apple I Replica, which uses a modern or period CPU and MC6821 PIA, and usually modern RAM and ROM. The system is modular, with multiple boards plugging into a backplane, but a single-board version (using an Arduino Nano to replace the keyboard and video hardware with a serial interface) is also available.
Obtronix Apple I reproduction: Created by Steve Gabaly, using original components or equivalents thereof. Sold through eBay.
Mimeo 1: Created by Mike Willegal. A hardware kit designed to replicate a real Apple I as accurately as possible. Buyers are expected to assemble the kits themselves.
Newton 1: Created by Michael Ng and released in 2012. Similar to the Mimeo 1, but made using the same materials and the same obsolete processing techniques commonly used in the 1970s. Over 400 bare boards, kits and assembled boards were sold. There are Newton NTI and non-NTI versions available.
Brain Board: A plug-in firmware board for the Apple II that, with the optional "Wozanium Pack" program, can emulate a functional Apple-1.
Replica by MDesk: An accurate PCB copy of the original Apple 1, researched in 2012–2014. A few PCBs without components were sold for $26 in 2014.
SmartyKit 1 computer kit: Created by Sergey Panarin, with package design by Greg Chemeris, and released in 2019. A hardware kit on breadboards designed to replicate a real Apple I with modern components (ROM, RAM, Arduino controllers for video and keyboard) and a real 6502 CPU. Made to teach anyone how to build a computer and how it works. It was presented at CES 2020 in Las Vegas and later featured in Apple Insider, WIRED and Tom's Hardware.

Emulation

Apple 1js, a web-based Apple I emulator written in JavaScript.
MESS, a multi-system emulator able to emulate the Apple I.
OpenEmulator, an accurate emulator of the Apple I, the ACI (Apple Cassette Interface) and the CFFA1 expansion card.
Pom1, an open-source Apple I emulator for Microsoft Windows, Arch Linux and Android devices.
Apple 1 Emulator, an emulator for the SAM Coupé home computer.
CocoaPom, a Java-based emulator with a Cocoa front-end for Macintosh.
Sim6502, an Apple I emulator for Macintosh.
Green Delicious Apple-1, an emulator for the Commodore 64.

See also

Computer museums
History of computer science
History of computing

References

Citations

Sources
Price, Rob (1987). So Far: The First Ten Years of a Vision. Cupertino, Calif.: Apple Computer.
Owad, Tom (2005). Apple I Replica Creation: Back to the Garage. Rockland, Mass.: Syngress Publishing.

External links

Apple I Computer specifications
Bugbook Computer Museum blog. Apple 1 display.
Apple I Owners Club
Apple I Operational Manual
German making-of article to recreate the Apple I Operational Manual
Apple I project on www.sbprojects.com
Apple 1 Computer Registry
Macintosh Prehistory: The Apple I
John Calande III blog – Building the Apple I clone, including corrections on the early history of Apple Computer
Apple 1 | Cameron's Closet – includes display of the Apple 1's character set on real hardware, compared to on most emulators

Computer-related introductions in 1976 Apple II family Apple Inc. hardware Early microcomputers 6502-based home computers
51199838
https://en.wikipedia.org/wiki/ThoughtSpot
ThoughtSpot
ThoughtSpot, Inc. is a technology company that produces business intelligence analytics search software. The company is based in Sunnyvale, California, and was founded in 2012.

History

ThoughtSpot was founded in 2012 by a team of engineers who previously worked for Google, Oracle, Microsoft, Yahoo, and other Silicon Valley companies. The CEO and co-founder, Ajeet Singh, previously co-founded the company Nutanix.

In late 2012, ThoughtSpot raised $10.7 million in Series A funding led by Lightspeed Venture Partners. In 2014, the company raised $30 million in Series B funding led by Khosla Ventures. In January 2016, the company opened an office in London. In February 2016, ThoughtSpot announced that it had increased its revenue by 810 percent over the previous year. In May 2016, ThoughtSpot raised $50 million in Series C funding led by General Catalyst Partners. In October 2016, the company expanded its Series C funding with an investment from Hewlett Packard Pathfinder. As part of the investment, ThoughtSpot entered the Pathfinder program and began selling its software on Hewlett Packard Enterprise infrastructure. In May 2018, the company raised $145 million in Series D funding from Sapphire Ventures, Lightspeed Ventures, Khosla Ventures, General Catalyst and others to expand its AI-based analytics platform. At the time, the company was valued at over $1 billion. In March 2019, ThoughtSpot relocated its headquarters from Palo Alto to Sunnyvale. In August 2019, the company raised $248 million in Series E funding from Silver Lake, Sapphire Ventures, and Geodesic Capital. By 2020, the company had $100 million in annual recurring revenue. In January 2021, the company hired several new executives for a potential initial public offering later that year. On March 5, 2021, ThoughtSpot partnered with Indian information technology company Tech Mahindra.

ThoughtSpot's clients include the companies Walmart, Apple, BT, Bed Bath & Beyond, Hightail and Fannie Mae.

Technology

ThoughtSpot allows non-technical individuals to conduct self-service data analysis through search. The company introduced ThoughtSpot Monitor, a tool that monitors information for changing patterns or trends, in 2019 as part of its ThoughtSpot 6 software.

Recognition

In 2016, ThoughtSpot was named a "Cool Vendor in Analytics" by Gartner. In 2017, the company announced that it was included in the Gartner Magic Quadrant for Business Intelligence and Analytics Platforms report. Later, Gartner included ThoughtSpot in the leaders quadrant for Analytics and BI platforms for the years 2019 and 2020. ThoughtSpot was included in Red Herring's "Top 100 North American Companies" list. The company was recognized on Glassdoor's "Best Places to Work" list for 2020.

References

American companies established in 2012 Software companies based in California Software companies established in 2012 Business intelligence Software companies of the United States
30149919
https://en.wikipedia.org/wiki/George%20Schussel
George Schussel
George Schussel (born 1941 in occupied France during World War II) is an American businessman and entrepreneur. In 1942, Schussel's father brought the family out of German-occupied territory into Spain, and subsequently into the United States. Educated at UCLA on the west coast and at Harvard on the east coast, Schussel became best known as the founder and chairman of Digital Consulting Institute (DCI). By 1998 DCI had become one of the most significant American conference and expo companies in the field of technology. Schussel's expertise on database, computing architectures, the internet and information management issues also inspired him to travel to many countries presenting lectures that gave his views on the latest computer technologies and probable directions for the future of computer technology. As of 2004, Schussel had given over 1,000 seminars for other technology professionals in countries such as France, UK, Belgium, Venezuela, Canada, Mexico, South Africa, Japan, and Australia. Accomplishments George Schussel has been the inventor and chairperson of computer industry trade shows such as Database World, Client/Server World, and Creating the Real Time Enterprise. His lectures have scored 9 on a 10-point scale and were noted for underlining and explaining technical issues, while focusing on the business benefits and uses of technology. Schussel has authored the 1985 book Data Management: Past, present and future (Critical technology report), as well as co-authored the 1994 book Rightsizing Information Systems (Professional Reference). He has also authored or co-authored over 100 articles or columns in leading computer industry journals such as Computerworld, Datamation, Client Server Today and Data Based Advisor. During his time at DCI, Schussel was credited as having consulted major clients such as Cullinet, Computer Associates, Revelation Technologies, Hewlett Packard, Sybase, AT&T/NCR, DEC, Sequent Computer Systems, Borland and IBM. In 1998, Schussel was a recipient of the IEEE Computer Society's Computer Entrepreneur award for his important contributions to the computing industry and profession as an entrepreneurial leader, advisor, and member. Other recipients of the Computer Entrepreneur award that year were Bill Gates, Paul Allen, Steve Jobs, and Steve Wozniak. Schussel was also the recipient of the Outstanding Industrial Engineer of the Year award from the Institute of Industrial Engineers. Additionally, Schussel was a fellow of the American Association for the Advancement of Science, and had CDP certification from the Data Processing Management Association. Background/education Schussel received his bachelor's degree from the University of California in physics and mathematics in 1961. Afterwards, he was accepted into Harvard University, and there received his master's degree in applied mathematics and computer science in 1962. In 1966 he received his doctorate from Harvard Business School in marketing and computer science. After graduation, he spent time lecturing and held a faculty appointment at the University of Southern California, Harvard, MIT, and the University of Alabama. Prior to founding DCI in 1983, Schussel was Vice President and CIO at the American Mutual Group of insurance companies in Wakefield, Massachusetts. There he was the senior manager for the administration of a multimillion-dollar computer budget and 200 full-time personnel handling all data processing for the American Mutual Group. 
DCI

George Schussel was the founder and chairman of Digital Consulting Institute (DCI), which he started in 1982 in his Lynnfield, Massachusetts, home. As of 1998, DCI was the largest American-owned information systems conference and trade show company, holding small to large seminars across the world intended for professional audiences. As chairman, Schussel forecast industry trends and identified new fields of opportunity for DCI trade shows. His input was crucial in the company's compounded growth rate of 30% per year through the 1990s. DCI's revenue was generated from ticket sales to participants who attended its seminars, trade shows, and other events, as well as from the contracting and selling of booth space to vendors participating in its shows. DCI also ran trade show events for other companies such as Sybase, IBM and Microsoft.

Tax dispute

In 2001, the IRS began an investigation of the tax accounting of DCI for the 1995 tax year, a period during which Schussel had been company CEO. By 2004, the IRS had decided that DCI had not been in compliance with US tax laws for its international business. DCI's position, as supported by tax counsel, was that its tax reporting on international income had been handled in accordance with US laws and in much the same manner as that of other companies such as Cisco Systems, Apple, IBM and AIG, which used foreign subsidiaries to hold assets. In 2004, George Schussel was charged in the United States District Court for the District of Massachusetts with conspiracy and tax evasion. With the parties unable to agree on these tax issues, Schussel was individually tried for tax evasion in 2007. As of 2017, all issues, both civil and criminal, resulting from this case had been settled.

Charity

In March 2000, Schussel was recognized in the MIT Sloan School of Management alumni magazine for his philanthropic contributions to MIT. In 1998 Schussel donated money to MIT Sloan to endow a Professorship of Management Science chair. Currently this chair is held by Erik Brynjolfsson, whose book "Race Against the Machine" was CIO Insight's No. 1 pick for the top 10 IT-Business Books of 2011. Schussel has also served on the Dean's Leadership Council at the MIT Sloan School. Although Schussel is not an alumnus of MIT, the classes he took there while a graduate student at Harvard gave him an early foundation in computer storage and retrieval techniques, which later proved valuable as he became an expert in database technology. In addition, Schussel's family members held six degrees from the school, and one of his daughters met her husband there.

Beginning in 2016, the Schussels created the Schussel Family Fund, which provides research funds to the Weinstock Laboratory of the Dana-Farber Cancer Institute in Boston. A primary goal of the laboratory is to understand and develop clinical treatments for T-cell lymphomas. In 2017, George and Sandra became members of DFCI's Joint Visiting Committee on Basic Science. Additional funds have gone towards supporting research at the Smilow Cancer Hospital on T-cell lymphomas.

Prison Justice for America (PJA)

In 2013 Schussel founded the web site Prison Justice for America as a non-profit public service. The site launched with over 250 articles on the subject of criminal justice as practiced in the USA. Over the course of four years, PJA connected mentors with those seeking assistance on their reintegration into free society.
The United States incarcerates a higher percentage of its citizens than any other country in the world, and most individuals leaving the criminal justice system continue to suffer discrimination as they attempt to re-enter society. PJA's belief was that the whole approach of "tough on crime" had failed American society, and the goal was to support individuals to re-enter society as useful productive citizens. Over 100 individuals received personalized help and advice and in 2017 the site was shuttered. References 1941 births Living people American businesspeople Harvard Business School alumni People from Lynnfield, Massachusetts
17844640
https://en.wikipedia.org/wiki/Marcus%20J.%20Ranum
Marcus J. Ranum
Marcus J. Ranum (born November 5, 1962, in New York City, New York, United States) is a computer and network security researcher. He is credited with a number of innovations in firewalls and intrusion detection systems, and with building the first Internet email server for the whitehouse.gov domain. He has held technical and leadership positions with a number of computer security companies, and is a faculty member of the Institute for Applied Network Security.

Education

Marcus Ranum was born in New York City and graduated from Gilman School in Baltimore, Maryland, before attending Johns Hopkins University, where he obtained a Bachelor of Arts in Psychology in 1985.

Career

Ranum helped design and implement Digital Equipment Corporation's Secure External Access Link (SEAL) (later the AltaVista firewall), regarded as the first commercial bastion host firewall, in 1990. He left DEC to work for Trusted Information Systems (TIS) as chief scientist and development manager for Internet security products. It was at TIS that Ranum became responsible for the whitehouse.gov Internet email site. Once charged with that responsibility, Ranum advocated that the whitehouse.com domain be registered as well. Despite his advice, it was not registered by the government, but was later registered by an adult entertainment provider. At TIS, he developed the TIS Internet Firewall Toolkit (fwtk) under a grant from DARPA.

After TIS, he worked for V-One as chief scientist, and was extensively involved in that company's IPO. Three months after that IPO, Ranum formed his own company, Network Flight Recorder (NFR), and served as CEO for three years before stepping into a CTO role. Ranum later left NFR to consult for TruSecure, before joining Tenable Network Security as CSO. In addition to his various full-time positions, Ranum has also held board or advisory positions at NFR Security, Protego Networks, and Fortify Software.

Public presentations

Ranum has spoken to USENIX audiences at LISA 1997, 1999 (tutorial), 2000 (keynote), 2002, and 2003 (tutorial). He spoke out against full disclosure at the Black Hat Security Briefings in 2000. More recently, Ranum has spoken at Interop in 2005 and 2007, CanSecWest in 2010, and Secure360 in 2011. He previously taught courses for the SANS Institute.

Influence

Ranum's work has been cited in at least 15 published U.S. patents, as well as numerous other computer and network security articles and books.

"Ranum's Law"

Ranum is cited as the author of an eponymous law, "You can't solve social problems with software."

Awards

TISC "clue" award, 2000.
Inducted into the ISSA Hall of Fame, 2000 or 2001.
Techno-Security Professional of the Year, 2005.

Publications

Articles

Ranum has co-authored a series of "Face Off" articles with Bruce Schneier, which have appeared approximately bi-monthly in Information Security Magazine since July 2006. He is one of a number of editors of the SANS NewsBites semiweekly email newsletter.

Books

The Myth of Homeland Security.
Host Intrusion Monitoring Using Osiris and Samhain, with Brian Wotring and Bruce Potter.
Web Security Sourcebook, with Aviel D. Rubin and Dan Geer.

Personal life

Currently, Ranum lives in Morrisdale, Pennsylvania. His hobbies include photography and firearms. He maintains an active stock photography account on DeviantArt, and he wrote an essay for Oleg Volk's pro-firearms site, www.a-human-right.com. Ranum was also interviewed by digital artist Brandon Pence for the NWFLAA, which can be read in two parts: Part 1 and Part 2.
He is an atheist, maintaining a blog on the Freethought Blogs network.

References

External links

Marcus Ranum's personal website
Ranum interview with RationalSecurity (2007-06-25)
Security Solutions profile of Ranum (2006-12-01)
Ranum interview with IEEE Security and Privacy magazine (2006-09-01)
Ranum interview with SecurityFocus (2005-06-21)
Ranum's DeviantArt website (stock)
Ranum's DeviantArt website (portfolio)
DojoSec Lecture — March 2009 — Ranum discussing the failure of the notion of cyber-warfare

1962 births Living people People associated with computer security Usenet people Digital Equipment Corporation people American atheists American computer specialists
4028150
https://en.wikipedia.org/wiki/Open%20for%20Business
Open for Business
Open for Business (OFB) was an online news blog with a technology focus. It featured articles on topics including computers, technology, politics, current events, theology and philosophy. The site also contained a fiction section with short stories and poetry. History OFB was founded on October 5, 2001 as the "open-source migration guide". It was started by Timothy R. Butler after a mailing list discussion, and featured articles and white papers discussing migration to Linux. Originally, OFB featured very little original content, instead mimicking Slashdot and similar sites that included little more than a few small comments on the articles posted. Steven Hatfield helped add postings to the site. The site then started to add free and open source software news. About a month after the site was founded, the first original editorial content appeared and OFB continued to publish approximately one original work per month after that. In late April 2002, Butler announced a relaunch of the site that included a reduction in links to other sites and a further increase in original content. The relaunch also brought forth the first version of a blue sphere logo and the new tagline "the Independent Journal of Open Source Migration". On July 4, 2002, Open for Business, LinuxandMain.com, KernelTrap and Device Forge's LinuxDevices and DesktopLinux.com formed LinuxDailyNews (LDN), an aggregation site that was intended to help increase the publicity of independent open source news sites. LDN featured a center column that showed story highlights and two side columns that displayed all stories from the member sites in blocks. The site was launched as part of DesktopLinux.com's "wIndependence Day" promotion and had an early spike in popularity following a mention on Slashdot. In subsequent months, the site's traffic decreased. It was taken down in 2004 after a hacker managed to deface the site; although plans existed to restore the site, they were never followed through with and Device Forge assumed the rights to LDN's domain name. In February 2003, the site finalized its transition to an original content provider, as opposed to a site of links, by moving non-original content to a separate "News Watch" section. New contributing editor Ed Hurst began a series on his switch to FreeBSD in September 2003, beginning a long running series of FreeBSD articles that Hurst continued to add until October 2006. Butler also began OFB's coverage of Mac OS X computers. OFB's second associate editor Eduardo Sánchez returned in mid-2004 as a contributing editor. Hurst was promoted to associate editor simultaneously. The site continued in a similar fashion, with its mix of coverage on Linux, BSD and Mac OS X through early 2006. During this period it changed its motto to "the Independent Journal of Open Standards and Open Source". Due to other obligations, the site's editors ceased writing content for the site in early 2006, though it remained open during this period. On its fifth anniversary, Butler announced the relaunch of the site on October 5, 2006. The new OFB adopted the site's current general interest focus, de-emphasizing its past emphasis on Open Source and technology. The site changed its purple and blue, PHP-Nuke-based design that had been used with only minor modifications since the site's original launch, to a simpler, content-oriented design using a custom backend. The old site was archived as OFB Classic to preserve access to past articles. The last article was posted on January 25, 2013. 
Contributors Regular contributors included: Timothy R. Butler, editor-in-chief (2001–present). Ed Hurst, contributing editor (2003–2004), associate editor (2004–present). Eduardo Sánchez, associate editor (2002–2003), contributing editor (2004–present). Jason P. Franklin, contributing editor (2007–present). Steven Hatfield, associate editor (2001–2002), contributing editor (2003). John-Thomas Richards, contributing editor (2002) References External links Technology websites
69320114
https://en.wikipedia.org/wiki/DNS%20%28retail%20company%29
DNS (retail company)
DNS Retail (Russian: OOO «ДНС Ритейл», also known in English as CSN Retail LLC) is the owner of a Russian retail chain specialising in the sale of computers, electronics, and household goods, and also a manufacturer of computer hardware including laptops, tablets and smartphones. In 2019, it became the 6th-largest retail company in Russia, and in 2021, DNS was the 22nd-largest private company in Russia. As of 2021, there are more than 2,000 branches across Russia, and in May 2021, the first branches were opened in Kazakhstan. The company's headquarters are located in Vladivostok. The general director of the company is Aleksei Popov. Popov is also the general director and co-owner of the parent company DNS Group. History DNS (short for Digital Network System) was founded in 1998 in Vladivostok, after the founders' previous business specialising in corporate computer services ended in bankruptcy. The company initially sold computers assembled by hand inside a retail store. In 2005, the company began to expand its retail presence nationally, opening branches in Nakhodka and Khabarovsk. In 2006, a further branch was opened in Irkutsk. By 2009, there were DNS stores in cities across Russia, including Chita, Novosibirsk, Ekaterinburg and Rostov on Don. At the same time, further branches were opened in regions where the company already had a presence. By July 2013, the chain consisted of more than 700 stores located in over 200 cities across Russia. In addition, the company owns 10 distribution sites, a computer and laptop manufacturing plant in Artyom, and computer assembly plants in Moscow Oblast and Novosibirsk. In April 2014, the company acquired the Computer World retail chain (consisting of 21 stores in Saint Petersburg and a further 11 in the Northwestern Federal District). In March 2019, the company further acquired the St. Petersburg-based retail chain . In May 2021, DNS expanded its retail network beyond Russia for the first time, opening branches in Kazakhstan. Business activities In the first half of 2011, the company assembled 193,000 personal computers, making it the largest PC assembler in Russia. The company also produces laptops, desktop computers, monitors, smartphones, computer power supplies and computer accessories under the brands DNS, DEXP, and ZET Gaming. Company structure The company was founded by 10 acquaintances and residents of Vladivostok, who had experience working in the computer industry. As of 2015, 9 of them continue to work for the company; the remaining founder died and passed his share of the company on to his family. Prior to 2018, the company consisted of more than 50 legal entities, each of which was registered in a different region but connected to the company by their common owners. In March 2018, however, the company was restructured and all legal entities were merged into the limited liability company DNS Retail (also known as CSN Retail in English). References External links Official website (Russia) Official website (Kazakhstan) Companies based in Vladivostok Retail companies of Russia Russian brands
37782415
https://en.wikipedia.org/wiki/Systematic%20Paris-Region
Systematic Paris-Region
Systematic Paris-Region is an Île-de-France business cluster created in 2005, devoted to complex systems and ICT.

History

During its first two years of operation, the cluster launched 207 research projects representing 975 million euros of investment, including 380 million euros of support from the state, the Agence Nationale de la Recherche (ANR), Oséo and local authorities. As of September 2011, Systematic Paris-Region had enabled the development of 318 collaborative R&D projects, representing a total R&D effort of 1.4 billion euros and about €500 million of support from the state (via the Fonds unique interministériel, or Single Interministerial Fund), national agencies (ANR, Oséo), EUREKA, the ERDF and territorial collectivities.

The project of a French competitiveness cluster on free software was entrusted to Systematic Paris-Region following the Comité interministériel d'aménagement et de développement du territoire (Interministerial Committee for territorial development and competitiveness, CIADT) of July 5, 2007.

Presentation

600 organizations are involved in the R&D network of the cluster: 366 SMEs-SMBs, 116 companies, 24 intermediate-sized enterprises (ETI), 79 research centers and educational institutions, 19 territorial authorities and 15 investors. Beyond this collaborative R&D ecosystem, the cluster brings together 1,060 small and medium enterprises.

Since its inception in 2005, Systematic has focused its activity on the digital revolution in six markets, including:
Automotive & Transportation,
Telecom,
Security & Defence, now called Digital Trust & Safety,
Intelligent Energy Management.

The technological heart of the cluster brings together the information and communication technologies and the design tools and development systems necessary to design, develop, operate and manage the infrastructure, systems and equipment specific to each of these application markets. Software is the core technology, with, since 2007, a proactive strategy in the field of free software in order to bring together stakeholders in the Île-de-France communities and to promote the emergence of a thriving free software industry.

From 2009 to 2010, Systematic deployed its technologies and solutions to two new market areas whose development is increasingly reliant on the expertise and know-how of Systematic and its members:

ICT and Sustainable City, in partnership with the Advancity and Cap Digital clusters; the priorities in this area are:
- e-services for the city;
- design tools and simulation for the building and the city;
- management systems and supervision of the building for the city and the environment;
- transport systems and mobility.

ICT and Health, in partnership with the Medicen cluster; the priorities in this area are:
- modeling and simulation for the life sciences;
- medical imaging (computers, sensors, software for processing and analysis);
- telemedicine and medical supervision.

This deployment is driven by the existing theme groups, with coordination and oversight arrangements between the partner clusters. 39 R&D projects forming part of the new themes have already been launched by Systematic Paris-Region. Each thematic group has updated its strategic roadmap to include the following new elements:

Issues identified in 2011 in each thematic group by prospective studies conducted in preparation of the strategic plan.
Development areas and issues specific to the new spheres of cluster activity: the ICT and sustainable city area was analyzed in 2009, and the ICT and health field was to follow in 2010 with an update of this document.

In France, the first four themes of the cluster Systematic Paris-Region comprised no fewer than 320,000 employees, including 250,000 in services and 70,000 in industry. By itself, the software segment in complex systems represents a global market of 300 billion euros.

Free software

A free software theme membership directory is available on the cluster's website.

References

External links

Official website

Economy of France Landscape architecture Paris-Saclay
2096623
https://en.wikipedia.org/wiki/William%20Yeager
William Yeager
William "Bill" Yeager (born June 16, 1940, San Francisco) is an American engineer. He is best known for being the inventor of a packet-switched, "Ships in the Night," multiple-protocol router in 1981, during his 20-year tenure at Stanford's Knowledge Systems Laboratory as well as the Stanford University Computer Science department. The code routed Parc Universal Packet (PUP), XNS, IP and CHAOSNet. The router used Bill's Network Operating System (NOS). The NOS also supported the EtherTIPS that were used throughout the Stanford LAN for terminal access to both the LAN and the Internet. This code was licensed by Cisco Systems in 1987 and comprised the core of the first Cisco IOS. This provided the groundwork for a new, global communications approach. He is also known for his role in the creation of the IMAP mail protocol. In 1984 he conceived of a client/server protocol, designed its functionality, applied for and received the grant money for its implementation. In 1985 Mark Crispin was hired to work with Bill on what became the IMAP protocol. Along with Mark, who implemented the protocols details and wrote the first client, MMD, Bill wrote the first Unix IMAP server. Bill later implemented MacMM which was the first MacIntosh IMAP client. Frank Gilmurray assisted with the initial part of this implementation. At Stanford in 1979 Bill wrote the ttyftp serial line file transfer program, which was developed into the MacIntosh version of the Kermit protocol at Columbia University. He was initially hired in August 1975 as a member of Dr. Elliott Levanthal's Instrumentation Research Laboratory. Here, Bill was responsible for a small computer laboratory for biomedical applications of mass spectrometry. This laboratory in conjunction with several chemists, and the Department of inherited rare diseases in the medical school made significant inroads in identifying inherited rare diseases from the gas chromatograph, mass spectrometer data generated from blood and urine samples of sick children. His significant accomplishment was to complete a prototype program initiated by Dr. R. Geoff Dromey called CLEANUP. This program "extracted representative spectra from GC/MS data," and was later used by the EPA to detect water pollutants. From 1970 to 1975 he worked at NASA Ames Research Center where he wrote, as a part of the Pioneer 10/11 mission control operating system, both the telemetry monitoring and real time display of the images of Jupiter. After his stint at Stanford he worked for 10 years at Sun Microsystems. At Sun as the CTO of Project JXTA he filed 40 US Patents, and along with Rita Yu Chen, designed and implemented the JXTA security solutions. In 2002 he along with Jeff Altman, then a contributor to the JXTA Open Source community, initiated the effort to establish the Internet Research Task Force (IRTF) Peer-to-Peer working group. The working group was created in 2003. Bill was the working group chair until 2005. As Chief Scientist at Peerouette, Inc., he filed 2 US and 2 European Union Patents. He has so far been granted 20 US Patents 4 of which are on the SIMS High Performance Email Servers which he invented and with a small team of engineers implemented, and 16 on P2P and distributed computing. In the Summer of 1999 under the guidance of Greg Papadopoulos, Sun's CTO, and reporting directly to Carl Cargill, Sun's director of corporate standards, led Sun's WAP Forum team with the major objective, "... 
to work with the WAP Forum on the convergence of the WAP protocol suite with IETF, W3C and Java standards." During this same period he invented the iPlanet Wireless Services, a Java proxy between IMAP mail servers and either WAP servers or Web browsers. It proxied the following markup languages: the Handheld Device Markup Language (HDML), the Wireless Markup Language (WML), and HTML. This was a one-person project supported by SFR/Cegetel in France. The primary goal was to enable email service on WAP phones.

He received his bachelor's degree in mathematics from the University of California, Berkeley in 1964; his master's degree in mathematics from San Jose State University in San Jose, California, in 1966; and completed his doctoral course work at the University of Washington in Seattle, Washington, in 1970. He then decided to abandon mathematics for a career in software engineering and research, to the skepticism of his thesis advisor, because he thought the future was in computing.

Patents

Personal Server and network - Patent Application for Peerouette P2P Technology
Global community naming authority - Patent Application for Peerouette P2P Technology
[US Patent 6,167,402 - High Performance Message Store]
[US Patent 6,735,770 - Method and apparatus for high performance access to data in a message store]
[US Patent 6,418,542 - Critical signal thread]
[US Patent 6,457,064 - Method and apparatus for detecting input directed to a thread in a multi-threaded process]
[US Patent 7,065,579 - System using peer discovery and peer membership protocols for accessing peer-to-peer platform resources on a network]
[US Patent 7,127,613 - Secured peer-to-peer network data exchange]
[US Patent 7,136,927 - Peer-to-peer resource resolution]
[US Patent 7,167,920 - Peer-to-peer communication pipes]
[US Patent 7,213,047 - Peer trust evaluation using mobile agents in peer-to-peer networks]
[US Patent 7,203,753 - Propagating and updating trust relationships in distributed peer-to-peer networks]
[US Patent 7,222,187 - Distributed trust mechanism for decentralized networks]
[US Patent 7,254,608 - Managing Distribution of Content Using Mobile Agents in Peer-to-Peer Networks]
[US Patent 7,275,102 - Trust Mechanisms for a Peer-to-Peer Network Computing Platform]
[US Patent 7,290,280 - Method and apparatus to facilitate virtual transport layer security on a virtual network]
[US Patent 7,308,496 - Representing Trust in Distributed Peer-to-Peer Networks]
[US Patent 7,340,500 - Providing peer groups in a peer-to-peer environment]
[US Patent 8,108,455 - Mobile Agents in Peer-to-Peer networks]
[US Patent 8,160,077 - Peer-to-Peer communication pipes]
[US Patent 8,176,189 - Peer-to-Peer network computing platform]
[US Patent 8,359,397 - Reliable peer-to-peer connections]

References

External links

Sun Microsystems biography
Valley of the Nerds: Who Really Invented the Multiprotocol Router, and Why Should We Care?
"A start-up's true tale", Mercury News, 2001-01-12
Interview at Networkworld.com

1940 births Living people 21st-century American engineers American inventors Sun Microsystems people UC Berkeley College of Letters and Science alumni San Jose State University alumni University of Washington alumni Stanford University faculty
22822534
https://en.wikipedia.org/wiki/MINIMOP
MINIMOP
MINIMOP was an operating system which ran on the International Computers Limited (ICL) 1900 series of computers. MINIMOP provided an on-line, time-sharing environment (Multiple Online Programming, or MOP, in ICL terminology), and typically ran alongside George 2 running batch jobs. MINIMOP was named to reflect its role as an alternative to the MOP facilities of George 3, which required a more powerful machine. MINIMOP would run on all 1900 processors apart from the low-end 1901 and 1902, and required only 16K words of memory and two 4- or 8-million-character magnetic disks.

Each user was provided with a fixed-size file to hold their data, which was subdivided into a number of variable-sized subfiles. The command language could be extended with simple macros.

Implementation

MINIMOP was implemented as a multithreaded (sub-programmed, in ICL terminology) user-level program running on the standard executive (low-level operating system) of the ICL 1900. The "program under control" facilities of the executive were used to run user programs under MINIMOP. All user I/O operations were trapped by MINIMOP and emulated rather than accessing real peripherals. As memory was at a premium, user programs would be swapped out of memory whenever they needed to wait (for input or output) or when they reached the end of their time slice.

MAXIMOP

Queen Mary College, London, now Queen Mary, University of London, later developed MAXIMOP, an improved system largely compatible with MINIMOP. The ICL Universities Sales Region started distributing MAXIMOP, and it was used at over 100 sites.

References

ICL operating systems
1631878
https://en.wikipedia.org/wiki/Write%20%28Unix%29
Write (Unix)
In Unix and Unix-like operating systems, write is a utility used to send messages to another user by writing a message directly to that user's TTY. History The write command was included in the First Edition of the Research Unix operating system. A similar command appeared in the Compatible Time-Sharing System. Sample usage The syntax for the write command is: $ write user [tty] message The write session is terminated by sending EOF, which can be done by pressing Ctrl+D. The tty argument is only necessary when a user is logged into more than one terminal. A conversation initiated between two users on the same machine: $ write root pts/7 test will show up to the user on that console as: Message from root@wiki on pts/8 at 11:19 ... test See also List of Unix commands talk (Unix) wall (Unix) References Unix user management and support-related utilities Standard Unix programs Unix SUS2008 utilities
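Whether a terminal accepts such messages is controlled by the companion mesg utility:
$ mesg n
refuses messages from write, and
$ mesg y
allows them again.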
14644656
https://en.wikipedia.org/wiki/Hans%20Hacker
Hans Hacker
Hans Hacker (March 4, 1910 – December 27, 1994) was a ceramic decal designer and painter. Hacker was born on March 4, 1910 in the city of Waldenburg (now Wałbrzych), Germany. Waldenburg was in the Silesia Province and became part of Poland after World War II. Hacker was one of seven children. His father August was a commercial ceramics artist and his mother, Eliese Moore Hacker, was a kindergarten teacher. Hacker was showing and selling paintings by the time he was 11 or 12 years old. He graduated from the Breslau Art School in Breslau, Germany. Family On October 22, 1934, Hacker married Johanna "Hanna" Krause, a daughter of Heinrich and Selma Spatscheck Krause. A native of Hacker's village, she had been born on October 1, 1913. They had three children. Hanna Hacker died January 12, 2008 in Burlington, Vermont at the age of 94. Professional career As an adult, Hacker worked as head designer for E. Wunderlich and Company (a large producer of decals for the worldwide ceramics industry) in Germany. As a representative for Wunderlich, he first visited East Liverpool, Ohio (a center of pottery production, with 24 potteries in the area at the time) in 1932. He traveled back and forth between Germany and Ohio over the next half dozen years, tending to the growing business relationship between Wunderlich and Commercial Decal which made ceramics decals in the USA. As the Nazis came to power in Germany in the late 1930s, Hacker and his family sought to leave the country and decided to settle permanently in East Liverpool in 1939. Hacker was hired by Commercial Decal as an art consultant for its East Liverpool decal plant. He was later named art and technical director of Commercial Decal. He retired from Commercial Decal in 1977, although he continued working as a consultant for many years afterward. Especially via his work in perfecting the slide-off decal method, Hacker became a celebrated decal and ceramic designer. He was the most prolific designer of dinnerware patterns in history. Fine artist In addition to his work in the ceramics industry, Hacker was a painter who exhibited equal skill with oils and watercolors. He painted hundreds of scenes of East Liverpool and the surrounding area—including many paintings of Little Beaver Creek and the historic small village of Fredericktown six miles north of East Liverpool. He also did numerous paintings of northern West Virginia, just across the Ohio River from East Liverpool. His works are a comprehensive historic and visual record of two centuries of life in the area. Accolades and awards Hacker's work is exhibited at the Smithsonian Institution and the Butler Institute of American Art in Youngstown, Ohio. Additionally, a number of East Liverpool institutions and businesses such as the Museum of Ceramics, the Carnegie Library and the Dawson Funeral Home have Hacker's work on display. On August 16, 1985, for his contribution to East Liverpool history and culture, Hacker was honored by the East Liverpool Historical Society with a "Hans Hacker Appreciation Dinner." At that time, Jack Lanam, former curator of the East Liverpool Historical Society said, "Take his paintings and put them together like a jigsaw puzzle, lay them out, and you have a city [East Liverpool]." Fellow renowned ceramic artist Don Schreckengost pointed out at the dinner that Hacker was unusual in his ability to pursue fine art as a prolific avocation while working in commercial art as his livelihood. 
"Oftentimes a person doing commercial art for industry has a hard time finding time to do art for a hobby," said Schreckengost. "East Liverpool has been fortunate to have someone of his stature to live here as a citizen, and to contribute to the community." "He lived at a time when he was able to make sketches and make records with his camera," said Schreckengost. But, he added, Hacker "does things that a camera can't always do, with the feeling for light and the feeling for the time. He sees a lot more than meets the average person's eye - that's what a true artist does." East Liverpool Mayor Norm Bucher designated August 16, 1985 as "Hans Hacker Day" in the city, and the first-ever lifetime membership in the Historical Society was granted that day to Hacker. On June 22, 2007 Hacker was posthumously inducted into the Lou Holtz Upper Ohio Valley Hall of Fame in East Liverpool, Ohio. Frank C. Dawson, Holtz President, said, "There is no way to measure the impact of his work. He was so gifted and talented that his work will live on for years to come." The next day, Hacker was also honored with an art exhibition, jointly sponsored by the East Liverpool Historical Society and the Kent State-East Liverpool Branch. At that time, East Liverpool Historical Society President Tim Brookes said that East Liverpool was fortunate to "end up as the adoptive home of someone so creative." "Due to his interest and talent, he was able to preserve and document the appearance of structures that have ceased to exist, and specific moments in time from the city's past have been preserved in his work, hopefully forever," said Brookes. Death Hacker died on December 27, 1994. References Lou Holtz Upper Ohio Valley Hall of Fame June 5th, 2008 article on the Hans Hacker Archive Project October 3rd, 2009 article on the Hans Hacker Archive Project Family tree 1910 births 1994 deaths 20th-century American painters American male painters People from East Liverpool, Ohio
29498188
https://en.wikipedia.org/wiki/NuGet
NuGet
NuGet (pronounced "New Get") is a package manager designed to enable developers to share reusable code. It is a software-as-a-service solution whose client app is free and open-source. The Outercurve Foundation initially created it under the name NuPack. Since its introduction in 2010, NuGet has evolved into a larger ecosystem of tools and services. Overview NuGet is a package manager for developers. It enables developers to share and consume useful code. A NuGet package is a single ZIP file that bears a .nupack or .nupkg filename extension and contains .NET assemblies and their needed files. NuGet was initially distributed as a Visual Studio extension. Starting with Visual Studio 2012, both Visual Studio and Visual Studio for Mac can natively consume NuGet packages. NuGet's client, nuget.exe, is a free and open-source command-line app that can both create and consume packages. MSBuild and the .NET Core SDK (dotnet.exe) can use it when it is present. NuGet is also integrated with SharpDevelop. It supports packages for multiple programming languages, including .NET Framework packages, .NET packages, and native packages written in C++, with package creation aided by CoApp. See also Binary repository manager Chocolatey NuGet ProGet Software repository Web Platform Installer WinOps References External links Chocolatey .NET software 2010 software Free package management systems Microsoft free software Software using the Apache license
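As an illustration of typical client usage (the package name Newtonsoft.Json here is only an example), a dependency can be added to a project and a package produced using the .NET SDK's command-line tools:
$ dotnet add package Newtonsoft.Json
$ dotnet pack
The first command records the dependency in the project file and restores the corresponding .nupkg from the package source; the second creates a NuGet package from the current project.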
2051596
https://en.wikipedia.org/wiki/Tenea
Tenea
Tenea () is a municipal unit within the municipality of Corinth, Corinthia, Peloponnese, Greece. The municipal unit has an area of . Until 2011, it was a municipality whose seat was in Chiliomodi. The modern city is named after ancient Tenea, established approximately SE of Corinth and NE of Mycenae shortly after the Trojan War. According to Pausanias, Tenea's founders were Trojan prisoners of war whom Agamemnon had allowed to build their own town. The name Tenea is styled upon Tenedos, the founders' home town, whose mythological eponym was the hero Tenes. According to Virgil's Aeneid, both Tenea and Rome produced citizens of Trojan ancestry in the years following the Trojan War. Under the leadership of Archias in 734 or 733 BC, Teneans and Corinthians established the joint colony of Syracuse in Sicily, the homeland of Archimedes. History Strabo mentions Tenea: {| class="toccolours" style="margin-left: 1em; margin-right: 2em; font-size: 95%; background:#c6dbf7; color:black; width:40em; max-width: 40%;" cellspacing="5" | style="text-align: left;" | Tenea, also, is in Korinthia, and in it is a temple of the Apollon Teneatos; and it is said that most of the colonists who accompanied Archias, the leader of the colonists to Syracuse, set out from there, and that afterwards Tenea prospered more than the other settlements, and finally even had a government of its own, and, revolting from the Corinthians, joined the Romans, and endured after the destruction of Corinth... And it is said that Polybos raised Oedipus here. And it seems, also, that there is a kinship between the peoples of Tenedos and Tenea, through Tennes the son of Kyknos, as Aristotle says; and the similarity in the worship of Apollon among the two peoples affords strong indications of such kinship. |- | style="text-align: left;" | Strabo, (8.6.22) |} as does Pausanias: Tenea was the most important place in ancient Corinthia after the city of Corinth and its port towns; it was situated 60 stadia south of Corinth, according to Pausanias, hence the southern gate of Corinth was called the Teneatic. Stephanus of Byzantium describes Tenea as lying between Corinth and Mycenae. Pausanias says that the Teneatae claimed descent from the inhabitants of Tenedos, who were brought over from Troy as prisoners, and settled by Agamemnon in this part of Corinthia; and that it was in consequence of their Trojan origin that they worshipped Apollo above all the other gods. Strabo also mentions here the temple of Apollo Teneates, and says that Tenea and Tenedos had a common origin in Tennes, the son of Cycnus. It was at Tenea that Oedipus was said to have passed his childhood. It was also from this place that Archias took the greater number of the colonists with whom he founded Syracuse. After the destruction of Corinth by Lucius Mummius Achaicus, Tenea had the good fortune to continue undisturbed, because it is said to have assisted the Romans against Corinth. We cannot, however, suppose that an insignificant place like Tenea could have acted in opposition to Corinth and the Achaean League; and it is more probable that the Teneatae were spared by Mummius in consequence of their pretended Trojan descent and consequent affinity with the Romans themselves. Archaeological findings Ruins of ancient Tenea are one kilometre south of Chiliomodi. Some archaeological finds are housed in the Archaeological Museum of Ancient Corinth. The most famous find, the Kouros of Tenea (c. 550 BC), found near Athikia in 1846, is in the Munich Glyptothek.
It is a great example of 6th-century BC Greek sculpture and of the so-called Aeginetan or archaic smile. In 1984, archaeologists discovered a sarcophagus of the Greek early archaic period containing the skeletal remains of what had been a high-society woman, along with offerings. In 2013, archaeologists surveyed a site in the area and, encouraged by pottery and other small finds, began excavating. They said that “The concentration of ceramics and architectural remains… were the reasons that led us to the excavation of the site.” In 2017, they found a trove of riches while digging up what had been a dual-chambered burial ground at the Tenea site. In 2018, a team led by Elena Korka found “proof of the existence of the ancient city” of Tenea near the village of Chiliomodi. An image of the excavation site depicts stone walls, clay and marble floors, about 200 rare coins, and the remains of what were probably houses from the settlement. During the excavation, seven burials with vases and jewelry were revealed, dating to the Roman and Hellenistic periods. In addition, skeletons of a woman and an infant were found. According to Korka, the fact that Tenea minted its own coins indicates its complete independence. In 2019, a complex of massive baths, roughly , was discovered in Tenea, dating to Roman times, between the end of the 3rd and the mid-1st century BC. Subdivisions The municipal unit Tenea is subdivided into the following communities (constituent villages in brackets): Agionori Agios Vasileios Chiliomodi Klenia Koutalas (Koutalas, Mapsos, Spathovouni) Stefani Historical population See also List of traditional Greek place names References External links Kouros of Tenea Apollo of Tenea Municipality of Tenea Strabo, Book 8 Gallery and description [in Greek] of monuments in and around Tenea. Korka, Eleni; Lefantzis, Michalis; Corso, Antonio. Archaeological Discoveries from Tenea. Actual Problems of Theory and History of Art: Collection of articles. Vol. 9. Ed: A. V. Zakharova, S. V. Maltseva, E. Iu. Staniukovich-Denisova. Lomonosov Moscow State University / St. Petersburg: NP-Print, 2019, pp. 172–179. ISSN 2312-2129 Populated places in ancient Corinthia Former populated places in Greece Cities in ancient Peloponnese Locations in Greek mythology Populated places in Corinthia Populated places established in the 2nd millennium BC Trojan colonies Greek city-states
62289443
https://en.wikipedia.org/wiki/Roni%20Rosenfeld
Roni Rosenfeld
Roni Rosenfeld is an Israeli-American computer scientist and computational epidemiologist, currently serving as the head of the Machine Learning Department at Carnegie Mellon University. He is an international expert in machine learning, infectious disease forecasting, statistical language modeling and artificial intelligence. Education Rosenfeld received his B.Sc. in Mathematics and Physics from Tel Aviv University in 1985. He received his Ph.D. in Computer Science from Carnegie Mellon University in 1994. While a graduate student, he developed and open-sourced a statistical language modeling toolkit to allow anyone to create statistical language models from their own corpora and experiment with and extend the toolkit's capabilities. The toolkit has been used by more than 100 NLP laboratories in more than 20 countries. Rosenfeld's Ph.D. thesis, A Maximum Entropy Approach to Adaptive Statistical Language Modeling, was advised by Raj Reddy and Xuedong Huang and won the 2001 Computer, Speech and Language award for “Most Influential Paper in the Last 5 Years.” Career Shortly after receiving his Ph.D., Rosenfeld joined the faculty of the Carnegie Mellon School of Computer Science as an assistant professor. He was promoted to the rank of associate professor in 1999 and received tenure in 2001. In 2005 he was promoted to Professor of Language Technologies, Machine Learning, Computer Science and Computational Biology in the School of Computer Science at Carnegie Mellon University. Rosenfeld also holds adjunct appointments at the University of Pittsburgh School of Medicine, Department of Computational and Systems Biology. From 2002 to 2003, Rosenfeld was a visiting professor at the University of Hong Kong. Rosenfeld is the Director of Carnegie Mellon's Machine Learning for Social Good (ML4SG) program. He has held educational leadership positions in a variety of programs, including the M.S. in Computational Finance (1997-1999), Graduate Computational and Statistical Learning (2001-2003), the M.S. in Machine Learning (2017) and the Undergraduate Minor in Machine Learning. Rosenfeld was appointed Head of Carnegie Mellon's Machine Learning Department in 2018. Research Rosenfeld's research interests include Epidemiological Forecasting, Information and Communication Technologies for Development (ICT4D), and Machine Learning for Social Good. Epidemiological forecasting Rosenfeld is a world expert in epidemiological forecasting. He founded and directs the Delphi research group, which has won most of the epidemiological forecasting challenges organized by the U.S. CDC and other U.S. government agencies. In December 2016, the CDC named his group the “Most Accurate Forecaster” for 2015–2016, and in October 2017, the Delphi group's two systems took the top two spots in the 2016-2017 flu forecasting challenge. The CDC recognized Rosenfeld's Delphi group at Carnegie Mellon University as having contributed the most accurate national-, regional-, and state-level influenza-like illness forecasts and national-level hospitalization forecasts to the site. In 2019, the CDC recognized forecasts provided by the Delphi group at Carnegie Mellon as having been the most accurate for five seasons in a row, and named the Delphi group an Influenza Forecasting Center of Excellence, a five-year designation that includes $3 million in research funding.
Rosenfeld describes his forecasting research goal as “to make epidemiological forecasting as universally accepted and useful as weather forecasting is today.” His recent work in the area has focused on selecting high value epidemiological forecasting targets (e.g. Influenza and Dengue); creating baseline forecasting methods for them; establishing metrics for measuring and tracking forecasting accuracy; estimating the limits of forecastability for each target; and identifying new sources of data that could be helpful to the forecasting goal. Honors and awards 2017 Joel and Ruth Spira Teaching Award 2017 CDC Influenza Forecasting Challenge "Most Accurate Forecaster" 1992 Allen Newell Medal for Research Excellence References External links Roni Rosenfeld at Carnegie Mellon Flu Season is Here Why Didn't We See It Coming via Wired Flu Season Comes Around Computer Scientists Prepare & Make Predictions via WESA AI Helping Turn the Tide Against Flu in 2 Important Ways via NBCNews When Will Flu Season Be Worst? These Researchers Think They Might Know via NewsWeek How Bad Will the Flu Season Get? Forecasters Are Competing to Figure it Out via The Scientist CMU Receives 1M from Chicago Software Company to Fund Machine Learning Projects via Triblive How funny messages from ‘Polly’ can fight Ebola via Futurity CMU Researchers Fine Tune Flu Season Forecasting via CampusTechnology CMU is the world's best at predicting influenza activity via Pittsburgh Post-Gazette How a Drunken Chipmunk Voice Helps Send a Public Service Message via NPR Phone game helps illiterate Pakistanis find employment via The Tartan Carnegie Mellon flu forecasting named CDC center of excellence via Pittsburgh Post-Gazette CDC FluSight: Flu Forecasting Tel Aviv University alumni Carnegie Mellon University faculty Machine learning researchers American people of Israeli descent 1959 births Living people
50655667
https://en.wikipedia.org/wiki/Michael%20Hauben
Michael Hauben
Michael Frederick Hauben (May 1, 1973 – June 27, 2001) was an American Internet theorist and author. He pioneered the study of the social impact of the Internet. Based on his interactive online research, in 1993 he coined the term and developed the concept of Netizen to describe an Internet user who actively contributes towards the development of the Net and acts as a citizen of the Net and of the world. Along with Ronda Hauben, he co-authored the 1997 book Netizens: On the History and Impact of Usenet and the Internet. Hauben's work is widely referenced in many scholarly articles and publications about the social impact of the Internet. Early life Hauben was born on May 1, 1973 in Boston, Massachusetts, the son of Jay and Ronda Hauben. He was an active participant in the Bulletin Board System (BBS) communities in the Detroit/Ann Arbor area in Michigan, where his family had moved. Work and scholarship Hauben participated in the founding meetings of the Amateur Computerist in 1987. From 1991 to 1997 he attended Columbia University in NYC, earning a BA in Computer Science (Columbia College 1995) and an MA in Communication (Teachers College 1997). During his studies at CU, Hauben did much of his original research and writing. Throughout that time, he was an active employee of the CU Academic Information Systems (AcIS), serving for one year as a Postmaster and Consultant for Electronic Mail. Hauben was co-author of the book Netizens: On the History and Impact of Usenet and the Internet, a draft of which was put online in 1994. Print editions in English (IEEE Computer Society Press) and Japanese (Chuokoron-Sha, Inc) were published in 1997. Based on his interactive online research, Hauben coined the term 'Netizen' and introduced it into popular use. In the Preface to Netizens, Hauben wrote: My initial research concerned the origins and development of the global discussion forum Usenet....I wanted to explore the larger Net and what it was and its significance. This is when my research uncovered the remaining details that helped me to recognize the emergence of Netizens. There are people online who actively contribute towards the development of the Net. These people understand the value of collective work and the communal aspects of public communications. These are the people who discuss and debate topics in a constructive manner, who e-mail answers to people and provide help to new-comers, who maintain FAQ files and other public information repositories, who maintain mailing lists, and so on. These are people who discuss the nature and role of this new communications medium. These are the people who as citizens of the Net I realized were Netizens." Hauben observed that, "The word citizen suggests a geographic or national definition of social membership. The word Netizen reflects the new non-geographically based social membership. So I contracted the phrase net.citizen to Netizen." His 1993 article "Common Sense: The Impact the Net Has on People's Lives" was an analysis of responses Hauben received to questions he posted on newsgroups and mailing lists. The article begins, Welcome to the 21st Century. You are a Netizen (a Net Citizen), and you exist as a citizen of the world thanks to the global connectivity that the Net makes possible. You consider everyone as your compatriot. You physically live in one country but you are in contact with much of the world via the global computer network. Virtually, you live next door to every other single Netizen in the world.
Geographical separation is replaced by existence in the same virtual space. This article became Chapter One of Netizens. While still an undergraduate, Hauben began to develop a theoretical framework for his vision of the social impact of the net and the netizens. In his article "The Expanding Commonwealth of Learning: Printing and the Net," he applied his study of the Printing Revolution, especially the work of Elizabeth Eisenstein, to an analysis of the trajectory in which the Internet and netizens are taking society. He wrote, "Comparing the emergence of the printing press to the emergence of the global computer network will reveal some of the fascinating parallels which demonstrate how the Net is continuing the important social revolution that the printing press had begun." Quoting Hauben's work, one author wrote, "On the extraordinary explosion of knowledge with the Gutenberg printing press, see Eisenstein, The Printing Revolution in Early Modern Europe. On the intellectual foundation of the Internet actually being based on the Gutenberg printing press, see Hauben, The Expanding Commonwealth of Learning: Printing and the Net." Using a similar method of analysis, Hauben found insights about the Internet in the understandings of the 19th-century Scottish philosopher James Mill about the importance of "liberty of the press". He argued that the net was making it possible for citizens as netizens to be the watchdogs over governments, which Mill argued was the function of liberty of the press. In a footnote to his article "The Computer as a Democratizer," referring to Usenet, Hauben wrote that "the discussions are very active and provide a source of information that makes it possible to meet James Mill's criteria for both more oversight over government and a more informed population. In a sense, what was once impossible, is now possible." Hauben was invited to Japan in 1995 by Shumpei Kumon, sociology professor and director of GLOCOM (the Japanese Center for Global Communication). In Japan, Hauben was welcomed in Tokyo at GLOCOM and then in Oita by members of COARA, the computer network community in Beppu. At the Hypernetwork '95 Beppu Bay Conference, Hauben spoke about "The Netizens and Community Networks." He was interviewed by the local Nisshi-Nippon Press. Then in Kyoto, he attended two network conferences and was an honored guest at a reception with the Mayor. Hauben was also a speaker at the GLOCOM Intelprise-Enterprise Collaboration Program (IECP). Throughout his stay in Japan, Hauben met Japanese computer and network enthusiasts to discuss the growing importance of this new medium and his vision of netizenship. Hauben also appeared in documentaries about the Internet on TV Tokyo and in write-ups in newspapers in Tokyo and Oita. Prof. Kumon included a chapter by Hauben in his 1996 book The Age of the Netizen. In 1997, the Japanese translation of Netizens: On the History and Impact of Usenet and the Internet was published in a run of 5000 copies. When he returned home from Japan, Hauben broadened his vision of the impact the Internet and the netizens would have on society. He saw in the work of the American anthropologist Margaret Mead that even in the 1960s a global culture was emerging. Using the writings of Mead, he countered the critics who claimed that the Internet's mass culture was snuffing out cultural differences. He saw instead that "more and more people of various cultures are understanding the power of the new communication technologies.
More and more people are reacting against the mass media and corporate dominance and calling for a chance to express their views and contribute their culture into the global culture." Hauben presented his analysis of Internet culture at the 1997 IFIP WG 9.2/9.5 conference in Corfu, Greece. Hauben also explored the question of whether participatory democracy and netizenship are related. He studied the Port Huron Statement created in 1962 by the Students for a Democratic Society (SDS) and other sources to see what lessons he could learn about the 1960s that would help to understand the importance of the Internet and the emergence of the netizens. He opened his analysis with the observation that "the 1960s was a time of people around the world struggling for more of a say in the decisions of their society. . . People rose up to protest the ways of society which were out of their control. . . ." Hauben's conclusion was that "the development of the Internet and emergence of the netizens is an investment in a strong force towards making direct democracy a reality. The new technologies present the chance to overcome the obstacles preventing the implementation of direct democracy. Online communication forums also make possible the discussion necessary to identify today's fundamental questions." Hauben was an avid music fan. He was a DJ of ambient techno music on WBAR, the Barnard College student radio station. With Min-Yen Kan he developed one of the original web sites for band listings, the Ever Expanding Web Music Listing! In 1996, an article in The Daily Herald (Chicago, IL) described the Ever Expanding Web Music Listing as "probably the World Wide Web's most comprehensive one-stop resource for all things musical." In the late 1990s, Hauben did online reviews of live music performances in New York City. He was concerned that the youth music scene in NYC not slip into drugs and commercial dominance. He analyzed trends in youth music culture and sent out pointers to upcoming events. He saw peer-to-peer music reviews as an alternative to commercial advertising. Influence of Hauben's work In the second half of the 1990s, the Internet rapidly spread around the world. Online and offline, the term netizen was becoming widely used. Scholars began to refer to Hauben's research. For example, the Polish scholar and diplomat Leszek Jesien, quoting Hauben, urged European political leaders to look at netizenship as a possible model for a new European citizenship. Boldur Barbat, a Romanian scientist, reviewed Netizens, concluding that it is a catalyst for the continued development of information technology and an optimistic future. Citing Hauben's work, Cameroonian sociologist Charly G Mbock saw netizenship as a necessary component of any fight against corruption and as a sign of hope for "a more equitable sharing of world resources through efficient interactions." Turkish educator Dr. E. Özlem Yiğit and Palestinian scholar Khaled Islaih also referred to Hauben as a source of their understandings of the importance of netizenship for their respective communities. Hauben's work on netizens and the Internet is known in China and has influenced how some academics and government officials analyze the impact of the Internet on society. In his study of new media and social media in the Philippines, Aj Garchitorena cited Hauben's work, especially Hauben's "Theory of the Netizen and the Democratisation of Media," as part of his theoretical foundation.
Garchitorena also built on Hauben's insight that the net "brings the power of the reporter to the Netizen." With its spread, two general uses of the term netizen developed. Hauben explained, "The first is a broad usage to refer to anyone who uses the Net, for whatever purpose.... The second usage is closer to my understanding,... people who care about Usenet and the bigger Net and work towards building the cooperative and collective nature which benefits the larger world. These are people who work towards developing the Net. … Both uses have spread from the online community, appearing in newspapers, magazines, television, books and other off-line media. As more and more people join the online community and contribute towards the nurturing of the Net and towards the development of a great shared social wealth, the ideas and values of Netizenship spread. But with the increasing commercialization and privatization of the Net, Netizenship is being challenged." He called on scholars, "to look back at the pioneering vision and actions that have helped make the Net possible and examine what lessons they provide." He argued that this is what he and the Netizens book tried to do. One contributor to the 2004 celebration of the 250th Anniversary of Columbia University in New York City, referring to Hauben's contribution, wrote, "While the prevalence and universality of the Internet today may lead some to take it for granted, Michael Hauben did not. A pioneer in the study of the Internet's impact on society, Hauben helped identify the collaborative nature of the Internet and its effects on the global community." Legacy After sustaining injuries resulting from an accident in December 1998 when he was hit by a taxi, Hauben died in New York City on June 27, 2001, a victim of suicide. At the time of his death, he had lost a job, accumulated a large credit card debt, and was about to lose his apartment. The significance of Hauben's contribution to the appreciation of the emergence of the netizen is the deeper sense it provides that the Internet is accompanied by an expansion of human empowerment. In 2012, cultural anthropologist Shirley Fedorak summed up Hauben's contribution. She wrote: "Studies have found that greater participation in the political landscape is influenced by access to information.... Indeed, Michael Hauben identified a new form of citizenship emerging from widespread use of the Internet. Hauben coined the term netizens, and he considered them crucial for building a more democratic human society. These individuals are empowered through the Internet and use it to solve socio-political problems and to explore ways of improving the world." Bibliography Netizens: On the History and Impact of Usenet and the Internet published May 1997 by IEEE Computer Society Press. "Culture and Communication," chapter in The Ethical Global Information Society: Culture and Democracy Revisited, Jacques Berleur and Diane Whitehouse, Editors, IFIP, pp. 197–202 published 1997 by Chapman & Hall. "Netizens," in CMC Magazine, February 1997, http://www.december.com/cmc/mag/1997/feb/hauben.html "Birth of Netizens," chapter in The Age of Netizens, Shumpei Kumon, published 1996 by NTT Press, "Netizens" in The Thinker Vol 2, No. 5 February 2, 1996, p. 1, Stanford University. "OnLine Public Discussion and the Future of Democracy," in Proceedings Telecommunities 95: Equity on the Internet, Victoria, B.C. Co-author, "Interview with Henry Spencer: On Usenet News and C News," chapter in Internet Secrets, edited by John R.
Levine and Carol Baroudi, published 1995 by IDG Books. "Exploring New York City's Online Community," in CMC Magazine, May 1995. http://www.ibiblio.org/cmc/mag/1995/may/hauben.html "Participatory Democracy From the 1960s and SDS into the Future Online", 1995 reprinted in Amateur Computerist Vol. 11 No. 1, http://www.ais.org/~jrh/acn/ACn11-1.pdf "A New Democratic Medium: The Global Computer Communications Network," in HKCUS Quarterly, no. 14 July 1994, p. 26. Special Issue on Hong Kong Media Facing 1997. References External links Netizens: On the History and Impact of Usenet and the Internet Table of Contents (online edition) Michael Hauben Collected Works The Netizens Cyberstop (Hauben's original home page) Ever Expanding Web Music Listing! (1991-2001) C250 Celebrates Your Columbians: Michael Hauben Internet Pioneer A Memory of Michael Hauben, the Inventor of NETIZEN Memorial Page J.C.R. Licklider And The Universal Network Netizen Participation in Internet Governance 1973 births 2001 deaths Columbia College (New York) alumni Teachers College, Columbia University alumni Internet theorists Suicides in New York City
14092016
https://en.wikipedia.org/wiki/1957%20Philadelphia%20Phillies%20season
1957 Philadelphia Phillies season
Offseason November 19, 1956: Del Ennis was traded by the Phillies to the St. Louis Cardinals for Rip Repulski and Bobby Morgan. Regular season The Phillies integrated during the 1957 season. John Kennedy, the team's first black player, made his debut with the Phillies on April 22, 1957, at Roosevelt Stadium against the Brooklyn Dodgers. Season standings Record vs. opponents Notable transactions April 5, 1957: Tim Harkness, Ron Negray, Elmer Valo, Mel Geho (minors), and $75,000 were traded by the Phillies to the Brooklyn Dodgers for Chico Fernández. The Phillies completed the deal by sending Ben Flowers to the Dodgers on April 8. June 17, 1957: Frank Baumholtz was released by the Phillies. June 26, 1957: Warren Hacker was selected off waivers by the Phillies from the Cincinnati Redlegs. Game log |- style="background:#fbb" | 1 || April 16 || Dodgers || 6–7 (12) || Clem Labine (1–0) || Robin Roberts (0–1) || None || 37,667 || 0–1 |- style="background:#fbb" | 2 || April 18 || @ Giants || 2–6 || Rubén Gómez (1–0) || Curt Simmons (0–1) || None || 8,585 || 0–2 |- style="background:#bfb" | 3 || April 20 || @ Giants || 6–5 || Harvey Haddix (1–0) || Al Worthington (0–1) || Bob Miller (1) || 8,875 || 1–2 |- style="background:#fbb" | 4 || April 21 (1) || @ Giants || 1–2 || Johnny Antonelli (1–1) || Robin Roberts (0–2) || None || see 2nd game || 1–3 |- style="background:#bfb" | 5 || April 21 (2) || @ Giants || 8–5 || Jack Sanford (1–0) || Curt Barclay (0–1) || Don Cardwell (1) || 14,230 || 2–3 |- style="background:#fbb" | 6 || April 22 || @ Dodgers || 1–5 || Roger Craig (1–0) || Jim Hearn (0–1) || Clem Labine (2) || 11,629 || 2–4 |- style="background:#bfb" | 7 || April 24 || Pirates || 8–5 || Curt Simmons (1–1) || Bob Friend (1–2) || Bob Miller (2) || 15,849 || 3–4 |- style="background:#bfb" | 8 || April 26 || Giants || 5–0 || Don Cardwell (1–0) || Johnny Antonelli (1–2) || None || 14,118 || 4–4 |- style="background:#fbb" | 9 || April 27 || Giants || 2–10 || Rubén Gómez (3–0) || Robin Roberts (0–3) || None || 7,577 || 4–5 |- style="background:#bfb" | 10 || April 28 (1) || Giants || 11–2 || Jack Sanford (2–0) || Al Worthington (0–2) || Bob Miller (3) || see 2nd game || 5–5 |- style="background:#fbb" | 11 || April 28 (2) || Giants || 7–8 || Curt Barclay (1–2) || Jack Meyer (0–1) || Marv Grissom (1) || 19,482 || 5–6 |- style="background:#fbb" | 12 || April 30 || Redlegs || 3–6 || Brooks Lawrence (2–1) || Harvey Haddix (1–1) || None || 14,851 || 5–7 |- |- style="background:#fbb" | 13 || May 1 || Redlegs || 6–8 (16) || Warren Hacker (1–1) || Turk Farrell (0–1) || Hersh Freeman (1) || 8,606 || 5–8 |- style="background:#bfb" | 14 || May 2 || Cubs || 4–2 || Robin Roberts (1–3) || Dick Drott (0–3) || None || 4,591 || 6–8 |- style="background:#bfb" | 15 || May 3 || Cubs || 9–6 || Turk Farrell (1–1) || Jackie Collum (1–1) || Jim Hearn (1) || 6,160 || 7–8 |- style="background:#bfb" | 16 || May 4 || Cubs || 5–2 || Jack Sanford (3–0) || Jim Brosnan (0–1) || None || 3,283 || 8–8 |- style="background:#fbb" | 17 || May 5 (1) || Cardinals || 4–8 || Lloyd Merritt (1–0) || Bob Miller (0–1) || None || see 2nd game || 8–9 |- style="background:#fbb" | 18 || May 5 (2) || Cardinals || 0–2 || Sam Jones (2–0) || Harvey Haddix (1–2) || None || 27,213 || 8–10 |- style="background:#bfb" | 19 || May 7 || Braves || 8–4 || Robin Roberts (2–3) || Warren Spahn (4–1) || None || 20,421 || 9–10 |- style="background:#bfb" | 20 || May 8 || Braves || 2–1 || Don Cardwell (2–0) || Gene Conley (0–1) || None || 17,739 || 10–10 |- 
style="background:#bfb" | 21 || May 10 || @ Pirates || 3–1 || Jack Sanford (4–0) || Ron Kline (0–4) || None || 10,027 || 11–10 |- style="background:#bfb" | 22 || May 11 || @ Pirates || 7–2 || Harvey Haddix (2–2) || Bob Friend (2–3) || None || 4,994 || 12–10 |- style="background:#bfb" | 23 || May 12 (1) || @ Pirates || 6–2 || Curt Simmons (2–1) || Luis Arroyo (0–4) || Bob Miller (4) || see 2nd game || 13–10 |- style="background:#fbb" | 24 || May 12 (2) || @ Pirates || 1–6 || Vern Law (2–1) || Robin Roberts (2–4) || None || 10,457 || 13–11 |- style="background:#bfb" | 25 || May 14 || @ Redlegs || 10–8 || Turk Farrell (2–1) || Raúl Sánchez (2–1) || Robin Roberts (1) || 9,829 || 14–11 |- style="background:#fbb" | 26 || May 15 || @ Redlegs || 2–7 || Brooks Lawrence (4–1) || Jack Sanford (4–1) || None || 12,442 || 14–12 |- style="background:#fbb" | 27 || May 16 || @ Cardinals || 0–5 || Lindy McDaniel (2–1) || Harvey Haddix (2–3) || None || 5,377 || 14–13 |- style="background:#bfb" | 28 || May 17 || @ Cardinals || 5–3 || Robin Roberts (3–4) || Sam Jones (2–2) || None || 9,367 || 15–13 |- style="background:#bfb" | 29 || May 18 || @ Cardinals || 7–5 || Curt Simmons (3–1) || Lloyd Merritt (1–1) || None || 5,232 || 16–13 |- style="background:#bbb" | – || May 19 (1) || @ Cubs || colspan=6 | Postponed (rain); Makeup: September 17 |- style="background:#bbb" | – || May 19 (2) || @ Cubs || colspan=6 | Postponed (rain); Makeup: September 18 |- style="background:#fffdd0" | 30 || May 21 || @ Braves || 1–1 (5) || None || None || None || 15,936 || 16–13–1 |- style="background:#fbb" | 31 || May 22 || @ Braves || 3–4 (13) || Juan Pizarro (2–2) || Robin Roberts (3–5) || None || 21,775 || 16–14–1 |- style="background:#bfb" | 32 || May 24 || Pirates || 7–3 || Jack Sanford (5–1) || Bob Friend (3–4) || Turk Farrell (1) || 17,340 || 17–14–1 |- style="background:#bfb" | 33 || May 25 || Pirates || 8–6 || Harvey Haddix (3–3) || Ron Kline (0–6) || Turk Farrell (2) || 6,445 || 18–14–1 |- style="background:#fbb" | 34 || May 26 (1) || Pirates || 5–13 || Roy Face (1–3) || Curt Simmons (3–2) || None || see 2nd game || 18–15–1 |- style="background:#bfb" | 35 || May 26 (2) || Pirates || 6–3 || Don Cardwell (3–0) || Luis Arroyo (1–5) || None || 13,557 || 19–15–1 |- style="background:#fbb" | 36 || May 27 || Dodgers || 1–5 || Don Drysdale (4–1) || Robin Roberts (3–6) || None || 20,673 || 19–16–1 |- style="background:#bfb" | 37 || May 28 || @ Giants || 16–6 || Bob Miller (1–1) || Johnny Antonelli (3–6) || None || 4,977 || 20–16–1 |- style="background:#bfb" | 38 || May 29 || @ Giants || 7–5 (10) || Robin Roberts (4–6) || Marv Grissom (0–2) || None || 2,216 || 21–16–1 |- style="background:#bfb" | 39 || May 30 (1) || @ Giants || 2–1 (10) || Curt Simmons (4–2) || Rubén Gómez (6–3) || None || see 2nd game || 22–16–1 |- style="background:#fbb" | 40 || May 30 (2) || @ Giants || 1–8 || Curt Barclay (3–3) || Don Cardwell (3–1) || None || 19,887 || 22–17–1 |- style="background:#bfb" | 41 || May 31 || Dodgers || 2–1 || Robin Roberts (5–6) || Don Drysdale (4–2) || None || 24,381 || 23–17–1 |- |- style="background:#bfb" | 42 || June 1 || Dodgers || 3–0 || Jack Sanford (6–1) || Roger Craig (1–2) || None || 30,621 || 24–17–1 |- style="background:#bfb" | 43 || June 2 || Dodgers || 5–3 || Turk Farrell (3–1) || Don Newcombe (4–5) || None || 20,259 || 25–17–1 |- style="background:#fbb" | 44 || June 3 || Dodgers || 0–4 || Johnny Podres (5–2) || Don Cardwell (3–2) || None || 18,218 || 25–18–1 |- style="background:#bfb" | 45 || June 4 || Redlegs || 3–1 
|| Harvey Haddix (4–3) || Brooks Lawrence (4–3) || None || 20,759 || 26–18–1 |- style="background:#fbb" | 46 || June 5 || Redlegs || 2–4 (11) || Johnny Klippstein (2–4) || Turk Farrell (3–2) || None || 15,771 || 26–19–1 |- style="background:#bfb" | 47 || June 6 || Redlegs || 6–2 || Robin Roberts (6–6) || Don Gross (4–2) || None || 27,307 || 27–19–1 |- style="background:#bfb" | 48 || June 7 || Cubs || 1–0 || Jack Sanford (7–1) || Dave Hillman (0–2) || None || 12,845 || 28–19–1 |- style="background:#bbb" | – || June 8 || Cubs || colspan=6 | Postponed (rain); Makeup: July 11 as a traditional double-header |- style="background:#fbb" | 49 || June 9 (1) || Cubs || 3–7 || Dick Drott (4–6) || Robin Roberts (6–7) || Turk Lown (3) || see 2nd game || 28–20–1 |- style="background:#fffdd0" | 50 || June 9 (2) || Cubs || 4–4 || None || None || None || 17,375 || 28–20–2 |- style="background:#fbb" | 51 || June 11 || Cardinals || 2–5 || Murry Dickson (3–2) || Curt Simmons (4–3) || None || 23,888 || 28–21–2 |- style="background:#fbb" | 52 || June 12 || Cardinals || 0–4 || Larry Jackson (8–2) || Robin Roberts (6–8) || None || 22,749 || 28–22–2 |- style="background:#bfb" | 53 || June 13 || Cardinals || 8–1 || Jack Sanford (8–1) || Vinegar Bend Mizell (1–4) || None || 22,509 || 29–22–2 |- style="background:#fbb" | 54 || June 14 || Braves || 2–10 || Warren Spahn (7–3) || Don Cardwell (3–3) || None || 29,465 || 29–23–2 |- style="background:#fbb" | 55 || June 15 || Braves || 2–7 || Bob Buhl (6–2) || Harvey Haddix (4–4) || None || 12,178 || 29–24–2 |- style="background:#fbb" | 56 || June 16 (1) || Braves || 2–3 || Juan Pizarro (3–5) || Bob Miller (1–2) || None || see 2nd game || 29–25–2 |- style="background:#bfb" | 57 || June 16 (2) || Braves || 1–0 || Curt Simmons (5–3) || Lew Burdette (5–3) || None || 30,520 || 30–25–2 |- style="background:#bfb" | 58 || June 18 || @ Cubs || 7–6 || Jim Hearn (1–1) || Dave Hillman (0–4) || None || 6,092 || 31–25–2 |- style="background:#fbb" | 59 || June 19 (1) || @ Cubs || 0–9 || Dick Drott (6–6) || Don Cardwell (3–4) || None || see 2nd game || 31–26–2 |- style="background:#fbb" | 60 || June 19 (2) || @ Cubs || 3–4 || Moe Drabowsky (4–5) || Bob Miller (1–3) || Turk Lown (5) || 10,939 || 31–27–2 |- style="background:#bfb" | 61 || June 20 || @ Cubs || 7–2 || Harvey Haddix (5–4) || Don Kaiser (2–5) || None || 5,210 || 32–27–2 |- style="background:#bfb" | 62 || June 21 || @ Braves || 6–1 || Curt Simmons (6–3) || Lew Burdette (5–4) || None || 33,533 || 33–27–2 |- style="background:#bfb" | 63 || June 22 || @ Braves || 4–2 || Jack Sanford (9–1) || Bob Trowbridge (2–1) || None || 25,498 || 34–27–2 |- style="background:#fbb" | 64 || June 23 (1) || @ Braves || 6–7 || Bob Buhl (8–2) || Robin Roberts (6–9) || None || see 2nd game || 34–28–2 |- style="background:#fbb" | 65 || June 23 (2) || @ Braves || 3–7 || Taylor Phillips (3–2) || Seth Morehead (0–1) || Bob Trowbridge (1) || 36,037 || 34–29–2 |- style="background:#bfb" | 66 || June 24 || @ Braves || 10–4 || Harvey Haddix (6–4) || Warren Spahn (7–5) || Jim Hearn (2) || 15,600 || 35–29–2 |- style="background:#bfb" | 67 || June 26 || @ Cardinals || 11–3 || Curt Simmons (7–3) || Lindy McDaniel (7–4) || None || 14,192 || 36–29–2 |- style="background:#fbb" | 68 || June 27 || @ Cardinals || 4–6 || Von McDaniel (3–0) || Jack Sanford (9–2) || Hoyt Wilhelm (7) || 25,133 || 36–30–2 |- style="background:#fbb" | 69 || June 28 || @ Redlegs || 1–7 || Hal Jeffcoat (7–5) || Don Cardwell (3–5) || None || 16,848 || 36–31–2 |- style="background:#fbb" | 70 || 
June 29 || @ Redlegs || 3–8 || Tom Acker (9–3) || Robin Roberts (6–10) || None || 10,290 || 36–32–2 |- style="background:#fbb" | 71 || June 30 (1) || @ Redlegs || 1–6 || Brooks Lawrence (9–4) || Curt Simmons (7–4) || None || see 2nd game || 36–33–2 |- style="background:#fbb" | 72 || June 30 (2) || @ Redlegs || 2–6 || Johnny Klippstein (3–7) || Harvey Haddix (6–5) || Brooks Lawrence (1) || 32,584 || 36–34–2 |- |- style="background:#bfb" | 73 || July 1 || @ Pirates || 5–4 || Jim Hearn (2–1) || Bob Smith (0–2) || Seth Morehead (1) || 14,680 || 37–34–2 |- style="background:#bfb" | 74 || July 4 (1) || Giants || 2–1 || Harvey Haddix (7–5) || Rubén Gómez (10–6) || None || see 2nd game || 38–34–2 |- style="background:#bfb" | 75 || July 4 (2) || Giants || 6–2 || Curt Simmons (8–4) || Stu Miller (3–4) || Turk Farrell (3) ||| 30,442 || 39–34–2 |- style="background:#fbb" | 76 || July 5 || @ Dodgers || 5–6 || Roger Craig (3–5) || Robin Roberts (6–11) || Johnny Podres (1) || 13,324 || 39–35–2 |- style="background:#bfb" | 77 || July 6 || @ Dodgers || 9–4 || Jack Sanford (10–2) || Don Drysdale (6–6) || None || 8,939 || 40–35–2 |- style="background:#bfb" | 78 || July 7 (1) || @ Dodgers || 2–1 || Warren Hacker (4–2) || Sal Maglie (3–2) || Turk Farrell (4) || see 2nd game || 41–35–2 |- style="background:#bfb" | 79 || July 7 (2) || @ Dodgers || 5–3 || Jim Hearn (3–1) || Clem Labine (3–5) || Turk Farrell (5) || 16,805 || 42–35–2 |- style="background:#bbcaff;" | – || July 9 ||colspan="7" |1957 Major League Baseball All-Star Game at Sportsman's Park in St. Louis |- style="background:#bfb" | 80 || July 11 (1) || Cubs || 1–0 (11) || Harvey Haddix (8–5) || Bob Rush (1–8) || None || see 2nd game || 43–35–2 |- style="background:#bfb" | 81 || July 11 (2) || Cubs || 3–1 || Jack Sanford (11–2) || Don Elston (2–1) || None || 25,897 || 44–35–2 |- style="background:#fbb" | 82 || July 12 || Cubs || 2–5 || Jim Brosnan (1–3) || Robin Roberts (6–12) || None || 11,526 || 44–36–2 |- style="background:#bfb" | 83 || July 13 || Cubs || 5–2 || Warren Hacker (5–2) || Dick Drott (8–8) || Turk Farrell (6) || 4,541 || 45–36–2 |- style="background:#bfb" | 84 || July 14 (1) || Cardinals || 6–2 || Curt Simmons (9–4) || Von McDaniel (4–1) || None || see 2nd game || 46–36–2 |- style="background:#bfb" | 85 || July 14 (2) || Cardinals || 11–4 || Jim Hearn (4–1) || Larry Jackson (10–5) || None || 26,451 || 47–36–2 |- style="background:#bfb" | 86 || July 15 || Cardinals || 6–2 || Jack Sanford (12–2) || Vinegar Bend Mizell (3–7) || None || 33,906 || 48–36–2 |- style="background:#fbb" | 87 || July 16 || Braves || 2–6 || Warren Spahn (10–7) || Harvey Haddix (8–6) || None || 24,846 || 48–37–2 |- style="background:#fbb" | 88 || July 17 || Braves || 3–10 || Lew Burdette (8–6) || Jack Meyer (0–2) || Don McMahon (3) || 24,596 || 48–38–2 |- style="background:#fbb" | 89 || July 18 || Braves || 2–4 || Bob Buhl (10–6) || Curt Simmons (9–5) || Don McMahon (4) || 24,385 || 48–39–2 |- style="background:#fbb" | 90 || July 19 || Redlegs || 2–7 || Joe Nuxhall (4–5) || Jack Sanford (12–3) || None || 27,147 || 48–40–2 |- style="background:#bfb" | 91 || July 20 || Redlegs || 7–5 || Turk Farrell (4–2) || Don Gross (4–6) || None || 11,574 || 49–40–2 |- style="background:#fbb" | 92 || July 21 (1) || Redlegs || 2–4 || Brooks Lawrence (11–5) || Robin Roberts (6–13) || Hersh Freeman (7) || see 2nd game || 49–41–2 |- style="background:#fbb" | 93 || July 21 (2) || Redlegs || 4–6 || Art Fowler (1–0) || Harvey Haddix (8–7) || Johnny Klippstein (1) || 26,787 || 49–42–2 |- 
style="background:#fbb" | 94 || July 23 || @ Braves || 0–1 || Bob Buhl (11–6) || Curt Simmons (9–6) || None || 34,243 || 49–43–2 |- style="background:#bfb" | 95 || July 24 || @ Braves || 3–1 || Jack Sanford (13–3) || Gene Conley (3–5) || None || 32,412 || 50–43–2 |- style="background:#bfb" | 96 || July 25 || @ Braves || 5–3 || Robin Roberts (7–13) || Warren Spahn (10–8) || Turk Farrell (7) || 28,545 || 51–43–2 |- style="background:#bfb" | 97 || July 26 || @ Cubs || 3–1 (10) || Warren Hacker (6–2) || Bob Rush (1–11) || Turk Farrell (8) || 8,957 || 52–43–2 |- style="background:#fbb" | 98 || July 27 || @ Cubs || 1–6 || Jim Brosnan (2–4) || Harvey Haddix (8–8) || None || 11,323 || 52–44–2 |- style="background:#bfb" | 99 || July 28 (1) || @ Cubs || 3–2 || Curt Simmons (10–6) || Tom Poholsky (1–6) || Turk Farrell (9) || see 2nd game || 53–44–2 |- style="background:#bfb" | 100 || July 28 (2) || @ Cubs || 7–1 || Jack Sanford (14–3) || Dick Drott (9–9) || None || 20,512 || 54–44–2 |- style="background:#bfb" | 101 || July 29 || @ Cubs || 6–0 || Robin Roberts (8–13) || Don Elston (3–4) || None || 3,637 || 55–44–2 |- style="background:#bfb" | 102 || July 30 || @ Redlegs || 8–5 || Bob Miller (2–3) || Hersh Freeman (5–2) || None || 15,813 || 56–44–2 |- style="background:#fbb" | 103 || July 31 || @ Redlegs || 5–6 (11) || Art Fowler (3–0) || Bob Miller (2–4) || None || 12,721 || 56–45–2 |- |- style="background:#fbb" | 104 || August 1 || @ Redlegs || 3–4 || Johnny Klippstein (5–10) || Curt Simmons (10–7) || Tom Acker (4) || 11,720 || 56–46–2 |- style="background:#fbb" | 105 || August 2 || @ Cardinals || 4–5 (10) || Willard Schmidt (10–1) || Bob Miller (2–5) || None || 21,898 || 56–47–2 |- style="background:#fbb" | 106 || August 3 || @ Cardinals || 1–3 || Von McDaniel (6–2) || Robin Roberts (8–14) || Billy Muffett (1) || 18,955 || 56–48–2 |- style="background:#bfb" | 107 || August 4 (1) || @ Cardinals || 5–4 (12) || Harvey Haddix (9–8) || Larry Jackson (12–6) || None || see 2nd game || 57–48–2 |- style="background:#fbb" | 108 || August 4 (2) || @ Cardinals || 1–4 || Lindy McDaniel (10–6) || Warren Hacker (6–3) || None || 29,098 || 57–49–2 |- style="background:#fbb" | 109 || August 6 || @ Pirates || 3–5 || Ron Kline (3–15) || Jack Sanford (14–4) || None || 11,136 || 57–50–2 |- style="background:#bfb" | 110 || August 8 || @ Pirates || 6–3 || Curt Simmons (11–7) || Vern Law (7–7) || Bob Miller (5) || 5,238 || 58–50–2 |- style="background:#fbb" | 111 || August 9 || @ Giants || 2–6 || Ray Crone (5–6) || Robin Roberts (8–15) || None || 6,247 || 58–51–2 |- style="background:#bbb" | – || August 10 || @ Giants || colspan=6 | Postponed (rain); Makeup: August 11 as a traditional double-header |- style="background:#fbb" | 112 || August 11 (1) || @ Giants || 0–5 || Curt Barclay (7–7) || Harvey Haddix (9–9) || None || see 2nd game || 58–52–2 |- style="background:#bfb" | 113 || August 11 (2) || @ Giants || 2–0 || Jack Sanford (15–4) || Johnny Antonelli (11–12) || None || 13,880 || 59–52–2 |- style="background:#fbb" | 114 || August 13 || Pirates || 0–6 || Bob Friend (8–15) || Warren Hacker (6–4) || None || 14,129 || 59–53–2 |- style="background:#fbb" | 115 || August 14 || Pirates || 3–10 || Vern Law (9–7) || Curt Simmons (11–8) || None || 8,641 || 59–54–2 |- style="background:#fbb" | 116 || August 16 || Giants || 1–2 || Stu Miller (5–8) || Robin Roberts (8–16) || None || 16,733 || 59–55–2 |- style="background:#bfb" | 117 || August 17 || Giants || 3–1 || Jack Sanford (16–4) || Rubén Gómez (13–10) || None || 7,929 || 60–55–2 
|- style="background:#fbb" | 118 || August 18 (1) || Giants || 4–5 || Mike McCormick (2–0) || Warren Hacker (6–5) || Marv Grissom (8) || see 2nd game || 60–56–2 |- style="background:#fbb" | 119 || August 18 (2) || Giants || 0–1 || Al Worthington (8–8) || Harvey Haddix (9–10) || None || 14,591 || 60–57–2 |- style="background:#bfb" | 120 || August 20 (1) || Cubs || 2–1 (10) || Turk Farrell (5–2) || Moe Drabowsky (9–11) || None || see 2nd game || 61–57–2 |- style="background:#fbb" | 121 || August 20 (2) || Cubs || 2–5 || Don Elston (4–5) || Robin Roberts (8–17) || None || 15,129 || 61–58–2 |- style="background:#fbb" | 122 || August 22 || Cardinals || 5–6 || Larry Jackson (13–6) || Jack Sanford (16–5) || Billy Muffett (3) || 21,947 || 61–59–2 |- style="background:#bfb" | 123 || August 23 || Cardinals || 3–2 || Harvey Haddix (10–10) || Vinegar Bend Mizell (5–10) || None || 20,284 || 62–59–2 |- style="background:#fbb" | 124 || August 24 || Cardinals || 2–5 || Herm Wehmeier (6–6) || Curt Simmons (11–9) || None || 11,049 || 62–60–2 |- style="background:#fbb" | 125 || August 25 || Braves || 3–7 || Warren Spahn (16–8) || Robin Roberts (8–18) || None || 14,051 || 62–61–2 |- style="background:#bfb" | 126 || August 26 || Braves || 4–3 || Turk Farrell (6–2) || Ernie Johnson (6–2) || None || 21,397 || 63–61–2 |- style="background:#fbb" | 127 || August 27 || Redlegs || 2–5 || Joe Nuxhall (8–7) || Don Cardwell (3–6) || None || 15,520 || 63–62–2 |- style="background:#fbb" | 128 || August 28 || Redlegs || 5–6 || Brooks Lawrence (12–11) || Curt Simmons (11–10) || Johnny Klippstein (2) || 15,134 || 63–63–2 |- style="background:#bfb" | 129 || August 30 || Pirates || 4–3 || Jack Sanford (17–5) || Vern Law (10–8) || None || 8,157 || 64–63–2 |- style="background:#bfb" | 130 || August 31 || Pirates || 7–1 || Don Cardwell (4–6) || Bob Friend (10–17) || None || 5,141 || 65–63–2 |- |- style="background:#bfb" | 131 || September 1 (1) || Pirates || 11–3 || Robin Roberts (9–18) || Bob Purkey (10–13) || None || see 2nd game || 66–63–2 |- style="background:#fbb" | 132 || September 1 (2) || Pirates || 3–6 || Whammy Douglas (2–2) || Curt Simmons (11–11) || Roy Face (7) || 11,294 || 66–64–2 |- style="background:#bfb" | 133 || September 2 (1) || @ Dodgers || 10–4 || Warren Hacker (7–5) || Danny McDevitt (6–2) || None || see 2nd game || 67–64–2 |- style="background:#bfb" | 134 || September 2 (2) || @ Dodgers || 7–4 || Jim Hearn (5–1) || Roger Craig (5–8) || None || 18,895 || 68–64–2 |- style="background:#bfb" | 135 || September 3 || @ Dodgers || 3–2 (12) || Turk Farrell (7–2) || Don Drysdale (14–8) || Bob Miller (6) || 10,190 || 69–64–2 |- style="background:#fbb" | 136 || September 4 || Dodgers || 3–12 || Don Newcombe (11–11) || Don Cardwell (4–7) || Ed Roebuck (6) || 17,615 || 69–65–2 |- style="background:#fbb" | 137 || September 5 || Dodgers || 1–3 || Carl Erskine (4–2) || Robin Roberts (9–19) || Ed Roebuck (7) || 18,087 || 69–66–2 |- style="background:#fbb" | 138 || September 6 || @ Pirates || 2–3 || Ron Kline (7–15) || Harvey Haddix (10–11) || None || 6,915 || 69–67–2 |- style="background:#fbb" | 139 || September 7 || @ Pirates || 3–6 || Red Swanson (3–2) || Jack Sanford (17–6) || Roy Face (9) || 4,612 || 69–68–2 |- style="background:#bfb" | 140 || September 8 (1) || @ Pirates || 7–4 || Turk Farrell (8–2) || Bob Purkey (10–14) || Robin Roberts (2) || see 2nd game || 70–68–2 |- style="background:#fbb" | 141 || September 8 (2) || @ Pirates || 2–6 || Bob Smith (1–3) || Warren Hacker (7–6) || None || 12,021 || 70–69–2 |- 
style="background:#fbb" | 142 || September 10 || @ Cardinals || 3–4 (14) || Lindy McDaniel (14–8) || Robin Roberts (9–20) || None || 10,997 || 70–70–2 |- style="background:#fbb" | 143 || September 11 || @ Cardinals || 6–14 || Herm Wehmeier (9–6) || Harvey Haddix (10–12) || Hoyt Wilhelm (11) || 10,674 || 70–71–2 |- style="background:#fbb" | 144 || September 13 || @ Redlegs || 7–8 || Tom Acker (10–5) || Jack Sanford (17–7) || Johnny Klippstein (3) || 6,805 || 70–72–2 |- style="background:#bfb" | 145 || September 14 || @ Redlegs || 5–0 || Robin Roberts (10–20) || Brooks Lawrence (15–12) || None || 4,527 || 71–72–2 |- style="background:#bfb" | 146 || September 15 || @ Braves || 3–2 (10) || Turk Farrell (9–2) || Warren Spahn (19–10) || None || 34,920 || 72–72–2 |- style="background:#fbb" | 147 || September 16 || @ Braves || 1–5 || Bob Buhl (17–6) || Harvey Haddix (10–13) || None || 20,929 || 72–73–2 |- style="background:#fbb" | 148 || September 17 || @ Cubs || 1–7 || Moe Drabowsky (11–14) || Jack Sanford (17–8) || None || 1,727 || 72–74–2 |- style="background:#fbb" | 149 || September 18 || @ Cubs || 4–6 || Dick Drott (15–11) || Robin Roberts (10–21) || Don Elston (6) || 2,094 || 72–75–2 |- style="background:#bfb" | 150 || September 20 || @ Dodgers || 3–2 || Turk Farrell (10–2) || Carl Erskine (5–3) || Jim Hearn (3) || 6,749 || 73–75–2 |- style="background:#bfb" | 151 || September 21 || @ Dodgers || 3–2 || Jack Sanford (18–8) || Johnny Podres (12–9) || Turk Farrell (10) || 5,118 || 74–75–2 |- style="background:#fbb" | 152 || September 22 || @ Dodgers || 3–7 || Don Drysdale (17–9) || Robin Roberts (10–22) || None || 6,662 || 74–76–2 |- style="background:#bfb" | 153 || September 24 || Giants || 5–0 || Curt Simmons (12–11) || Curt Barclay (9–9) || None || 7,019 || 75–76–2 |- style="background:#bfb" | 154 || September 27 || Dodgers || 3–2 || Jack Sanford (19–8) || Bill Harris (0–1) || None || 11,595 || 76–76–2 |- style="background:#fbb" | 155 || September 28 || Dodgers || 4–8 || Ed Roebuck (8–2) || Don Cardwell (4–8) || Johnny Podres (3) || 5,797 || 76–77–2 |- style="background:#bfb" | 156 || September 29 || Dodgers || 2–1 || Seth Morehead (1–1) || Roger Craig (6–9) || None || 9,886 || 77–77–2 |- | style="text-align:left;" | The second game on April 28 was suspended (Sunday curfew) in the bottom of the seventh inning with the score 7–8 and was completed August 16, 1957. The May 21 game was called after 5 innings (reduced to 5 innings) with the score tied 1–1. It was replayed from the start on June 24, 1957. The second game on June 9 was suspended (Sunday curfew) in the top of the tenth inning with the score 4–4. Because at least nine innings had been completed, the game was replayed from the beginning on August 20, 1957. The July 17 game was protested by the Phillies between the first and second innings. The protest was later denied. Roster Player stats Batting Starters by position Note: Pos = Position; G = Games played; AB = At bats; H = Hits; Avg. = Batting average; HR = Home runs; RBI = Runs batted in Other batters Note: G = Games played; AB = At bats; H = Hits; Avg. 
= Batting average; HR = Home runs; RBI = Runs batted in Pitching Starting pitchers Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts Other pitchers Note: G = Games pitched; IP = Innings pitched; W = Wins; L = Losses; ERA = Earned run average; SO = Strikeouts Relief pitchers Note: G = Games pitched; W = Wins; L = Losses; SV = Saves; ERA = Earned run average; SO = Strikeouts Farm system LEAGUE CHAMPIONS: Tampa Moultrie franchise transferred to Brunswick, June 1, 1957 Notes References 1957 Philadelphia Phillies season at Baseball Reference Philadelphia Phillies seasons Philadelphia Phillies season Philadelphia
923453
https://en.wikipedia.org/wiki/Dynamic%20routing
Dynamic routing
Dynamic routing, also called adaptive routing, is a process whereby a router can forward data via a different route for a given destination based on the current conditions of the communication circuits within a system. The term is most commonly associated with data networking, where it describes the capability of a network to "route around" damage, such as loss of a node or a connection between nodes, so long as other path choices are available. Dynamic routing allows as many routes as possible to remain valid in response to the change. Systems that do not implement dynamic routing are described as using static routing, where routes through a network are described by fixed paths. A change, such as the loss of a node or of a connection between nodes, is not compensated for: anything that wishes to take an affected path must either wait for the failure to be repaired before restarting its journey, or fail to reach its destination and give up the journey.

Protocols

There are several protocols that can be used for dynamic routing. Routing Information Protocol (RIP) is a distance-vector routing protocol that prevents routing loops by implementing a limit on the number of hops allowed in a path from source to destination (a minimal sketch of the distance-vector mechanism appears below). Open Shortest Path First (OSPF) uses a link-state routing (LSR) algorithm and falls into the group of interior gateway protocols (IGPs). Intermediate System to Intermediate System (IS-IS) determines the best route for data through a packet-switched network. Interior Gateway Routing Protocol (IGRP) and its advanced form, Enhanced Interior Gateway Routing Protocol (EIGRP), are used by routers to exchange routing data within an autonomous system.

Alternate paths

Many systems use some next-hop forwarding protocol: when a packet arrives at some node, that node decides on the fly which link to use to push the packet one hop closer to its final destination. Routers that use some adaptive protocols, such as the Spanning Tree Protocol, calculate a tree that indicates the one "best" link for a packet to take to its destination, in order to avoid bridge loops and routing loops. Alternate "redundant" links not on the tree are temporarily disabled until one of the links on the main tree fails, at which point the routers calculate a new tree using those links to route around the broken link. Routers that use other adaptive protocols, such as grouped adaptive routing, find a group of all the links that could be used to get the packet one hop closer to its final destination. The router sends the packet out on any idle link of that group. The link aggregation of that group of links effectively becomes a single high-bandwidth connection.

In practice

Contact centres employ dynamic routing to increase the operational efficiency of call agents, which boosts both agent and customer satisfaction. This adaptive strategy is commonly referred to as omnichannel, where an integrative customer experience is coupled with increased responsiveness by agents. Dynamic routing is also associated with neuroscience, with respect to studies on the relationship between sensory and mnemonic signals and decision making. People using a transport system can display dynamic routing: for example, if a local railway station is closed, people can alight from a train at a different station and use another method, such as a bus, to reach their destination. Another example of dynamic routing can be seen within financial markets.
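To make the distance-vector mechanism described under Protocols concrete, here is a minimal sketch in Python of a RIP-style routing-table update with a hop limit. The topology, router names, and table layout are illustrative assumptions rather than any particular implementation; only the convention that 16 hops means "unreachable" follows RIP.

```python
INFINITY = 16  # RIP convention: 16 hops means "unreachable"

# A routing table maps each destination network to (hop count, next hop).
table = {"netA": (1, "direct"), "netB": (3, "r2")}

def process_advertisement(table, neighbor, advertised):
    """Merge a neighbor's advertised routes into our table (one Bellman-Ford step)."""
    for dest, hops in advertised.items():
        cost = hops + 1                      # one extra hop to reach the neighbor
        if cost >= INFINITY:
            continue                         # the hop limit bounds routing loops
        current = table.get(dest)
        if (current is None                  # new destination
                or cost < current[0]         # strictly better path
                or current[1] == neighbor):  # refresh a route learned from this neighbor
            table[dest] = (cost, neighbor)
    return table

# Neighbor "r3" advertises a 1-hop path to netB and a 2-hop path to netC.
process_advertisement(table, "r3", {"netB": 1, "netC": 2})
print(table)  # {'netA': (1, 'direct'), 'netB': (2, 'r3'), 'netC': (3, 'r3')}
```

A real implementation would also age out stale routes and propagate unreachability (route poisoning); the sketch shows only the merge step by which a router "routes around" a slower or broken path as soon as a neighbor advertises a better one.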
See also

Static routing
Convergence (routing)
Routing in delay-tolerant networking

References

External links

Session-based routing holds the key to the Internet's future

Routing
7236177
https://en.wikipedia.org/wiki/NetObjects
NetObjects
NetObjects, Inc. is a software company founded in 1995 by Samir Arora, David Kleinberg, Clement Mok and Sal Arora. The company is best known for the development of NetObjects Fusion, a web design application for small and medium enterprises with designers who need complete control over page layout and a user interface similar to desktop publishing applications. In its first phase, NetObjects was based in Redwood City, California, and ceased operations in 2001 after selling its assets to Website Pros (now Web.com) and a portfolio of patents to Macromedia. In 2009 NetObjects was re-established as an independent software company.

History

Beginnings

From 1992 to 1995 the founders of NetObjects had worked at Rae Technology, and before that in part at Apple Computer, investigating prototypes of web browsers, information navigation and web design tools. In 1995 NetObjects was founded to market NetObjects Fusion, a new design tool for building web sites. The term "web site", well-known and widespread today, was created by the work of Samir Arora, David Kleinberg, Clement Mok and Sal Arora, who were awarded the first web site builder patent as inventors. Initially NetObjects was a privately held company, with the Series A venture investment led by Rae Technology, Series B by Norwest Venture Partners and Venrock Associates, followed by Novell, Mitsubishi and AT&T Ventures, and a last round by Perseus Capital, L.L.C. In April 1997 IBM invested $100 million to acquire a majority of the company; the deal valued the company at $150 million.

Launch of NetObjects Fusion and IPO

NetObjects Fusion 1.0 was released in 1996. As the first complete web design tool, it was seen as groundbreaking by technology observers. NetObjects was named one of the "25 Cool Technology Companies" of 1996 by Fortune. Also in 1996, NetObjects Fusion won PC Magazine's Editors' Choice award. CNET's Builder.com named Samir Arora one of the Web Innovators of 1997, and in 1998 NetObjects received the prestigious Gold award from the Industrial Designers Society of America (IDSA). Eleven U.S. patents were granted for Internet-related technologies (design and utility). Releases 2.0 (1997) and 3.0 (1998) of NetObjects Fusion again drew positive reactions from the PC press as well as commercial success on the market. In 1999 IBM brought NetObjects to the stock exchange with an initial public offering while remaining the major shareholder. The initial public offering (IPO) on NASDAQ raised $72 million. The board of directors consisted of six people: Samir Arora as Chairman of the Board, Chief Executive Officer and President, and five directors including John Sculley from Apple Computer, three representatives from IBM and one from Novell.

Success on the market and the stock exchange

In the following years numerous product-bundling deals were made with nearly all the big PC sellers, such as Dell and HP, and with Internet service providers like UUNET, Earthlink or 1&1 (Germany). The company itself said it licensed the distribution of more than 15 million copies of NetObjects Fusion. In 2000 the stock price of NETO (its ticker symbol) reached a record high of $45 11/16, making NetObjects worth $1.5 billion. Revenue had started at $7.2 million in 1997, reached $15 million in 1998 and $23.2 million in 1999, and peaked at $34.2 million for fiscal year 2000 (October 1999 – September 2000).
On March 3, 2000, TheStreet.com's Adam Lashinsky praised NetObjects' financial performance and its early adoption of e-business: "And, more so than many start-ups, NetObjects has managed to deliver on what it has promised. It has slightly beaten the expectations of the friendly analysts who follow it. And quarter by quarter, it has steadily reduced its operating losses. Plus, it got lucky. It was firmly entrenched as a business-to-business software company before the term gained currency and B2B companies took off."

Shift in strategy

In 1998 the company had developed, and since then distributed, NetObjects Authoring Suite and the related "Collage" product, content management solutions aimed at big businesses and priced at much higher levels than NetObjects Fusion. However, IBM and NetObjects decided that the company's target market was the small and medium enterprise sector, so it would focus on its flagship application, NetObjects Fusion, which fit the scope of these customers. In the early days of the concept of "software as a service" (SaaS), the company again bet on its ability to recognize technological trends and shifted its strategy to a subscription model. To this end NetObjects Matrix was developed, and GoBizGo.com, an e-commerce solution, was launched. Subscription-based web and online services would help small businesses keep pace with the Internet. To finance this shift of strategy, the NetObjects Enterprise Division, with 40 employees and two applications, Collage and NetObjects Authoring Suite, was sold for $18 million to UK-based Merant (merged in 2004 with Serena Software Inc., based in San Mateo, California). High hopes were placed on the NetObjects Matrix platform and its potential to position NetObjects as a "Business Service Provider". A version for Mac was announced, and a cooperation with IBM Global Services was forged.

Challenges and crisis

However, several factors led NetObjects into a crisis starting in 2000. Tough competition from Microsoft, Macromedia and Adobe put pressure on market share, and falling prices of web-design applications affected revenues. Also, the long-term revenue effects of bundling deals in the software industry are controversial. NetObjects slashed prices for NetObjects Fusion from release 1.0 to release 4.0 by more than 50%, and older versions stayed in distribution at even lower prices. Technical demands for large business web sites changed and required programmers to have direct access to HTML code, which NetObjects Fusion was not designed for: its target market was designers who need complete control over page layout and a user interface similar to desktop publishing applications.

IBM decisions and sale of NetObjects

In 2001 revenue decreased sharply, a result of changing markets, price cuts and the strategy shift to software as a service. Subscription fees from NetObjects Matrix started coming in, but the company faced losses: total revenues for the first three quarters of FY 2001 were $4.22 million, whilst costs were $7.67 million. NetObjects started to raise $50 million in a private placement with Deutsche Bank, but IBM, which controlled the NetObjects board, did not approve the placement. In the summer of 2001, the markets plummeted with the bursting of the dot-com bubble, and ultimately IBM, as the majority shareholder, decided to sell NetObjects.
NetObjects Fusion, NetObjects Matrix including the MatrixBuilder, GoBizGo and other assets were sold to Website Pros (now Web.com), a web design and services company based in Jacksonville, Florida. Additionally, a portfolio of seven patents was sold to Macromedia (now Adobe), the distributor of Dreamweaver, the long-term main competitor of NetObjects Fusion.

NetObjects as a division of Website Pros

Website Pros (WSP, now Web.com) continued to develop and distribute new versions of NetObjects Fusion and to offer subscription services based on this application, representing the mixed business model that was invented at NetObjects. License revenue from sales of NetObjects Fusion reached nearly $3.58 million in 2006, $2.4 million in 2007, and $2.5 million in 2008. In May 2009 NetObjects Fusion was sold.

NetObjects as a re-established company

In May 2009 NetObjects Inc. was re-established as an independent company and acquired the NetObjects Fusion product line from Web.com. A smaller part of the purchase price was paid immediately, while $3.0 million remained payable from future revenue of NetObjects Fusion sales until 2013. In terms of management and staff, there is no overlap between the old and new companies with the same name. Steve Raubenstine, who was vice president of the NetObjects Fusion division at Web.com (formerly Website Pros), serves as President and CEO of the new NetObjects Inc.

Products

NetObjects Fusion: Web design tool created in 1996. Sold to Website Pros (now Newfold Digital) in 2001. In 2009 a management buyout of the NetObjects Fusion division of Website Pros re-created an independent NetObjects, with Fusion as the main part of what management bought. NetObjects still distributes Fusion; the latest release is Version 15, Update #1, released in March 2015.

NetObjects Authoring Server: Collaborative web development and content management solution. Created in 1999. Sold to UK-based Merant in 2000. After Merant's merger with Serena Software in 2004, distributed as "Collage". Discontinued in 2008. The predecessor of Authoring Server was NetObjects Team Fusion, introduced as a client–server application in 1998.

NetObjects MatrixBuilder: Online web page and web service builder, first released in 2000. Sold to Website Pros (now Newfold Digital) in 2001. Website Pros sold MatrixBuilder licenses directly to customers and also used MatrixBuilder internally to develop websites for customers.

References

Software companies based in California Software companies based in Pennsylvania Software companies established in 1995 Software companies disestablished in 2001 Defunct software companies of the United States 1995 establishments in California 2001 disestablishments in California Re-established companies Software companies established in 2009 2009 establishments in Pennsylvania
24645519
https://en.wikipedia.org/wiki/HP%20Mini
HP Mini
HP Mini is a former line of small computers categorized as netbooks, manufactured by Hewlett-Packard. They shipped with either a custom version of Ubuntu Linux, Microsoft Windows XP Home Edition, or the Windows 7 Starter operating system. Like most netbooks, they were not built with CD/DVD drives; however, HP did sell portable DVD-ROM drives with HP's LightScribe disc imaging software. These netbooks were best suited to written documents, small programs and web browsing. They could run standard software, but given their low price they tended to have low-end specifications, causing poor performance. They were announced from mid-2007, and marketed from 2008 through 2012.

Models

2133 Mini-Note PC

The first model.

Mini 1000 and Compaq Mini 700

The HP Mini 1000 is a netbook by HP, adapting that company's HP 2133 Mini-Note PC education/business netbook for the consumer market. A similar but cheaper model, named the HP Compaq Mini 700, was also available in some regions with different cosmetics. A special edition machine, the HP Mini 1000 Vivienne Tam Edition, designed in collaboration with Vivienne Tam, was also available. The three computers have similar specifications.

Specifications

Processor and memory: The HP Mini 1000 uses a 1.60 GHz Intel Atom N270 processor and includes 1 GB of DDR2-533 memory, with support for up to 2 GB. The Mini has only one slot for RAM. Due to Microsoft's restrictions, the XP versions were only sold with 1 GB of RAM, but a user can easily upgrade to 2 GB by accessing the slot on the bottom of the computer and replacing the module.

Storage: The HP Mini 1000 shipped with either a 16/32 GB SSD or a 60/80 GB 1.8" hard disk drive. The HP Mini 1000 Mi Edition was also available with an 8 GB SSD. A ZIF SATA connector is used as opposed to standard PATA/SATA connector cables.

Motherboard: The motherboard uses the Intel 945GSE northbridge chipset and Intel ICH7M southbridge. The motherboard model is HP361A. The northbridge component provides the integrated Intel GMA 950 graphics core.

Display: The Mini features either an 8.9- or 10.1-inch LED-backlit display. The 8.9" display has a resolution of 1024x600 pixels; the 10.1" is 1024x576 (a 10.2" model at 1024x600 is no longer sold). Both models feature stereo speakers, a webcam, and a single audio jack for both mic and headphones. Both the unit and the dock connector can carry a VGA connection; a first-party adaptor is available from the HP online store.

Power: A 3-cell battery is included as standard. A 6-cell battery can be ordered as an accessory, or (with the Mini 1000 and Digital Clutch only) chosen in place of the 3-cell battery during configuration. The 3-cell and 6-cell batteries provide up to 3 hours and 6 hours of run-time, respectively.

Connectivity: In addition to the aforementioned card reader, the system has two standard USB ports, a 10/100 Mbit Ethernet port, a single 3.5mm audio in/out mini-jack, and a power connector. The Mini 1000 also has a proprietary dock connector which can carry VGA, USB, RJ-45 (over USB), analogue audio in/out, and power. An 802.11b/g wireless NIC (Broadcom BCM4312) is included for Wi-Fi, while Bluetooth 2.0+EDR and a built-in HSDPA modem are options. The modem can be connected to Verizon under a one- or two-year contract.

Software

The Mini had Windows XP or Windows 7 Starter installed at launch and can be upgraded to Windows 10, while Mi (a special HP operating system based on Ubuntu, named for "mobile Internet" and containing "HP MediaStyle", based on Elisa) was released in early January 2009 on the HP Mini 1000 "Mi Edition."
Known issues

The integrated microphone array does not work under Windows Vista, or under Windows 7 RC1 Build 7100 and earlier.
Audible popping sound under Windows 7 prior to RC1, and under Windows 10 "November Update" and older.
Poor 3.5mm output audio quality using Windows XP drivers (some models).
Mini 1000: the built-in webcam only works well in bright lighting conditions. This can be fixed by removing the reflective piece; this square piece is glued in place and is most easily removed by spreading the casing around the monitor and pulling it out. This will create a small opening for dust.
Requires an HP-customized version of Windows XP to prevent crashes on bootup.
Due to a power adapter port design defect, the HP-provided power adapter cannot fit flush, allowing it to come easily disconnected.
All Windows 8 apps (except the PC Settings app) will not work on the HP Mini 1000 because the display resolution is too small. This can be fixed by enabling "Display1_DownScallingLevel" in regedit.exe and restarting the computer.

Reception

Initial reviews were positive, complimenting the computer's keyboard and aesthetics as particular selling points in comparison to its market rivals, and the improved battery life and performance, and reduced price, as particularly important improvements over its antecedent, the HP 2133. However, reviewers noted that the diminutive 1.8-inch hard drive, usually used in digital audio players, performed slower than the 2.5-inch drives in competitors, and criticized the decision to charge separately for a VGA adaptor. Although the battery life was improved, it still did not stand out from the competition.

110

The HP Mini 110 is a line of low-end netbook computers manufactured and sold by HP. The Mini 110 laptops had differing cases, resembling compact palmtop models in early versions and regular affordable netbooks in the last releases.

210

Some models of the HP Mini 210 cannot run the latest version of Windows 10 due to display driver problems.

311

This netbook was the first to use the Nvidia Ion platform, which allows hardware acceleration of high-definition video and increased gaming performance. It went on sale on HP's online store on September 24, 2009 for $399.99. The laptop can be customized with either the Intel Atom N270 or N280 and uses the Nvidia GeForce 9400M G graphics of the Ion platform. The unit is equipped with 1 GB of 1066 MHz DDR3 SDRAM soldered to the motherboard and a SO-DIMM slot which allows an upgrade to 2 GB or 3 GB. The netbook has a 160 GB, 250 GB, or 320 GB SATA hard drive at 5,400 RPM. There are options for an external DVD burner or DVD burner/Blu-ray reader combo drive, as netbooks do not have integrated optical drives due to their small size. The Mini 311 has an 11.6" LED-backlit BrightView widescreen with a 1366x768 resolution and an integrated webcam as standard. Wi-Fi card options include Wireless G or Wireless N cards, with optional Bluetooth as well as optional mobile broadband from Verizon Wireless, AT&T or Sprint. I/O connectors include a 5-in-1 removable card reader, 3 USB 2.0 ports, a Fast Ethernet port, a VGA output, and an HDMI output. The netbook uses Altec Lansing stereo speakers. Two models are sold in some areas: one has 1 GB RAM and Windows XP with Ion LE, the other 2 GB RAM and Windows 7 with Ion. In some regions, including Europe, the device is sold under the Compaq brand.

2140

The HP Mini 2140 is an update to the HP 2133 Mini-Note PC which was announced in early January 2009.
Details

The new components are a 1.6 GHz single-core Intel Atom processor, a 10.2-inch "standard definition" or "high definition" (1024×576 or 1366×768 pixel) LED-backlit LCD display (with a glass cover and acrylic coating), an Intel GMA 950 graphics adapter, and a 160 GB HDD (5400 or 7200 rpm) or 80 GB eMMC-based solid-state drive. Operating systems available are similar to those for the Mini-Note 2133: SUSE Linux Enterprise Desktop, FreeDOS, Windows Vista (Home Basic or Business), Windows XP Home (only on 1 GB RAM models) or Professional (through downgrade rights from Vista Business). Its features, accessories and appearance are otherwise identical to the HP 2133; however, HP predicted that the new processor and screen would give it up to 8.5 hours of run time on the 6-cell battery. A docking station was also to be made available. The updated machine was initially available in various configurations with prices starting at US$499 for a system with the "standard definition" display, hard disk drive, 1 GB of RAM, and 3-cell battery, without Bluetooth or 802.11n support, running Windows XP Home. The successor version, the HP Mini 2150, was announced but not presented.

Reception

A review by Laptop Magazine (of a system with a 1024×576 pixel display and 2 GB of RAM running Windows XP) complimented HP for addressing common criticisms of the earlier model. The reviewer noted that the Mini 2140 produced much less heat, although the underside did become warm, and had much improved battery life compared to the Mini-Note 2133. Their system continuously loaded websites for 3 hours and 32 minutes on the small 3-cell battery, and 7 hours and 19 minutes on the larger 6-cell battery. However, the reviewer chided HP for the low-resolution display, which showed approximately "two lines" fewer than netbooks with 1024×600 pixel displays. Otherwise, praises and criticisms of the 2140 were similar to those for the 2133. The magazine gave the system its "Editor's Choice" award.

Similar products from HP

A new HP notebook similar in appearance to the Mini-Note, called the HP Mini 1000 Vivienne Tam Edition, was unveiled in October 2008, with a launch expected for December that year. The small pink computer is a collaboration with fashion designer Vivienne Tam, and has a 10-inch screen, a 1.6 GHz Intel Atom processor, 1 GB of RAM, and an 80 GB hard disk drive. A few days later, a black notebook of otherwise similar appearance called the "HP Mini 1000" was informally revealed by a banner on the company's store, and officially announced on 29 October 2008. Unlike the 2133, this device is aimed at the home market.

5101

5102

5103

The HP Mini 5103 was announced in 2010. Compared to the HP Mini 210, it had advanced features such as a touch screen and an Intel Atom N550 processor.

1103

References

External links

CNET review of HP Mini 1000
CNET review of HP Mini 311
CNET review of HP Mini 210
Engadget article about HP Mini 210
Eweek article about HP Mini 210

Mini Discontinued products Linux-based devices Subnotebooks Netbooks
55296156
https://en.wikipedia.org/wiki/European%20Digital%20SME%20Alliance
European Digital SME Alliance
The European DIGITAL SME Alliance is a community of small and medium ICT enterprises (SMEs). Its members are national sectoral digital SME associations in 30 countries and regions in the EU and neighboring countries; altogether it represents more than 45,000 SMEs. DIGITAL SME was established in 2007 (then named the Pan European ICT & eBusiness Network for SMEs) to represent the voice of ICT SMEs and their interests in the European institutions and other international organisations. DIGITAL SME is the first European association devoted exclusively to the ICT sector. The current president is Oliver Grün.

History and main achievements

Following a prominent public discussion on the patentability of software, the Pan European ICT & eBusiness Network for SMEs was founded in 2007. Its establishment was initiated by UEAPME (European Association of Craft, Small and Medium-sized Enterprises), now referred to as SMEunited, of which DIGITAL SME has been a member ever since. In 2013, the Pan European ICT & eBusiness Network for SMEs was among the founding members of SBS (Small Business Standards). Since then, DIGITAL SME representatives have always been appointed to the SBS Board, and its experts have taken part in the technical committees of European standardisation bodies: George Sharkov – ETSI TC CYBER expert, Massimo Vanetti – oneM2M expert, Fabio Guasconi – ISO/IEC JTC 1 SC 27 expert, George Babinov – ETSI ATTM expert, Emil Dimitrov – ETSI TCCE expert. In addition, DIGITAL SME and SBS representative Prof. Vladimir Poulkov was elected vice chairman of the ETSI General Assembly in 2016.

In 2016, the Pan European ICT & eBusiness Network for SMEs changed its name to European DIGITAL SME Alliance. The same year, it became a founding member of another important body, the European Cyber Security Organisation (ECSO). DIGITAL SME is particularly active in the management of ECSO: its president Oliver Grün was appointed as SME member of the ECSO Managing Board and of the Partnership Board, while DIGITAL SME's Secretary General Sebastiano Toffaletti chairs WG4 – Support to SMEs, coordination with countries (in particular East and Central EU) and regions. The same year (2016), DIGITAL SME also became a member and a pledger of the newly created Digital Skills and Jobs Coalition, an initiative created by the European Commission to tackle the e-skills gap in Europe.

In 2015–2016, DIGITAL SME campaigned for the creation of an open market for data usage, in which both manufacturers and users of data-producing machines are entitled to use the data. DIGITAL SME's recommendations were taken up by the European Commission, which announced the Communication on Building a European Data Economy and an accompanying staff working document, with reference made to DIGITAL SME's position paper (published earlier in 2015).

In May 2017, the European DIGITAL SME Alliance launched the #digitalSME4skills campaign, initiated in the framework of the Digital Skills and Jobs Coalition. The campaign calls on European SMEs to train young professionals and help them gain digital skills through apprenticeship schemes. In addition, DIGITAL SME president Oliver Grün joined the Coalition's governing board. In July 2017, reacting to the upcoming review of the European Cybersecurity Strategy, the European DIGITAL SME Alliance published its position paper on cybersecurity, developed in cooperation with ECSO.
Since its establishment in 2007, DIGITAL SME has been led by five presidents:

from June 2015: Oliver Grün (Germany)
2013–2015: Bo Sejer Frandsen (Denmark)
2012–2013: Charles Huthwaite (UK)
2010–2012: Bruno Robine (France)
2007–2010: Johann Steszgal (Austria)

Members

Members of the European DIGITAL SME Alliance are national associations of ICT SMEs in the European Union and neighboring countries. The member organisations are:

AGORIA (Belgium)
BASSCOM - Bulgarian Association of Software Companies (Bulgaria)
CNA - Comunicazione e Terziario Avanzato, Confederazione Nazionale dell'Artigianato e della Piccola e Media impresa (Italy)
ESTIC / CONETIC - Asociación Empresarial del Sector TIC (Spain)
DIGITAL SME France (France)
It-forum midtjylland (Denmark)
UKITA - United Kingdom IT Association (UK)
Skillnet Ireland (Ireland)
BITMi – Bundesverband IT-Mittelstand e.V. (Germany)
Vojvodina ICT Cluster (Serbia)
Belgrade Chamber of Commerce, IT Association (Serbia)
STIKK - Kosovo Association of Information and Communication Technology (Kosovo)
Balkan and Black Sea ICT Clusters Network (Albania, Bosnia and Herzegovina, Romania, Bulgaria, Greece, Latvia, Montenegro, Serbia, North Macedonia, Ukraine)
CLUSIT – Associazione Italiana per la Sicurezza Informatica (Italy)

Goals and activities

The European DIGITAL SME Alliance has the following goals:

to represent the interests of its members vis-à-vis the institutions of the European Union;
to promote exchange of experience and know-how amongst its members;
to carry out actions at the European level dealing with a variety of topics relevant to ICT SMEs; such actions or initiatives might include participation in or organization of training programmes, conferences, seminars and research activities;
to provide information to the members on European policies related to the ICT sector;
to promote SMEs' interests in the standardisation process, to raise SMEs' awareness about standardisation, and to motivate SMEs to become involved in the standardisation process.

The European DIGITAL SME Alliance performs the following activities:

monitors EU policies and regulations on ICT, gets involved in their creation, and informs its members about these policies (e.g. promotion of 10 policy ideas to enhance the EU digital SME economy; lobbying against tax breaks for multinational companies);
participates in EU-funded projects, such as UNICORN, SABINA, eCF Alliance, cyberwatching.eu, COMPACT, and the ICT sectoral approach in SBS;
participates in and organizes conferences and seminars;
takes part in various European initiatives, working groups, etc. (such as the European Digital Skills and Jobs Coalition and the European Social Dialogue for the IT sector);
produces newsletters and studies;
performs research activities in ICT-related areas;
participates in the Social Dialogue.

DIGITAL SME project activities

UNICORN is a Horizon 2020 project that aims at simplifying the design, deployment and management of multi-cloud services. DIGITAL SME acts as the project's communications coordinator and reaches out to the SME community.

SABINA is an EU-funded Horizon 2020 project that aims at developing new technology and financial models to exploit synergies between electrical flexibility and the thermal inertia of buildings. DIGITAL SME is in charge of all standardisation-related activities of the SABINA project. These include analysis of the standards landscape in the scope of SABINA and provision of recommendations on the elevation of SABINA project items to formal standards.
eCF Alliance is an Erasmus+ project that aims at developing ICT vocational education and training programmes and certifications based on the eCF framework and the ESCO IT occupations. DIGITAL SME's main responsibilities in the project are project governance and management of the Stakeholders' Committee.

cyberwatching.eu is a Horizon 2020 project which seeks to create and maintain an observatory of European and national research and innovation projects in the field of cybersecurity. DIGITAL SME is responsible for stakeholders' (especially SMEs') engagement, early validation and animation of the end-users club for SMEs, and the marketplace.

ICT sectoral approach in SBS: DIGITAL SME is responsible for the implementation and coordination of all ICT sectoral activities within the Working Programme of Small Business Standards. This includes, but is not limited to, representation of ICT SMEs' interests in standardisation processes at European and international standardisation bodies, and awareness-raising activities among SMEs (but also among policy makers, other organisations, etc.) about the benefits of standards.

Partnerships

The European DIGITAL SME Alliance is:

a founding member of Small Business Standards (SBS);
a founding member of the European Cyber Security Organisation (ECSO);
a member of the European Association of Craft, Small and Medium-sized Enterprises (UEAPME).

References

Information technology organizations based in Europe Organisations based in Belgium Organizations established in 2007 Employers' organizations Organizations related to small and medium-sized enterprises
35081486
https://en.wikipedia.org/wiki/Allwinner%20A1X
Allwinner A1X
The Allwinner A1X is a family of single-core SoC devices designed by Allwinner Technology of Zhuhai, China. Currently, the family consists of the A10, A13, A10s and A12. The SoCs incorporate the ARM Cortex-A8 as their main processor and the Mali 400 as the GPU. The Allwinner A1X is known for its ability to boot Linux distributions such as Debian, Ubuntu, Fedora, and other ARM architecture-capable distributions from an SD card, in addition to the Android OS usually installed on the flash memory of the device.

A1X features

Video acceleration: HD video decoding (up to 3840x2160); support for popular video codecs, including VP8, AVS, H.264 MVC, VC-1, and MPEG-1/2/4; HD video encoding (H.264 High Profile)
Display controller: multi-channel HD displays; built-in HDMI; YPbPr, CVBS, VGA; LCD interfaces (CPU, RGB, LVDS) up to full 1080p HDTV
Memory: DDR2/DDR3 SDRAM, 32-bit; SLC/MLC/TLC/DDR NAND
Connectivity: USB 2.0; CSI, TS; SD Card 3.0; 10/100 Ethernet controller; CAN bus (A10 only); built-in SATA 2.0 interface; I²S, SPDIF and AC97 audio interfaces; PS2, SPI, TWI and UART
Storage and boot devices: NAND flash, SPI NOR flash, SD card, USB, SATA

Implementations

Many manufacturers have adopted the Allwinner A1X for use in devices running the Android and Linux operating systems. The Allwinner A1X is used in tablet computers, set-top boxes, PCs-on-a-stick, mini-PCs, and single-board computers. Examples include:

PengPod, Linux-based 7- and 10-inch tablets
Gooseberry, a board based on the A10 SoC, similar to the Raspberry Pi
Cubieboard, a board based on the A10 SoC
Tinkerforge RED Brick, a board based on the A10s SoC
CHIP (computer), a $9 SoC computer based on the A13

Operating system support

Linux support: The Allwinner A1X architecture is referred to as "sunxi" in the Linux kernel source tree. The source code is available on GitHub. At the moment, stable and full hardware support is limited to the 3.0.x and 3.4.x kernels. Recent mainline versions of the kernel run, but do not offer NAND access and have only limited 3D acceleration.

FreeBSD support: Work is in progress on supporting the Efika on FreeBSD. At the moment, not all on-board peripherals are working.

OpenBSD support: As of May 2015, OpenBSD's armv7 port supports the Cubieboard and pcDuino boards based on the Allwinner A1X.

NetBSD support: NetBSD contains support for the Allwinner A10.

Documentation

No factory-sourced programmer's manual is publicly available for the A10s CPU at this time.

Allwinner A-Series

Apart from the single-core A1X (A10/A13/A10s/A12), Allwinner has released two newer, more powerful Cortex-A7 SoCs: the A10-pin-compatible dual-core Allwinner A20 and the quad-core Allwinner A31.

References

External links

Cubieboard on linux-sunxi

A13 A10 A10s ARM architecture Embedded microprocessors
23309
https://en.wikipedia.org/wiki/Paul%20Vixie
Paul Vixie
Paul Vixie is an American computer scientist whose technical contributions include Domain Name System (DNS) protocol design and procedure, mechanisms to achieve operational robustness of DNS implementations, and significant contributions to open source software principles and methodology. He also created and launched the first successful commercial anti-spam service. He authored the standard UNIX system programs SENDS, proxynet, rtty and Vixie cron. At one point he ran his own consulting business, Vixie Enterprises.

Career

Vixie was a software engineer at Digital Equipment Corporation (DEC) from 1988 to 1993. After he left DEC in 1994, he founded the Internet Software Consortium (ISC) together with Rick Adams and Carl Malamud to support BIND and other software for the Internet. The activities of ISC were assumed by a new company, Internet Systems Consortium, in 2004. Although ISC operates the F root name server, Vixie at one point joined the Open Root Server Network (ORSN) project and operated its L root server. In 1995 he cofounded the Palo Alto Internet Exchange (PAIX) and, after Metromedia Fiber Network (MFN) bought it in 1999, served as chief technology officer of MFN/AboveNet and later as president of PAIX. In 1998 he cofounded the Mail Abuse Prevention System (MAPS), a California non-profit company with the goal of stopping email abuse.

Vixie is the author of several Requests for Comments (RFCs), including a Best Current Practice document on "Classless IN-ADDR.ARPA Delegation" (BCP 20), and some Unix software. He stated in 2002 that he "now hold[s] the record for 'most CERT advisories due to a single author.'" In 2008, Vixie served as a judge for the Mozilla Foundation's "Download Day", an attempt to set a Guinness World Record for the most downloads of a new piece of software in a single day.

Vixie served on the Board of Trustees of the American Registry for Internet Numbers (ARIN) from 2005 to 2013, serving as chairman in 2009 and 2010. He also serves on the Security and Stability Advisory Committee of ICANN. Vixie attended George Washington High School in San Francisco, California. He received a Ph.D. in Computer Science from Keio University in 2011. In 2013, after nearly 20 years at ISC, he founded a new company, Farsight Security, Inc., spinning the Security Business Unit off from ISC. In 2014, Vixie was inducted into the Internet Hall of Fame as an Innovator.

Realizations

BIND
Vixie cron
DHCP
sendmail

Publications

Book

Patent

United States Patent 6,581,090, "Internet communication system," filed October 1996.

References

External links

Paul Vixie's CircleID Page

Living people Writers from California American computer scientists Free software programmers American technology writers Computer science writers Digital Equipment Corporation people Unix people American technology chief executives American chief technology officers Open source advocates 1963 births
46781813
https://en.wikipedia.org/wiki/John%20Rust
John Rust
John Philip Rust (born May 23, 1955) is an American economist and econometrician. He received his PhD from MIT in 1983 and taught at the University of Wisconsin, Yale University and the University of Maryland before joining Georgetown University in 2012. He was awarded the Frisch Medal in 1992 and became a fellow of the Econometric Society in 1993.

Rust is best known as one of the founding fathers of the structural estimation of dynamic discrete choice models and as the developer of the nested fixed point (NFXP) maximum likelihood estimator, which is widely used in structural econometrics. He has also published papers on a broad range of topics, including equilibrium in markets for durable goods, social security, retirement, disability insurance, the nuclear power industry, real estate economics, the rental car industry, transportation research, auction markets, computational economics, and dynamic games.

Education and career

John Rust was born in Wisconsin on May 23, 1955. He graduated from Waukesha High School in 1973, completed a B.A. in Mathematics in 1977 at the University of Pennsylvania, and received his Ph.D. in Economics from the Massachusetts Institute of Technology in 1983. His dissertation, "Stationary Equilibrium in a Market for Durable Assets", written under the supervision of Daniel McFadden, was published as an Econometrica article in 1985.

After graduating from the University of Pennsylvania in 1977, Rust worked as a research analyst for Morgan Stanley in New York City for two years. His first academic job was at the University of Wisconsin (assistant professor, 1983–1987; associate professor, 1987–1989; full professor, 1990–1996), after which he held professorial positions at Yale University (1996–2001) and the University of Maryland (2001–2011) before starting his current affiliation with Georgetown University.

Rust has been affiliated with a number of governmental bodies, including the Board of Governors of the Federal Reserve System (research consultant, 1995), the Panel of Expert Reviewers of the Social Security Administration's MINT Model (member, 1998–1999), the Technical Panel of the Social Security Advisory Board (member, 1998–1999), the Long Term Modeling Advisory Group of the U.S. Congressional Budget Office (member, 2001–2004), and the Social Security Administration (advisor for the demonstration project resulting from the 1999 Work Incentives Improvement Act, 2000–2003). He has also been a member of the Steering Committee of the Health and Retirement Study at the University of Michigan (2000–2002), a senior advisor at The Brattle Group (since 2004) and a fellow of the TIAA-CREF Institute, New York (since 2005).

Research and contributions

Dynamic discrete choice models

Rust is best known for developing methods for analyzing dynamic discrete choice. In his best-known paper, he modeled the decision of Harold Zurcher, superintendent of the Madison, Wisconsin, Metropolitan Bus Company, whether and when to replace the engines of buses in his fleet, and developed the nested fixed point algorithm to estimate the model using data on when the buses were replaced. This paper is one of the first dynamic stochastic models of discrete choice estimated using real data, and it continues to serve as a classic example of problems of this type. The methods Rust developed have been used to study dynamic economic decisions in other contexts, including retirement and occupational choice.
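To give a flavor of what the nested fixed point algorithm computes, the sketch below solves the inner loop of the bus engine replacement model in Python: the expected value function is the fixed point of a contraction mapping, and the conditional choice probabilities follow in closed form from the extreme-value (logit) assumption. This is a minimal illustration; the grid size, cost parameters and transition probabilities are assumptions chosen for readability, not Rust's estimates, and the discount factor is set well below the near-one value Rust used so the iteration converges quickly.

```python
import numpy as np

# Inner fixed point of Rust's NFXP algorithm, sketched for the bus engine
# replacement model. All numbers are illustrative assumptions.
n_states = 90                   # discretized mileage grid
beta = 0.95                     # discount factor (Rust fixes this near 1)
RC = 11.7                       # engine replacement cost
theta1 = 2.5                    # maintenance cost slope
p_move = [0.35, 0.60, 0.05]     # P(mileage advances 0, 1 or 2 bins | keep)

x = np.arange(n_states)
u_keep = -0.001 * theta1 * x    # flow utility of keeping the engine
u_replace = -RC + u_keep[0]     # replacing resets mileage to zero

# Mileage transition matrix under "keep".
P = np.zeros((n_states, n_states))
for i in range(n_states):
    for step, p in enumerate(p_move):
        P[i, min(i + step, n_states - 1)] += p

# Iterate the Bellman operator: with i.i.d. extreme-value shocks the
# expected value function has the log-sum ("social surplus") form.
EV = np.zeros(n_states)
for _ in range(5000):
    v_keep = u_keep + beta * (P @ EV)       # choice-specific value: keep
    v_replace = u_replace + beta * EV[0]    # choice-specific value: replace
    EV_new = np.logaddexp(v_keep, v_replace)
    if np.max(np.abs(EV_new - EV)) < 1e-12:
        EV = EV_new
        break
    EV = EV_new

# Logit conditional choice probability of replacement at each mileage state.
p_replace = 1.0 / (1.0 + np.exp(v_keep - v_replace))
print(p_replace[[0, 30, 60, 89]])  # replacement becomes likelier as mileage grows
```

In the full estimator, this inner contraction is solved anew at every trial value of the structural parameters, while an outer loop searches for the parameter values that maximize the likelihood of the observed replacement decisions.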
Methodological debate

Although Rust is a prominent structural econometrician, in the ongoing debate between proponents of the structural and non-structural econometric approaches he has expressed a balanced position:

"I really do not understand the widespread antipathy towards structural econometrics. I do not see any basis for the belief that the reduced form approaches adopted by statistical modelers is more justified or legitimate (or is less subjective) than the structural econometric approach adopted by economic modelers. Both types of modelers have to impose strong assumptions, and it seems all that we can say is that these models and the underlying identifying assumptions are just different. It really isn't productive to criticize the status quo in economics these days, nor is it productive to try to promote the virtues of structural estimation. Criticism only encourages the practitioners to rally around the flag. I think it is equally a waste of time to try to engage in salesmanship."

Instead, in his review of Kenneth Wolpin's "The Limits of Inference without Theory", titled "The Limits of Inference with Theory", Rust brings attention to the limits of inference inherent in any econometric approach, and argues that collection of better data and closer cooperation between structural and experimental economics will lead to more useful empirical knowledge:

"My main message is though there is ample room for getting far more knowledge from limited data (and even more when we have access to 'big data') by optimally combining inductive and deductive approaches to inference and learning, it is important to recognize that there are a number of inherent limits to inference that may be insuperable. These limits were not adequately addressed in Wolpin's book, and motivated the title of this review."

Rust holds a stronger position on the disconnect of theoretical economics and econometrics from real-world (empirical) problems. At the "Causality in the Social Sciences Conference" held at Stanford University on December 5–6, 2014, he gave a talk in which he pointed out that the development of complicated econometric theories is rewarded disproportionately to their practical usefulness.

Professional service

In 2004 Rust co-founded the software development company Technoluddities, Inc., which operates several web-based software products widely used by the economics profession. Technoluddities, Inc. owns trademarks to three of these services, namely Editorial Express, Conference Maker and Head Hunter.

Editorial Express

Editorial Express is web-based editorial tracking software that can enable "paper-free" operation of the key editorial functions of a journal. Features of this system include guaranteed low pricing, secure operations and data encryption, electronic submission of papers and referee reports, easy assignment of editors and referees, built-in email notification and automatic reminders, and statistical functions and reporting. Editorial Express is regarded by many as one of the best journal management systems. It is used by many leading journals in economics, including Econometrica, the Quarterly Journal of Economics, the RAND Journal of Economics, the Review of Economic Studies, the Journal of Applied Econometrics, the International Economic Review, the Review of Economics and Statistics, the Journal of Finance and others.

Conference Maker

Conference Maker is web-based software for organizing international conferences.
Conference Maker allows a program chair (or several co-chairs) and their selected program committee to handle the submission process in a decentralized fashion. All members of the program committee can log in at any time via secure password-protected accounts and can view all submissions online. Program committee members are assigned certain subsets of submissions (usually designated by the person making the submission, unless overridden by program committee members) and can make accept/reject decisions by clicking a button. There is also a simple interface for forming sessions, searching for discussants and session chairs, posting and updating the conference program to an automatically generated web page, and sending mass emails to arbitrarily selected subgroups of users. Over 625 international conferences have used Conference Maker since it was introduced in 2001; more than 150,000 submissions have been made to Conference Maker for these various conferences, and over 290,000 people worldwide have used it.

Head Hunter

Head Hunter is web-based academic recruiting software specially designed as a "back end interface" to EconJobMarket.org. Features of this system include paperless operation, a built-in scheduling module, easy setup, high security, and electronic applications and reference letters. Head Hunter is one of the internal interfaces (or "back ends") that let departments work with the applications and reference letters collected by EconJobMarket.org's centralized application collection system.

EconJobMarket.org

EconJobMarket.org (EJM) is a nonprofit organization that facilitates the flow of information in the economics job market by providing a secure central repository for the files of job-market candidates (including papers, reference letters, and other materials) accessed online. EJM was founded in 2007 by Martin Osborne, John Rust, and Joel Watson, and is run by a group of academic economists who volunteer their time and effort. EconJobMarket.org is endorsed by The Econometric Society, the Canadian Economics Association, the European Economic Association, the Eurasia Business and Economics Society, the Society for Economic Dynamics, the Verein für Socialpolitik, VOX and walras.org. The theoretical foundation for the creation of EconJobMarket.org is described in Chapter 7 of The Handbook of Market Design:

"EJM does not attempt to fundamentally alter the decentralized "endogenous search and matching" process by which the economics job market currently operates. Since there is unrestricted entry of intermediaries similar to EJM and a number of for-profit and non-profit organizations are currently competing in this market, we discuss the problem of market fragmentation that can occur when too many organizations attempt to intermediate trade in the market. Contrary to conventional wisdom in industrial organization theory, we show that unrestricted entry and competition of intermediaries can result in suboptimal outcomes. We discuss conditions under which the market might be improved if there is sufficient coordination to promote information sharing, such as establishing a dominant information clearinghouse that operates as a non-profit public service — a role EJM is trying to fulfill."
EconJobMarket.org grew in various significant characteristics (number of job ads posted, number of recruiters' accounts, number of applicants' accounts, number of applications transmitted, number of recommenders' accounts, number of recommendations transmitted) between its inception and 2011 at average annual rates between 79% and 194%.

Selected publications

Solution and estimation of structural dynamic models
Market equilibrium, durable goods
Retirement and disability
Rental cars
Nuclear power plants
Philosophy of science
Books

See also

List of economists

References

External links

John Rust personal webpage
John Rust at IDEAS.RePeC.org
NFXP software and manual
Technoluddities, Inc.
Editorial Express
Conference Maker
Head Hunter
EconJobMarket.org

Living people 1955 births Econometricians Fellows of the Econometric Society Massachusetts Institute of Technology alumni University of Wisconsin–Madison faculty Yale University faculty University of Maryland, College Park faculty Georgetown University faculty 20th-century American economists 21st-century American economists University of Michigan people